WorldWideScience

Sample records for models large group

  1. Two-group modeling of interfacial area transport in large diameter channels

    Energy Technology Data Exchange (ETDEWEB)

    Schlegel, J.P., E-mail: schlegelj@mst.edu [Department of Mining and Nuclear Engineering, Missouri University of Science and Technology, 301 W 14th St., Rolla, MO 65409 (United States); Hibiki, T.; Ishii, M. [School of Nuclear Engineering, Purdue University, 400 Central Dr., West Lafayette, IN 47907 (United States)

    2015-11-15

    Highlights: • Implemented updated constitutive models and a benchmarking method for the IATE in large pipes. • The new model and method, with new data, improved the overall IATE prediction for large pipes. • Not all conditions are well predicted, which shows that further development is still required. - Abstract: A comparison of the existing two-group interfacial area transport equation source and sink terms for large diameter channels with recently collected interfacial area concentration measurements (Schlegel et al., 2012, 2014. Int. J. Heat Fluid Flow 47, 42) has indicated that the model does not perform well in predicting interfacial area transport outside of the range of flow conditions used in the original benchmarking effort. In order to reduce the error in the prediction of interfacial area concentration by the interfacial area transport equation, several constitutive relations have been updated, including the turbulence model and the relative velocity correlation. The transport equation utilizing these updated models has been modified by updating the inter-group transfer and Group 2 coalescence and disintegration kernels using an expanded range of experimental conditions extending to pipe sizes of 0.304 m [12 in.], gas velocities of up to nearly 11 m/s [36.1 ft/s] and liquid velocities of up to 2 m/s [6.56 ft/s], as well as conditions with both bubbly flow and cap-bubbly flow injection (Schlegel et al., 2012, 2014). The modifications to the transport equation have resulted in a decrease in the RMS error for void fraction and interfacial area concentration from 17.32% to 12.3% and from 21.26% to 19.6%, respectively. The combined RMS error, for both void fraction and interfacial area concentration, is below 15% for most of the experiments used in the comparison, a distinct improvement over the previous version of the model.

  2. The subjective experience of the self in the large group: two models for study.

    Science.gov (United States)

    Shields, W

    2001-04-01

    More and more opportunities now exist for group therapists to engage in the study of the self in the large group at local, national, and international conferences as well as in clinical and other organizational settings. This may be particularly important for the group therapist in the next century with potential benefit not only for individuals but also for groups and social systems of all kinds. In this article, I review my own subjective experiences in the large group context and in large study group experiences. Then, I contrast the group analytic and the group relations approaches to the large group with particular reference to Winnicott's theory about maturational processes in a facilitating environment.

  3. Memory Transmission in Small Groups and Large Networks: An Agent-Based Model.

    Science.gov (United States)

    Luhmann, Christian C; Rajaram, Suparna

    2015-12-01

    The spread of social influence in large social networks has long been an interest of social scientists. In the domain of memory, collaborative memory experiments have illuminated cognitive mechanisms that allow information to be transmitted between interacting individuals, but these experiments have focused on small-scale social contexts. In the current study, we took a computational approach, circumventing the practical constraints of laboratory paradigms and providing novel results at scales unreachable by laboratory methodologies. Our model embodied theoretical knowledge derived from small-group experiments and replicated foundational results regarding collaborative inhibition and memory convergence in small groups. Ultimately, we investigated large-scale, realistic social networks and found that agents are influenced by the agents with which they interact, but we also found that agents are influenced by nonneighbors (i.e., the neighbors of their neighbors). The similarity between these results and the reports of behavioral transmission in large networks offers a major theoretical insight by linking behavioral transmission to the spread of information. © The Author(s) 2015.
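    The key qualitative finding above — that agents come to hold information originating with non-neighbors — can be illustrated with a toy sketch. This is not Luhmann and Rajaram's actual model; the three-agent line network and the one-item-per-exchange rule are assumptions for illustration only.

```python
import random

random.seed(0)

# Line network A - B - C: A and C never interact directly.
edges = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
# Each agent starts with one unique memory item.
memory = {a: {f"item_{a}"} for a in edges}

def interact(a, b):
    """One collaborative exchange: each agent passes one random item to the other."""
    memory[b].add(random.choice(sorted(memory[a])))
    memory[a].add(random.choice(sorted(memory[b])))

# Several rounds of pairwise interactions along network edges only.
for _ in range(10):
    for a in edges:
        for b in edges[a]:
            interact(a, b)

# A's item can reach C only indirectly via B: influence of non-neighbors.
print(sorted(memory["C"]))
```

Even though C only ever talks to B, items seeded at A typically end up in C's memory — transmission through a shared neighbor, the mechanism the study scales up to large networks.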

  4. A model for the use of blended learning in large group teaching sessions.

    Science.gov (United States)

    Herbert, Cristan; Velan, Gary M; Pryor, Wendy M; Kumar, Rakesh K

    2017-11-09

    appreciated for their flexibility, which enabled students to work at their own pace. In transforming this introductory Pathology course, we have demonstrated a model for the use of blended learning in large group teaching sessions, which achieved high levels of completion, satisfaction and value for learning.

  5. A model for the use of blended learning in large group teaching sessions

    Directory of Open Access Journals (Sweden)

    Cristan Herbert

    2017-11-01

    modules were described as enjoyable and motivating, and were appreciated for their flexibility, which enabled students to work at their own pace. Conclusions: In transforming this introductory Pathology course, we have demonstrated a model for the use of blended learning in large group teaching sessions, which achieved high levels of completion, satisfaction and value for learning.

  6. Quantitative Modeling of Membrane Transport and Anisogamy by Small Groups Within a Large-Enrollment Organismal Biology Course

    Directory of Open Access Journals (Sweden)

    Eric S. Haag

    2016-12-01

    Quantitative modeling is not a standard part of undergraduate biology education, yet it is routine in the physical sciences. Because of the obvious biophysical aspects, classes in anatomy and physiology offer an opportunity to introduce modeling approaches to the introductory curriculum. Here, we describe two in-class exercises for small groups working within a large-enrollment introductory course in organismal biology. Both build and derive biological insights from quantitative models, implemented using spreadsheets. One exercise models the evolution of anisogamy (i.e., small sperm and large eggs) from an initial state of isogamy. Groups of four students work on Excel spreadsheets (from one to four laptops per group). The other exercise uses an online simulator to generate data related to membrane transport of a solute, and a cloud-based spreadsheet to analyze them. We provide tips for implementing these exercises gleaned from two years of experience.
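    The membrane-transport exercise is spreadsheet-based; the same kind of row-by-row model can be written as a short script. The two-compartment setup, rate constant, and initial concentrations below are illustrative assumptions, not the course's actual simulator.

```python
# Two-compartment membrane transport, stepped in time the way a
# spreadsheet model steps row by row. Hypothetical illustration values.
k = 0.1                      # per-step transport rate constant
inside, outside = 1.0, 0.0   # solute concentrations (equal volumes assumed)

for step in range(100):
    flux = k * (inside - outside)  # net transport follows the gradient
    inside -= flux                 # solute leaves the high side...
    outside += flux                # ...and enters the low side (mass conserved)

print(round(inside, 3), round(outside, 3))  # → 0.5 0.5: equilibrium reached
```

Plotting the two concentrations per step reproduces the familiar exponential relaxation toward equal concentrations that students analyze in the spreadsheet.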

  7. Effects of core models and neutron energy group structures on xenon oscillation in large graphite-moderated reactors

    International Nuclear Information System (INIS)

    Yamasita, Kiyonobu; Harada, Hiroo; Murata, Isao; Shindo, Ryuichi; Tsuruoka, Takuya.

    1993-01-01

    Xenon oscillations of large graphite-moderated reactors have been analyzed by a multi-group diffusion code with two- and three-dimensional core models to study the effects of the geometric core models and the neutron energy group structures on the evaluation of the Xe oscillation behavior. The study clarified the following. It is important for accurate Xe oscillation simulations to use the neutron energy group structure that describes well the large change in the absorption cross section of Xe in the thermal energy range of 0.1∼0.65 eV, because the energy structure in this energy range has significant influences on the amplitude and the period of oscillations in power distributions. Two-dimensional R-Z models can be used instead of three-dimensional R-θ-Z models for evaluation of the threshold power of Xe oscillation, but two-dimensional R-θ models cannot be used for evaluation of the threshold power. Although the threshold power evaluated with the R-θ-Z models coincides with that of the R-Z models, it does not coincide with that of the R-θ models. (author)

  8. Distributed Model Predictive Control over Multiple Groups of Vehicles in Highway Intelligent Space for Large Scale System

    Directory of Open Access Journals (Sweden)

    Tang Xiaofeng

    2014-01-01

    The paper presents three time warning distances for the safe driving of multiple groups of vehicles, treated as a large scale system, in a highway tunnel environment, based on a distributed model predictive control approach. Generally speaking, the system includes two parts. First, the vehicles are divided into multiple groups, and the distributed model predictive control approach is proposed to calculate the information framework of each group. The optimization of each group considers both its local performance and the optimization characteristics of neighboring subgroups, which ensures global optimization performance. Second, the three time warning distances are derived from the basic principles of highway intelligent space (HIS), and the information framework concept is applied to the multiple groups of vehicles. A mathematical model is built for chain collision avoidance among vehicles. The results demonstrate that the proposed highway intelligent space method can effectively ensure the driving safety of multiple groups of vehicles in fog, rain, or snow.

  9. Large-scale effects of migration and conflict in pre-agricultural groups: Insights from a dynamic model.

    Directory of Open Access Journals (Sweden)

    Francesco Gargano

    The debate on the causes of conflict in human societies has deep roots. In particular, the extent of conflict in hunter-gatherer groups remains unclear. Some authors suggest that large-scale violence only arose with the spreading of agriculture and the building of complex societies. To shed light on this issue, we developed a model based on operatorial techniques simulating population-resource dynamics within a two-dimensional lattice, with humans and natural resources interacting in each cell of the lattice. The model outcomes under different conditions were compared with recently available demographic data for prehistoric South America. Only under conditions that include migration among cells and conflict was the model able to consistently reproduce the empirical data at a continental scale. We argue that the interplay between resource competition, migration, and conflict drove the population dynamics of South America after the colonization phase and before the introduction of agriculture. The relation between population and resources indeed emerged as a key factor leading to migration and conflict once the carrying capacity of the environment has been reached.

  10. Large-group psychodynamics and massive violence

    Directory of Open Access Journals (Sweden)

    Vamik D. Volkan

    2006-06-01

    Beginning with Freud, psychoanalytic theories concerning large groups have mainly focused on individuals' perceptions of what their large groups psychologically mean to them. This chapter examines some aspects of large-group psychology in its own right and studies the psychodynamics of ethnic, national, religious or ideological groups, the membership of which originates in childhood. I will compare the mourning process in individuals with the mourning process in large groups to illustrate why we need to study large-group psychology as a subject in itself. As part of this discussion I will also describe signs and symptoms of large-group regression. When there is a threat against a large group's identity, massive violence may be initiated and this violence, in turn, has an obvious impact on public health.

  11. LLNL Chemical Kinetics Modeling Group

    Energy Technology Data Exchange (ETDEWEB)

    Pitz, W J; Westbrook, C K; Mehl, M; Herbinet, O; Curran, H J; Silke, E J

    2008-09-24

    The LLNL chemical kinetics modeling group has been responsible for much progress in the development of chemical kinetic models for practical fuels. The group began its work in the early 1970s, developing chemical kinetic models for methane, ethane, ethanol and halogenated inhibitors. Most recently, it has been developing chemical kinetic models for large n-alkanes, cycloalkanes, hexenes, and large methyl esters. These component models are needed to represent gasoline, diesel, jet, and oil-sand-derived fuels.

  12. LARGE AND SMALL GROUP TYPEWRITING PROJECT.

    Science.gov (United States)

    JEFFS, GEORGE A.; AND OTHERS

    An investigation was conducted to determine if groups of high school students numerically in excess of 50 could be as effectively instructed in typewriting skills as groups of less than 30. Students enrolled in 1st-year typewriting were randomly assigned to two large groups and three small groups taught by the same instructor. Teacher-made,…

  13. Secure Group Communications for Large Dynamic Multicast Group

    Institute of Scientific and Technical Information of China (English)

    Liu Jing; Zhou Mingtian

    2003-01-01

    As the major problem in multicast security, group key management has been the focus of research, but few results are satisfactory. In this paper, the problems of group key management and access control for large dynamic multicast groups are researched, and a solution based on SubGroup Secure Controllers (SGSCs) is presented, which solves many problems in the IOLUS system and the WGL scheme.

  14. The large-Nc renormalization group

    International Nuclear Information System (INIS)

    Dorey, N.

    1995-01-01

    In this talk, we review how effective theories of mesons and baryons become exactly soluble in the large-N_c limit. We start with a generic hadron Lagrangian constrained only by certain well-known large-N_c selection rules. The bare vertices of the theory are dressed by an infinite class of UV divergent Feynman diagrams at leading order in 1/N_c. We show how all these leading-order diagrams can be summed exactly using semiclassical techniques. The saddle-point field configuration is reminiscent of the chiral bag: hedgehog pions outside a sphere of radius Λ^{-1} (Λ being the UV cutoff of the effective theory) matched onto nucleon degrees of freedom for r ≤ Λ^{-1}. The effect of this pion cloud is to renormalize the bare nucleon mass, the nucleon-Δ hyperfine mass splitting, and the Yukawa couplings of the theory. The corresponding large-N_c renormalization group equations for these parameters are presented, and solved explicitly in a series of simple models. We explain under what conditions the Skyrmion emerges as a UV fixed point of the RG flow as Λ → ∞

  15. The zero-dimensional O(N) vector model as a benchmark for perturbation theory, the large-N expansion and the functional renormalization group

    International Nuclear Information System (INIS)

    Keitel, Jan; Bartosch, Lorenz

    2012-01-01

    We consider the zero-dimensional O(N) vector model as a simple example to calculate n-point correlation functions using perturbation theory, the large-N expansion and the functional renormalization group (FRG). Comparing our findings with exact results, we show that perturbation theory breaks down for moderate interactions for all N, as one should expect. While the interaction-induced shift of the free energy and the self-energy are well described by the large-N expansion even for small N, this is not the case for higher order correlation functions. However, using the FRG in its one-particle irreducible formalism, we see that very few running couplings suffice to get accurate results for arbitrary N in the strong coupling regime, outperforming the large-N expansion for small N. We further remark on how the derivative expansion, a well-known approximation strategy for the FRG, reduces to an exact method for the zero-dimensional O(N) vector model. (paper)
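    The breakdown of perturbation theory reported above is easy to reproduce for the simplest case, N = 1, where Z(g) = ∫ dx exp(−x²/2 − g x⁴) can be integrated numerically and compared against the truncated series Z_K(g) = √(2π) Σ_{k≤K} (−g)^k (4k−1)!!/k!. Conventions for the action vary between papers; this normalization is an assumption for illustration.

```python
import math
import numpy as np

def z_exact(g):
    """Z(g) = integral of exp(-x^2/2 - g*x^4) dx, by brute-force quadrature."""
    x = np.linspace(-10.0, 10.0, 200001)
    y = np.exp(-x**2 / 2 - g * x**4)
    return float(y.sum() * (x[1] - x[0]))  # tails beyond |x|=10 are negligible

def z_pert(g, order):
    """Truncated perturbation series: sqrt(2*pi) * sum_k (-g)^k (4k-1)!! / k!."""
    total = 0.0
    for k in range(order + 1):
        # (4k-1)!! = product of odd numbers down from 4k-1; empty product = 1
        total += (-g) ** k / math.factorial(k) * math.prod(range(4 * k - 1, 0, -2))
    return math.sqrt(2 * math.pi) * total

g = 0.01
exact = z_exact(g)
# Low orders close in on the exact value; the asymptotic series then diverges.
for order in (1, 2, 5, 20):
    print(order, abs(z_pert(g, order) - exact))
```

Even at a weak coupling of g = 0.01, the error shrinks through roughly order 5 and then grows explosively by order 20 — the hallmark of an asymptotic series, and the reason the paper turns to the large-N expansion and the FRG.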

  16. Large scale model testing

    International Nuclear Information System (INIS)

    Brumovsky, M.; Filip, R.; Polachova, H.; Stepanek, S.

    1989-01-01

    Fracture mechanics and fatigue calculations for WWER reactor pressure vessels were checked by large scale model testing performed using large testing machine ZZ 8000 (with a maximum load of 80 MN) at the SKODA WORKS. The results are described from testing the material resistance to fracture (non-ductile). The testing included the base materials and welded joints. The rated specimen thickness was 150 mm with defects of a depth between 15 and 100 mm. The results are also presented of nozzles of 850 mm inner diameter in a scale of 1:3; static, cyclic, and dynamic tests were performed without and with surface defects (15, 30 and 45 mm deep). During cyclic tests the crack growth rate in the elastic-plastic region was also determined. (author). 6 figs., 2 tabs., 5 refs

  17. Student decision making in large group discussion

    Science.gov (United States)

    Kustusch, Mary Bridget; Ptak, Corey; Sayre, Eleanor C.; Franklin, Scott V.

    2015-04-01

    It is increasingly common in physics classes for students to work together to solve problems and perform laboratory experiments. When students work together, they need to negotiate the roles and decision making within the group. We examine how a large group of students negotiates authority as part of their two-week summer College Readiness Program at Rochester Institute of Technology. The program is designed to develop metacognitive skills in first-generation and Deaf and hard-of-hearing (DHH) STEM undergraduates through cooperative group work, laboratory experimentation, and explicit reflection exercises. On the first full day of the program, the students collaboratively developed a sign for the word "metacognition", for which there is not a sign in American Sign Language. This presentation will focus on three aspects of the ensuing discussion: (1) how the instructor communicated expectations about decision making; (2) how the instructor promoted student-driven decision making rather than instructor-driven policy; and (3) one student's shifts in decision making behavior. We conclude by discussing implications of this research for activity-based physics instruction.

  18. Group Capability Model

    Science.gov (United States)

    Olejarski, Michael; Appleton, Amy; Deltorchio, Stephen

    2009-01-01

    The Group Capability Model (GCM) is a software tool that allows an organization, from first line management to senior executive, to monitor and track the health (capability) of various groups in performing their contractual obligations. GCM calculates a Group Capability Index (GCI) by comparing actual head counts, certifications, and/or skills within a group. The model can also be used to simulate the effects of employee usage, training, and attrition on the GCI. A universal tool and common method was required due to the high risk of losing skills necessary to complete the Space Shuttle Program and meet the needs of the Constellation Program. During this transition from one space vehicle to another, the uncertainty among the critical skilled workforce is high and attrition has the potential to be unmanageable. GCM allows managers to establish requirements for their group in the form of head counts, certification requirements, or skills requirements. GCM then calculates a Group Capability Index (GCI), where a score of 1 indicates that the group is at the appropriate level; anything less than 1 indicates a potential for improvement. This shows the health of a group, both currently and over time. GCM accepts as input head count, certification needs, critical needs, competency needs, and competency critical needs. In addition, team members are categorized by years of experience, percentage of contribution, ex-members and their skills, availability, function, and in-work requirements. Outputs are several reports, including actual vs. required head count, actual vs. required certificates, CGI change over time (by month), and more. The program stores historical data for summary and historical reporting, which is done via an Excel spreadsheet that is color-coded to show health statistics at a glance. GCM has provided the Shuttle Ground Processing team with a quantifiable, repeatable approach to assessing and managing the skills in their organization. 
They now have a common
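    As a rough illustration of the GCI idea described above — ratios of actual to required resources aggregated into a single score, where 1 means the group is at the appropriate level — consider the following sketch. The abstract does not publish GCM's actual formula; the capping and averaging rule here are assumptions for illustration.

```python
# Illustrative Group Capability Index: ratio of actual to required
# staffing across requirement categories, capped at 1 so a surplus in
# one category cannot hide a shortfall in another. (Hypothetical rule;
# the real GCM tool's aggregation is not specified in the abstract.)
def group_capability_index(actual, required):
    ratios = [min(actual[k] / required[k], 1.0) for k in required]
    return sum(ratios) / len(ratios)

# Hypothetical group: slightly understaffed and under-certified.
group = {"head_count": 18, "certified": 9, "skill_x": 4}
needs = {"head_count": 20, "certified": 10, "skill_x": 4}

gci = group_capability_index(group, needs)
print(round(gci, 2))  # → 0.93: below 1.0 flags a potential for improvement
```

Tracking this score per month, as GCM's reports do, shows whether training and hiring are keeping pace with attrition.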

  19. Structured approaches to large-scale systems: Variational integrators for interconnected Lagrange-Dirac systems and structured model reduction on Lie groups

    Science.gov (United States)

    Parks, Helen Frances

    This dissertation presents two projects related to the structured integration of large-scale mechanical systems. Structured integration uses the considerable differential geometric structure inherent in mechanical motion to inform the design of numerical integration schemes. This process improves the qualitative properties of simulations and becomes especially valuable as a measure of accuracy over long time simulations in which traditional Gronwall accuracy estimates lose their meaning. Often, structured integration schemes replicate continuous symmetries and their associated conservation laws at the discrete level. Such is the case for variational integrators, which discretely replicate the process of deriving equations of motion from variational principles. This results in the conservation of momenta associated to symmetries in the discrete system and conservation of a symplectic form when applicable. In the case of Lagrange-Dirac systems, variational integrators preserve a discrete analogue of the Dirac structure preserved in the continuous flow. In the first project of this thesis, we extend Dirac variational integrators to accommodate interconnected systems. We hope this work will find use in the fields of control, where a controlled system can be thought of as a "plant" system joined to its controller, and in the approach of very large systems, where modular modeling may prove easier than monolithically modeling the entire system. The second project of the thesis considers a different approach to large systems. Given a detailed model of the full system, can we reduce it to a more computationally efficient model without losing essential geometric structures in the system? Asked without reference to structure, this is the essential question of the field of model reduction. The answer there has been a resounding yes, with Proper Orthogonal Decomposition (POD) with snapshots rising as one of the most successful methods. 
Our project builds on previous work

  20. Theory and modeling group

    Science.gov (United States)

    Holman, Gordon D.

    1989-01-01

    The primary purpose of the Theory and Modeling Group meeting was to identify scientists engaged or interested in theoretical work pertinent to the Max '91 program, and to encourage theorists to pursue modeling which is directly relevant to data which can be expected to result from the program. A list of participants and their institutions is presented. Two solar flare paradigms were discussed during the meeting -- the importance of magnetic reconnection in flares and the applicability of numerical simulation results to solar flare studies.

  1. Report of the large solenoid detector group

    International Nuclear Information System (INIS)

    Hanson, G.G.; Mori, S.; Pondrom, L.G.

    1987-09-01

    This report presents a conceptual design of a large solenoid for studying physics at the SSC. The parameters and nature of the detector have been chosen based on present estimates of what is required to allow the study of heavy quarks, supersymmetry, heavy Higgs particles, WW scattering at large invariant masses, new W and Z bosons, and very large momentum transfer parton-parton scattering. Simply stated, the goal is to obtain optimum detection and identification of electrons, muons, neutrinos, jets, W's and Z's over a large rapidity region. The primary region of interest extends over ±3 units of rapidity, although the calorimetry must extend to ±5.5 units if optimal missing energy resolution is to be obtained. A magnetic field was incorporated because of the importance of identifying the signs of the charges for both electrons and muons and because of the added possibility of identifying tau leptons and secondary vertices. In addition, the existence of a magnetic field may prove useful for studying new physics processes about which we currently have no knowledge. Since hermeticity of the calorimetry is extremely important, the entire central and endcap calorimeters were located inside the solenoid. This does not at the moment seem to produce significant problems (although many issues remain to be resolved) and in fact leads to a very effective muon detector in the central region.

  2. Longitudinal Trajectories of Metabolic Control From Childhood to Young Adulthood in Type 1 Diabetes From a Large German/Austrian Registry: A Group-Based Modeling Approach.

    Science.gov (United States)

    Schwandt, Anke; Hermann, Julia M; Rosenbauer, Joachim; Boettcher, Claudia; Dunstheimer, Désirée; Grulich-Henn, Jürgen; Kuss, Oliver; Rami-Merhar, Birgit; Vogel, Christian; Holl, Reinhard W

    2017-03-01

    Worsening of glycemic control in type 1 diabetes during puberty is a common observation. However, HbA1c remains stable or even improves for some youths. The aim is to identify distinct patterns of glycemic control in type 1 diabetes from childhood to young adulthood. A total of 6,433 patients with type 1 diabetes were selected from the prospective, multicenter diabetes patient registry Diabetes-Patienten-Verlaufsdokumentation (DPV) (follow-up from age 8 to 19 years, baseline diabetes duration ≥2 years, HbA1c aggregated per year of life). We used latent class growth modeling as the trajectory approach to determine distinct subgroups following a similar trajectory for HbA1c over time. Five distinct longitudinal trajectories of HbA1c were determined, comprising group 1 = 40%, group 2 = 27%, group 3 = 15%, group 4 = 13%, and group 5 = 5% of patients. Groups 1-3 indicated stable glycemic control at different HbA1c levels. At baseline, similar HbA1c was observed in group 1 and group 4, but HbA1c deteriorated in group 4 from age 8 to 19 years. Similar patterns were present in group 3 and group 5. We observed differences in self-monitoring of blood glucose, insulin therapy, daily insulin dose, physical activity, BMI SD score, body-height SD score, and migration background across all HbA1c trajectories (all P ≤ 0.001). No sex differences were present. Comparing groups with similar initial HbA1c but different patterns, groups with a higher HbA1c increase were characterized by a lower frequency of self-monitoring of blood glucose, less physical activity, and reduced height. These treatment and demographic characteristics were related to different HbA1c courses. © 2017 by the American Diabetes Association.
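    Latent class growth modeling is usually fit with specialized statistical software; as a rough stand-in, clustering whole trajectories with a minimal k-means recovers the same qualitative picture of stable versus deteriorating courses. The synthetic HbA1c patterns and the clustering method below are illustrative assumptions, not the DPV analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
ages = np.arange(8, 20)  # yearly HbA1c aggregates, ages 8-19

# Synthetic patients drawn from three hypothetical patterns:
# stable-good, stable-poor, and deteriorating glycemic control.
patterns = [
    7.0 + 0.00 * (ages - 8),   # stable around 7%
    9.5 + 0.00 * (ages - 8),   # stable around 9.5%
    7.0 + 0.25 * (ages - 8),   # 7% worsening to ~9.75%
]
data = np.vstack([p + rng.normal(0, 0.15, size=(40, ages.size)) for p in patterns])

# Minimal k-means over whole trajectories — a crude stand-in for latent
# class growth modeling, which instead fits polynomial class means.
centroids = np.array(patterns) + rng.normal(0, 0.2, (3, ages.size))
for _ in range(20):
    d = ((data[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    labels = d.argmin(axis=1)
    centroids = np.array([data[labels == k].mean(axis=0) for k in range(3)])

# Recovered class means: two flat trajectories and one rising one.
slopes = (centroids[:, -1] - centroids[:, 0]) / (ages[-1] - ages[0])
print(np.round(np.sort(slopes), 2))
```

The recovered per-year slopes separate the deteriorating class (≈0.25 %/year here) from the stable ones, mirroring how groups 4 and 5 in the study diverge from groups with similar baseline HbA1c.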

  3. Mining Behavioral Groups in Large Wireless LANs

    OpenAIRE

    Hsu, Wei-jen; Dutta, Debojyoti; Helmy, Ahmed

    2006-01-01

    One vision of future wireless networks is that they will be deeply integrated and embedded in our lives and will involve the use of personalized mobile devices. User behavior in such networks is bound to affect the network performance. It is imperative to study and characterize the fundamental structure of wireless user behavior in order to model, manage, leverage and design efficient mobile networks. It is also important to make such study as realistic as possible, based on extensive measure...

  4. Large neutrino mixing from renormalization group evolution

    International Nuclear Information System (INIS)

    Balaji, K.R.S.; Mohapatra, R.N.; Parida, M.K.; Paschos, E.A.

    2000-10-01

    The renormalization group evolution equation for two-neutrino mixing is known to exhibit a nontrivial fixed point structure corresponding to maximal mixing at the weak scale. The presence of the fixed point provides a natural explanation of the observed maximal mixing of ν_μ-ν_τ, if the ν_μ and ν_τ are assumed to be quasi-degenerate at the seesaw scale without constraining the mixing angles at that scale. In particular, it allows them to be similar to the quark mixings as in generic grand unified theories. We discuss implementation of this program in the case of the MSSM and find that the predicted mixing remains stable and close to its maximal value, for all energies below the O(TeV) SUSY scale. We also discuss how a particular realization of this idea can be tested in neutrinoless double beta decay experiments. (author)

  5. Modelling group dynamic animal movement

    DEFF Research Database (Denmark)

    Langrock, Roland; Hopcraft, J. Grant C.; Blackwell, Paul G.

    2014-01-01

    Group dynamic movement is a fundamental aspect of many species' movements. The need to adequately model individuals' interactions with other group members has been recognised, particularly in order to differentiate the role of social forces in individual movement from environmental factors. However, to date, practical statistical methods which can include group dynamics in animal movement models have been lacking. We consider a flexible modelling framework that distinguishes a group-level model, describing the movement of the group's centre, and an individual-level model, such that each individual makes its movement decisions relative to the group centroid. The basic idea is framed within the flexible class of hidden Markov models, extending previous work on modelling animal movement by means of multi-state random walks. While in simulation experiments parameter estimators exhibit some bias…

  6. GRIP LANGLEY AEROSOL RESEARCH GROUP EXPERIMENT (LARGE) V1

    Data.gov (United States)

    National Aeronautics and Space Administration — Langley Aerosol Research Group Experiment (LARGE) measures ultrafine aerosol number density, total and non-volatile aerosol number density, dry aerosol size...

  7. Highly Scalable Trip Grouping for Large Scale Collective Transportation Systems

    DEFF Research Database (Denmark)

    Gidofalvi, Gyozo; Pedersen, Torben Bach; Risch, Tore

    2008-01-01

    Transportation-related problems, like road congestion, parking, and pollution, are increasing in most cities. In order to reduce traffic, recent work has proposed methods for vehicle sharing, for example for sharing cabs by grouping "closeby" cab requests, thus minimizing transportation cost and utilizing cab space. However, the methods published so far do not scale to large data volumes, which is necessary to facilitate large-scale collective transportation systems, e.g., ride-sharing systems for large cities. This paper presents highly scalable trip grouping algorithms, which generalize previous…
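    The core idea of grouping "closeby" requests can be sketched with a naive greedy pass. The distance threshold, cab capacity, and seed-based rule below are assumptions for illustration; the paper's contribution is precisely the scalable algorithms that this quadratic sketch lacks.

```python
# Naive greedy trip grouping: assign each pickup request to the first
# open group whose seed pickup lies within a distance threshold,
# otherwise start a new group (i.e., dispatch another cab).
from math import hypot

def group_trips(requests, max_dist=1.0, capacity=4):
    groups = []  # each group is a list of (x, y) pickups; entry 0 is the seed
    for x, y in requests:
        for g in groups:
            if len(g) < capacity and hypot(x - g[0][0], y - g[0][1]) <= max_dist:
                g.append((x, y))
                break
        else:
            groups.append([(x, y)])
    return groups

# Three pickups cluster near the origin, two near (5, 5).
pickups = [(0, 0), (0.5, 0.2), (5, 5), (0.1, 0.9), (5.4, 4.8)]
grouped = group_trips(pickups)
print(len(grouped))  # → 2: the five requests share two cabs
```

Scanning every open group per request is fine for a toy example but degrades on city-scale request streams, which is why the paper replaces it with scalable grouping over spatial partitions.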

  8. Model of large pool fires

    Energy Technology Data Exchange (ETDEWEB)

    Fay, J.A. [Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)]. E-mail: jfay@mit.edu

    2006-08-21

    A two zone entrainment model of pool fires is proposed to depict the fluid flow and flame properties of the fire. Consisting of combustion and plume zones, it provides a consistent scheme for developing non-dimensional scaling parameters for correlating and extrapolating pool fire visible flame length, flame tilt, surface emissive power, and fuel evaporation rate. The model is extended to include grey gas thermal radiation from soot particles in the flame zone, accounting for emission and absorption in both optically thin and thick regions. A model of convective heat transfer from the combustion zone to the liquid fuel pool, and from a water substrate to cryogenic fuel pools spreading on water, provides evaporation rates for both adiabatic and non-adiabatic fires. The model is tested against field measurements of large scale pool fires, principally of LNG, and is generally in agreement with experimental values of all variables.

  9. Model of large pool fires

    International Nuclear Information System (INIS)

    Fay, J.A.

    2006-01-01

    A two zone entrainment model of pool fires is proposed to depict the fluid flow and flame properties of the fire. Consisting of combustion and plume zones, it provides a consistent scheme for developing non-dimensional scaling parameters for correlating and extrapolating pool fire visible flame length, flame tilt, surface emissive power, and fuel evaporation rate. The model is extended to include grey gas thermal radiation from soot particles in the flame zone, accounting for emission and absorption in both optically thin and thick regions. A model of convective heat transfer from the combustion zone to the liquid fuel pool, and from a water substrate to cryogenic fuel pools spreading on water, provides evaporation rates for both adiabatic and non-adiabatic fires. The model is tested against field measurements of large scale pool fires, principally of LNG, and is generally in agreement with experimental values of all variables

  10. Will Large DSO-Managed Group Practices Be the Predominant Setting for Oral Health Care by 2025? Two Viewpoints: Viewpoint 1: Large DSO-Managed Group Practices Will Be the Setting in Which the Majority of Oral Health Care Is Delivered by 2025 and Viewpoint 2: Increases in DSO-Managed Group Practices Will Be Offset by Models Allowing Dentists to Retain the Independence and Freedom of a Traditional Practice.

    Science.gov (United States)

    Cole, James R; Dodge, William W; Findley, John S; Young, Stephen K; Horn, Bruce D; Kalkwarf, Kenneth L; Martin, Max M; Winder, Ronald L

    2015-05-01

    This Point/Counterpoint article discusses the transformation of dental practice from the traditional solo/small-group (partnership) model of the 1900s to large Dental Support Organizations (DSO) that support affiliated dental practices by providing nonclinical functions such as, but not limited to, accounting, human resources, marketing, and legal and practice management. Many feel that DSO-managed group practices (DMGPs) with employed providers will become the setting in which the majority of oral health care will be delivered in the future. Viewpoint 1 asserts that the traditional dental practice patterns of the past are shifting as many younger dentists gravitate toward employed positions in large group practices or the public sector. Although educational debt is relevant in predicting graduates' practice choices, other variables such as gender, race, and work-life balance play critical roles as well. Societal characteristics demonstrated by aging Gen Xers and those in the Millennial generation blend seamlessly with the opportunities DMGPs offer their employees. Viewpoint 2 contends the traditional model of dental care delivery-allowing entrepreneurial practitioners to make decisions in an autonomous setting-is changing but not to the degree nor as rapidly as Viewpoint 1 professes. Millennials entering the dental profession, with characteristics universally attributed to their generation, see value in the independence and flexibility that a traditional practice allows. Although DMGPs provide dentists one option for practice, several alternative delivery models offer current dentists and future dental school graduates many of the advantages of DMGPs while allowing them to maintain the independence and freedom a traditional practice provides.

  11. The EU model evaluation group

    International Nuclear Information System (INIS)

    Petersen, K.E.

    1999-01-01

The model evaluation group (MEG) was launched in 1992, growing out of the Major Technological Hazards Programme with EU/DG XII. The goal of MEG was to improve the culture in which models were developed, particularly by encouraging voluntary model evaluation procedures based on a formalised consensus protocol. The evaluation was intended to assess the fitness-for-purpose of the models being used as a measure of their quality. The approach adopted focused on developing a generic model evaluation protocol and subsequently targeting it onto specific areas of application. Five such developments have been initiated, on heavy gas dispersion, liquid pool fires, gas explosions, human factors and momentum fires. The quality of models is an important element when complying with the 'Seveso Directive', which requires that the safety reports submitted to the authorities comprise an assessment of the extent and severity of the consequences of identified major accidents. Further, the quality of models becomes important in the land use planning process, where the proximity of industrial sites to vulnerable areas may be critical. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)

  12. Managing large-scale models: DBS

    International Nuclear Information System (INIS)

    1981-05-01

A set of fundamental management tools for developing and operating a large scale model and data base system is presented. Experience in operating and developing such a computerized system shows that the only reasonable way to gain strong management control is to implement appropriate controls and procedures. Chapter I discusses the purpose of the book. Chapter II classifies a broad range of generic management problems into three groups: documentation, operations, and maintenance. First, system problems are identified, then solutions for gaining management control are discussed. Chapters III, IV, and V present practical methods for dealing with these problems. These methods were developed for managing SEAS but have general application for large scale models and data bases.

  13. The effect of continuous grouping of pigs in large groups on stress response and haematological parameters

    DEFF Research Database (Denmark)

    Damgaard, Birthe Marie; Studnitz, Merete; Jensen, Karin Hjelholt

    2009-01-01

The consequences of an 'all in-all out' static group of uniform age vs. a continuously dynamic group with litter introduction and exit every third week were examined with respect to stress response and haematological parameters in large groups of 60 pigs. The experiment included a total of 480 pigs… from weaning at the age of 4 weeks to the age of 18 weeks after weaning. Limited differences were found in stress and haematological parameters between pigs in dynamic and static groups. The cortisol response to the stress test increased with the duration of the stress test in pigs from… the dynamic group, while it decreased in the static group. Health condition and growth performance were reduced in the dynamic groups compared with the static groups. In the dynamic groups the haematological parameters indicated an activation of the immune system, characterised by an increased…

  14. Five Large Generation Groups:Competing in Capital Operation

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

Since the reform of the electric power industry in 2002, the newly established five large generation groups have persisted in the development strategy of "taking electricity as the core and extending to up- and downstream businesses." Stringent measures were taken in capital operation, and their potential has been shown through electric power asset acquisitions, coal and financial resource investments, capital market financing, as well as power utility restructuring. The five groups are playing more and more important roles in mergers and acquisitions (M&A) and capital markets.

  15. Group Centric Networking: Large Scale Over the Air Testing of Group Centric Networking

    Science.gov (United States)

    2016-11-01

Large Scale Over-the-Air Testing of Group Centric Networking. Logan Mercer, Greg Kuperman, Andrew Hunter, Brian Proulx, MIT Lincoln Laboratory. This work evaluates the performance of Group Centric Networking (GCN), a networking protocol developed for robust and scalable communications in lossy networks where users are… devices, and the ad-hoc nature of the network. GCN is a proposed networking protocol that addresses challenges specific to…

  16. Integrating Collaborative Learning Groups in the Large Enrollment Lecture

    Science.gov (United States)

    Adams, J. P.; Brissenden, G.; Lindell Adrian, R.; Slater, T. F.

    1998-12-01

Recent reforms for undergraduate education propose that students should work in teams to solve problems that simulate problems that research scientists address. In the context of an innovative large-enrollment course at Montana State University, faculty have developed a series of 15 in-class, collaborative learning group activities that provide students with realistic scenarios to investigate. Focusing on a team approach, the four principal types of activities employed are historical, conceptual, process, and open-ended activities. Examples of these activities include classifying stellar spectra, characterizing galaxies, parallax measurements, estimating stellar radii, and correlating star colors with absolute magnitudes. Summative evaluation results from a combination of attitude surveys, astronomy concept examinations, and focus group interviews strongly suggest that, overall, students are learning more astronomy, believe that the group activities are valuable, enjoy the less-lecture course format, and have significantly higher attendance rates. In addition, class observations of 48 self-formed, collaborative learning groups reveal that female students are more engaged in single-gender learning groups than in mixed-gender groups.

  17. Memory Efficient PCA Methods for Large Group ICA.

    Science.gov (United States)

    Rachakonda, Srinivas; Silva, Rogers F; Liu, Jingyu; Calhoun, Vince D

    2016-01-01

Principal component analysis (PCA) is widely used for data reduction in group independent component analysis (ICA) of fMRI data. Commonly, group-level PCA of temporally concatenated datasets is computed prior to ICA of the group principal components. This work focuses on reducing very high dimensional temporally concatenated datasets into their group PCA space. Existing randomized PCA methods can determine the PCA subspace with minimal memory requirements and, thus, are ideal for solving large PCA problems. Since the number of dataloads is not typically optimized, we extend one of these methods to compute PCA of very large datasets with a minimal number of dataloads. This method is coined multi power iteration (MPOWIT). The key idea behind MPOWIT is to estimate a subspace larger than the desired one, while checking for convergence of only the smaller subset of interest. The number of iterations is reduced considerably (as well as the number of dataloads), accelerating convergence without loss of accuracy. More importantly, in the proposed implementation of MPOWIT, the memory required for successful recovery of the group principal components becomes independent of the number of subjects analyzed. Highly efficient subsampled eigenvalue decomposition techniques are also introduced, furnishing excellent PCA subspace approximations that can be used for intelligent initialization of randomized methods such as MPOWIT. Together, these developments enable efficient estimation of accurate principal components, as we illustrate by solving a 1600-subject group-level PCA of fMRI with standard acquisition parameters, on a regular desktop computer with only 4 GB RAM, in just a few hours. MPOWIT is also highly scalable and could realistically solve group-level PCA of fMRI on thousands of subjects, or more, using standard hardware, limited only by time, not memory. Also, the MPOWIT algorithm is highly parallelizable, which would enable fast, distributed implementations ideal for big…
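The subspace (block power) iteration idea behind MPOWIT can be sketched as follows, assuming a data matrix small enough to hold in memory; the real method is organized around minimizing passes ("dataloads") over data that does not fit, and `subspace_iteration_pca`, `oversample`, and `tol` are names chosen for this sketch:

```python
import numpy as np

def subspace_iteration_pca(X, k, oversample=10, n_iter=20, tol=1e-7, seed=0):
    """Estimate the top-k principal directions of X (samples x features)
    by block power iteration on the covariance, using a subspace larger
    than k while checking convergence only of the leading k values --
    the key idea the abstract attributes to MPOWIT."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)                  # center the features
    m = k + oversample                       # enlarged working subspace
    Q = np.linalg.qr(rng.standard_normal((Xc.shape[1], m)))[0]
    prev = np.zeros(k)
    for _ in range(n_iter):
        # One pass over the data: covariance-block product in two matmuls
        Z = Xc.T @ (Xc @ Q)
        Q, R = np.linalg.qr(Z)
        ev = np.abs(np.diag(R))[:k]          # proxies for top eigenvalues
        if np.all(np.abs(ev - prev) <= tol * np.maximum(ev, 1.0)):
            break
        prev = ev
    # Rayleigh-Ritz step: extract the leading k directions from the subspace
    B = Q.T @ (Xc.T @ (Xc @ Q))
    w, V = np.linalg.eigh(B)
    order = np.argsort(w)[::-1][:k]
    return Q @ V[:, order]                   # features x k directions
```

Oversampling widens the eigenvalue gap seen by the iteration, which is what reduces the iteration (and dataload) count relative to running power iteration on exactly k vectors.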

  18. Memory efficient PCA methods for large group ICA

    Directory of Open Access Journals (Sweden)

    Srinivas eRachakonda

    2016-02-01

Principal component analysis (PCA) is widely used for data reduction in group independent component analysis (ICA) of fMRI data. Commonly, group-level PCA of temporally concatenated datasets is computed prior to ICA of the group principal components. This work focuses on reducing very high dimensional temporally concatenated datasets into their group PCA space. Existing randomized PCA methods can determine the PCA subspace with minimal memory requirements and, thus, are ideal for solving large PCA problems. Since the number of dataloads is not typically optimized, we extend one of these methods to compute PCA of very large datasets with a minimal number of dataloads. This method is coined multi power iteration (MPOWIT). The key idea behind MPOWIT is to estimate a subspace larger than the desired one, while checking for convergence of only the smaller subset of interest. The number of iterations is reduced considerably (as well as the number of dataloads), accelerating convergence without loss of accuracy. More importantly, in the proposed implementation of MPOWIT, the memory required for successful recovery of the group principal components becomes independent of the number of subjects analyzed. Highly efficient subsampled eigenvalue decomposition techniques are also introduced, furnishing excellent PCA subspace approximations that can be used for intelligent initialization of randomized methods such as MPOWIT. Together, these developments enable efficient estimation of accurate principal components, as we illustrate by solving a 1600-subject group-level PCA of fMRI with standard acquisition parameters, on a regular desktop computer with only 4 GB RAM, in just a few hours. MPOWIT is also highly scalable and could realistically solve group-level PCA of fMRI on thousands of subjects, or more, using standard hardware, limited only by time, not memory. Also, the MPOWIT algorithm is highly parallelizable, which would enable fast, distributed implementations…

  19. Two-group interfacial area concentration correlations of two-phase flows in large diameter pipes

    International Nuclear Information System (INIS)

    Shen, Xiuzhong; Hibiki, Takashi

    2015-01-01

Reliable empirical correlations and models are one of the important ways to predict the interfacial area concentration (IAC) in two-phase flows. However, up to now, no correlation or model has been available for the prediction of the IAC in two-phase flows in large diameter pipes. This study collected an IAC experimental database of two-phase flows taken under various flow conditions in large diameter pipes and presented a systematic way to predict the IAC for two-phase flows from bubbly and cap-bubbly to churn flow in large diameter pipes by categorizing bubbles into two groups (group 1: spherical and distorted bubbles; group 2: cap bubbles). Correlations were developed to predict the group-1 void fraction from the void fraction of all bubbles. The IAC contribution from group-1 bubbles was modeled by using the dominant parameters of group-1 bubble void fraction and Reynolds number, based on the parameter-dependent analysis of Hibiki and Ishii (2001, 2002) using one-dimensional bubble number density and interfacial area transport equations. A new drift velocity correlation for two-phase flow with large cap bubbles in large diameter pipes was derived in this study. By comparing the newly derived drift velocity correlation with the existing drift velocity correlation of Kataoka and Ishii (1987) for large diameter pipes, and using the characteristics of the representative bubbles among the group-2 bubbles, we developed models of the IAC and bubble size for group-2 cap bubbles. The developed models for estimating the IAC are compared with the entire collected database. A reasonable agreement was obtained, with average relative errors of ±28.1%, ±54.4% and ±29.6% for group 1, group 2 and all bubbles, respectively. (author)
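For orientation, drift velocity correlations of this kind enter through the standard one-dimensional drift-flux relation (a textbook form, not the new correlation derived in the paper):

```latex
\langle\langle v_g \rangle\rangle \;=\; C_0 \,\langle j \rangle \;+\; \langle\langle v_{gj} \rangle\rangle
```

where $\langle\langle v_g \rangle\rangle$ is the void-fraction-weighted mean gas velocity, $C_0$ the distribution parameter, $\langle j \rangle$ the area-averaged mixture volumetric flux, and $\langle\langle v_{gj} \rangle\rangle$ the drift velocity that correlations such as Kataoka and Ishii (1987) supply.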

  20. Structuring very large domain models

    DEFF Research Database (Denmark)

    Störrle, Harald

    2010-01-01

View/Viewpoint approaches like IEEE 1471-2000, or Kruchten's 4+1 view model, are used to structure software architectures at a high level of granularity. While research has focused on architectural languages and on consistency between multiple views, practical questions such as the structuring of a…

  1. On renormalization group flow in matrix model

    International Nuclear Information System (INIS)

    Gao, H.B.

    1992-10-01

The renormalization group flow recently found by Brezin and Zinn-Justin by integrating out redundant entries of the (N+1)×(N+1) Hermitian random matrix is studied. By introducing the RG flow parameter explicitly, and adding suitable counter terms to the matrix potential of the one-matrix model, we deduce some interesting properties of the RG trajectories. In particular, the string equation for the general massive model interpolating between the UV and IR fixed points turns out to be a consequence of RG flow. An ambiguity in the UV region of the RG trajectory is remarked to be related to the large-order behaviour of the one-matrix model. (author). 7 refs
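For reference, the objects involved are standard (these are the usual definitions, not results of the paper): the one-matrix model is defined by the partition function

```latex
Z_N \;=\; \int \mathrm{d}M \; e^{-N \,\mathrm{tr}\, V(M)},
```

where $M$ is an $N \times N$ Hermitian matrix and $V$ its polynomial potential; the Brezin-Zinn-Justin flow arises from integrating out the extra row and column of an $(N+1)\times(N+1)$ matrix to relate $Z_{N+1}$ to $Z_N$, turning $N$ into the flow parameter.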

  2. Computational Modeling of Large Wildfires: A Roadmap

    KAUST Repository

    Coen, Janice L.; Douglas, Craig C.

    2010-01-01

    Wildland fire behavior, particularly that of large, uncontrolled wildfires, has not been well understood or predicted. Our methodology to simulate this phenomenon uses high-resolution dynamic models made of numerical weather prediction (NWP) models

  3. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  4. Efficient querying of large process model repositories

    NARCIS (Netherlands)

    Jin, Tao; Wang, Jianmin; La Rosa, M.; Hofstede, ter A.H.M.; Wen, Lijie

    2013-01-01

    Recent years have seen an increased uptake of business process management technology in industries. This has resulted in organizations trying to manage large collections of business process models. One of the challenges facing these organizations concerns the retrieval of models from large business

  5. Integrable lattice models and quantum groups

    International Nuclear Information System (INIS)

    Saleur, H.; Zuber, J.B.

    1990-01-01

    These lectures aim at introducing some basic algebraic concepts on lattice integrable models, in particular quantum groups, and to discuss some connections with knot theory and conformal field theories. The list of contents is: Vertex models and Yang-Baxter equation; Quantum sl(2) algebra and the Yang-Baxter equation; U q sl(2) as a symmetry of statistical mechanical models; Face models; Face models attached to graphs; Yang-Baxter equation, braid group and link polynomials
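The Yang-Baxter equation referred to throughout has the standard spectral-parameter form, for an R-matrix acting on a triple tensor product with subscripts indicating the factors acted upon:

```latex
R_{12}(u)\,R_{13}(u+v)\,R_{23}(v) \;=\; R_{23}(v)\,R_{13}(u+v)\,R_{12}(u).
```

Solutions of this equation yield both commuting transfer matrices for vertex models and, in a suitable limit, representations of the braid group, which is the bridge to link polynomials mentioned in the abstract.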

  6. Large-scale Lurgi plant would be uneconomic: study group

    Energy Technology Data Exchange (ETDEWEB)

    1964-03-21

The Gas Council and National Coal Board agreed that the building of a large-scale Lurgi plant on the basis of the study is not at present acceptable on economic grounds. The committee considered that new processes based on naphtha offered more economic sources of base- and peak-load production. Tables listing data provided in contractors' design studies and a summary of contractors' process designs are included.

  7. Conjugacy in relatively extra-large Artin groups

    Directory of Open Access Journals (Sweden)

    Arye Juhasz

    2015-09-01

Let A be an Artin group with standard generators X = {x_1, …, x_n}, n ≥ 1, and defining graph Γ_A. A standard parabolic subgroup of A is a subgroup generated by a subset of X. For elements u and v of A we say (as usual) that u is conjugate to v by an element h of A if h⁻¹uh = v holds in A. Similarly, if K and L are subsets of A, then K is conjugate to L by an element h of A if h⁻¹Kh = L. In this work we consider the conjugacy of elements and standard parabolic subgroups of a certain type of Artin group. Results in this direction occur in papers by Duncan, Kazachkov, Remeslennikov, Fenn, Dale, Jun, Godelle, Gonzalez-Meneses, Wiest, Paris, and Rolfsen, for example. Of particular interest are centralisers of elements and of standard parabolic subgroups, normalisers of standard parabolic subgroups, and commensurators of parabolic subgroups. In this work we consider similar problems in a new class of Artin groups, introduced in the paper "On relatively extra-large Artin groups and their relative asphericity" by Juhasz, where, among other things, the word problem is solved. Intersections of parabolic subgroups and their conjugates are also considered.

  8. Characteristic properties of large subgroups in primary abelian groups

    Indian Academy of Sciences (India)


1. Introduction. The main purpose of this article is to study the relations between the structures of primary abelian groups and their… Case 2: γ − 2 exists. Let G_{γ−1} be a direct summand of G_γ. We remark, in connection with Case 1, that any p^{γ−1}-high subgroup of G_γ is isomorphic to G_{γ−1}. As far as Case 2 is concerned, …

  9. 16-dimensional smooth projective planes with large collineation groups

    OpenAIRE

    Bödi, Richard

    1998-01-01

Smooth projective planes are projective planes defined on smooth manifolds (i.e., the set of points and the set of lines are smooth manifolds) such that the geometric operations of join and intersection are smooth. A systematic study of such planes and of their collineation groups can be found in previous works of the author. We prove in this paper that a 16-dimensional smooth projective plane which admits a…

  10. Regularization modeling for large-eddy simulation

    NARCIS (Netherlands)

    Geurts, Bernardus J.; Holm, D.D.

    2003-01-01

    A new modeling approach for large-eddy simulation (LES) is obtained by combining a "regularization principle" with an explicit filter and its inversion. This regularization approach allows a systematic derivation of the implied subgrid model, which resolves the closure problem. The central role of
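One classical instance of such a regularization principle is the Leray model (cited here as a standard example; the paper's framework combines the principle with an explicit filter and its inversion), in which the advecting velocity is replaced by its filtered counterpart:

```latex
\partial_t u + (\bar{u} \cdot \nabla)\, u + \nabla p = \nu \,\Delta u,
\qquad \nabla \cdot u = 0,
\qquad \bar{u} = G * u,
```

where $G$ is the explicit filter kernel. Rewriting this system in terms of $\bar{u}$ alone, via the (approximate) inverse of the filter, is what yields an implied subgrid model in closed form.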

  11. Models for large superconducting toroidal magnet systems

    International Nuclear Information System (INIS)

    Arendt, F.; Brechna, H.; Erb, J.; Komarek, P.; Krauth, H.; Maurer, W.

    1976-01-01

Prior to the design of large GJ toroidal magnet systems it is appropriate to procure small scale models, which can simulate their pertinent properties and allow investigation of the relevant phenomena. The important feature of the model is to show under which circumstances the system performance can be extrapolated to large magnets. Based on parameters such as the maximum magnetic field, the current density, and the maximum tolerable magneto-mechanical stresses, a simple method of designing model magnets is presented. It is shown how pertinent design parameters change when the toroidal dimensions are altered. In addition, some conductor cost estimates are given based on reactor power output and wall loading.

  12. Working group report: Physics at the Large Hadron Collider

    Indian Academy of Sciences (India)

…cally viable physics issues at two hadron colliders currently under operation, the p̄p collider… corrections to different SM processes are very important. … Keeping all these in mind, and the available skills and interests of the… relation involving the masses of the Standard Model particles as well as the masses of any…

  13. Comparing linear probability model coefficients across groups

    DEFF Research Database (Denmark)

    Holm, Anders; Ejrnæs, Mette; Karlson, Kristian Bernt

    2015-01-01

This article offers a formal identification analysis of the problem in comparing coefficients from linear probability models between groups. We show that differences in coefficients from these models can result not only from genuine differences in effects, but also from differences in one or more of the following three components: outcome truncation, scale parameters, and the distributional shape of the predictor variable. These results point to limitations in using linear probability model coefficients for group comparisons. We also provide Monte Carlo simulations and real examples to illustrate these limitations, and we suggest a restricted approach to using linear probability model coefficients in group comparisons.
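The role of predictor distribution shape can be illustrated with a tiny Monte Carlo sketch (a hypothetical setup, not the article's simulations; `simulated_slope` and its parameters are invented for the sketch): two groups share the same latent logistic effect, but because the predictor's spread differs, the fitted LPM slopes diverge.

```python
import numpy as np

def lpm_slope(x, y):
    """OLS slope of a binary outcome on a single predictor (the LPM)."""
    xc = x - x.mean()
    return float(xc @ (y - y.mean()) / (xc @ xc))

def simulated_slope(x_scale, n=200_000, b=1.0, seed=0):
    """Shared latent-index DGP: y = 1 if b*x + e > 0, e ~ standard logistic.
    The two 'groups' differ only in the spread of x (a difference in the
    distributional shape of the predictor), not in the latent effect b."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, x_scale, n)
    e = rng.logistic(0.0, 1.0, n)
    y = (b * x + e > 0).astype(float)
    return lpm_slope(x, y)
```

Comparing `simulated_slope(0.5)` with `simulated_slope(3.0)` shows a markedly smaller LPM coefficient in the wide-spread group, even though the latent effect b is identical, because probabilities saturate near 0 and 1.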

  14. Sutherland models for complex reflection groups

    International Nuclear Information System (INIS)

    Crampe, N.; Young, C.A.S.

    2008-01-01

There are known to be integrable Sutherland models associated to every real root system, or, which is almost equivalent, to every real reflection group. Real reflection groups are special cases of complex reflection groups. In this paper we associate certain integrable Sutherland models to the classical family of complex reflection groups. Internal degrees of freedom are introduced, defining dynamical spin chains, and the freezing limit taken to obtain static chains of Haldane-Shastry type. By considering the relation of these models to the usual BC_N case, we are led to systems with both real and complex reflection groups as symmetries. We demonstrate their integrability by means of new Dunkl operators, associated to wreath products of dihedral groups
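For reference, the prototypical (A-type) trigonometric Sutherland Hamiltonian that such constructions generalize reads, in its standard form with coupling g and circle circumference L:

```latex
H \;=\; -\sum_{i=1}^{N} \frac{\partial^2}{\partial x_i^2}
\;+\; \sum_{i<j} \frac{g(g-1)\,\pi^2/L^2}{\sin^2\!\big(\pi (x_i - x_j)/L\big)}.
```

Replacing the pair potential by a sum over the reflections of a chosen reflection group is what yields the root-system generalizations the abstract refers to.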

  15. Constituent models and large transverse momentum reactions

    International Nuclear Information System (INIS)

    Brodsky, S.J.

    1975-01-01

The discussion of constituent models and large transverse momentum reactions includes the structure of hard scattering models, dimensional counting rules for large transverse momentum reactions, dimensional counting and exclusive processes, the deuteron form factor, applications to inclusive reactions, predictions for meson and photon beams, the charge-cubed test for the e^±p → e^±γX asymmetry, the quasi-elastic peak in inclusive hadronic reactions, correlations, and the multiplicity bump at large transverse momentum. Also covered are the partition method for bound state calculations, proofs of dimensional counting, minimal neutralization and quark-quark scattering, the development of the constituent interchange model, and the A dependence of high transverse momentum reactions.
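The dimensional counting rules mentioned take the standard form: for an exclusive process A + B → C + D at fixed center-of-mass angle, with n the total number of elementary constituents in A, B, C, and D,

```latex
\frac{d\sigma}{dt}\,(AB \to CD) \;\sim\; s^{\,2-n}\, f(t/s),
\qquad\qquad
F(Q^2) \;\sim\; (Q^2)^{\,1-n_H},
```

where the second relation is the corresponding rule for the form factor of a hadron built of $n_H$ constituents (e.g. $F \sim Q^{-4}$ for the nucleon with $n_H = 3$).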

  16. Application of renormalization group theory to the large-eddy simulation of transitional boundary layers

    Science.gov (United States)

    Piomelli, Ugo; Zang, Thomas A.; Speziale, Charles G.; Lund, Thomas S.

    1990-01-01

    An eddy viscosity model based on the renormalization group theory of Yakhot and Orszag (1986) is applied to the large-eddy simulation of transition in a flat-plate boundary layer. The simulation predicts with satisfactory accuracy the mean velocity and Reynolds stress profiles, as well as the development of the important scales of motion. The evolution of the structures characteristic of the nonlinear stages of transition is also predicted reasonably well.

  17. Group heterogeneity increases the risks of large group size: a longitudinal study of productivity in research groups.

    Science.gov (United States)

    Cummings, Jonathon N; Kiesler, Sara; Bosagh Zadeh, Reza; Balakrishnan, Aruna D

    2013-06-01

    Heterogeneous groups are valuable, but differences among members can weaken group identification. Weak group identification may be especially problematic in larger groups, which, in contrast with smaller groups, require more attention to motivating members and coordinating their tasks. We hypothesized that as groups increase in size, productivity would decrease with greater heterogeneity. We studied the longitudinal productivity of 549 research groups varying in disciplinary heterogeneity, institutional heterogeneity, and size. We examined their publication and citation productivity before their projects started and 5 to 9 years later. Larger groups were more productive than smaller groups, but their marginal productivity declined as their heterogeneity increased, either because their members belonged to more disciplines or to more institutions. These results provide evidence that group heterogeneity moderates the effects of group size, and they suggest that desirable diversity in groups may be better leveraged in smaller, more cohesive units.

  18. Large Mammalian Animal Models of Heart Disease

    Directory of Open Access Journals (Sweden)

    Paula Camacho

    2016-10-01

Due to the biological complexity of the cardiovascular system, animal models are an urgent pre-clinical need for advancing our knowledge of cardiovascular disease and for exploring new drugs to repair the damaged heart. Ideally, a model system should be inexpensive, easily manipulated, reproducible, a biological representative of human disease, and ethically sound. Although a larger animal model is more expensive and difficult to manipulate, its genetic, structural, functional, and even disease similarities to humans make it an ideal model to consider first. This review presents the commonly used large animals (dog, sheep, pig, and non-human primates), while less-used large animals (cows, horses) are excluded. The review attempts to introduce unique points for each species regarding its biological properties, degree of susceptibility to developing certain types of heart disease, and methodology of induced conditions. For example, dogs rarely develop myocardial infarction, while dilated cardiomyopathy develops quite often. Based on the similarities of each species to the human, model selection may first consider non-human primates, then pig, sheep, and dog, but it also depends on other factors, for example, purpose, funding, ethics, and policy. We hope this review can serve as a basic outline of large animal models for cardiovascular researchers and clinicians.

  19. Modeling Group Interactions via Open Data Sources

    Science.gov (United States)

    2011-08-30

…data. State-of-the-art search engines are designed to help with general query-specific search and are not suitable for finding disconnected online groups. The… groups, (2) developing innovative mathematical and statistical models and efficient algorithms that leverage existing search engines and employ…

  20. Spatial occupancy models for large data sets

    Science.gov (United States)

    Johnson, Devin S.; Conn, Paul B.; Hooten, Mevin B.; Ray, Justina C.; Pond, Bruce A.

    2013-01-01

Since its development, occupancy modeling has become a popular and useful tool for ecologists wishing to learn about the dynamics of species occurrence over time and space. Such models require presence–absence data to be collected at spatially indexed survey units. However, only recently have researchers recognized the need to correct for spatially induced overdispersion by explicitly accounting for spatial autocorrelation in occupancy probability. Previous efforts to incorporate such autocorrelation have largely focused on logit-normal formulations for occupancy, with spatial autocorrelation induced by a random effect within a hierarchical modeling framework. Although useful, computational time generally limits such an approach to relatively small data sets, and there are often problems with algorithm instability, yielding unsatisfactory results. Further, recent research has revealed a hidden form of multicollinearity in such applications, which may lead to parameter bias if not explicitly addressed. Combining several techniques, we present a unifying hierarchical spatial occupancy model specification that is particularly effective over large spatial extents. This approach employs a probit mixture framework for occupancy and can easily accommodate a reduced-dimensional spatial process to resolve issues with multicollinearity and spatial confounding while improving algorithm convergence. Using open-source software, we demonstrate this new model specification using a case study involving occupancy of caribou (Rangifer tarandus) over a set of 1080 survey units spanning a large contiguous region (108 000 km2) in northern Ontario, Canada. Overall, the combination of a more efficient specification and open-source software allows for a facile and stable implementation of spatial occupancy models for large data sets.
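The non-spatial likelihood that such models build on can be sketched as follows (illustrative only: the paper's specification adds a reduced-dimensional spatial probit process on top of this, and `fit_occupancy` is a name invented for the sketch). A site is occupied with probability psi; if occupied, each of J repeat visits detects the species with probability p, so per-site detection counts follow a zero-inflated binomial:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

def fit_occupancy(y, J):
    """Maximum-likelihood fit of the basic occupancy model.

    y: array of per-site detection counts out of J repeat visits.
    Returns (psi_hat, p_hat). The latent occupancy state is marginalized
    out, giving a zero-inflated binomial likelihood per site."""
    def nll(theta):
        psi = 1.0 / (1.0 + np.exp(-theta[0]))   # inverse-logit
        p = 1.0 / (1.0 + np.exp(-theta[1]))
        # Occupied-and-detected-y-times, plus never-occupied mass at y == 0
        lik = psi * binom.pmf(y, J, p) + (1.0 - psi) * (y == 0)
        return -np.sum(np.log(lik + 1e-300))
    res = minimize(nll, x0=np.zeros(2), method="Nelder-Mead")
    return tuple(1.0 / (1.0 + np.exp(-res.x)))   # (psi_hat, p_hat)
```

Because detection is imperfect, the naive proportion of sites with at least one detection understates psi, which is exactly the bias the likelihood above corrects.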

  1. Modelling and control of large cryogenic refrigerator

    International Nuclear Information System (INIS)

    Bonne, Francois

    2014-01-01

This manuscript is concerned with both the modeling and the derivation of control schemes for large cryogenic refrigerators, in particular those subjected to highly variable pulsed heat loads. A model of each object that typically composes a large cryo-refrigerator is proposed, together with a methodology for assembling object models into a subsystem model. The manuscript also shows how to obtain a linear equivalent model of a subsystem. Based on the derived models, advanced control schemes are proposed: a linear quadratic controller for warm compression stations operating with either two or three pressure states, and a constrained predictive controller for the cold box. The particularity of these control schemes is that they fit the computing and data-storage capabilities of the Programmable Logic Controllers (PLCs) widely used in industry. The open-loop model prediction capability is assessed using experimental data. The developed control schemes are validated in simulation and experimentally on the 400 W at 1.8 K SBT cryogenic test facility and on the CERN LHC warm compression station. (author) [fr]

  2. Modeling and simulation of large HVDC systems

    Energy Technology Data Exchange (ETDEWEB)

    Jin, H.; Sood, V.K.

    1993-01-01

This paper addresses the complexity and effort involved in preparing simulation data and implementing various converter control schemes, as well as the excessive simulation times encountered in modelling and simulating large HVDC systems. The Power Electronic Circuit Analysis program (PECAN) is used to address these problems, and a large HVDC system with two dc links is simulated using PECAN. A benchmark HVDC system is studied to compare the simulation results with those from other packages. The simulation times and results are provided in the paper.

  3. Large-scale modelling of neuronal systems

    International Nuclear Information System (INIS)

    Castellani, G.; Verondini, E.; Giampieri, E.; Bersani, F.; Remondini, D.; Milanesi, L.; Zironi, I.

    2009-01-01

The brain is, without any doubt, the most complex system of the human body. Its complexity is also due to the extremely high number of neurons, as well as the huge number of synapses connecting them. Each neuron is capable of performing complex tasks, like learning and memorizing a large class of patterns. The simulation of large neuronal systems is challenging for both technological and computational reasons, and can open new perspectives for the comprehension of brain functioning. A well-known and widely accepted model of bidirectional synaptic plasticity, the BCM model, is stated by a differential equation approach based on bistability and selectivity properties. We have modified the BCM model, extending it from a single-neuron to a whole-network model. This new model is capable of generating interesting network topologies starting from a small number of local parameters describing the interaction between incoming and outgoing links from each neuron. We have characterized this model in terms of complex network theory, showing how this learning rule can support network generation.

  4. Computational Modeling of Large Wildfires: A Roadmap

    KAUST Repository

    Coen, Janice L.

    2010-08-01

    Wildland fire behavior, particularly that of large, uncontrolled wildfires, has not been well understood or predicted. Our methodology to simulate this phenomenon uses high-resolution dynamic models made of numerical weather prediction (NWP) models coupled to fire behavior models to simulate fire behavior. NWP models are capable of modeling very high resolution (< 100 m) atmospheric flows. The wildland fire component is based upon semi-empirical formulas for fireline rate of spread, post-frontal heat release, and a canopy fire. The fire behavior is coupled to the atmospheric model such that low level winds drive the spread of the surface fire, which in turn releases sensible heat, latent heat, and smoke fluxes into the lower atmosphere, feeding back to affect the winds directing the fire. These coupled dynamic models capture the rapid spread downwind, flank runs up canyons, bifurcations of the fire into two heads, and rough agreement in area, shape, and direction of spread at periods for which fire location data is available. Yet, intriguing computational science questions arise in applying such models in a predictive manner, including physical processes that span a vast range of scales, processes such as spotting that cannot be modeled deterministically, estimating the consequences of uncertainty, the efforts to steer simulations with field data ("data assimilation"), lingering issues with short term forecasting of weather that may show skill only on the order of a few hours, and the difficulty of gathering pertinent data for verification and initialization in a dangerous environment. © 2010 IEEE.
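The two-way coupling described above, in which low-level winds drive fire spread and the released heat feeds back on the winds, can be caricatured in a few lines. This is a toy sketch, not the coupled NWP-fire model of the paper: the coefficients, the linear rate-of-spread stand-in, and the relaxation form of the wind feedback are all invented for illustration.

```python
import numpy as np

def coupled_fire_run(steps=100, dt=1.0):
    """Toy coupled fire-atmosphere loop: wind advances the front, heat perturbs wind."""
    wind = 5.0          # near-surface wind speed, m/s (illustrative)
    front = 0.0         # fire-front position, m
    history = []
    for _ in range(steps):
        spread_rate = 0.05 * wind           # stand-in for a semi-empirical spread formula
        front += spread_rate * dt
        heat_flux = 10.0 * spread_rate      # sensible heat released by newly burning fuel
        # Heat-driven convergence accelerates the wind; drag relaxes it toward ambient.
        wind += (0.01 * heat_flux - 0.02 * (wind - 5.0)) * dt
        history.append((front, wind))
    return np.array(history)

hist = coupled_fire_run()
```

Even this caricature shows the qualitative feedback the paper exploits: the wind settles above its ambient value because the fire's own heat release sustains it, so the front accelerates relative to an uncoupled run.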

  5. Group theory for unified model building

    International Nuclear Information System (INIS)

    Slansky, R.

    1981-01-01

The results gathered here on simple Lie algebras have been selected with attention to the needs of unified model builders who study Yang-Mills theories based on simple, local-symmetry groups that contain as a subgroup the SU(2)^w × U(1)^w × SU(3)^c symmetry of the standard theory of electromagnetic, weak, and strong interactions. The major topics include, after a brief review of the standard model and its unification into a simple group, the use of Dynkin diagrams to analyze the structure of the group generators and to keep track of the weights (quantum numbers) of the representation vectors; an analysis of the subgroup structure of simple groups, including explicit coordinatizations of the projections in weight space; lists of representations, tensor products and branching rules for a number of simple groups; and other details about groups and their representations that are often helpful for surveying unified models, including vector-coupling coefficient calculations. Tabulations of representations, tensor products, and branching rules for E6, SO10, SU6, F4, SO9, SO5, SO8, SO7, SU4, E7, E8, SU8, SO14, SO18, SO22, and, for completeness, SU3 are included. (These tables may have other applications.) Group-theoretical techniques for analyzing symmetry breaking are described in detail and many examples are reviewed, including explicit parameterizations of mass matrices. (orig.)

  6. Large-scale multimedia modeling applications

    International Nuclear Information System (INIS)

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications

  7. On spinfoam models in large spin regime

    International Nuclear Information System (INIS)

    Han, Muxin

    2014-01-01

We study the semiclassical behavior of the Lorentzian Engle–Pereira–Rovelli–Livine (EPRL) spinfoam model, taking into account the sum over spins in the large spin regime. We employ the method of stationary phase analysis with parameters and the so-called almost-analytic machinery in order to find the asymptotic behavior of the contributions from all possible large spin configurations in the spinfoam model. The spins contributing to the sum are written as J_f = λj_f, where λ is a large parameter, resulting in an asymptotic expansion via stationary phase approximation. The analysis shows that, at least for simplicial Lorentzian geometries (as spinfoam critical configurations), they contribute the leading-order approximation of the spinfoam amplitude only when their deficit angles satisfy γΘ̊_f ≤ λ^(−1/2) mod 4πZ. Our analysis results in a curvature expansion of the semiclassical low-energy effective action from the spinfoam model, where the UV modifications of Einstein gravity appear as subleading high-curvature corrections. (paper)

  8. Hydrodynamic model research in Waseda group

    International Nuclear Information System (INIS)

    Muroya, Shin

    2010-01-01

Constructing a 'High Energy Material Science' had been proposed by Namiki as the guiding principle for the scientists of the high energy physics group led by him at Waseda University when the author started to study multiple particle production in the 1980s, toward a semi-phenomenological model for the quark gluon plasma (QGP). Their strategy was based on three stages, building an intermediate model between the fundamental theory of QCD and the phenomenological model. The quantum theoretical Langevin equation was taken up as the semi-phenomenological model at the intermediate stage, and the Landau hydrodynamic model was chosen as the phenomenological model to focus on the 'phase transition' of the QGP. A review is given here of the quantum theoretical Langevin equation formalism developed there, followed by the further progress with the 1+1 dimensional viscous fluid model as well as the hydrodynamic model with cylindrical symmetry. The developments of the baryon fluid model and the Hanbury-Brown Twiss effect are also reviewed. After 1995, younger generation physicists came to the group to develop those models further. Activities by Hirano, Nonaka and Morita beyond the past generation's hydrodynamic model are briefly highlighted. (S. Funahashi)

  9. Recursive renormalization group theory based subgrid modeling

    Science.gov (United States)

    Zhou, YE

    1991-01-01

    Advancing the knowledge and understanding of turbulence theory is addressed. Specific problems to be addressed will include studies of subgrid models to understand the effects of unresolved small scale dynamics on the large scale motion which, if successful, might substantially reduce the number of degrees of freedom that need to be computed in turbulence simulation.

  10. Managing more than the mean: Using quantile regression to identify factors related to large elk groups

    Science.gov (United States)

    Brennan, Angela K.; Cross, Paul C.; Creely, Scott

    2015-01-01

Animal group size distributions are often right-skewed, whereby most groups are small but most individuals occur in larger groups, which may also disproportionately affect ecology and policy. In this case, examining covariates associated with upper quantiles of the group size distribution could facilitate better understanding and management of large animal groups.
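The idea of targeting upper quantiles rather than the mean can be sketched with the pinball (check) loss, whose minimizer is the requested quantile. The data below are synthetic right-skewed "group sizes"; this is an illustration of quantile estimation, not the elk analysis itself, and the grid-search fit stands in for a proper quantile regression solver.

```python
import numpy as np

def pinball_loss(q_hat, y, tau):
    """Average check (pinball) loss of candidate quantile q_hat at level tau."""
    r = y - q_hat
    return np.mean(np.where(r >= 0, tau * r, (tau - 1.0) * r))

rng = np.random.default_rng(1)
# Right-skewed synthetic group sizes, qualitatively like elk group data.
group_sizes = rng.lognormal(mean=2.0, sigma=0.8, size=500)

tau = 0.9
candidates = np.linspace(group_sizes.min(), group_sizes.max(), 2000)
losses = [pinball_loss(c, group_sizes, tau) for c in candidates]
q90 = candidates[int(np.argmin(losses))]   # estimate of the 0.9 quantile
```

Replacing the constant `q_hat` with a linear predictor of covariates and minimizing the same loss gives quantile regression, which lets the covariates affect large groups differently from typical ones.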

  11. Group Buying Schemes : A Sustainable Business Model?

    OpenAIRE

    Köpp, Sebastian; Mukhachou, Aliaksei; Schwaninger, Markus

    2013-01-01

The authors examine whether group buying schemes, such as those offered by the companies Groupon and Dein Deal, are a sustainable business model. Using a System Dynamics model in a case study of Groupon, they find that the business model must be changed if the company is to remain viable in the long term.

  12. A Large Group Decision Making Approach Based on TOPSIS Framework with Unknown Weights Information

    Directory of Open Access Journals (Sweden)

    Li Yupeng

    2017-01-01

Large group decision making considering multiple attributes is imperative in many decision areas. The weights of the decision makers (DMs) are difficult to obtain because of the large number of DMs. To cope with this issue, an integrated multiple-attribute large group decision making framework is proposed in this article. The fuzziness and hesitation of the linguistic decision variables are described by interval-valued intuitionistic fuzzy sets. The weights of the DMs are optimized by constructing a non-linear programming model, in which the original decision matrices are aggregated using the interval-valued intuitionistic fuzzy weighted average operator. By solving the non-linear programming model with MATLAB®, the weights of the DMs and the fuzzy comprehensive decision matrix are determined. Then the weights of the criteria are calculated based on information entropy theory. At last, the TOPSIS framework is employed to establish the decision process. The divergence between interval-valued intuitionistic fuzzy numbers is calculated by interval-valued intuitionistic fuzzy cross entropy. A real-world case study is constructed to demonstrate the feasibility and effectiveness of the proposed methodology.
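The entropy-weighting and TOPSIS steps of this pipeline can be sketched for a crisp (non-fuzzy) decision matrix. This simplification drops the interval-valued intuitionistic fuzzy sets and the DM-weight optimization of the article; the matrix below is invented, and all criteria are assumed to be benefit-type.

```python
import numpy as np

def entropy_weights(M):
    """Criteria weights from Shannon entropy of an (alternatives x criteria) matrix."""
    P = M / M.sum(axis=0)                          # column-wise proportions
    E = -(P * np.log(P)).sum(axis=0) / np.log(M.shape[0])
    d = 1.0 - E                                    # divergence: low entropy -> high weight
    return d / d.sum()

def topsis(M, w):
    """Closeness of each alternative to the ideal solution (benefit criteria only)."""
    N = M / np.linalg.norm(M, axis=0)              # vector-normalised decision matrix
    V = N * w                                      # weighted normalised matrix
    ideal, anti = V.max(axis=0), V.min(axis=0)     # positive / negative ideal solutions
    d_plus = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti, axis=1)
    return d_minus / (d_plus + d_minus)            # relative closeness in [0, 1]

# Hypothetical scores: 4 alternatives rated on 3 benefit criteria.
M = np.array([[7.0, 9.0, 9.0],
              [8.0, 7.0, 8.0],
              [9.0, 6.0, 8.0],
              [6.0, 7.0, 8.0]])
w = entropy_weights(M)
closeness = topsis(M, w)
ranking = np.argsort(-closeness)   # best alternative first
```

In the article the same two steps operate on the aggregated fuzzy comprehensive decision matrix, with cross entropy replacing the Euclidean distances used here.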

  13. Interacting star clusters in the Large Magellanic Cloud. Overmerging problem solved by cluster group formation

    Science.gov (United States)

    Leon, Stéphane; Bergond, Gilles; Vallenari, Antonella

    1999-04-01

We present the tidal tail distributions of a sample of candidate binary clusters located in the bar of the Large Magellanic Cloud (LMC). One isolated cluster, SL 268, is presented in order to study the effect of the LMC tidal field. All the candidate binary clusters show tidal tails, confirming that the pairs are formed by physically linked objects. The stellar mass in the tails covers a large range, from 1.8×10³ to 3×10⁴ M☉. We derive a total mass estimate for SL 268 and SL 356. At large radii, the projected density profiles of SL 268 and SL 356 fall off as r^(−γ), with γ = 2.27 and γ = 3.44, respectively. Out of 4 pairs or multiple systems, 2 are older than the theoretical survival time of binary clusters (from a few 10⁶ years to 10⁸ years). One pair shows an age difference between its components too large to be consistent with classical theoretical models of binary cluster formation (Fujimoto & Kumai 1997). We refer to this as the "overmerging" problem. A different scenario is proposed: formation proceeds in large molecular complexes giving birth to groups of clusters over a few 10⁷ years. In these groups the expected cluster encounter rate is larger, and tidal capture has a higher probability. Cluster pairs are not born together through the splitting of the parent cloud, but are formed later by tidal capture. For 3 pairs, we tentatively identify the star cluster group (SCG) memberships. SCG formation through the recent cluster starburst triggered by the LMC-SMC encounter, in contrast with the quiescent open cluster formation in the Milky Way, may explain the paucity of binary clusters observed in our Galaxy. Based on observations collected at the European Southern Observatory, La Silla, Chile.

  14. Affine Poisson Groups and WZW Model

    Directory of Open Access Journals (Sweden)

    Ctirad Klimcík

    2008-01-01

We give a detailed description of a dynamical system which enjoys a Poisson-Lie symmetry with two non-isomorphic dual groups. The system is obtained by taking the q → ∞ limit of the q-deformed WZW model, and the understanding of its symmetry structure results in uncovering an interesting duality of its exchange relations.

  15. Diagrammatic group theory in quark models

    International Nuclear Information System (INIS)

    Canning, G.P.

    1977-05-01

A simple and systematic diagrammatic method is presented for calculating the numerical factors arising from group theory in quark models: dimensions, Casimir invariants, vector coupling coefficients and especially recoupling coefficients. Some coefficients for the coupling of 3-quark objects are listed for SU(n) and SU(2n). (orig.) [de]

  16. Group music performance causes elevated pain thresholds and social bonding in small and large groups of singers

    Science.gov (United States)

    Weinstein, Daniel; Launay, Jacques; Pearce, Eiluned; Dunbar, Robin I. M.; Stewart, Lauren

    2016-01-01

    Over our evolutionary history, humans have faced the problem of how to create and maintain social bonds in progressively larger groups compared to those of our primate ancestors. Evidence from historical and anthropological records suggests that group music-making might act as a mechanism by which this large-scale social bonding could occur. While previous research has shown effects of music making on social bonds in small group contexts, the question of whether this effect ‘scales up’ to larger groups is particularly important when considering the potential role of music for large-scale social bonding. The current study recruited individuals from a community choir that met in both small (n = 20 – 80) and large (a ‘megachoir’ combining individuals from the smaller subchoirs n = 232) group contexts. Participants gave self-report measures (via a survey) of social bonding and had pain threshold measurements taken (as a proxy for endorphin release) before and after 90 minutes of singing. Results showed that feelings of inclusion, connectivity, positive affect, and measures of endorphin release all increased across singing rehearsals and that the influence of group singing was comparable for pain thresholds in the large versus small group context. Levels of social closeness were found to be greater at pre- and post-levels for the small choir condition. However, the large choir condition experienced a greater change in social closeness as compared to the small condition. The finding that singing together fosters social closeness – even in large contexts where individuals are not known to each other – is consistent with evolutionary accounts that emphasize the role of music in social bonding, particularly in the context of creating larger cohesive groups than other primates are able to manage. PMID:27158219

  17. A Life-Cycle Model of Human Social Groups Produces a U-Shaped Distribution in Group Size.

    Directory of Open Access Journals (Sweden)

    Gul Deniz Salali

One of the central puzzles in the study of sociocultural evolution is how and why transitions from small-scale human groups to large-scale, hierarchically more complex ones occurred. Here we develop a spatially explicit agent-based model as a first step towards understanding the ecological dynamics of small and large-scale human groups. By analogy with the interactions between single-celled and multicellular organisms, we build a theory of group lifecycles as an emergent property of single cell demographic and expansion behaviours. We find that once the transition from small-scale to large-scale groups occurs, a few large-scale groups continue expanding while small-scale groups gradually become scarcer, and large-scale groups become larger in size and fewer in number over time. Demographic and expansion behaviours of groups are largely influenced by the distribution and availability of resources. Our results conform to a pattern of human political change in which religions and nation states come to be represented by a few large units and many smaller ones. Future enhancements of the model should include decision-making rules and probabilities of fragmentation for large-scale societies. We suggest that the synthesis of population ecology and social evolution will generate increasingly plausible models of human group dynamics.

  18. Standard model group: Survival of the fittest

    Science.gov (United States)

    Nielsen, H. B.; Brene, N.

    1983-09-01

    The essential content of this paper is related to random dynamics. We speculate that the world seen through a sub-Planck-scale microscope has a lattice structure and that the dynamics on this lattice is almost completely random, except for the requirement that the random (plaquette) action is invariant under some "world (gauge) group". We see that the randomness may lead to spontaneous symmetry breakdown in the vacuum (spontaneous collapse) without explicit appeal to any scalar field associated with the usual Higgs mechanism. We further argue that the subgroup which survives as the end product of a possible chain of collapses is likely to have certain properties; the most important is that it has a topologically connected center. The standard group, i.e. the group of the gauge theory which combines the Salam-Weinberg model with QCD, has this property.

  19. Standard model group: survival of the fittest

    Energy Technology Data Exchange (ETDEWEB)

    Nielsen, H.B. (Niels Bohr Inst., Copenhagen (Denmark); Nordisk Inst. for Teoretisk Atomfysik, Copenhagen (Denmark)); Brene, N. (Niels Bohr Inst., Copenhagen (Denmark))

    1983-09-19

    The essential content of this paper is related to random dynamics. We speculate that the world seen through a sub-Planck-scale microscope has a lattice structure and that the dynamics on this lattice is almost completely random, except for the requirement that the random (plaquette) action is invariant under some ''world (gauge) group''. We see that the randomness may lead to spontaneous symmetry breakdown in the vacuum (spontaneous collapse) without explicit appeal to any scalar field associated with the usual Higgs mechanism. We further argue that the subgroup which survives as the end product of a possible chain of collapse is likely to have certain properties; the most important is that it has a topologically connected center. The standard group, i.e. the group of the gauge theory which combines the Salam-Weinberg model with QCD, has this property.

  20. Standard model group: survival of the fittest

    International Nuclear Information System (INIS)

    Nielsen, H.B.; Brene, N.

    1983-01-01

The essential content of this paper is related to random dynamics. We speculate that the world seen through a sub-Planck-scale microscope has a lattice structure and that the dynamics on this lattice is almost completely random, except for the requirement that the random (plaquette) action is invariant under some ''world (gauge) group''. We see that the randomness may lead to spontaneous symmetry breakdown in the vacuum (spontaneous collapse) without explicit appeal to any scalar field associated with the usual Higgs mechanism. We further argue that the subgroup which survives as the end product of a possible chain of collapses is likely to have certain properties; the most important is that it has a topologically connected center. The standard group, i.e. the group of the gauge theory which combines the Salam-Weinberg model with QCD, has this property. (orig.)

  1. Standard model group survival of the fittest

    International Nuclear Information System (INIS)

    Nielsen, H.B.; Brene, N.

    1983-02-01

    The essential content of this note is related to random dynamics. The authors speculate that the world seen through a sub Planck scale microscope has a lattice structure and that the dynamics on this lattice is almost completely random, except for the requirement that the random (plaquette) action is invariant under some ''world (gauge) group''. It is seen that the randomness may lead to spontaneous symmetry breakdown in the vacuum (spontaneous collapse) without explicit appeal to any scalar field associated with the usual Higgs mechanism. It is further argued that the subgroup which survives as the end product of a possible chain of collapses is likely to have certain properties; the most important is that it has a topologically connected center. The standard group, i.e. the group of the gauge theory which combines the Salam-Weinberg model with QCD, has this property. (Auth.)

  2. Medical Students Perceive Better Group Learning Processes when Large Classes Are Made to Seem Small

    Science.gov (United States)

    Hommes, Juliette; Arah, Onyebuchi A.; de Grave, Willem; Schuwirth, Lambert W. T.; Scherpbier, Albert J. J. A.; Bos, Gerard M. J.

    2014-01-01

Objective Medical schools struggle with large classes, which might interfere with the effectiveness of learning within small groups due to students being unfamiliar to fellow students. The aim of this study was to assess the effects of making a large class seem small on the students' collaborative learning processes. Design A randomised controlled intervention study was undertaken to make a large class seem small, without the need to reduce the number of students enrolling in the medical programme. The class was divided into subsets: two small subsets (n = 50) as the intervention groups; a control group (n = 102) was mixed with the remaining students (the non-randomised group n∼100) to create one large subset. Setting The undergraduate curriculum of the Maastricht Medical School, applying the Problem-Based Learning principles. In this learning context, students learn mainly in tutorial groups, composed randomly from a large class every 6–10 weeks. Intervention The formal group learning activities were organised within the subsets. Students from the intervention groups met frequently within the formal groups, in contrast to the students from the large subset who hardly enrolled with the same students in formal activities. Main Outcome Measures Three outcome measures assessed students' group learning processes over time: learning within formally organised small groups, learning with other students in the informal context and perceptions of the intervention. Results Formal group learning processes were perceived more positive in the intervention groups from the second study year on, with a mean increase of β = 0.48. Informal group learning activities occurred almost exclusively within the subsets as defined by the intervention from the first week involved in the medical curriculum (E-I indexes>−0.69). Interviews tapped mainly positive effects and negligible negative side effects of the intervention. Conclusion Better group learning processes can be achieved in large medical schools by making large classes seem small.

  3. Medical students perceive better group learning processes when large classes are made to seem small.

    Science.gov (United States)

    Hommes, Juliette; Arah, Onyebuchi A; de Grave, Willem; Schuwirth, Lambert W T; Scherpbier, Albert J J A; Bos, Gerard M J

    2014-01-01

    Medical schools struggle with large classes, which might interfere with the effectiveness of learning within small groups due to students being unfamiliar to fellow students. The aim of this study was to assess the effects of making a large class seem small on the students' collaborative learning processes. A randomised controlled intervention study was undertaken to make a large class seem small, without the need to reduce the number of students enrolling in the medical programme. The class was divided into subsets: two small subsets (n=50) as the intervention groups; a control group (n=102) was mixed with the remaining students (the non-randomised group n∼100) to create one large subset. The undergraduate curriculum of the Maastricht Medical School, applying the Problem-Based Learning principles. In this learning context, students learn mainly in tutorial groups, composed randomly from a large class every 6-10 weeks. The formal group learning activities were organised within the subsets. Students from the intervention groups met frequently within the formal groups, in contrast to the students from the large subset who hardly enrolled with the same students in formal activities. Three outcome measures assessed students' group learning processes over time: learning within formally organised small groups, learning with other students in the informal context and perceptions of the intervention. Formal group learning processes were perceived more positive in the intervention groups from the second study year on, with a mean increase of β=0.48. Informal group learning activities occurred almost exclusively within the subsets as defined by the intervention from the first week involved in the medical curriculum (E-I indexes>-0.69). Interviews tapped mainly positive effects and negligible negative side effects of the intervention. Better group learning processes can be achieved in large medical schools by making large classes seem small.

  4. A Nationwide Overview of Sight-Singing Requirements of Large-Group Choral Festivals

    Science.gov (United States)

    Norris, Charles E.

    2004-01-01

    The purpose of this study was to examine sight-singing requirements at junior and senior high school large-group ratings-based choral festivals throughout the United States. Responses to the following questions were sought from each state: (1) Are there ratings-based large-group choral festivals? (2) Is sight-singing a requirement? (3) Are there…

  5. Cooperative Coevolution with Formula-Based Variable Grouping for Large-Scale Global Optimization.

    Science.gov (United States)

    Wang, Yuping; Liu, Haiyan; Wei, Fei; Zong, Tingting; Li, Xiaodong

    2017-08-09

For a large-scale global optimization (LSGO) problem, divide-and-conquer is usually considered an effective strategy to decompose the problem into smaller subproblems, each of which can then be solved individually. Among these decomposition methods, variable grouping has been shown to be promising in recent years. Existing variable grouping methods usually assume the problem to be black-box (i.e., assuming that an analytical model of the objective function is unknown), and they attempt to learn an appropriate variable grouping that would allow for a better decomposition of the problem. In such cases, these variable grouping methods do not make direct use of the formula of the objective function. However, it can be argued that many real-world problems are white-box problems, that is, the formulas of their objective functions are often known a priori. These formulas provide rich information which can then be used to design an effective variable grouping method. In this article, a formula-based grouping strategy (FBG) for white-box problems is first proposed. It groups variables directly via the formula of an objective function, which usually consists of a finite number of operations (i.e., the four arithmetic operations "+", "−", "×", "÷" and composite operations of basic elementary functions). In FBG, the operations are classified into two classes: one resulting in non-separable variables, and the other resulting in separable variables. Variables can thus be automatically grouped into a suitable number of non-interacting subcomponents, with variables in each subcomponent being interdependent. FBG can easily be applied to any white-box problem and can be integrated into a cooperative coevolution framework. Based on FBG, a novel cooperative coevolution algorithm with formula-based variable grouping (so-called CCF) is proposed in this article for decomposing a large-scale white-box problem.
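A minimal sketch of the grouping idea: if the white-box objective is a sum of terms and each term lists the variables it touches, then variables sharing a non-separable term interact and can be merged with a union-find. This illustrates the principle only, not the article's FBG algorithm; the term-list representation of the formula is an assumption made for the sketch.

```python
def formula_based_groups(terms, n_vars):
    """Group variables so that variables sharing any term land in one subcomponent.

    terms: iterable of sets, each set holding the variable indices one additive
    term of the objective touches (a non-separable term lists several indices).
    """
    parent = list(range(n_vars))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving keeps trees shallow
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for term in terms:
        term = sorted(term)
        for v in term[1:]:
            union(term[0], v)               # all variables within a term interact

    groups = {}
    for v in range(n_vars):
        groups.setdefault(find(v), set()).add(v)
    return sorted(groups.values(), key=min)

# f(x) = x0*x1 + sin(x1*x2) + x3**2 + x4 : x0, x1, x2 interact; x3 and x4 are separable.
groups = formula_based_groups([{0, 1}, {1, 2}, {3}, {4}], n_vars=5)
# groups == [{0, 1, 2}, {3}, {4}]
```

Each resulting subcomponent can then be optimized by its own subpopulation in a cooperative coevolution loop, which is the role FBG plays inside CCF.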

  6. Group Chaos Theory: A Metaphor and Model for Group Work

    Science.gov (United States)

    Rivera, Edil Torres; Wilbur, Michael; Frank-Saraceni, James; Roberts-Wilbur, Janice; Phan, Loan T.; Garrett, Michael T.

    2005-01-01

    Group phenomena and interactions are described through the use of the chaos theory constructs and characteristics of sensitive dependence on initial conditions, phase space, turbulence, emergence, self-organization, dissipation, iteration, bifurcation, and attractors and fractals. These constructs and theoretical tenets are presented as applicable…

  7. A Large Group Decision Making Approach Based on TOPSIS Framework with Unknown Weights Information

    OpenAIRE

    Li Yupeng; Lian Xiaozhen; Lu Cheng; Wang Zhaotong

    2017-01-01

    Large group decision making considering multiple attributes is imperative in many decision areas. The weights of the decision makers (DMs) are difficult to obtain when the number of DMs is large. To cope with this issue, an integrated multiple-attribute large group decision making framework is proposed in this article. The fuzziness and hesitation of the linguistic decision variables are described by interval-valued intuitionistic fuzzy sets. The weights of the DMs are optimized by constructing a...
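    Although the proposed framework works with interval-valued intuitionistic fuzzy sets, its TOPSIS backbone can be sketched in the classical crisp form (a simplified illustration, not the paper's algorithm; the data are made up):

```python
import numpy as np

def topsis(decision, weights, benefit):
    """Classical (crisp) TOPSIS. decision: alternatives x criteria;
    weights sum to 1; benefit[j] is True if criterion j is maximized."""
    X = np.asarray(decision, dtype=float)
    R = X / np.linalg.norm(X, axis=0)          # vector-normalize columns
    V = R * np.asarray(weights)                # weighted normalized matrix
    benefit = np.asarray(benefit)
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)  # distance to ideal
    d_neg = np.linalg.norm(V - anti, axis=1)   # distance to anti-ideal
    return d_neg / (d_pos + d_neg)             # closeness coefficients

scores = topsis([[7, 9, 9], [8, 7, 8], [9, 6, 8]],
                weights=[0.5, 0.3, 0.2],
                benefit=[True, True, True])
print(scores.argmax())  # index of the best-ranked alternative -> 2
```

    A large-group variant would first aggregate many DMs' matrices (with the optimized DM weights) into the single decision matrix used here.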

  8. Black holes from large N singlet models

    Science.gov (United States)

    Amado, Irene; Sundborg, Bo; Thorlacius, Larus; Wintergerst, Nico

    2018-03-01

    The emergent nature of spacetime geometry and black holes can be directly probed in simple holographic duals of higher spin gravity and tensionless string theory. To this end, we study time dependent thermal correlation functions of gauge invariant observables in suitably chosen free large N gauge theories. At low temperature and on short time scales the correlation functions encode propagation through an approximate AdS spacetime while interesting departures emerge at high temperature and on longer time scales. This includes the existence of evanescent modes and the exponential decay of time dependent boundary correlations, both of which are well known indicators of bulk black holes in AdS/CFT. In addition, a new time scale emerges after which the correlation functions return to a bulk thermal AdS form up to an overall temperature dependent normalization. A corresponding length scale was seen in equal time correlation functions in the same models in our earlier work.

  9. Computational social dynamic modeling of group recruitment.

    Energy Technology Data Exchange (ETDEWEB)

    Berry, Nina M.; Lee, Marinna; Pickett, Marc; Turnley, Jessica Glicken (Sandia National Laboratories, Albuquerque, NM); Smrcka, Julianne D. (Sandia National Laboratories, Albuquerque, NM); Ko, Teresa H.; Moy, Timothy David (Sandia National Laboratories, Albuquerque, NM); Wu, Benjamin C.

    2004-01-01

    The Seldon software toolkit combines concepts from agent-based modeling and social science to create a computational social dynamic model of group recruitment. The underlying recruitment model is based on a unique three-level hybrid agent-based architecture that contains simple agents (level one), abstract agents (level two), and cognitive agents (level three). The uniqueness of this architecture begins with the abstract agents, which permit the model to include social concepts (gang) or institutional concepts (school) in a typical software simulation environment. The future addition of cognitive agents to the recruitment model will provide a unique entity that does not exist in any agent-based modeling toolkit to date. We use social networks to provide an integrated mesh within and between the different levels. This Java-based toolkit is used to analyze different social concepts based on initialization input from the user. The input alters a set of parameters used to influence the values associated with the simple agents, abstract agents, and the interactions (simple agent-simple agent or simple agent-abstract agent) between these entities. The results of the phase-1 Seldon toolkit provide insight into how certain social concepts apply to scenario development for inner-city gang recruitment.

  10. Large Sets in Boolean and Non-Boolean Groups and Topology

    Directory of Open Access Journals (Sweden)

    Ol’ga V. Sipacheva

    2017-10-01

    Full Text Available Various notions of large sets in groups, including the classical notions of thick, syndetic, and piecewise syndetic sets and the new notion of vast sets in groups, are studied with emphasis on the interplay between such sets in Boolean groups. Natural topologies closely related to vast sets are considered; as a byproduct, interesting relations between vast sets and ultrafilters are revealed.

  11. Large scale injection test (LASGIT) modelling

    International Nuclear Information System (INIS)

    Arnedo, D.; Olivella, S.; Alonso, E.E.

    2010-01-01

    Document available in extended abstract form only. With the objective of understanding gas flow processes through clay barriers in schemes of radioactive waste disposal, the Lasgit in situ experiment was planned and is currently in progress. The modelling of the experiment will permit a better understanding of the responses, confirm hypotheses about mechanisms and processes, and provide lessons for the design of future experiments. The experiment and modelling activities are included in the project FORGE (FP7). The in situ large scale injection test Lasgit is currently being performed at the Aespoe Hard Rock Laboratory by SKB and BGS. A schematic layout of the test is shown. The deposition hole follows the KBS3 scheme. A copper canister is installed along the axis of the deposition hole, surrounded by blocks of highly compacted MX-80 bentonite. A concrete plug is placed at the top of the buffer. A metallic lid anchored to the surrounding host rock is included in order to prevent vertical movements of the whole system during gas injection stages (high gas injection pressures are expected to be reached). Hydration of the buffer material is achieved by injecting water through filter mats, two placed at the rock walls and two at the interfaces between bentonite blocks. Water is also injected through the 12 canister filters. Gas injection stages are performed by injecting gas into some of the canister injection filters. Since the water pressure and the stresses (swelling pressure development) will be high during gas injection, it is necessary to inject at high gas pressures. This implies mechanical couplings: as gas penetrates once the gas entry pressure is reached, it may produce deformations which in turn lead to increases in permeability. A 3D hydro-mechanical numerical model of the test using CODE-BRIGHT is presented. The domain considered for the modelling is shown.
The materials considered in the simulation are the MX-80 bentonite blocks (cylinders and rings), the concrete plug

  12. Fuzzy classification of phantom parent groups in an animal model

    Directory of Open Access Journals (Sweden)

    Fikse Freddy

    2009-09-01

    Full Text Available Abstract Background Genetic evaluation models often include genetic groups to account for the unequal genetic level of animals with unknown parentage. The definition of phantom parent groups usually includes a time component (e.g., years). Combining several time periods to ensure sufficiently large groups may create problems, since all phantom parents in a group are considered contemporaries. Methods To avoid the downside of such distinct classification, a fuzzy logic approach is suggested. A phantom parent can be assigned to several genetic groups, with proportions between zero and one that sum to one. Rules were presented for assigning coefficients to the inverse of the relationship matrix for fuzzy-classified genetic groups. This approach was illustrated with simulated data from ten generations of mass selection. Observations and pedigree records were randomly deleted. Phantom parent groups were defined on the basis of gender and generation number. In one scenario, uncertainty about the generation of birth was simulated for some animals with unknown parents. In the distinct classification, one of the two possible generations of birth was randomly chosen to assign phantom parents to genetic groups for animals with simulated uncertainty, whereas in the fuzzy classification the phantom parents were assigned to both possible genetic groups. Results The empirical prediction error variance (PEV) was somewhat lower for fuzzy-classified genetic groups. The ranking of animals with unknown parents was more correct and less variable across replicates in comparison with distinct genetic groups. In another scenario, each phantom parent was assigned to three groups, one pertaining to its gender, and two pertaining to the first and last generation, with proportions depending on the (true) generation of birth. Due to the lower number of groups, the empirical PEV of breeding values was smaller when genetic groups were fuzzy-classified. Conclusion Fuzzy
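    The fractional assignment in the second scenario can be illustrated with a toy membership rule that splits a phantom parent's weight between the first- and last-generation groups in proportion to its generation of birth (a hypothetical helper; the paper's actual rules operate on the inverse relationship matrix and also include a gender group):

```python
# Hypothetical sketch of fuzzy phantom-parent group membership.
# Membership proportions lie between zero and one and must sum to 1.

def fuzzy_membership(birth_gen, first_gen, last_gen):
    """Split membership between the first- and last-generation groups
    in proportion to the (true) generation of birth."""
    span = last_gen - first_gen
    w_last = (birth_gen - first_gen) / span
    return {"first": 1.0 - w_last, "last": w_last}

q = fuzzy_membership(birth_gen=3, first_gen=1, last_gen=9)
assert abs(sum(q.values()) - 1.0) < 1e-12  # proportions sum to one
print(q)  # {'first': 0.75, 'last': 0.25}
```

    A distinct classification would instead force the full weight of 1 into a single randomly chosen group.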

  13. Large and small sets with respect to homomorphisms and products of groups

    Directory of Open Access Journals (Sweden)

    Riccardo Gusso

    2002-10-01

    Full Text Available We study the behaviour of large, small and medium subsets with respect to homomorphisms and products of groups. We then introduce the definition of a P-small set in abelian groups and investigate the relations between this kind of smallness and the previous one, giving some examples that distinguish them.

  14. Renormalisation group improved leptogenesis in family symmetry models

    International Nuclear Information System (INIS)

    Cooper, Iain K.; King, Stephen F.; Luhn, Christoph

    2012-01-01

    We study renormalisation group (RG) corrections relevant for leptogenesis in the case of family symmetry models such as the Altarelli-Feruglio A4 model of tri-bimaximal lepton mixing or its extension to tri-maximal mixing. Such corrections are particularly relevant since in large classes of family symmetry models, to leading order, the CP violating parameters of leptogenesis would be identically zero at the family symmetry breaking scale, due to the form dominance property. We find that RG corrections violate form dominance and enable such models to yield viable leptogenesis at the scale of right-handed neutrino masses. More generally, the results of this paper show that RG corrections to leptogenesis cannot be ignored for any family symmetry model involving sizeable neutrino and τ Yukawa couplings.

  15. Modeling, Analysis, and Optimization Issues for Large Space Structures

    Science.gov (United States)

    Pinson, L. D. (Compiler); Amos, A. K. (Compiler); Venkayya, V. B. (Compiler)

    1983-01-01

    Topics concerning the modeling, analysis, and optimization of large space structures are discussed including structure-control interaction, structural and structural dynamics modeling, thermal analysis, testing, and design.

  16. Large Scale Management of Physicists Personal Analysis Data Without Employing User and Group Quotas

    International Nuclear Information System (INIS)

    Norman, A.; Diesbug, M.; Gheith, M.; Illingworth, R.; Lyon, A.; Mengel, M.

    2015-01-01

    The ability of modern HEP experiments to acquire and process unprecedented amounts of data and simulation has led to an explosion in the volume of information that individual scientists deal with on a daily basis. This explosion has resulted in a need for individuals to generate and keep large personal analysis data sets which represent the skimmed portions of official data collections pertaining to their specific analysis. While significantly reduced in size compared to the original data, these personal analysis and simulation sets can be many terabytes or tens of TB in size and consist of tens of thousands of files. When this personal data is aggregated across the many physicists in a single analysis group or experiment, it can represent data volumes on par with or exceeding the official production samples, which require special data handling techniques to deal with effectively. In this paper we explore the changes to the Fermilab computing infrastructure and computing models which have been developed to allow experimenters to effectively manage their personal analysis data and other data that falls outside of the typical centrally managed production chains. In particular we describe the models and tools that are being used to provide modern neutrino experiments like NOvA with storage resources that are sufficient to meet their analysis needs, without imposing specific quotas on users or groups of users. We discuss the storage mechanisms and the caching algorithms that are being used, as well as the toolkits that have been developed to allow users to easily operate with terascale+ datasets. (paper)
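    The paper's caching algorithms are not detailed here; as a hedged illustration of the quota-free idea, a least-recently-used (LRU) file cache reclaims space from the coldest files regardless of who owns them (this is our sketch, not the actual Fermilab/dCache policy, and the paths are made up):

```python
from collections import OrderedDict

# Hypothetical LRU file-cache sketch: space is reclaimed from the
# least-recently-accessed files, so no per-user or per-group quota
# is needed. Illustrative only.

class LRUFileCache:
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.files = OrderedDict()        # path -> size, coldest first

    def access(self, path, size):
        if path in self.files:
            self.files.move_to_end(path)  # mark as recently used
            return
        while self.used + size > self.capacity and self.files:
            _, old = self.files.popitem(last=False)  # evict coldest
            self.used -= old
        self.files[path] = size
        self.used += size

cache = LRUFileCache(capacity_bytes=100)
cache.access("/nova/a.root", 60)
cache.access("/nova/b.root", 30)
cache.access("/nova/a.root", 60)          # touch a.root again
cache.access("/nova/c.root", 40)          # evicts b.root, not a.root
print(sorted(cache.files))  # ['/nova/a.root', '/nova/c.root']
```

    The design point is that hot working sets stay resident without anyone having to pre-declare a quota.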

  17. Large Scale Computing for the Modelling of Whole Brain Connectivity

    DEFF Research Database (Denmark)

    Albers, Kristoffer Jon

    organization of the brain in continuously increasing resolution. From these images, networks of structural and functional connectivity can be constructed. Bayesian stochastic block modelling provides a prominent data-driven approach for uncovering the latent organization, by clustering the networks into groups of neurons. Relying on Markov Chain Monte Carlo (MCMC) simulations as the workhorse in Bayesian inference however poses significant computational challenges, especially when modelling networks at the scale and complexity supported by high-resolution whole-brain MRI. In this thesis, we present how to overcome these computational limitations and apply Bayesian stochastic block models for un-supervised data-driven clustering of whole-brain connectivity in full image resolution. We implement high-performance software that allows us to efficiently apply stochastic blockmodelling with MCMC sampling on large complex networks...
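    The scoring step at the heart of such blockmodelling can be sketched with a collapsed Beta-Bernoulli stochastic block model likelihood, which an MCMC sampler would evaluate when proposing label moves (a toy stand-in for the thesis's method; names and priors are our assumptions):

```python
import math
from itertools import combinations

# Toy collapsed Beta-Bernoulli stochastic block model (SBM) score.
# An MCMC/Gibbs sampler would compare such scores across labelings.

def sbm_loglik(adj, z, a=1.0, b=1.0):
    """Marginal log-likelihood of labeling z under a Bernoulli SBM
    with Beta(a, b) priors on block-pair edge probabilities."""
    counts = {}  # unordered block pair -> (edges, possible pairs)
    n = len(adj)
    for i, j in combinations(range(n), 2):
        key = tuple(sorted((z[i], z[j])))
        e, p = counts.get(key, (0, 0))
        counts[key] = (e + adj[i][j], p + 1)
    ll = 0.0
    for e, p in counts.values():
        # Beta-Binomial marginal: B(a + e, b + p - e) / B(a, b)
        ll += (math.lgamma(a + e) + math.lgamma(b + p - e)
               - math.lgamma(a + b + p)
               + math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b))
    return ll

# Two disconnected 3-cliques: the planted split scores higher than
# a labeling that mixes the two communities.
adj = [[0, 1, 1, 0, 0, 0],
       [1, 0, 1, 0, 0, 0],
       [1, 1, 0, 0, 0, 0],
       [0, 0, 0, 0, 1, 1],
       [0, 0, 0, 1, 0, 1],
       [0, 0, 0, 1, 1, 0]]
planted = [0, 0, 0, 1, 1, 1]
mixed = [0, 1, 0, 1, 0, 1]
assert sbm_loglik(adj, planted) > sbm_loglik(adj, mixed)
```

    At whole-brain scale this inner loop is exactly what demands the high-performance implementation the thesis describes.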

  18. Small scale models equal large scale savings

    International Nuclear Information System (INIS)

    Lee, R.; Segroves, R.

    1994-01-01

    A physical scale model of a reactor is a tool which can be used to reduce the time spent by workers in the containment during an outage and thus to reduce the radiation dose and save money. The model can be used for worker orientation, and for planning maintenance, modifications, manpower deployment and outage activities. Examples of the use of models are presented. These were for the La Salle 2 and Dresden 1 and 2 BWRs. In each case cost-effectiveness and exposure reduction due to the use of a scale model is demonstrated. (UK)

  19. Random Coefficient Logit Model for Large Datasets

    NARCIS (Netherlands)

    C. Hernández-Mireles (Carlos); D. Fok (Dennis)

    2010-01-01

    We present an approach for analyzing market shares and product price elasticities based on large datasets containing aggregate sales data for many products, several markets and for relatively long time periods. We consider the recently proposed Bayesian approach of Jiang et al [Jiang,

  20. Concepts for Future Large Fire Modeling

    Science.gov (United States)

    A. P. Dimitrakopoulos; R. E. Martin

    1987-01-01

    A small number of fires escape initial attack suppression efforts and become large, but their effects are significant and disproportionate. In 1983, of 200,000 wildland fires in the United States, only 4,000 exceeded 100 acres. However, these escaped fires accounted for roughly 95 percent of wildfire-related costs and damages (Pyne, 1984). Thus, future research efforts...

  1. An Audit of the Effectiveness of Large Group Neurology Tutorials for Irish Undergraduate Medical Students

    LENUS (Irish Health Repository)

    Kearney, H

    2016-07-01

    The aim of this audit was to determine the effectiveness of large group tutorials for teaching neurology to medical students. Students were asked to complete a questionnaire rating their confidence on a ten-point Likert scale in a number of domains from the undergraduate education guidelines of the Association of British Neurologists (ABN). We then arranged a series of interactive large group tutorials for the class and repeated the questionnaire one month after teaching. In the three core domains of neurological history taking, examination and differential diagnosis, none of the students rated their confidence as nine or ten out of ten prior to teaching. This increased to 6% for history taking, 12% for examination and 25% for differential diagnosis after eight weeks of tutorials. This audit demonstrates that in our centre, large group tutorials were an effective means of teaching, as measured by the ABN guidelines in undergraduate neurology.

  2. From evolution to revolution: understanding mutability in large and disruptive human groups

    Science.gov (United States)

    Whitaker, Roger M.; Felmlee, Diane; Verma, Dinesh C.; Preece, Alun; Williams, Grace-Rose

    2017-05-01

    Over the last 70 years there has been a major shift in the threats to global peace. While the 1950s and 1960s were characterised by the cold war and the arms race, many security threats are now characterised by group behaviours that are disruptive, subversive or extreme. In many cases such groups are loosely and chaotically organised, but their ideals are sociologically and psychologically embedded in group members to the extent that the group represents a major threat. As a result, insights into how human groups form, emerge and change are critical, but surprisingly limited insights into the mutability of human groups exist. In this paper we argue that important clues to understanding the mutability of groups come from examining the evolutionary origins of human behaviour. In particular, groups have been instrumental in human evolution, used as a basis to derive survival advantage, leaving all humans with a basic disposition to navigate the world through social networking and managing their presence in a group. From this analysis we present five critical features of social groups that govern mutability, relating to social norms, individual standing, status rivalry, ingroup bias and cooperation. We argue that understanding how these five dimensions interact and evolve can provide new insights into group mutation and evolution. Importantly, these features lend themselves to digital modelling. Therefore computational simulation can support generative exploration of groups and the discovery of latent factors, relevant to both internal group and external group modelling. Finally we consider the role of online social media in relation to understanding the mutability of groups. This can play an active role in supporting collective behaviour, and analysis of social media in the context of the five dimensions of group mutability provides a fresh basis to interpret the forces affecting groups.

  3. Fluid Methods for Modeling Large, Heterogeneous Networks

    National Research Council Canada - National Science Library

    Towsley, Don; Gong, Weibo; Hollot, Kris; Liu, Yong; Misra, Vishal

    2005-01-01

    .... The resulting fluid models were used to develop novel active queue management mechanisms resulting in more stable TCP performance and novel rate controllers for the purpose of providing minimum rate...

  4. Revisiting the merits of a mandatory large group classroom learning format: an MD-MBA perspective.

    Science.gov (United States)

    Li, Shawn X; Pinto-Powell, Roshini

    2017-01-01

    The role of classroom learning in medical education is rapidly changing. To promote active learning and reduce student stress, medical schools have adopted policies such as pass/fail curricula and recorded lectures. These policies, along with the rising importance of the USMLE (United States Medical Licensing Examination) exams, have made asynchronous learning popular to the detriment of classroom learning. In contrast to this model, modern-day business schools employ mandatory large group classes with assigned seating and cold-calling. Despite similar student demographics, medical and business schools have adopted vastly different approaches to the classroom. When examining the classroom dynamic at business schools with mandatory classes, it is evident that there is an abundance of engaging discourse and peer learning, objectives that medical schools share. Mandatory classes leverage the network effect just like social media forums such as Facebook and Twitter. That is, the value of a classroom discussion increases when more students are present to participate. At a time when students are savvy consumers of knowledge, the classroom is competing against an explosion of study aids dedicated to USMLE preparation. Certainly, the purpose of medical school is not solely the efficient transfer of knowledge, but to train authentic, competent, and complete physicians. To accomplish this, we must promote the inimitable and deeply personal interactions amongst faculty and students. When viewed through this lens, mandatory classes might just be a way for medical schools to leverage their competitive advantage in educating the complete physician.

  5. Establishing Peer Mentor-Led Writing Groups in Large First-Year Courses

    Science.gov (United States)

    Marcoux, Sarah; Marken, Liv; Yu, Stan

    2012-01-01

    This paper describes the results of a pilot project designed to improve students' academic writing in a large (200-student) first-year Agriculture class at the University of Saskatchewan. In collaboration with the course's professor, the Writing Centre coordinator and a summer student designed curriculum for four two-hour Writing Group sessions…

  6. Laboratory modeling of aspects of large fires

    Science.gov (United States)

    Carrier, G. F.; Fendell, F. E.; Fleeter, R. D.; Gat, N.; Cohen, L. M.

    1984-04-01

    The design, construction, and use of a laboratory-scale combustion tunnel for simulating aspects of large-scale free-burning fires are described. The facility consists of an enclosed test section of rectangular cross section (1.12 m wide x 1.27 m high) and about 5.6 m in length, fitted with large sidewall windows for viewing. A long upwind section permits smoothing (by screens and honeycombs) of a forced-convective flow, generated by a fan and adjustable in wind speed (up to a maximum speed of about 20 m/s prior to smoothing). Special provision is made for unconstrained ascent of a strongly buoyant plume, the duct over the test section being about 7 m in height. Also, a translatable test-section ceiling can be used to prevent jet-type spreading of the impressed flow into the duct; that is, the wind arriving at a site (say) half-way along the test section can be made (by ceiling movement) approximately the same as that at the leading edge of the test section with a fully open duct (fully retracted ceiling). Of particular interest here are the rate and structure of wind-aided flame spread streamwise along a uniform matrix of vertically oriented small fuel elements (such as toothpicks or coffee-stirrers), implanted in a clay stratum on the test-section floor; this experiment is motivated by flame spread across strewn debris, such as may be anticipated in an urban environment after severe blast damage.

  7. Modelling group decision simulation through argumentation

    OpenAIRE

    Marreiros, Goreti; Novais, Paulo; Machado, José; Ramos, Carlos; Neves, José

    2007-01-01

    Group decision making plays an important role in today’s organisations. The impact of decision making is so high and complex, that rarely the decision making process is made individually. In Group Decision Argumentation, there is a set of participants, with different profiles and expertise levels, that exchange ideas or engage in a process of argumentation and counter-argumentation, negotiate, cooperate, collaborate or even discuss techniques and/or methodologies for problem solving. In this ...

  8. TIME DISTRIBUTIONS OF LARGE AND SMALL SUNSPOT GROUPS OVER FOUR SOLAR CYCLES

    International Nuclear Information System (INIS)

    Kilcik, A.; Yurchyshyn, V. B.; Abramenko, V.; Goode, P. R.; Cao, W.; Ozguc, A.; Rozelot, J. P.

    2011-01-01

    Here we analyze solar activity by focusing on time variations of the number of sunspot groups (SGs) as a function of their modified Zurich class. We analyzed data for solar cycles 20-23 by using Rome (cycles 20 and 21) and Learmonth Solar Observatory (cycles 22 and 23) SG numbers. All SGs recorded during these time intervals were separated into two groups. The first group includes small SGs (A, B, C, H, and J classes by Zurich classification), and the second group consists of large SGs (D, E, F, and G classes). We then calculated small and large SG numbers from their daily mean numbers as observed on the solar disk during a given month. We report that the time variations of small and large SG numbers are asymmetric except for solar cycle 22. In general, large SG numbers appear to reach their maximum in the middle of the solar cycle (phases 0.45-0.5), while the international sunspot numbers and the small SG numbers generally peak much earlier (solar cycle phases 0.29-0.35). Moreover, the 10.7 cm solar radio flux, the facular area, and the maximum coronal mass ejection speed show better agreement with the large SG numbers than they do with the small SG numbers. Our results suggest that the large SG numbers are more likely to shed light on solar activity and its geophysical implications. Our findings may also influence our understanding of long-term variations of the total solar irradiance, which is thought to be an important factor in the Sun-Earth climate relationship.
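    The small/large split described above can be sketched directly from the modified Zurich class letters (an illustrative helper only; the class sets follow the abstract):

```python
# Classify sunspot-group (SG) records by modified Zurich class:
# A, B, C, H, J -> small groups; D, E, F, G -> large groups.

SMALL = set("ABCHJ")
LARGE = set("DEFG")

def count_groups(zurich_classes):
    """Return (small, large) counts for one day's observed SGs."""
    small = sum(1 for c in zurich_classes if c.upper() in SMALL)
    large = sum(1 for c in zurich_classes if c.upper() in LARGE)
    return small, large

print(count_groups(["B", "D", "J", "F", "C", "A"]))  # (4, 2)
```

    Averaging such daily counts over a month gives the monthly small and large SG numbers whose time variations the paper compares.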

  9. Budget model can aid group practice planning.

    Science.gov (United States)

    Bender, A D

    1991-12-01

    A medical practice can enhance its planning by developing a budgetary model to test effects of planning assumptions on its profitability and cash requirements. A model focusing on patient visits, payment mix, patient mix, and fee and payment schedules can help assess effects of proposed decisions. A planning model is not a substitute for planning but should complement a plan that includes mission, goals, values, strategic issues, and different outcomes.
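    A minimal toy of such a budgetary model combines visit volume, payer mix, and payment schedules into expected revenue, so the effect of a planning assumption can be tested by changing one input (all names and figures here are hypothetical):

```python
# Hypothetical toy budget model for a group practice: expected
# monthly revenue from patient visits, payer mix, and payments.

def monthly_revenue(visits, payer_mix, avg_payment):
    """payer_mix: payer -> share of visits (shares sum to 1);
    avg_payment: payer -> expected payment per visit."""
    assert abs(sum(payer_mix.values()) - 1.0) < 1e-9
    return sum(visits * share * avg_payment[payer]
               for payer, share in payer_mix.items())

rev = monthly_revenue(
    visits=800,
    payer_mix={"medicare": 0.4, "private": 0.5, "self_pay": 0.1},
    avg_payment={"medicare": 55.0, "private": 85.0, "self_pay": 40.0})
print(rev)  # 54800.0
```

    Rerunning with a shifted payer mix or fee schedule shows the sensitivity of profitability to each planning assumption.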

  10. Grid computing in large pharmaceutical molecular modeling.

    Science.gov (United States)

    Claus, Brian L; Johnson, Stephen R

    2008-07-01

    Most major pharmaceutical companies have employed grid computing to expand their compute resources with the intention of minimizing additional financial expenditure. Historically, one of the issues restricting widespread utilization of the grid resources in molecular modeling is the limited set of suitable applications amenable to coarse-grained parallelization. Recent advances in grid infrastructure technology coupled with advances in application research and redesign will enable fine-grained parallel problems, such as quantum mechanics and molecular dynamics, which were previously inaccessible to the grid environment. This will enable new science as well as increase resource flexibility to load balance and schedule existing workloads.

  11. Qualitative Analysis of Collaborative Learning Groups in Large Enrollment Introductory Astronomy

    Science.gov (United States)

    Skala, Chija; Slater, Timothy F.; Adams, Jeffrey P.

    2000-08-01

    Large-lecture introductory astronomy courses for undergraduate, non-science majors present numerous problems for faculty. As part of a systematic effort to improve the course learning environment, a series of small-group, collaborative learning activities were implemented in an otherwise conventional lecture astronomy survey course. These activities were used once each week during the regularly scheduled lecture period. After eight weeks, ten focus group interviews were conducted to qualitatively assess the impact and dynamics of these small group learning activities. Overall, the data strongly suggest that students enjoy participating in the in-class learning activities in learning teams of three to four students. These students firmly believe that they are learning more than they would from lectures alone. Inductive analysis of the transcripts revealed five major themes prevalent among the students' perspectives: (1) self-formed, cooperative group composition and formation should be more regulated by the instructor; (2) team members' assigned roles should be less formally structured by the instructors; (3) cooperative groups helped in learning the course content; (4) time constraints on lectures and activities need to be more carefully aligned; and (5) gender issues can exist within the groups. These themes serve as a guide for instructors who are developing instructional interventions for large lecture courses.

  12. Exactly soluble models for surface partition of large clusters

    International Nuclear Information System (INIS)

    Bugaev, K.A.; Bugaev, K.A.; Elliott, J.B.

    2007-01-01

    The surface partition of large clusters is studied analytically within the framework of the 'Hills and Dales Model'. Three formulations are solved exactly using the Laplace-Fourier transformation method. In the limit of small-amplitude deformations, the 'Hills and Dales Model' gives the upper and lower bounds for the surface entropy coefficient of large clusters. The surface entropy coefficients found are compared with those of large clusters within the 2- and 3-dimensional Ising models

  13. Group Modeling in Social Learning Environments

    Science.gov (United States)

    Stankov, Slavomir; Glavinic, Vlado; Krpan, Divna

    2012-01-01

    Students' collaboration while learning could provide better learning environments. Collaboration assumes social interactions which occur in student groups. Social theories emphasize positive influence of such interactions on learning. In order to create an appropriate learning environment that enables social interactions, it is important to…

  14. Beyond the Standard Model: Working group report

    Indian Academy of Sciences (India)

    Right-handed neutrino production in hot dense plasmas and constraints on the ... We thank all the participants of this Working Group for their all-round cooperation. The work of AR has been supported by grants from the Department of Science ...

  15. Assessing Activity and Location of Individual Laying Hens in Large Groups Using Modern Technology

    Directory of Open Access Journals (Sweden)

    Janice M. Siegford

    2016-02-01

    Full Text Available Tracking individual animals within large groups is increasingly possible, offering an exciting opportunity to researchers. Whereas previously only relatively indistinguishable groups of individual animals could be observed and combined into pen level data, we can now focus on individual actors within these large groups and track their activities across time and space with minimal intervention and disturbance. The development is particularly relevant to the poultry industry as, due to a shift away from battery cages, flock sizes are increasingly becoming larger and environments more complex. Many efforts have been made to track individual bird behavior and activity in large groups using a variety of methodologies with variable success. Of the technologies in use, each has associated benefits and detriments, which can make the approach more or less suitable for certain environments and experiments. Within this article, we have divided several tracking systems that are currently available into two major categories (radio frequency identification and radio signal strength) and review the strengths and weaknesses of each, as well as environments or conditions for which they may be most suitable. We also describe related topics including types of analysis for the data and concerns with selecting focal birds.

  16. Assessing Activity and Location of Individual Laying Hens in Large Groups Using Modern Technology.

    Science.gov (United States)

    Siegford, Janice M; Berezowski, John; Biswas, Subir K; Daigle, Courtney L; Gebhardt-Henrich, Sabine G; Hernandez, Carlos E; Thurner, Stefan; Toscano, Michael J

    2016-02-02

    Tracking individual animals within large groups is increasingly possible, offering an exciting opportunity to researchers. Whereas previously only relatively indistinguishable groups of individual animals could be observed and combined into pen level data, we can now focus on individual actors within these large groups and track their activities across time and space with minimal intervention and disturbance. The development is particularly relevant to the poultry industry as, due to a shift away from battery cages, flock sizes are increasingly becoming larger and environments more complex. Many efforts have been made to track individual bird behavior and activity in large groups using a variety of methodologies with variable success. Of the technologies in use, each has associated benefits and detriments, which can make the approach more or less suitable for certain environments and experiments. Within this article, we have divided several tracking systems that are currently available into two major categories (radio frequency identification and radio signal strength) and review the strengths and weaknesses of each, as well as environments or conditions for which they may be most suitable. We also describe related topics including types of analysis for the data and concerns with selecting focal birds.

  17. Large-scale shifts in phytoplankton groups in the Equatorial Pacific during ENSO cycles

    Directory of Open Access Journals (Sweden)

    I. Masotti

    2011-03-01

    Full Text Available The El Niño Southern Oscillation (ENSO) drives important changes in the marine productivity of the Equatorial Pacific, in particular during major El Niño/La Niña transitions. Changes in environmental conditions associated with these climatic events also likely impact phytoplankton composition. In this work, the distribution of four major phytoplankton groups (nanoeucaryotes, Prochlorococcus, Synechococcus, and diatoms) was examined between 1996 and 2007 by applying the PHYSAT algorithm to the ocean color data archive from the Ocean Color and Temperature Sensor (OCTS) and the Sea-viewing Wide Field-of-view Sensor (SeaWiFS). Coincident with the decrease in chlorophyll concentrations, a large-scale shift in the phytoplankton composition of the Equatorial Pacific, characterized by a decrease in Synechococcus dominance and an increase in nanoeucaryote dominance, was observed during the early stages of both the strong El Niño of 1997 and the moderate El Niño of 2006. A significant increase in diatom dominance was observed in the Equatorial Pacific during the 1998 La Niña and was associated with elevated marine productivity. An analysis of the environmental variables using a coupled physical-biogeochemical model (NEMO-PISCES) suggests that the decrease in Synechococcus dominance during the two El Niño events was associated with an abrupt decline in nutrient availability (−0.9 to −2.5 μM NO3 month⁻¹). Conversely, increased nutrient availability (3 μM NO3 month⁻¹) during the 1998 La Niña resulted in an increase in diatom dominance in the Equatorial Pacific. Despite these phytoplankton community shifts, the mean composition was restored after a few months, which suggests resilience in community structure.

  18. ABOUT MODELING COMPLEX ASSEMBLIES IN SOLIDWORKS – LARGE AXIAL BEARING

    Directory of Open Access Journals (Sweden)

    Cătălin IANCU

    2017-12-01

    Full Text Available This paper presents the modeling strategy used in SOLIDWORKS for modeling special items such as a large axial bearing, and the steps to be taken in order to obtain a better design. The paper presents the features used for modeling the parts, and then the steps that must be taken to obtain the 3D model of a large axial bearing used in bucket-wheel equipment for charcoal moving.

  19. Modelling large scale human activity in San Francisco

    Science.gov (United States)

    Gonzalez, Marta

    2010-03-01

    A diverse group of people with a wide variety of schedules, activities and travel needs composes our cities nowadays. This represents a big challenge for modeling travel behaviors in urban environments; such models are of crucial interest for a wide variety of applications such as traffic forecasting, the spreading of viruses, or measuring human exposure to air pollutants. The traditional means of obtaining knowledge about travel behavior is limited to surveys on travel journeys. The information obtained is based on questionnaires that are usually costly to implement, with intrinsic limitations on covering large numbers of individuals and some problems of reliability. Using mobile phone data, we explore the basic characteristics of a model of human travel: the distribution of agents is proportional to the population density of a given region, and each agent has a characteristic trajectory size containing information on the frequency of visits to different locations. Additionally, we use a complementary data set from smart subway fare cards, offering information about the exact time each passenger enters or exits a subway station and the station's coordinates. This allows us to uncover the temporal aspects of mobility. Since we have the actual time and place of each individual's origin and destination, we can understand the temporal patterns in each visited location in further detail. Integrating the two described data sets, we provide a dynamical model of human travels that incorporates the different aspects observed empirically.
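    The placement rule described in the abstract (agents distributed in proportion to population density, each with a characteristic trajectory size) can be sketched in a few lines of Python. The density field, distributions and parameters below are purely illustrative assumptions, not the authors' calibrated model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population density over a 10x10 grid of city cells
density = rng.random((10, 10))
density /= density.sum()

n_agents = 1000

# Place agents in proportion to population density, as in the abstract
cells = rng.choice(density.size, size=n_agents, p=density.ravel())
homes = np.column_stack(np.unravel_index(cells, density.shape))

# Each agent gets a characteristic trajectory size (spread of its visits),
# drawn here from a log-normal distribution (an assumption for illustration)
radius = rng.lognormal(mean=0.0, sigma=0.5, size=n_agents)

def simulate_visits(home, r, n_trips=20):
    """Generate trip destinations scattered around an agent's home cell,
    with the spread set by the agent's characteristic trajectory size."""
    return home + rng.normal(scale=r, size=(n_trips, 2))

trips = [simulate_visits(h, r) for h, r in zip(homes, radius)]
print(len(trips), trips[0].shape)
```

    The subway fare-card data mentioned in the abstract would then constrain *when* each trip happens; the sketch above only covers the spatial placement step.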

  20. Activity of CERN and LNF groups on large area GEM detectors

    Energy Technology Data Exchange (ETDEWEB)

    Alfonsi, M. [CERN, Geneva (Switzerland); Bencivenni, G. [Laboratori Nazionali di Frascati dell' INFN, Frascati (Italy); Brock, I. [Physikalisches Institute der Universitat Bonn, Bonn (Germany); Cerioni, S. [Laboratori Nazionali di Frascati dell' INFN, Frascati (Italy); Croci, G.; David, E. [CERN, Geneva (Switzerland); De Lucia, E. [Laboratori Nazionali di Frascati dell' INFN, Frascati (Italy); De Oliveira, R. [CERN, Geneva (Switzerland); De Robertis, G. [Sezione INFN di Bari, Bari (Italy); Domenici, D., E-mail: Danilo.Domenici@lnf.infn.i [Laboratori Nazionali di Frascati dell' INFN, Frascati (Italy); Duarte Pinto, S. [CERN, Geneva (Switzerland); Felici, G.; Gatta, M.; Jacewicz, M. [Laboratori Nazionali di Frascati dell' INFN, Frascati (Italy); Loddo, F. [Sezione INFN di Bari, Bari (Italy); Morello, G. [Dipeartimento di Fisica Universita della Calabria e INFN, Cosenza (Italy); Pistilli, M. [Laboratori Nazionali di Frascati dell' INFN, Frascati (Italy); Ranieri, A. [Sezione INFN di Bari, Bari (Italy); Ropelewski, L. [CERN, Geneva (Switzerland); Sauli, F. [TERA Foundation, Novara (Italy)

    2010-05-21

    We report on the activity of the CERN and INFN-LNF groups on the development of large area GEM detectors. The two groups work together within the RD51 Collaboration, which aims at the development of micro-pattern gas detector technologies. The strong demand for large area foils from the GEM community has driven a change in the manufacturing procedure at the TS-DEM-PMT laboratory, needed to overcome the previous size limitation of 450×450 mm². A single-mask technology is now used, allowing foils to be made as large as 450×2000 mm². The limitation in the short dimension, due to the finite width of the raw material, can be overcome by splicing several foils together. A 10×10 cm² GEM detector with the new single-mask foil has been tested with X-rays and the results are shown. Possible future applications for large area GEMs are the TOTEM experiment upgrade at CERN, and the KLOE-2 experiment at the Dafne Φ-factory in Frascati.

  1. Activity of CERN and LNF groups on large area GEM detectors

    International Nuclear Information System (INIS)

    Alfonsi, M.; Bencivenni, G.; Brock, I.; Cerioni, S.; Croci, G.; David, E.; De Lucia, E.; De Oliveira, R.; De Robertis, G.; Domenici, D.; Duarte Pinto, S.; Felici, G.; Gatta, M.; Jacewicz, M.; Loddo, F.; Morello, G.; Pistilli, M.; Ranieri, A.; Ropelewski, L.; Sauli, F.

    2010-01-01

    We report on the activity of the CERN and INFN-LNF groups on the development of large area GEM detectors. The two groups work together within the RD51 Collaboration, which aims at the development of micro-pattern gas detector technologies. The strong demand for large area foils from the GEM community has driven a change in the manufacturing procedure at the TS-DEM-PMT laboratory, needed to overcome the previous size limitation of 450×450 mm². A single-mask technology is now used, allowing foils to be made as large as 450×2000 mm². The limitation in the short dimension, due to the finite width of the raw material, can be overcome by splicing several foils together. A 10×10 cm² GEM detector with the new single-mask foil has been tested with X-rays and the results are shown. Possible future applications for large area GEMs are the TOTEM experiment upgrade at CERN, and the KLOE-2 experiment at the Dafne Φ-factory in Frascati.

  2. Large Animal Stroke Models vs. Rodent Stroke Models, Pros and Cons, and Combination?

    Science.gov (United States)

    Cai, Bin; Wang, Ning

    2016-01-01

    Stroke is a leading cause of serious long-term disability worldwide and the second leading cause of death in many countries. Long-standing attempts to salvage dying neurons via various neuroprotective agents have failed in stroke translational research, owing in part to the huge gap between animal stroke models and stroke patients, which also suggests that rodent models have limited predictive value and that alternative large animal models are likely to become important in future translational research. The genetic background, physiological characteristics, behavioral characteristics, and brain structure of large animals, especially nonhuman primates, are analogous to those of humans, and resemble humans in stroke. Moreover, relatively new regional imaging techniques, measurements of regional cerebral blood flow, and sophisticated physiological monitoring can be more easily performed on the same animal at multiple time points. As a result, we can use large animal stroke models to narrow the gap and promote the translation of basic stroke research. At the same time, we should not neglect the disadvantages of large animal stroke models, such as the significant expense and ethical considerations, which can be avoided by using rodent models. Rodents should be selected as stroke models for initial testing, with primates or cats desirable as a second species, as recommended by the Stroke Therapy Academic Industry Roundtable (STAIR) group in 2009.

  3. Long-Term Calculations with Large Air Pollution Models

    DEFF Research Database (Denmark)

    Ambelas Skjøth, C.; Bastrup-Birk, A.; Brandt, J.

    1999-01-01

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  4. Using Agent Base Models to Optimize Large Scale Network for Large System Inventories

    Science.gov (United States)

    Shameldin, Ramez Ahmed; Bowling, Shannon R.

    2010-01-01

    The aim of this paper is to use Agent-Based Models (ABM) to optimize large scale network handling capabilities for large system inventories and to implement strategies for the purpose of reducing capital expenses. The models used in this paper either use computational algorithms or procedure implementations developed in Matlab to simulate agent-based models in a principal programming language and mathematical theory, using clusters; these clusters provide high-performance computing to run the program in parallel. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.

  5. Constituent rearrangement model and large transverse momentum reactions

    International Nuclear Information System (INIS)

    Igarashi, Yuji; Imachi, Masahiro; Matsuoka, Takeo; Otsuki, Shoichiro; Sawada, Shoji.

    1978-01-01

    In this chapter, two models based on the constituent rearrangement picture for large p_t phenomena are summarized. One is the quark-junction model, and the other is the correlating quark rearrangement model. Counting rules of the models apply to both two-body reactions and hadron productions. (author)

  6. Comparison Between Overtopping Discharge in Small and Large Scale Models

    DEFF Research Database (Denmark)

    Helgason, Einar; Burcharth, Hans F.

    2006-01-01

    The present paper presents overtopping measurements from small scale model tests performed at the Hydraulic & Coastal Engineering Laboratory, Aalborg University, Denmark, and large scale model tests performed at the Large Wave Channel, Hannover, Germany. Comparison between results obtained from small and large scale model tests shows no clear evidence of scale effects for overtopping above a threshold value. In the large scale model, no overtopping was measured for wave heights below Hs = 0.5 m, as the water sank into the voids between the stones on the crest. For low overtopping, scale effects...

  7. Large-signal modeling method for power FETs and diodes

    Energy Technology Data Exchange (ETDEWEB)

    Sun Lu; Wang Jiali; Wang Shan; Li Xuezheng; Shi Hui; Wang Na; Guo Shengping, E-mail: sunlu_1019@126.co [School of Electromechanical Engineering, Xidian University, Xi' an 710071 (China)

    2009-06-01

    Under a large signal drive level, a frequency domain black box model based on the nonlinear scattering function is introduced for power FETs and diodes. A time domain measurement system and a calibration method based on a digital oscilloscope are designed to extract the nonlinear scattering function of semiconductor devices. The extracted models can reflect the real electrical performance of semiconductor devices and provide a new large-signal model for the design of microwave semiconductor circuits.

  8. Large-signal modeling method for power FETs and diodes

    International Nuclear Information System (INIS)

    Sun Lu; Wang Jiali; Wang Shan; Li Xuezheng; Shi Hui; Wang Na; Guo Shengping

    2009-01-01

    Under a large signal drive level, a frequency domain black box model based on the nonlinear scattering function is introduced for power FETs and diodes. A time domain measurement system and a calibration method based on a digital oscilloscope are designed to extract the nonlinear scattering function of semiconductor devices. The extracted models can reflect the real electrical performance of semiconductor devices and provide a new large-signal model for the design of microwave semiconductor circuits.

  9. Seasonal patterns of mixed species groups in large East African mammals.

    Science.gov (United States)

    Kiffner, Christian; Kioko, John; Leweri, Cecilia; Krause, Stefan

    2014-01-01

    Mixed mammal species groups are common in East African savannah ecosystems. Yet, it is largely unknown if co-occurrences of large mammals result from random processes or social preferences and if interspecific associations are consistent across ecosystems and seasons. Because species may exchange important information and services, understanding patterns and drivers of heterospecific interactions is crucial for advancing animal and community ecology. We recorded 5403 single and multi-species clusters in the Serengeti-Ngorongoro and Tarangire-Manyara ecosystems during dry and wet seasons and used social network analyses to detect patterns of species associations. We found statistically significant associations between multiple species and association patterns differed spatially and seasonally. Consistently, wildebeest and zebras preferred being associated with other species, whereas carnivores, African elephants, Maasai giraffes and Kirk's dik-diks avoided being in mixed groups. During the dry season, we found that the betweenness (a measure of importance in the flow of information or disease) of species did not differ from a random expectation based on species abundance. In contrast, in the wet season, we found that these patterns were not simply explained by variations in abundances, suggesting that heterospecific associations were actively formed. These seasonal differences in observed patterns suggest that interspecific associations may be driven by resource overlap when resources are limited and by resource partitioning or anti-predator advantages when resources are abundant. We discuss potential mechanisms that could drive seasonal variation in the cost-benefit tradeoffs that underpin the formation of mixed-species groups.
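    The betweenness measure the authors computed on species association networks can be illustrated on a toy network. The species links below are hypothetical stand-ins (not the study's data), and the shortest-path enumeration is a naive sketch of the standard definition rather than a production algorithm.

```python
from collections import deque
from itertools import combinations

# Hypothetical species association network: an edge means the two species
# were observed together in mixed groups (illustrative, not the study's data)
adj = {
    "wildebeest": ["zebra", "gazelle"],
    "zebra": ["wildebeest", "gazelle"],
    "gazelle": ["wildebeest", "zebra", "impala"],
    "impala": ["gazelle", "giraffe"],
    "giraffe": ["impala"],
}

def shortest_paths(src, dst):
    """Enumerate all shortest simple paths from src to dst via BFS."""
    paths, best = [], None
    queue = deque([[src]])
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            break  # all shortest paths already found
        node = path[-1]
        if node == dst:
            best = len(path)
            paths.append(path)
            continue
        for nxt in adj[node]:
            if nxt not in path:
                queue.append(path + [nxt])
    return paths

# Betweenness: how often a species sits on shortest paths between others,
# a proxy for its role in the flow of information or disease.
betweenness = {s: 0.0 for s in adj}
for a, b in combinations(adj, 2):
    paths = shortest_paths(a, b)
    for p in paths:
        for mid in p[1:-1]:
            betweenness[mid] += 1.0 / len(paths)

print(max(betweenness, key=betweenness.get))
```

    In this toy network the gazelle bridges the grazer cluster to the impala-giraffe pair, so it scores highest; the study asks whether such scores deviate from what species abundances alone would predict.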

  10. Rapid monitoring of large groups of internally contaminated people following a radiation accident

    International Nuclear Information System (INIS)

    1994-05-01

    In the management of an emergency, it is necessary to assess the radiation exposures of people in the affected areas. An essential component in the programme is the monitoring of internal contamination. Existing fixed installations for the assessment of incorporated radionuclides may be of limited value in these circumstances because they may be inconveniently sited, oversensitive for the purpose, or inadequately equipped and staffed to cope with the large numbers referred to them. The IAEA considered it important to produce guidance on rapid monitoring of large groups of internally contaminated people. The purpose of this document is to provide Member States with an overview on techniques that can be applied during abnormal or accidental situations. Refs and figs

  11. Drivers Advancing Oral Health in a Large Group Dental Practice Organization.

    Science.gov (United States)

    Simmons, Kristen; Gibson, Stephanie; White, Joel M

    2016-06-01

    Three change drivers are being implemented to achieve high standards of patient-centric and evidence-based oral health care within the context of a large multispecialty dental group practice organization, based on the commitment of the dental hygienist chief operating officer and her team. A recent environmental scan elucidated 6 change drivers that can impact the provision of oral health care. Practitioners who can embrace and maximize aspects of these change drivers will move dentistry forward and create future opportunities. This article explains how 3 of these change drivers are being applied in a privately held, accountable risk-bearing entity that provides individualized treatment programs for more than 417,000 members. To facilitate integration of the conceptual changes related to the drivers, a multi-institutional, multidisciplinary, highly functioning collaborative work group was formed. The document Dental Hygiene at a Crossroads for Change(1) inspired the first author, a dental hygienist in a unique position as chief operating officer of a large group practice, to pursue evidence-based organizational change and to impact the quality of patient care. This was accomplished by implementing technological advances including dental diagnosis terminology in the electronic health record, clinical decision support, standardized treatment guidelines, quality metrics, and patient engagement to improve oral health outcomes at the patient and population levels. The systems and processes used to implement 3 change drivers in a large multi-practice dental setting are presented to inform and inspire others to implement change drivers with the potential for advancing oral health. Technology implementing best practices and improving patient engagement are excellent drivers to advance oral health and an effective use of oral health care dollars. Improved oral health can be leveraged through technological advances to improve clinical practice. Copyright © 2016 Elsevier Inc.

  12. Comparison of the large muscle group widths of the pelvic limb in seven breeds of dogs.

    Science.gov (United States)

    Sabanci, Seyyid Said; Ocal, Mehmet Kamil

    2018-05-14

    Orthopaedic diseases are common in the pelvic limbs of dogs, and reference values for the large muscle groups of the pelvic limb may aid in diagnosing such diseases. As such, the objective of this study was to compare the large muscle groups of the pelvic limb in seven breeds of dogs. A total of 126 dogs from different breeds were included, and the widths of the quadriceps, hamstring and gastrocnemius muscles were measured from lateral radiographic images. The width of the quadriceps did not differ between the breeds, but the widths of the hamstring and gastrocnemius muscles differed significantly between the breeds. The widest hamstring and gastrocnemius muscles were seen in the Rottweilers and the Boxers, respectively. The narrowest hamstring and gastrocnemius muscles were seen in the Belgian Malinois and the Golden Retrievers, respectively. All ratios between the measured muscles differed significantly between the breeds. Doberman Pinschers and Belgian Malinois had the highest ratio of gastrocnemius width to hamstring width. Doberman Pinschers also had the highest ratio of quadriceps width to hamstring width. German Shepherds had the highest ratio of gastrocnemius width to quadriceps width and the lowest ratio of quadriceps width to hamstring width. The ratios of the muscle widths may be used as reference values to assess muscular atrophy or hypertrophy in cases of bilateral or unilateral orthopaedic diseases of the pelvic limbs. Further studies are required to determine the widths and ratios of the large muscle groups of the pelvic limbs in other dog breeds. © 2018 Blackwell Verlag GmbH.

  13. Evaluation of receptivity of the medical students in a lecture of a large group

    Directory of Open Access Journals (Sweden)

    Vidyarthi Surendra K, Nayak Roopa P, Gupta Sandeep K

    2014-04-01

    Full Text Available Background: Lecturing is a widely used teaching method in higher education. Instructors of large classes may have no option but to deliver lectures to convey information to large groups of students. Aims and Objectives: The present study aimed to evaluate the effectiveness/receptivity of interactive lecturing in a large group of second-year MBBS students. Material and Methods: The study was conducted in the well-equipped lecture theater of Dhanalakshmi Srinivasan Medical College and Hospital (DSMCH), Tamil Nadu. A fully prepared interactive lecture on a specific topic was delivered as a PowerPoint presentation to second-year MBBS students. Before starting the lecture, the instructor distributed a set of 10 multiple-choice questions to be attempted within 10 minutes. Thirty minutes after delivering the lecture, the instructor again distributed the same set of 10 multiple-choice questions to be attempted in 10 minutes. The topic was never disclosed to the students before the lecture. Statistics: We analyzed each student's pre-lecture and post-lecture answers by applying the paired t-test, using www.openepi.com version 3.01 online/offline software and Microsoft Excel (Windows 2010). Results: Among the 111 students (31 male, 80 female; average age 18.58 years), baseline (pre-lecture) receptivity was 30.99% ± 14.64, and post-lecture receptivity increased to 53.51% ± 19.52. Only 12 of the 111 students had post-lecture receptivity (mean 25.8% ± 10.84) lower than their baseline (mean 45% ± 9.05). Conclusion: In an interactive lecture session with a PowerPoint presentation, students can learn even in large-class environments, but the session should be active-learner centered.
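    The paired t-test the authors applied to pre- and post-lecture scores follows directly from its definition: the mean within-student change divided by its standard error. The scores below are made-up illustrative numbers, not the study's data.

```python
import math
from statistics import mean, stdev

# Illustrative pre/post-lecture scores (% correct) for a few students;
# invented for the sketch, not taken from the study
pre  = [30, 20, 40, 10, 50, 30, 20, 40]
post = [60, 40, 70, 30, 60, 50, 40, 80]

# Paired t-test: is the mean within-student change different from zero?
diffs = [b - a for a, b in zip(pre, post)]
n = len(diffs)
t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
print(round(t, 2), "df =", n - 1)
```

    The resulting t statistic is compared against the t distribution with n − 1 degrees of freedom; tools like the OpenEpi site mentioned in the abstract report the corresponding p-value.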

  14. All polymer chip for amperometric studies of transmitter release from large groups of neuronal cells

    DEFF Research Database (Denmark)

    Larsen, Simon T.; Taboryski, Rafael

    2012-01-01

    We present an all-polymer electrochemical chip for simple detection of transmitter release from large groups of cultured PC12 cells. Conductive polymer PEDOT:tosylate microelectrodes were used together with constant potential amperometry to obtain easy-to-analyze oxidation signals from potassium-induced release of transmitter molecules. The nature of the resulting current peaks is discussed, and the time for restoring transmitter reservoirs is studied. The relationship between released transmitters and potassium concentration was found to fit a sigmoidal dose–response curve. Finally, we demonstrate...
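    A sigmoidal dose–response relationship like the one the abstract reports is commonly fitted with a Hill-type curve. The data points, parameter guesses and units below are invented for illustration and are not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical dose-response data: transmitter-release signal (arbitrary
# units) vs. potassium dose; shaped like a sigmoid for the sketch
dose = np.array([5, 10, 20, 40, 80, 160], dtype=float)
resp = np.array([0.05, 0.12, 0.35, 0.70, 0.92, 0.98])

def hill(x, top, ec50, n):
    """Sigmoidal (Hill-type) dose-response curve: half-maximal at ec50."""
    return top * x**n / (ec50**n + x**n)

# Fit the three parameters from the data, starting from rough guesses
params, _ = curve_fit(hill, dose, resp, p0=[1.0, 30.0, 2.0])
top, ec50, n = params
print(f"EC50 ~ {ec50:.0f} (arbitrary dose units)")
```

    The fitted EC50 summarizes the potassium concentration that evokes half-maximal release, which is one convenient way to compare stimulation protocols.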

  15. Group Clustering Mechanism for P2P Large Scale Data Sharing Collaboration

    Institute of Scientific and Technical Information of China (English)

    DENGQianni; LUXinda; CHENLi

    2005-01-01

    Research shows that a P2P scientific collaboration network will exhibit small-world topology, as do a large number of social networks for which the same pattern has been documented. In this paper we propose a topology building protocol to benefit from the small-world feature. We find that the idea of Freenet resembles the dynamic pattern of social interactions in scientific data sharing, and the small-world characteristic of Freenet is propitious to improving file locating performance in scientific data sharing. But the LRU (Least Recently Used) datastore cache replacement scheme of Freenet is not suitable for scientific data sharing networks. Based on the group locality of scientific collaboration, we propose an enhanced group clustering cache replacement scheme. Simulation shows that this scheme improves the request hit ratio dramatically while keeping the small average hops per successful request comparable to LRU.
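    The LRU replacement policy that the paper argues is ill-suited to scientific data sharing is easy to state precisely. The sketch below is the standard textbook LRU cache (not code from the paper); the group clustering scheme would additionally weight eviction by which collaboration group an entry belongs to.

```python
from collections import OrderedDict

class LRUCache:
    """Classic least-recently-used replacement, as in Freenet's datastore."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key):
        if key not in self.store:
            return None
        self.store.move_to_end(key)      # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")       # "a" becomes most recently used
cache.put("c", 3)    # capacity exceeded: evicts "b"
print(cache.get("b"), cache.get("a"))
```

    Under group locality, a whole group's files tend to be requested together, so evicting purely by recency can discard files a group is about to request; that is the behavior the proposed scheme modifies.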

  16. Social management of laboratory rhesus macaques housed in large groups using a network approach: A review.

    Science.gov (United States)

    McCowan, Brenda; Beisner, Brianne; Hannibal, Darcy

    2017-12-07

    Biomedical facilities across the nation and worldwide aim to develop cost-effective methods for the reproductive management of macaque breeding groups, typically by housing macaques in large, multi-male multi-female social groups that provide monkey subjects for research as well as appropriate socialization for their psychological well-being. One of the most difficult problems in managing socially housed macaques is their propensity for deleterious aggression. From a management perspective, deleterious aggression (as opposed to less intense aggression that serves to regulate social relationships) is undoubtedly the most problematic behavior observed in group-housed macaques, which can readily escalate to the degree that it causes social instability, increases serious physical trauma leading to group dissolution, and reduces psychological well-being. Thus, for both welfare and other management reasons, aggression among rhesus macaques at primate centers and facilities needs to be addressed with a more proactive approach. Management strategies need to be instituted that maximize social housing while also reducing problematic social aggression due to instability, using efficacious methods for detection and prevention in the most cost-effective manner. Herein we review a new proactive approach using social network analysis to assess and predict deleterious aggression in macaque groups. We discovered three major pathways leading to instability, such as unusually high rates and severity of trauma and social relocations. These pathways are linked either directly or indirectly to network structure in rhesus macaque societies. We define these pathways according to the key intrinsic and extrinsic variables (e.g., demographic, genetic or social factors) that influence network and behavioral measures of stability (see Fig. 1). They are: (1) presence of natal males, (2) matrilineal genetic fragmentation, and (3) the power structure and conflict policing behavior supported by this...

  17. Large scale stochastic spatio-temporal modelling with PCRaster

    NARCIS (Netherlands)

    Karssenberg, D.J.; Drost, N.; Schmitz, O.; Jong, K. de; Bierkens, M.F.P.

    2013-01-01

    PCRaster is a software framework for building spatio-temporal models of land surface processes (http://www.pcraster.eu). Building blocks of models are spatial operations on raster maps, including a large suite of operations for water and sediment routing. These operations are available to model...
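    PCRaster's modelling style, a temporal loop over map-algebra operations applied to whole raster grids, can be mimicked in plain numpy. The grid size, rainfall field and drainage rate below are arbitrary illustrations, and the code deliberately does not use PCRaster's actual API.

```python
import numpy as np

# Minimal spatio-temporal raster model in plain numpy (a sketch of the
# modelling style only): each timestep updates a water-storage raster
# from a spatially varying rainfall map and a simple linear drainage term.
rng = np.random.default_rng(1)
storage = np.zeros((5, 5))           # water storage per raster cell
drain_rate = 0.2                     # fraction drained each timestep

for t in range(10):                  # temporal loop over map operations
    rain = rng.random((5, 5)) * 2.0  # rainfall map for this timestep
    storage = (storage + rain) * (1.0 - drain_rate)

print(storage.shape, float(storage.min()) >= 0.0)
```

    Real PCRaster scripts replace the numpy expressions with routing operations (e.g. flow along a drainage network), but the loop-over-rasters structure is the same.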

  18. An accurate and simple large signal model of HEMT

    DEFF Research Database (Denmark)

    Liu, Qing

    1989-01-01

    A large-signal model of discrete HEMTs (high-electron-mobility transistors) has been developed. It is simple and suitable for SPICE simulation of hybrid digital ICs. The model parameters are extracted by using computer programs and data provided by the manufacturer. Based on this model, a hybrid...

  19. An evolutionary theory of large-scale human warfare: Group-structured cultural selection.

    Science.gov (United States)

    Zefferman, Matthew R; Mathew, Sarah

    2015-01-01

    When humans wage war, it is not unusual for battlefields to be strewn with dead warriors. These warriors typically were men in their reproductive prime who, had they not died in battle, might have gone on to father more children. Typically, they are also genetically unrelated to one another. We know of no other animal species in which reproductively capable, genetically unrelated individuals risk their lives in this manner. Because the immense private costs borne by individual warriors create benefits that are shared widely by others in their group, warfare is a stark evolutionary puzzle that is difficult to explain. Although several scholars have posited models of the evolution of human warfare, these models do not adequately explain how humans solve the problem of collective action in warfare at the evolutionarily novel scale of hundreds of genetically unrelated individuals. We propose that group-structured cultural selection explains this phenomenon. © 2015 Wiley Periodicals, Inc.

  20. Efficacy of formative evaluation using a focus group for a large classroom setting in an accelerated pharmacy program.

    Science.gov (United States)

    Nolette, Shaun; Nguyen, Alyssa; Kogan, David; Oswald, Catherine; Whittaker, Alana; Chakraborty, Arup

    2017-07-01

    Formative evaluation is a process utilized to improve communication between students and faculty. This evaluation method allows the ability to address pertinent issues in a timely manner; however, implementation of formative evaluation can be a challenge, especially in a large classroom setting. Using mediated formative evaluation, the purpose of this study is to determine if a student based focus group is a viable option to improve efficacy of communication between an instructor and students as well as time management in a large classroom setting. Out of 140 total students, six students were selected to form a focus group - one from each of six total sections of the classroom. Each focus group representative was responsible for collecting all the questions from students of their corresponding sections and submitting them to the instructor two to three times a day. Responses from the instructor were either passed back to pertinent students by the focus group representatives or addressed directly with students by the instructor. This study was conducted using a fifteen-question survey after the focus group model was utilized for one month. A printed copy of the survey was distributed in the class by student investigators. Questions were of varying types, including Likert scale, yes/no, and open-ended response. One hundred forty surveys were administered, and 90 complete responses were collected. Surveys showed that 93.3% of students found that use of the focus group made them more likely to ask questions for understanding. The surveys also showed 95.5% of students found utilizing the focus group for questions allowed for better understanding of difficult concepts. General open-ended answer portions of the survey showed that most students found the focus group allowed them to ask questions more easily since they did not feel intimidated by asking in front of the whole class. No correlation was found between demographic characteristics and survey responses. 

  1. Large-scale parallel configuration interaction. II. Two- and four-component double-group general active space implementation with application to BiH

    DEFF Research Database (Denmark)

    Knecht, Stefan; Jensen, Hans Jørgen Aagaard; Fleig, Timo

    2010-01-01

    We present a parallel implementation of a large-scale relativistic double-group configuration interaction (CI) program. It is applicable with a large variety of two- and four-component Hamiltonians. The parallel algorithm is based on a distributed data model in combination with a static load balanci...

  2. Engaging the public with low-carbon energy technologies: Results from a Scottish large group process

    International Nuclear Information System (INIS)

    Howell, Rhys; Shackley, Simon; Mabon, Leslie; Ashworth, Peta; Jeanneret, Talia

    2014-01-01

    This paper presents the results of a large group process conducted in Edinburgh, Scotland investigating public perceptions of climate change and low-carbon energy technologies, specifically carbon dioxide capture and storage (CCS). The quantitative and qualitative results reported show that the participants were broadly supportive of efforts to reduce carbon dioxide emissions, and that there is an expressed preference for renewable energy technologies to be employed to achieve this. CCS was considered in detail during the research due to its climate mitigation potential; results show that the workshop participants were cautious about its deployment. The paper discusses a number of interrelated factors which appear to influence perceptions of CCS; factors such as the perceived costs and benefits of the technology, and people's personal values and trust in others all impacted upon participants’ attitudes towards the technology. The paper thus argues for the need to provide the public with broad-based, balanced and trustworthy information when discussing CCS, and to take seriously the full range of factors that influence public perceptions of low-carbon technologies. - Highlights: • We report the results of a Scottish large group workshop on energy technologies. • There is strong public support for renewable energy and mixed opinions towards CCS. • The workshop was successful in initiating discussion around climate change and energy technologies. • Issues of trust, uncertainty, costs, benefits, values and emotions all inform public perceptions. • Need to take seriously the full range of factors that inform perceptions

  3. Shell model in large spaces and statistical spectroscopy

    International Nuclear Information System (INIS)

    Kota, V.K.B.

    1996-01-01

    For many nuclear structure problems of current interest it is essential to work with the shell model in large spaces. For this, three different approaches are now in use, and two of them are: (i) the conventional shell model diagonalization approach, taking into account new advances in computer technology; (ii) the shell model Monte Carlo method. A brief overview of these two methods is given. Large space shell model studies raise fundamental questions regarding the information content of the shell model spectrum of complex nuclei. This led to the third approach: the statistical spectroscopy methods. The principles of statistical spectroscopy have their basis in nuclear quantum chaos, and they are described in some detail, substantiated by large scale shell model calculations. (author)

  4. Ultradian activity rhythms in large groups of newly hatched chicks (Gallus gallus domesticus).

    Science.gov (United States)

    Nielsen, B L; Erhard, H W; Friggens, N C; McLeod, J E

    2008-07-01

    A clutch of young chicks housed with a mother hen exhibits ultradian (within-day) rhythms of activity corresponding to the brooding cycle of the hen. In the present study, clear evidence was found of ultradian activity rhythms in newly hatched domestic chicks housed in groups larger than natural clutch size, without a mother hen or any other obvious external time-keeper. No consistent synchrony was found between groups housed in different pens within the same room. The ultradian rhythms disappeared with time, and little evidence of group rhythmicity remained by the third night. This disappearance over time suggests that the presence of a mother hen may be pivotal for the long-term maintenance of these rhythms. The ultradian rhythm of the chicks may also play an important role in the initiation of brooding cycles during the behavioural transition of the mother hen from incubation to brooding. Computer simulations of individual activity rhythms were found to reproduce the observations made on a group basis. This was achievable even when individual chick rhythms were modelled as independent of each other, so no assumption of social facilitation is necessary to obtain ultradian activity rhythms at the group level.
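    The simulation idea in this abstract, fully independent individual rhythms that nonetheless look like a group rhythm, can be sketched in a few lines. This is a hypothetical toy model, not the authors' code: the cycle period, jitter, and on/off activity pattern are illustrative assumptions. Chicks start in phase (they hatch together) and drift apart because each has a slightly different period, so the group-level rhythm fades over the simulated days, as observed.

    ```python
    import random

    random.seed(0)  # reproducible toy run

    def simulate_chicks(n_chicks=100, minutes=3 * 24 * 60, period=75.0, jitter=0.05):
        """Independent on/off activity cycles that start in phase (hatching)
        and drift apart because each chick's period differs slightly."""
        counts = [0] * minutes
        for _ in range(n_chicks):
            p = period * random.uniform(1 - jitter, 1 + jitter)
            for t in range(minutes):
                if (t % p) < p / 2:  # chick is active in the first half of its cycle
                    counts[t] += 1
        return counts

    counts = simulate_chicks()
    early = max(counts[:150]) - min(counts[:150])   # group-level swing, first night
    late = max(counts[-150:]) - min(counts[-150:])  # group-level swing, third night
    # Early on, the whole group rises and rests together (swing near n_chicks);
    # by the third simulated day the swing has largely flattened.
    ```

    No coupling term appears anywhere in the model, which is the point: the apparent group rhythm and its later disappearance both follow from independent oscillators alone.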

  5. WORK GROUP DEVELOPMENT MODELS – THE EVOLUTION FROM SIMPLE GROUP TO EFFECTIVE TEAM

    Directory of Open Access Journals (Sweden)

    Raluca ZOLTAN

    2016-02-01

    Full Text Available Currently, work teams are increasingly studied by virtue of the advantages they have compared to work groups. But a true team does not appear overnight; it must complete several stages to move beyond the initial phase of its existence as a group. The question that arises is at what point a simple group turns into an effective team. Even though the development of a group into a team is not a linear process, the models found in the literature provide a rich framework for analyzing and identifying the features which a group acquires over time until it becomes a team in the true sense of the word. Thus, in this article we propose an analysis of the main models of group development in order to point out, even if in a relative manner, the stage at which the simple work group becomes an effective work team.

  6. Working group report: Flavor physics and model building

    Indian Academy of Sciences (India)

    © Indian Academy of Sciences. Vol. ... This is the report of flavor physics and model building working group at ... those in model building have been primarily devoted to neutrino physics. ..... [12] Andrei Gritsan, ICHEP 2004, Beijing, China.

  7. Seasonal patterns of mixed species groups in large East African mammals.

    Directory of Open Access Journals (Sweden)

    Christian Kiffner

    Full Text Available Mixed mammal species groups are common in East African savannah ecosystems. Yet, it is largely unknown if co-occurrences of large mammals result from random processes or social preferences and if interspecific associations are consistent across ecosystems and seasons. Because species may exchange important information and services, understanding patterns and drivers of heterospecific interactions is crucial for advancing animal and community ecology. We recorded 5403 single and multi-species clusters in the Serengeti-Ngorongoro and Tarangire-Manyara ecosystems during dry and wet seasons and used social network analyses to detect patterns of species associations. We found statistically significant associations between multiple species and association patterns differed spatially and seasonally. Consistently, wildebeest and zebras preferred being associated with other species, whereas carnivores, African elephants, Maasai giraffes and Kirk's dik-diks avoided being in mixed groups. During the dry season, we found that the betweenness (a measure of importance in the flow of information or disease of species did not differ from a random expectation based on species abundance. In contrast, in the wet season, we found that these patterns were not simply explained by variations in abundances, suggesting that heterospecific associations were actively formed. These seasonal differences in observed patterns suggest that interspecific associations may be driven by resource overlap when resources are limited and by resource partitioning or anti-predator advantages when resources are abundant. We discuss potential mechanisms that could drive seasonal variation in the cost-benefit tradeoffs that underpin the formation of mixed-species groups.

  8. Nuclear spectroscopy in large shell model spaces: recent advances

    International Nuclear Information System (INIS)

    Kota, V.K.B.

    1995-01-01

    Three different approaches are now available for carrying out nuclear spectroscopy studies in large shell model spaces and they are: (i) the conventional shell model diagonalization approach but taking into account new advances in computer technology; (ii) the recently introduced Monte Carlo method for the shell model; (iii) the spectral averaging theory, based on central limit theorems, in indefinitely large shell model spaces. The various principles, recent applications and possibilities of these three methods are described and the similarity between the Monte Carlo method and the spectral averaging theory is emphasized. (author). 28 refs., 1 fig., 5 tabs

  9. Dynamics of group knowledge production in facilitated modelling workshops

    DEFF Research Database (Denmark)

    Tavella, Elena; Franco, L. Alberto

    2015-01-01

    by which models are jointly developed with group members interacting face-to-face, with or without computer support. The models produced are used to inform negotiations about the nature of the issues faced by the group, and how to address them. While the facilitated modelling literature is impressive......, the workshop. Drawing on the knowledge-perspective of group communication, we conducted a micro-level analysis of a transcript of a facilitated modelling workshop held with the management team of an Alternative Food Network in the UK. Our analysis suggests that facilitated modelling interactions can take...

  10. Group Contribution Based Process Flowsheet Synthesis, Design and Modelling

    DEFF Research Database (Denmark)

    d'Anterroches, Loïc; Gani, Rafiqul

    2004-01-01

    This paper presents a process-group-contribution method to model, simulate and synthesize a flowsheet. The process-group based representation of a flowsheet, together with a process "property" model, is presented. The process-group based synthesis method is developed on the basis of computer-aided molecular design methods and gives the ability to screen numerous process alternatives without the need to use rigorous process simulation models. The process "property" model calculates the design targets for the generated flowsheet alternatives, while a reverse modelling method (also developed) determines the design variables matching the targets. A simple illustrative example highlighting the main features of the methodology is also presented.

  11. A Grouping Particle Swarm Optimizer with Personal-Best-Position Guidance for Large Scale Optimization.

    Science.gov (United States)

    Guo, Weian; Si, Chengyong; Xue, Yu; Mao, Yanfen; Wang, Lei; Wu, Qidi

    2017-05-04

    Particle Swarm Optimization (PSO) is a popular algorithm which is widely investigated and well implemented in many areas. However, the canonical PSO does not maintain population diversity well, which usually leads to premature convergence or local optima. To address this issue, we propose a variant of PSO named Grouping PSO with Personal-Best-Position (Pbest) Guidance (GPSO-PG), which maintains population diversity by preserving the diversity of exemplars. On one hand, we adopt a uniform random allocation strategy to assign particles to different groups, and in each group the losers learn from the winner. On the other hand, we employ the personal historical best position of each particle in social learning rather than the current global best particle. In this way, exemplar diversity increases and the effect of the global best particle is eliminated. We test the proposed algorithm on the CEC 2008 and CEC 2010 benchmarks, which concern large scale optimization problems (LSOPs). Compared with several current peer algorithms, GPSO-PG exhibits competitive performance in maintaining population diversity and obtains satisfactory performance on the problems.
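    The grouping-plus-pbest idea described in the abstract can be illustrated with a minimal sketch. This is our reconstruction under stated assumptions, not the authors' reference implementation: the inertia and acceleration constants, group count, bounds, and the sphere test function are all illustrative choices. Each iteration shuffles the swarm into equal groups; within a group, every particle except the group winner (the one with the best personal-best fitness) is pulled toward the winner's *personal best* position rather than a global best.

    ```python
    import random

    random.seed(0)  # reproducible toy run

    def gpso_pg(f, dim=10, n_particles=30, n_groups=6, iters=200, lo=-5.0, hi=5.0):
        """Minimal grouping-PSO sketch with personal-best-position guidance."""
        pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]              # personal best positions
        pfit = [f(p) for p in pos]               # personal best fitnesses
        w, c = 0.7, 1.5                          # inertia and acceleration (illustrative)
        for _ in range(iters):
            order = list(range(n_particles))
            random.shuffle(order)                # uniform random group allocation
            size = n_particles // n_groups
            for g in range(n_groups):
                group = order[g * size:(g + 1) * size]
                winner = min(group, key=lambda i: pfit[i])
                for i in group:
                    if i == winner:              # the winner keeps its position
                        continue
                    for d in range(dim):
                        r1, r2 = random.random(), random.random()
                        vel[i][d] = (w * vel[i][d]
                                     + c * r1 * (pbest[i][d] - pos[i][d])
                                     + c * r2 * (pbest[winner][d] - pos[i][d]))
                        pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
                    fit = f(pos[i])
                    if fit < pfit[i]:
                        pfit[i], pbest[i] = fit, pos[i][:]
        best = min(range(n_particles), key=lambda i: pfit[i])
        return pbest[best], pfit[best]

    sphere = lambda x: sum(v * v for v in x)
    best_x, best_f = gpso_pg(sphere)
    ```

    Note how no global best appears anywhere in the velocity update, which is what the abstract means by eliminating the effect of the global best particle.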

  12. Synchrony and Physiological Arousal Increase Cohesion and Cooperation in Large Naturalistic Groups.

    Science.gov (United States)

    Jackson, Joshua Conrad; Jong, Jonathan; Bilkey, David; Whitehouse, Harvey; Zollmann, Stefanie; McNaughton, Craig; Halberstadt, Jamin

    2018-01-09

    Separate research streams have identified synchrony and arousal as two factors that might contribute to the effects of human rituals on social cohesion and cooperation. But no research has manipulated these variables in the field to investigate their causal, and potentially interactive, effects on prosocial behaviour. Across four experimental sessions involving large samples of strangers, we manipulated the synchronous and physiologically arousing affordances of a group marching task within a sports stadium. We observed participants' subsequent movement, grouping, and cooperation via a camera hidden in the stadium's roof. Synchrony and arousal both showed main effects, predicting larger groups, tighter clustering, and more cooperative behaviour in a free-rider dilemma. Synchrony and arousal also interacted on measures of clustering and cooperation, such that synchrony encouraged closer clustering and greater cooperation only when paired with physiological arousal. The research helps us understand why synchrony and arousal often co-occur in rituals around the world. It also represents the first use of real-time spatial tracking as a precise and naturalistic method of simulating collective rituals.

  13. Renormalization-group flow of the effective action of cosmological large-scale structures

    CERN Document Server

    Floerchinger, Stefan

    2017-01-01

    Following an approach of Matarrese and Pietroni, we derive the functional renormalization group (RG) flow of the effective action of cosmological large-scale structures. Perturbative solutions of this RG flow equation are shown to be consistent with standard cosmological perturbation theory. Non-perturbative approximate solutions can be obtained by truncating the a priori infinite set of possible effective actions to a finite subspace. Using for the truncated effective action a form dictated by dissipative fluid dynamics, we derive RG flow equations for the scale dependence of the effective viscosity and sound velocity of non-interacting dark matter, and we solve them numerically. Physically, the effective viscosity and sound velocity account for the interactions of long-wavelength fluctuations with the spectrum of smaller-scale perturbations. We find that the RG flow exhibits an attractor behaviour in the IR that significantly reduces the dependence of the effective viscosity and sound velocity on the input ...

  14. Investigating Facebook Groups through a Random Graph Model

    OpenAIRE

    Dinithi Pallegedara; Lei Pan

    2014-01-01

    Facebook disseminates messages for billions of users every day. Though there are log files stored on central servers, law enforcement agencies outside of the U.S. cannot easily acquire server log files from Facebook. This work models Facebook user groups by using a random graph model. Our aim is to help detectives quickly estimate the size of a Facebook group with which a suspect is involved. We estimate this group size according to the number of immediate friends and the number of ext...

  15. Modeling and Forecasting Large Realized Covariance Matrices and Portfolio Choice

    NARCIS (Netherlands)

    Callot, Laurent A.F.; Kock, Anders B.; Medeiros, Marcelo C.

    2017-01-01

    We consider modeling and forecasting large realized covariance matrices by penalized vector autoregressive models. We consider Lasso-type estimators to reduce the dimensionality and provide strong theoretical guarantees on the forecast capability of our procedure. We show that we can forecast
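    The modelling idea here, forecasting the vectorised realised covariance matrix with an L1-penalised VAR estimated equation by equation, can be sketched as follows. This is a toy reconstruction under stated assumptions, not the authors' estimator: a VAR(1) fitted with a simple ISTA lasso solver on simulated data, where each row would in practice hold the vech of the (log) realised covariance matrix.

    ```python
    import numpy as np

    def lasso_ista(X, y, lam, steps=500):
        """Toy L1-penalised least squares via ISTA:
        minimises (1/2n)||y - X b||^2 + lam ||b||_1."""
        n = len(y)
        L = np.linalg.norm(X, 2) ** 2 / n              # Lipschitz constant of the smooth part
        beta, step = np.zeros(X.shape[1]), 1.0 / L
        for _ in range(steps):
            z = beta - step * (X.T @ (X @ beta - y)) / n   # gradient step
            beta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
        return beta

    def fit_var1_lasso(series, lam):
        """Penalised VAR(1), one lasso regression per coordinate of the series."""
        X, Y = series[:-1], series[1:]
        return np.column_stack([lasso_ista(X, Y[:, j], lam)
                                for j in range(series.shape[1])])

    # toy data from a sparse, stable VAR(1) with known coefficient matrix
    rng = np.random.default_rng(0)
    B_true = np.diag([0.8, 0.5, 0.3])
    x, rows = np.zeros(3), []
    for _ in range(400):
        x = x @ B_true + rng.standard_normal(3)
        rows.append(x)
    series = np.asarray(rows)

    B_hat = fit_var1_lasso(series, lam=0.05)
    one_step = series[-1] @ B_hat                      # one-step-ahead forecast
    ```

    The L1 penalty shrinks the many near-zero cross-coefficients toward exactly zero, which is what keeps the procedure tractable when the covariance matrix, and hence the VAR dimension, is large.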

  16. Dynamic Modeling, Optimization, and Advanced Control for Large Scale Biorefineries

    DEFF Research Database (Denmark)

    Prunescu, Remus Mihail

    with a complex conversion route. Computational fluid dynamics is used to model transport phenomena in large reactors capturing tank profiles, and delays due to plug flows. This work publishes for the first time demonstration scale real data for validation showing that the model library is suitable...

  17. Modelling and measurements of wakes in large wind farms

    International Nuclear Information System (INIS)

    Barthelmie, R J; Rathmann, O; Frandsen, S T; Hansen, K S; Politis, E; Prospathopoulos, J; Rados, K; Cabezon, D; Schlez, W; Phillips, J; Neubert, A; Schepers, J G; Pijl, S P van der

    2007-01-01

    The paper presents research conducted in the Flow workpackage of the EU funded UPWIND project which focuses on improving models of flow within and downwind of large wind farms in complex terrain and offshore. The main activity is modelling the behaviour of wind turbine wakes in order to improve power output predictions

  18. Modelling and measurements of wakes in large wind farms

    DEFF Research Database (Denmark)

    Barthelmie, Rebecca Jane; Rathmann, Ole; Frandsen, Sten Tronæs

    2007-01-01

    The paper presents research conducted in the Flow workpackage of the EU funded UPWIND project which focuses on improving models of flow within and downwind of large wind farms in complex terrain and offshore. The main activity is modelling the behaviour of wind turbine wakes in order to improve...

  19. Group Membership, Group Change, and Intergroup Attitudes: A Recategorization Model Based on Cognitive Consistency Principles

    Science.gov (United States)

    Roth, Jenny; Steffens, Melanie C.; Vignoles, Vivian L.

    2018-01-01

    The present article introduces a model based on cognitive consistency principles to predict how new identities become integrated into the self-concept, with consequences for intergroup attitudes. The model specifies four concepts (self-concept, stereotypes, identification, and group compatibility) as associative connections. The model builds on two cognitive principles, balance–congruity and imbalance–dissonance, to predict identification with social groups that people currently belong to, belonged to in the past, or newly belong to. More precisely, the model suggests that the relative strength of self-group associations (i.e., identification) depends in part on the (in)compatibility of the different social groups. Combining insights into cognitive representation of knowledge, intergroup bias, and explicit/implicit attitude change, we further derive predictions for intergroup attitudes. We suggest that intergroup attitudes alter depending on the relative associative strength between the social groups and the self, which in turn is determined by the (in)compatibility between social groups. This model unifies existing models on the integration of social identities into the self-concept by suggesting that basic cognitive mechanisms play an important role in facilitating or hindering identity integration and thus contribute to reducing or increasing intergroup bias. PMID:29681878

  20. Group Membership, Group Change, and Intergroup Attitudes: A Recategorization Model Based on Cognitive Consistency Principles

    Directory of Open Access Journals (Sweden)

    Jenny Roth

    2018-04-01

    Full Text Available The present article introduces a model based on cognitive consistency principles to predict how new identities become integrated into the self-concept, with consequences for intergroup attitudes. The model specifies four concepts (self-concept, stereotypes, identification, and group compatibility) as associative connections. The model builds on two cognitive principles, balance–congruity and imbalance–dissonance, to predict identification with social groups that people currently belong to, belonged to in the past, or newly belong to. More precisely, the model suggests that the relative strength of self-group associations (i.e., identification) depends in part on the (in)compatibility of the different social groups. Combining insights into cognitive representation of knowledge, intergroup bias, and explicit/implicit attitude change, we further derive predictions for intergroup attitudes. We suggest that intergroup attitudes alter depending on the relative associative strength between the social groups and the self, which in turn is determined by the (in)compatibility between social groups. This model unifies existing models on the integration of social identities into the self-concept by suggesting that basic cognitive mechanisms play an important role in facilitating or hindering identity integration and thus contribute to reducing or increasing intergroup bias.

  1. Group Membership, Group Change, and Intergroup Attitudes: A Recategorization Model Based on Cognitive Consistency Principles.

    Science.gov (United States)

    Roth, Jenny; Steffens, Melanie C; Vignoles, Vivian L

    2018-01-01

    The present article introduces a model based on cognitive consistency principles to predict how new identities become integrated into the self-concept, with consequences for intergroup attitudes. The model specifies four concepts (self-concept, stereotypes, identification, and group compatibility) as associative connections. The model builds on two cognitive principles, balance-congruity and imbalance-dissonance, to predict identification with social groups that people currently belong to, belonged to in the past, or newly belong to. More precisely, the model suggests that the relative strength of self-group associations (i.e., identification) depends in part on the (in)compatibility of the different social groups. Combining insights into cognitive representation of knowledge, intergroup bias, and explicit/implicit attitude change, we further derive predictions for intergroup attitudes. We suggest that intergroup attitudes alter depending on the relative associative strength between the social groups and the self, which in turn is determined by the (in)compatibility between social groups. This model unifies existing models on the integration of social identities into the self-concept by suggesting that basic cognitive mechanisms play an important role in facilitating or hindering identity integration and thus contribute to reducing or increasing intergroup bias.

  2. Large scale vibration tests on pile-group effects using blast-induced ground motion

    International Nuclear Information System (INIS)

    Katsuichirou Hijikata; Hideo Tanaka; Takayuki Hashimoto; Kazushige Fujiwara; Yuji Miyamoto; Osamu Kontani

    2005-01-01

    Extensive vibration tests have been performed on pile-supported structures at a large-scale mining site. Ground motions induced by large-scale blasting operations were used as excitation forces for vibration tests. The main objective of this research is to investigate the dynamic behavior of pile-supported structures, in particular, pile-group effects. Two test structures were constructed in an excavated 4 m deep pit. The superstructures of the two were exactly the same; one structure had 25 steel piles and the other had 4 piles. The test pit was backfilled with sand of appropriate grain size distributions to obtain good compaction, especially between the 25 piles. Accelerations were measured at the structures, in the test pit and in the adjacent free field, and pile strains were measured. Dynamic modal tests of the pile-supported structures and PS measurements of the test pit were performed before and after the vibration tests to detect changes in the natural frequencies of the soil-pile-structure systems and in the soil stiffness. The vibration tests were performed six times with different levels of input motions. The maximum horizontal acceleration recorded at the adjacent ground surface varied from 57 cm/s² to 1,683 cm/s² according to the distances between the test site and the blast areas. (authors)

  3. Vibration tests on pile-group foundations using large-scale blast excitation

    International Nuclear Information System (INIS)

    Tanaka, Hideo; Hijikata, Katsuichirou; Hashimoto, Takayuki; Fujiwara, Kazushige; Kontani, Osamu; Miyamoto, Yuji; Suzuki, Atsushi

    2005-01-01

    Extensive vibration tests have been performed on pile-supported structures at a large-scale mining site. Ground motions induced by large-scale blasting operations were used as excitation forces for vibration tests. The main objective of this research is to investigate the dynamic behavior of pile-supported structures, in particular, pile-group effects. Two test structures were constructed in an excavated 4 m deep pit. One structure had 25 steel tubular piles and the other had 4 piles. The super-structures were exactly the same. The test pit was backfilled with sand of appropriate grain size distributions in order to obtain good compaction, especially between the 25 piles. Accelerations were measured at the structures, in the test pit and in the adjacent free field, and pile strains were measured. The vibration tests were performed six times with different levels of input motions. The maximum horizontal acceleration recorded at the adjacent ground surface varied from 57 cm/s² to 1,683 cm/s² according to the distances between the test site and the blast areas. Maximum strains of 13,400 micro-strains were recorded at the pile top of the 4-pile structure, which means that these piles were subjected to yielding.

  4. Long-term resource variation and group size: A large-sample field test of the Resource Dispersion Hypothesis

    Directory of Open Access Journals (Sweden)

    Morecroft Michael D

    2001-07-01

    Full Text Available Abstract Background The Resource Dispersion Hypothesis (RDH) proposes a mechanism for the passive formation of social groups where resources are dispersed, even in the absence of any benefits of group living per se. Despite supportive modelling, it lacks empirical testing. The RDH predicts that, rather than Territory Size (TS) increasing monotonically with Group Size (GS) to account for increasing metabolic needs, TS is constrained by the dispersion of resource patches, whereas GS is independently limited by their richness. We conducted multiple-year tests of these predictions using data from the long-term study of badgers Meles meles in Wytham Woods, England. The study has long failed to identify direct benefits from group living and, consequently, alternative explanations for their large group sizes have been sought. Results TS was not consistently related to resource dispersion, nor was GS consistently related to resource richness. Results differed according to data groupings and whether territories were mapped using minimum convex polygons or traditional methods. Habitats differed significantly in resource availability, but there was also evidence that food resources may be spatially aggregated within habitat types as well as between them. Conclusions This is, we believe, the largest ever test of the RDH and builds on the long-term project that initiated part of the thinking behind the hypothesis. Support for the predictions was mixed and depended on the year and the method used to map territory borders. We suggest that within-habitat patchiness, as well as model assumptions, should be further investigated for improved tests of the RDH in the future.

  5. Psychotherapy with schizophrenics in team groups: a systems model.

    Science.gov (United States)

    Beeber, A R

    1991-01-01

    This paper focuses on the treatment of patients with schizophrenic disorders employing the Team Group model. The advantages and disadvantages of the Team Group are presented. Systems theory and principles of group development are applied as a basis for understanding the dynamics of the group in the context of the acute psychiatric unit. Particular problems encountered in treating patients with schizophrenic disorders in this setting are presented. These include: (1) issues of therapist style and technique, (2) basic psychopathology of the schizophrenic disorders, and (3) phase-specific problems associated with the dynamics of the group. Recommendations are made for therapist interventions that may better integrate these patients into the Team Group.

  6. Finite Mixture Multilevel Multidimensional Ordinal IRT Models for Large Scale Cross-Cultural Research

    Science.gov (United States)

    de Jong, Martijn G.; Steenkamp, Jan-Benedict E. M.

    2010-01-01

    We present a class of finite mixture multilevel multidimensional ordinal IRT models for large scale cross-cultural research. Our model is proposed for confirmatory research settings. Our prior for item parameters is a mixture distribution to accommodate situations where different groups of countries have different measurement operations, while…

  7. Quantifying fish escape behaviour through large mesh panels in trawls based on catch comparison data – model development and a case study from Skagerrak. In: ICES (2012) Report of the ICES-FAO Working Group on Fishing Gear Technology and Fish Behaviour (WGFTFB), 23-27 April 2012, Lorient, France

    DEFF Research Database (Denmark)

    Krag, Ludvig Ahm; Herrmann, Bent; Karlsen, Junita

    Based on catch comparison data, it is demonstrated how detailed and quantitative information about species-specific and size dependent escape behaviour in relation to a large mesh panel can be extracted. A new analytical model is developed, applied, and compared to the traditional modelling appro...

  8. An improved large signal model of InP HEMTs

    Science.gov (United States)

    Li, Tianhao; Li, Wenjun; Liu, Jun

    2018-05-01

    An improved large signal model for InP HEMTs is proposed in this paper. The channel current and charge model equations are constructed based on the Angelov model equations. Both the channel current and gate charge model equations are continuous and high-order differentiable, and the proposed gate charge model satisfies charge conservation. To capture the strong leakage-induced barrier reduction effect of InP HEMTs, the Angelov current model equations are improved so that the channel current model can fit the DC performance of the devices. A 2 × 25 μm × 70 nm InP HEMT device is used to demonstrate the extraction and validation of the model, which predicts the DC I–V, C–V and bias-related S-parameters accurately. Project supported by the National Natural Science Foundation of China (No. 61331006).

  9. Group size, grooming and fission in primates: a modeling approach based on group structure.

    Science.gov (United States)

    Sueur, Cédric; Deneubourg, Jean-Louis; Petit, Odile; Couzin, Iain D

    2011-03-21

    In social animals, fission is a common mode of group proliferation and dispersion and may be affected by genetic or other social factors. Sociality implies preserving relationships between group members. An increase in group size and/or in competition for food within the group can result in a decrease in certain social interactions between members, and the group may split irreversibly as a consequence. An individual may try to maintain bonds with a maximum of group members in order to preserve group cohesion, i.e. proximity and stable relationships. However, this strategy takes time, and time is often limited. In addition, previous studies have shown that, whatever the group size, an individual interacts with only a limited set of grooming partners. Here, we develop a computational model to assess how the dynamics of group cohesion are related to group size and to the structure of grooming relationships. Group sizes after simulated fission are compared to the observed sizes of 40 groups of primates. Results showed that the relationship between grooming time and group size depends on how each individual allocates grooming time to its social partners, i.e. whether it grooms a small number of preferred partners or grooms all partners equally or unequally. The number of partners seemed to be more important for group cohesion than grooming time itself. This structural constraint has important consequences for group sociality, since it opens the possibility of competition for grooming partners and attraction to high-ranking individuals, as found in primate groups. It could, however, also have implications when considering the cognitive capacities of primates. Copyright © 2010 Elsevier Ltd. All rights reserved.

  10. How the group affects the mind : A cognitive model of idea generation in groups

    NARCIS (Netherlands)

    Nijstad, Bernard A.; Stroebe, Wolfgang

    2006-01-01

    A model called search for ideas in associative memory (SIAM) is proposed to account for various research findings in the area of group idea generation. The model assumes that idea generation is a repeated search for ideas in associative memory, which proceeds in 2 stages (knowledge activation and

  11. Active Exploration of Large 3D Model Repositories.

    Science.gov (United States)

    Gao, Lin; Cao, Yan-Pei; Lai, Yu-Kun; Huang, Hao-Zhi; Kobbelt, Leif; Hu, Shi-Min

    2015-12-01

    With broader availability of large-scale 3D model repositories, the need for efficient and effective exploration becomes more and more urgent. Existing model retrieval techniques do not scale well with the size of the database since often a large number of very similar objects are returned for a query, and the possibilities to refine the search are quite limited. We propose an interactive approach where the user feeds an active learning procedure by labeling either entire models or parts of them as "like" or "dislike" such that the system can automatically update an active set of recommended models. To provide an intuitive user interface, candidate models are presented based on their estimated relevance for the current query. From the methodological point of view, our main contribution is to exploit not only the similarity between a query and the database models but also the similarities among the database models themselves. We achieve this by an offline pre-processing stage, where global and local shape descriptors are computed for each model and a sparse distance metric is derived that can be evaluated efficiently even for very large databases. We demonstrate the effectiveness of our method by interactively exploring a repository containing over 100 K models.

  12. Modeling and Simulation Techniques for Large-Scale Communications Modeling

    National Research Council Canada - National Science Library

    Webb, Steve

    1997-01-01

    .... Tests of random number generators were also developed and applied to CECOM models. It was found that synchronization of random number strings in simulations is easy to implement and can provide significant savings for making comparative studies. If synchronization is in place, then statistical experiment design can be used to provide information on the sensitivity of the output to input parameters. The report concludes with recommendations and an implementation plan.

  13. Therapeutic Enactment: Integrating Individual and Group Counseling Models for Change

    Science.gov (United States)

    Westwood, Marvin J.; Keats, Patrice A.; Wilensky, Patricia

    2003-01-01

    The purpose of this article is to introduce the reader to a group-based therapy model known as therapeutic enactment. A description of this multimodal change model is provided by outlining the relevant background information, key concepts related to specific change processes, and the differences in this model compared to earlier psychodrama…

  14. Estimation and Inference for Very Large Linear Mixed Effects Models

    OpenAIRE

    Gao, K.; Owen, A. B.

    2016-01-01

    Linear mixed models with large imbalanced crossed random effects structures pose severe computational problems for maximum likelihood estimation and for Bayesian analysis. The costs can grow as fast as $N^{3/2}$ when there are $N$ observations. Such problems arise in any setting where the underlying factors satisfy a many-to-many relationship (instead of a nested one); in electronic commerce applications, $N$ can be quite large. Methods that do not account for the correlation structure can...

  15. Bayesian hierarchical model for large-scale covariance matrix estimation.

    Science.gov (United States)

    Zhu, Dongxiao; Hero, Alfred O

    2007-12-01

    Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and OMICS data analysis.
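    The overfitting problem this record addresses, where a sample covariance estimated from few observations has high variance, is commonly mitigated by regularizing toward a simple target. The sketch below shows the plain shrinkage-toward-diagonal idea only, as context for the record; the paper's Bayesian hierarchical model is more elaborate, and the fixed shrinkage weight `alpha` here is an invented illustration, not a posterior estimate.

```python
def sample_cov(X):
    """Unbiased sample covariance of data X (rows are observations)."""
    n, p = len(X), len(X[0])
    mean = [sum(col) / n for col in zip(*X)]
    C = [[0.0] * p for _ in range(p)]
    for row in X:
        d = [row[j] - mean[j] for j in range(p)]
        for a in range(p):
            for b in range(p):
                C[a][b] += d[a] * d[b] / (n - 1)
    return C

def shrink_cov(C, alpha=0.2):
    """Shrink off-diagonal entries toward zero: (1 - alpha) * C + alpha * diag(C).
    Reduces estimator variance at the cost of some bias."""
    p = len(C)
    return [[C[a][b] if a == b else (1 - alpha) * C[a][b] for b in range(p)]
            for a in range(p)]
```

In practice the shrinkage intensity would be chosen from the data (e.g. by cross-validation or, as in this record, from a hierarchical prior) rather than fixed.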

  16. An Automatic User Grouping Model for a Group Recommender System in Location-Based Social Networks

    Directory of Open Access Journals (Sweden)

    Elahe Khazaei

    2018-02-01

    Spatial group recommendation refers to suggesting places to a given set of users. In a group recommender system, members of a group should have similar preferences in order to increase the level of satisfaction. Location-based social networks (LBSNs) provide rich content, such as user interactions and location/event descriptions, which can be leveraged for group recommendations. In this paper, an automatic user grouping model is introduced that obtains information about users and their preferences through an LBSN. The preferences of the users, proximity of the places the users have visited in terms of spatial range, users’ free days, and the social relationships among users are extracted automatically from location histories and users’ profiles in the LBSN. These factors are combined to determine the similarities among users. The users are partitioned into groups based on these similarities. Group size is the key to coordinating group members and enhancing their satisfaction. Therefore, a modified k-medoids method is developed to cluster users into groups with specific sizes. To evaluate the efficiency of the proposed method, its mean intra-cluster distance and its distribution of cluster sizes are compared to those of general clustering algorithms. The results reveal that the proposed method compares favourably with general clustering approaches, such as k-medoids and spectral clustering, in separating users into groups of a specific size with a lower mean intra-cluster distance.
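    The size-constrained clustering step described in this record can be illustrated with a greedy capacity-constrained assignment: medoids are given, and each remaining user joins the nearest medoid that still has room. This is a hedged toy stand-in, not the article's modified k-medoids; the assignment order and the fixed medoid set are simplifying assumptions.

```python
def sized_kmedoids(dist, medoids, size):
    """Assign each point to the nearest medoid with remaining capacity.

    dist[i][j]: pairwise distance matrix; medoids: chosen medoid indices;
    size: target group size (capacity per medoid, including the medoid)."""
    n = len(dist)
    groups = {m: [m] for m in medoids}
    rest = [i for i in range(n) if i not in medoids]
    # Assign points that are closest to some medoid first, so strongly
    # matched points are not crowded out by capacity limits.
    rest.sort(key=lambda i: min(dist[i][m] for m in medoids))
    for i in rest:
        for m in sorted(medoids, key=lambda m: dist[i][m]):
            if len(groups[m]) < size:
                groups[m].append(i)
                break
    return groups
```

A full method would also iterate medoid updates, as k-medoids does; this sketch isolates only the fixed-size assignment aspect.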

  17. Using an electronic prescribing system to ensure accurate medication lists in a large multidisciplinary medical group.

    Science.gov (United States)

    Stock, Ron; Scott, Jim; Gurtel, Sharon

    2009-05-01

    Although medication safety has largely focused on reducing medication errors in hospitals, the scope of adverse drug events in the outpatient setting is immense. A fundamental problem occurs when a clinician lacks immediate access to an accurate list of the medications that a patient is taking. Since 2001, PeaceHealth Medical Group (PHMG), a multispecialty physician group, has been using an electronic prescribing system that includes medication-interaction warnings and allergy checks. Yet, most practitioners recognized the remaining potential for error, especially because there was no assurance regarding the accuracy of information on the electronic medical record (EMR)-generated medication list. PeaceHealth developed and implemented a standardized approach to (1) review and reconcile the medication list for every patient at each office visit and (2) report on the results obtained within the PHMG clinics. In 2005, PeaceHealth established the ambulatory medication reconciliation project to develop a reliable, efficient process for maintaining accurate patient medication lists. Each of PeaceHealth's five regions created a medication reconciliation task force to redesign its clinical practice, incorporating the systemwide aims and agreed-on key process components for every ambulatory visit. Implementation of the medication reconciliation process at the PHMG clinics resulted in a substantial increase in the number of accurate medication lists, with fewer discrepancies between what the patient is actually taking and what is recorded in the EMR. The PeaceHealth focus on patient safety, and particularly the reduction of medication errors, has involved a standardized approach for reviewing and reconciling medication lists for every patient visiting a physician office. The standardized processes can be replicated at other ambulatory clinics, whether or not electronic tools are available.

  18. The Beyond the standard model working group: Summary report

    Energy Technology Data Exchange (ETDEWEB)

    G. Azuelos et al.

    2004-03-18

    In this working group we have investigated a number of aspects of searches for new physics beyond the Standard Model (SM) at the running or planned TeV-scale colliders. For the most part, we have considered hadron colliders, as they will define particle physics at the energy frontier for the next ten years at least. The variety of models for Beyond the Standard Model (BSM) physics has grown immensely. It is clear that only future experiments can provide the needed direction to clarify the correct theory. Thus, our focus has been on exploring the extent to which hadron colliders can discover and study BSM physics in various models. We have placed special emphasis on scenarios in which the new signal might be difficult to find or of a very unexpected nature. For example, in the context of supersymmetry (SUSY), we have considered: how to make fully precise predictions for the Higgs bosons as well as the superparticles of the Minimal Supersymmetric Standard Model (MSSM) (parts III and IV); MSSM scenarios in which most or all SUSY particles have rather large masses (parts V and VI); the ability to sort out the many parameters of the MSSM using a variety of signals and study channels (part VII); whether the no-lose theorem for MSSM Higgs discovery can be extended to the next-to-minimal Supersymmetric Standard Model (NMSSM) in which an additional singlet superfield is added to the minimal collection of superfields, potentially providing a natural explanation of the electroweak value of the parameter μ (part VIII); sorting out the effects of CP violation using Higgs plus squark associate production (part IX); the impact of lepton flavor violation of various kinds (part X); experimental possibilities for the gravitino and its sgoldstino partner (part XI); what the implications for SUSY would be if the NuTeV signal for di-muon events were interpreted as a sign of R-parity violation (part XII). Our other main focus was on the phenomenological implications of extra

  19. Disinformative data in large-scale hydrological modelling

    Directory of Open Access Journals (Sweden)

    A. Kauffeldt

    2013-07-01

    Large-scale hydrological modelling has become an important tool for the study of global and regional water resources, climate impacts, and water-resources management. However, modelling efforts over large spatial domains are fraught with problems of data scarcity, uncertainties and inconsistencies between model forcing and evaluation data. Model-independent methods to screen and analyse data for such problems are needed. This study aimed at identifying data inconsistencies in global datasets using a pre-modelling analysis, inconsistencies that can be disinformative for subsequent modelling. The consistency between (i) basin areas for different hydrographic datasets, and (ii) climate data (precipitation and potential evaporation) and discharge data, was examined in terms of how well basin areas were represented in the flow networks and the possibility of water-balance closure. It was found that (i) most basins could be well represented in both gridded basin delineations and polygon-based ones, but some basins exhibited large area discrepancies between flow-network datasets and archived basin areas, (ii) basins exhibiting too-high runoff coefficients were abundant in areas where precipitation data were likely affected by snow undercatch, and (iii) the occurrence of basins exhibiting losses exceeding the potential-evaporation limit was strongly dependent on the potential-evaporation data, both in terms of numbers and geographical distribution. Some inconsistencies may be resolved by considering sub-grid variability in climate data, surface-dependent potential-evaporation estimates, etc., but further studies are needed to determine the reasons for the inconsistencies found. Our results emphasise the need for pre-modelling data analysis to identify dataset inconsistencies as an important first step in any large-scale study. Applying data-screening methods before modelling should also increase our chances to draw robust conclusions from subsequent

  20. Including investment risk in large-scale power market models

    DEFF Research Database (Denmark)

    Lemming, Jørgen Kjærgaard; Meibom, P.

    2003-01-01

    Long-term energy market models can be used to examine investments in production technologies; however, with market liberalisation it is crucial that such models include investment risks and investor behaviour. This paper analyses how the effect of investment risk on production technology selection can be included in large-scale partial equilibrium models of the power market. The analyses are divided into a part about risk measures appropriate for power market investors and a more technical part about the combination of a risk-adjustment model and a partial-equilibrium model. To illustrate the analyses quantitatively, a framework based on an iterative interaction between the equilibrium model and a separate risk-adjustment module was constructed. To illustrate the features of the proposed modelling approach we examined how uncertainty in demand and variable costs affects the optimal choice...
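    The risk-adjustment step this record couples to an equilibrium model can be illustrated, in its simplest form, by a mean-variance criterion for technology choice. The sketch below is an invented toy, assuming a single investor and a fixed risk-aversion coefficient; it does not reproduce the paper's iterative equilibrium/risk-module framework.

```python
def risk_adjusted_choice(techs, risk_aversion=0.5):
    """Pick the technology minimizing expected cost plus a variance penalty.

    techs: {name: (expected_cost, cost_variance)}. The mean-variance
    criterion is one common risk measure; the coefficient is illustrative.
    """
    def adjusted(item):
        name, (mean, var) = item
        return mean + risk_aversion * var
    return min(techs.items(), key=adjusted)[0]
```

In the paper's framework, such a risk adjustment would be recomputed each iteration as the equilibrium model updates prices and dispatch; here only the selection rule itself is shown.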

  1. Group-Based Active Learning of Classification Models.

    Science.gov (United States)

    Luo, Zhipeng; Hauskrecht, Milos

    2017-05-01

    Learning of classification models from real-world data often requires additional human expert effort to annotate the data. However, this process can be rather costly and finding ways of reducing the human annotation effort is critical for this task. The objective of this paper is to develop and study new ways of providing human feedback for efficient learning of classification models by labeling groups of examples. Briefly, unlike traditional active learning methods that seek feedback on individual examples, we develop a new group-based active learning framework that solicits label information on groups of multiple examples. In order to describe groups in a user-friendly way, conjunctive patterns are used to compactly represent groups. Our empirical study on 12 UCI data sets demonstrates the advantages and superiority of our approach over both classic instance-based active learning work, as well as existing group-based active-learning methods.
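    The core mechanism in this record, soliciting one label for a whole group of examples described by a conjunctive pattern, can be sketched directly. The attribute names, data, and pattern below are invented for illustration; the paper's method for choosing which group to query is not reproduced.

```python
def matches(example, pattern):
    """True if the example satisfies every attribute=value pair in the
    conjunctive pattern (the compact, user-friendly group description)."""
    return all(example.get(k) == v for k, v in pattern.items())

def label_group(unlabeled, pattern, label):
    """Apply a single expert-provided label to every unlabeled example
    matching the pattern: one group query instead of many instance queries."""
    labeled = {}
    for idx, ex in enumerate(unlabeled):
        if matches(ex, pattern):
            labeled[idx] = label
    return labeled
```

An active learner would then pick the next pattern whose matching group is both large and informative, which is where the method's real work lies.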

  2. The Beyond the Standard Model Working Group: Summary Report

    Energy Technology Data Exchange (ETDEWEB)

    Rizzo, Thomas G.

    2002-08-08

    Various theoretical aspects of physics beyond the Standard Model at hadron colliders are discussed. Our focus will be on those issues that most immediately impact the projects pursued as part of the BSM group at this meeting.

  3. Modelling comonotonic group-life under dependent decrement causes

    OpenAIRE

    Wang, Dabuxilatu

    2011-01-01

    Comonotonicity is an extreme case of dependence between random variables. This article considers an extension of the single-life model under multiple dependent decrement causes to the case of comonotonic group-life.

  4. Veal calves’ clinical/health status in large groups fed with automatic feeding devices

    Directory of Open Access Journals (Sweden)

    Giulio Cozzi

    2010-01-01

    The aim of the current study was to evaluate the clinical/health status of veal calves on 3 farms in Italy that adopt large-group housing and automatic feeding stations. Visits were scheduled in three phases of the rearing cycle (early, middle, and end). Results showed a high incidence of coughing, skin infection and bloated rumen, particularly in the middle phase, while cross-sucking signs were present at the early stage, when calves’ nibbling proclivity is still high. Throughout the rearing cycle, the frequency of bursitis increased, reaching 53% of calves at the end. The percentage of calves with a poorer body condition than the mid-range of the batch rose gradually as well, likely due to the disproportionate teat-to-calf ratio, which increases competition for feed and reduces the milk intake of low-ranking animals. The marked growth differences among pen-mates and the mortality rate close to 7% observed with the use of automatic milk-feeding devices do not seem to be offset by the lower labour demand; the sustainability of this system in its present state is therefore doubtful, both for the veal calves’ welfare and for farm incomes.

  5. Large animal and primate models of spinal cord injury for the testing of novel therapies.

    Science.gov (United States)

    Kwon, Brian K; Streijger, Femke; Hill, Caitlin E; Anderson, Aileen J; Bacon, Mark; Beattie, Michael S; Blesch, Armin; Bradbury, Elizabeth J; Brown, Arthur; Bresnahan, Jacqueline C; Case, Casey C; Colburn, Raymond W; David, Samuel; Fawcett, James W; Ferguson, Adam R; Fischer, Itzhak; Floyd, Candace L; Gensel, John C; Houle, John D; Jakeman, Lyn B; Jeffery, Nick D; Jones, Linda Ann Truett; Kleitman, Naomi; Kocsis, Jeffery; Lu, Paul; Magnuson, David S K; Marsala, Martin; Moore, Simon W; Mothe, Andrea J; Oudega, Martin; Plant, Giles W; Rabchevsky, Alexander Sasha; Schwab, Jan M; Silver, Jerry; Steward, Oswald; Xu, Xiao-Ming; Guest, James D; Tetzlaff, Wolfram

    2015-07-01

    Large animal and primate models of spinal cord injury (SCI) are being increasingly utilized for the testing of novel therapies. While these represent intermediary animal species between rodents and humans and offer the opportunity to pose unique research questions prior to clinical trials, the role that such large animal and primate models should play in the translational pipeline is unclear. In this initiative we engaged members of the SCI research community in a questionnaire and round-table focus group discussion around the use of such models. Forty-one SCI researchers from academia, industry, and granting agencies were asked to complete a questionnaire about their opinion regarding the use of large animal and primate models in the context of testing novel therapeutics. The questions centered around how large animal and primate models of SCI would be best utilized in the spectrum of preclinical testing, and how much testing in rodent models was warranted before employing these models. Further questions were posed at a focus group meeting attended by the respondents. The group generally felt that large animal and primate models of SCI serve a potentially useful role in the translational pipeline for novel therapies, and that the rational use of these models would depend on the type of therapy and specific research question being addressed. While testing within these models should not be mandatory, the detection of beneficial effects using these models lends additional support for translating a therapy to humans. These models also provide an opportunity to evaluate and refine surgical procedures prior to use in humans, and to assess safety and bio-distribution in a spinal cord more similar in size and anatomy to that of humans. Our results reveal that while many feel that these models are valuable in the testing of novel therapies, important questions remain unanswered about how they should be used and how data derived from them should be interpreted.
Copyright © 2015 Elsevier

  6. Model Experiments for the Determination of Airflow in Large Spaces

    DEFF Research Database (Denmark)

    Nielsen, Peter V.

    Model experiments are one of the methods used for the determination of airflow in large spaces. This paper will discuss the formation of the governing dimensionless numbers. It is shown that experiments with a reduced scale often will necessitate a fully developed turbulence level of the flow. Details of the flow from supply openings are very important for the determination of room air distribution. It is in some cases possible to make a simplified supply opening for the model experiment.

  7. Mathematical modeling of large floating roof reservoir temperature arena

    Directory of Open Access Journals (Sweden)

    Liu Yang

    2018-03-01

    The current study simplifies the relevant components of a large floating roof tank and models the three-dimensional temperature field of the tank. The heat transfer involves exchange within the hot fluid in the oil tank, between the hot fluid and the tank wall, and between the tank wall and the external environment. A mathematical model of the heat transfer and flow of the oil simulates the temperature field of the oil in the tank. The oil temperature field of the large floating roof tank is obtained by numerical simulation, the dynamics of the central temperature over time are plotted, and the axial and radial temperature distributions of the storage tank are analysed. The location of the low-temperature region of the storage tank is determined from the temperature distribution. Finally, the calculated results are compared with field test data, and the calculation is thereby validated against the experimental results.

  8. Searches for phenomena beyond the Standard Model at the Large ...

    Indian Academy of Sciences (India)

    metry searches at the LHC is thus the channel with large missing transverse momentum and jets of high transverse momentum. No excess above the expected SM background is observed and limits are set on supersymmetric models. Figures 1 and 2 show the limits from ATLAS [11] and CMS [12]. In addition to setting limits ...

  9. A stochastic large deformation model for computational anatomy

    DEFF Research Database (Denmark)

    Arnaudon, Alexis; Holm, Darryl D.; Pai, Akshay Sadananda Uppinakudru

    2017-01-01

    In the study of shapes of human organs using computational anatomy, variations are found to arise from inter-subject anatomical differences, disease-specific effects, and measurement noise. This paper introduces a stochastic model for incorporating random variations into the Large Deformation...

  10. Solving large linear systems in an implicit thermohaline ocean model

    NARCIS (Netherlands)

    de Niet, Arie Christiaan

    2007-01-01

    The climate on earth is largely determined by the global ocean circulation. Hence it is important to predict how the flow will react to perturbation by for example melting icecaps. To answer questions about the stability of the global ocean flow, a computer model has been developed that is able to

  11. Penalized Estimation in Large-Scale Generalized Linear Array Models

    DEFF Research Database (Denmark)

    Lund, Adam; Vincent, Martin; Hansen, Niels Richard

    2017-01-01

    Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage of its tensor product design matrix can be impossible due to time and memory constraints, and previously considered design matrix free algorithms do not scale well with the dimension...

  12. Application of Logic Models in a Large Scientific Research Program

    Science.gov (United States)

    O'Keefe, Christine M.; Head, Richard J.

    2011-01-01

    It is the purpose of this article to discuss the development and application of a logic model in the context of a large scientific research program within the Commonwealth Scientific and Industrial Research Organisation (CSIRO). CSIRO is Australia's national science agency and is a publicly funded part of Australia's innovation system. It conducts…

  13. Large scale solar district heating. Evaluation, modelling and designing - Appendices

    Energy Technology Data Exchange (ETDEWEB)

    Heller, A.

    2000-07-01

    The appendices present the following: A) Cad-drawing of the Marstal CSHP design. B) Key values - large-scale solar heating in Denmark. C) Monitoring - a system description. D) WMO-classification of pyranometers (solarimeters). E) The computer simulation model in TRNSYS. F) Selected papers from the author. (EHS)

  14. Large p_T pion production and clustered parton model

    Energy Technology Data Exchange (ETDEWEB)

    Kanki, T [Osaka Univ., Toyonaka (Japan). Coll. of General Education

    1977-05-01

    Recent experimental results on large p_T inclusive π⁰ production in pp and πp collisions are interpreted in terms of a parton model in which the constituent quarks are defined to be clusters of quark-partons and gluons.

  15. Verifying large SDL-specifications using model checking

    NARCIS (Netherlands)

    Sidorova, N.; Steffen, M.; Reed, R.; Reed, J.

    2001-01-01

    In this paper we propose a methodology for model-checking based verification of large SDL specifications. The methodology is illustrated by a case study of an industrial medium-access protocol for wireless ATM. To cope with the state space explosion, the verification exploits the layered and modular

  16. Extending SME to Handle Large-Scale Cognitive Modeling.

    Science.gov (United States)

    Forbus, Kenneth D; Ferguson, Ronald W; Lovett, Andrew; Gentner, Dedre

    2017-07-01

    Analogy and similarity are central phenomena in human cognition, involved in processes ranging from visual perception to conceptual change. To capture this centrality requires that a model of comparison must be able to integrate with other processes and handle the size and complexity of the representations required by the tasks being modeled. This paper describes extensions to Structure-Mapping Engine (SME) since its inception in 1986 that have increased its scope of operation. We first review the basic SME algorithm, describe psychological evidence for SME as a process model, and summarize its role in simulating similarity-based retrieval and generalization. Then we describe five techniques now incorporated into SME that have enabled it to tackle large-scale modeling tasks: (a) Greedy merging rapidly constructs one or more best interpretations of a match in polynomial time: O(n^2 log n); (b) Incremental operation enables mappings to be extended as new information is retrieved or derived about the base or target, to model situations where information in a task is updated over time; (c) Ubiquitous predicates model the varying degrees to which items may suggest alignment; (d) Structural evaluation of analogical inferences models aspects of plausibility judgments; (e) Match filters enable large-scale task models to communicate constraints to SME to influence the mapping process. We illustrate via examples from published studies how these enable it to capture a broader range of psychological phenomena than before. Copyright © 2016 Cognitive Science Society, Inc.

  17. Modeling of 3D Aluminum Polycrystals during Large Deformations

    International Nuclear Information System (INIS)

    Maniatty, Antoinette M.; Littlewood, David J.; Lu Jing; Pyle, Devin

    2007-01-01

    An approach for generating, meshing, and modeling 3D polycrystals, with a focus on aluminum alloys, subjected to large deformation processes is presented. A Potts-type model is used to generate statistically representative grain structures with periodicity to allow scale-linking. The grain structures are compared to experimentally observed grain structures to validate that they are representative. A procedure for generating a geometric model from the voxel data is developed, allowing for adaptive meshing of the generated grain structure. Material behavior is governed by an appropriate crystal elasto-viscoplastic constitutive model. The elastic-viscoplastic model is implemented in a three-dimensional, finite deformation, mixed, finite element program. In order to handle the large-scale problems of interest, a parallel implementation is utilized. A multiscale procedure is used to link larger scale models of deformation processes to the polycrystal model, where periodic boundary conditions on the fluctuation field are enforced. Finite-element models of 3D polycrystal grain structures are presented along with observations made from these simulations.

  18. Particle production at large transverse momentum and hard collision models

    International Nuclear Information System (INIS)

    Ranft, G.; Ranft, J.

    1977-04-01

    The majority of the presently available experimental data is consistent with hard scattering models. Therefore the hard scattering model seems to be well established. There is good evidence for jets in large transverse momentum reactions as predicted by these models. The overall picture is however not yet well enough understood. We mention only the empirical hard scattering cross section introduced in most of the models, the lack of a deep theoretical understanding of the interplay between quark confinement and jet production, and the fact that we are not yet able to discriminate conclusively between the many proposed hard scattering models. The status of different hard collision models discussed in this paper is summarized. (author)

  19. Deciphering the crowd: modeling and identification of pedestrian group motion.

    Science.gov (United States)

    Yücel, Zeynep; Zanlungo, Francesco; Ikeda, Tetsushi; Miyashita, Takahiro; Hagita, Norihiro

    2013-01-14

    Associating attributes to pedestrians in a crowd is relevant for various areas like surveillance, customer profiling and service providing. The attributes of interest greatly depend on the application domain and might involve such social relations as friends or family as well as the hierarchy of the group including the leader or subordinates. Nevertheless, the complex social setting inherently complicates this task. We attack this problem by exploiting the small group structures in the crowd. The relations among individuals and their peers within a social group are reliable indicators of social attributes. To that end, this paper identifies social groups based on explicit motion models integrated through a hypothesis testing scheme. We develop two models relating positional and directional relations. A pair of pedestrians is identified as belonging to the same group or not by utilizing the two models in parallel, which defines a compound hypothesis testing scheme. By testing the proposed approach on three datasets with different environmental properties and group characteristics, it is demonstrated that we achieve an identification accuracy of 87% to 99%. The contribution of this study lies in its definition of positional and directional relation models, its description of compound evaluations, and the resolution of ambiguities with our proposed uncertainty measure based on the local and global indicators of group relation.
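    The compound test described here, running a positional model and a directional model in parallel over a pair of trajectories, can be sketched with two simple statistics: average inter-pedestrian distance and average heading misalignment. This is a hedged illustration of the idea only; the thresholds and the per-step statistics are invented, not the paper's calibrated models or its hypothesis-testing scheme.

```python
import math

def same_group(traj_a, traj_b, max_dist=1.5, max_angle=0.5):
    """Toy compound test: a pair is classified as one group if both the
    positional relation (small average distance) and the directional
    relation (aligned average headings, in radians) hold."""
    dists, angles = [], []
    steps = zip(zip(traj_a, traj_a[1:]), zip(traj_b, traj_b[1:]))
    for (a0, a1), (b0, b1) in steps:
        dists.append(math.dist(a1, b1))
        ha = math.atan2(a1[1] - a0[1], a1[0] - a0[0])
        hb = math.atan2(b1[1] - b0[1], b1[0] - b0[0])
        # wrap the heading difference into [-pi, pi] before taking |.|
        angles.append(abs(math.atan2(math.sin(ha - hb), math.cos(ha - hb))))
    return (sum(dists) / len(dists) <= max_dist
            and sum(angles) / len(angles) <= max_angle)
```

Requiring both relations to hold mirrors the paper's use of two models in parallel; a proper treatment would replace the fixed thresholds with statistical tests and an uncertainty measure.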

  20. Deciphering the Crowd: Modeling and Identification of Pedestrian Group Motion

    Directory of Open Access Journals (Sweden)

    Norihiro Hagita

    2013-01-01

    Associating attributes to pedestrians in a crowd is relevant for various areas like surveillance, customer profiling and service providing. The attributes of interest greatly depend on the application domain and might involve such social relations as friends or family as well as the hierarchy of the group including the leader or subordinates. Nevertheless, the complex social setting inherently complicates this task. We attack this problem by exploiting the small group structures in the crowd. The relations among individuals and their peers within a social group are reliable indicators of social attributes. To that end, this paper identifies social groups based on explicit motion models integrated through a hypothesis testing scheme. We develop two models relating positional and directional relations. A pair of pedestrians is identified as belonging to the same group or not by utilizing the two models in parallel, which defines a compound hypothesis testing scheme. By testing the proposed approach on three datasets with different environmental properties and group characteristics, it is demonstrated that we achieve an identification accuracy of 87% to 99%. The contribution of this study lies in its definition of positional and directional relation models, its description of compound evaluations, and the resolution of ambiguities with our proposed uncertainty measure based on the local and global indicators of group relation.

  1. Investigating the LGBTQ Responsive Model for Supervision of Group Work

    Science.gov (United States)

    Luke, Melissa; Goodrich, Kristopher M.

    2013-01-01

    This article reports an investigation of the LGBTQ Responsive Model for Supervision of Group Work, a trans-theoretical supervisory framework to address the needs of lesbian, gay, bisexual, transgender, and questioning (LGBTQ) persons (Goodrich & Luke, 2011). Findings partially supported applicability of the LGBTQ Responsive Model for Supervision…

  2. Pile group program for full material modeling and progressive failure.

    Science.gov (United States)

    2008-12-01

    Strain wedge (SW) model formulation has been used, in previous work, to evaluate the response of a single pile or a group of piles (including its : pile cap) in layered soils to lateral loading. The SW model approach provides appropriate prediction f...

  3. A Creative Therapies Model for the Group Supervision of Counsellors.

    Science.gov (United States)

    Wilkins, Paul

    1995-01-01

    Sets forth a model of group supervision, drawing on a creative therapies approach which provides an effective way of delivering process issues, conceptualization issues, and personalization issues. The model makes particular use of techniques drawn from art therapy and from psychodrama, and should be applicable to therapists of many orientations.…

  4. Loop groups, the Luttinger model, anyons, and Sutherland systems

    International Nuclear Information System (INIS)

    Langmann, E.; Carey, A.L.

    1998-01-01

    We discuss the representation theory of loop groups and examples of how it is used in physics. These examples include the construction and solution of the Luttinger model and other 1 + 1-dimensional interacting quantum field theories, the construction of anyon field operators on the circle, and the '2 nd quantization' of the Sutherland model using anyons

  5. What is special about the group of the standard model?

    International Nuclear Information System (INIS)

    Nielsen, H.B.; Brene, N.

    1989-03-01

    The standard model is based on the algebra of U(1)×SU(2)×SU(3). The systematics of charges of the fundamental fermions seems to suggest the importance of a particular group having this algebra, viz. S(U(2)×U(3)). This group is distinguished from all other connected compact non-semisimple groups with dimensionality up to 12 by a characteristic property: it is very 'skew'. By this we mean that the group has relatively few 'generalised outer automorphisms'. One may speculate about physical reasons for this fact. (orig.)

  6. What is special about the group of the standard model

    Energy Technology Data Exchange (ETDEWEB)

    Nielsen, H.B.; Brene, N.

    1989-06-15

    The standard model is based on the algebra of U(1)×SU(2)×SU(3). The systematics of charges of the fundamental fermions seems to suggest the importance of a particular group having this algebra, viz. S(U(2)×U(3)). This group is distinguished from all other connected compact non-semisimple groups with dimensionality up to 12 by a characteristic property: it is very 'skew'. By this we mean that the group has relatively few 'generalised outer automorphisms'. One may speculate about physical reasons for this fact. (orig.).

  7. What is special about the group of the standard model?

    Science.gov (United States)

    Nielsen, H. B.; Brene, N.

    1989-06-01

    The standard model is based on the algebra of U(1)×SU(2)×SU(3). The systematics of charges of the fundamental fermions seems to suggest the importance of a particular group having this algebra, viz. S(U(2)×U(3)). This group is distinguished from all other connected compact non-semisimple groups with dimensionality up to 12 by a characteristic property: it is very “skew”. By this we mean that the group has relatively few “generalised outer automorphisms”. One may speculate about physical reasons for this fact.

  8. Large deflection of viscoelastic beams using fractional derivative model

    International Nuclear Information System (INIS)

    Bahranini, Seyed Masoud Sotoodeh; Eghtesad, Mohammad; Ghavanloo, Esmaeal; Farid, Mehrdad

    2013-01-01

    This paper deals with large deflection of viscoelastic beams using a fractional derivative model. For this purpose, a nonlinear finite element formulation of viscoelastic beams in conjunction with the fractional derivative constitutive equations has been developed. The four-parameter fractional derivative model has been used to describe the constitutive equations. The deflected configuration for a uniform beam with different boundary conditions and loads is presented. The effect of the order of the fractional derivative on the large deflection of the cantilever viscoelastic beam is investigated after 10, 100, and 1000 hours. The main contribution of this paper is the finite element implementation for nonlinear analysis of the viscoelastic fractional model using the storage of both strain and stress histories. The validity of the present analysis is confirmed by comparing the results with those found in the literature.
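
The basic numerical ingredient of such models is the fractional derivative itself. The sketch below approximates it with the Grünwald-Letnikov scheme (a standard discretization, not the paper's finite element formulation) and checks it against the closed form D^{1/2} t = 2·sqrt(t/π):

```python
import numpy as np

def gl_fracderiv(f, t, alpha, h=1e-3):
    """Grunwald-Letnikov approximation of the fractional derivative
    D^alpha f evaluated at time t. Note the full history f(0)..f(t)
    enters the sum -- the 'storage of histories' the abstract mentions."""
    n = int(t / h)
    c = np.empty(n + 1)
    c[0] = 1.0
    for k in range(1, n + 1):      # (-1)^k * binom(alpha, k), by recursion
        c[k] = c[k - 1] * (k - 1 - alpha) / k
    samples = f(t - h * np.arange(n + 1))
    return (c @ samples) / h**alpha

# Known closed form: D^{1/2} t = 2*sqrt(t/pi)
approx = gl_fracderiv(lambda x: x, t=1.0, alpha=0.5)
exact = 2 / np.sqrt(np.pi)
print(abs(approx - exact) < 0.05)
```

The growing memory of the convolution is why fractional viscoelastic codes must store strain and stress histories rather than a single internal state.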

  9. Traffic assignment models in large-scale applications

    DEFF Research Database (Denmark)

    Rasmussen, Thomas Kjær

    the potential of the method proposed and the possibility to use individual-based GPS units for travel surveys in real-life large-scale multi-modal networks. Congestion is known to highly influence the way we act in the transportation network (and organise our lives), because of longer travel times...... of observations of actual behaviour to obtain estimates of the (monetary) value of different travel time components, thereby increasing the behavioural realism of large-scale models. The generation of choice sets is a vital component in route choice models. This is, however, not a straightforward task in real......, but the reliability of the travel time also has a large impact on our travel choices. Consequently, in order to improve the realism of transport models, correct understanding and representation of two values that are related to the value of time (VoT) are essential: (i) the value of congestion (VoC), as the Vo...

  10. A model of interaction between anticorruption authority and corruption groups

    International Nuclear Information System (INIS)

    Neverova, Elena G.; Malafeyef, Oleg A.

    2015-01-01

    The paper provides a model of interaction between an anticorruption unit and corruption groups. The main policy functions of the anticorruption unit involve reducing corrupt practices in some entities through an optimal approach to resource allocation and effective anticorruption policy. We develop a model based on a Markov decision process and use Howard’s policy-improvement algorithm to solve for an optimal decision strategy. We examine the assumption that corruption groups retaliate against the anticorruption authority to protect themselves. This model was implemented through a stochastic game.

  11. A model of interaction between anticorruption authority and corruption groups

    Energy Technology Data Exchange (ETDEWEB)

    Neverova, Elena G.; Malafeyef, Oleg A. [Saint-Petersburg State University, Saint-Petersburg, Russia, 35, Universitetskii prospekt, Petrodvorets, 198504 Email:elenaneverowa@gmail.com, malafeyevoa@mail.ru (Russian Federation)

    2015-03-10

    The paper provides a model of interaction between an anticorruption unit and corruption groups. The main policy functions of the anticorruption unit involve reducing corrupt practices in some entities through an optimal approach to resource allocation and effective anticorruption policy. We develop a model based on a Markov decision process and use Howard’s policy-improvement algorithm to solve for an optimal decision strategy. We examine the assumption that corruption groups retaliate against the anticorruption authority to protect themselves. This model was implemented through a stochastic game.
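
Howard's policy-improvement algorithm itself is compact: alternate exact policy evaluation with greedy improvement until the policy is stable. Below is a generic implementation on a hypothetical two-state enforcement MDP; the transition probabilities and costs are invented for illustration and are not from the paper.

```python
import numpy as np

# Toy 2-state, 2-action MDP (hypothetical numbers):
# states: 0 = low corruption, 1 = high corruption
# actions: 0 = light enforcement (cheap), 1 = heavy enforcement (costly)
P = np.array([                   # P[a, s, s'] transition probabilities
    [[0.80, 0.20], [0.30, 0.70]],   # light
    [[0.95, 0.05], [0.70, 0.30]],   # heavy
])
R = np.array([                   # R[a, s] expected reward (negative social cost)
    [0.0, -4.0],
    [-1.0, -5.0],
])
gamma = 0.9                      # discount factor

def policy_iteration(P, R, gamma):
    """Howard's algorithm: exact evaluation solves (I - gamma*P_pi) v = r_pi,
    then improvement takes the greedy action under the resulting values."""
    n = P.shape[1]
    pi = np.zeros(n, dtype=int)
    while True:
        P_pi = P[pi, np.arange(n)]
        r_pi = R[pi, np.arange(n)]
        v = np.linalg.solve(np.eye(n) - gamma * P_pi, r_pi)
        q = R + gamma * P @ v            # q[a, s]
        new_pi = q.argmax(axis=0)
        if np.array_equal(new_pi, pi):
            return pi, v
        pi = new_pi

pi, v = policy_iteration(P, R, gamma)
print(pi)   # optimal action per state
```

For these numbers the optimal policy is light enforcement in the low-corruption state and heavy enforcement in the high-corruption state; the stochastic-game extension would let the corruption groups choose retaliating actions too.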

  12. Research on large-scale wind farm modeling

    Science.gov (United States)

    Ma, Longfei; Zhang, Baoqun; Gong, Cheng; Jiao, Ran; Shi, Rui; Chi, Zhongjun; Ding, Yifeng

    2017-01-01

    Due to the intermittent and fluctuating nature of wind energy, a large-scale wind farm connected to the grid affects the power system quite differently from traditional power plants. It is therefore necessary to establish an effective wind farm model to simulate and analyze both the influence wind farms have on the grid and the transient characteristics of the wind turbines when the grid is at fault; this in turn requires an effective model of the wind turbine generators (WTGs). As the doubly-fed VSCF wind turbine is currently the mainstream design, this article first reviews the research progress on doubly-fed VSCF wind turbines and describes the detailed process of building the model. It then surveys common wind farm modeling methods and points out the problems encountered. Finally, since WAMS is widely deployed in power systems, online parameter identification of the wind farm model from the output characteristics of the wind farm becomes possible; the article focuses on interpreting this new idea of identification-based modeling of large wind farms, which can be realized by two concrete methods.
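
The identification idea can be shown in miniature: assume a (hypothetical) first-order aggregate farm model and recover its parameter from noisy measured output by least squares. A real identification-based wind farm model would involve far richer DFIG dynamics; this sketch only illustrates the workflow.

```python
import numpy as np

# Hypothetical aggregate model: P[t+1] = a*P[t] + (1-a)*P_wind[t].
# We identify `a` from WAMS-like noisy output measurements.
rng = np.random.default_rng(2)
a_true = 0.8
wind = rng.uniform(0, 1, 500)            # normalized available wind power
P = np.zeros(501)
for t in range(500):
    P[t + 1] = a_true * P[t] + (1 - a_true) * wind[t]
P_meas = P + 0.01 * rng.normal(size=501)  # measured farm output with noise

# Rearranged model: P[t+1] - wind[t] = a * (P[t] - wind[t])
# -> one-parameter least-squares fit.
x = P_meas[:-1] - wind
y = P_meas[1:] - wind
a_hat = (x @ y) / (x @ x)
print(round(a_hat, 2))
```

With low measurement noise the recovered parameter lands close to the true value 0.8, which is the essence of fitting an aggregate wind farm model to observed output rather than assembling it turbine by turbine.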

  13. Spatial associations between socioeconomic groups and NO2 air pollution exposure within three large Canadian cities.

    Science.gov (United States)

    Pinault, Lauren; Crouse, Daniel; Jerrett, Michael; Brauer, Michael; Tjepkema, Michael

    2016-05-01

    Previous studies of environmental justice in Canadian cities have linked lower socioeconomic status to greater air pollution exposures at coarse geographic scales (i.e., Census Tracts). However, studies that examine these associations at finer scales are less common, as are comparisons among cities. To assess differences in exposure to air pollution among socioeconomic groups, we assigned estimates of exposure to ambient nitrogen dioxide (NO2), a marker for traffic-related pollution, from city-wide land use regression models to respondents of the 2006 Canadian census long-form questionnaire in Toronto, Montreal, and Vancouver. Data were aggregated at a finer scale than in most previous studies (i.e., by Dissemination Area (DA), which includes approximately 400-700 persons). We developed simultaneous autoregressive (SAR) models, which account for spatial autocorrelation, to identify associations between NO2 exposure and indicators of social and material deprivation. In Canada's three largest cities, DAs with greater proportions of tenants and residents who do not speak either English or French were characterised by greater exposures to ambient NO2. We also observed positive associations between NO2 concentrations and indicators of social deprivation, including the proportion of persons living alone (in Toronto), and the proportion of persons who were unmarried/not in a common-law relationship (in Vancouver). Other common measures of deprivation (e.g., lone-parent families, unemployment) were not associated with NO2 exposures. DAs characterised by selected indicators of deprivation were associated with higher concentrations of ambient NO2 air pollution in the three largest cities in Canada. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.
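
SAR models are typically motivated by first detecting spatial autocorrelation in area-level values or regression residuals, e.g. with Moran's I. A toy sketch of that diagnostic (the weight matrix and values below are invented, not the study's census geography):

```python
import numpy as np

def morans_I(x, W):
    """Moran's I spatial autocorrelation statistic. Strongly positive I in
    residuals is the usual signal that an SAR specification is needed."""
    z = x - x.mean()
    n = len(x)
    return (n / W.sum()) * (z @ W @ z) / (z @ z)

# Four areas on a line; neighbors share an edge (binary contiguity weights).
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], float)
smooth = np.array([1.0, 2.0, 3.0, 4.0])        # spatially trending surface
alternating = np.array([1.0, 4.0, 1.0, 4.0])   # checkerboard surface
print(morans_I(smooth, W), morans_I(alternating, W))
```

The trending surface yields positive I (neighbors resemble each other, as NO2 surfaces do), while the checkerboard yields negative I; SAR models fold this neighbor structure into the error or lag term instead of ignoring it.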

  14. Does the interpersonal model apply across eating disorder diagnostic groups? A structural equation modeling approach.

    Science.gov (United States)

    Ivanova, Iryna V; Tasca, Giorgio A; Proulx, Geneviève; Bissada, Hany

    2015-11-01

    The interpersonal model has been validated with binge-eating disorder (BED), but it is not yet known whether the model applies across a range of eating disorders (ED). The goal of this study was to investigate the validity of the interpersonal model in anorexia nervosa (restricting type, ANR; binge-eating/purge type, ANBP), bulimia nervosa (BN), BED, and eating disorder not otherwise specified (EDNOS). Data from a cross-sectional sample of 1459 treatment-seeking women diagnosed with ANR, ANBP, BN, BED and EDNOS were examined for indirect effects of interpersonal problems on ED psychopathology mediated through negative affect. Findings from structural equation modeling demonstrated the mediating role of negative affect in four of the five diagnostic groups. There were significant, medium to large (.239 to .558) indirect effects in the ANR, BN, BED and EDNOS groups but not in the ANBP group. The results of the first reverse model, with interpersonal problems as a mediator between negative affect and ED psychopathology, were nonsignificant, suggesting the specificity of the hypothesized paths. However, in the second reverse model ED psychopathology was related to interpersonal problems indirectly through negative affect. This is the first study to find support for the interpersonal model of ED in a clinical sample of women with diverse ED diagnoses, though there may be a reciprocal relationship between ED psychopathology and relationship problems through negative affect. Negative affect partially explains the relationship between interpersonal problems and ED psychopathology in women diagnosed with ANR, BN, BED and EDNOS. Interpersonal psychotherapies for ED may be addressing the underlying interpersonal-affective difficulties, thereby reducing ED psychopathology. Copyright © 2015 Elsevier Inc. All rights reserved.
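
The indirect effect at the heart of such mediation models is the product of the X→M path and the M→Y path controlling for X. A toy sketch with simulated data (variable names and coefficients are hypothetical, not estimates from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# Simulated mediation chain: interpersonal problems -> negative affect -> ED.
interpersonal = rng.normal(size=n)                                     # X
negative_affect = 0.5 * interpersonal + 0.5 * rng.normal(size=n)       # M
ed_severity = 0.6 * negative_affect + 0.1 * interpersonal \
              + 0.5 * rng.normal(size=n)                               # Y

def slope(x, y):
    """OLS slope of y on a single centred predictor x."""
    x = x - x.mean()
    y = y - y.mean()
    return (x @ y) / (x @ x)

a = slope(interpersonal, negative_affect)       # X -> M path
# b: partial effect of M on Y controlling for X (two-predictor OLS).
X = np.column_stack([np.ones(n), interpersonal, negative_affect])
b = np.linalg.lstsq(X, ed_severity, rcond=None)[0][2]
indirect = a * b                                # close to the true 0.5*0.6 = 0.30
print(round(indirect, 2))
```

SEM packages estimate both paths simultaneously and bootstrap the product's confidence interval, but the a·b product is the same quantity reported as the "indirect effect" above.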

  15. New Pathways between Group Theory and Model Theory

    CERN Document Server

    Fuchs, László; Goldsmith, Brendan; Strüngmann, Lutz

    2017-01-01

    This volume focuses on group theory and model theory with a particular emphasis on the interplay of the two areas. The survey papers provide an overview of the developments across group, module, and model theory while the research papers present the most recent study in those same areas. With introductory sections that make the topics easily accessible to students, the papers in this volume will appeal to beginning graduate students and experienced researchers alike. As a whole, this book offers a cross-section view of the areas in group, module, and model theory, covering topics such as DP-minimal groups, Abelian groups, countable 1-transitive trees, and module approximations. The papers in this book are the proceedings of the conference “New Pathways between Group Theory and Model Theory,” which took place February 1-4, 2016, in Mülheim an der Ruhr, Germany, in honor of the editors’ colleague Rüdiger Göbel. This publication is dedicated to Professor Göbel, who passed away in 2014. He was one of th...

  16. A large deformation viscoelastic model for double-network hydrogels

    Science.gov (United States)

    Mao, Yunwei; Lin, Shaoting; Zhao, Xuanhe; Anand, Lallit

    2017-03-01

    We present a large deformation viscoelasticity model for recently synthesized double network hydrogels which consist of a covalently-crosslinked polyacrylamide network with long chains, and an ionically-crosslinked alginate network with short chains. Such double-network gels are highly stretchable and at the same time tough, because when stretched the crosslinks in the ionically-crosslinked alginate network rupture which results in distributed internal microdamage which dissipates a substantial amount of energy, while the configurational entropy of the covalently-crosslinked polyacrylamide network allows the gel to return to its original configuration after deformation. In addition to the large hysteresis during loading and unloading, these double network hydrogels also exhibit a substantial rate-sensitive response during loading, but exhibit almost no rate-sensitivity during unloading. These features of large hysteresis and asymmetric rate-sensitivity are quite different from the response of conventional hydrogels. We limit our attention to modeling the complex viscoelastic response of such hydrogels under isothermal conditions. Our model is restricted in the sense that we have limited our attention to conditions under which one might neglect any diffusion of the water in the hydrogel - as might occur when the gel has a uniform initial value of the concentration of water, and the mobility of the water molecules in the gel is low relative to the time scale of the mechanical deformation. We also do not attempt to model the final fracture of such double-network hydrogels.

  17. Global Bedload Flux Modeling and Analysis in Large Rivers

    Science.gov (United States)

    Islam, M. T.; Cohen, S.; Syvitski, J. P.

    2017-12-01

    Proper sediment transport quantification has long been an area of interest for both scientists and engineers in the fields of geomorphology, and management of rivers and coastal waters. Bedload flux is important for monitoring water quality and for sustainable development of coastal and marine bioservices. Bedload measurements, especially for large rivers, are extremely scarce across time, and many rivers have never been monitored. The scarcity of bedload measurements in rivers is particularly acute in developing countries, where changes in sediment yields are high. The paucity of bedload measurements is the result of 1) the nature of the problem (large spatial and temporal uncertainties), and 2) field costs including the time-consuming nature of the measurement procedures (repeated bedform migration tracking, bedload samplers). Here we present a first-of-its-kind methodology for calculating bedload in large global rivers (basins are >1,000 km). Evaluation of model skill is based on 113 bedload measurements. The model predictions are compared with an empirical model developed from the observational dataset in an attempt to evaluate the differences between a physically-based numerical model and a lumped relationship between bedload flux and fluvial and basin parameters (e.g., discharge, drainage area, lithology). The initial success of the study opens up various applications in global fluvial geomorphology (e.g. the relationship between suspended sediment (wash load) and bedload). Simulated results with known uncertainties offer a new research product as a valuable resource for the whole scientific community.
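
One classic building block of physically based bedload calculations is the Meyer-Peter and Mueller (1948) excess-shear relation. The sketch below is illustrative of that family of formulas and is not the specific model of this study:

```python
import numpy as np

def mpm_bedload(tau, d50, rho_s=2650.0, rho=1000.0, g=9.81):
    """Unit bedload flux (m^2/s) from the Meyer-Peter & Mueller relation:
    q_b* = 8 * (theta - 0.047)^1.5 above the threshold Shields stress.
    tau: bed shear stress [Pa]; d50: median grain size [m]."""
    R = rho_s / rho - 1.0                    # submerged specific gravity
    shields = tau / (rho * g * R * d50)      # dimensionless shear stress
    excess = max(shields - 0.047, 0.0)       # transport only above threshold
    qb_star = 8.0 * excess ** 1.5            # dimensionless flux
    return qb_star * np.sqrt(R * g * d50 ** 3)

# Higher shear stress moves more sediment; below threshold nothing moves.
print(mpm_bedload(tau=5.0, d50=0.001), mpm_bedload(tau=0.5, d50=0.001))
```

Large-scale models chain a hydraulic model (to estimate reach-scale shear stress from discharge and geometry) to such a transport closure, which is where the large spatial and temporal uncertainties mentioned above enter.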

  18. Resolving Microzooplankton Functional Groups In A Size-Structured Planktonic Model

    Science.gov (United States)

    Taniguchi, D.; Dutkiewicz, S.; Follows, M. J.; Jahn, O.; Menden-Deuer, S.

    2016-02-01

    Microzooplankton are important marine grazers, often consuming a large fraction of primary productivity. They consist of a great diversity of organisms with different behaviors, characteristics, and rates. This functional diversity, and its consequences, are not currently reflected in large-scale ocean ecological simulations. How should these organisms be represented, and what are the implications for their biogeography? We develop a size-structured, trait-based model to characterize a diversity of microzooplankton functional groups. We compile and examine size-based laboratory data on the traits, revealing some patterns with size and functional group that we interpret with mechanistic theory. Fitting the model to the data provides parameterizations of key rates and properties, which we employ in a numerical ocean model. The diversity of grazing preference, rates, and trophic strategies enables the coexistence of different functional groups of micro-grazers under various environmental conditions, and the model produces testable predictions of the biogeography.
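
The trait-based idea can be sketched as allometric scaling of grazing parameters with cell size combined with a saturating functional response. The coefficients below are hypothetical placeholders, not the fitted values of the study:

```python
import numpy as np

def grazing_rate(prey_conc, grazer_volume, a=10.0, b=-0.25, k=5.0):
    """Toy size-based grazing: maximum ingestion rate scales allometrically
    with grazer cell volume (g_max = a * V^b, hypothetical coefficients),
    and realized ingestion saturates with prey concentration (Holling II).
    prey_conc and k share arbitrary concentration units; result in 1/day."""
    g_max = a * grazer_volume ** b                 # size-scaled maximum rate
    return g_max * prey_conc / (k + prey_conc)     # saturating response

small, large = 10.0, 10_000.0                      # cell volumes (um^3)
print(grazing_rate(20.0, small), grazing_rate(20.0, large))
```

With a negative size exponent, small grazers ingest faster per unit biomass than large ones; giving each functional group its own fitted coefficients is what lets different micro-grazers coexist in different environments in such models.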

  19. Group Elevator Peak Scheduling Based on Robust Optimization Model

    Directory of Open Access Journals (Sweden)

    ZHANG, J.

    2013-08-01

    Full Text Available Scheduling of an Elevator Group Control System (EGCS) is a typical combinatorial optimization problem, and uncertain group scheduling under peak traffic flows has recently become a research focus and difficulty. Robust Optimization (RO) is a novel and effective way to deal with uncertain scheduling problems. In this paper, a peak scheduling method based on an RO model for a multi-elevator system is proposed. The method is immune to the uncertainty of peak traffic flows: optimal scheduling is realized without exact numbers of waiting passengers at each calling floor. Specifically, an energy-saving-oriented multi-objective scheduling price is proposed, and an uncertain RO peak scheduling model is built to minimize that price. Because the uncertain RO model cannot be solved directly, it is transformed into a certain model through elevator scheduling robust counterparts. Because the solution space of elevator scheduling is enormous, an ant colony algorithm is proposed to solve the certain model in a short time. Based on this algorithm, optimal scheduling solutions are found quickly and the elevator group is scheduled accordingly. Simulation results show the method effectively improves scheduling performance in the peak pattern, realizing efficient operation of the elevator group.
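
The key transformation, replacing an uncertain constraint by its certain robust counterpart, is easy to show for box uncertainty. The numbers below are hypothetical and the paper's counterpart covers its full multi-objective pricing model:

```python
# Box-uncertainty robust counterpart: the uncertain constraint
#   sum_i w_i * x_i <= C   for all w_i in [w_hat_i - d_i, w_hat_i + d_i]
# is equivalent (for x_i >= 0) to the single certain constraint
#   sum_i (w_hat_i + d_i) * x_i <= C,
# because the worst case takes every uncertain coefficient at its maximum.
w_hat = [4.0, 6.0, 3.0]   # nominal waiting passengers per calling floor
delta = [2.0, 1.0, 2.0]   # uncertainty half-widths (exact counts unknown)
C = 20.0                  # elevator capacity

def robust_feasible(x):
    """Certain counterpart of the uncertain capacity constraint."""
    return sum((w + d) * xi for w, d, xi in zip(w_hat, delta, x)) <= C

print(robust_feasible([1, 1, 1]), robust_feasible([2, 1, 1]))
```

An assignment feasible under the counterpart stays feasible for every realization of the passenger counts, which is exactly the "immune to uncertainty" property claimed above; the ant colony search then only ever visits such robust-feasible assignments.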

  20. Large Scale Community Detection Using a Small World Model

    Directory of Open Access Journals (Sweden)

    Ranjan Kumar Behera

    2017-11-01

    Full Text Available In a social network, small or large communities within the network play a major role in deciding its functionality. Despite diverse definitions, communities in a network may be defined as groups of nodes that are more densely connected to each other than to nodes outside the group. Revealing such hidden communities is a challenging research problem. A real-world social network follows the small-world phenomenon, which indicates that any two social entities can be reached in a small number of steps. In this paper, nodes are mapped into communities based on random walks in the network. However, uncovering communities in large-scale networks is a challenging task due to the unprecedented growth in the size of social networks. A good number of random-walk-based community detection algorithms exist in the literature, but when large-scale social networks are considered, these algorithms take considerably longer to run. In this work, with the objective of improving efficiency, a parallel programming framework, Map-Reduce, has been employed to uncover the hidden communities in a social network. The proposed approach has been compared with standard existing community detection algorithms on both synthetic and real-world datasets in order to examine its performance, and it is observed that the proposed algorithm is more efficient than the existing ones.
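
The random-walk intuition, that nodes in the same community see similar short-walk distributions, can be sketched on a toy graph. This is a simplified serial version of the idea, not the Map-Reduce algorithm of the paper:

```python
import numpy as np

# Two 3-cliques joined by one bridge edge: nodes {0,1,2} and {3,4,5}.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1

P = A / A.sum(axis=1, keepdims=True)   # random-walk transition matrix
Pt = np.linalg.matrix_power(P, 4)      # 4-step walk distribution per start node

# Walktrap-style clustering: assign each node to the seed whose t-step
# distribution is closest in L1 distance (seeds chosen by hand here).
seeds = [0, 5]
labels = [min(seeds, key=lambda s: np.abs(Pt[v] - Pt[s]).sum())
          for v in range(6)]
print(labels)   # nodes 0-2 follow seed 0, nodes 3-5 follow seed 5
```

A short walk started inside a dense clique tends to stay there, so t-step distributions separate the two cliques cleanly; the Map-Reduce formulation distributes exactly this kind of walk computation across machines for graphs with millions of nodes.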

  1. Precise MRI-based stereotaxic surgery in large animal models

    DEFF Research Database (Denmark)

    Glud, Andreas Nørgaard; Bech, Johannes; Tvilling, Laura

    BACKGROUND: Stereotaxic neurosurgery in large animals is used widely in different sophisticated models, where precision is becoming more crucial as desired anatomical target regions are becoming smaller. Individually calculated coordinates are necessary in large animal models with cortical and subcortical anatomical differences. NEW METHOD: We present a convenient method to make an MRI-visible skull fiducial for 3D MRI-based stereotaxic procedures in larger experimental animals. Plastic screws were filled with either copper-sulphate solution or MRI-visible paste from a commercially available cranial head marker. The screw fiducials were inserted in the animal skulls and T1-weighted MRI was performed, allowing identification of the inserted skull marker. RESULTS: Both types of fiducial markers were clearly visible on the MRIs. This allows high precision in the stereotaxic space. COMPARISON...

  2. Achieving 90% Adoption of Clinical Practice Guidelines Using the Delphi Consensus Method in a Large Orthopedic Group.

    Science.gov (United States)

    Bini, Stefano A; Mahajan, John

    2016-11-01

    Little is known about the implementation rate of clinical practice guidelines (CPGs). Our purpose was to report on the adoption rate of CPGs created and implemented by a large orthopedic group using the Delphi consensus method. The draft CPGs were created before the group's annual meeting by 5 teams each assigned a subset of topics. The draft guidelines included a statement and a summary of the available evidence. Each guideline was debated in both small-group and plenary sessions. Voting was anonymous and a 75% supermajority was required for passage. A Likert scale was used to survey the participants' experience with the process at 1 week, and the Kirkpatrick evaluation model was used to gauge the efficacy of the process over a 6-month time frame. Eighty-five orthopedic surgeons attended the meeting. Fifteen guidelines grouped into 5 topics were created. All passed. Eighty-six percent of attendees found the process effective and 84% felt that participating in the process made it more likely that they would adopt the guidelines. At 1 week, an average of 62% of attendees stated they were practicing the guideline as written (range: 35%-72%), and at 6 months, 96% stated they were practicing them (range: 82%-100%). We have demonstrated that a modified Delphi method for reaching consensus can be very effective in both creating CPGs and leading to their adoption. Further, we have shown that the process is well received by participants and that an inclusionary approach can be highly successful. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Wave propagation model of heat conduction and group speed

    Science.gov (United States)

    Zhang, Long; Zhang, Xiaomin; Peng, Song

    2018-03-01

    In view of the finite relaxation model of non-Fourier heat conduction, the Cattaneo-Vernotte (CV) model and Fourier's law are presented in this work for comparing wave propagation modes. Independent-variable translation is applied to solve the partial differential equation. Results show that the general form of the time-spatial distribution of temperature for the three media comprises two solutions: those corresponding to positive and negative logarithmic heating rates. The former shows that a group of heat waves whose spatial distribution follows an exponential law propagates at a group speed; the speed of propagation is related to the logarithmic heating rate, and the total speed of all possible heat waves can be combined to form the group speed of the wave propagation. The latter indicates that the spatial distribution of temperature, which follows the exponential law, decays with time. These features show that propagation accelerates when heated and decelerates when cooled. For the model media that follow Fourier's law and correspond to a positive heating rate, the propagation mode is also considered the propagation of a group of heat waves, because the group speed has no upper bound. For the finite relaxation model with a non-Fourier medium, the interval of group speed is bounded, and the maximum speed is obtained when the logarithmic heating rate is exactly the reciprocal of the relaxation time. For the CV model with a non-Fourier medium, the interval of group speed is also bounded, and the maximum value is obtained when the logarithmic heating rate is infinite.
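
The CV-model claims can be checked with a one-line traveling-wave substitution, writing α for the thermal diffusivity, τ for the relaxation time, and γ for the logarithmic heating rate (standard notation; the paper's symbols may differ):

```latex
\tau \frac{\partial^2 T}{\partial t^2} + \frac{\partial T}{\partial t}
   = \alpha \frac{\partial^2 T}{\partial x^2},
\qquad
T = T_0\, e^{\gamma\,(t - x/c)}
\;\Longrightarrow\;
\tau\gamma^2 + \gamma = \frac{\alpha\gamma^2}{c^2}
\;\Longrightarrow\;
c(\gamma) = \sqrt{\frac{\alpha\gamma}{1 + \tau\gamma}}.
```

Setting τ = 0 recovers Fourier's law with c = √(αγ), unbounded in the heating rate; for τ > 0 the speed increases monotonically toward the finite limit √(α/τ) as γ → ∞, consistent with the bounded group-speed interval and infinite-rate maximum described for the CV model above.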

  4. Mechanical test of the model coil wound with large conductor

    International Nuclear Information System (INIS)

    Hiue, Hisaaki; Sugimoto, Makoto; Nakajima, Hideo; Yasukawa, Yukio; Yoshida, Kiyoshi; Hasegawa, Mitsuru; Ito, Ikuo; Konno, Masayuki.

    1992-09-01

    High rigidity and strength of the winding pack are required to realize large superconducting magnets for fusion reactors. This paper describes mechanical tests concerning the rigidity of the winding pack. Samples were prepared to evaluate the adhesive strength between conductors and insulators. Epoxy and Bismaleimide-Triazine resin (BT resin) were used as the conductor insulator. Stainless steel (SS) 304 bars, whose surfaces were treated mechanically and chemically, were used as the model conductor. The model coil was wound with the model conductors covered with the insulator and finished with ground insulation. A winding model combining 3 x 3 conductors was produced for measuring shearing rigidity; the sample was loaded with pure shearing force at LN2 temperature. The bending rigidity of a bar winding sample of 8 x 6 conductors was measured in three-point bending tests at room temperature. The pancake winding sample was loaded with compressive force to measure the compressive rigidity of the winding. (author)

  5. Towards a 'standard model' of large scale structure formation

    International Nuclear Information System (INIS)

    Shafi, Q.

    1994-01-01

    We explore constraints on inflationary models employing data on large scale structure mainly from COBE temperature anisotropies and IRAS selected galaxy surveys. In models where the tensor contribution to the COBE signal is negligible, we find that the spectral index of density fluctuations n must exceed 0.7. Furthermore the COBE signal cannot be dominated by the tensor component, implying n > 0.85 in such models. The data favor cold plus hot dark matter models with n equal or close to unity and Ω_HDM ∼ 0.2-0.35. Realistic grand unified theories, including supersymmetric versions, which produce inflation with these properties are presented. (author). 46 refs, 8 figs

  6. Aero-Acoustic Modelling using Large Eddy Simulation

    International Nuclear Information System (INIS)

    Shen, W Z; Soerensen, J N

    2007-01-01

    The splitting technique for aero-acoustic computations is extended to simulate three-dimensional flow and acoustic waves from airfoils. The aero-acoustic model is coupled to a sub-grid-scale turbulence model for Large-Eddy Simulations. In the first test case, the model is applied to compute laminar flow past a NACA 0015 airfoil at a Reynolds number of 800, a Mach number of 0.2 and an angle of attack of 20 deg. The model is then applied to compute turbulent flow past a NACA 0015 airfoil at a Reynolds number of 100 000, a Mach number of 0.2 and an angle of attack of 20 deg. The predicted noise spectrum is compared to experimental data

  7. Perturbation theory instead of large scale shell model calculations

    International Nuclear Information System (INIS)

    Feldmeier, H.; Mankos, P.

    1977-01-01

    Results of large scale shell model calculations for (sd)-shell nuclei are compared with perturbation theory, which provides an excellent approximation when the SU(3)-basis is used as a starting point. The results indicate that a perturbation theory treatment in an SU(3)-basis including 2ħω excitations should be preferable to a full diagonalization within the (sd)-shell. (orig.) [de

  8. Field theory of large amplitude collective motion. A schematic model

    International Nuclear Information System (INIS)

    Reinhardt, H.

    1978-01-01

    By using path integral methods the equation for large amplitude collective motion for a schematic two-level model is derived. The original fermion theory is reformulated in terms of a collective (Bose) field. The classical equation of motion for the collective field coincides with the time-dependent Hartree-Fock equation. Its classical solution is quantized by means of the field-theoretical generalization of the WKB method. (author)

  9. Large urban fire environment: trends and model city predictions

    International Nuclear Information System (INIS)

    Larson, D.A.; Small, R.D.

    1983-01-01

    The urban fire environment that would result from a megaton-yield nuclear weapon burst is considered. The dependence of temperatures and velocities on fire size, burning intensity, turbulence, and radiation is explored, and specific calculations for three model urban areas are presented. In all cases, high velocity fire winds are predicted. The model-city results show the influence of building density and urban sprawl on the fire environment. Additional calculations consider large-area fires with the burning intensity reduced in a blast-damaged urban center

  10. ARMA modelling of neutron stochastic processes with large measurement noise

    International Nuclear Information System (INIS)

    Zavaljevski, N.; Kostic, Lj.; Pesic, M.

    1994-01-01

    An autoregressive moving average (ARMA) model of the neutron fluctuations with large measurement noise is derived from Langevin stochastic equations and validated using time series data obtained during prompt neutron decay constant measurements at the zero power reactor RB in Vinca. Model parameters are estimated using the maximum likelihood (ML) off-line algorithm and an adaptive pole estimation algorithm based on the recursive prediction error method (RPE). The results show that subcriticality can be determined from real data with high measurement noise using a much shorter statistical sample than in standard methods. (author)
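The reason an ARMA (rather than pure AR) structure is needed here can be illustrated with a minimal sketch: additive measurement noise on an AR(1) process biases the naive least-squares pole estimate toward zero. The process parameters and seeds below are illustrative, not the reactor data from the record.

```python
import random

def simulate_ar1(a, n, sigma_w=1.0, sigma_v=0.0, seed=1):
    """Simulate an AR(1) process x_t = a*x_{t-1} + w_t and return
    noisy observations y_t = x_t + v_t (v_t is measurement noise)."""
    rng = random.Random(seed)
    x, ys = 0.0, []
    for _ in range(n):
        x = a * x + rng.gauss(0.0, sigma_w)
        ys.append(x + rng.gauss(0.0, sigma_v))
    return ys

def ar1_pole_ls(y):
    """Naive least-squares estimate of the AR(1) pole from observations."""
    num = sum(y[t] * y[t - 1] for t in range(1, len(y)))
    den = sum(v * v for v in y[:-1])
    return num / den

clean = simulate_ar1(0.8, 20000, sigma_v=0.0)
noisy = simulate_ar1(0.8, 20000, sigma_v=2.0)
print(round(ar1_pole_ls(clean), 2))  # close to the true pole 0.8
print(round(ar1_pole_ls(noisy), 2))  # biased toward zero by the noise
```

An ARMA model absorbs the measurement noise into its moving-average part, which is why pole (and hence subcriticality) estimates remain usable under heavy noise.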

  11. Protein homology model refinement by large-scale energy optimization.

    Science.gov (United States)

    Park, Hahnbeom; Ovchinnikov, Sergey; Kim, David E; DiMaio, Frank; Baker, David

    2018-03-20

    Proteins fold to their lowest free-energy structures, and hence the most straightforward way to increase the accuracy of a partially incorrect protein structure model is to search for the lowest-energy nearby structure. This direct approach has met with little success for two reasons: first, energy function inaccuracies can lead to false energy minima, resulting in model degradation rather than improvement; and second, even with an accurate energy function, the search problem is formidable because the energy only drops considerably in the immediate vicinity of the global minimum, and there are a very large number of degrees of freedom. Here we describe a large-scale energy optimization-based refinement method that incorporates advances in both search and energy function accuracy that can substantially improve the accuracy of low-resolution homology models. The method refined low-resolution homology models into correct folds for 50 of 84 diverse protein families and generated improved models in recent blind structure prediction experiments. Analyses of the basis for these improvements reveal contributions from both the improvements in conformational sampling techniques and the energy function.

  12. Small groups, large profits: Calculating interest rates in community-managed microfinance

    DEFF Research Database (Denmark)

    Rasmussen, Ole Dahl

    2012-01-01

    Savings groups are a widely used strategy for women’s economic resilience – over 80% of members worldwide are women, and in the case described here, 72.5%. In these savings groups it is common to see the interest rate on savings reported as "20-30% annually". Using panel data from 204 groups...... in Malawi, I show that the true figure is likely to be at least double. For these groups, the annual return is 62%. The difference comes from sector-wide application of non-standard interest-rate calculations and unrealistic assumptions about the savings profile in the groups. As a result......, it is impossible to compare returns in savings groups with returns elsewhere. Moreover, the interest on savings is incomparable to the interest rate on loans. I argue for the use of a standardized comparable metric and suggest easy ways to implement it. Development of new tools and standards along these lines...
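The calculation pitfall the abstract describes can be sketched in a few lines. The numbers below are hypothetical (not the Malawi panel data): when members deposit gradually over the cycle, dividing end-of-cycle interest by the end-of-cycle balance understates the return on what was actually invested on average.

```python
def naive_rate(interest, end_balance):
    """Sector-common (non-standard) figure: interest over the
    end-of-cycle savings total."""
    return interest / end_balance

def actual_rate(interest, monthly_balances):
    """Return on the average balance actually invested over the cycle."""
    avg = sum(monthly_balances) / len(monthly_balances)
    return interest / avg

# Hypothetical 12-month cycle: 10 saved each month, 30 earned in interest.
balances = [10.0 * m for m in range(1, 13)]   # 10, 20, ..., 120
interest = 30.0
print(round(naive_rate(interest, balances[-1]), 2))  # 0.25
print(round(actual_rate(interest, balances), 2))     # 0.46
```

Because the average invested balance is roughly half the final balance, the honest annualized return is roughly double the naively reported one, which matches the direction of the correction the record argues for.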

  13. Homogenization of Large-Scale Movement Models in Ecology

    Science.gov (United States)

    Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.

    2011-01-01

    A difficulty in using diffusion models to predict large scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
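A toy illustration of what homogenization delivers in practice: the large-scale parameter is an average of the small-scale motilities, and for ecological diffusion the appropriate average is (to my understanding of this line of work; treat as an assumption, not the paper's derivation) harmonic rather than arithmetic, so slow-movement patches dominate. The habitat numbers are invented.

```python
def harmonic_mean(values):
    """Harmonic mean: assumed form of the homogenized (effective)
    large-scale motility for ecological diffusion (see lead-in)."""
    return len(values) / sum(1.0 / v for v in values)

def arithmetic_mean(values):
    return sum(values) / len(values)

# Hypothetical habitat mosaic: fast movement in open habitat,
# one slow patch of dense cover.
motility = [100.0, 100.0, 5.0, 100.0]
print(round(harmonic_mean(motility), 1))    # dominated by the slow patch
print(round(arithmetic_mean(motility), 1))  # much larger; would overestimate
```

The gap between the two averages is the quantitative content of "small-scale variability matters at the large scale": a single slow habitat type pulls the effective motility far below the naive average.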

  14. Multiresolution comparison of precipitation datasets for large-scale models

    Science.gov (United States)

    Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.

    2014-12-01

    Gridded precipitation datasets are crucial for driving the large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparisons between gridded precipitation products along with ground observations provide another avenue for investigating how the precipitation uncertainty would affect the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Center for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin plate spline smoothing algorithms (ANUSPLIN) and Canadian Precipitation Analysis (CaPA). Based on verification criteria for various temporal and spatial scales, results provide an assessment of possible applications for various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of their resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA provides appealing spatial coherence. In addition to the products comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.

  15. Utilization of Large Scale Surface Models for Detailed Visibility Analyses

    Science.gov (United States)

    Caha, J.; Kačmařík, M.

    2017-11-01

    This article demonstrates utilization of large scale surface models with small spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large scale data for visibility analyses on the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on the classic Boolean visibility that is usually determined within GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. The case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). The multiple Boolean viewshed analysis and global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. Besides that, the extended viewshed showing angle difference above the local horizon, which describes the angular height of the target area above the barrier, is shown. The case study proved that large scale models are an appropriate data source for visibility analyses on the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.
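The Boolean visibility the record mentions reduces, along any single ray, to a line-of-sight test against the terrain profile. A minimal one-dimensional sketch (illustrative heights and eye level, not the Opava data or any particular GIS implementation):

```python
def visible(heights, observer_idx, target_idx, eye=1.7):
    """Boolean line-of-sight test along a 1-D terrain profile: the
    target is visible if no intermediate cell rises above the straight
    sight line from the observer's eye level to the target."""
    x0, y0 = observer_idx, heights[observer_idx] + eye
    x1, y1 = target_idx, heights[target_idx]
    step = 1 if x1 > x0 else -1
    for x in range(x0 + step, x1, step):
        # Height of the sight line above cell x (linear interpolation).
        t = (x - x0) / (x1 - x0)
        if heights[x] > y0 + t * (y1 - y0):
            return False
    return True

profile = [100, 101, 108, 102, 103]   # a ridge at index 2
print(visible(profile, 0, 1))  # True
print(visible(profile, 0, 4))  # False: the ridge blocks the view
```

Extended viewsheds as described in the record keep more than the Boolean result, e.g. the angular height of the blocking horizon, which could be recovered here from the maximum elevation angle seen along the ray.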

  16. Hidden Markov models for the activity profile of terrorist groups

    OpenAIRE

    Raghavan, Vasanthan; Galstyan, Aram; Tartakovsky, Alexander G.

    2012-01-01

    The main focus of this work is on developing models for the activity profile of a terrorist group, detecting sudden spurts and downfalls in this profile, and, in general, tracking it over a period of time. Toward this goal, a $d$-state hidden Markov model (HMM) that captures the latent states underlying the dynamics of the group and thus its activity profile is developed. The simplest setting of $d=2$ corresponds to the case where the dynamics are coarsely quantized as Active and Inactive, re...
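The $d=2$ Active/Inactive setting can be sketched with the standard HMM forward recursion, which scores an observation sequence under the latent-state dynamics. All transition and emission probabilities below are invented for illustration, not the paper's fitted values.

```python
def hmm_forward(obs, pi, A, B):
    """Forward algorithm: likelihood of a discrete observation sequence
    under an HMM with initial probs pi, transitions A, emissions B."""
    alpha = [pi[s] * B[s][obs[0]] for s in range(len(pi))]
    for o in obs[1:]:
        alpha = [
            B[s][o] * sum(alpha[r] * A[r][s] for r in range(len(pi)))
            for s in range(len(pi))
        ]
    return sum(alpha)

# d = 2 states as in the simplest setting: 0 = Inactive, 1 = Active.
pi = [0.8, 0.2]
A = [[0.9, 0.1],    # Inactive tends to stay inactive
     [0.3, 0.7]]    # Active tends to stay active
B = [[0.95, 0.05],  # Inactive: attacks (obs = 1) are rare
     [0.40, 0.60]]  # Active: attacks are common
quiet = [0, 0, 0, 0]
spurt = [0, 1, 1, 1]
print(hmm_forward(quiet, pi, A, B) > hmm_forward(spurt, pi, A, B))  # True
```

Detecting "sudden spurts" then amounts to comparing, at each time step, the posterior weight on the Active state, which uses the same forward quantities.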

  17. Validating modeled turbulent heat fluxes across large freshwater surfaces

    Science.gov (United States)

    Lofgren, B. M.; Fujisaki-Manome, A.; Gronewold, A.; Anderson, E. J.; Fitzpatrick, L.; Blanken, P.; Spence, C.; Lenters, J. D.; Xiao, C.; Charusambot, U.

    2017-12-01

    Turbulent fluxes of latent and sensible heat are important physical processes that influence the energy and water budgets of the Great Lakes. Validation and improvement of bulk flux algorithms to simulate these turbulent heat fluxes are critical for accurate prediction of hydrodynamics, water levels, weather, and climate over the region. Here we consider five heat flux algorithms from several model systems: the Finite-Volume Community Ocean Model, the Weather Research and Forecasting model, and the Large Lake Thermodynamics Model, which are used in research and operational environments and concentrate on different aspects of the Great Lakes' physical system, but interface at the lake surface. The heat flux algorithms were isolated from each model and driven by meteorological data from over-lake stations in the Great Lakes Evaporation Network. The simulation results were compared with eddy covariance flux measurements at the same stations. All models show the capacity to capture the seasonal cycle of the turbulent heat fluxes. Overall, the Coupled Ocean Atmosphere Response Experiment algorithm in FVCOM has the best agreement with eddy covariance measurements. Simulations with the other four algorithms are overall improved by updating the parameterization of roughness length scales of temperature and humidity. Agreement between modelled and observed fluxes varied notably with the geographical location of the stations. For example, at the Long Point station in Lake Erie, observed fluxes are likely influenced by the upwind land surface while the simulations do not take account of the land surface influence, and therefore the agreement is worse in general.
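Bulk flux algorithms of the kind compared here estimate the turbulent fluxes from mean meteorological quantities via transfer coefficients. A minimal sketch with constant coefficients (real schemes such as COARE make the coefficients stability- and roughness-dependent; all numbers below are illustrative):

```python
def sensible_heat_flux(u, t_sfc, t_air, ch=1.3e-3, rho=1.2, cp=1004.0):
    """Bulk formula H = rho * cp * C_H * U * (T_s - T_a), in W m^-2.
    C_H is held constant here; operational schemes vary it with stability."""
    return rho * cp * ch * u * (t_sfc - t_air)

def latent_heat_flux(u, q_sfc, q_air, ce=1.3e-3, rho=1.2, lv=2.5e6):
    """Bulk formula LE = rho * Lv * C_E * U * (q_s - q_a), in W m^-2."""
    return rho * lv * ce * u * (q_sfc - q_air)

# Hypothetical autumn conditions over a Great Lake: warm water (8 C)
# under cold, dry air (0 C), 10 m/s wind, specific humidities in kg/kg.
h = sensible_heat_flux(10.0, 8.0, 0.0)
le = latent_heat_flux(10.0, 6.8e-3, 3.0e-3)
print(round(h, 1))   # sensible heat flux, ~125 W m^-2
print(round(le, 1))  # latent heat flux, ~148 W m^-2
```

The record's finding that results improve after "updating the parameterization of roughness length scales" corresponds to replacing the constant `ch`/`ce` above with stability-dependent functions of the roughness lengths for temperature and humidity.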

  18. Challenges of Modeling Flood Risk at Large Scales

    Science.gov (United States)

    Guin, J.; Simic, M.; Rowe, J.

    2009-04-01

    Flood risk management is a major concern for many nations and for the insurance sector in places where this peril is insured. A prerequisite for risk management, whether in the public sector or in the private sector, is an accurate estimation of the risk. Mitigation measures and traditional flood management techniques are most successful when the problem is viewed at a large regional scale such that all inter-dependencies in a river network are well understood. From an insurance perspective the jury is still out on whether flood is an insurable peril. However, with advances in modeling techniques and computer power it is possible to develop models that allow proper risk quantification at the scale suitable for a viable insurance market for flood peril. In order to serve the insurance market a model has to be event-simulation based and has to provide financial risk estimation that forms the basis for risk pricing, risk transfer and risk management at all levels of the insurance industry at large. In short, for a collection of properties, henceforth referred to as a portfolio, the critical output of the model is an annual probability distribution of economic losses from a single flood occurrence (flood event) or from an aggregation of all events in any given year. In this paper, the challenges of developing such a model are discussed in the context of Great Britain, for which a model has been developed. The model comprises several physically motivated components so that the primary attributes of the phenomenon are accounted for. The first component, the rainfall generator, simulates a continuous series of rainfall events in space and time over thousands of years, which are physically realistic while maintaining the statistical properties of rainfall at all locations over the model domain. A physically based runoff generation module feeds all the rivers in Great Britain, whose total length of stream links amounts to about 60,000 km. A dynamical flow routing

  19. Large degeneracy of excited hadrons and quark models

    International Nuclear Information System (INIS)

    Bicudo, P.

    2007-01-01

    The pattern of a large approximate degeneracy of the excited hadron spectra (larger than the chiral restoration degeneracy) is present in the recent experimental report of Bugg. Here we try to model this degeneracy with state-of-the-art quark models. We review how the Coulomb Gauge chiral invariant and confining Bethe-Salpeter equation simplifies in the case of very excited quark-antiquark mesons, including angular or radial excitations, to a Salpeter equation with an ultrarelativistic kinetic energy with the spin-independent part of the potential. The resulting meson spectrum is solved, and the excited chiral restoration is recovered, for all mesons with J>0. Applying the ultrarelativistic simplification to a linear equal-time potential, linear Regge trajectories are obtained, for both angular and radial excitations. The spectrum is also compared with the semiclassical Bohr-Sommerfeld quantization relation. However, the excited angular and radial spectra do not coincide exactly. We then search, with the classical Bertrand theorem, for central potentials producing always classical closed orbits with the ultrarelativistic kinetic energy. We find that no such potential exists, and this implies that no exact larger degeneracy can be obtained in our equal-time framework, with a single principal quantum number comparable to the nonrelativistic Coulomb or harmonic oscillator potentials. Nevertheless we find it plausible that the large experimental approximate degeneracy will be modeled in the future by quark models beyond the present state of the art

  20. Understanding Group/Party Affiliation Using Social Networks and Agent-Based Modeling

    Science.gov (United States)

    Campbell, Kenyth

    2012-01-01

    The dynamics of group affiliation and group dispersion is a concept that is most often studied in order for political candidates to better understand the most efficient way to conduct their campaigns. While political campaigning in the United States is a very hot topic that most politicians analyze and study, the concept of group/party affiliation presents its own area of study that produces very interesting results. One tool for examining party affiliation on a large scale is agent-based modeling (ABM), a paradigm in the modeling and simulation (M&S) field perfectly suited for aggregating individual behaviors to observe large swaths of a population. For this study agent-based modeling was used in order to look at a community of agents and determine what factors can affect the group/party affiliation patterns that are present. In the agent-based model that was used for this experiment many factors were present but two main factors were used to determine the results. The results of this study show that it is possible to use agent-based modeling to explore group/party affiliation and construct a model that can mimic real world events. More importantly, the model in the study allows for the results found in a smaller community to be translated into larger experiments to determine if the results will remain present on a much larger scale.
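The flavor of such an agent-based model can be conveyed in a few lines: agents on a ring network adopt the majority affiliation of their local neighborhood with some conformity probability, and initially random affiliations coarsen into party blocs. This is a generic ABM sketch, not the model from the record; all parameters are invented.

```python
import random

def step(affil, rng, conformity=0.9):
    """One sequential sweep: each agent, with probability `conformity`,
    adopts the majority party of itself and its two ring neighbours."""
    n = len(affil)
    for i in range(n):
        neigh = [affil[(i - 1) % n], affil[i], affil[(i + 1) % n]]
        majority = max(set(neigh), key=neigh.count)
        if rng.random() < conformity:
            affil[i] = majority
    return affil

def bloc_boundaries(affil):
    """Number of adjacent pairs holding different party affiliations."""
    return sum(1 for i in range(len(affil)) if affil[i] != affil[i - 1])

rng = random.Random(0)
agents = [rng.choice("AB") for _ in range(200)]
before = bloc_boundaries(agents)   # a random start has many boundaries
for _ in range(50):
    step(agents, rng)
after = bloc_boundaries(agents)
print(after < before)  # True: local conformity merges agents into blocs
```

Scaling the number of agents up, as the record suggests, is a matter of changing the population size; the same local rule generates the large-scale affiliation patterns.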

  1. Design and modelling of innovative machinery systems for large ships

    DEFF Research Database (Denmark)

    Larsen, Ulrik

    Eighty percent of the growing global merchandise trade is transported by sea. The shipping industry is required to reduce the pollution and increase the energy efficiency of ships in the near future. There is a relatively large potential for approaching these requirements by implementing waste heat...... consisting of a two-zone combustion and NOx emission model, a double Wiebe heat release model, the Redlich-Kwong equation of state and the Woschni heat loss correlation. A novel methodology is presented and used to determine the optimum organic Rankine cycle process layout, working fluid and process......, are evaluated with regards to the fuel consumption and NOx emissions trade-off. The results of the calibration and validation of the engine model suggest that the main performance parameters can be predicted with adequate accuracies for the overall purpose. The results of the ORC and the Kalina cycle...

  2. Modelling animal group fission using social network dynamics.

    Directory of Open Access Journals (Sweden)

    Cédric Sueur

    Full Text Available Group life involves both advantages and disadvantages, meaning that individuals have to compromise between their nutritional needs and their social links. When a compromise is impossible, the group splits in order to reduce conflict of interests and favour positive social interactions between its members. In this study we built a dynamic model of social networks to represent a succession of temporary fissions involving a change in social relations that could potentially lead to irreversible group fission (i.e. no more group fusion). This is the first study that assesses how a social network changes according to group fission-fusion dynamics. We built a model that was based on different parameters: the group size, the influence of nutritional needs compared to social needs, and the changes in the social network after a temporary fission. The results obtained from these theoretical data indicate how the percentage of social relation transfer, the number of individuals and the relative importance of nutritional requirements and social links influence the average number of days before irreversible fission occurs. The greater the nutritional needs and the higher the transfer of social relations during temporary fission, the fewer days will be observed before an irreversible fission. It is crucial to bridge the gap between the individual and the population level if we hope to understand how simple, local interactions may drive ecological systems.

  3. Network formation under heterogeneous costs: The multiple group model

    NARCIS (Netherlands)

    Kamphorst, J.J.A.; van der Laan, G.

    2007-01-01

    It is widely recognized that the shape of networks influences both individual and aggregate behavior. This raises the question which types of networks are likely to arise. In this paper we investigate a model of network formation, where players are divided into groups and the costs of a link between

  4. Migdal-Kadanoff renormalization group for the Z(5) model

    International Nuclear Information System (INIS)

    Baltar, V.L.V.; Carneiro, G.M.; Pol, M.E.; Zagury, N.

    1984-01-01

    The Migdal-Kadanoff renormalization group method is used to calculate the phase diagram of the AF Z(5) model. It is found that this scheme simulates a fixed line, which is interpreted as the locus of attraction of a critical phase. This result is in reasonable agreement with the predictions of Monte Carlo simulations. (Author) [pt

  5. Hydrogen combustion modelling in large-scale geometries

    International Nuclear Information System (INIS)

    Studer, E.; Beccantini, A.; Kudriakov, S.; Velikorodny, A.

    2014-01-01

    Hydrogen risk mitigation issues based on catalytic recombiners cannot exclude flammable clouds to be formed during the course of a severe accident in a Nuclear Power Plant. Consequences of combustion processes have to be assessed based on existing knowledge and state of the art in CFD combustion modelling. The Fukushima accidents have also revealed the need for taking into account the hydrogen explosion phenomena in risk management. Thus combustion modelling in a large-scale geometry is one of the remaining severe accident safety issues. At present, no combustion model exists that can accurately describe a combustion process inside a geometrical configuration typical of the Nuclear Power Plant (NPP) environment. Therefore the major attention in model development has to be paid to the adoption of existing approaches or creation of new ones capable of reliably predicting the possibility of the flame acceleration in the geometries of that type. A set of experiments performed previously in the RUT facility and the Heiss Dampf Reactor (HDR) facility is used as a validation database for development of a three-dimensional gas dynamic model for the simulation of hydrogen-air-steam combustion in large-scale geometries. The combustion regimes include slow deflagration, fast deflagration, and detonation. Modelling is based on the Reactive Discrete Equation Method (RDEM) where the flame is represented as an interface separating reactants and combustion products. The transport of the progress variable is governed by different flame surface wrinkling factors. The results of numerical simulation are presented together with the comparisons, critical discussions and conclusions. (authors)

  6. Effective models of new physics at the Large Hadron Collider

    International Nuclear Information System (INIS)

    Llodra-Perez, J.

    2011-07-01

    With the start of the Large Hadron Collider runs, in 2010, particle physicists will soon be able to have a better understanding of the electroweak symmetry breaking. They might also answer many experimental and theoretical open questions raised by the Standard Model. Building on this really favorable situation, we will first present in this thesis a highly model-independent parametrization in order to characterize the new physics effects on mechanisms of production and decay of the Higgs boson. This original tool will be easily and directly usable in data analysis of CMS and ATLAS, the huge generalist experiments of LHC. It will help indeed to exclude or validate significantly some new theories beyond the Standard Model. In another approach, based on model-building, we considered a scenario of new physics, where the Standard Model fields can propagate in a flat six-dimensional space. The new spatial extra-dimensions will be compactified on a Real Projective Plane. This orbifold is the unique six-dimensional geometry which possesses chiral fermions and a natural Dark Matter candidate. The scalar photon, which is the lightest particle of the first Kaluza-Klein tier, is stabilized by a relic symmetry of the six-dimensional Lorentz invariance. Using the current constraints from cosmological observations and our first analytical calculation, we derived a characteristic mass range around a few hundred GeV for the Kaluza-Klein scalar photon. Therefore the new states of our Universal Extra-Dimension model are light enough to be produced through clear signatures at the Large Hadron Collider. So we used a more sophisticated analysis of particle mass spectrum and couplings, including radiative corrections at one-loop, in order to establish our first predictions and constraints on the expected LHC phenomenology. (author)

  7. Distribution of ABO blood groups and rhesus factor in a Large Scale ...

    African Journals Online (AJOL)

    J. Torabizade maatoghi

    2015-08-20

    Aug 20, 2015 ... Aim of the study: Due to the presence of various ethnic groups in Khuzestan province, several ... In the studies conducted in USA, southwest Saudi Arabia, ...

  8. Linear mixed-effects modeling approach to FMRI group analysis.

    Science.gov (United States)

    Chen, Gang; Saad, Ziad S; Britton, Jennifer C; Pine, Daniel S; Cox, Robert W

    2013-06-01

    Conventional group analysis is usually performed with Student-type t-test, regression, or standard AN(C)OVA in which the variance-covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure are violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even be unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) continuous explanatory variables (covariates) modeling in the presence of a within-subject (repeated measures) factor, multiple subject-grouping (between-subjects) factors, or the mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to analyze many complicated cases, including the six prototypes delineated above, whose analyses would be otherwise either difficult or unfeasible under traditional frameworks such as AN(C)OVA and general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance-covariance structures for both random effects and residuals. The intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even in the presence of confounding fixed effects. The simulations of one prototypical scenario indicate that the LME modeling keeps a balance between the control for false positives and the sensitivity
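The intraclass correlation the abstract mentions can be illustrated without an LME solver: for a random-intercept layout, ICC = σ²_between/(σ²_between + σ²_within), and with balanced data the classical one-way ANOVA estimator gives the same quantity. The sketch below uses that classical estimator (not the paper's crossed-random-effects LME) on invented repeated-measures numbers.

```python
def icc_oneway(groups):
    """ICC(1) from a balanced one-way random-effects layout, estimated
    via ANOVA mean squares: (MSB - MSW) / (MSB + (n - 1) * MSW)."""
    k = len(groups)       # number of subjects
    n = len(groups[0])    # measurements per subject
    grand = sum(sum(g) for g in groups) / (k * n)
    means = [sum(g) / n for g in groups]
    msb = n * sum((m - grand) ** 2 for m in means) / (k - 1)
    msw = sum((x - m) ** 2
              for g, m in zip(groups, means) for x in g) / (k * (n - 1))
    return (msb - msw) / (msb + (n - 1) * msw)

# Hypothetical repeated measures: 3 subjects, 3 sessions each.
data = [[10.1, 10.3, 9.9],
        [12.0, 11.8, 12.2],
        [8.0, 8.3, 7.9]]
print(round(icc_oneway(data), 2))  # high: most variance is between subjects
```

In the LME framing, MSB and MSW play the role of the between-subject and residual variance components, which is why an LME fit with a subject random intercept recovers the same ICC in the balanced case.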

  9. Benchmarking Deep Learning Models on Large Healthcare Datasets.

    Science.gov (United States)

    Purushotham, Sanjay; Meng, Chuizheng; Che, Zhengping; Liu, Yan

    2018-06-04

    Deep learning models (aka Deep Neural Networks) have revolutionized many fields including computer vision, natural language processing, and speech recognition, and are increasingly being used in clinical healthcare applications. However, few works exist which have benchmarked the performance of the deep learning models with respect to the state-of-the-art machine learning models and prognostic scoring systems on publicly available healthcare datasets. In this paper, we present the benchmarking results for several clinical prediction tasks such as mortality prediction, length of stay prediction, and ICD-9 code group prediction using Deep Learning models, ensemble of machine learning models (Super Learner algorithm), SAPS II and SOFA scores. We used the Medical Information Mart for Intensive Care III (MIMIC-III) (v1.4) publicly available dataset, which includes all patients admitted to an ICU at the Beth Israel Deaconess Medical Center from 2001 to 2012, for the benchmarking tasks. Our results show that deep learning models consistently outperform all the other approaches especially when the 'raw' clinical time series data is used as input features to the models. Copyright © 2018 Elsevier Inc. All rights reserved.

  10. Improving CASINO performance for models with large number of electrons

    International Nuclear Information System (INIS)

    Anton, L.; Alfe, D.; Hood, R.Q.; Tanqueray, D.

    2009-01-01

    Quantum Monte Carlo calculations have at their core algorithms based on statistical ensembles of multidimensional random walkers which are straightforward to use on parallel computers. Nevertheless some computations have reached the limit of the memory resources for models with more than 1000 electrons because of the need to store a large amount of electronic-orbital data. Besides that, for systems with a large number of electrons, it is interesting to study whether the evolution of one configuration of random walkers can be done faster in parallel. We present a comparative study of two ways to solve these problems: (1) distributed orbital data done with MPI or Unix inter-process communication tools, (2) second level parallelism for configuration computation

  11. Using Facebook Groups to Encourage Science Discussions in a Large-Enrollment Biology Class

    Science.gov (United States)

    Pai, Aditi; McGinnis, Gene; Bryant, Dana; Cole, Megan; Kovacs, Jennifer; Stovall, Kyndra; Lee, Mark

    2017-01-01

    This case study reports the instructional development, impact, and lessons learned regarding the use of Facebook as an educational tool within a large enrollment Biology class at Spelman College (Atlanta, GA). We describe the use of this social networking site to (a) engage students in active scientific discussions, (b) build community within the…

  12. Report of the Working Group on Large-Scale Computing in Aeronautics.

    Science.gov (United States)

    1984-06-01

    function and the use of drawings. In the hardware area, contemporary large computer installations are quite powerful in terms of speed of computation as...critical to the competitive advantage of that member. He might then be willing to make them available to less advanced members under some business

  13. Large animal models for vaccine development and testing.

    Science.gov (United States)

    Gerdts, Volker; Wilson, Heather L; Meurens, Francois; van Drunen Littel-van den Hurk, Sylvia; Wilson, Don; Walker, Stewart; Wheler, Colette; Townsend, Hugh; Potter, Andrew A

    2015-01-01

    The development of human vaccines continues to rely on the use of animals for research. Regulatory authorities require novel vaccine candidates to undergo preclinical assessment in animal models before being permitted to enter the clinical phase in human subjects. Substantial progress has been made in recent years in reducing and replacing the number of animals used for preclinical vaccine research through the use of bioinformatics and computational biology to design new vaccine candidates. However, the ultimate goal of a new vaccine is to instruct the immune system to elicit an effective immune response against the pathogen of interest, and no alternatives to live animal use currently exist for evaluation of this response. Studies identifying the mechanisms of immune protection; determining the optimal route and formulation of vaccines; establishing the duration and onset of immunity, as well as the safety and efficacy of new vaccines, must be performed in a living system. Importantly, no single animal model provides all the information required for advancing a new vaccine through the preclinical stage, and research over the last two decades has highlighted that large animals more accurately predict vaccine outcome in humans than do other models. Here we review the advantages and disadvantages of large animal models for human vaccine development and demonstrate that much of the success in bringing a new vaccine to market depends on choosing the most appropriate animal model for preclinical testing. © The Author 2015. Published by Oxford University Press on behalf of the Institute for Laboratory Animal Research. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  14. Solving large mixed linear models using preconditioned conjugate gradient iteration.

    Science.gov (United States)

    Strandén, I; Lidauer, M

    1999-12-01

    Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi and conjugate gradient based iterative methods using iteration on data is presented. In the new computing technique, the calculations in multiplication of a vector by a matrix were reordered into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20% and 435% more time to solve the univariate and multivariate animal models, respectively. The second-best iteration-on-data program took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
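The core iteration the record builds on can be sketched directly: preconditioned conjugate gradient with the simplest (Jacobi, i.e. diagonal) preconditioner. This is a generic dense-matrix sketch on a tiny symmetric positive-definite system, not the iteration-on-data formulation of the paper, which never forms the mixed model equations explicitly.

```python
def pcg_jacobi(A, b, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradient with a Jacobi (diagonal)
    preconditioner for a symmetric positive-definite matrix A."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                  # residual b - A*x with x = 0
    minv = [1.0 / A[i][i] for i in range(n)]  # M^-1 with M = diag(A)
    z = [minv[i] * r[i] for i in range(n)]
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(v * v for v in r) ** 0.5 < tol:
            break
        z = [minv[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = pcg_jacobi(A, b)
print([round(v, 4) for v in x])  # [0.2222, 0.1111, 1.4444]
```

The iteration-on-data variant replaces the explicit matrix-vector product `Ap` with passes over the raw records, which is where the paper's two-step vs. three-step reorganization applies.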

  15. Support groups for children in alternate care: a largely untapped therapeutic resource.

    Science.gov (United States)

    Mellor, D; Storer, S

    1995-01-01

    Children in alternate care often have adjustment problems that manifest in various aspects of their lives. Individual therapy is often assumed to be the desired intervention, but resources seldom permit one-to-one therapy for these disturbances. The authors argue that groupwork should be considered as a possible treatment of choice. Not only is it likely to be more economical than individual therapy, it has the inherent advantage of telling children in care that they are not alone, and that other children have similar experiences and feelings. It also allows them to develop their own support network. Such groups appear to have been underutilized in work with children in out-of-home care. This article describes such a group and its outcome. Various techniques were developed to achieve specified aims. The techniques appeared to be successful. Further work on such groups and more specific evaluation is called for.

  16. Monte Carlo technique for very large Ising models

    Science.gov (United States)

    Kalle, C.; Winkelmann, V.

    1982-08-01

    Rebbi's multispin coding technique is improved and applied to the kinetic Ising model with size 600*600*600. We give the central part of our computer program (for a CDC Cyber 76), which will also be helpful in simulations of smaller systems, and describe the other tricks necessary to go to large lattices. The magnetization M at T = 1.4 T_c is found to decay asymptotically as exp(-t/2.90) if t is measured in Monte Carlo steps per spin, with M(t = 0) = 1 initially.
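    For readers unfamiliar with the underlying algorithm, a plain Metropolis simulation of a small 2D kinetic Ising model can be sketched as follows. This omits the multispin-coding optimization described in the paper and uses a much smaller lattice; lattice size, temperature, and sweep count are illustrative choices only.

```python
import numpy as np

def metropolis_sweep(spins, beta, rng):
    # One Monte Carlo step per spin on a 2D periodic lattice (standard
    # single-spin-flip Metropolis; multispin coding would pack many spins
    # into one machine word for speed, which is omitted here for clarity).
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nn          # energy cost of flipping (i, j)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] = -spins[i, j]
    return spins

rng = np.random.default_rng(1)
L, beta = 16, 0.6                  # beta above the 2D critical value ~0.4407
spins = np.ones((L, L))            # fully ordered start, M(t=0) = 1
for t in range(50):
    metropolis_sweep(spins, beta, rng)
magnetization = abs(spins.mean())
```

    Starting ordered in the low-temperature phase, the magnetization stays close to 1; above T_c it would instead decay toward 0, which is the regime the paper's relaxation measurement probes.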

  17. Discriminative latent models for recognizing contextual group activities.

    Science.gov (United States)

    Lan, Tian; Wang, Yang; Yang, Weilong; Robinovitch, Stephen N; Mori, Greg

    2012-08-01

    In this paper, we go beyond recognizing the actions of individuals and focus on group activities. This is motivated by the observation that human actions are rarely performed in isolation; the contextual information of what other people in the scene are doing provides a useful cue for understanding high-level activities. We propose a novel framework for recognizing group activities which jointly captures the group activity, the individual person actions, and the interactions among them. Two types of contextual information, group-person interaction and person-person interaction, are explored in a latent variable framework. In particular, we propose three different approaches to model the person-person interaction. One approach is to explore the structures of person-person interaction. Unlike most previous latent structured models, which assume a predefined structure for the hidden layer, e.g., a tree structure, we treat the structure of the hidden layer as a latent variable and implicitly infer it during learning and inference. The second approach explores person-person interaction at the feature level. We introduce a new feature representation called the action context (AC) descriptor. The AC descriptor encodes information about not only the action of an individual person in the video, but also the behavior of other people nearby. The third approach combines the above two. Our experimental results demonstrate the benefit of using contextual information for disambiguating group activities.

  18. Inviscid Wall-Modeled Large Eddy Simulations for Improved Efficiency

    Science.gov (United States)

    Aikens, Kurt; Craft, Kyle; Redman, Andrew

    2015-11-01

    The accuracy of an inviscid flow assumption for wall-modeled large eddy simulations (LES) is examined because of its ability to reduce simulation costs. This assumption is not generally applicable for wall-bounded flows due to the high velocity gradients found near walls. In wall-modeled LES, however, neither the viscous near-wall region nor the viscous length scales in the outer flow are resolved. Therefore, the viscous terms in the Navier-Stokes equations have little impact on the resolved flowfield. Zero pressure gradient flat plate boundary layer results are presented for both viscous and inviscid simulations using a wall model developed previously. The results are very similar and compare favorably to those from another wall model methodology and experimental data. Furthermore, the inviscid assumption reduces simulation costs by about 25% and 39% for supersonic and subsonic flows, respectively. Future research directions are discussed, as are preliminary efforts to extend the wall model to include the effects of unresolved wall roughness. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575. Computational resources on TACC Stampede were provided under XSEDE allocation ENG150001.

  19. Resin infusion of large composite structures modeling and manufacturing process

    Energy Technology Data Exchange (ETDEWEB)

    Loos, A.C. [Michigan State Univ., Dept. of Mechanical Engineering, East Lansing, MI (United States)

    2006-07-01

    The resin infusion processes resin transfer molding (RTM), resin film infusion (RFI) and vacuum assisted resin transfer molding (VARTM) are cost effective techniques for the fabrication of complex shaped composite structures. The dry fibrous preform is placed in the mold, consolidated, resin impregnated and cured in a single-step process. The fibrous preforms are often constructed near net shape using highly automated textile processes such as knitting, weaving and braiding. In this paper, the infusion processes RTM, RFI and VARTM are discussed along with the advantages of each technique compared with traditional composite fabrication methods such as prepreg tape lay-up and autoclave cure. The large number of processing variables and the complex material behavior during infiltration and cure make experimental optimization of the infusion processes costly and inefficient. Numerical models have been developed which can be used to simulate the resin infusion processes. The model formulation and solution procedures for the VARTM process are presented. A VARTM process simulation of a carbon fiber preform is presented to demonstrate the type of information that can be generated by the model and to compare the model predictions with experimental measurements. Overall, the predicted flow front positions, resin pressures and preform thicknesses agree well with the measured values. The results of the simulation show the potential cost and performance benefits that can be realized by using a simulation model as part of the development process. (au)

  20. Topological Poisson Sigma models on Poisson-Lie groups

    International Nuclear Information System (INIS)

    Calvo, Ivan; Falceto, Fernando; Garcia-Alvarez, David

    2003-01-01

    We solve the topological Poisson Sigma model for a Poisson-Lie group G and its dual G*. We show that the gauge symmetry for each model is given by its dual group, which acts by dressing transformations on the target. The resolution of both models in the open geometry reveals that there exists a map from the reduced phase space of each model (P and P*) to the main symplectic leaf of the Heisenberg double (D_0) such that the symplectic forms on P and P* are obtained as the pull-back by those maps of the symplectic structure on D_0. This uncovers a duality between P and P* under the exchange of bulk degrees of freedom of one model with boundary degrees of freedom of the other. We finally solve the Poisson Sigma model for the Poisson structure on G given by a pair of r-matrices that generalizes the Poisson-Lie case. The Hamiltonian analysis of the theory requires the introduction of a deformation of the Heisenberg double. (author)

  1. Graphs of groups on surfaces interactions and models

    CERN Document Server

    White, AT

    2001-01-01

    The book, suitable both as an introductory reference and as a textbook in the rapidly growing field of topological graph theory, models both maps (as in map-coloring problems) and groups by means of graph imbeddings on surfaces. Automorphism groups of both graphs and maps are studied. In addition, connections are made to other areas of mathematics, such as hypergraphs, block designs, finite geometries, and finite fields. There are chapters on the emerging subfields of enumerative topological graph theory and random topological graph theory, as well as a chapter on the composition of English

  2. Bismut's way of the Malliavin calculus for large order generators on a Lie group

    Science.gov (United States)

    Léandre, Rémi

    2018-01-01

    We adapt Bismut's mechanism of the Malliavin calculus to right-invariant large-order generators on a Lie group. We make essential use of the symmetry in order to avoid the use of the Malliavin matrix. As an application, we deduce logarithmic estimates in small time for the heat kernel.

  3. Distribution of ABO blood groups and rhesus factor in a Large Scale ...

    African Journals Online (AJOL)

    Background: The demand for blood and blood products has increased due to advances in medical science, population growth and increased life expectancy. This has increased the need for various blood groups in Khuzestan province because of the higher incidence of thalassemia and other blood transfusion dependent ...

  4. Large-Signal DG-MOSFET Modelling for RFID Rectification

    Directory of Open Access Journals (Sweden)

    R. Rodríguez

    2016-01-01

    Full Text Available This paper analyses the capability of undoped DG-MOSFETs for the operation of rectifiers for RFIDs and Wireless Power Transmission (WPT) at microwave frequencies. For this purpose, a large-signal compact model has been developed and implemented in Verilog-A. The model has been numerically validated with a device simulator (Sentaurus). It is found that the number of stages needed to achieve optimal rectifier performance is smaller than that required with conventional MOSFETs. In addition, the DC output voltage could be increased with the use of appropriate mid-gap metals for the gate, such as TiN. Minor impact of short channel effects (SCEs) on rectification is also pointed out.

  5. Modeling perceptual grouping and figure-ground segregation by means of active reentrant connections.

    OpenAIRE

    Sporns, O; Tononi, G; Edelman, G M

    1991-01-01

    The segmentation of visual scenes is a fundamental process of early vision, but the underlying neural mechanisms are still largely unknown. Theoretical considerations as well as neurophysiological findings point to the importance in such processes of temporal correlations in neuronal activity. In a previous model, we showed that reentrant signaling among rhythmically active neuronal groups can correlate responses along spatially extended contours. We now have modified and extended this model ...

  6. Group-kinetic theory and modeling of atmospheric turbulence

    Science.gov (United States)

    Tchen, C. M.

    1989-01-01

    A group-kinetic method is developed for analyzing eddy transport properties and relaxation to equilibrium. The purpose is to derive the spectral structure of turbulence in incompressible and compressible media. Of particular interest are: direct and inverse cascade, boundary layer turbulence, Rossby wave turbulence, two-phase turbulence, compressible turbulence, and soliton turbulence. Soliton turbulence can be found in large scale turbulence, turbulence connected with surface gravity waves, and nonlinear propagation of acoustical and optical waves. By letting the pressure gradient represent the elementary interaction among fluid elements and by raising the Navier-Stokes equation to higher dimensionality, we obtain the master equation for the description of the microdynamical state of turbulence.

  7. Photorealistic large-scale urban city model reconstruction.

    Science.gov (United States)

    Poullis, Charalambos; You, Suya

    2009-01-01

    The rapid and efficient creation of virtual environments has become a crucial part of virtual reality applications. In particular, civil and defense applications often require and employ detailed models of operations areas for training, simulations of different scenarios, planning for natural or man-made events, monitoring, surveillance, games, and films. A realistic representation of the large-scale environments is therefore imperative for the success of such applications since it increases the immersive experience of its users and helps reduce the difference between physical and virtual reality. However, the task of creating such large-scale virtual environments remains time-consuming, manual work. In this work, we propose a novel method for the rapid reconstruction of photorealistic large-scale virtual environments. First, a novel, extendible, parameterized geometric primitive is presented for the automatic building identification and reconstruction of building structures. In addition, buildings with complex roofs containing complex linear and nonlinear surfaces are reconstructed interactively using a linear polygonal and a nonlinear primitive, respectively. Second, we present a rendering pipeline for the composition of photorealistic textures, which unlike existing techniques, can recover missing or occluded texture information by integrating multiple information captured from different optical sensors (ground, aerial, and satellite).

  8. Topic modeling for cluster analysis of large biological and medical datasets.

    Science.gov (United States)

    Zhao, Weizhong; Zou, Wen; Chen, James J

    2014-01-01

    The 'big data' moniker is nowhere better deserved than in describing the ever-increasing size and complexity of biological and medical datasets. New methods are needed to generate and test hypotheses, foster biological interpretation, and build validated predictors. Although multivariate techniques such as cluster analysis may allow researchers to identify groups, or clusters, of related variables, the accuracy and effectiveness of traditional clustering methods diminish for large, high-dimensional datasets. Topic modeling is an active research field in machine learning and has mainly been used as an analytical tool to structure large textual corpora for data mining. Its ability to reduce high dimensionality to a small number of latent variables makes it suitable as a means for clustering or overcoming clustering difficulties in large biological and medical datasets. In this study, three topic model-derived clustering methods, highest probable topic assignment, feature selection and feature extraction, are proposed and tested on the cluster analysis of three large datasets: a Salmonella pulsed-field gel electrophoresis (PFGE) dataset, a lung cancer dataset, and a breast cancer dataset, which represent various types of large biological or medical datasets. All three methods are shown to improve the efficacy and effectiveness of clustering results on the three datasets in comparison to traditional methods. A preferable cluster analysis method emerged for each of the three datasets on the basis of replicating known biological truths. Topic modeling could be advantageously applied to the large datasets of biological or medical research. The three proposed topic model-derived clustering methods, highest probable topic assignment, feature selection and feature extraction, yield clustering improvements for the three different data types. Clusters more efficaciously represent truthful groupings and subgroupings in the data than traditional methods, suggesting
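    The 'highest probable topic assignment' idea can be sketched with scikit-learn's LatentDirichletAllocation on synthetic count data: fit a topic model, then assign each sample to its most probable topic. This is an illustrative reconstruction, not the study's actual pipeline; the data, component count, and feature layout are invented for the example.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
# Synthetic count data with two latent "profiles" over 20 features
# (stand-ins for, e.g., PFGE band patterns or gene-expression counts).
profile_a = rng.integers(5, 15, size=(30, 20)); profile_a[:, 10:] = 0
profile_b = rng.integers(5, 15, size=(30, 20)); profile_b[:, :10] = 0
X = np.vstack([profile_a, profile_b])

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(X)        # per-sample topic distribution, rows sum to 1
clusters = doc_topic.argmax(axis=1)     # highest probable topic assignment
```

    The same doc-topic matrix also supports the other two strategies the abstract names: using it as a reduced feature space for a downstream clusterer (feature extraction) or keeping only the most topic-discriminative original features (feature selection).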

  9. Numerically modelling the large scale coronal magnetic field

    Science.gov (United States)

    Panja, Mayukh; Nandi, Dibyendu

    2016-07-01

    The solar corona spews out vast amounts of magnetized plasma into the heliosphere which has a direct impact on the Earth's magnetosphere. Thus it is important that we develop an understanding of the dynamics of the solar corona. With our present technology it has not been possible to generate 3D magnetic maps of the solar corona; this warrants the use of numerical simulations to study the coronal magnetic field. A very popular method of doing this, is to extrapolate the photospheric magnetic field using NLFF or PFSS codes. However the extrapolations at different time intervals are completely independent of each other and do not capture the temporal evolution of magnetic fields. On the other hand full MHD simulations of the global coronal field, apart from being computationally very expensive would be physically less transparent, owing to the large number of free parameters that are typically used in such codes. This brings us to the Magneto-frictional model which is relatively simpler and computationally more economic. We have developed a Magnetofrictional Model, in 3D spherical polar co-ordinates to study the large scale global coronal field. Here we present studies of changing connectivities between active regions, in response to photospheric motions.

  10. Modeling Resource Utilization of a Large Data Acquisition System

    CERN Document Server

    AUTHOR|(SzGeCERN)756497; The ATLAS collaboration; Garcia Garcia, Pedro Javier; Vandelli, Wainer; Froening, Holger

    2017-01-01

    The ATLAS 'Phase-II' upgrade, scheduled to start in 2024, will significantly change the requirements under which the data-acquisition system operates. The input data rate, currently fixed around 150 GB/s, is anticipated to reach 5 TB/s. In order to deal with the challenging conditions, and exploit the capabilities of newer technologies, a number of architectural changes are under consideration. Of particular interest is a new component, known as the Storage Handler, which will provide a large buffer area decoupling real-time data taking from event filtering. Dynamic operational models of the upgraded system can be used to identify the required resources and to select optimal techniques. In order to achieve a robust and dependable model, the current data-acquisition architecture has been used as a test case. This makes it possible to verify and calibrate the model against real operation data. Such a model can then be evolved toward the future ATLAS Phase-II architecture. In this paper we introduce the current ...

  11. Modelling Resource Utilization of a Large Data Acquisition System

    CERN Document Server

    Santos, Alejandro; The ATLAS collaboration

    2017-01-01

    The ATLAS 'Phase-II' upgrade, scheduled to start in 2024, will significantly change the requirements under which the data-acquisition system operates. The input data rate, currently fixed around 150 GB/s, is anticipated to reach 5 TB/s. In order to deal with the challenging conditions, and exploit the capabilities of newer technologies, a number of architectural changes are under consideration. Of particular interest is a new component, known as the Storage Handler, which will provide a large buffer area decoupling real-time data taking from event filtering. Dynamic operational models of the upgraded system can be used to identify the required resources and to select optimal techniques. In order to achieve a robust and dependable model, the current data-acquisition architecture has been used as a test case. This makes it possible to verify and calibrate the model against real operation data. Such a model can then be evolved toward the future ATLAS Phase-II architecture. In this paper we introduce the current ...

  12. Modelling of decay heat removal using large water pools

    International Nuclear Information System (INIS)

    Munther, R.; Raussi, P.; Kalli, H.

    1992-01-01

    The main task in investigating passive safety systems typical of ALWRs (Advanced Light Water Reactors) has been the review of decay heat removal systems. The reference system for the calculations is represented by Hitachi's SBWR concept. The calculations of energy transfer to the suppression pool were made using two different fluid mechanics codes, namely FIDAP and PHOENICS. FIDAP is based on finite element methodology and PHOENICS uses finite differences. These codes were chosen in order to compare their modelling and calculating abilities. The thermal stratification behaviour and the natural circulation were modelled with several turbulent flow models. Also, energy transport to the suppression pool was calculated for laminar flow conditions. These calculations required a large amount of computer resources, and so the CRAY supercomputer of the state computing centre was used. The results of the calculations indicated that the capabilities of these codes for modelling the turbulent flow regime are limited. Output from these codes should be considered carefully and, whenever possible, experimentally determined parameters should be used as input to enhance code reliability. (orig.). (31 refs., 21 figs., 3 tabs.)

  13. The monster sporadic group and a theory underlying superstring models

    International Nuclear Information System (INIS)

    Chapline, G.

    1996-09-01

    The pattern of duality symmetries acting on the states of compactified superstring models reinforces an earlier suggestion that the Monster sporadic group is a hidden symmetry for superstring models. This in turn points to a supersymmetric theory of self-dual and anti-self-dual K3 manifolds joined by Dirac strings and evolving in a 13 dimensional spacetime as the fundamental theory. In addition to the usual graviton and dilaton this theory contains matter-like degrees of freedom resembling the massless states of the heterotic string, thus providing a completely geometric interpretation for ordinary matter. 25 refs

  14. A model for amalgamation in group decision making

    Science.gov (United States)

    Cutello, Vincenzo; Montero, Javier

    1992-01-01

    In this paper we present a generalization of the model proposed by Montero, by allowing non-complete fuzzy binary relations for individuals. A degree of unsatisfaction can be defined in this case, suggesting that any democratic aggregation rule should take into account not only ethical conditions or some degree of rationality in the amalgamating procedure, but also a minimum support for the set of alternatives subject to the group analysis.

  15. The Large Deployable Reflector (LDR) report of the Science Coordination Group

    Science.gov (United States)

    1986-01-01

    The Large Deployable Reflector (LDR) is a telescope designed to carry out high-angular resolution, high-sensitivity observations at far-infrared and submillimeter wavelengths. The scientific rationale for the LDR is discussed in light of the recent Infrared Astronomical Satellite (IRAS) and Kuiper Airborne Observatory (KAO) results and the several new ground-based observatories planned for the late 1980s. The importance of high sensitivity and high angular resolution observations from space in the submillimeter region is stressed. The scientific and technical problems of using the LDR in a light bucket mode at approx. less than 5 microns and in designing the LDR as an unfilled aperture with subarcsecond resolution are also discussed. The need for an aperture as large as 20 m is established, along with the requirements of beam-shape stability, spatial chopping, thermal control, and surface figure stability. The instrument complement required to cover the wavelength-spectral resolution region of interest to the LDR is defined.

  16. Study on dynamic multi-objective approach considering coal and water conflict in large scale coal group

    Science.gov (United States)

    Feng, Qing; Lu, Li

    2018-01-01

    In the process of coal mining, the destruction and pollution of groundwater has reached a critical point; groundwater is not only related to the ecological environment but also affects human health. The conflict between coal and water remains one of the major problems in large-scale coal mining regions. Based on this, this paper presents a dynamic multi-objective optimization model to deal with the coal-water conflict in a coal group with multiple subordinate collieries and to arrive at a comprehensive arrangement for an environmentally friendly coal mining strategy. Through calculation, this paper derives the output of each subordinate coal mine. On this basis, we continue to adjust the environmental protection parameters to compare coal production at the different collieries at different stages under different government attitudes. Finally, the paper concludes that, in either case, the first priority should be given to the production of low-drainage, high-yield coal mines.

  17. Correlates of sedentary time in different age groups: results from a large cross sectional Dutch survey.

    Science.gov (United States)

    Bernaards, Claire M; Hildebrandt, Vincent H; Hendriksen, Ingrid J M

    2016-10-26

    Evidence shows that prolonged sitting is associated with an increased risk of mortality, independent of physical activity (PA). The aim of the study was to identify correlates of sedentary time (ST) in different age groups and day types (i.e. school-/work day versus non-school-/non-work day). The study sample consisted of 1895 Dutch children (4-11 years), 1131 adolescents (12-17 years), 8003 adults (18-64 years) and 1569 elderly (65 years and older) who enrolled in the Dutch continuous national survey 'Injuries and Physical Activity in the Netherlands' between 2006 and 2011. Respondents estimated the number of sitting hours during a regular school-/workday and a regular non-school/non-work day. Multiple linear regression analyses on cross-sectional data were used to identify correlates of ST. Significant positive associations with ST were observed for: higher age (4-to-17-year-olds and elderly), male gender (adults), overweight (children), higher education (adults ≥ 30 years), urban environment (adults), chronic disease (adults ≥ 30 years), sedentary work (adults), not meeting the moderate to vigorous PA (MVPA) guideline (children and adults ≥ 30 years) and not meeting the vigorous PA (VPA) guideline (4-to-17-year-olds). Correlates of ST that significantly differed between day types were working hours and meeting the VPA guideline. More working hours were associated with more ST on school-/work days. In children and adolescents, meeting the VPA guideline was associated with less ST on non-school/non-working days only. This study provides new insights into the correlates of ST in different age groups and thus possibilities for interventions in these groups. Correlates of ST appear to differ between age groups and to a lesser degree between day types. This implies that interventions to reduce ST should be age specific. Longitudinal studies are needed to draw conclusions on causality of the relationship between identified correlates and ST.
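    The analysis strategy, multiple linear regression of self-reported sitting time on candidate correlates, can be illustrated with ordinary least squares on synthetic data. The variable names and effect sizes below are invented for the example and are not the study's estimates.

```python
import numpy as np

# Hypothetical survey-style data: regress sitting hours on age, gender,
# and sedentary work (all names and coefficients are illustrative only).
rng = np.random.default_rng(42)
n = 500
age = rng.uniform(18, 65, n)
male = rng.integers(0, 2, n)
sedentary_job = rng.integers(0, 2, n)
sitting_hours = (4.0 + 0.02 * age + 0.5 * male
                 + 2.0 * sedentary_job + rng.normal(0, 1, n))

# Design matrix with an intercept column; ordinary least squares via lstsq
X = np.column_stack([np.ones(n), age, male, sedentary_job])
coef, *_ = np.linalg.lstsq(X, sitting_hours, rcond=None)
```

    With real survey data the same design matrix would hold dummy-coded correlates (overweight, education level, urban environment, chronic disease, guideline adherence), and separate models would be fitted per age group and day type, as the study describes.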

  18. Correlates of sedentary time in different age groups: results from a large cross sectional Dutch survey

    Directory of Open Access Journals (Sweden)

    Claire M. Bernaards

    2016-10-01

    Full Text Available Abstract Background Evidence shows that prolonged sitting is associated with an increased risk of mortality, independent of physical activity (PA). The aim of the study was to identify correlates of sedentary time (ST) in different age groups and day types (i.e. school-/work day versus non-school-/non-work day). Methods The study sample consisted of 1895 Dutch children (4–11 years), 1131 adolescents (12–17 years), 8003 adults (18–64 years) and 1569 elderly (65 years and older) who enrolled in the Dutch continuous national survey ‘Injuries and Physical Activity in the Netherlands’ between 2006 and 2011. Respondents estimated the number of sitting hours during a regular school-/workday and a regular non-school/non-work day. Multiple linear regression analyses on cross-sectional data were used to identify correlates of ST. Results Significant positive associations with ST were observed for: higher age (4-to-17-year-olds and elderly), male gender (adults), overweight (children), higher education (adults ≥ 30 years), urban environment (adults), chronic disease (adults ≥ 30 years), sedentary work (adults), not meeting the moderate to vigorous PA (MVPA) guideline (children and adults ≥ 30 years) and not meeting the vigorous PA (VPA) guideline (4-to-17-year-olds). Correlates of ST that significantly differed between day types were working hours and meeting the VPA guideline. More working hours were associated with more ST on school-/work days. In children and adolescents, meeting the VPA guideline was associated with less ST on non-school/non-working days only. Conclusions This study provides new insights into the correlates of ST in different age groups and thus possibilities for interventions in these groups. Correlates of ST appear to differ between age groups and to a lesser degree between day types. This implies that interventions to reduce ST should be age specific. Longitudinal studies are needed to draw conclusions on causality of

  19. Use of New Methodologies for Students Assessment in Large Groups in Engineering Education

    Directory of Open Access Journals (Sweden)

    B. Tormos

    2014-03-01

    Full Text Available In this paper, a student evaluation methodology applying the continuous assessment concept proposed by Bologna is presented for new degrees in higher education. An important part of the student's final grade is based on several individual assignments performed throughout the semester. The paper describes the correction system used, which is based on a spreadsheet with macros and a template in which the student provides the solution to each task. The use of this correction system together with the available e-learning platform allows teachers to perform automatic task evaluations, compatible with courses with a large number of students. The paper also presents the different solutions adopted to avoid plagiarism and to ensure that the final grade reflects, as closely as possible, the knowledge acquired by the students.

  20. Large-dimension configuration-interaction calculations of positron binding to the group-II atoms

    International Nuclear Information System (INIS)

    Bromley, M. W. J.; Mitroy, J.

    2006-01-01

    The configuration-interaction (CI) method is applied to the calculation of the structures of a number of positron binding systems, including e+Be, e+Mg, e+Ca, and e+Sr. These calculations were carried out in orbital spaces containing about 200 electron and 200 positron orbitals up to l=12. Despite the very large dimensions, the binding energy and annihilation rate converge slowly with l, and the final values do contain an appreciable correction obtained by extrapolating the calculation to the l→∞ limit. The binding energies were 0.00317 hartree for e+Be, 0.0170 hartree for e+Mg, 0.0189 hartree for e+Ca, and 0.0131 hartree for e+Sr.
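    A common way to handle such slow convergence with l (sketched here under the assumption of an inverse-power tail; the record does not specify the authors' exact extrapolation formula) is to fit the partial-wave increments to a power law in (l + 1/2) and sum the fitted tail toward the l→∞ limit:

```python
import numpy as np

# Hypothetical partial-wave increments: assume the energy gained by adding
# orbitals of angular momentum l falls off as dE_l ~ A / (l + 1/2)**p.
# A_true and p_true are invented for the illustration.
A_true, p_true = 0.02, 4.0
ls = np.arange(6, 13)                       # highest partial waves, l = 6..12
dE = A_true / (ls + 0.5) ** p_true

# Fit log dE = log A - p * log(l + 1/2) by linear least squares,
# then sum the fitted tail over l = 13, 14, ... as the l -> infinity correction.
slope, intercept = np.polyfit(np.log(ls + 0.5), np.log(dE), 1)
p_fit, A_fit = -slope, np.exp(intercept)
tail = sum(A_fit * (l + 0.5) ** (-p_fit) for l in range(13, 2000))
```

    The extrapolated binding energy would then be the explicit sum over computed partial waves plus `tail`; the annihilation rate converges even more slowly (a smaller exponent p), which is why the extrapolated correction is described as appreciable.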

  1. Renormalization group analysis of a simple hierarchical fermion model

    International Nuclear Information System (INIS)

    Dorlas, T.C.

    1991-01-01

    A simple hierarchical fermion model is constructed which gives rise to an exact renormalization transformation in a 2-dimensional parameter space. The behaviour of this transformation is studied. It has two hyperbolic fixed points for which the existence of a global critical line is proven. The asymptotic behaviour of the transformation is used to prove the existence of the thermodynamic limit in a certain domain in parameter space. Also the existence of a continuum limit for these theories is investigated using information about the asymptotic renormalization behaviour. It turns out that the 'trivial' fixed point gives rise to a two-parameter family of continuum limits corresponding to that part of parameter space where the renormalization trajectories originate at this fixed point. Although the model is not very realistic it serves as a simple example of the application of the renormalization group to proving the existence of the thermodynamic limit and the continuum limit of lattice models. Moreover, it illustrates possible complications that can arise in global renormalization group behaviour, and that might also be present in other models where no global analysis of the renormalization transformation has yet been achieved. (orig.)

  2. Modeling containment of large wildfires using generalized linear mixed-model analysis

    Science.gov (United States)

    Mark Finney; Isaac C. Grenfell; Charles W. McHugh

    2009-01-01

    Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (a generalized linear mixed model) to model the containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...

  3. Improving large-scale groundwater models by considering fossil gradients

    Science.gov (United States)

    Schulz, Stephan; Walther, Marc; Michelsen, Nils; Rausch, Randolf; Dirks, Heiko; Al-Saud, Mohammed; Merz, Ralf; Kolditz, Olaf; Schüth, Christoph

    2017-05-01

    Due to limited availability of surface water, many arid to semi-arid countries rely on their groundwater resources. Despite the quasi-absence of present day replenishment, some of these groundwater bodies contain large amounts of water, which was recharged during pluvial periods of the Late Pleistocene to Early Holocene. These mostly fossil, non-renewable resources require different management schemes compared to those which are usually applied in renewable systems. Fossil groundwater is a finite resource and its withdrawal implies mining of aquifer storage reserves. Although they receive almost no recharge, some of them show notable hydraulic gradients and a flow towards their discharge areas, even without pumping. As a result, these systems have more discharge than recharge and hence are not in steady state, which makes their modelling, in particular the calibration, very challenging. In this study, we introduce a new calibration approach, composed of four steps: (i) estimating the fossil discharge component, (ii) determining the origin of fossil discharge, (iii) fitting the hydraulic conductivity with a pseudo steady-state model, and (iv) fitting the storage capacity with a transient model by reconstructing head drawdown induced by pumping activities. Finally, we test the relevance of our approach and evaluate the effect of considering or ignoring fossil gradients on aquifer parameterization for the Upper Mega Aquifer (UMA) on the Arabian Peninsula.

  4. Monte Carlo modelling of large scale NORM sources using MCNP.

    Science.gov (United States)

    Wallace, J D

    2013-12-01

    The representative Monte Carlo modelling of large-scale planar sources (for comparison to external environmental radiation fields) is undertaken using substantial-diameter, thin-profile planar cylindrical sources. The relative impact of source extent, soil thickness and sky-shine is investigated to guide decisions relating to representative geometries. In addition, the impact of source-to-detector distance on the nature of the detector response, for a range of source sizes, has been investigated. These investigations, using an MCNP-based model, indicate that a soil cylinder of greater than 20 m diameter and no less than 50 cm depth/height, combined with a 20 m deep sky section above the soil cylinder, is needed to representatively model the semi-infinite plane of uniformly distributed NORM sources. Initial investigation of the effect of detector placement indicates that smaller source sizes may be used to achieve a representative response at shorter source-to-detector distances. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  5. Model parameters for representative wetland plant functional groups

    Science.gov (United States)

    Williams, Amber S.; Kiniry, James R.; Mushet, David M.; Smith, Loren M.; McMurry, Scott T.; Attebury, Kelly; Lang, Megan; McCarty, Gregory W.; Shaffer, Jill A.; Effland, William R.; Johnson, Mari-Vaughn V.

    2017-01-01

    Wetlands provide a wide variety of ecosystem services including water quality remediation, biodiversity refugia, groundwater recharge, and floodwater storage. Realistic estimation of ecosystem service benefits associated with wetlands requires reasonable simulation of the hydrology of each site and realistic simulation of the upland and wetland plant growth cycles. Objectives of this study were to quantify leaf area index (LAI), light extinction coefficient (k), and plant nitrogen (N), phosphorus (P), and potassium (K) concentrations in natural stands of representative plant species for some major plant functional groups in the United States. Functional groups in this study were based on these parameters and plant growth types to enable process-based modeling. We collected data at four locations representing some of the main wetland regions of the United States. At each site, we collected on-the-ground measurements of fraction of light intercepted, LAI, and dry matter within the 2013–2015 growing seasons. Maximum LAI and k variables showed noticeable variations among sites and years, while overall averages and functional group averages give useful estimates for multisite simulation modeling. Variation within each species gives an indication of what can be expected in such natural ecosystems. For P and K, the concentrations from highest to lowest were spikerush (Eleocharis macrostachya), reed canary grass (Phalaris arundinacea), smartweed (Polygonum spp.), cattail (Typha spp.), and hardstem bulrush (Schoenoplectus acutus). Spikerush had the highest N concentration, followed by smartweed, bulrush, reed canary grass, and then cattail. These parameters will be useful for the actual wetland species measured and for the wetland plant functional groups they represent. These parameters and the associated process-based models offer promise as valuable tools for evaluating environmental benefits of wetlands and for evaluating impacts of various agronomic practices in

  6. Social Transmission of False Memory in Small Groups and Large Networks.

    Science.gov (United States)

    Maswood, Raeya; Rajaram, Suparna

    2018-05-21

    Sharing information and memories is a key feature of social interactions, making social contexts important for developing and transmitting accurate memories and also false memories. False memory transmission can have wide-ranging effects, including shaping personal memories of individuals as well as collective memories of a network of people. This paper reviews a collection of key findings and explanations in cognitive research on the transmission of false memories in small groups. It also reviews the emerging experimental work on larger networks and collective false memories. Given the reconstructive nature of memory, the abundance of misinformation in everyday life, and the variety of social structures in which people interact, an understanding of transmission of false memories has both scientific and societal implications. © 2018 Cognitive Science Society, Inc.

  7. Large-scale climate variation modifies the winter grouping behavior of endangered Indiana bats

    Science.gov (United States)

    Thogmartin, Wayne E.; McKann, Patrick C.

    2014-01-01

    Power laws describe the functional relationship between 2 quantities, such as the frequency of a group as the multiplicative power of group size. We examined whether the annual size of well-surveyed wintering populations of endangered Indiana bats (Myotis sodalis) followed a power law, and then leveraged this relationship to predict whether the aggregation of Indiana bats in winter was influenced by global climate processes. We determined that Indiana bat wintering populations were distributed according to a power law (mean scaling coefficient α = −0.44 [95% CI = −0.61, −0.28]). The antilog of these annual scaling coefficients ranged between 0.67 and 0.81, coincident with the three-fourths power found in many other biological phenomena. We associated temporal patterns in the annual (1983–2011) scaling coefficient with the North Atlantic Oscillation (NAO) index in August (βNAOAugust = −0.017 [90% CI = −0.032, −0.002]), when Indiana bats are deciding when and where to hibernate. After accounting for the strong effect of philopatry to habitual wintering locations, Indiana bats aggregated in larger wintering populations during periods of severe winter and in smaller populations in milder winters. The association with August values of the NAO indicates that bats anticipate future winter weather conditions when deciding where to roost, a heretofore unrecognized role for prehibernation swarming behavior. Future research is needed to understand whether the three-fourths–scaling patterns we observed are related to scaling in metabolism.
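
The log-log regression behind a scaling coefficient like α can be sketched in a few lines. The colony sizes and frequencies below are synthetic values constructed to follow an exact power law with exponent −0.44; they are illustrative only, not the Indiana bat survey data.

```python
import numpy as np

# Synthetic frequency-vs-group-size data following an exact power law
# frequency(size) ∝ size**alpha with alpha = -0.44 (illustrative only).
sizes = np.array([10.0, 20.0, 50.0, 100.0, 200.0, 500.0, 1000.0, 5000.0])
counts = 100.0 * (sizes / sizes[0]) ** -0.44

# Estimate the scaling coefficient alpha by least squares on log-log axes:
# log(count) = intercept + alpha * log(size).
alpha, intercept = np.polyfit(np.log(sizes), np.log(counts), 1)
```

On real survey data the fit would of course carry uncertainty, which is why the record reports α with a confidence interval rather than a point value.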

  8. Fish populations in a large group of acid-stressed lakes

    Energy Technology Data Exchange (ETDEWEB)

    Harvey, H H

    1975-01-01

    The purpose of this study was to determine the effects of environmental stress on the number and diversity of fish species in a group of acid-stressed lakes. The study area was the La Cloche Mountains, a series of quartzite ridges covering 1,300 km/sup 2/ along the north shore of Georgian Bay and the North Channel of Lake Huron. Within these ridges are 173 lakes; 68 of the largest of these made up the study sample. The lakes of the La Cloche Mountains are undergoing rapid acidification. Coincident with this there has been the loss of sport fishes from several lakes. Lakes such as Nellie, Lumsden, O.S.A., Acid and Killarney supported good sport fisheries for the lake trout (Salvelinus namaycush) for many years, but have ceased to do so in the last 5 to 15 years. Other sport fishes, notably the walleye (Stizostedion vitreum) and smallmouth bass (Micropterus dolomieu), have disappeared from some of the La Cloche lakes. Thus recreational fishing alone could not have been the cause of the change. Beamish (1974) recorded the extreme sparsity of the three remaining fish species in O.S.A. Lake. Many of the lakes of the La Cloche Mountains are accessible only with difficulty and little or no information existed for these lakes prior to this study. This precluded simple comparison of these lakes before and during acidification. This lack of historic data determined in part the approach taken in this study: a comparison of the fish communities of a group of lakes differing in degree of acid stress.

  9. Modeling the behaviour of shape memory materials under large deformations

    Science.gov (United States)

    Rogovoy, A. A.; Stolbova, O. S.

    2017-06-01

    In this study, the models describing the behavior of shape memory alloys, ferromagnetic materials and polymers have been constructed, using a formalized approach to develop the constitutive equations for complex media under large deformations. The kinematic and constitutive equations, satisfying the principles of thermodynamics and objectivity, have been derived. The application of the Galerkin procedure to the systems of equations of solid mechanics allowed us to obtain the Lagrange variational equation and variational formulation of the magnetostatics problems. These relations have been tested in the context of the problems of finite deformation in shape memory alloys and ferromagnetic materials during forward and reverse martensitic transformations and in shape memory polymers during forward and reverse relaxation transitions from a highly elastic to a glassy state.

  10. A large animal model for boron neutron capture therapy

    International Nuclear Information System (INIS)

    Gavin, P.R.; Kraft, S.L.; DeHaan, C.E.; Moore, M.P.; Griebenow, M.L.

    1992-01-01

    An epithermal neutron beam is needed to treat relatively deep-seated tumors. The scattering characteristics of neutrons in this energy range dictate that in vivo experiments be conducted in a large animal to prevent unacceptable total-body irradiation. The canine species has proven an excellent model to evaluate the various problems of boron neutron capture utilizing an epithermal neutron beam. This paper discusses three major components of the authors' study: (1) the pharmacokinetics of borocaptate sodium (Na₂B₁₂H₁₁SH, or BSH) in dogs with spontaneously occurring brain tumors, (2) the radiation tolerance of normal tissues in the dog using an epithermal beam alone and in combination with borocaptate sodium, and (3) initial treatment of dogs with spontaneously occurring brain tumors utilizing borocaptate sodium and an epithermal neutron beam.

  11. Modeling of large-scale oxy-fuel combustion processes

    DEFF Research Database (Denmark)

    Yin, Chungen

    2012-01-01

    Quite a few studies have been conducted in order to implement oxy-fuel combustion with flue gas recycle in conventional utility boilers as an effective effort toward carbon capture and storage. However, combustion under oxy-fuel conditions is significantly different from conventional air-fuel firing......, among which radiative heat transfer under oxy-fuel conditions is one of the fundamental issues. This paper demonstrates the nongray-gas effects in modeling of large-scale oxy-fuel combustion processes. Oxy-fuel combustion of natural gas in a 609 MW utility boiler is numerically studied, in which...... calculation of the oxy-fuel WSGGM remarkably over-predicts the radiative heat transfer to the furnace walls and under-predicts the gas temperature at the furnace exit plane, which also results in a higher incomplete combustion in the gray calculation. Moreover, the gray and non-gray calculations of the same...
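
For readers unfamiliar with the WSGGM mentioned in this record: a weighted-sum-of-gray-gases model expresses total emissivity as ε(T, pL) = Σᵢ aᵢ(T)·(1 − exp(−kᵢ·pL)), with one "clear" gas (kᵢ = 0) representing spectral windows. The sketch below shows the form of that evaluation; the absorption coefficients and weight polynomials are hypothetical placeholders, not the oxy-fuel coefficients used in the paper.

```python
import math

# Hypothetical WSGG coefficients (NOT the paper's oxy-fuel WSGGM values).
K = [0.0, 0.4, 4.0, 40.0]                 # 1/(atm*m); gray gas 0 = clear window
B = [(0.10, 1.0e-4), (0.35, 5.0e-5), (0.30, -2.0e-5), (0.15, -1.0e-5)]

def wsgg_emissivity(T, pL):
    """Total emissivity at temperature T (K) and pressure path length pL (atm*m)."""
    eps = 0.0
    for k, (b0, b1) in zip(K, B):
        a = b0 + b1 * T                   # linear-in-T weight (illustrative)
        eps += a * (1.0 - math.exp(-k * pL))
    return eps
```

The gray/non-gray distinction the record draws is about how this is used: a gray calculation collapses everything into a single effective gas via the total emissivity, whereas a non-gray calculation solves one radiative transfer equation per gray gas and sums the results.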

  12. High Luminosity Large Hadron Collider A description for the European Strategy Preparatory Group

    CERN Document Server

    Rossi, L

    2012-01-01

    The Large Hadron Collider (LHC) is the largest scientific instrument ever built. It has been exploring the new energy frontier since 2009, gathering a global user community of 7,000 scientists. It will remain the most powerful accelerator in the world for at least two decades, and its full exploitation is the highest priority in the European Strategy for Particle Physics, adopted by the CERN Council and integrated into the ESFRI Roadmap. To extend its discovery potential, the LHC will need a major upgrade around 2020 to increase its luminosity (rate of collisions) by a factor of 10 beyond its design value. As a highly complex and optimized machine, such an upgrade of the LHC must be carefully studied and requires about 10 years to implement. The novel machine configuration, called High Luminosity LHC (HL-LHC), will rely on a number of key innovative technologies, representing exceptional technological challenges, such as cutting-edge 13 tesla superconducting magnets, very compact and ultra-precise superconduc...

  13. Large scale solar district heating. Evaluation, modelling and designing

    Energy Technology Data Exchange (ETDEWEB)

    Heller, A.

    2000-07-01

    The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build up a simulation tool and to demonstrate the application of the tool for design studies and on a local energy planning case. The evaluation of the central solar heating technology is based on measurements on the case plant in Marstal, Denmark, and on published and unpublished data for other, mainly Danish, CSDHP plants. Evaluations of the thermal, economic and environmental performance are reported, based on the experiences from the last decade. The measurements from the Marstal case are analysed, experiences extracted and minor improvements to the plant design proposed. For the detailed design and energy planning of CSDHPs, a computer simulation model is developed and validated on the measurements from the Marstal case. The final model is then generalised to a 'generic' model for CSDHPs in general. The meteorological reference data set, the Danish Reference Year, is applied to find the mean performance for the plant designs. To find the expectable variation in the thermal performance of such plants, a method is proposed in which data from a year with poor solar irradiation and a year with strong solar irradiation are applied. Equipped with a simulation tool, design studies are carried out, ranging from parameter analysis, through energy planning for a new settlement, to a proposal for combining plane solar collectors with high-performance solar collectors, exemplified by a trough solar collector. The methodology of utilising computer simulation proved to be a cheap and relevant tool in the design of future solar heating plants. The thesis also exposed the need to develop computer models for the more advanced solar collector designs and especially for the control and operation of CSDHPs. In the final chapter the CSDHP technology is put into perspective with respect to other possible technologies to assess the relevance of the application.

  14. Multistability in Large Scale Models of Brain Activity.

    Directory of Open Access Journals (Sweden)

    Mathieu Golos

    2015-12-01

    Noise-driven exploration of a brain network's dynamic repertoire has been hypothesized to be causally involved in cognitive function, aging and neurodegeneration. The dynamic repertoire crucially depends on the network's capacity to store patterns, as well as their stability. Here we systematically explore the capacity of networks derived from human connectomes to store attractor states, as well as various network mechanisms to control the brain's dynamic repertoire. Using a deterministic graded-response Hopfield model with connectome-based interactions, we reconstruct the system's attractor space through a uniform sampling of the initial conditions. Large fixed-point attractor sets are obtained in the low-temperature condition, with a larger number of attractors than ever reported so far. Different variants of the initial model, including (i) a uniform activation threshold or (ii) a global negative feedback, produce a similarly robust multistability in a limited parameter range. A numerical analysis of the distribution of the attractors identifies spatially segregated components, with a centro-medial core and several well-delineated regional patches. These different modes share similarity with the fMRI independent components observed in the "resting state" condition. We demonstrate non-stationary behavior in noise-driven generalizations of the models, with different meta-stable attractors visited along the same time course. Only the model with a global dynamic density control is found to display robust and long-lasting non-stationarity, with no tendency toward either overactivity or extinction. The best fit with empirical signals is observed at the edge of multistability, a parameter region that also corresponds to the highest entropy of the attractors.
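
The attractor-enumeration procedure this record describes (settle the deterministic dynamics from uniformly sampled initial conditions, keep the distinct fixed points) can be sketched on a toy network. The coupling matrix, size and gain below are illustrative random values, not weights derived from human connectomes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy graded-response Hopfield network on a random symmetric coupling matrix.
N = 12
W = rng.normal(0.0, 1.0, (N, N))
W = (W + W.T) / 2.0          # symmetrize
np.fill_diagonal(W, 0.0)     # no self-coupling

def settle(x, beta=5.0, steps=2000, dt=0.1):
    """Euler-integrate the deterministic dynamics dx/dt = -x + tanh(beta * W @ x)."""
    for _ in range(steps):
        x = x + dt * (-x + np.tanh(beta * W @ x))
    return x

# Uniformly sample initial conditions; the distinct settled states approximate
# the fixed-point attractor repertoire.
attractors = []
for _ in range(100):
    x = settle(rng.uniform(-1.0, 1.0, N))
    if not any(np.allclose(x, a, atol=1e-3) for a in attractors):
        attractors.append(x)
```

At connectome scale the same loop is just run with many more nodes and samples; the paper's contribution lies in the connectome-based W and the control mechanisms, not in this enumeration scaffold.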

  15. Numerical Modeling of Large-Scale Rocky Coastline Evolution

    Science.gov (United States)

    Limber, P.; Murray, A. B.; Littlewood, R.; Valvo, L.

    2008-12-01

    Seventy-five percent of the world's ocean coastline is rocky. On large scales (i.e. greater than a kilometer), many intertwined processes drive rocky coastline evolution, including coastal erosion and sediment transport, tectonics, antecedent topography, and variations in sea cliff lithology. In areas such as California, an additional aspect of rocky coastline evolution involves submarine canyons that cut across the continental shelf and extend into the nearshore zone. These types of canyons intercept alongshore sediment transport and flush sand to abyssal depths during periodic turbidity currents, thereby delineating coastal sediment transport pathways and affecting shoreline evolution over large spatial and time scales. How tectonic, sediment transport, and canyon processes interact with inherited topographic and lithologic settings to shape rocky coastlines remains an unanswered, and largely unexplored, question. We will present numerical model results of rocky coastline evolution that starts with an immature fractal coastline. The initial shape is modified by headland erosion, wave-driven alongshore sediment transport, and submarine canyon placement. Our previous model results have shown that, as expected, an initial sediment-free irregularly shaped rocky coastline with homogeneous lithology will undergo smoothing in response to wave attack; headlands erode and mobile sediment is swept into bays, forming isolated pocket beaches. As this diffusive process continues, pocket beaches coalesce, and a continuous sediment transport pathway results. However, when a randomly placed submarine canyon is introduced to the system as a sediment sink, the end results are wholly different: sediment cover is reduced, which in turn increases weathering and erosion rates and causes the entire shoreline to move landward more rapidly. The canyon's alongshore position also affects coastline morphology. When placed offshore of a headland, the submarine canyon captures local sediment

  16. Renormalization group approach to causal bulk viscous cosmological models

    International Nuclear Information System (INIS)

    Belinchon, J A; Harko, T; Mak, M K

    2002-01-01

    The renormalization group method is applied to the study of homogeneous and flat Friedmann-Robertson-Walker type universes, filled with a causal bulk viscous cosmological fluid. The starting point of the study is the consideration of the scaling properties of the gravitational field equations, the causal evolution equation of the bulk viscous pressure and the equations of state. The requirement of scale invariance imposes strong constraints on the temporal evolution of the bulk viscosity coefficient, temperature and relaxation time, thus leading to the possibility of obtaining the bulk viscosity coefficient-energy density dependence. For a cosmological model with bulk viscosity coefficient proportional to the Hubble parameter, we perform the analysis of the renormalization group flow around the scale-invariant fixed point, thereby obtaining the long-time behaviour of the scale factor.

  17. Evaluation of the perceptual grouping parameter in the CTVA model

    Directory of Open Access Journals (Sweden)

    Manuel Cortijo

    2005-01-01

    The CODE Theory of Visual Attention (CTVA) is a mathematical model explaining the effects of grouping by proximity and distance upon reaction times and accuracy of response with regard to elements in the visual display. The predictions of the theory agree quite acceptably in one and two dimensions (CTVA-2D) with the experimental results (reaction times and accuracy of response). The difference between reaction times for the compatible and incompatible responses, known as the response-compatibility effect, is also acceptably predicted, except at small distances and a high number of distractors. Further results using the same paradigm at even smaller distances have now been obtained, showing greater discrepancies. We have therefore introduced a method to evaluate the strength of sensory evidence (the eta parameter), which takes grouping by similarity into account and minimizes these discrepancies.

  18. Extended Group Contribution Model for Polyfunctional Phase Equilibria

    DEFF Research Database (Denmark)

    Abildskov, Jens

    Material and energy balances and equilibrium data form the basis of most design calculations. While material and energy balances may be stated without much difficulty, the design engineer is left with a choice between a wide variety of models for describing phase equilibria in the design of physical separation processes. In a thermodynamic sense, design requires detailed knowledge of activity coefficients in the phases at equilibrium. The prediction of these quantities from a minimum of experimental data is the broad scope of this thesis. Adequate equations exist for predicting vapor-liquid equilibria from data on binary mixtures composed of structurally simple molecules with a single functional group. More complex is the situation with mixtures composed of structurally more complicated molecules, or molecules with more than one functional group. The UNIFAC method is extended to handle...

  19. Trials of large group teaching in Malaysian private universities: a cross sectional study of teaching medicine and other disciplines

    Science.gov (United States)

    2011-01-01

    Background: This is a pilot cross-sectional study using both quantitative and qualitative approaches towards tutors teaching large classes in private universities in the Klang Valley (comprising Kuala Lumpur, its suburbs, and adjoining towns in the State of Selangor) and the State of Negeri Sembilan, Malaysia. The general aim of this study is to determine the difficulties faced by tutors when teaching large groups of students and to outline appropriate recommendations for overcoming them. Findings: Thirty-two academics from six private universities, from faculties such as Medical Sciences, Business, Information Technology, and Engineering, participated in this study. SPSS software was used to analyse the data. The results in general indicate that the conventional instructor-student approach has its shortcomings and requires changes. Interestingly, tutors from Medicine and IT less often faced difficulties and had positive experiences in teaching large groups of students. Conclusion: Several suggestions were proposed to overcome these difficulties, ranging from breaking into smaller classes to adopting innovative teaching and using interactive learning methods incorporating interactive assessment and creative technology, which enhanced student learning. Furthermore, the study provides insights on the trials of large-group teaching, which are clearly identified to help tutors realise their impact on teaching. The suggestions to overcome these difficulties and to maximise student learning can serve as a guideline for tutors who face these challenges. PMID:21902839

  20. An Overview of Westinghouse Realistic Large Break LOCA Evaluation Model

    Directory of Open Access Journals (Sweden)

    Cesare Frepoli

    2008-01-01

    Since the 1988 amendment of the 10 CFR 50.46 rule, Westinghouse has been developing and applying realistic or best-estimate methods to perform LOCA safety analyses. A realistic analysis requires the execution of various realistic LOCA transient simulations where the effects of both model and input uncertainties are ranged and propagated throughout the transients. The outcome is typically a range of results with associated probabilities. The thermal-hydraulic code is the engine of the methodology, but a procedure is developed to assess the code and determine its biases and uncertainties. In addition, inputs to the simulation are also affected by uncertainty, and these uncertainties are incorporated into the process. Several approaches have been proposed and applied in the industry in the framework of best-estimate methods. Most of the implementations, including Westinghouse's, follow the Code Scaling, Applicability and Uncertainty (CSAU) methodology. The Westinghouse methodology is based on the use of the WCOBRA/TRAC thermal-hydraulic code. The paper starts with an overview of the regulations and their interpretation in the context of realistic analysis. The CSAU roadmap is reviewed in the context of its implementation in the Westinghouse evaluation model. An overview of the code (WCOBRA/TRAC) and methodology is provided. Finally, the recent evolution to nonparametric statistics in the current edition of the W methodology is discussed. Sample results of a typical large break LOCA analysis for a PWR are provided.
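
The record does not say which nonparametric criterion the current Westinghouse methodology adopts; a widely used representative in best-estimate-plus-uncertainty analyses is the first-order, one-sided Wilks criterion, sketched here as an assumed example: the smallest number of code runs n such that the sample maximum bounds the desired quantile with the desired confidence satisfies 1 − γⁿ ≥ β for coverage γ and confidence β.

```python
def wilks_sample_size(coverage=0.95, confidence=0.95):
    """Smallest n such that the maximum of n random runs bounds the `coverage`
    quantile with probability >= `confidence` (first-order, one-sided Wilks)."""
    n = 1
    while 1.0 - coverage ** n < confidence:
        n += 1
    return n
```

For the classic 95/95 criterion this gives 59 runs, which is why 59-case sampling appears so often in this literature; tightening the confidence to 99% raises the count to 90.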

  1. Diagnosis of abdominal abscess: A large animal model

    International Nuclear Information System (INIS)

    Harper, R.A.; Meek, A.C.; Chidlow, A.D.; Galvin, D.A.J.; McCollum, C.N.

    1988-01-01

    In order to evaluate potential isotopic techniques for the diagnosis of occult sepsis, an experimental model in large animals is required. Sponges placed in the abdomen of pigs were injected with mixed colonic bacteria. In 4 animals Kefzol (500 mg IV) and Metronidazole (1 g PR) were administered before the sponges were inserted, and these were compared to 4 given no antibiotics. Finally, in 12 pigs, 20 ml of autologous blood was injected into the sponge before antibiotic prophylaxis and bacterial inoculation. ¹¹¹In-leucocyte scans and post mortem examinations were then performed 2 weeks later. Without antibiotic cover, purulent peritonitis developed in all 4 pigs. Prophylactic antibiotics prevented overwhelming sepsis, but at 2 weeks there was only brown fluid surrounding the sponge. Blood added to the sponge produced abscesses in every animal, confirmed by a leucocytosis of 25.35×10⁹ cells/L, ¹¹¹In-leucocyte scanning and post mortem. Culturing the thick yellow pus showed a mixed colony of aerobes and anaerobes, similar to those cultured in clinical practice. An intra-abdominal sponge containing blood and faecal organisms in a pig on prophylactic antibiotics reliably produced a chronic abscess. This model is ideal for studies on alternative methods of abscess diagnosis and radiation dosimetry. (orig.)

  2. EXO-ZODI MODELING FOR THE LARGE BINOCULAR TELESCOPE INTERFEROMETER

    Energy Technology Data Exchange (ETDEWEB)

    Kennedy, Grant M.; Wyatt, Mark C.; Panić, Olja; Shannon, Andrew [Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA (United Kingdom); Bailey, Vanessa; Defrère, Denis; Hinz, Philip M.; Rieke, George H.; Skemer, Andrew J.; Su, Katherine Y. L. [Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721 (United States); Bryden, Geoffrey; Mennesson, Bertrand; Morales, Farisa; Serabyn, Eugene [Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109 (United States); Danchi, William C.; Roberge, Aki; Stapelfeldt, Karl R. [NASA Goddard Space Flight Center, Exoplanets and Stellar Astrophysics, Code 667, Greenbelt, MD 20771 (United States); Haniff, Chris [Cavendish Laboratory, University of Cambridge, JJ Thomson Avenue, Cambridge CB3 0HE (United Kingdom); Lebreton, Jérémy [Infrared Processing and Analysis Center, MS 100-22, California Institute of Technology, 770 South Wilson Avenue, Pasadena, CA 91125 (United States); Millan-Gabet, Rafael [NASA Exoplanet Science Institute, California Institute of Technology, 770 South Wilson Avenue, Pasadena, CA 91125 (United States); and others

    2015-02-01

    Habitable zone dust levels are a key unknown that must be understood to ensure the success of future space missions to image Earth analogs around nearby stars. Current detection limits are several orders of magnitude above the level of the solar system's zodiacal cloud, so characterization of the brightness distribution of exo-zodi down to much fainter levels is needed. To this end, the Large Binocular Telescope Interferometer (LBTI) will detect thermal emission from habitable zone exo-zodi a few times brighter than solar system levels. Here we present a modeling framework for interpreting LBTI observations, which yields dust levels from detections and upper limits that are then converted into predictions and upper limits for the scattered light surface brightness. We apply this model to the HOSTS survey sample of nearby stars; assuming a null depth uncertainty of 10⁻⁴, the LBTI will be sensitive to dust a few times above the solar system level around Sun-like stars, and to even lower dust levels for more massive stars.

  3. ADAPTIVE TEXTURE SYNTHESIS FOR LARGE SCALE CITY MODELING

    Directory of Open Access Journals (Sweden)

    G. Despine

    2015-02-01

    Large scale city models textured with aerial images are well suited for bird's-eye navigation, but generally the image resolution does not allow pedestrian navigation. One solution to this problem is to use high-resolution terrestrial photos, but this requires a huge amount of manual work to remove occlusions. Another solution is to synthesize generic textures with a set of procedural rules and elementary patterns like bricks, roof tiles, doors and windows. This solution may give realistic textures but with no correlation to the ground truth. Instead of using pure procedural modelling, we present a method to extract information from aerial images and adapt the texture synthesis to each building. We describe a workflow allowing the user to drive the information extraction and to select the appropriate texture patterns. We also emphasize the importance of organizing the knowledge about elementary patterns in a texture catalogue, which allows attaching physical information and semantic attributes and executing selection requests. Roofs are processed according to the detected building material. Façades are first described in terms of principal colours, then opening positions are detected and some window features are computed. These features allow selecting the most appropriate patterns from the texture catalogue. We experimented with this workflow on two samples with 20 cm and 5 cm resolution images. The roof texture synthesis and opening detection were successfully conducted on hundreds of buildings. The window characterization is still sensitive to the distortions inherent in the projection of aerial images onto the façades.

  5. Modeling Perceptual Grouping and Figure-Ground Segregation by Means of Active Reentrant Connections

    Science.gov (United States)

    Sporns, Olaf; Tononi, Giulio; Edelman, Gerald M.

    1991-01-01

    The segmentation of visual scenes is a fundamental process of early vision, but the underlying neural mechanisms are still largely unknown. Theoretical considerations as well as neurophysiological findings point to the importance in such processes of temporal correlations in neuronal activity. In a previous model, we showed that reentrant signaling among rhythmically active neuronal groups can correlate responses along spatially extended contours. We now have modified and extended this model to address the problems of perceptual grouping and figure-ground segregation in vision. A novel feature is that the efficacy of the connections is allowed to change on a fast time scale. This results in active reentrant connections that amplify the correlations among neuronal groups. The responses of the model are able to link the elements corresponding to a coherent figure and to segregate them from the background or from another figure in a way that is consistent with the so-called Gestalt laws.

  7. Modelling of heat transfer during torrefaction of large lignocellulosic biomass

    Science.gov (United States)

    Regmi, Bharat; Arku, Precious; Tasnim, Syeda Humaira; Mahmud, Shohel; Dutta, Animesh

    2018-07-01

    Feedstock preparation is a major energy-intensive step in the thermochemical conversion of biomass into fuel. Eliminating the need to grind biomass prior to torrefaction would therefore yield a potential energy saving, as the entire step would be removed. With a view to commercializing torrefaction technology, this study examined heat transfer inside large cylindrical biomass samples during torrefaction, both numerically and experimentally. A numerical axisymmetric 2-D model of heat transfer during torrefaction at 270 °C for 1 h was created in COMSOL Multiphysics 5.1, with heat generation evaluated from the experiment. The model analyzed the temperature distribution within the core and on the surface of the biomass during torrefaction for various sizes. The model results were similar to the experimental results. The effect of the L/D ratio on the temperature distribution within the biomass was observed by varying length and diameter, and compared with experiments in the literature to find an optimal range of cylindrical biomass sizes suitable for torrefaction. The research demonstrated that a cylindrical biomass sample of 50 mm length with an L/D ratio of 2 can be torrefied with a core-surface temperature difference of less than 30 °C, and that sample length has a negligible effect on the core-surface temperature difference when the diameter is fixed at 25 mm. This information will help to design torrefaction processing systems and develop a value chain for biomass supply without an energy-intensive grinding step.
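
    The radial conduction at the heart of such a model can be sketched with an explicit finite-difference scheme. This is a simplified 1-D radial version of the 2-D axisymmetric COMSOL model described above (long cylinder, so axial gradients and heat generation are neglected); the thermal diffusivity is an assumed order-of-magnitude value for wood, not a figure from the paper.

```python
import numpy as np

def core_surface_difference(radius=0.0125, t_surf=270.0, t0=20.0,
                            alpha=1.5e-7, hours=1.0, nr=50):
    """Explicit finite differences for radial conduction in a long cylinder.

    radius: cylinder radius in m (25 mm diameter, as in the abstract);
    alpha: assumed thermal diffusivity of wood, m^2/s (illustrative).
    Returns the core-surface temperature difference after `hours`.
    """
    dr = radius / (nr - 1)
    dt = 0.2 * dr ** 2 / alpha              # stable explicit time step
    r = np.linspace(0.0, radius, nr)
    T = np.full(nr, t0)
    T[-1] = t_surf                          # surface held at 270 deg C
    for _ in range(int(hours * 3600 / dt)):
        lap = np.zeros(nr)
        # interior nodes: d2T/dr2 + (1/r) dT/dr
        lap[1:-1] = ((T[2:] - 2 * T[1:-1] + T[:-2]) / dr ** 2
                     + (T[2:] - T[:-2]) / (2 * dr * r[1:-1]))
        lap[0] = 4 * (T[1] - T[0]) / dr ** 2  # symmetry condition on the axis
        T = T + alpha * dt * lap
        T[-1] = t_surf
    return T[-1] - T[0]

diff = core_surface_difference()
```

    With these assumed properties the 25 mm diameter cylinder equilibrates well within the hour, consistent with the abstract's core-surface difference of under 30 °C.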

  9. Penson-Kolb-Hubbard model: a renormalisation group study

    International Nuclear Information System (INIS)

    Bhattacharyya, Bibhas; Roy, G.K.

    1995-01-01

    We study the Penson-Kolb-Hubbard (PKH) model in one dimension (1d) at half filling by means of a real-space renormalisation group (RG) method. Different phases are identified by studying the RG-flow pattern, the energy gap and various correlation functions. The phase diagram consists of four phases: a spin density wave (SDW), a strong-coupling superconducting phase (SSC), a weak-coupling superconducting phase (WSC) and a nearly metallic phase. For negative values of the pair-hopping amplitude introduced in this model, the pair-pair correlation indicates a superconducting phase in which the centre of mass of the pairs moves with momentum π. (author). 7 refs., 4 figs

  10. Functional renormalization group study of the Anderson–Holstein model

    International Nuclear Information System (INIS)

    Laakso, M A; Kennes, D M; Jakobs, S G; Meden, V

    2014-01-01

    We present a comprehensive study of the spectral and transport properties in the Anderson–Holstein model both in and out of equilibrium using the functional renormalization group (fRG). We show how the previously established machinery of Matsubara and Keldysh fRG can be extended to include the local phonon mode. Based on the analysis of spectral properties in equilibrium we identify different regimes depending on the strength of the electron–phonon interaction and the frequency of the phonon mode. We supplement these considerations with analytical results from the Kondo model. We also calculate the nonlinear differential conductance through the Anderson–Holstein quantum dot and find clear signatures of the presence of the phonon mode. (paper)

  11. Environmental Impacts of Large Scale Biochar Application Through Spatial Modeling

    Science.gov (United States)

    Huber, I.; Archontoulis, S.

    2017-12-01

    In an effort to study the environmental (emissions, soil quality) and production (yield) impacts of biochar application at regional scales, we coupled the APSIM-Biochar model with the pSIMS parallel platform. So far the majority of biochar research has concentrated on lab-to-field studies to advance scientific knowledge; regional-scale assessments are needed to assist decision making. The overall objective of this simulation study was to identify areas in the USA that gain the most environmentally from biochar application, as well as areas for which our model predicts a notable yield increase due to the addition of biochar. We present the modifications in both the APSIM-Biochar and pSIMS components that were necessary to facilitate these large-scale model runs across several regions in the United States at a resolution of 5 arcminutes. This study uses the AgMERRA global climate data set (1980-2010) and the Global Soil Dataset for Earth Systems modeling as a basis for its simulations, together with local management operations for maize and soybean cropping systems and different biochar application rates. The regional-scale simulation analysis is in progress. Preliminary results showed that the model predicts that high-quality soils (particularly those common to Iowa cropping systems) receive little, if any, production benefit from biochar. However, soils with low soil organic matter (below 0.5%) do get a noteworthy yield increase of around 5-10% in the best cases. We also found N2O emissions to be spatially and temporally specific, increasing in some areas and decreasing in others due to biochar application. In contrast, we found increases in soil organic carbon and plant-available water in all soils (top 30 cm) due to biochar application. The magnitude of these increases (% change from the control) was larger in soils with low organic matter (below 1.5%) and smaller in soils with high organic matter (above 3%), and was also dependent on biochar

  12. Group navigation and the "many-wrongs principle" in models of animal movement.

    Science.gov (United States)

    Codling, E A; Pitchford, J W; Simpson, S D

    2007-07-01

    Traditional studies of animal navigation over both long and short distances have usually considered the orientation ability of the individual only, without reference to the implications of group membership. However, recent work has suggested that being in a group can significantly improve the ability of an individual to align toward and reach a target direction or point, even when all group members have limited navigational ability and there are no leaders. This effect is known as the "many-wrongs principle" since the large number of individual navigational errors across the group are suppressed by interactions and group cohesion. In this paper, we simulate the many-wrongs principle using a simple individual-based model of movement based on a biased random walk that includes group interactions. We study the ability of the group as a whole to reach a target given different levels of individual navigation error, group size, interaction radius, and environmental turbulence. In scenarios with low levels of environmental turbulence, simulation results demonstrate a navigational benefit from group membership, particularly for small group sizes. In contrast, when movement takes place in a highly turbulent environment, simulation results suggest that the best strategy is to navigate as individuals rather than as a group.

  13. Renormalization group approach to a p-wave superconducting model

    International Nuclear Information System (INIS)

    Continentino, Mucio A.; Deus, Fernanda; Caldas, Heron

    2014-01-01

    We present an exact renormalization group (RG) treatment of a one-dimensional p-wave superconductor. The model, proposed by Kitaev, consists of a chain of spinless fermions with a p-wave gap. It is a paradigmatic model of great current interest, since it has a weak-pairing superconducting phase with Majorana fermions at the ends of the chain, which are predicted to be useful for quantum computation. The RG allows us to obtain the phase diagram of the model and to study the quantum phase transition from the weak- to the strong-pairing phase. It yields the attractors of these phases and the critical exponents of the weak-to-strong-pairing transition. We show that the weak-pairing phase of the model is governed by a chaotic attractor, non-trivial in both its topological and its RG properties. In the strong-pairing phase the RG flow is towards a conventional strong-coupling fixed point. Finally, we propose an alternative way of obtaining p-wave superconductivity in a one-dimensional system without spin–orbit interaction.

  14. Description of group-theoretical model of developed turbulence

    International Nuclear Information System (INIS)

    Saveliev, V L; Gorokhovski, M A

    2008-01-01

    We propose to associate the phenomenon of stationary turbulence with the special self-similar solutions of the Euler equations. These solutions represent the linear superposition of eigenfields of spatial symmetry subgroup generators and imply their dependence on time through the parameter of the symmetry transformation only. From this model, it follows that for developed turbulent process, changing the scale of averaging (filtering) of the velocity field is equivalent to composition of scaling, translation and rotation transformations. We call this property a renormalization-group invariance of filtered turbulent fields. The renormalization group invariance provides an opportunity to transform the averaged Navier-Stokes equation over a small scale (inner threshold of the turbulence) to larger scales by simple scaling. From the methodological point of view, it is significant to note that the turbulent viscosity term appeared not as a result of averaging of the nonlinear term in the Navier-Stokes equation, but from the molecular viscosity term with the help of renormalization group transformation.

  15. On the standard model group in F-theory

    International Nuclear Information System (INIS)

    Choi, Kang-Sin

    2014-01-01

    We analyze the standard model gauge group SU(3) x SU(2) x U(1) constructed in F-theory. The non-Abelian part SU(3) x SU(2) is described by a surface singularity of Kodaira type. Blow-up analysis shows that the non-Abelian part is distinguished from the naive product of SU(3) and SU(2): it should be a rank-three group along the chain of E_n groups, because it has a non-generic gauge symmetry enhancement structure responsible for the desirable matter curves. The Abelian part U(1) is constructed from a globally valid two-form with the desired gauge quantum numbers, using a method similar to the decomposition (factorization) method of the spectral cover. This technique makes use of an extra section in the elliptic fiber of the Calabi-Yau manifold on which F-theory is compactified. Conventional gauge coupling unification of SU(5) is achieved without requiring a threshold correction from the flux along the hypercharge direction. (orig.)

  16. Progress in lung modelling by the ICRP Task Group

    International Nuclear Information System (INIS)

    James, A.C.; Birchall, A.

    1989-01-01

    The Task Group has reviewed the data on: (a) morphology and physiology of the human respiratory tract; (b) inspirability of aerosols and their deposition in anatomical regions as functions of respiratory parameters; (c) clearance of particles within and from the respiratory tract; (d) absorption of different materials into the blood in humans and in animals. The Task Group proposes a new model which predicts the deposition, retention and systemic uptake of materials, enabling doses absorbed by different respiratory tissues and other body organs to be evaluated. In the proposed model, clearance is described in terms of competition between the processes moving particles to the oropharynx or to lymph nodes and that of absorption into the blood. From studies with human subjects, characteristic rates and pathways are defined to represent mechanical clearance of particles from each region, which do not depend on the material. Conversely, the absorption rate is determined solely by the material: it is assumed to be the same in all parts of the respiratory tract and in other animal species. For several of the radiologically important forms of actinides, absorption rates can be derived from animal experiments, or, in some cases, directly from human data. Otherwise, default values are used, based on the current D, W and Y classification system. (author)
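
    The competition described here, mechanical transport versus absorption into blood, amounts to parallel first-order processes, so the fraction ultimately absorbed is s/(m + s). A minimal sketch with arbitrary illustrative rates (not ICRP default values):

```python
import numpy as np

def clearance(n0=1.0, m=0.010, s=0.005, t_end=600.0, dt=0.1):
    """Competing first-order clearance from one respiratory-tract region.

    m: mechanical transport rate to the oropharynx/lymph nodes (per day),
       material-independent; s: absorption rate to blood (per day),
       material-dependent. Values here are illustrative only.
    """
    t = np.arange(0.0, t_end, dt)
    n = n0 * np.exp(-(m + s) * t)           # activity still in the region
    absorbed = s / (m + s) * (n0 - n)       # cumulative uptake to blood
    mechanical = m / (m + s) * (n0 - n)     # cumulative mechanical clearance
    return t, n, absorbed, mechanical

t, n, absorbed, mechanical = clearance()
# activity is conserved, and the two routes split it in the ratio s : m
```

    With m twice s, mechanical clearance ultimately removes two thirds of the deposit and absorption one third, whatever material-specific value of s is chosen.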

  17. Renormalization group flow of scalar models in gravity

    International Nuclear Information System (INIS)

    Guarnieri, Filippo

    2014-01-01

    In this Ph.D. thesis we study the issue of the renormalizability of gravitation in the context of the renormalization group (RG), employing both perturbative and non-perturbative techniques. In particular, we focus on different gravitational models and approximations in which a central role is played by a scalar degree of freedom, since their RG flow is easier to analyze. We restrict our interest to two quantum gravity approaches that have gained a lot of attention recently, namely the asymptotic safety scenario for gravity and Horava-Lifshitz quantum gravity. In the so-called asymptotic safety conjecture the high-energy regime of gravity is controlled by a non-Gaussian fixed point which ensures non-perturbative renormalizability and finiteness of the correlation functions. We investigate the existence of such a non-trivial fixed point using the functional renormalization group, a continuum version of Wilson's non-perturbative renormalization group. In particular, we quantize the sole conformal degree of freedom, an approximation that has been shown to lead to a qualitatively correct picture. The question of the existence of a non-Gaussian fixed point in an infinite-dimensional parameter space, that is, for a generic f(R) theory, cannot however be studied using such a conformally reduced model. Hence we study it by quantizing a dynamically equivalent scalar-tensor theory, i.e. a generic Brans-Dicke theory with ω=0 in the local potential approximation. Finally, using a perturbative RG scheme, we investigate the asymptotic freedom of Horava-Lifshitz gravity, an approach based on the emergence of an anisotropy between space and time which promotes Newton's constant to a marginal coupling and explicitly preserves unitarity. In particular we evaluate the one-loop correction in 2+1 dimensions, quantizing only the conformal degree of freedom.

  18. An Example of Large-group Drama and Cross-year Peer Assessment for Teaching Science in Higher Education

    Science.gov (United States)

    Sloman, Katherine; Thompson, Richard

    2010-09-01

    Undergraduate students pursuing a three-year marine biology degree programme (n = 86) experienced a large-group drama aimed at allowing them to explore how scientific research is funded and the associated links between science and society. In the drama, Year 1 students played the "general public" who decided which environmental research areas should be prioritised for funding, Year 2 students were the "scientists" who had to prepare research proposals which they hoped to get funded, and Year 3 students were the "research panel" who decided which proposals to fund with input from the priorities set by the "general public". The drama, therefore, included an element of cross-year peer assessment where Year 3 students evaluated the research proposals prepared by the Year 2 students. Questionnaires were distributed at the end of the activity to gather: (1) student perceptions on the cross-year nature of the exercise, (2) the use of peer assessment, and (3) their overall views on the drama. The students valued the opportunity to interact with their peers from other years of the degree programme and most were comfortable with the use of cross-year peer assessment. The majority of students felt that they had increased their knowledge of how research proposals are funded and the perceived benefits of the large-group drama included increased critical thinking ability, confidence in presenting work to others, and enhanced communication skills. Only one student did not strongly advocate the use of this large-group drama in subsequent years.

  19. Analysis and Design Environment for Large Scale System Models and Collaborative Model Development, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — As NASA modeling efforts grow more complex and more distributed among many working groups, new tools and technologies are required to integrate their efforts...

  20. Empirical Models of Social Learning in a Large, Evolving Network.

    Directory of Open Access Journals (Sweden)

    Ayşe Başar Bener

    Full Text Available This paper advances theories of social learning through an empirical examination of how social networks change over time. Social networks are important for learning because they constrain individuals' access to information about the behaviors and cognitions of other people. Using data on a large social network of mobile device users over a one-month time period, we test three hypotheses: (1) attraction homophily causes individuals to form ties on the basis of attribute similarity, (2) aversion homophily causes individuals to delete existing ties on the basis of attribute dissimilarity, and (3) social influence causes individuals to adopt the attributes of others they share direct ties with. Statistical models offer varied degrees of support for all three hypotheses and show that these mechanisms are more complex than assumed in prior work. Although homophily is normally thought of as a process of attraction, people also avoid relationships with others who are different. These mechanisms have distinct effects on network structure. While social influence does help explain behavior, people tend to follow global trends more than they follow their friends.
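
    The three hypothesised mechanisms can be caricatured in a toy tie-evolution simulation. This is an illustrative sketch only (one binary attribute, arbitrary rates), not the statistical models the paper estimates:

```python
import numpy as np

def evolve(n=60, steps=4000, p_form=0.2, p_drop=0.2, p_influence=0.05, rng=0):
    """Random-pair dynamics with the paper's three mechanisms (toy version)."""
    rng = np.random.default_rng(rng)
    attrs = rng.integers(0, 2, n)            # one binary attribute per person
    adj = np.zeros((n, n), dtype=bool)       # symmetric tie matrix
    for _ in range(steps):
        i, j = rng.choice(n, 2, replace=False)
        same = attrs[i] == attrs[j]
        if not adj[i, j] and same and rng.random() < p_form:
            adj[i, j] = adj[j, i] = True     # (1) attraction homophily
        elif adj[i, j] and not same and rng.random() < p_drop:
            adj[i, j] = adj[j, i] = False    # (2) aversion homophily
        elif adj[i, j] and rng.random() < p_influence:
            attrs[j] = attrs[i]              # (3) social influence
    return attrs, adj

attrs, adj = evolve()
ii, jj = np.triu_indices_from(adj, k=1)
edges = adj[ii, jj]
same_frac = (attrs[ii] == attrs[jj])[edges].mean()
# surviving ties are concentrated between attribute-similar individuals
```

    Even in this crude version, formation restricted to similar pairs plus deletion of dissimilar ties drives the network toward attribute-assortative structure, the qualitative pattern the paper tests for.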

  1. The group-as-a-whole-object relations model of group psychotherapy.

    Science.gov (United States)

    Rosen, D; Stukenberg, K W; Saeks, S

    2001-01-01

    The authors review the theoretical basis of group psychotherapy performed at The Menninger Clinic and demonstrate how the theory has been put into practice on two different types of inpatient units. The fundamental elements of the theory and practice used can be traced to object relations theory as originally proposed by Melanie Klein. Her work with individuals was directly applied to working with groups by Ezriel and Bion, who focused on interpreting group tension. More modern approaches have reintegrated working with individual concerns while also attending to the group-as-a-whole. Historically, these principles have been applied to long-term group treatment. The authors apply the concepts from the group-as-a-whole literature to short- and medium-length inpatient groups with open membership. They offer clinical examples of the application of these principles in short-term inpatient settings in groups with open membership.

  2. Lumped hydrological models is an Occam's razor for runoff modeling in large Russian Arctic basins

    OpenAIRE

    Ayzel Georgy

    2018-01-01

    This study investigates the ability of three lumped hydrological models to predict the daily runoff of large-scale Arctic basins for the modern period (1979-2014) under substantial data scarcity. All models were driven only by a meteorological forcing reanalysis dataset, without any additional information about the landscape, soil or vegetation cover properties of the studied basins. We found limitations of model parameters calibration in ungauged basins using global optimization alg...

  3. Dynamic Model Averaging in Large Model Spaces Using Dynamic Occam’s Window*

    Science.gov (United States)

    Onorante, Luca; Raftery, Adrian E.

    2015-01-01

    Bayesian model averaging has become a widely used approach to accounting for uncertainty about the structural form of the model generating the data. When data arrive sequentially and the generating model can change over time, Dynamic Model Averaging (DMA) extends model averaging to deal with this situation. Often in macroeconomics, however, many candidate explanatory variables are available and the number of possible models becomes too large for DMA to be applied in its original form. We propose a new method for this situation which allows us to perform DMA without considering the whole model space, but using a subset of models and dynamically optimizing the choice of models at each point in time. This yields a dynamic form of Occam’s window. We evaluate the method in the context of the problem of nowcasting GDP in the Euro area. We find that its forecasting performance compares well with that of other methods. PMID:26917859
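
    The core recursion behind DMA is simple: model probabilities are flattened by a forgetting factor, updated by each model's one-step predictive likelihood, and the dynamic Occam's window then restricts averaging to models whose probability is within a cutoff of the best. A minimal Gaussian-likelihood sketch with toy constant-level "models" and arbitrary parameter values, not the paper's GDP-nowcasting application:

```python
import numpy as np

def dma_occam(y, preds, sigma2, alpha=0.99, cutoff=0.001):
    """Dynamic model averaging over K models with a dynamic Occam's window.

    y: observations (T,); preds: one-step-ahead means (T, K);
    sigma2: predictive variances (K,); alpha: forgetting factor;
    cutoff: keep models within `cutoff` of the top model probability.
    """
    T, K = preds.shape
    w = np.full(K, 1.0 / K)                 # model probabilities
    active = np.ones(K, dtype=bool)         # current Occam's window
    forecast = np.empty(T)
    for t in range(T):
        forecast[t] = np.sum(w[active] * preds[t, active]) / w[active].sum()
        # forgetting, then Bayes update with the Gaussian predictive
        # density (up to a constant that cancels in the normalisation)
        lik = np.exp(-0.5 * (y[t] - preds[t]) ** 2 / sigma2) / np.sqrt(sigma2)
        w = w ** alpha * lik
        w /= w.sum()
        active = w >= cutoff * w.max()      # dynamic Occam's window
    return forecast, w

rng = np.random.default_rng(0)
y = 1.0 + 0.1 * rng.normal(size=100)        # data generated around level 1
preds = np.tile([0.0, 1.0, 2.0], (100, 1))  # three constant-level "models"
forecast, w = dma_occam(y, preds, sigma2=np.full(3, 0.01))
# the posterior weight concentrates on the correct (middle) model
```

    After a few observations only the correct model survives the window, so each forecast is computed from a small active subset rather than the whole model space, which is the computational point of the method.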

  5. Exploring the Impact of Students' Learning Approach on Collaborative Group Modeling of Blood Circulation

    Science.gov (United States)

    Lee, Shinyoung; Kang, Eunhee; Kim, Heui-Baik

    2015-01-01

    This study aimed to explore the effect on group dynamics of statements associated with deep learning approaches (DLA) and their contribution to cognitive collaboration and model development during group modeling of blood circulation. A group was selected for an in-depth analysis of collaborative group modeling. This group constructed a model in a…

  6. Working Group Reports: Working Group 1 - Software Systems Design and Implementation for Environmental Modeling

    Science.gov (United States)

    The purpose of the Interagency Steering Committee on Multimedia Environmental Modeling (ISCMEM) is to foster the exchange of information about environmental modeling tools, modeling frameworks, and environmental monitoring databases that are all in the public domain. It is compos...

  7. The positive group affect spiral : a dynamic model of the emergence of positive affective similarity in work groups

    NARCIS (Netherlands)

    Walter, F.; Bruch, H.

    This conceptual paper seeks to clarify the process of the emergence of positive collective affect. Specifically, it develops a dynamic model of the emergence of positive affective similarity in work groups. It is suggested that positive group affective similarity and within-group relationship

  8. Key Informant Models for Measuring Group-Level Variables in Small Groups: Application to Plural Subject Theory

    Science.gov (United States)

    Algesheimer, René; Bagozzi, Richard P.; Dholakia, Utpal M.

    2018-01-01

    We offer a new conceptualization and measurement models for constructs at the group-level of analysis in small group research. The conceptualization starts with classical notions of group behavior proposed by Tönnies, Simmel, and Weber and then draws upon plural subject theory by philosophers Gilbert and Tuomela to frame a new perspective…

  9. Phase structure of NJL model with weak renormalization group

    Science.gov (United States)

    Aoki, Ken-Ichi; Kumamoto, Shin-Ichiro; Yamada, Masatoshi

    2018-06-01

    We analyze the chiral phase structure of the Nambu-Jona-Lasinio model at finite temperature and density by using the functional renormalization group (FRG). The renormalization group (RG) equation for the fermionic effective potential V(σ; t) is given as a partial differential equation, where σ := ψ̄ψ and t is a dimensionless RG scale. When the dynamical chiral symmetry breaking (DχSB) occurs at a certain scale tc, V(σ; t) develops singularities originating from the phase transitions, and one cannot follow RG flows beyond tc. In this study, we introduce the weak-solution method for the RG equation in order to follow the RG flows past the DχSB and to evaluate the dynamical mass and the chiral condensate at low energy scales. It is shown that the weak solution of the RG equation correctly captures vacuum structures and critical phenomena within the pure fermionic system. We show the chiral phase diagram in terms of temperature, chemical potential and the four-Fermi coupling constant.

  10. What determines area burned in large landscapes? Insights from a decade of comparative landscape-fire modelling

    Science.gov (United States)

    Geoffrey J. Cary; Robert E. Keane; Mike D. Flannigan; Ian D. Davies; Russ A. Parsons

    2015-01-01

    Understanding what determines area burned in large landscapes is critical for informing wildland fire management in fire-prone environments and for representing fire activity in Dynamic Global Vegetation Models. For the past ten years, a group of landscape-fire modellers have been exploring the relative influence of key determinants of area burned in temperate and...

  11. Analysis of 16S libraries of mouse gastrointestinal microflora reveals a large new group of mouse intestinal bacteria.

    Science.gov (United States)

    Salzman, Nita H; de Jong, Hendrik; Paterson, Yvonne; Harmsen, Hermie J M; Welling, Gjalt W; Bos, Nicolaas A

    2002-11-01

    Total genomic DNA from samples of intact mouse small intestine, large intestine, caecum and faeces was used as template for PCR amplification of 16S rRNA gene sequences with conserved bacterial primers. Phylogenetic analysis of the amplification products revealed 40 unique 16S rDNA sequences. Of these sequences, 25% (10/40) corresponded to described intestinal organisms of the mouse, including Lactobacillus spp., Helicobacter spp., segmented filamentous bacteria and members of the altered Schaedler flora (ASF360, ASF361, ASF502 and ASF519); 75% (30/40) represented novel sequences. A large number (11/40) of the novel sequences revealed a new operational taxonomic unit (OTU) belonging to the Cytophaga-Flavobacter-Bacteroides phylum, which the authors named 'mouse intestinal bacteria'. 16S rRNA probes were developed for this new OTU. Upon analysis of the novel sequences, eight were found to cluster within the Eubacterium rectale-Clostridium coccoides group and three clustered within the Bacteroides group. One of the novel sequences was distantly related to Verrucomicrobium spinosum and one was distantly related to Bacillus mycoides. Oligonucleotide probes specific for the 16S rRNA of these novel clones were generated. Using a combination of four previously described and four newly designed probes, approximately 80% of bacteria recovered from the murine large intestine and 71% of bacteria recovered from the murine caecum could be identified by fluorescence in situ hybridization (FISH).

  12. Large-scale modeling of rain fields from a rain cell deterministic model

    Science.gov (United States)

    Féral, Laurent; Sauvageot, Henri; Castanet, Laurent; Lemorton, Joël; Cornet, Frédéric; Leconte, Katia

    2006-04-01

    A methodology to simulate two-dimensional rain rate fields at large scale (1000 × 1000 km², the scale of a satellite telecommunication beam or a terrestrial fixed broadband wireless access network) is proposed. It relies on a cellular decomposition of the rain rate field. At small scale (~20 × 20 km²), the rain field is split up into its macroscopic components, the rain cells, described by the Hybrid Cell (HYCELL) cellular model. At midscale (~150 × 150 km²), the rain field results from the conglomeration of rain cells modeled by HYCELL. To account for the rain cell spatial distribution at midscale, the latter is modeled by a doubly aggregative isotropic random walk, the optimal parameterization of which is derived from radar observations at midscale. The extension of the simulation area from the midscale to the large scale (1000 × 1000 km²) requires the modeling of the weather frontal area. The latter is first modeled by a Gaussian field with anisotropic covariance function. The Gaussian field is then turned into a binary field, giving the large-scale locations over which it is raining. This transformation requires the definition of the rain occupation rate over large-scale areas. Its probability distribution is determined from observations by the French operational radar network ARAMIS. The coupling with the rain field modeling at midscale is immediate whenever the large-scale field is split up into midscale subareas. The rain field thus generated accounts for the local CDF at each point, defining a structure spatially correlated at small scale, midscale, and large scale. It is then suggested that this approach be used by system designers to evaluate diversity gain, terrestrial path attenuation, or slant path attenuation for different azimuth and elevation angle directions.
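
The Gaussian-to-binary step described above is easy to prototype. In the sketch below (illustrative only: `gaussian_filter` smoothing stands in for the paper's anisotropic covariance model, and all names and numbers are our own), an anisotropic Gaussian field is thresholded at the quantile matching a prescribed rain occupation rate:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def binary_rain_mask(shape, corr_scales, occupation_rate, rng):
    """Threshold an anisotropic Gaussian field so that a fraction
    `occupation_rate` of the area is flagged as raining."""
    noise = rng.standard_normal(shape)
    # Anisotropic covariance approximated by per-axis smoothing scales.
    field = gaussian_filter(noise, sigma=corr_scales)
    # Thresholding at the (1 - rate) quantile yields a binary raining/dry field.
    threshold = np.quantile(field, 1.0 - occupation_rate)
    return field >= threshold

rng = np.random.default_rng(0)
mask = binary_rain_mask((200, 200), corr_scales=(12, 4),
                        occupation_rate=0.1, rng=rng)
print(mask.mean())  # close to 0.1 by construction
```

The quantile threshold pins the occupation rate exactly, which is what lets the binary field be calibrated against the ARAMIS-derived occupation-rate distribution.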

  13. Many large medical groups will need to acquire new skills and tools to be ready for payment reform.

    Science.gov (United States)

    Mechanic, Robert; Zinner, Darren E

    2012-09-01

    Federal and state policy makers are now experimenting with programs that hold health systems accountable for delivering care under predetermined budgets to help control health care spending. To assess how well prepared medical groups are to participate in these arrangements, we surveyed twenty-one large, multispecialty groups. We evaluated their participation in risk contracts such as capitation and the degree of operational support associated with these arrangements. On average, about 25 percent of the surveyed groups' patient care revenue stemmed from global capitation contracts and 9 percent from partial capitation or shared risk contracts. Groups with a larger share of revenue from risk contracts were more likely than others to have salaried physicians, advanced data management capabilities, preferred relationships with efficient specialists, and formal programs to coordinate care for high-risk patients. Our findings suggest that medical groups that lack risk contracting experience may need to develop new competencies and infrastructure to successfully navigate federal payment reform programs, including information systems that track performance and support clinicians in delivering good care; physician-level reward systems that are aligned with organizational goals; sound physician leadership; and an organizational commitment to supporting performance improvement. The difficulty of implementing these changes in complex health care organizations should not be underestimated.

  14. Multi-scale inference of interaction rules in animal groups using Bayesian model selection.

    Directory of Open Access Journals (Sweden)

    Richard P Mann

    Full Text Available Inference of interaction rules of animals moving in groups usually relies on an analysis of large scale system behaviour. Models are tuned through repeated simulation until they match the observed behaviour. More recent work has used the fine scale motions of animals to validate and fit the rules of interaction of animals in groups. Here, we use a Bayesian methodology to compare a variety of models to the collective motion of glass prawns (Paratya australiensis). We show that these exhibit a stereotypical 'phase transition', whereby an increase in density leads to the onset of collective motion in one direction. We fit models to this data, which range from: a mean-field model where all prawns interact globally; to a spatial Markovian model where prawns are self-propelled particles influenced only by the current positions and directions of their neighbours; up to non-Markovian models where prawns have 'memory' of previous interactions, integrating their experiences over time when deciding to change behaviour. We show that the mean-field model fits the large scale behaviour of the system, but does not capture the observed locality of interactions. Traditional self-propelled particle models fail to capture the fine scale dynamics of the system. The most sophisticated model, the non-Markovian model, provides a good match to the data at both the fine scale and in terms of reproducing global dynamics, while maintaining a biologically plausible perceptual range. We conclude that prawns' movements are influenced by not just the current direction of nearby conspecifics, but also those encountered in the recent past. Given the simplicity of prawns as a study system our research suggests that self-propelled particle models of collective motion should, if they are to be realistic at multiple biological scales, include memory of previous interactions and other non-Markovian effects.

  15. Multi-scale inference of interaction rules in animal groups using Bayesian model selection.

    Directory of Open Access Journals (Sweden)

    Richard P Mann

    2012-01-01

    Full Text Available Inference of interaction rules of animals moving in groups usually relies on an analysis of large scale system behaviour. Models are tuned through repeated simulation until they match the observed behaviour. More recent work has used the fine scale motions of animals to validate and fit the rules of interaction of animals in groups. Here, we use a Bayesian methodology to compare a variety of models to the collective motion of glass prawns (Paratya australiensis). We show that these exhibit a stereotypical 'phase transition', whereby an increase in density leads to the onset of collective motion in one direction. We fit models to this data, which range from: a mean-field model where all prawns interact globally; to a spatial Markovian model where prawns are self-propelled particles influenced only by the current positions and directions of their neighbours; up to non-Markovian models where prawns have 'memory' of previous interactions, integrating their experiences over time when deciding to change behaviour. We show that the mean-field model fits the large scale behaviour of the system, but does not capture fine scale rules of interaction, which are primarily mediated by physical contact. Conversely, the Markovian self-propelled particle model captures the fine scale rules of interaction but fails to reproduce global dynamics. The most sophisticated model, the non-Markovian model, provides a good match to the data at both the fine scale and in terms of reproducing global dynamics. We conclude that prawns' movements are influenced by not just the current direction of nearby conspecifics, but also those encountered in the recent past. Given the simplicity of prawns as a study system our research suggests that self-propelled particle models of collective motion should, if they are to be realistic at multiple biological scales, include memory of previous interactions and other non-Markovian effects.
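
The model-ranking step can be illustrated generically. The snippet below is not the paper's inference (which is fully Bayesian); it shows the common BIC shortcut, where each candidate's maximized log-likelihood is penalized by its parameter count, and all numbers are invented:

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian information criterion: lower is better.
    -2 log L + k log n approximates -2 log(evidence) for large n."""
    return -2.0 * log_likelihood + n_params * math.log(n_obs)

# Hypothetical fits of three model classes to the same trajectory data.
models = {
    "mean-field":    bic(-10500.0, n_params=3, n_obs=5000),
    "markovian-spp": bic(-10120.0, n_params=5, n_obs=5000),
    "non-markovian": bic(-9980.0,  n_params=7, n_obs=5000),
}
best = min(models, key=models.get)
print(best)  # the extra parameters are justified by the better fit
```

The point mirrors the abstract's conclusion: a more complex (non-Markovian) model can still win once the fit improvement outweighs the complexity penalty.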

  16. Establishing the pig as a large animal model for vaccine development against human cancer

    DEFF Research Database (Denmark)

    Overgaard, Nana Haahr; Frøsig, Thomas Mørch; Welner, Simon

    2015-01-01

    Immunotherapy has increased overall survival of metastatic cancer patients, and cancer antigens are promising vaccine targets. To fulfill the promise, appropriate tailoring of the vaccine formulations to mount in vivo cytotoxic T cell (CTL) responses toward co-delivered cancer antigens is essential...... and the porcine immunome is more closely related to the human counterpart, we here introduce pigs as a supplementary large animal model for human cancer vaccine development. IDO and RhoC, both important in human cancer development and progression, were used as vaccine targets and 12 pigs were immunized with overlapping......C-derived peptides across all groups with no adjuvant being superior. These findings support the further use of pigs as a large animal model for vaccine development against human cancer.

  17. Multicriteria decision group model for the selection of suppliers

    Directory of Open Access Journals (Sweden)

    Luciana Hazin Alencar

    2008-08-01

    Full Text Available Several authors have been studying group decision making over the years, which indicates how relevant it is. This paper presents a multicriteria group decision model based on ELECTRE IV and VIP Analysis methods, to those cases where there is great divergence among the decision makers. This model includes two stages. In the first, the ELECTRE IV method is applied and a collective criteria ranking is obtained. In the second, using criteria ranking, VIP Analysis is applied and the alternatives are selected. To illustrate the model, a numerical application in the context of the selection of suppliers in project management is used. The suppliers that form part of the project team have a crucial role in project management. They are involved in a network of connected activities that can jeopardize the success of the project, if they are not undertaken in an appropriate way. The question tackled is how to select service suppliers for a project on behalf of an enterprise that assists the multiple objectives of the decision-makers.

  18. METHODOLOGY AND CALCULATIONS FOR THE ASSIGNMENT OF WASTE GROUPS FOR THE LARGE UNDERGROUND WASTE STORAGE TANKS AT THE HANFORD SITE

    Energy Technology Data Exchange (ETDEWEB)

    WEBER RA

    2009-01-16

    The Hanford Site contains 177 large underground radioactive waste storage tanks (28 double-shell tanks and 149 single-shell tanks). These tanks are categorized into one of three waste groups (A, B, and C) based on their waste and tank characteristics. These waste group assignments reflect a tank's propensity to retain a significant volume of flammable gases and the potential of the waste to release retained gas by a buoyant displacement gas release event. Assignments of waste groups to the 177 double-shell tanks and single-shell tanks, as reported in this document, are based on a Monte Carlo analysis of three criteria. The first criterion is the headspace flammable gas concentration following release of retained gas. This criterion determines whether the tank contains sufficient retained gas such that the well-mixed headspace flammable gas concentration would reach 100% of the lower flammability limit if the entire tank's retained gas were released. If the volume of retained gas is not sufficient to reach 100% of the lower flammability limit, then flammable conditions cannot be reached and the tank is classified as a waste group C tank regardless of how the gas is released. The second criterion is the energy ratio, which considers whether there is sufficient supernatant on top of the saturated solids such that gas-bearing solids have the potential energy required to break up the material and release gas. Tanks that are not waste group C tanks and that have an energy ratio < 3.0 do not have sufficient potential energy to break up material and release gas and are assigned to waste group B. These tanks are considered to represent a potential induced flammable gas release hazard, but no spontaneous buoyant displacement flammable gas release hazard. Tanks that are not waste group C tanks and have an energy ratio ≥ 3.0, but that pass the third criterion (buoyancy ratio < 1.0, see below) are also assigned to waste group B. Even though the designation as
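
Setting aside the Monte Carlo treatment of uncertainty, the three-criterion logic described above reduces to a simple decision cascade. A point-estimate sketch (function and argument names are ours, not from the report):

```python
def assign_waste_group(headspace_lfl_fraction, energy_ratio, buoyancy_ratio):
    """Point-estimate version of the three-criterion waste group logic.

    headspace_lfl_fraction: well-mixed headspace flammable gas
        concentration, as a fraction of the lower flammability limit,
        if the tank's entire retained gas inventory were released.
    """
    # Criterion 1: cannot reach 100% LFL -> group C, however gas is released.
    if headspace_lfl_fraction < 1.0:
        return "C"
    # Criterion 2: energy ratio < 3.0 -> insufficient potential energy to
    # break up the solids; induced-release hazard only -> group B.
    if energy_ratio < 3.0:
        return "B"
    # Criterion 3: buoyancy ratio < 1.0 -> still group B.
    if buoyancy_ratio < 1.0:
        return "B"
    # Remaining tanks pose a spontaneous buoyant-displacement hazard.
    return "A"

print(assign_waste_group(0.4, 5.0, 2.0))  # C
print(assign_waste_group(1.2, 2.0, 2.0))  # B
print(assign_waste_group(1.2, 5.0, 2.0))  # A
```

In the report itself each criterion is evaluated over sampled waste-property distributions rather than single values, so a tank's group reflects the probability of satisfying each test.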

  19. METHODOLOGY AND CALCULATIONS FOR THE ASSIGNMENT OF WASTE GROUPS FOR THE LARGE UNDERGROUND WASTE STORAGE TANKS AT THE HANFORD SITE

    Energy Technology Data Exchange (ETDEWEB)

    FOWLER KD

    2007-12-27

    This document categorizes each of the large waste storage tanks into one of several categories based on each tank's waste characteristics. These waste group assignments reflect a tank's propensity to retain a significant volume of flammable gases and the potential of the waste to release retained gas by a buoyant displacement event. Revision 7 is the annual update of the calculations of the flammable gas Waste Groups for DSTs and SSTs. The Hanford Site contains 177 large underground radioactive waste storage tanks (28 double-shell tanks and 149 single-shell tanks). These tanks are categorized into one of three waste groups (A, B, and C) based on their waste and tank characteristics. These waste group assignments reflect a tank's propensity to retain a significant volume of flammable gases and the potential of the waste to release retained gas by a buoyant displacement gas release event. Assignments of waste groups to the 177 double-shell tanks and single-shell tanks, as reported in this document, are based on a Monte Carlo analysis of three criteria. The first criterion is the headspace flammable gas concentration following release of retained gas. This criterion determines whether the tank contains sufficient retained gas such that the well-mixed headspace flammable gas concentration would reach 100% of the lower flammability limit if the entire tank's retained gas were released. If the volume of retained gas is not sufficient to reach 100% of the lower flammability limit, then flammable conditions cannot be reached and the tank is classified as a waste group C tank regardless of how the gas is released. The second criterion is the energy ratio, which considers whether there is sufficient supernatant on top of the saturated solids such that gas-bearing solids have the potential energy required to break up the material and release gas. Tanks that are not waste group C tanks and that have an energy ratio < 3.0 do not have sufficient

  20. DMPy: a Python package for automated mathematical model construction of large-scale metabolic systems.

    Science.gov (United States)

    Smith, Robert W; van Rosmalen, Rik P; Martins Dos Santos, Vitor A P; Fleck, Christian

    2018-06-19

    Models of metabolism are often used in biotechnology and pharmaceutical research to identify drug targets or increase the direct production of valuable compounds. Due to the complexity of large metabolic systems, a number of conclusions have been drawn using mathematical methods with simplifying assumptions. For example, constraint-based models describe changes of internal concentrations that occur much quicker than alterations in cell physiology. Thus, metabolite concentrations and reaction fluxes are fixed to constant values. This greatly reduces the mathematical complexity, while providing a reasonably good description of the system in steady state. However, without a large number of constraints, many different flux sets can describe the optimal model and we obtain no information on how metabolite levels dynamically change. Thus, to accurately determine what is taking place within the cell, finer quality data and more detailed models need to be constructed. In this paper we present a computational framework, DMPy, that uses a network scheme as input to automatically search for kinetic rates and produce a mathematical model that describes temporal changes of metabolite fluxes. The parameter search utilises several online databases to find measured reaction parameters. From this, we take advantage of previous modelling efforts, such as Parameter Balancing, to produce an initial mathematical model of a metabolic pathway. We analyse the effect of parameter uncertainty on model dynamics and test how recent flux-based model reduction techniques alter system properties. To our knowledge this is the first time such analysis has been performed on large models of metabolism. Our results highlight that good estimates of at least 80% of the reaction rates are required to accurately model metabolic systems. Furthermore, reducing the size of the model by grouping reactions together based on fluxes alters the resulting system dynamics. The presented pipeline automates the
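
The kind of model such a pipeline emits can be sketched in miniature: mass-action ODEs assembled from a stoichiometric matrix and per-reaction rate constants. The toy network and rates below are invented for illustration and are far smaller than the systems DMPy targets:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy pathway A -> B -> C with invented mass-action rate constants.
# Rows = metabolites (A, B, C); columns = reactions.
S = np.array([[-1,  0],
              [ 1, -1],
              [ 0,  1]], dtype=float)
k = np.array([1.0, 0.5])

def rhs(t, x):
    # Mass-action flux of each unimolecular reaction: v_j = k_j * [substrate_j].
    v = k * np.array([x[0], x[1]])
    # dx/dt = S v assembles metabolite balances from the network structure.
    return S @ v

sol = solve_ivp(rhs, (0.0, 20.0), y0=[1.0, 0.0, 0.0], rtol=1e-8)
print(sol.y[:, -1])  # mass drains from A through B into C
```

Unlike a constraint-based (steady-state) model, this dynamic form reports how metabolite levels change in time, which is precisely the information the abstract notes flux-balance approaches discard.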

  1. Computer-aided polymer design using group contribution plus property models

    DEFF Research Database (Denmark)

    Satyanarayana, Kavitha Chelakara; Abildskov, Jens; Gani, Rafiqul

    2009-01-01

    The preliminary step for polymer product design is to identify the basic repeat unit structure of the polymer that matches the target properties. Computer-aided molecular design (CAMD) approaches can be applied for generating the polymer repeat unit structures that match the required constraints. Polymer repeat unit property prediction models are required to calculate the properties of the generated repeat units. A systematic framework incorporating recently developed group contribution plus (GC(+)) models and an extended CAMD technique to include design of polymer repeat units is highlighted in this paper. The advantage of a GC(+) model in CAMD applications is that a very large number of polymer structures can be considered even though some of the group parameters may not be available. A number of case studies involving different polymer design problems have been solved through the developed…

  2. Modeling phytoplankton community in reservoirs. A comparison between taxonomic and functional groups-based models.

    Science.gov (United States)

    Di Maggio, Jimena; Fernández, Carolina; Parodi, Elisa R; Diaz, M Soledad; Estrada, Vanina

    2016-01-01

    In this paper we address the formulation of two mechanistic water quality models that differ in the way the phytoplankton community is described. We carry out parameter estimation subject to differential-algebraic constraints, validation for each model, and a comparison of model performance. The first approach aggregates phytoplankton species based on their phylogenetic characteristics (Taxonomic group model) and the second on their morpho-functional properties following Reynolds' classification (Functional group model). The latter approach takes into account tolerance and sensitivity to environmental conditions. The constrained parameter estimation problems are formulated within an equation-oriented framework, with a maximum likelihood objective function. The study site is Paso de las Piedras Reservoir (Argentina), which supplies drinking water to a population of 450,000. Numerical results show that phytoplankton morpho-functional groups more closely represent each species' growth requirements within the group. Each model's performance is quantitatively assessed by three diagnostic measures. Parameter estimation results for seasonal dynamics of the phytoplankton community and main biogeochemical variables over a one-year time horizon are presented and compared for both models, showing the Functional group model's enhanced performance. Finally, we explore increasing nutrient loading scenarios and predict their effect on phytoplankton dynamics throughout a one-year time horizon. Copyright © 2015 Elsevier Ltd. All rights reserved.
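
The estimation step can be illustrated generically. The sketch below fits growth parameters of a toy single-group model to observations by maximizing a Gaussian likelihood (equivalently, least squares); the model, data, and all names are invented stand-ins for the paper's much larger DAE system:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

def simulate(params, t_obs, p0=0.1):
    """Toy logistic growth as a stand-in for one phytoplankton group."""
    r, K = params
    sol = solve_ivp(lambda t, p: r * p * (1 - p / K),
                    (0.0, t_obs[-1]), [p0], t_eval=t_obs, rtol=1e-8)
    return sol.y[0]

rng = np.random.default_rng(1)
t_obs = np.linspace(0, 30, 16)
data = simulate((0.4, 2.0), t_obs) + rng.normal(0, 0.02, 16)

# With fixed Gaussian noise variance, maximum likelihood reduces to
# minimizing the sum of squared residuals over the model parameters.
nll = lambda params: float(np.sum((simulate(params, t_obs) - data) ** 2))
fit = minimize(nll, x0=[0.2, 1.0], bounds=[(0.01, 2.0), (0.1, 10.0)])
print(fit.x)  # recovered (r, K), close to the true (0.4, 2.0)
```

The real problem constrains the optimizer with the full set of differential-algebraic model equations rather than re-simulating inside the objective, but the statistical idea is the same.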

  3. A quantitative genetic model of reciprocal altruism: a condition for kin or group selection to prevail.

    Science.gov (United States)

    Aoki, K

    1983-01-01

    A condition is derived for reciprocal altruism to evolve by kin or group selection. It is assumed that many additively acting genes of small effect and the environment determine the probability that an individual is a reciprocal altruist, as opposed to being unconditionally selfish. The particular form of reciprocal altruism considered is TIT FOR TAT, a strategy that involves being altruistic on the first encounter with another individual and doing whatever the other did on the previous encounter in subsequent encounters with the same individual. Encounters are restricted to individuals of the same generation belonging to the same kin or breeding group, but first encounters occur at random within that group. The number of individuals with which an individual interacts is assumed to be the same within any kin or breeding group. There are 1 + i expected encounters between two interacting individuals. On any encounter, it is assumed that an individual who behaves altruistically suffers a cost in personal fitness proportional to c while improving his partner's fitness by the same proportion of b. Then, the condition for kin or group selection to prevail is [Formula: see text] if group size is sufficiently large and the group mean and the within-group genotypic variance of the trait value (i.e., the probability of being a TIT-FOR-TAT strategist) are uncorrelated. Here, C, Vb, and Tb are the population mean, between-group variance, and between-group third central moment of the trait value and r is the correlation between the additive genotypic values of interacting kin or of individuals within the same breeding group. The right-hand side of the above inequality is monotone decreasing in C if we hold Tb/Vb constant, and kin and group selection become superfluous beyond a certain threshold value of C. The effect of finite group size is also considered in a kin-selection model. PMID:6575395
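
The TIT FOR TAT encounter structure described above is easy to simulate directly. A minimal sketch (our own code; the paper's analysis is an analytical quantitative genetic model, not a simulation):

```python
def payoffs(strategy_a, strategy_b, rounds, b, c):
    """Accumulated fitness increments for two interacting individuals.
    'tft' cooperates on the first encounter and then copies the partner's
    previous move; 'selfish' always defects. Cooperating costs the actor
    c and gives the partner b (fitness treated additively here)."""
    last = {"a": None, "b": None}
    score = {"a": 0.0, "b": 0.0}
    for _ in range(rounds):
        a_coop = (last["b"] in (None, True)) if strategy_a == "tft" else False
        b_coop = (last["a"] in (None, True)) if strategy_b == "tft" else False
        if a_coop:
            score["a"] -= c
            score["b"] += b
        if b_coop:
            score["b"] -= c
            score["a"] += b
        last["a"], last["b"] = a_coop, b_coop
    return score["a"], score["b"]

# Two TIT-FOR-TAT players cooperate throughout all 1 + i encounters:
print(payoffs("tft", "tft", rounds=5, b=2.0, c=1.0))      # (5.0, 5.0)
# Against an unconditional defector, TFT is exploited only once:
print(payoffs("tft", "selfish", rounds=5, b=2.0, c=1.0))  # (-1.0, 2.0)
```

This asymmetry, mutual cooperators gaining (1 + i)(b - c) while a lone cooperator loses only c, is what drives the kin/group selection condition in the paper.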

  4. Group Contribution Based Process Flowsheet Synthesis, Design and Modelling

    DEFF Research Database (Denmark)

    d'Anterroches, Loïc; Gani, Rafiqul

    2005-01-01

    In a group contribution method for pure component property prediction, a molecule is described as a set of groups linked together to form a molecular structure. In the same way, for flowsheet "property" prediction, a flowsheet can be described as a set of process-groups linked together to represent...... the flowsheet structure. Just as a functional group is a collection of atoms, a process-group is a collection of operations forming an "unit" operation or a set of "unit" operations. The link between the process-groups are the streams similar to the bonds that are attachments to atoms/groups. Each process-group...... provides a contribution to the "property" of the flowsheet, which can be performance in terms of energy consumption, thereby allowing a flowsheet "property" to be calculated, once it is described by the groups. Another feature of this approach is that the process-group attachments provide automatically...

  5. Bayesian latent feature modeling for modeling bipartite networks with overlapping groups

    DEFF Research Database (Denmark)

    Jørgensen, Philip H.; Mørup, Morten; Schmidt, Mikkel Nørgaard

    2016-01-01

    Bi-partite networks are commonly modelled using latent class or latent feature models. Whereas the existing latent class models admit marginalization of parameters specifying the strength of interaction between groups, existing latent feature models do not admit analytical marginalization...... by the notion of community structure such that the edge density within groups is higher than between groups. Our model further assumes that entities can have different propensities for generating links in one of the modes. The proposed framework is contrasted on both synthetic and real bi-partite networks...... Binary latent feature representations thus provide a new framework for accounting for structure in bi-partite networks, yielding interpretable representations that characterize structure well, as quantified by link prediction.

  6. Using the IGCRA (individual, group, classroom reflective action) technique to enhance teaching and learning in large accountancy classes

    Directory of Open Access Journals (Sweden)

    Cristina Poyatos

    2011-02-01

    Full Text Available First year accounting has generally been perceived as one of the more challenging first year business courses for university students. Various Classroom Assessment Techniques (CATs) have been proposed to attempt to enrich and enhance student learning, with these studies generally positioning students as learners alone. This paper uses an educational case study approach and examines the implementation of the IGCRA (individual, group, classroom reflective action) technique, a Classroom Assessment Technique, on first year accounting students' learning performance. Building on theoretical frameworks in the areas of cognitive learning, social development, and dialogical learning, the technique uses reports to promote reflection on both learning and teaching. IGCRA was found to promote feedback on the effectiveness of student learning, as well as teacher satisfaction. Moreover, the results indicated formative feedback can help improve the learning and the learning environment for a large group of first year accounting students. Clear guidelines for its implementation are provided in the paper.

  7. Environmental Disturbance Modeling for Large Inflatable Space Structures

    National Research Council Canada - National Science Library

    Davis, Donald

    2001-01-01

    Tightening space budgets and stagnating spacelift capabilities are driving the Air Force and other space agencies to focus on inflatable technology as a reliable, inexpensive means of deploying large structures in orbit...

  8. Parallel runs of a large air pollution model on a grid of Sun computers

    DEFF Research Database (Denmark)

    Alexandrov, V.N.; Owczarz, W.; Thomsen, Per Grove

    2004-01-01

    Large-scale air pollution models can successfully be used in different environmental studies. These models are described mathematically by systems of partial differential equations. Splitting procedures followed by discretization of the spatial derivatives lead to several large systems

  9. Testing Group Mean Differences of Latent Variables in Multilevel Data Using Multiple-Group Multilevel CFA and Multilevel MIMIC Modeling.

    Science.gov (United States)

    Kim, Eun Sook; Cao, Chunhua

    2015-01-01

    Considering that group comparisons are common in social science, we examined two latent group mean testing methods when groups of interest were either at the between or within level of multilevel data: multiple-group multilevel confirmatory factor analysis (MG ML CFA) and multilevel multiple-indicators multiple-causes modeling (ML MIMIC). The performance of these methods was investigated through three Monte Carlo studies. In Studies 1 and 2, either factor variances or residual variances were manipulated to be heterogeneous between groups. In Study 3, which focused on within-level multiple-group analysis, six different model specifications were considered depending on how to model the intra-class group correlation (i.e., correlation between random effect factors for groups within cluster). The results of simulations generally supported the adequacy of MG ML CFA and ML MIMIC for multiple-group analysis with multilevel data. The two methods did not show any notable difference in the latent group mean testing across three studies. Finally, a demonstration with real data and guidelines in selecting an appropriate approach to multilevel multiple-group analysis are provided.

  10. A large-scale examination of the effectiveness of anonymous marking in reducing group performance differences in higher education assessment.

    Directory of Open Access Journals (Sweden)

    Daniel P Hinton

    Full Text Available The present research aims to more fully explore the issues of performance differences in higher education assessment, particularly in the context of a common measure taken to address them. The rationale for the study is that, while performance differences in written examinations are relatively well researched, few studies have examined the efficacy of anonymous marking in reducing these performance differences, particularly in modern student populations. By examining a large archive (N = 30674 of assessment data spanning a twelve-year period, the relationship between assessment marks and factors such as ethnic group, gender and socio-environmental background was investigated. In particular, analysis focused on the impact that the implementation of anonymous marking for assessment of written examinations and coursework has had on the magnitude of mean score differences between demographic groups of students. While group differences were found to be pervasive in higher education assessment, these differences were observed to be relatively small in practical terms. Further, it appears that the introduction of anonymous marking has had a negligible effect in reducing them. The implications of these results are discussed, focusing on two issues, firstly a defence of examinations as a fair and legitimate form of assessment in Higher Education, and, secondly, a call for the re-examination of the efficacy of anonymous marking in reducing group performance differences.

  11. A large-scale examination of the effectiveness of anonymous marking in reducing group performance differences in higher education assessment.

    Science.gov (United States)

    Hinton, Daniel P; Higson, Helen

    2017-01-01

    The present research aims to more fully explore the issues of performance differences in higher education assessment, particularly in the context of a common measure taken to address them. The rationale for the study is that, while performance differences in written examinations are relatively well researched, few studies have examined the efficacy of anonymous marking in reducing these performance differences, particularly in modern student populations. By examining a large archive (N = 30674) of assessment data spanning a twelve-year period, the relationship between assessment marks and factors such as ethnic group, gender and socio-environmental background was investigated. In particular, analysis focused on the impact that the implementation of anonymous marking for assessment of written examinations and coursework has had on the magnitude of mean score differences between demographic groups of students. While group differences were found to be pervasive in higher education assessment, these differences were observed to be relatively small in practical terms. Further, it appears that the introduction of anonymous marking has had a negligible effect in reducing them. The implications of these results are discussed, focusing on two issues, firstly a defence of examinations as a fair and legitimate form of assessment in Higher Education, and, secondly, a call for the re-examination of the efficacy of anonymous marking in reducing group performance differences.

  12. On range searching in the group model and combinatorial discrepancy

    DEFF Research Database (Denmark)

    Larsen, Kasper Green

    2014-01-01

    In this paper we establish an intimate connection between dynamic range searching in the group model and combinatorial discrepancy. Our result states that, for a broad class of range searching data structures (including all known upper bounds), it must hold that $t_u t_q = \Omega(\mbox{disc}^2\ldots)$, where $t_u$ is the worst case update time, $t_q$ is the worst case query time, and disc is the combinatorial discrepancy of the range searching problem in question. This relation immediately implies a whole range of exceptionally high and near-tight lower bounds for all of the basic range searching problems. We list a few of them in the following: (1) For $d$-dimensional halfspace range searching, we get a lower bound of $t_u t_q = \Omega(n^{1-1/d})$. This comes within a lg lg $n$ factor of the best known upper bound. (2) For orthogonal range searching, we get a lower bound of $t_u t$...

  13. Affine group formulation of the Standard Model coupled to gravity

    Energy Technology Data Exchange (ETDEWEB)

    Chou, Ching-Yi, E-mail: l2897107@mail.ncku.edu.tw [Department of Physics, National Cheng Kung University, Taiwan (China); Ita, Eyo, E-mail: ita@usna.edu [Department of Physics, US Naval Academy, Annapolis, MD (United States); Soo, Chopin, E-mail: cpsoo@mail.ncku.edu.tw [Department of Physics, National Cheng Kung University, Taiwan (China)

    2014-04-15

    In this work we apply the affine group formalism for four-dimensional gravity of Lorentzian signature, which is based on Klauder’s affine algebraic program, to the formulation of the Hamiltonian constraint of the interaction of matter and all forces, including gravity with non-vanishing cosmological constant Λ, as an affine Lie algebra. We use the hermitian action of fermions coupled to gravitation and Yang–Mills theory to find the density weight one fermionic super-Hamiltonian constraint. This term, combined with the Yang–Mills and Higgs energy densities, is composed with York’s integrated time functional. The result, when combined with the imaginary part of the Chern–Simons functional Q, forms the affine commutation relation with the volume element V(x). Affine algebraic quantization of gravitation and matter on equal footing implies a fundamental uncertainty relation which is predicated upon a non-vanishing cosmological constant. -- Highlights: •Wheeler–DeWitt equation (WDW) quantized as affine algebra, realizing Klauder’s program. •WDW formulated for interaction of matter and all forces, including gravity, as affine algebra. •WDW features Hermitian generators in spite of fermionic content: Standard Model addressed. •Constructed a family of physical states for the full, coupled theory via affine coherent states. •Fundamental uncertainty relation, predicated on non-vanishing cosmological constant.

  14. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    NARCIS (Netherlands)

    Loon, van A.F.; Huijgevoort, van M.H.J.; Lanen, van H.A.J.

    2012-01-01

    Hydrological drought is increasingly studied using large-scale models. It is, however, not certain whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is how well do large-scale models simulate the propagation from meteorological to hydrological

  15. Modelling the exposure of wildlife to radiation: key findings and activities of IAEA working groups

    Energy Technology Data Exchange (ETDEWEB)

    Beresford, Nicholas A. [NERC Centre for Ecology and Hydrology, Lancaster Environment Center, Library Av., Bailrigg, Lancaster, LA1 4AP (United Kingdom); School of Environment and Life Sciences, University of Salford, Manchester, M4 4WT (United Kingdom); Vives i Batlle, Jordi; Vandenhove, Hildegarde [Belgian Nuclear Research Centre, Belgian Nuclear Research Centre, Boeretang 200, 2400 Mol (Belgium); Beaugelin-Seiller, Karine [Institut de Radioprotection et de Surete Nucleaire (IRSN), PRP-ENV, SERIS, LM2E, Cadarache (France); Johansen, Mathew P. [ANSTO Australian Nuclear Science and Technology Organisation, New Illawarra Rd, Menai, NSW (Australia); Goulet, Richard [Canadian Nuclear Safety Commission, Environmental Risk Assessment Division, 280 Slater, Ottawa, K1A0H3 (Canada); Wood, Michael D. [School of Environment and Life Sciences, University of Salford, Manchester, M4 4WT (United Kingdom); Ruedig, Elizabeth [Department of Environmental and Radiological Health Sciences, Colorado State University, Fort Collins (United States); Stark, Karolina; Bradshaw, Clare [Department of Ecology, Environment and Plant Sciences, Stockholm University, SE-10691 (Sweden); Andersson, Pal [Swedish Radiation Safety Authority, SE-171 16, Stockholm (Sweden); Copplestone, David [Biological and Environmental Sciences, University of Stirling, Stirling, FK9 4LA (United Kingdom); Yankovich, Tamara L.; Fesenko, Sergey [International Atomic Energy Agency, Vienna International Centre, 1400, Vienna (Austria)

    2014-07-01

    In total, participants from 14 countries, representing 19 organisations, actively participated in the model application/inter-comparison activities of the IAEA's EMRAS II programme Biota Modelling Group. A range of models/approaches were used by participants (e.g. the ERICA Tool, RESRAD-BIOTA, the ICRP Framework). The agreed objectives of the group were: 'To improve Member States' capabilities for protection of the environment by comparing and validating models being used, or developed, for biota dose assessment (that may be used) as part of the regulatory process of licensing and compliance monitoring of authorised releases of radionuclides.' The activities of the group, the findings of which will be described, included: - An assessment of the predicted unweighted absorbed dose rates for 74 radionuclides estimated by 10 approaches for five of the ICRP's Reference Animal and Plant geometries, assuming 1 Bq per unit organism or media. - Modelling the effect of heterogeneous distributions of radionuclides in sediment profiles on the estimated exposure of organisms. - Model prediction vs. field data comparisons for freshwater ecosystems in a uranium mining area and a number of wetland environments. - An evaluation of the application of available models to a scenario considering radioactive waste buried in shallow trenches. - Estimating the contribution of ²³⁵U to dose rates in freshwater environments. - Evaluation of the factors contributing to variation in modelling results. The work of the group continues within the framework of the IAEA's MODARIA programme, which was initiated in 2012. The work plan of the MODARIA working group has largely been defined by the findings of the previous EMRAS programme. On-going activities of the working group, which will be described, include the development of a database of dynamic parameters for wildlife dose assessment and exercises involving modelling the exposure of organisms in the marine coastal

  16. Hydrological-niche models predict water plant functional group distributions in diverse wetland types.

    Science.gov (United States)

    Deane, David C; Nicol, Jason M; Gehrig, Susan L; Harding, Claire; Aldridge, Kane T; Goodman, Abigail M; Brookes, Justin D

    2017-06-01

    Human use of water resources threatens environmental water supplies. If resource managers are to develop policies that avoid unacceptable ecological impacts, some means to predict ecosystem response to changes in water availability is necessary. This is difficult to achieve at spatial scales relevant for water resource management because of the high natural variability in ecosystem hydrology and ecology. Water plant functional groups classify species with similar hydrological niche preferences together, allowing a qualitative means to generalize community responses to changes in hydrology. We tested the potential of functional groups for making quantitative predictions of water plant functional group distributions across diverse wetland types over a large geographical extent. We sampled wetlands covering a broad range of hydrogeomorphic and salinity conditions in South Australia, collecting both hydrological and floristic data from 687 quadrats across 28 wetland hydrological gradients. We built hydrological-niche models for eight water plant functional groups using a range of candidate models combining different surface inundation metrics. We then tested the predictive performance of top-ranked individual and averaged models for each functional group. Cross validation showed that models achieved acceptable predictive performance, with correct classification rates in the range 0.68-0.95. Model predictions can be made at any spatial scale at which hydrological data are available and could be implemented in a geographical information system. We show that the response of water plant functional groups to inundation is consistent enough across diverse wetland types to quantify the probability of hydrological impacts over regional spatial scales. © 2017 by the Ecological Society of America.
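The cross-validated correct classification rate reported in this record can be sketched in plain Python. The data are made up, and a simple inundation-duration threshold classifier stands in for the authors' hydrological-niche models; only the validation mechanics are illustrated.

```python
import random

random.seed(1)

# Hypothetical data: (inundation_duration_days, functional_group_present);
# presence is made more likely by longer inundation.
data = [(d := random.uniform(0, 365), random.random() < d / 365) for _ in range(200)]

def fit_threshold(train):
    """Pick the duration cutoff that maximises training accuracy."""
    best_t, best_acc = 0.0, 0.0
    for t in range(0, 366, 5):
        acc = sum((d >= t) == y for d, y in train) / len(train)
        if acc > best_acc:
            best_t, best_acc = float(t), acc
    return best_t

def cross_validate(data, k=5):
    """k-fold CV: fit on k-1 folds, score correct classification rate on the rest."""
    random.shuffle(data)
    folds = [data[i::k] for i in range(k)]
    rates = []
    for i in range(k):
        test = folds[i]
        train = [row for j, fold in enumerate(folds) if j != i for row in fold]
        t = fit_threshold(train)
        rates.append(sum((d >= t) == y for d, y in test) / len(test))
    return sum(rates) / k

print(f"mean correct classification rate: {cross_validate(data):.2f}")
```

The held-out rate, not the training accuracy, is what the 0.68-0.95 range in the abstract refers to.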

  17. Large scale hydro-economic modelling for policy support

    Science.gov (United States)

    de Roo, Ad; Burek, Peter; Bouraoui, Faycal; Reynaud, Arnaud; Udias, Angel; Pistocchi, Alberto; Lanzanova, Denis; Trichakis, Ioannis; Beck, Hylke; Bernhard, Jeroen

    2014-05-01

    To support European Union water policy making and policy monitoring, a hydro-economic modelling environment has been developed to assess optimum combinations of water retention measures, water savings measures, and nutrient reduction measures for continental Europe. This modelling environment links the agricultural CAPRI model, the LUMP land use model, the LISFLOOD water quantity model, the EPIC water quality model, the LISQUAL combined water quantity, quality and hydro-economic model, and a multi-criteria optimisation routine. With this modelling environment, river basin scale simulations are carried out to assess the effects of water-retention measures, water-saving measures, and nutrient-reduction measures on several hydro-chemical indicators, such as the Water Exploitation Index (WEI), nitrate and phosphate concentrations in rivers, the 50-year return period river discharge as an indicator for flooding, and economic losses due to water scarcity for the agricultural sector, the manufacturing-industry sector, the energy-production sector and the domestic sector, as well as the economic loss due to flood damage. This modelling environment is currently being extended with a groundwater model to evaluate the effects of measures on the average groundwater table and available resources. Water allocation rules are also addressed, with environmental flow included as a minimum requirement for the environment. Economic functions are currently being updated as well. Recent developments and examples will be shown and discussed, as well as open challenges.
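The Water Exploitation Index mentioned among the indicators is conventionally computed as annual freshwater abstraction divided by long-term renewable resources, with values above roughly 0.2 flagging water stress. A one-line sketch with illustrative (not real) basin figures:

```python
def water_exploitation_index(abstraction_hm3, renewable_resources_hm3):
    """WEI: annual freshwater abstraction as a share of long-term renewable
    resources; values above ~0.2 conventionally indicate water stress."""
    return abstraction_hm3 / renewable_resources_hm3

# Illustrative basin figures in cubic hectometres per year (hypothetical)
wei = water_exploitation_index(abstraction_hm3=4_200, renewable_resources_hm3=15_000)
print(f"WEI = {wei:.2f} -> {'water-stressed' if wei > 0.2 else 'not stressed'}")
```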

  18. Clinical utility of the Prostate Health Index (phi) for biopsy decision management in a large group urology practice setting.

    Science.gov (United States)

    White, Jay; Shenoy, B Vittal; Tutrone, Ronald F; Karsh, Lawrence I; Saltzstein, Daniel R; Harmon, William J; Broyles, Dennis L; Roddy, Tamra E; Lofaro, Lori R; Paoli, Carly J; Denham, Dwight; Reynolds, Mark A

    2018-04-01

    Deciding when to biopsy a man with non-suspicious DRE findings and tPSA in the 4-10 ng/ml range can be challenging, because two-thirds of such biopsies are typically found to be benign. The Prostate Health Index (phi) exhibits significantly improved diagnostic accuracy for prostate cancer detection when compared to tPSA and %fPSA; however, only one published study to date has investigated its impact on biopsy decisions in clinical practice. An IRB-approved observational study was conducted at four large urology group practices using a physician-reported two-part questionnaire. Physician recommendations were recorded before and after receiving the phi test result. A historical control group was queried from each site's electronic medical records for eligible men who were seen by the same participating urologists prior to the implementation of the phi test in their practice. 506 men receiving a phi test were prospectively enrolled and 683 men were identified for the historical control group (without phi). Biopsy and pathological findings were also recorded for both groups. Men receiving a phi test showed a significant reduction in biopsy procedures performed when compared to the historical control group (36.4% vs. 60.3%, respectively). The phi score impacted the physician's patient management plan in 73% of cases, including biopsy deferrals when the phi score was low, and decisions to perform biopsies when the phi score indicated an intermediate or high probability of prostate cancer (phi ≥36). phi testing significantly impacted the physician's biopsy decision for men with tPSA in the 4-10 ng/ml range and non-suspicious DRE findings. Appropriate utilization of phi resulted in a significant reduction in biopsy procedures performed compared to historical patients seen by the same participating urologists who would have met enrollment eligibility but did not receive a phi test.

  19. Assessing the reliability of predictive activity coefficient models for molecules consisting of several functional groups

    Directory of Open Access Journals (Sweden)

    R. P. Gerber

    2013-03-01

    Full Text Available Currently, the most successful predictive models for activity coefficients are those based on functional groups, such as UNIFAC. However, these models require a large amount of experimental data for the determination of their parameter matrix. A more recent alternative is the class of models based on COSMO, for which only a small set of universal parameters must be calibrated. In this work, a recalibrated COSMO-SAC model was compared with the UNIFAC (Do) model, employing experimental infinite dilution activity coefficient data for 2236 non-hydrogen-bonding binary mixtures at different temperatures. As expected, UNIFAC (Do) presented better overall performance, with a mean absolute error of 0.12 ln-units against 0.22 for our COSMO-SAC implementation. However, in cases involving molecules with several functional groups or when functional groups appear in an unusual way, the deviation for UNIFAC was 0.44, as opposed to 0.20 for COSMO-SAC. These results show that COSMO-SAC provides more reliable predictions for multi-functional or more complex molecules, reaffirming its future prospects.
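The comparison metric used in this record, mean absolute error in ln-units of infinite-dilution activity coefficients, is mechanical to compute. The values below are hypothetical, not the paper's data:

```python
import math

def mae_ln(predicted, experimental):
    """Mean absolute error in ln-units between predicted and measured
    infinite-dilution activity coefficients (gamma-infinity)."""
    return sum(abs(math.log(p) - math.log(e))
               for p, e in zip(predicted, experimental)) / len(predicted)

# Hypothetical gamma-infinity values for four binary mixtures
exp_g  = [1.8, 4.2, 12.0, 0.75]
unifac = [1.9, 4.6, 10.5, 0.80]   # stand-in "UNIFAC (Do)" predictions
cosmo  = [2.1, 3.5, 13.9, 0.68]   # stand-in "COSMO-SAC" predictions

print(f"UNIFAC-like MAE:    {mae_ln(unifac, exp_g):.2f} ln-units")
print(f"COSMO-SAC-like MAE: {mae_ln(cosmo, exp_g):.2f} ln-units")
```

Working in ln-units makes errors comparable across mixtures whose activity coefficients span orders of magnitude.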

  20. Modeling Temporal Behavior in Large Networks: A Dynamic Mixed-Membership Model

    Energy Technology Data Exchange (ETDEWEB)

    Rossi, R; Gallagher, B; Neville, J; Henderson, K

    2011-11-11

    Given a large time-evolving network, how can we model and characterize the temporal behaviors of individual nodes (and network states)? How can we model the behavioral transition patterns of nodes? We propose a temporal behavior model that captures the 'roles' of nodes in the graph and how they evolve over time. The proposed dynamic behavioral mixed-membership model (DBMM) is scalable, fully automatic (no user-defined parameters), non-parametric/data-driven (no specific functional form or parameterization), interpretable (identifies explainable patterns), and flexible (applicable to dynamic and streaming networks). Moreover, the interpretable behavioral roles are generalizable, computationally efficient, and natively support attributes. We applied our model for (a) identifying patterns and trends of nodes and network states based on the temporal behavior, (b) predicting future structural changes, and (c) detecting unusual temporal behavior transitions. We use eight large real-world datasets from different time-evolving settings (dynamic and streaming). In particular, we model the evolving mixed-memberships and the corresponding behavioral transitions of Twitter, Facebook, IP-Traces, Email (University), Internet AS, Enron, Reality, and IMDB. The experiments demonstrate the scalability, flexibility, and effectiveness of our model for identifying interesting patterns, detecting unusual structural transitions, and predicting the future structural changes of the network and individual nodes.
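The abstract does not give the DBMM algorithm itself, but the roles-plus-transitions idea can be sketched: assign each node a crude structural "role" per snapshot (degree bins stand in for the model's learned mixed-memberships) and count role transitions between consecutive snapshots. Everything below is a toy illustration, not the authors' method.

```python
from collections import defaultdict

# Toy dynamic network: one edge list per snapshot (node behaviour drifts over time)
snapshots = [
    [(0, 1), (0, 2), (0, 3), (1, 2)],   # t=0: node 0 acts as a hub
    [(0, 1), (1, 2), (1, 3), (2, 3)],   # t=1: activity shifts toward node 1
    [(1, 2), (2, 3), (2, 0), (3, 0)],   # t=2
]

def roles(edges, nodes):
    """Crude structural 'role' per node: low/medium/high degree bin."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return {n: min(deg[n] // 2, 2) for n in nodes}

nodes = range(4)
labelings = [roles(e, nodes) for e in snapshots]

# Count role transitions between consecutive snapshots
transitions = defaultdict(int)
for prev, cur in zip(labelings, labelings[1:]):
    for n in nodes:
        transitions[(prev[n], cur[n])] += 1

print(dict(transitions))
```

In the actual DBMM, roles come from factorizing structural feature matrices and memberships are mixed rather than hard labels; the transition counts above correspond to its estimated role-transition model.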

  1. A hierarchical causal modeling for large industrial plants supervision

    International Nuclear Information System (INIS)

    Dziopa, P.; Leyval, L.

    1994-01-01

    A supervision system has to analyse the current state of the process and the way it will evolve after a modification of the inputs or a disturbance. It is proposed to base this analysis on a hierarchy of models, which differ in the number of variables involved and in the abstraction level used to describe their temporal evolution. In a first step, special attention is paid to building causal models, starting from the most abstract one. Once the hierarchy of models has been built, the parameters of the most detailed model are estimated. Several models of different abstraction levels can be used for on-line prediction. These methods have been applied to a nuclear reprocessing plant. The abstraction level can be chosen on line by the operator. Moreover, when an abnormal process behaviour is detected, a more detailed model is automatically triggered in order to focus the operator's attention on the suspected subsystem. (authors). 11 refs., 11 figs

  2. TOPICAL REVIEW: Nonlinear aspects of the renormalization group flows of Dyson's hierarchical model

    Science.gov (United States)

    Meurice, Y.

    2007-06-01

    We review recent results concerning the renormalization group (RG) transformation of Dyson's hierarchical model (HM). This model can be seen as an approximation of a scalar field theory on a lattice. We introduce the HM and show that its large symmetry group drastically simplifies the blockspinning procedure. Several equivalent forms of the recursion formula are presented with unified notations. Rigorous and numerical results concerning the recursion formula are summarized. It is pointed out that the recursion formula of the HM is inequivalent to both Wilson's approximate recursion formula and Polchinski's equation in the local potential approximation (despite the very small difference with the exponents of the latter). We draw a comparison between the RG of the HM and functional RG equations in the local potential approximation. The construction of the linear and nonlinear scaling variables is discussed in an operational way. We describe the calculation of non-universal critical amplitudes in terms of the scaling variables of two fixed points. This question appears as a problem of interpolation between these fixed points. Universal amplitude ratios are calculated. We discuss the large-N limit and the complex singularities of the critical potential calculable in this limit. The interpolation between the HM and more conventional lattice models is presented as a symmetry breaking problem. We briefly introduce models with an approximate supersymmetry. One important goal of this review is to present a configuration space counterpart, suitable for lattice formulations, of functional RG equations formulated in momentum space (often called exact RG equations and abbreviated ERGE).

  3. Truck Route Choice Modeling using Large Streams of GPS Data

    Science.gov (United States)

    2017-07-31

    The primary goal of this research was to use large streams of truck-GPS data to analyze travel routes (or paths) chosen by freight trucks to travel between different origin and destination (OD) location pairs in metropolitan regions of Florida. Two s...

  4. Identification of low order models for large scale processes

    NARCIS (Netherlands)

    Wattamwar, S.K.

    2010-01-01

    Many industrial chemical processes are complex, multi-phase and large scale in nature. These processes are characterized by various nonlinear physiochemical effects and fluid flows. Such processes often show coexistence of fast and slow dynamics during their time evolutions. The increasing demand

  5. Group spike-and-slab lasso generalized linear models for disease prediction and associated genes detection by incorporating pathway information.

    Science.gov (United States)

    Tang, Zaixiang; Shen, Yueping; Li, Yan; Zhang, Xinyan; Wen, Jia; Qian, Chen'ao; Zhuang, Wenzhuo; Shi, Xinghua; Yi, Nengjun

    2018-03-15

    Large-scale molecular data have been increasingly used as an important resource for prognostic prediction of diseases and detection of associated genes. However, standard approaches for omics data analysis ignore the group structure among genes encoded in functional relationships or pathway information. We propose new Bayesian hierarchical generalized linear models, called group spike-and-slab lasso GLMs, for predicting disease outcomes and detecting associated genes by incorporating large-scale molecular data and group structures. The proposed model employs a mixture double-exponential prior for coefficients that induces a self-adaptive amount of shrinkage on different coefficients. The group information is incorporated into the model by setting group-specific parameters. We have developed a fast and stable deterministic algorithm to fit the proposed hierarchical GLMs, which can perform variable selection within groups. We assess the performance of the proposed method in several simulated scenarios by varying the overlap among groups, group size, number of non-null groups, and the correlation within groups. Compared with existing methods, the proposed method provides not only more accurate estimates of the parameters but also better prediction. We further demonstrate the application of the proposed procedure on three cancer datasets by utilizing pathway structures of genes. Our results show that the proposed method generates powerful models for predicting disease outcomes and detecting associated genes. The methods have been implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). nyi@uab.edu. Supplementary data are available at Bioinformatics online.
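The Bayesian spike-and-slab machinery is not reproducible from the abstract, but the group-structured shrinkage it induces can be illustrated with the closely related group-lasso proximal step: a deterministic stand-in, with hypothetical coefficients and pathway names.

```python
import math

def group_soft_threshold(beta, groups, lam):
    """Shrink coefficients group-wise: groups whose overall signal is weak are
    zeroed out entirely, mimicking group-level variable selection."""
    out = list(beta)
    for idx in groups.values():
        norm = math.sqrt(sum(beta[j] ** 2 for j in idx))
        scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
        for j in idx:
            out[j] = beta[j] * scale
    return out

# Hypothetical coefficients for two gene groups (pathways)
beta = [0.9, 1.2, -0.8, 0.05, -0.04, 0.03]
groups = {"pathway_A": [0, 1, 2], "pathway_B": [3, 4, 5]}

shrunk = group_soft_threshold(beta, groups, lam=0.3)
print(shrunk)  # pathway_B is zeroed out; pathway_A survives, mildly shrunk
```

The paper's spike-and-slab prior additionally adapts the shrinkage amount per coefficient; the fixed `lam` here is the simplest version of that idea.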

  6. A working group's conclusion on site specific flow and transport modelling

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, J. [Golder Associates AB (Sweden); Ahokas, H. [Fintact Oy, Helsinki (Finland); Koskinen, L.; Poteri, A. [VTT Energy, Espoo (Finland); Niemi, A. [Royal Inst. of Technology, Stockholm (Sweden). Hydraulic Engineering; Hautojaervi, A. [Posiva Oy, Helsinki (Finland)

    1998-03-01

    This document suggests a strategy plan for groundwater flow and transport modelling to be used in the site-specific performance assessment of spent nuclear fuel disposal, supporting the site selection planned for the year 2000. Considering the suggested general regulations in Finland, as well as the suggested regulations in Sweden and the approach taken in recent safety assessment exercises conducted in these countries, it is clear that in such an analysis, in addition to showing that the proposed repository is safe, there is a need to strengthen the link between field data, groundwater flow modelling and the derivation of safety assessment parameters, and to assess uncertainty and variability. The suggested strategy plan builds on an evaluation of different approaches to modelling the groundwater flow in crystalline basement rock, the abundance of data collected in the site investigation programme in Finland, and the modelling methodology developed in the programme so far. It is suggested to model the whole system using nested models, where larger scale models provide the boundary conditions for the smaller ones. 62 refs.

  7. Nonequilibrium Dynamics of Anisotropic Large Spins in the Kondo Regime: Time-Dependent Numerical Renormalization Group Analysis

    Science.gov (United States)

    Roosen, David; Wegewijs, Maarten R.; Hofstetter, Walter

    2008-02-01

    We investigate the time-dependent Kondo effect in a single-molecule magnet (SMM) strongly coupled to metallic electrodes. Describing the SMM by a Kondo model with large spin S>1/2, we analyze the underscreening of the local moment and the effect of anisotropy terms on the relaxation dynamics of the magnetization. Underscreening by single-channel Kondo processes leads to a logarithmically slow relaxation, while finite uniaxial anisotropy causes a saturation of the SMM’s magnetization. Additional transverse anisotropy terms induce quantum spin tunneling and a pseudospin-1/2 Kondo effect sensitive to the spin parity.

  8. A numerical shoreline model for shorelines with large curvature

    DEFF Research Database (Denmark)

    Kærgaard, Kasper Hauberg; Fredsøe, Jørgen

    2013-01-01

    orthogonal horizontal directions are used. The volume error in the sediment continuity equation which is thereby introduced is removed through an iterative procedure. The model treats the shoreline changes by computing the sediment transport in a 2D coastal area model, and then integrating the sediment...

  9. Modeling Large Time Series for Efficient Approximate Query Processing

    DEFF Research Database (Denmark)

    Perera, Kasun S; Hahmann, Martin; Lehner, Wolfgang

    2015-01-01

    query statistics derived from experiments and when running the system. Our approach can also reduce communication load by exchanging models instead of data. To allow seamless integration of model-based querying into traditional data warehouses, we introduce a SQL compatible query terminology. Our...
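Model-based approximate query processing of the kind this (truncated) record describes can be sketched in a few lines: fit a model to the series once, then answer range aggregates from the model's parameters instead of scanning the raw data. The series and query below are made up.

```python
# Fit a least-squares line to a series, then answer a range-SUM query
# from the two fitted parameters alone.
xs = list(range(100))
ys = [2.0 * x + 5.0 + ((x * 37) % 7 - 3) for x in xs]   # linear trend + bounded noise

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

def approx_sum(lo, hi):
    """Approximate SUM(ys[lo:hi]) using only the model parameters."""
    return sum(slope * x + intercept for x in range(lo, hi))

exact = sum(ys[40:60])
print(f"exact={exact:.0f}  approx={approx_sum(40, 60):.0f}")
```

Exchanging the two parameters instead of 100 raw values is the communication saving the abstract alludes to; the price is a bounded approximation error.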

  10. Large scale experiments as a tool for numerical model development

    DEFF Research Database (Denmark)

    Kirkegaard, Jens; Hansen, Erik Asp; Fuchs, Jesper

    2003-01-01

    Experimental modelling is an important tool for study of hydrodynamic phenomena. The applicability of experiments can be expanded by the use of numerical models and experiments are important for documentation of the validity of numerical tools. In other cases numerical tools can be applied...

  11. Misspecified poisson regression models for large-scale registry data

    DEFF Research Database (Denmark)

    Grøn, Randi; Gerds, Thomas A.; Andersen, Per K.

    2016-01-01

    working models that are then likely misspecified. To support and improve conclusions drawn from such models, we discuss methods for sensitivity analysis, for estimation of average exposure effects using aggregated data, and a semi-parametric bootstrap method to obtain robust standard errors. The methods...

  12. Using Breakout Groups as an Active Learning Technique in a Large Undergraduate Nutrition Classroom at the University of Guelph

    Directory of Open Access Journals (Sweden)

    Genevieve Newton

    2012-12-01

    Full Text Available Breakout groups have been widely used under many different conditions, but the lack of published information related to their use in undergraduate settings highlights the need for research related to their use in this context. This paper describes a study investigating the use of breakout groups in undergraduate education as it specifically relates to teaching a large 4th-year undergraduate Nutrition class in a physically constrained lecture space. In total, 220 students completed a midterm survey and 229 completed a final survey designed to measure student satisfaction. Survey results were further analyzed to measure relationships between student perception of breakout group effectiveness and (1) gender and (2) cumulative GPA. Results of both surveys revealed that over 85% of students either agreed or strongly agreed that using breakout groups enhanced their learning experience, with females showing a significantly greater level of satisfaction and a higher final course grade than males. Although not stratified by gender, a consistent finding between surveys was a lower perception of breakout group effectiveness among students with a cumulative GPA above 90%. The majority of respondents felt that despite the awkward room space, the breakout groups were easy to create and participate in, which suggests that breakout groups can be successfully used in a large undergraduate classroom despite physical constraints. The findings of this work are relevant given the applicability of breakout groups to a wide range of disciplines, and the relative ease of integration into a traditional lecture format.

  13. A wide-range model of two-group cross sections in the dynamics code HEXTRAN

    International Nuclear Information System (INIS)

    Kaloinen, E.; Peltonen, J.

    2002-01-01

    In dynamic analyses the thermal hydraulic conditions within the reactor core may vary widely, which sets special requirements on the modeling of cross sections. The standard model in the dynamics code HEXTRAN is the same as in the static design code HEXBU-3D/MOD5. It is based on a linear and second-order fitting of two-group cross sections to fuel and moderator temperature, moderator density and boron density. A new, wide-range model of cross sections, developed at Fortum Nuclear Services for HEXBU-3D/MOD6, has been included as an option in HEXTRAN. In this model the nodal cross sections are constructed from seven state variables in a polynomial of more than 40 terms. Coefficients of the polynomial are created by a least squares fitting to the results of a large number of fuel assembly calculations. Depending on the choice of state variables for the spectrum calculations, the new cross section model is capable of covering local conditions from cold zero power to boiling at full power. The 5th dynamic benchmark problem of AER is analyzed with the new option and the results are compared to calculations with the standard cross section model in HEXTRAN (Authors)
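The fitting procedure this record describes, a polynomial in state variables calibrated by least squares against assembly calculations, can be sketched with two state variables and six terms instead of seven variables and 40+ terms. All numbers are synthetic; the basis and value ranges are illustrative only.

```python
import random

random.seed(0)

def basis(tf, dm):
    """Six-term polynomial basis in fuel temperature and moderator density
    (the real HEXTRAN model uses seven state variables and 40+ terms)."""
    return [1.0, tf, dm, tf * dm, tf ** 2, dm ** 2]

def solve(A, b):
    """Gaussian elimination with partial pivoting for the normal equations."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [a - f * b_ for a, b_ in zip(M[r], M[c])]
    x = [0.0] * n
    for r in reversed(range(n)):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

# Synthetic "assembly calculation" results: a known polynomial plus small noise
true_c = [1.2, -3e-4, 0.8, 1e-4, 5e-8, -0.2]
samples = [(random.uniform(300, 1500), random.uniform(0.2, 1.0)) for _ in range(200)]
X = [basis(tf, dm) for tf, dm in samples]
y = [sum(c * f for c, f in zip(true_c, row)) + random.gauss(0, 1e-4) for row in X]

# Normal equations: (X^T X) c = X^T y
XtX = [[sum(r[i] * r[j] for r in X) for j in range(6)] for i in range(6)]
Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(6)]
coeffs = solve(XtX, Xty)
print([f"{c:.2e}" for c in coeffs])
```

Once the coefficients are stored, the dynamics code evaluates nodal cross sections by a single polynomial evaluation per node, which is why a wide-range fit is cheap at run time.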

  14. Multilevel method for modeling large-scale networks.

    Energy Technology Data Exchange (ETDEWEB)

    Safro, I. M. (Mathematics and Computer Science)

    2012-02-24

    Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to those of the real networks, generating artificial networks at different scales under special conditions, investigating network dynamics, reconstructing missing data, predicting network response, detecting anomalies, and other tasks. Network generation, reconstruction, and prediction of future topology are central issues of this field. In this project, we address questions related to understanding network modeling, investigating network structure and properties, and generating artificial networks. Most modern network generation methods are based either on various random graph models (reinforced by a set of properties such as a power-law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization, such as the R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of the network hierarchy but with the same finest elements of the network. However, in many cases methods that include randomization and replication elements at the finest relationships between network nodes, and modeling that addresses the problem of preserving a set of simplified properties, do not fit the real networks accurately enough. Among the unsatisfactory features are numerically inadequate results, instability of algorithms on real (artificial) data that have been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, randomization and satisfying some attribute at the same time can abolish those topological attributes that have been undefined or hidden from
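As one concrete example of the replication-based generators mentioned above, a deterministic Kronecker-product graph can be built by repeatedly Kronecker-powering a small initiator matrix; the 2-node initiator below is an illustrative choice, not one from the report.

```python
import numpy as np

# A 2-node initiator graph (with one self-loop), Kronecker-powered k times
# to grow a self-similar network.
initiator = np.array([[1, 1],
                      [1, 0]])

def kronecker_graph(seed, k):
    """Adjacency matrix of the k-th Kronecker power of the seed graph."""
    adj = seed
    for _ in range(k - 1):
        adj = np.kron(adj, seed)
    return adj

g3 = kronecker_graph(initiator, 3)  # 8-node deterministic Kronecker graph
print(g3.shape, int(g3.sum()))      # nodes grow as 2^k, entries as 3^k
```

The stochastic variant replaces the 0/1 initiator entries with edge probabilities and samples each Kronecker-power entry independently, which is where the randomization-versus-structure tension discussed above arises.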

  15. Large deviations for Gaussian queues modelling communication networks

    CERN Document Server

    Mandjes, Michel

    2007-01-01

    Michel Mandjes, Centre for Mathematics and Computer Science (CWI), Amsterdam, The Netherlands, and Professor, Faculty of Engineering, University of Twente. At CWI, Mandjes is a senior researcher and Director of the Advanced Communications Network group. He has published over 60 papers on queuing theory, networks, scheduling, and pricing of networks.

  16. Multiscale modeling of large deformations in 3-D polycrystals

    International Nuclear Information System (INIS)

    Lu Jing; Maniatty, Antoinette; Misiolek, Wojciech; Bandar, Alexander

    2004-01-01

    An approach for modeling 3-D polycrystals, linking to the macroscale, is presented. A Potts-type model is used to generate statistically representative grain structures with periodicity to allow scale-linking. The grain structures are compared to experimentally observed grain structures to validate that they are representative. A macroscale model of a compression test is compared against an experimental compression test for an Al-Mg-Si alloy to determine various deformation paths at different locations in the samples. These deformation paths are then applied to the experimental grain structure using a scale-bridging technique. Preliminary results from this work are presented and discussed

  17. A model for recovery kinetics of aluminum after large strain

    DEFF Research Database (Denmark)

    Yu, Tianbo; Hansen, Niels

    2012-01-01

    A model is suggested to analyze the recovery kinetics of heavily deformed aluminum. The model is based on the hardness of isothermally annealed samples before recrystallization takes place, and it can be extrapolated to longer annealing times to factor out the recrystallization component of the hardness...... for conditions where recovery and recrystallization overlap. The model is applied to the isothermal recovery, at temperatures between 140 and 220°C, of commercial purity aluminum deformed to a true strain of 5.5. EBSD measurements have been carried out to detect the onset of discontinuous recrystallization. Furthermore...

  18. How Hyperarousal and Sleep Reactivity Are Represented in Different Adult Age Groups: Results from a Large Cohort Study on Insomnia.

    Science.gov (United States)

    Altena, Ellemarije; Chen, Ivy Y; Daviaux, Yannick; Ivers, Hans; Philip, Pierre; Morin, Charles M

    2017-04-14

    Hyperarousal is a 24-h state of elevated cognitive and physiological activation, and is a core feature of insomnia. The extent to which sleep quality is affected by stressful events, so-called sleep reactivity, is a vulnerability factor for developing insomnia. Given the increasing prevalence of insomnia with age, we aimed to investigate how hyperarousal and sleep reactivity were related to insomnia severity in different adult age groups. Data were derived from a large cohort study investigating the natural history of insomnia in a population-based sample (n = 1693). Baseline data of the Arousal Predisposition Scale (APS) and Ford Insomnia Response to Stress Test (FIRST) were examined across age and sleep/insomnia subgroups: 25-35 (n = 448), 35-45 (n = 528), and 45-55 year olds (n = 717); good sleepers (n = 931), individuals with insomnia symptoms (n = 450), and individuals with an insomnia syndrome (n = 312). Results from factorial analyses of variance (ANOVA) showed that APS scores decreased with increasing age, but increased with more severe sleep problems. FIRST scores were not significantly different across age groups, but showed the same strong increase as a function of sleep problem severity. The findings indicate that though arousal predisposition and sleep reactivity increase with more severe sleep problems, only arousal decreases with age. How arousing events affect an individual during the daytime thus decreases with age, but how this arousal disrupts sleep is equivalent across different adult age groups. The main implication of these findings is that treatment of insomnia could be adapted for different age groups and take into consideration vulnerability factors such as hyperarousal and stress reactivity.

  19. The ICRP task group respiratory tract model - an age-dependent dosimetric model for general application

    International Nuclear Information System (INIS)

    Bailey, M.R.; Birchall, A.

    1992-01-01

    The ICRP Task Group on Human Respiratory Tract Models for Radiological Protection has developed a revised dosimetric model for the respiratory tract. Papers outlining the model, and describing each aspect of it were presented at the Third International Workshop on Respiratory Tract Dosimetry (Albuquerque 1-3 July 1990), the Proceedings of which were recently published in Radiation Protection Dosimetry Volume 38 Nos 1-3 (1991). Since the model had not changed substantially since the Workshop at Albuquerque, only a summary of the paper presented at Schloss Elmau is included in these Proceedings. (author)

  20. Training Vocational Rehabilitation Counselors in Group Dynamics: A Psychoeducational Model.

    Science.gov (United States)

    Elliott, Timothy R.

    1990-01-01

    Describes a six-session psychoeducational program for training vocational rehabilitation counselors in group dynamics. Presents evaluation of program by counselors (N=15) in which leadership styles, conflict management, and typology of group tasks concepts were rated as most beneficial. (Author/ABL)

  1. School-Based Adolescent Groups: The Sail Model.

    Science.gov (United States)

    Thompson, John L.; And Others

    The manual outlines the processes, policies, and actual program implementation of one component of a Minnesota program for emotionally disturbed adolescents (Project SAIL): the development of school-based therapy/intervention groups. The characteristics of SAIL students are described, and some considerations involved in providing group services…

  2. Large-Scale Topic Detection and Language Model Adaptation

    National Research Council Canada - National Science Library

    Seymore, Kristie

    1997-01-01

    .... We have developed a language model adaptation scheme that takes a piece of text, chooses the most similar topic clusters from a set of over 5000 elemental topics, and uses topic-specific language...

  3. A Large Scale, High Resolution Agent-Based Insurgency Model

    Science.gov (United States)

    2013-09-30

    Compute Unified Device Architecture (CUDA) is NVIDIA Corporation's software development model for General Purpose Programming on Graphics Processing Units (GPGPU).

  4. Large scale Bayesian nuclear data evaluation with consistent model defects

    International Nuclear Information System (INIS)

    Schnabel, G

    2015-01-01

    The aim of nuclear data evaluation is the reliable determination of cross sections and related quantities of the atomic nuclei. To this end, evaluation methods are applied which combine the information of experiments with the results of model calculations. The evaluated observables with their associated uncertainties and correlations are assembled into data sets, which are required for the development of novel nuclear facilities, such as fusion reactors for energy supply, and accelerator driven systems for nuclear waste incineration. The efficiency and safety of such future facilities depend on the quality of these data sets and thus also on the reliability of the applied evaluation methods. This work investigated the performance of the majority of available evaluation methods in two scenarios. The study indicated the importance of an essential component in these methods, which is the frequently ignored deficiency of nuclear models. Usually, nuclear models are based on approximations and thus their predictions may deviate from reliable experimental data. As demonstrated in this thesis, neglecting this possibility in evaluation methods can lead to estimates of observables which are inconsistent with experimental data. Due to this finding, an extension of Bayesian evaluation methods is proposed to take into account the deficiency of the nuclear models. The deficiency is modeled as a random function in terms of a Gaussian process and combined with the model prediction. This novel formulation conserves sum rules and allows the magnitude of the model deficiency to be estimated explicitly. Both features are missing in available evaluation methods so far. Furthermore, two improvements of existing methods have been developed in the course of this thesis. The first improvement concerns methods relying on Monte Carlo sampling. 
A Metropolis-Hastings scheme with a specific proposal distribution is suggested, which proved to be more efficient in the studied scenarios than the
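The Metropolis-Hastings idea can be sketched in its textbook form; the target density and the symmetric random-walk proposal below are illustrative stand-ins, not the specific proposal distribution developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(42)

def log_target(x):
    # Unnormalized log-density of a standard normal target.
    return -0.5 * x**2

x, samples = 0.0, []
for _ in range(20000):
    prop = x + rng.normal()                    # symmetric random-walk proposal
    # Accept with probability min(1, target(prop) / target(x)).
    if np.log(rng.random()) < log_target(prop) - log_target(x):
        x = prop                               # accept; otherwise keep x
    samples.append(x)

samples = np.array(samples[5000:])             # discard burn-in
print(samples.mean(), samples.std())
```

For a symmetric proposal the Hastings correction cancels, so only the target-density ratio appears; a tailored proposal, as suggested in the thesis, changes the acceptance rule accordingly and can greatly improve sampling efficiency.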

  5. Efficient stochastic approaches for sensitivity studies of an Eulerian large-scale air pollution model

    Science.gov (United States)

    Dimov, I.; Georgieva, R.; Todorov, V.; Ostromsky, Tz.

    2017-10-01

    Reliability of large-scale mathematical models is an important issue when such models are used to support decision makers. Sensitivity analysis of model outputs with respect to variation or natural uncertainties of model inputs is crucial for improving the reliability of mathematical models. A comprehensive experimental study of Monte Carlo algorithms based on Sobol sequences for multidimensional numerical integration has been done. A comparison with Latin hypercube sampling and a particular quasi-Monte Carlo lattice rule based on generalized Fibonacci numbers is presented. The algorithms have been successfully applied to compute global Sobol sensitivity measures corresponding to the influence of several input parameters (six chemical reaction rates and four different groups of pollutants) on the concentrations of important air pollutants. The concentration values have been generated by the Unified Danish Eulerian Model. The sensitivity study has been done for the areas of several European cities with different geographical locations. The numerical tests show that the stochastic algorithms under consideration are efficient for multidimensional integration, and especially for computing sensitivity indices that are small in value. This is crucial, since even small indices may need to be estimated in order to achieve a more accurate distribution of input influences and a more reliable interpretation of the mathematical model results.
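A first-order Sobol sensitivity index can be estimated with a basic "pick-freeze" Monte Carlo scheme. The toy linear model and plain pseudo-random sampling below are illustrative only; the paper itself uses Sobol sequences, Latin hypercube sampling, and lattice rules on the Unified Danish Eulerian Model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

def model(x1, x2):
    # Toy stand-in for the air pollution model's input-output map.
    return 3.0 * x1 + 1.0 * x2

x1 = rng.uniform(size=n)
x2 = rng.uniform(size=n)
x2b = rng.uniform(size=n)       # resample x2 while "freezing" x1

y = model(x1, x2)
yb = model(x1, x2b)

# First-order index of x1: Cov(Y, Y') / Var(Y), where Y' shares only x1.
s1 = np.cov(y, yb)[0, 1] / np.var(y, ddof=1)
print(s1)   # analytic value for this toy model: 9 / (9 + 1) = 0.9
```

Quasi-Monte Carlo point sets such as Sobol sequences replace the `rng.uniform` draws here and typically converge faster, which matters most when the index being estimated is small.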

  6. Balancing selfishness and norm conformity can explain human behavior in large-scale prisoner's dilemma games and can poise human groups near criticality

    Science.gov (United States)

    Realpe-Gómez, John; Andrighetto, Giulia; Nardin, Luis Gustavo; Montoya, Javier Antonio

    2018-04-01

    Cooperation is central to the success of human societies as it is crucial for overcoming some of the most pressing social challenges of our time; still, how human cooperation is achieved and may persist is a main puzzle in the social and biological sciences. Recently, scholars have recognized the importance of social norms as solutions to major local and large-scale collective action problems, from the management of water resources to the reduction of smoking in public places to the change in fertility practices. Yet a well-founded model of the effect of social norms on human cooperation is still lacking. Using statistical-physics techniques and integrating findings from cognitive and behavioral sciences, we present an analytically tractable model in which individuals base their decisions to cooperate both on the economic rewards they obtain and on the degree to which their action complies with social norms. Results from this parsimonious model are in agreement with observations in recent large-scale experiments with humans. We also find the phase diagram of the model and show that the experimental human group is poised near a critical point, a regime where recent work suggests living systems respond to changing external conditions in an efficient and coordinated manner.
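The decision rule described above, material payoff combined with a norm-compliance term, can be caricatured with a logit choice model. The payoff structure, norm weight, and all parameters below are invented for illustration and are not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(3)
n, steps = 500, 200
c = 1.0       # material cost of cooperating (hypothetical)
w = 1.5       # weight on norm compliance (hypothetical)
beta = 3.0    # sensitivity of the logit choice rule (hypothetical)

coop = rng.random(n) < 0.5          # start with about half the agents cooperating
for _ in range(steps):
    frac = coop.mean()
    # Utility difference (cooperate - defect): in this toy the material part
    # reduces to the cost -c, while the norm term grows with how common
    # cooperation currently is.
    du = w * frac - c
    p_coop = 1.0 / (1.0 + np.exp(-beta * du))
    coop = rng.random(n) < p_coop

print(coop.mean())   # fraction cooperating after the dynamics settle
```

With these parameters the norm term is too weak to sustain cooperation, and the population settles near the defecting fixed point; varying `w` and `beta` moves the system between regimes, which is the kind of phase structure the paper maps out.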

  7. Informational and emotional elements in online support groups: a Bayesian approach to large-scale content analysis.

    Science.gov (United States)

    Deetjen, Ulrike; Powell, John A

    2016-05-01

    This research examines the extent to which informational and emotional elements are employed in online support forums for 14 purposively sampled chronic medical conditions and the factors that influence whether posts are of a more informational or emotional nature. Large-scale qualitative data were obtained from Dailystrength.org. Based on a hand-coded training dataset, all posts were classified into informational or emotional using a Bayesian classification algorithm to generalize the findings. Posts that could not be classified with a probability of at least 75% were excluded. The overall tendency toward emotional posts differs by condition: mental health (depression, schizophrenia) and Alzheimer's disease consist of more emotional posts, while informational posts relate more to nonterminal physical conditions (irritable bowel syndrome, diabetes, asthma). There is no gender difference across conditions, although prostate cancer forums are oriented toward informational support, whereas breast cancer forums rather feature emotional support. Across diseases, the best predictors for emotional content are lower age and a higher number of overall posts by the support group member. The results are in line with previous empirical research and unify empirical findings from single- and two-condition research. Limitations include the analytical restriction to predefined categories (informational, emotional) through the chosen machine-learning approach. Our findings provide an empirical foundation for building theory on informational versus emotional support across conditions, give insights for practitioners to better understand the role of online support groups for different patients, and show the usefulness of machine-learning approaches for analyzing large-scale qualitative health data from online settings.
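The classification step, hand-coded training data feeding a Bayesian classifier with a 75% posterior threshold, can be sketched with a small multinomial naive Bayes. The four training posts and the query below are invented examples, not data from the study.

```python
import math
from collections import Counter

# Invented training posts; the real training set was hand-coded by the authors.
train = [
    ("what dose should i take for my diabetes", "informational"),
    ("which treatment works for asthma symptoms", "informational"),
    ("i feel so alone and scared tonight", "emotional"),
    ("thank you all for the love and support", "emotional"),
]

counts = {"informational": Counter(), "emotional": Counter()}
docs = Counter()
for text, label in train:
    counts[label].update(text.split())
    docs[label] += 1

vocab = {w for c in counts.values() for w in c}

def posterior(text):
    """Class posteriors from multinomial naive Bayes with add-one smoothing."""
    scores = {}
    for label, c in counts.items():
        total = sum(c.values())
        lp = math.log(docs[label] / len(train))         # class prior
        for w in text.split():
            lp += math.log((c[w] + 1) / (total + len(vocab)))
        scores[label] = lp
    m = max(scores.values())
    z = sum(math.exp(s - m) for s in scores.values())
    return {k: math.exp(s - m) / z for k, s in scores.items()}

p = posterior("i feel scared and alone")
label = max(p, key=p.get)
# Mirror the paper's rule: keep only posts classified with >= 75% probability.
print(label if p[label] >= 0.75 else "unclassified")
```

Posts whose posterior falls below the threshold are dropped rather than force-labeled, which trades coverage for cleaner class assignments, the same exclusion rule described in the abstract.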

  8. Use of models in large-area forest surveys: comparing model-assisted, model-based and hybrid estimation

    Science.gov (United States)

    Goran Stahl; Svetlana Saarela; Sebastian Schnell; Soren Holm; Johannes Breidenbach; Sean P. Healey; Paul L. Patterson; Steen Magnussen; Erik Naesset; Ronald E. McRoberts; Timothy G. Gregoire

    2016-01-01

    This paper focuses on the use of models for increasing the precision of estimators in large-area forest surveys. It is motivated by the increasing availability of remotely sensed data, which facilitates the development of models predicting the variables of interest in forest surveys. We present, review and compare three different estimation frameworks where...

  9. Discrete time duration models with group-level heterogeneity

    DEFF Research Database (Denmark)

    Frederiksen, Anders; Honoré, Bo; Hu, Loujia

    2007-01-01

    Dynamic discrete choice panel data models have received a great deal of attention. In those models, the dynamics is usually handled by including the lagged outcome as an explanatory variable. In this paper we consider an alternative model in which the dynamics is handled by using the duration...

  10. Characteristics of the large corporation-based, bureaucratic model among oecd countries - an foi model analysis

    Directory of Open Access Journals (Sweden)

    Bartha Zoltán

    2014-03-01

    Full Text Available Deciding on the development path of the economy has been a delicate question in economic policy, not least because of the trade-off effects which immediately worsen certain economic indicators as steps are taken to improve others. The aim of the paper is to present a framework that helps decide on such policy dilemmas. This framework is based on an analysis conducted among OECD countries with the FOI model (focusing on future, outside and inside potentials. Several development models can be deduced by this method, out of which only the large corporation-based, bureaucratic model is discussed in detail. The large corporation-based, bureaucratic model implies a development strategy focused on the creation of domestic safe havens. Based on country studies, it is concluded that well-performing safe havens require the active participation of the state. We find that, in countries adhering to this model, business competitiveness is sustained through intensive public support, and an active role taken by the government in education, research and development, in detecting and exploiting special market niches, and in encouraging sectorial cooperation.

  11. Super enrichment of Fe-group nuclei in solar flares and their association with large 3He enrichments

    International Nuclear Information System (INIS)

    Anglin, J.D.; Dietrich, W.F.; Simpson, J.A.

    1977-01-01

    Fe/He ratios at approximately 2 MeV/n have been measured in 60 solar flares and periods of enhanced fluxes during the interval 1972-1976. The observed distribution of ratios is extremely wide, with values ranging from approximately 1 to more than 1000 times the solar abundance ratio. In contrast, most of the CHO/He ratios for the same flares lie within a factor of 2 of the observed mean value of 2 × 10⁻². While experimental limitations prevent a complete correlation study of Fe-group and ³He abundances, comparison of flares with large Fe enrichments with flares with large ³He enrichments for the period 1969-1976 shows that a ³He-rich flare is also likely to be rich in iron. We feel that the association of ³He and Fe enrichments may be explained by a two-stage process in which a preliminary enrichment of heavy nuclei precedes the preferential acceleration of ambient ³He. Nuclear interactions are ruled out as the principal source of the enriched ³He. (author)

  12. Radiation Therapy for Primary Cutaneous Anaplastic Large Cell Lymphoma: An International Lymphoma Radiation Oncology Group Multi-institutional Experience

    Energy Technology Data Exchange (ETDEWEB)

    Million, Lynn, E-mail: lmillion@stanford.edu [Stanford Cancer Institute, Stanford, California (United States); Yi, Esther J.; Wu, Frank; Von Eyben, Rie [Stanford Cancer Institute, Stanford, California (United States); Campbell, Belinda A. [Department of Radiation Oncology and Cancer Imaging, Peter MacCallum Cancer Centre, East Melbourne (Australia); Dabaja, Bouthaina [The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Tsang, Richard W. [Department of Radiation Oncology, Princess Margaret Cancer Centre, Toronto, Ontario (Canada); Ng, Andrea [Department of Radiation Oncology, Dana-Farber Cancer Institute, Boston, Massachusetts (United States); Wilson, Lynn D. [Department of Therapeutic Radiology/Radiation Oncology, Yale School of Medicine, Yale Cancer Center, New Haven, Connecticut (United States); Ricardi, Umberto [Department of Oncology, University of Turin, Turin (Italy); Kirova, Youlia [Institut Curie, Paris (France); Hoppe, Richard T. [Stanford Cancer Institute, Stanford, California (United States)

    2016-08-01

    Purpose: To collect response rates of primary cutaneous anaplastic large cell lymphoma, a rare cutaneous T-cell lymphoma, to radiation therapy (RT), and to determine potential prognostic factors predictive of outcome. Methods and Materials: The study was a retrospective analysis of patients with primary cutaneous anaplastic large cell lymphoma who received RT as primary therapy or after surgical excision. Data collected include initial stage of disease, RT modality (electron/photon), total dose, fractionation, response to treatment, and local recurrence. Radiation therapy was delivered at 8 participating International Lymphoma Radiation Oncology Group institutions worldwide. Results: Fifty-six patients met the eligibility criteria, and 63 tumors were treated: head and neck (27%), trunk (14%), upper extremities (27%), and lower extremities (32%). Median tumor size was 2.25 cm (range, 0.6-12 cm). T classification included T1, 40 patients (71%); T2, 12 patients (21%); and T3, 4 patients (7%). The median radiation dose was 35 Gy (range, 6-45 Gy). Complete clinical response (CCR) was achieved in 60 of 63 tumors (95%) and partial response in 3 tumors (5%). After CCR, 1 tumor recurred locally (1.7%) after 36 Gy and 7 months after RT. This was the only patient to die of disease. Conclusions: Primary cutaneous anaplastic large cell lymphoma is a rare, indolent cutaneous lymphoma with a low death rate. This analysis, which was restricted to patients selected for treatment with radiation, indicates that achieving CCR was independent of radiation dose. Because there were too few failures (<2%) for statistical analysis on dose response, 30 Gy seems to be adequate for local control, and even lower doses may suffice.

  13. Renormalization-group theory for the eddy viscosity in subgrid modeling

    Science.gov (United States)

    Zhou, YE; Vahala, George; Hossain, Murshed

    1988-01-01

    Renormalization-group theory is applied to incompressible three-dimensional Navier-Stokes turbulence so as to eliminate unresolvable small scales. The renormalized Navier-Stokes equation now includes a triple nonlinearity with the eddy viscosity exhibiting a mild cusp behavior, in qualitative agreement with the test-field model results of Kraichnan. For the cusp behavior to arise, not only is the triple nonlinearity necessary but the effects of pressure must be incorporated in the triple term. The renormalized eddy viscosity will not exhibit a cusp behavior if it is assumed that a spectral gap exists between the large and small scales.

  14. Renormalization group study of the one-dimensional quantum Potts model

    International Nuclear Information System (INIS)

    Solyom, J.; Pfeuty, P.

    1981-01-01

    The phase transition of the classical two-dimensional Potts model, in particular the order of the transition as the number of components q increases, is studied by constructing renormalization-group transformations on the equivalent one-dimensional quantum problem. It is shown that the block transformation with two sites per cell indicates the existence of a critical value q_c separating the small-q and large-q regions with different critical behaviours. The physically accessible fixed point for q > q_c is a discontinuity fixed point where the specific heat exponent α = 1 and therefore the transition is of first order. (author)

  15. Description of the East Brazil Large Marine Ecosystem using a trophic model

    Directory of Open Access Journals (Sweden)

    Kátia M.F. Freire

    2008-09-01

    Full Text Available The objective of this study was to describe the marine ecosystem off northeastern Brazil. A trophic model was constructed for the 1970s using Ecopath with Ecosim. The impact of most of the forty-one functional groups was modest, probably due to the highly reticulated diet matrix. However, seagrass and macroalgae exerted a strong positive impact on manatee and herbivorous reef fishes, respectively. A high negative impact of omnivorous reef fishes on spiny lobsters, and of sharks on swordfish, was observed. Spiny lobsters and swordfish had the largest biomass changes over the simulation period (1978-2000); tunas, other large pelagics and sharks showed intermediate rates of biomass decline; and a slight increase in biomass was observed for toothed cetaceans, large carnivorous reef fishes, and dolphinfish. Recycling was an important feature of this ecosystem, with low phytoplankton-originated primary production. The mean transfer efficiency between trophic levels was 11.4%. The gross efficiency of the fisheries was very low (0.00002), probably due to the low exploitation rate of most of the resources in the 1970s. Basic local information was missing for many groups. When these information gaps are filled, this model may serve more credibly for the exploration of fishing policies for this area within an ecosystem approach.

  16. CFD modeling of pool swell during large break LOCA

    International Nuclear Information System (INIS)

    Yan, Jin; Bolger, Francis; Li, Guangjun; Mintz, Saul; Pappone, Daniel

    2009-01-01

    GE conducted a series of one-third scale three-vent air tests in support of the horizontal vent pressure suppression system used in the Mark III containment design for General Electric BWR plants. During the tests, the air-water interface was tracked by conductivity probes, and many pressure monitors were installed inside the test rig. The purpose of the tests was to provide a basis for the pool swell load definition for the Mark III containment. In this paper, a transient 3-dimensional CFD model of the one-third scale Mark III suppression pool swell process is constructed. The Volume of Fluid (VOF) multiphase model is used to explicitly track the interface between the liquid water and the air. CFD results such as flow velocity, pressure, and interface locations are compared to those from the test. Through these comparisons, a technical approach to numerically modeling the pool swell phenomenon is established and benchmarked. (author)

  17. Modeling and simulation of large scale stirred tank

    Science.gov (United States)

    Neuville, John R.

    The purpose of this dissertation is to provide a written record of the evaluation performed on the DWPF mixing process through the construction of numerical models that resemble the geometry of this process. Seven numerical models were constructed to evaluate the DWPF mixing process and four pilot plants. The models were developed with Fluent software, and the results from these models were used to evaluate the structure of the flow field and the power demand of the agitator. The results from the numerical models were compared with empirical data collected from these pilot plants, which had been operated at an earlier date. Mixing is commonly used in a variety of ways throughout industry to blend miscible liquids, disperse gas through liquid, form emulsions, promote heat transfer, and suspend solid particles. The DOE sites at Hanford in Richland, Washington; West Valley in New York; and the Savannah River Site in Aiken, South Carolina have developed a process that immobilizes highly radioactive liquid waste. The radioactive liquid waste at DWPF is an opaque sludge that is mixed in a stirred tank with glass frit particles and water to form a slurry of specified proportions. The DWPF mixing process is composed of a flat-bottom cylindrical mixing vessel with a centrally located helical coil and agitator. The helical coil is used to heat and cool the contents of the tank and can improve flow circulation. The agitator shaft has two impellers: a radial blade and a hydrofoil blade. The hydrofoil is used to circulate the mixture between the top and bottom regions of the tank. The radial blade sweeps the bottom of the tank and pushes the fluid in the outward radial direction. The full-scale vessel contains about 9500 gallons of slurry with flow behavior characterized as a Bingham plastic. Particles in the mixture have an abrasive characteristic that causes excessive erosion of internal vessel components at higher impeller speeds. 
The desire for this mixing process is to ensure the

  18. Simplified local density model for adsorption over large pressure ranges

    International Nuclear Information System (INIS)

    Rangarajan, B.; Lira, C.T.; Subramanian, R.

    1995-01-01

    Physical adsorption of high-pressure fluids onto solids is of interest in the transportation and storage of fuel and radioactive gases; the separation and purification of lower hydrocarbons; solid-phase extractions; adsorbent regenerations using supercritical fluids; supercritical fluid chromatography; and critical point drying. A mean-field model is developed that superimposes the fluid-solid potential on a fluid equation of state to predict adsorption on a flat wall from vapor, liquid, and supercritical phases. A van der Waals-type equation of state is used to represent the fluid phase, and is simplified with a local density approximation for calculating the configurational energy of the inhomogeneous fluid. The simplified local density approximation makes the model tractable for routine calculations over wide pressure ranges. The model is capable of prediction of Type 2 and 3 subcritical isotherms for adsorption on a flat wall, and shows the characteristic cusplike behavior and crossovers seen experimentally near the fluid critical point

  19. REPORT ON THE MODELING OF THE LARGE MIS CANS

    International Nuclear Information System (INIS)

    MOODY, E.; LYMAN, J.; VEIRS, K.

    2000-01-01

    Changes in gas composition and gas pressure for closed systems containing plutonium dioxide and water are studied using a model that incorporates both radiolysis and chemical reactions. The model is used to investigate the behavior of material stored in storage containers conforming to DOE-STD-3013-99 storage standard. Scaling of the container to allow use of smaller amounts of nuclear material in experiments designed to bound the behavior of all material destined for long-term storage is studied. It is found that the container volume must be scaled along with the amount of material to achieve applicable results

  20. Modeling and analysis of a large deployable antenna structure

    Science.gov (United States)

    Chu, Zhengrong; Deng, Zongquan; Qi, Xiaozhi; Li, Bing

    2014-02-01

    In this paper, one kind of large deployable antenna (LDA) structure is proposed by combining a number of basic deployable units. In order to avoid vibration caused by a fast deployment speed of the mechanism, a braking system is used to control the spring-actuated system. Comparisons between the LDA structure and a similar structure used by the large deployable reflector (LDR) indicate that the former has potential for use in antennas with up to 30 m aperture due to its lighter weight. The LDA structure is designed to form a spherical surface, found by the least-squares fitting method, so that it can be symmetrical. In this case, the positions of the terminal points in the structure are determined by two principles. A method to calculate the cable network stretched on the LDA structure is developed, which combines the original force density method with the parabolic surface constraint. A genetic algorithm is applied to ensure that each cable reaches a desired tension, which effectively avoids the non-convergence issue. We find that the pattern for the front and rear cable nets must be the same when finding the shape of the rear cable net, otherwise an anticlastic surface would be generated.
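
The force density method mentioned here is linear once the force densities q = tension/length are fixed; a minimal sketch for a single free node follows (the real cable net solves one such linear system for all free nodes at once, and the parabolic-surface constraint and genetic-algorithm tension tuning are omitted).

```python
def fdm_free_node(anchors, q, load=(0.0, 0.0, 0.0)):
    """Force density method for one free node joined to fixed anchors.
    Equilibrium sum_i q[i]*(anchor_i - x) + load = 0 is linear in x."""
    qs = sum(q)
    return tuple(
        (sum(qi * a[k] for qi, a in zip(q, anchors)) + load[k]) / qs
        for k in range(3)
    )

# Four symmetric anchors, equal force densities, a downward load:
anchors = [(1.0, 0.0, 0.0), (-1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, -1.0, 0.0)]
x = fdm_free_node(anchors, [2.0, 2.0, 2.0, 2.0], load=(0.0, 0.0, -1.0))
# The node settles below the anchor plane; member tensions then follow as
# t_i = q_i * distance(anchor_i, x).
```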

  1. Glucocorticoid induced osteopenia in cancellous bone of sheep: validation of large animal model for spine fusion and biomaterial research

    DEFF Research Database (Denmark)

    Ding, Ming; Cheng, Liming; Bollen, Peter

    2010-01-01

    STUDY DESIGN: Glucocorticoid with low calcium and phosphorus intake induces osteopenia in cancellous bone of sheep. OBJECTIVE: To validate a large animal model for spine fusion and biomaterial research. SUMMARY OF BACKGROUND DATA: A variety of ovariectomized animals has been used to study... osteoporosis. Most experimental spine fusions were based on normal animals, and there is a great need for suitable large animal models with adequate bone size that closely resemble osteoporosis in humans. METHODS: Eighteen skeletally mature female sheep were randomly allocated into 3 groups, 6 each. Group 1 (GC-1) received prednisolone (GC) treatment (0.60 mg/kg/day, 5 times weekly) for 7 months. Group 2 (GC-2) received the same treatment as GC-1 for 7 months followed by 3 months without treatment. Group 3 was left untreated and served as the controls. All sheep received a restricted diet with low calcium...

  2. METHODOLOGY & CALCULATIONS FOR THE ASSIGNMENT OF WASTE GROUPS FOR THE LARGE UNDERGROUND WASTE STORAGE TANKS AT THE HANFORD SITE

    Energy Technology Data Exchange (ETDEWEB)

    BARKER, S.A.

    2006-07-27

    Waste stored within tank farm double-shell tanks (DST) and single-shell tanks (SST) generates flammable gas (principally hydrogen) to varying degrees depending on the type, amount, geometry, and condition of the waste. The waste generates hydrogen through the radiolysis of water and organic compounds, thermolytic decomposition of organic compounds, and corrosion of a tank's carbon steel walls. Radiolysis and thermolytic decomposition also generate ammonia. Nonflammable gases, which act as diluents (such as nitrous oxide), are also produced. Additional flammable gases (e.g., methane) are generated by chemical reactions between various degradation products of organic chemicals present in the tanks. Volatile and semi-volatile organic chemicals in tanks also produce organic vapors. The generated gases in tank waste are either released continuously to the tank headspace or are retained in the waste matrix. Retained gas may be released in a spontaneous or induced gas release event (GRE) that can significantly increase the flammable gas concentration in the tank headspace, as described in RPP-7771. The document categorizes each of the large waste storage tanks into one of several categories based on each tank's waste characteristics. These waste group assignments reflect a tank's propensity to retain a significant volume of flammable gases and the potential of the waste to release retained gas by a buoyant displacement event. Revision 5 is the annual update of the methodology and calculations of the flammable gas Waste Groups for DSTs and SSTs.

  3. Searches for phenomena beyond the Standard Model at the Large Hadron Collider

    Indian Academy of Sciences (India)

    The LHC has delivered several fb-1 of data in spring and summer 2011, opening new windows of opportunity for discovering phenomena beyond the Standard Model. A summary of the searches conducted by the ATLAS and CMS experiments based on about 1 fb-1 of data is presented.

  4. Symmetry-guided large-scale shell-model theory

    Czech Academy of Sciences Publication Activity Database

    Launey, K. D.; Dytrych, Tomáš; Draayer, J. P.

    2016-01-01

    Roč. 89, JUL (2016), s. 101-136 ISSN 0146-6410 R&D Projects: GA ČR GA16-16772S Institutional support: RVO:61389005 Keywords: Ab initio shell-model theory * Symplectic symmetry * Collectivity * Clusters * Hoyle state * Orderly patterns in nuclei from first principles Subject RIV: BE - Theoretical Physics Impact factor: 11.229, year: 2016

  5. Large-area dry bean yield prediction modeling in Mexico

    Science.gov (United States)

    Given the importance of dry bean in Mexico, crop yield predictions before harvest are valuable for authorities of the agricultural sector, in order to define support for producers. The aim of this study was to develop an empirical model to estimate the yield of dry bean at the regional level prior t...

  6. Soil carbon management in large-scale Earth system modelling

    DEFF Research Database (Denmark)

    Olin, S.; Lindeskog, M.; Pugh, T. A. M.

    2015-01-01

    , carbon sequestration and nitrogen leaching from croplands are evaluated and discussed. Compared to the version of LPJ-GUESS that does not include land-use dynamics, estimates of soil carbon stocks and nitrogen leaching from terrestrial to aquatic ecosystems were improved. Our model experiments allow us...

  7. Misspecified poisson regression models for large-scale registry data: inference for 'large n and small p'.

    Science.gov (United States)

    Grøn, Randi; Gerds, Thomas A; Andersen, Per K

    2016-03-30

    Poisson regression is an important tool in register-based epidemiology where it is used to study the association between exposure variables and event rates. In this paper, we will discuss the situation with 'large n and small p', where n is the sample size and p is the number of available covariates. Specifically, we are concerned with modeling options when there are time-varying covariates that can have time-varying effects. One problem is that tests of the proportional hazards assumption, of no interactions between exposure and other observed variables, or of other modeling assumptions have large power due to the large sample size and will often indicate statistical significance even for numerically small deviations that are unimportant for the subject matter. Another problem is that information on important confounders may be unavailable. In practice, this situation may lead to simple working models that are then likely misspecified. To support and improve conclusions drawn from such models, we discuss methods for sensitivity analysis, for estimation of average exposure effects using aggregated data, and a semi-parametric bootstrap method to obtain robust standard errors. The methods are illustrated using data from the Danish national registries investigating the diabetes incidence for individuals treated with antipsychotics compared with the general unexposed population. Copyright © 2015 John Wiley & Sons, Ltd.
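
The kind of aggregated-data Poisson model discussed can be sketched as a log-rate regression with a person-time offset, fit by Newton/IRLS; the two-group synthetic data and settings below are invented for illustration.

```python
import math

def poisson_fit(x, y, pt, iters=50):
    """Newton/IRLS for log(rate) = b0 + b1*x with offset log(person-time).
    Pure-Python 2x2 Fisher-scoring steps; returns (b0, b1). Illustrative only."""
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        mu = [p * math.exp(b0 + b1 * xi) for xi, p in zip(x, pt)]
        # Score vector and Fisher information for the two coefficients.
        s0 = sum(yi - mi for yi, mi in zip(y, mu))
        s1 = sum(xi * (yi - mi) for xi, yi, mi in zip(x, y, mu))
        i00 = sum(mu)
        i01 = sum(xi * mi for xi, mi in zip(x, mu))
        i11 = sum(xi * xi * mi for xi, mi in zip(x, mu))
        det = i00 * i11 - i01 * i01
        b0 += (i11 * s0 - i01 * s1) / det
        b1 += (-i01 * s0 + i00 * s1) / det
    return b0, b1

# Synthetic aggregated data: exposed group (x=1) has twice the event rate
# of the unexposed group (x=0).
x = [0, 0, 1, 1]
y = [10, 12, 20, 24]               # event counts
pt = [100.0, 120.0, 100.0, 120.0]  # person-years at risk
b0, b1 = poisson_fit(x, y, pt)
rate_ratio = math.exp(b1)          # exactly 2 for this synthetic data
```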

  8. Airflow and radon transport modeling in four large buildings

    International Nuclear Information System (INIS)

    Fan, J.B.; Persily, A.K.

    1995-01-01

    Computer simulations of multizone airflow and contaminant transport were performed in four large buildings using the program CONTAM88. This paper describes the physical characteristics of the buildings and their idealizations as multizone building airflow systems. These buildings include a twelve-story multifamily residential building, a five-story mechanically ventilated office building with an atrium, a seven-story mechanically ventilated office building with an underground parking garage, and a one-story school building. The air change rates and interzonal airflows of these buildings are predicted for a range of wind speeds, indoor-outdoor temperature differences, and percentages of outdoor air intake in the supply air. Simulations of radon transport were also performed in the buildings to investigate the effects of indoor-outdoor temperature difference and wind speed on indoor radon concentrations.
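
The dependence of indoor radon on outdoor-air intake that these simulations explore can be illustrated with a single well-mixed zone balance (the multizone CONTAM88 model couples many such zones through interzonal airflows); the source strength and volumes below are hypothetical.

```python
RN222_DECAY = 7.56e-3  # radioactive decay constant, per hour (3.82-day half-life)

def steady_state_radon(source_bq_per_h, volume_m3, ach, outdoor_bq_m3=10.0):
    """Steady-state concentration (Bq/m3) for one well-mixed zone:
    0 = S/V + ach*(C_out - C) - lambda*C, solved for C.
    ach is the air change rate in 1/h."""
    return (source_bq_per_h / volume_m3 + ach * outdoor_bq_m3) / (ach + RN222_DECAY)

# Hypothetical 500 m3 zone with a 5000 Bq/h radon entry rate: raising the
# air change rate dilutes the indoor concentration toward the outdoor level.
low_ach = steady_state_radon(5000.0, 500.0, ach=0.3)
high_ach = steady_state_radon(5000.0, 500.0, ach=1.5)
```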

  9. On-line core monitoring system based on buckling corrected modified one group model

    International Nuclear Information System (INIS)

    Freire, Fernando S.

    2011-01-01

    Nuclear power reactors require core monitoring during plant operation: to provide safe, clean and reliable operation, core conditions must be continuously evaluated. Currently, the reactor core monitoring process is carried out by nuclear code systems that, together with data from plant instrumentation such as thermocouples, ex-core detectors and fixed or movable in-core detectors, can easily predict and monitor a variety of plant conditions. Typically, standard nodal methods can be found at the heart of such nuclear monitoring code systems. However, standard nodal methods require large computer running times when compared with standard coarse-mesh finite-difference schemes. Unfortunately, classic finite-difference models require a fine-mesh reactor core representation. To overcome this limitation, the classic modified one group model can be used to account for the main core neutronic behavior. In this model a coarse-mesh core representation can be easily evaluated with a crude treatment of thermal neutron leakage. In this work, an improvement to the classic modified one group model, based on a thermal buckling correction, was used to obtain a fast, accurate and reliable core monitoring methodology for future applications, providing a powerful tool for the core monitoring process. (author)
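
The coarse-mesh one-group calculation underlying this kind of monitoring scheme can be sketched as a finite-difference eigenvalue problem solved by power iteration. The slab geometry and cross sections below are hypothetical, and the paper's buckling-corrected thermal-leakage treatment is not reproduced.

```python
import math

def k_eff_slab(width_cm, n, dif, sig_a, nu_sig_f, iters=500):
    """Power iteration for the 1-D one-group diffusion equation
    -D*phi'' + sig_a*phi = (1/k)*nu_sig_f*phi on a bare slab with
    zero-flux boundaries, discretized on n interior mesh points."""
    h = width_cm / (n + 1)
    lo = -dif / h**2                 # off-diagonal coefficient
    di = 2.0 * dif / h**2 + sig_a    # diagonal coefficient
    phi = [1.0] * n
    k = 1.0
    for _ in range(iters):
        src = [nu_sig_f * p / k for p in phi]
        # Thomas algorithm for the constant-coefficient tridiagonal system.
        cp, dp = [0.0] * n, [0.0] * n
        cp[0], dp[0] = lo / di, src[0] / di
        for i in range(1, n):
            m = di - lo * cp[i - 1]
            cp[i] = lo / m
            dp[i] = (src[i] - lo * dp[i - 1]) / m
        new = [0.0] * n
        new[-1] = dp[-1]
        for i in range(n - 2, -1, -1):
            new[i] = dp[i] - cp[i] * new[i + 1]
        # With uniform nu_sig_f the fission-source ratio is the flux-sum ratio.
        k *= sum(new) / sum(phi)
        phi = new
    return k

# Hypothetical constants; k should approach nu_sig_f/(sig_a + D*(pi/width)^2)
# as the mesh is refined (bare-slab one-group result).
k = k_eff_slab(100.0, 50, dif=1.0, sig_a=0.07, nu_sig_f=0.08)
```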

  10. Forward Modeling of Large-scale Structure: An Open-source Approach with Halotools

    Science.gov (United States)

    Hearin, Andrew P.; Campbell, Duncan; Tollerud, Erik; Behroozi, Peter; Diemer, Benedikt; Goldbaum, Nathan J.; Jennings, Elise; Leauthaud, Alexie; Mao, Yao-Yuan; More, Surhud; Parejko, John; Sinha, Manodeep; Sipöcz, Brigitta; Zentner, Andrew

    2017-11-01

    We present the first stable release of Halotools (v0.2), a community-driven Python package designed to build and test models of the galaxy-halo connection. Halotools provides a modular platform for creating mock universes of galaxies starting from a catalog of dark matter halos obtained from a cosmological simulation. The package supports many of the common forms used to describe galaxy-halo models: the halo occupation distribution, the conditional luminosity function, abundance matching, and alternatives to these models that include effects such as environmental quenching or variable galaxy assembly bias. Satellite galaxies can be modeled to live in subhalos or to follow custom number density profiles within their halos, including spatial and/or velocity bias with respect to the dark matter profile. The package has an optimized toolkit to make mock observations on a synthetic galaxy population—including galaxy clustering, galaxy-galaxy lensing, galaxy group identification, RSD multipoles, void statistics, pairwise velocities and others—allowing direct comparison to observations. Halotools is object-oriented, enabling complex models to be built from a set of simple, interchangeable components, including those of your own creation. Halotools has an automated testing suite and is exhaustively documented on http://halotools.readthedocs.io, which includes quickstart guides, source code notes and a large collection of tutorials. The documentation is effectively an online textbook on how to build and study empirical models of galaxy formation with Python.
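
The halo occupation distribution is one of the model forms listed; its expected occupations can be sketched without Halotools itself, using the Zheng et al. (2007) functional forms (the parameter values below are illustrative, not Halotools defaults).

```python
import math

def mean_ncen(log_m, log_mmin=12.0, sigma=0.25):
    """Expected number of central galaxies in a halo of mass 10**log_m,
    Zheng et al. (2007) error-function form."""
    return 0.5 * (1.0 + math.erf((log_m - log_mmin) / sigma))

def mean_nsat(log_m, log_m0=12.0, log_m1=13.0, alpha=1.0):
    """Expected satellite count: power law above a cutoff mass."""
    m, m0, m1 = 10.0 ** log_m, 10.0 ** log_m0, 10.0 ** log_m1
    return ((m - m0) / m1) ** alpha if m > m0 else 0.0

# Mean total occupation across a toy halo catalog; a mock generator would
# draw Bernoulli centrals and Poisson satellites from these means.
halos = [11.5, 12.0, 12.5, 13.0, 14.0]           # log10(halo mass)
occupancy = [mean_ncen(m) + mean_nsat(m) for m in halos]
```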

  11. Forward Modeling of Large-scale Structure: An Open-source Approach with Halotools

    Energy Technology Data Exchange (ETDEWEB)

    Hearin, Andrew P.; Campbell, Duncan; Tollerud, Erik; Behroozi, Peter; Diemer, Benedikt; Goldbaum, Nathan J.; Jennings, Elise; Leauthaud, Alexie; Mao, Yao-Yuan; More, Surhud; Parejko, John; Sinha, Manodeep; Sipöcz, Brigitta; Zentner, Andrew

    2017-10-18

    We present the first stable release of Halotools (v0.2), a community-driven Python package designed to build and test models of the galaxy-halo connection. Halotools provides a modular platform for creating mock universes of galaxies starting from a catalog of dark matter halos obtained from a cosmological simulation. The package supports many of the common forms used to describe galaxy-halo models: the halo occupation distribution, the conditional luminosity function, abundance matching, and alternatives to these models that include effects such as environmental quenching or variable galaxy assembly bias. Satellite galaxies can be modeled to live in subhalos or to follow custom number density profiles within their halos, including spatial and/or velocity bias with respect to the dark matter profile. The package has an optimized toolkit to make mock observations on a synthetic galaxy population—including galaxy clustering, galaxy–galaxy lensing, galaxy group identification, RSD multipoles, void statistics, pairwise velocities and others—allowing direct comparison to observations. Halotools is object-oriented, enabling complex models to be built from a set of simple, interchangeable components, including those of your own creation. Halotools has an automated testing suite and is exhaustively documented on http://halotools.readthedocs.io, which includes quickstart guides, source code notes and a large collection of tutorials. The documentation is effectively an online textbook on how to build and study empirical models of galaxy formation with Python.

  12. Uncertainty Quantification for Large-Scale Ice Sheet Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Ghattas, Omar [Univ. of Texas, Austin, TX (United States)

    2016-02-05

    This report summarizes our work to develop advanced forward and inverse solvers and uncertainty quantification capabilities for a nonlinear 3D full Stokes continental-scale ice sheet flow model. The components include: (1) forward solver: a new state-of-the-art parallel adaptive scalable high-order-accurate mass-conservative Newton-based 3D nonlinear full Stokes ice sheet flow simulator; (2) inverse solver: a new adjoint-based inexact Newton method for solution of deterministic inverse problems governed by the above 3D nonlinear full Stokes ice flow model; and (3) uncertainty quantification: a novel Hessian-based Bayesian method for quantifying uncertainties in the inverse ice sheet flow solution and propagating them forward into predictions of quantities of interest such as ice mass flux to the ocean.

  13. A comparison of updating algorithms for large N reduced models

    Energy Technology Data Exchange (ETDEWEB)

    Pérez, Margarita García [Instituto de Física Teórica UAM-CSIC, Universidad Autónoma de Madrid,Nicolás Cabrera 13-15, E-28049-Madrid (Spain); González-Arroyo, Antonio [Instituto de Física Teórica UAM-CSIC, Universidad Autónoma de Madrid,Nicolás Cabrera 13-15, E-28049-Madrid (Spain); Departamento de Física Teórica, C-XI Universidad Autónoma de Madrid,E-28049 Madrid (Spain); Keegan, Liam [PH-TH, CERN,CH-1211 Geneva 23 (Switzerland); Okawa, Masanori [Graduate School of Science, Hiroshima University,Higashi-Hiroshima, Hiroshima 739-8526 (Japan); Core of Research for the Energetic Universe, Hiroshima University,Higashi-Hiroshima, Hiroshima 739-8526 (Japan); Ramos, Alberto [PH-TH, CERN,CH-1211 Geneva 23 (Switzerland)

    2015-06-29

    We investigate Monte Carlo updating algorithms for simulating SU(N) Yang-Mills fields on a single-site lattice, such as for the Twisted Eguchi-Kawai model (TEK). We show that performing only over-relaxation (OR) updates of the gauge links is a valid simulation algorithm for the Fabricius and Haan formulation of this model, and that this decorrelates observables faster than using heat-bath updates. We consider two different methods of implementing the OR update: either updating the whole SU(N) matrix at once, or iterating through SU(2) subgroups of the SU(N) matrix, we find the same critical exponent in both cases, and only a slight difference between the two.
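
The SU(2)-subgroup over-relaxation move can be sketched with SU(2) elements stored as unit quaternions: with V the normalized staple sum, the reflection U → V U† V leaves the local action term Re Tr(U A†) unchanged. This is a generic illustration of the OR update, not the single-site TEK code of the paper.

```python
import math
import random

# SU(2) elements as quaternions (a0, a1, a2, a3); sums of SU(2) matrices
# stay in the quaternion algebra, so the staple is a (non-unit) quaternion.
def qmul(p, q):
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return (p0*q0 - p1*q1 - p2*q2 - p3*q3,
            p0*q1 + p1*q0 + p2*q3 - p3*q2,
            p0*q2 - p1*q3 + p2*q0 + p3*q1,
            p0*q3 + p1*q2 - p2*q1 + p3*q0)

def qconj(p):  # conjugate = inverse for unit quaternions (U dagger)
    return (p[0], -p[1], -p[2], -p[3])

def qnorm(p):
    return math.sqrt(sum(c * c for c in p))

def random_su2():
    v = [random.gauss(0.0, 1.0) for _ in range(4)]
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def action_term(u, staple):
    """Re Tr(U A^dagger) = 2 * (quaternion dot product of U and A)."""
    return 2.0 * sum(a * b for a, b in zip(u, staple))

def overrelax(u, staple):
    """Microcanonical reflection U -> V U^dagger V with V = A/|A|."""
    n = qnorm(staple)
    v = tuple(c / n for c in staple)
    return qmul(qmul(v, qconj(u)), v)

random.seed(1)
u = random_su2()
staple = tuple(sum(c) for c in zip(random_su2(), random_su2(), random_su2()))
u_new = overrelax(u, staple)
# The move preserves the action term exactly and is an involution:
# applying it twice returns the original link.
```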

  14. A comparison of updating algorithms for large $N$ reduced models

    CERN Document Server

    Pérez, Margarita García; Keegan, Liam; Okawa, Masanori; Ramos, Alberto

    2015-01-01

    We investigate Monte Carlo updating algorithms for simulating $SU(N)$ Yang-Mills fields on a single-site lattice, such as for the Twisted Eguchi-Kawai model (TEK). We show that performing only over-relaxation (OR) updates of the gauge links is a valid simulation algorithm for the Fabricius and Haan formulation of this model, and that this decorrelates observables faster than using heat-bath updates. We consider two different methods of implementing the OR update: either updating the whole $SU(N)$ matrix at once, or iterating through $SU(2)$ subgroups of the $SU(N)$ matrix, we find the same critical exponent in both cases, and only a slight difference between the two.

  15. The Waterfall Model in Large-Scale Development

    Science.gov (United States)

    Petersen, Kai; Wohlin, Claes; Baca, Dejan

    Waterfall development is still a widely used way of working in software development companies. Many problems related to the model have been reported. Commonly accepted problems include, for example, coping with change and the fact that defects are all too often detected too late in the software development process. However, many of the problems mentioned in the literature are based on beliefs and experiences, and not on empirical evidence. To address this research gap, we compare the problems in the literature with the results of a case study at Ericsson AB in Sweden, investigating issues in the waterfall model. The case study aims at validating or contradicting the beliefs of what the problems are in waterfall development through empirical research.

  16. The waterfall model in large-scale development

    OpenAIRE

    Petersen, Kai; Wohlin, Claes; Baca, Dejan

    2009-01-01

    Waterfall development is still a widely used way of working in software development companies. Many problems related to the model have been reported. Commonly accepted problems include, for example, coping with change and the fact that defects are all too often detected too late in the software development process. However, many of the problems mentioned in the literature are based on beliefs and experiences, and not on empirical evidence. To address this research gap, we compare the problems in literature wit...

  17. The magnetic model of the large hadron collider

    CERN Document Server

    Auchmann, B; Buzio, M; Deniau, L; Fiscarelli, L; Giovannozzi, M; Hagen, P; Lamont, M; Montenero, G; Mueller, G; Pereira, M; Redaelli, S; Remondino, V; Schmidt, F; Steinhagen, R; Strzelczyk, M; Tomas Garcia, R; Todesco, E; Delsolaro, W Venturini; Walckiers, L; Wenninger, J; Wolf, R; Zimmermann, F

    2010-01-01

    The beam commissioning carried out in 2009 has proved that we have a pretty good understanding of the behaviour of the field-current relation in the LHC magnets and of its reproducibility. In this paper we summarize the main issues of beam commissioning as far as the magnetic model is concerned. An outline of what can be expected in 2010, when the LHC will be pushed to 3.5 TeV, is also given.

  18. Validity of scale modeling for large deformations in shipping containers

    International Nuclear Information System (INIS)

    Burian, R.J.; Black, W.E.; Lawrence, A.A.; Balmert, M.E.

    1979-01-01

    The principal overall objective of this phase of the continuing program for DOE/ECT is to evaluate the validity of applying scaling relationships to accurately assess the response of unprotected model shipping containers to severe impact conditions -- specifically free fall from heights up to 140 ft onto a hard surface in several orientations considered most likely to produce severe damage to the containers. The objective was achieved by studying the following with three sizes of model casks subjected to the various impact conditions: (1) impact rebound response of the containers; (2) structural damage and deformation modes; (3) effect on the containment; (4) changes in shielding effectiveness; (5) approximate free-fall threshold height for various orientations at which excessive damage occurs; (6) the impact orientation(s) that tend to produce the most severe damage; and (7) vulnerable aspects of the casks which should be examined. To meet the objective, the tests were intentionally designed to produce extreme structural damage to the cask models. In addition to the principal objective, this phase of the program had the secondary objectives of establishing a scientific data base for assessing the safety and environmental control provided by DOE nuclear shipping containers under impact conditions, and providing experimental data for verification and correlation with dynamic-structural-analysis computer codes being developed by the Los Alamos Scientific Laboratory for DOE/ECT

  19. Implementation of an Online Chemistry Model to a Large Eddy Simulation Model (PALM-4U)

    Science.gov (United States)

    Mauder, M.; Khan, B.; Forkel, R.; Banzhaf, S.; Russo, E. E.; Sühring, M.; Kanani-Sühring, F.; Raasch, S.; Ketelsen, K.

    2017-12-01

    Large Eddy Simulation (LES) models make it possible to resolve the relevant scales of turbulent motion, so that these models can capture the inherent unsteadiness of atmospheric turbulence. However, LES models have so far hardly been applied to urban air quality studies, in particular to the chemical transformation of pollutants. In this context, the BMBF (Bundesministerium für Bildung und Forschung) funded a joint project, MOSAIK (Modellbasierte Stadtplanung und Anwendung im Klimawandel / Model-based city planning and application in climate change), with the main goal of developing a new, highly efficient urban climate model (UCM) that also includes atmospheric chemical processes. The state-of-the-art LES model PALM (Maronga et al., 2015, Geosci. Model Dev., 8, doi:10.5194/gmd-8-2515-2015) has been used as the core model for the new UCM, named PALM-4U. For the gas-phase chemistry, a fully coupled 'online' chemistry model has been implemented into PALM. The latest version of the Kinetic PreProcessor (KPP), Version 2.3, has been utilized for the numerical integration of the chemical species. Due to the high computational demands of the LES model, compromises in the description of chemical processes are required. Therefore, a reduced chemistry mechanism has been implemented, which includes only major pollutants, namely O3, NO, NO2, CO, a highly simplified VOC chemistry and a small number of products. This work shows preliminary results of the advection and chemical transformation of atmospheric pollutants. Non-cyclic boundaries have been used for inflow and outflow in the east-west direction, while periodic boundary conditions have been applied at the south-north lateral boundaries. For practical applications, our approach is to go beyond the simulation of single street canyons to the chemical transformation, advection and deposition of air pollutants in the larger urban canopy. Tests of chemistry schemes and initial studies of chemistry-turbulence interaction, transport and transformations are presented.
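
A reduced mechanism of the kind described can be illustrated with the NO-NO2-O3 photostationary triad, integrated with a simple explicit Euler step. The rate coefficients are rough textbook magnitudes in ppb-seconds units, and the coupling to LES transport is omitted.

```python
def integrate_nox_o3(no, no2, o3, j_no2=8.0e-3, k=4.4e-4, dt=0.1, t_end=600.0):
    """Explicit-Euler integration (concentrations in ppb) of the pair
       NO2 + hv -> NO + O3   (rate j_no2 * [NO2])
       NO + O3  -> NO2       (rate k * [NO] * [O3])
    NOx (NO + NO2) is conserved by construction."""
    t = 0.0
    while t < t_end:
        r1 = j_no2 * no2
        r2 = k * no * o3
        no += dt * (r1 - r2)
        no2 += dt * (r2 - r1)
        o3 += dt * (r1 - r2)
        t += dt
    return no, no2, o3

# The system relaxes toward the photostationary state
# j_no2*[NO2] = k*[NO]*[O3].
no, no2, o3 = integrate_nox_o3(no=20.0, no2=20.0, o3=40.0)
```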

  20. A model-based approach to operational event groups ranking

    Energy Technology Data Exchange (ETDEWEB)

    Simic, Zdenko [European Commission Joint Research Centre, Petten (Netherlands). Inst. for Energy and Transport; Maqua, Michael [Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH (GRS), Koeln (Germany); Wattrelos, Didier [Institut de Radioprotection et de Surete Nucleaire (IRSN), Fontenay-aux-Roses (France)

    2014-04-15

    Operational experience (OE) feedback provides improvements in all industrial activities. Identification of the most important and valuable groups of events within the accumulated experience is important in order to focus detailed investigation of events. This paper describes a new ranking method and compares it with three others. The methods are described and applied to twenty years of OE events from nuclear power plants in France and Germany. The results show that the different ranking methods only roughly agree on which event groups are the most important ones. In the new ranking method, the analytic hierarchy process is applied in order to ensure consistent and comprehensive weight determination for the ranking indexes. The proposed method allows transparent and flexible ranking of event groups and identification of the most important OE for further, more detailed investigation in order to complete the feedback. (orig.)
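
The analytic hierarchy process step can be sketched as follows: the ranking-index weights are the principal eigenvector of a reciprocal pairwise-comparison matrix, obtainable by power iteration. The comparison values below are invented for illustration.

```python
def ahp_weights(pairwise, iters=100):
    """Principal-eigenvector weights of a reciprocal pairwise-comparison
    matrix (Saaty's AHP), via power iteration, normalized to sum to 1."""
    n = len(pairwise)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(pairwise[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w

# Invented comparisons for three hypothetical ranking indexes
# (e.g. safety relevance vs. frequency vs. severity):
m = [[1.0,     3.0, 5.0],
     [1.0 / 3, 1.0, 2.0],
     [1.0 / 5, 0.5, 1.0]]
weights = ahp_weights(m)  # sums to 1; largest weight for the first index
```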

  1. A mathematical model for stratification of LADA risk groups

    Directory of Open Access Journals (Sweden)

    Tatiana Mikhailovna Tikhonova

    2014-03-01

    Aim. To stratify risk groups via discriminant analysis based on the most clinically relevant indications of LADA onset derived from medical history. Materials and Methods. The present study included 141 patients with diabetes mellitus (DM), of whom 65 had a preliminary diagnosis of LADA, 40 patients were diagnosed with type 1 diabetes mellitus (T1DM) and 36 with type 2 diabetes mellitus (T2DM). Discriminant analysis was performed to evaluate the differences between the clinical onsets in the study groups. Results. Aside from a torpid onset with early evidence of insulin resistance, clinical characteristics of LADA included diagnosis during random examination, progressive loss of body mass, hyperglycemia greater than 14 mmol/L at diagnosis and, possibly, ketonuria without a history of acute ketoacidosis. Conclusion. Discriminant analysis is beneficial for stratifying risk groups for the development of LADA.
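
The discriminant analysis can be sketched with Fisher's two-class linear discriminant on two predictors; the toy (age at onset, BMI) values below are invented and are not the study's data.

```python
def fisher_lda(group_a, group_b):
    """Fisher's two-class linear discriminant for 2-D samples:
    w = Sw^-1 (mean_a - mean_b), with Sw the pooled within-class scatter.
    Returns the weight vector and the midpoint decision threshold."""
    def mean(g):
        n = len(g)
        return (sum(p[0] for p in g) / n, sum(p[1] for p in g) / n)

    ma, mb = mean(group_a), mean(group_b)
    s = [[0.0, 0.0], [0.0, 0.0]]
    for g, m in ((group_a, ma), (group_b, mb)):
        for p in g:
            d = (p[0] - m[0], p[1] - m[1])
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    dm = (ma[0] - mb[0], ma[1] - mb[1])
    w = ((s[1][1] * dm[0] - s[0][1] * dm[1]) / det,
         (-s[1][0] * dm[0] + s[0][0] * dm[1]) / det)
    # Threshold: midpoint of the projected class means.
    c = 0.5 * (w[0] * (ma[0] + mb[0]) + w[1] * (ma[1] + mb[1]))
    return w, c

# Invented toy data: (age at onset, BMI) for LADA-like vs T2DM-like groups.
lada = [(35, 23), (40, 24), (38, 22), (42, 25)]
t2dm = [(55, 31), (60, 33), (58, 30), (52, 32)]
w, c = fisher_lda(lada, t2dm)

def score(p):
    return w[0] * p[0] + w[1] * p[1]
# A new patient projects onto w; the side of c relative to the projected
# group means determines the predicted group.
```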

  2. Large animals as potential models of human mental and behavioral disorders.

    Science.gov (United States)

    Danek, Michał; Danek, Janusz; Araszkiewicz, Aleksander

    2017-12-30

    Many animal models in different species have been developed for mental and behavioral disorders. This review presents large animals (dog, ovine, swine, horse) as potential models of these disorders. The article is based on research published in peer-reviewed journals. A literature search was carried out using the PubMed database. The above issues were discussed in several problem groups in accordance with the WHO International Statistical Classification of Diseases and Related Health Problems, 10th Revision (ICD-10), in particular regarding: organic, including symptomatic, mental disorders (Alzheimer's disease and Huntington's disease, pernicious anemia and hepatic encephalopathy, epilepsy, Parkinson's disease, Creutzfeldt-Jakob disease); behavioral disorders due to psychoactive substance use (alcohol intoxication, abuse of morphine); schizophrenia and other schizotypal disorders (puerperal psychosis); mood (affective) disorders (depressive episode); neurotic, stress-related and somatoform disorders (post-traumatic stress disorder, obsessive-compulsive disorder); behavioral syndromes associated with physiological disturbances and physical factors (anxiety disorders, anorexia nervosa, narcolepsy); mental retardation (Cohen syndrome, Down syndrome, Hunter syndrome); and behavioral and emotional disorders (attention deficit hyperactivity disorder). These data indicate many large animal disorders that can serve as models for examining the above human mental and behavioral disorders.

  3. Modelling and transient stability of large wind farms

    DEFF Research Database (Denmark)

    Akhmatov, Vladislav; Knudsen, Hans; Nielsen, Arne Hejde

    2003-01-01

    by a physical model of grid-connected windmills. The windmill generators are conventional induction generators and the wind farm is ac-connected to the power system. Improvements of short-term voltage stability in case of failure events in the external power system are treated with use of conventional generator...... technology. This subject is treated as a parameter study with respect to the windmill electrical and mechanical parameters and with use of control strategies within the conventional generator technology. Stability improvements on the wind farm side of the connection point lead to significant reduction...

  4. Large meteorite impacts: The K/T model

    Science.gov (United States)

    Bohor, B. F.

    1992-01-01

    The Cretaceous/Tertiary (K/T) boundary event represents probably the largest meteorite impact known on Earth. It is the only impact event conclusively linked to a worldwide mass extinction, a reflection of its gigantic scale and global influence. Until recently, the impact crater was not definitively located and only the distal ejecta of this impact was available for study. However, detailed investigations of this ejecta's mineralogy, geochemistry, microstratigraphy, and textures have allowed its modes of ejection and dispersal to be modeled without benefit of a source crater of known size and location.

  5. Numerical Model for Solidification Zones Selection in the Large Ingots

    Directory of Open Access Journals (Sweden)

    Wołczyński W.

    2015-12-01

    A vertical cut at the mid-depth of a 15-ton forging steel ingot has been performed by courtesy of the CELSA - Huta Ostrowiec plant. Metallographic studies were able to reveal not only the chilled, undersized grains under the ingot surface but also columnar grains and large equiaxed grains. Additionally, the structural zone within which the competition between columnar and equiaxed structure formation occurs was also revealed by the metallographic study. Therefore, it seemed justified to reproduce some of the observed structural zones by means of numerical calculation of the temperature field. The formation of the chilled grains zone is the result of unconstrained rapid solidification and was not a subject of the simulation. Contrary to equiaxed structure formation, columnar or columnar branched structure formation occurs under a steep thermal gradient. Thus, the performed simulation is able to separate both discussed structural zones and indicate their localization along the ingot radius, as well as their appearance in terms of solidification time.

  6. Modeling a Large Data Acquisition Network in a Simulation Framework

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00337030; The ATLAS collaboration; Froening, Holger; Garcia, Pedro Javier; Vandelli, Wainer

    2015-01-01

    The ATLAS detector at CERN records particle collision "events" delivered by the Large Hadron Collider. Its data-acquisition system is a distributed software system that identifies, selects, and stores interesting events in near real-time, with an aggregate throughput of several tens of GB/s. It is executed on a farm of roughly 2000 commodity worker nodes communicating via TCP/IP on an Ethernet network. Event data fragments are received from the many detector readout channels and are buffered, collected together, analyzed and either stored permanently or discarded. This system, and data-acquisition systems in general, are sensitive to the latency of the data transfer from the readout buffers to the worker nodes. Challenges affecting this transfer include the many-to-one communication pattern and the inherently bursty nature of the traffic. In this paper we introduce the main performance issues brought about by this workload, focusing in particular on the so-called TCP incast pathol...

  7. Effects of uncertainty in model predictions of individual tree volume on large area volume estimates

    Science.gov (United States)

    Ronald E. McRoberts; James A. Westfall

    2014-01-01

    Forest inventory estimates of tree volume for large areas are typically calculated by adding model predictions of volumes for individual trees. However, the uncertainty in the model predictions is generally ignored with the result that the precision of the large area volume estimates is overestimated. The primary study objective was to estimate the effects of model...

  8. A Model for Establishing an Astronomy Education Discussion Group

    Science.gov (United States)

    Deming, Grace; Hayes-Gehrke, M.; Zauderer, B. A.; Bovill, M. S.; DeCesar, M.

    2010-01-01

    In October 2005, a group of astronomy faculty and graduate students met to establish departmental support for participants in the UM Center for Teaching Excellence University Teaching and Learning Program. This program seeks to increase graduate students’ understanding of effective teaching methods, awareness of student learning, and appreciation of education as a scholarly pursuit. Our group has facilitated the submission of successful graduate student educational development grant proposals to the Center for Teaching Excellence (CTE). Completion of the CTE program results in a notation on the graduate student's transcript. Our discussion group met monthly during the first two years. The Astronomy Education Review, The Physics Teacher, The Washington Post, The Chronicle of Higher Education, and National Research Council publications were used to provide background for discussion. Beginning in 2007, the group began sponsoring monthly astronomy education lunches during the academic year to which the entire department was invited. Over the past two years, speakers have included graduate students, faculty, and guests, such as Jay Labov from the National Research Council. Topics have included the Astronomy Diagnostic Test, intelligent design versus evolution, active learning techniques, introducing the use of lecture tutorials, using effective demonstrations, confronting student misconceptions, engagement through clickers (or cards), and fostering critical thinking with ranking tasks. The results of an informal evaluation will be presented.

  9. Group-level self-definition and self-investment: a hierarchical (multicomponent) model of in-group identification.

    Science.gov (United States)

    Leach, Colin Wayne; van Zomeren, Martijn; Zebel, Sven; Vliek, Michael L W; Pennekamp, Sjoerd F; Doosje, Bertjan; Ouwerkerk, Jaap W; Spears, Russell

    2008-07-01

    Recent research shows individuals' identification with in-groups to be psychologically important and socially consequential. However, there is little agreement about how identification should be conceptualized or measured. On the basis of previous work, the authors identified 5 specific components of in-group identification and offered a hierarchical 2-dimensional model within which these components are organized. Studies 1 and 2 used confirmatory factor analysis to validate the proposed model of self-definition (individual self-stereotyping, in-group homogeneity) and self-investment (solidarity, satisfaction, and centrality) dimensions, across 3 different group identities. Studies 3 and 4 demonstrated the construct validity of the 5 components by examining their (concurrent) correlations with established measures of in-group identification. Studies 5-7 demonstrated the predictive and discriminant validity of the 5 components by examining their (prospective) prediction of individuals' orientation to, and emotions about, real intergroup relations. Together, these studies illustrate the conceptual and empirical value of a hierarchical multicomponent model of in-group identification.

  10. Working Group 1: Software System Design and Implementation for Environmental Modeling

    Science.gov (United States)

    ISCMEM Working Group One presentation, with the purpose of fostering the exchange of information about environmental modeling tools, modeling frameworks, and environmental monitoring databases.

  11. A Renormalization Group Like Model for a Democratic Dictatorship

    Science.gov (United States)

    Galam, Serge

    2015-03-01

    We review a model of sociophysics which deals with democratic voting in bottom-up hierarchical systems. The connection to the original physical model and techniques is outlined, underlining both the similarities and the differences. Emphasis is put on the numerous novel and counterintuitive results obtained with respect to the associated social and political framework. Using this model, a real political event was successfully predicted: the victory of the French extreme right party in the 2000 first round of the French presidential elections. The perspectives and the challenges to make sociophysics a predictive, solid field of science are discussed.

  12. Working Group 2: A critical appraisal of model simulations

    International Nuclear Information System (INIS)

    MacCracken, M.; Cubasch, U.; Gates, W.L.; Harvey, L.D.; Hunt, B.; Katz, R.; Lorenz, E.; Manabe, S.; McAvaney, B.; McFarlane, N.; Meehl, G.; Meleshko, V.; Robock, A.; Stenchikov, G.; Stouffer, R.; Wang, W.C.; Washington, W.; Watts, R.; Zebiak, S.

    1990-01-01

    The complexity of the climate system and the absence of definitive analogs to the evolving climatic situation force the use of theoretical models to project the future climatic influence of the relatively rapid and ongoing increase in the atmospheric concentrations of CO2 and other trace gases. A wide variety of climate models has been developed to look at particular aspects of the problem and to vary the mix of complexity and resource requirements needed to study them; all such models have contributed insights into the problem

  13. Summary of the working group on modelling and simulation

    International Nuclear Information System (INIS)

    Schachinger, L.

    1991-11-01

    The discussions and presentations in the Simulations and Modelling subgroup of the Fifth ICFA Beam Dynamics Workshop, "The Effects of Errors in Accelerators", are summarized. The workshop was held on October 3-8, 1991 in Corpus Christi, Texas

  14. NST: Thermal Modeling for a Large Aperture Solar Telescope

    Science.gov (United States)

    Coulter, Roy

    2011-05-01

    Late in the 1990s the Dutch Open Telescope demonstrated that internal seeing in open, large aperture solar telescopes can be controlled by flushing air across the primary mirror and other telescope structures exposed to sunlight. In that system natural wind provides a uniform air temperature throughout the imaging volume, while efficiently sweeping heated air away from the optics and mechanical structure. Big Bear Solar Observatory's New Solar Telescope (NST) was designed to realize that same performance in an enclosed system by using both natural wind through the dome and forced air circulation around the primary mirror to provide the uniform air temperatures required within the telescope volume. The NST is housed in a conventional, ventilated dome with a circular opening, in place of the standard dome slit, that allows sunlight to fall only on an aperture stop and the primary mirror. The primary mirror is housed deep inside a cylindrical cell with only minimal openings in the side at the level of the mirror. To date, the forced air and cooling systems designed for the NST primary mirror have not been implemented, yet the telescope regularly produces solar images indicative of the absence of mirror seeing. Computational Fluid Dynamics (CFD) analysis of the NST primary mirror system along with measurements of air flows within the dome, around the telescope structure, and internal to the mirror cell are used to explain the origin of this seemingly incongruent result. The CFD analysis is also extended to hypothetical systems of various scales. We will discuss the results of these investigations.

  15. A Binaural Grouping Model for Predicting Speech Intelligibility in Multitalker Environments

    Directory of Open Access Journals (Sweden)

    Jing Mi

    2016-09-01

    Spatially separating speech maskers from target speech often leads to a large intelligibility improvement. Modeling this phenomenon has long been of interest to binaural-hearing researchers for uncovering brain mechanisms and for improving signal-processing algorithms in hearing-assistive devices. Much of the previous binaural modeling work focused on the unmasking enabled by binaural cues at the periphery, and little quantitative modeling has been directed toward the grouping or source-separation benefits of binaural processing. In this article, we propose a binaural model that focuses on grouping, specifically on the selection of time-frequency units that are dominated by signals from the direction of the target. The proposed model uses Equalization-Cancellation (EC) processing with a binary decision rule to estimate a time-frequency binary mask. EC processing is carried out to cancel the target signal and the energy change between the EC input and output is used as a feature that reflects target dominance in each time-frequency unit. The processing in the proposed model requires little computational resources and is straightforward to implement. In combination with the Coherence-based Speech Intelligibility Index, the model is applied to predict the speech intelligibility data measured by Marrone et al. The predicted speech reception threshold matches the pattern of the measured data well, even though the predicted intelligibility improvements relative to the colocated condition are larger than some of the measured data, which may reflect the lack of internal noise in this initial version of the model.

  16. A Binaural Grouping Model for Predicting Speech Intelligibility in Multitalker Environments.

    Science.gov (United States)

    Mi, Jing; Colburn, H Steven

    2016-10-03

    Spatially separating speech maskers from target speech often leads to a large intelligibility improvement. Modeling this phenomenon has long been of interest to binaural-hearing researchers for uncovering brain mechanisms and for improving signal-processing algorithms in hearing-assistive devices. Much of the previous binaural modeling work focused on the unmasking enabled by binaural cues at the periphery, and little quantitative modeling has been directed toward the grouping or source-separation benefits of binaural processing. In this article, we propose a binaural model that focuses on grouping, specifically on the selection of time-frequency units that are dominated by signals from the direction of the target. The proposed model uses Equalization-Cancellation (EC) processing with a binary decision rule to estimate a time-frequency binary mask. EC processing is carried out to cancel the target signal and the energy change between the EC input and output is used as a feature that reflects target dominance in each time-frequency unit. The processing in the proposed model requires little computational resources and is straightforward to implement. In combination with the Coherence-based Speech Intelligibility Index, the model is applied to predict the speech intelligibility data measured by Marrone et al. The predicted speech reception threshold matches the pattern of the measured data well, even though the predicted intelligibility improvements relative to the colocated condition are larger than some of the measured data, which may reflect the lack of internal noise in this initial version of the model. © The Author(s) 2016.
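The EC-based mask estimation described above can be caricatured in a few lines. The sketch below assumes the target is frontal (identical at the two ears), works on fixed time frames instead of a proper time-frequency decomposition, and uses hypothetical parameter values; it illustrates only the energy-change feature, not the published model:

```python
import numpy as np

def ec_binary_mask(left, right, frame=256, threshold_db=3.0):
    """Toy Equalization-Cancellation (EC) mask: a frontal target is identical
    at both ears, so L - R cancels it; a large energy drop after cancellation
    marks a target-dominated frame. Frame size and threshold are illustrative."""
    n = min(len(left), len(right)) // frame * frame
    L = left[:n].reshape(-1, frame)
    R = right[:n].reshape(-1, frame)
    e_in = np.sum((L + R) ** 2, axis=1) + 1e-12   # energy before cancellation
    e_out = np.sum((L - R) ** 2, axis=1) + 1e-12  # energy after cancellation
    drop_db = 10 * np.log10(e_in / e_out)
    return drop_db > threshold_db  # True = keep (target-dominated) unit
```

A real implementation would apply EC per frequency band after an auditory filterbank, include internal noise, and optimize the equalization delay/gain per band; the binary decision per time-frequency unit is the part shared with the published model.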

  17. A logistics model for large space power systems

    Science.gov (United States)

    Koelle, H. H.

    Space Power Systems (SPS) have to overcome two hurdles: (1) finding an attractive design, manufacturing, and assembly concept, and (2) having available a space transportation system that can provide economical logistic support during the construction and operational phases. An initial system feasibility study, some five years ago, was based on a reference system that used terrestrial resources only and relied partially on electric propulsion systems. The conclusion was: it is feasible but not yet economically competitive with other options. This study is based on terrestrial and extraterrestrial resources and on chemical (LH2/LOX) propulsion systems. These engines are available from the Space Shuttle production line and require only small changes. Other so-called advanced propulsion systems investigated did not prove economically superior if lunar LOX is available! We assume that a Shuttle-derived Heavy Lift Launch Vehicle (HLLV) will become available around the turn of the century and that this will be used to establish a research base on the lunar surface. This lunar base has the potential to grow into a lunar factory producing LOX and construction materials, supporting, among other projects, the construction of space power systems in geostationary orbit. A model was developed to simulate the logistics support of such an operation over a 50-year life cycle. After 50 years, 111 SPS units of 5 GW each with an availability of 90% will produce an effective 100 × 5 = 500 GW. The model comprises 60 equations and requires 29 assumptions for the parameters involved. The 60 state variables calculated with these equations are given on an annual basis and as averages for the 50-year life cycle. Recycling of defective parts in geostationary orbit is one of the features of the model. The state of the art with respect to SPS technology is introduced as a variable: Mg mass per MW of electric power delivered.
    If the space manufacturing facility, a maintenance and repair facility

  18. Risk assessment does not explain high prevalence of gestational diabetes mellitus in a large group of Sardinian women

    Directory of Open Access Journals (Sweden)

    Zedda Pierina

    2008-07-01

    Abstract. Background: A very high prevalence (22.3%) of gestational diabetes mellitus (GDM) was recently reported following our study on a large group of Sardinian women. In order to explain such a high prevalence, we sought to characterise our obstetric population through the analysis of risk factors and their association with the development of GDM. Methods: The prevalence of risk factors and their association with the development of GDM were evaluated in 1103 pregnancies (247 GDM and 856 control women). The association of risk factors with GDM was calculated by logistic regression. The sensitivity and specificity of the risk assessment strategy were also calculated. Results: None of the risk factors evaluated showed an elevated frequency in our population. There were 231 high-risk patients (20.9%). The factors with the strongest association with GDM development were obesity (OR 3.7, 95% CI 2.08-6.8), prior GDM (OR 3.1, 95% CI 1.69-5.69), and family history of Type 2 diabetes (OR 2.6, 95% CI 1.81-3.86). Only patients over 35 years of age were more represented in the GDM group (38.2% vs 22.6% in the non-GDM cases). Conclusion: Such a high prevalence of GDM in our population does not seem to be related to an abnormal presence of known risk factors, and appears in contrast with the prevalence of Type 2 diabetes in Sardinia. Further studies are needed to explain the cause of such a high prevalence of GDM in Sardinia. The "average risk" definition is not adequate to predict GDM in our population.
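For reference, odds ratios and 95% confidence intervals of the kind quoted above are typically computed from a 2x2 exposure table, with the CI obtained by Woolf's (log-OR) method. A minimal sketch with made-up counts (not the study's data):

```python
import math

def odds_ratio(exp_cases, exp_controls, unexp_cases, unexp_controls):
    """Odds ratio and Woolf 95% CI from a 2x2 table of counts."""
    or_ = (exp_cases * unexp_controls) / (exp_controls * unexp_cases)
    # Standard error of log(OR): sqrt of summed reciprocal cell counts.
    se = math.sqrt(1 / exp_cases + 1 / exp_controls
                   + 1 / unexp_cases + 1 / unexp_controls)
    lo = or_ * math.exp(-1.96 * se)
    hi = or_ * math.exp(1.96 * se)
    return or_, (lo, hi)
```

A CI whose lower bound exceeds 1, as for obesity and prior GDM in the study, indicates an association unlikely to be due to chance at the 5% level; the multivariable version is what the logistic regression in the Methods provides.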

  19. Models for physics of the very small and very large

    CERN Document Server

    Buckholtz, Thomas J

    2016-01-01

    This monograph tackles three challenges. First, show math that matches known elementary particles. Second, apply the math to match other known physics data. Third, predict future physics data. The math features solutions to isotropic pairs of isotropic quantum harmonic oscillators. This monograph matches some solutions to known elementary particles. Matched properties include spin and the types of interactions in which the particles partake. Other solutions point to possible elementary particles. This monograph applies the math and the extended particle list. Results narrow gaps between physics data and theory. Results pertain to elementary particles, astrophysics, and cosmology. For example, this monograph predicts properties for beyond-the-Standard-Model elementary particles, proposes descriptions of dark matter and dark energy, provides new relationships between known physics constants, includes theory that dovetails with the ratio of dark matter to ordinary matter, includes math that dovetails with the number of ...

  20. Explorations in combining cognitive models of individuals and system dynamics models of groups.

    Energy Technology Data Exchange (ETDEWEB)

    Backus, George A.

    2008-07-01

    This report documents a demonstration model of interacting insurgent leadership, military leadership, government leadership, and societal dynamics under a variety of interventions. The primary focus of the work is the portrayal of a token societal model that responds to leadership activities. The model also includes a linkage between leadership and society that implicitly represents the leadership subordinates as they directly interact with the population. The societal model is meant to demonstrate the efficacy and viability of using System Dynamics (SD) methods to simulate populations, and to show that these can then be connected to cognitive models depicting individuals. SD models typically focus on average behavior and thus have limited applicability for describing small groups or individuals. On the other hand, cognitive models readily describe individual behavior but can become cumbersome when used to describe populations. Realistic security situations are invariably a mix of individual and population dynamics. Therefore, the ability to tie SD models to cognitive models provides a critical capability that would otherwise be unavailable.

  1. Large order asymptotics and convergent perturbation theory for critical indices of the φ⁴ model in the 4 - ε expansion

    International Nuclear Information System (INIS)

    Honkonen, J.; Komarova, M.; Nalimov, M.

    2002-01-01

    Large order asymptotic behaviour of renormalization constants in the minimal subtraction scheme for the φ⁴ (4 - ε) theory is discussed. Well-known results of the asymptotic 4 - ε expansion of critical indices are shown to be far from the large order asymptotic value. A convergent series for the φ⁴ (4 - ε) model is then considered. The radius of convergence of the series for the Green functions and for the renormalization group functions is studied. The results of the convergent expansion of critical indices in the 4 - ε scheme are re-evaluated using the knowledge of large order asymptotics. Specific features of this procedure are discussed (Authors)
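The large-order behaviour discussed above is conventionally written in the Lipatov form. As a reminder (a standard textbook expression, not a result specific to this record), the k-th order coefficient of a renormalization group function typically grows as

```latex
A_k \;\sim\; c \, k! \, a^k \, k^b \left[ 1 + O(1/k) \right], \qquad k \to \infty,
```

where a is fixed by the instanton action and b and c by fluctuations around it. It is this factorial growth that makes the naive 4 - ε series asymptotic rather than convergent, motivating the convergent reformulation studied in the paper.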

  2. A model for group counseling with male pedophiles.

    Science.gov (United States)

    van Zessen, G

    1990-01-01

    Group treatment programs for pedophiles are often designed for populations of convicted men in closed institutions with limited application to other populations. Treatment is usually focused on reducing the "deviant" sexual arousal and/or acquiring heterosocial skills and eventually establishing the ability to engage in adult heterosexual relationships. A six-week, highly structured program is presented to five men in a non-residential setting. In addition to individual psychotherapy, group counseling is offered. Male pedophiles are trained to talk effectively about common problems surrounding man-boy relationships. Counseling is based on the notion that the emotional, erotic and sexual attraction to boys per se does not need to be legitimized or modified. The attraction, however, can be a source of psychological and social problems that can be handled by using a social support system. Social support for pedophile problems can be obtained from and in interaction with other pedophiles.

  3. The Spanish human papillomavirus vaccine consensus group: a working model.

    Science.gov (United States)

    Cortés-Bordoy, Javier; Martinón-Torres, Federico

    2010-08-01

    Successful implementation of the Human Papillomavirus (HPV) vaccine in each country can only be achieved from a complementary and synergistic perspective, integrating the different points of view of the diverse professionals involved. It is in this context that the Spanish HPV Vaccine Consensus Group (Grupo Español de Consenso sobre la Vacuna VPH, GEC-VPH) was created. The GEC-VPH philosophy, objectives, and experience are reported in this article, with particular attention to the management of negative publicity and anti-vaccine groups. Initiatives such as the GEC-VPH, adapted to each country's particular idiosyncrasies, might help to overcome the existing barriers and to achieve wide and early implementation of HPV vaccination.

  4. A mathematical model for stratification of LADA risk groups

    OpenAIRE

    Tat'yana Mikhaylovna Tikhonova

    2014-01-01

    Aim. To stratify risk groups via discriminant analysis based on the most clinically relevant indications of LADA onset derived from medical history. Materials and Methods. The present study included 141 patients with diabetes mellitus (DM), of whom 65 had a preliminary diagnosis of LADA, 40 patients were diagnosed with type 1 diabetes mellitus (T1DM) and 36 with type 2 diabetes mellitus (T2DM). Discriminant analysis was performed to evaluate the differences between the clinical onsets in study grou...

  5. A polarization independent electromagnetically induced transparency-like metamaterial with large group delay and delay-bandwidth product

    Science.gov (United States)

    Bagci, Fulya; Akaoglu, Baris

    2018-05-01

    In this study, a classical analogue of electromagnetically induced transparency (EIT) that is completely independent of the polarization direction of the incident waves is numerically and experimentally demonstrated. The unit cell of the employed planar symmetric metamaterial structure consists of one square ring resonator and four split ring resonators (SRRs). Two different designs are implemented in order to achieve a narrow-band and wide-band EIT-like response. In the unit cell design, a square ring resonator is shown to serve as a bright resonator, whereas the SRRs behave as a quasi-dark resonator, for the narrow-band (0.55 GHz full-width at half-maximum bandwidth around 5 GHz) and wide-band (1.35 GHz full-width at half-maximum bandwidth around 5.7 GHz) EIT-like metamaterials. The observed EIT-like transmission phenomenon is theoretically explained by a coupled-oscillator model. Within the transmission window, steep changes of the phase result in high group delays, and the delay-bandwidth products reach 0.45 for the wide-band EIT-like metamaterial. Furthermore, it has been demonstrated that the bandwidth and group delay of the EIT-like band can be controlled by changing the incidence angle of the electromagnetic waves. These features enable the proposed metamaterials to achieve potential applications in filtering, switching, data storing, and sensing.
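The coupled-oscillator description invoked above is commonly written as two coupled Lorentz oscillators (a standard EIT-analogue sketch; the symbols below are generic, not the authors' notation). The bright mode x_b (the square ring) is driven by the field E, while the quasi-dark mode x_d (the SRRs) is excited only through the coupling κ:

```latex
\ddot{x}_b + \gamma_b \dot{x}_b + \omega_0^2 x_b + \kappa^2 x_d = g E, \qquad
\ddot{x}_d + \gamma_d \dot{x}_d + (\omega_0 + \delta)^2 x_d + \kappa^2 x_b = 0.
```

When γ_d ≪ γ_b, destructive interference between the two excitation pathways suppresses absorption by the bright mode near ω_0; the steep phase dispersion across this narrow transparency window is what produces the large group delay reported for these structures.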

  6. The Achieving Success Everyday Group Counseling Model: Implications for Professional School Counselors

    Science.gov (United States)

    Steen, Sam; Henfield, Malik S.; Booker, Beverly

    2014-01-01

    This article presents the Achieving Success Everyday (ASE) group counseling model, which is designed to help school counselors integrate students' academic and personal-social development into their group work. We first describe this group model in detail and then offer one case example of a middle school counselor using the ASE model to conduct a…

  7. Fast three-dimensional core optimization based on modified one-group model

    Energy Technology Data Exchange (ETDEWEB)

    Freire, Fernando S. [ELETROBRAS Termonuclear S.A. - ELETRONUCLEAR, Rio de Janeiro, RJ (Brazil). Dept. GCN-T], e-mail: freire@eletronuclear.gov.br; Martinez, Aquilino S.; Silva, Fernando C. da [Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear], e-mail: aquilino@con.ufrj.br, e-mail: fernando@con.ufrj.br

    2009-07-01

    The optimization of a nuclear reactor core is an extremely complex process that consumes a large amount of computer time. Fortunately, the nuclear designer can rely on a variety of methodologies able to approximate the analysis of each available core loading pattern. Two-dimensional codes are usually used to analyze the loading scheme. However, when particular axial effects are present in the core, two-dimensional analysis cannot produce good results and three-dimensional analysis may be required. This paper presents the major advantages of using the modified one-group diffusion theory coupled with a buckling correction model in the optimization process. The results of the proposed model are very accurate when compared to benchmark results obtained from detailed calculations using three-dimensional nodal codes (author)

  8. Fast three-dimensional core optimization based on modified one-group model

    International Nuclear Information System (INIS)

    Freire, Fernando S.; Martinez, Aquilino S.; Silva, Fernando C. da

    2009-01-01

    The optimization of a nuclear reactor core is an extremely complex process that consumes a large amount of computer time. Fortunately, the nuclear designer can rely on a variety of methodologies able to approximate the analysis of each available core loading pattern. Two-dimensional codes are usually used to analyze the loading scheme. However, when particular axial effects are present in the core, two-dimensional analysis cannot produce good results and three-dimensional analysis may be required. This paper presents the major advantages of using the modified one-group diffusion theory coupled with a buckling correction model in the optimization process. The results of the proposed model are very accurate when compared to benchmark results obtained from detailed calculations using three-dimensional nodal codes (author)
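To illustrate what a one-group diffusion calculation involves (this is not the authors' modified model or their buckling correction; the cross sections and geometry are hypothetical), here is a minimal slab-geometry k-eigenvalue sketch. The converged k_eff can be checked against the analytic result νΣ_f / (Σ_a + D·B²) with fundamental buckling B = π/width:

```python
import numpy as np

def one_group_keff(D=1.0, sigma_a=0.07, nu_sigma_f=0.08, width=100.0, n=200):
    """Power iteration for the one-group slab eigenproblem
    -D phi'' + Sigma_a phi = (1/k) nuSigma_f phi, zero-flux boundaries."""
    h = width / (n + 1)
    # Tridiagonal finite-difference operator for diffusion + absorption.
    A = (np.diag(np.full(n, 2 * D / h**2 + sigma_a))
         + np.diag(np.full(n - 1, -D / h**2), 1)
         + np.diag(np.full(n - 1, -D / h**2), -1))
    phi, k = np.ones(n), 1.0
    for _ in range(200):
        fission = nu_sigma_f * phi
        phi = np.linalg.solve(A, fission / k)   # flux from scaled fission source
        k *= (nu_sigma_f * phi).sum() / fission.sum()  # update multiplication
        phi /= np.linalg.norm(phi)
    return k
```

For a fine mesh the result agrees with the analytic buckling expression to a few parts in a thousand, which is the kind of consistency check a buckling-corrected one-group model relies on.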

  9. Do not Lose Your Students in Large Lectures: A Five-Step Paper-Based Model to Foster Students’ Participation

    Directory of Open Access Journals (Sweden)

    Mona Hassan Aburahma

    2015-07-01

    Like most pharmacy colleges in developing countries with high population growth, public pharmacy colleges in Egypt are experiencing a significant annual increase in student enrollment due to the large youth population, accompanied by the keenness of students to join pharmacy colleges as a step toward a better future career. In this context, large lectures represent a popular approach for teaching the students, as economic and logistic constraints prevent splitting them into smaller groups. Nevertheless, the impact of large lectures on student learning has been widely questioned due to their educational limitations, which are related to the passive role the students maintain in lectures. Despite the reported weaknesses of large lectures and lecturing in general, large lectures will likely continue to be taught in the same format in these countries. Accordingly, to soften the negative impacts of large lectures, this article describes a simple and feasible 5-step paper-based model to transform lectures from a passive information delivery space into an active learning environment. This model mainly suits educational establishments with financial constraints; nevertheless, it can be applied to lectures presented in any educational environment to improve active participation of students. The components and expected advantages of employing the 5-step paper-based model in large lectures, as well as its limitations and ways to overcome them, are presented briefly. The impact of applying this model on students' engagement and learning is currently being investigated.

  10. Cox's regression model for dynamics of grouped unemployment data

    Czech Academy of Sciences Publication Activity Database

    Volf, Petr

    2003-01-01

    Roč. 10, č. 19 (2003), s. 151-162 ISSN 1212-074X R&D Projects: GA ČR GA402/01/0539 Institutional research plan: CEZ:AV0Z1075907 Keywords : mathematical statistics * survival analysis * Cox's model Subject RIV: BB - Applied Statistics, Operational Research
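Cox's model links covariates to the hazard via h(t|x) = h₀(t)·exp(βx). As a purely illustrative sketch (synthetic data, a single covariate, no ties, and a crude grid search instead of Newton's method; nothing here is from the paper), the partial likelihood can be written and maximized as follows:

```python
import numpy as np

def cox_partial_loglik(beta, times, events, x):
    """Partial log-likelihood for a single covariate, no tied event times."""
    order = np.argsort(times)
    t, d, xv = times[order], events[order], x[order]
    ll = 0.0
    for i in range(len(t)):
        if d[i]:
            # Risk set: all subjects still under observation at time t[i].
            ll += beta * xv[i] - np.log(np.exp(beta * xv[i:]).sum())
    return ll

def fit_beta(times, events, x, grid=np.linspace(-3, 3, 601)):
    """Grid-search maximizer, sufficient for a toy illustration."""
    lls = [cox_partial_loglik(b, times, events, x) for b in grid]
    return grid[int(np.argmax(lls))]
```

With data in which the exposed group (x = 1) fails earlier, the fitted β is positive, i.e. exposure raises the hazard; real analyses of grouped unemployment durations would also need ties handling and time-varying covariates.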

  11. A Group-based Authorization Model for Cooperative Systems

    NARCIS (Netherlands)

    Sikkel, Nicolaas; Hughes, John A.

    Requirements for access control in CSCW systems have often been stated, but groupware in use today does not meet most of these requirements. There are practical reasons for this, but one of the problems is the inherent complexity of sophisticated access control models. We propose a general

  12. Household time allocation model based on a group utility function

    NARCIS (Netherlands)

    Zhang, J.; Borgers, A.W.J.; Timmermans, H.J.P.

    2002-01-01

    Existing activity-based models typically assume an individual decision-making process. In household decision-making, however, interaction exists among household members and their activities during the allocation of the members' limited time. This paper, therefore, attempts to develop a new household

  13. Coset models and D-branes in group manifolds

    International Nuclear Information System (INIS)

    Orlando, Domenico

    2006-01-01

    We conjecture the existence of a duality between heterotic closed strings on homogeneous spaces and symmetry-preserving D-branes on group manifolds, based on the observation that the low-energy field descriptions of the two theories coincide. For the closed string side we also give an explicit proof of a no-renormalization theorem as a consequence of a hidden symmetry, and infer that the same property should hold true for the higher order terms of the DBI action

  14. International workshop of the Confinement Database and Modelling Expert Group in collaboration with the Edge and Pedestal Physics Expert Group

    International Nuclear Information System (INIS)

    Cordey, J.; Kardaun, O.

    2001-01-01

    A Workshop of the Confinement Database and Modelling Expert Group (EG) was held on 2-6 April at the Plasma Physics Research Center of Lausanne (CRPP), Switzerland. Presentations were given on the present status of the plasma pedestal (temperature and energy) scalings from empirical and theoretical perspectives. An integrated approach to modelling tokamaks, incorporating core transport, edge pedestal, and SOL, together with a model for ELMs, was presented by the JCT. New experimental data on global H-mode confinement were discussed, and presentations on L-H threshold power were made

  15. Black women, work, stress, and perceived discrimination: the focused support group model as an intervention for stress reduction.

    Science.gov (United States)

    Mays, V M

    1995-01-01

    This exploratory study examined the use of two components (small and large groups) of a community-based intervention, the Focused Support Group (FSG) model, to alleviate employment-related stressors in Black women. Participants were assigned to small groups based on occupational status. Groups met for five weekly 3-hr sessions in didactic or small- and large-group formats. Two evaluations following the didactic session and the small and large group sessions elicited information on satisfaction with each of the formats, self-reported change in stress, awareness of interpersonal and sociopolitical issues affecting Black women in the labor force, assessing support networks, and usefulness of specific discussion topics to stress reduction. Results indicated the usefulness of the small- and large-group formats in reduction of self-reported stress and increases in personal and professional sources of support. Discussions on race and sex discrimination in the workplace were effective in overall stress reduction. The study highlights labor force participation as a potential source of stress for Black women, and supports the development of culture- and gender-appropriate community interventions as viable and cost-effective methods for stress reduction.

  16. Kinks, chains, and loop groups in the CP^n sigma models

    International Nuclear Information System (INIS)

    Harland, Derek

    2009-01-01

    We consider topological solitons in the CP^n sigma models in two space dimensions. In particular, we study 'kinks', which are independent of one coordinate up to a rotation of the target space, and 'chains', which are periodic in one coordinate up to a rotation of the target space. Kinks and chains both exhibit constituents, similar to monopoles and calorons in SU(n) Yang-Mills-Higgs and Yang-Mills theories. We examine the constituent structure using Lie algebras.

  17. Integrating an agent-based model into a large-scale hydrological model for evaluating drought management in California

    Science.gov (United States)

    Sheffield, J.; He, X.; Wada, Y.; Burek, P.; Kahil, M.; Wood, E. F.; Oppenheimer, M.

    2017-12-01

    California has endured record-breaking drought since winter 2011 and will likely experience more severe and persistent drought in the coming decades under a changing climate. At the same time, human water management practices can also affect drought frequency and intensity, which underscores the importance of human behaviour in effective drought adaptation and mitigation. Currently, although a few large-scale hydrological and water resources models (e.g., PCR-GLOBWB) consider human water use and management practices (e.g., irrigation, reservoir operation, groundwater pumping), none of them includes the dynamic feedback between local human behaviors/decisions and the natural hydrological system. It is, therefore, vital to integrate social and behavioral dimensions into current hydrological modeling frameworks. This study applies the agent-based modeling (ABM) approach and couples it with a large-scale hydrological model (i.e., the Community Water Model, CWatM) in order to have a balanced representation of social, environmental and economic factors and a more realistic representation of the bi-directional interactions and feedbacks in coupled human and natural systems. In this study, we focus on drought management in California and consider two types of agents, (groups of) farmers and state management authorities, and assume that their corresponding objectives are to maximize the net crop profit and to maintain sufficient water supply, respectively. Farmers' behaviors are linked with local agricultural practices such as cropping patterns and deficit irrigation. More precisely, farmers' decisions are incorporated into CWatM across different time scales in terms of daily irrigation amount, seasonal/annual decisions on crop types and irrigated area, as well as the long-term investment in irrigation infrastructure. 
This simulation-based optimization framework is further applied by performing different sets of scenarios to investigate and evaluate the effectiveness

  18. Large-n limit of the Heisenberg model: The decorated lattice and the disordered chain

    International Nuclear Information System (INIS)

    Khoruzhenko, B.A.; Pastur, L.A.; Shcherbina, M.V.

    1989-01-01

    The critical temperature of the generalized spherical model (large-component limit of the classical Heisenberg model) on a cubic lattice, whose every bond is decorated by L spins, is found. When L → ∞, the asymptotics of the critical temperature is T_c ∼ a·L^(-1). The reduction of the number of spherical constraints for the model is found to be fairly large. The free energy of the one-dimensional generalized spherical model with random nearest-neighbor interaction is calculated

  19. A collision avoidance model for two-pedestrian groups: Considering random avoidance patterns

    Science.gov (United States)

    Zhou, Zhuping; Cai, Yifei; Ke, Ruimin; Yang, Jiwei

    2017-06-01

    Grouping is a common phenomenon in pedestrian crowds, and group modeling remains an open and challenging problem. When grouping pedestrians avoid each other, different patterns can be observed: pedestrians can stay close to group members and avoid other groups as a cluster, or they can avoid other groups separately. Considering this randomness in avoidance patterns, we propose a collision avoidance model for two-pedestrian groups. In our model, the avoidance component is first formulated with the velocity obstacle method. The grouping component is then established using a distance-constrained line (DCL); by transforming the DCL into the velocity obstacle framework, the avoidance and grouping models are put into one unified calculation structure. Within this structure, an algorithm is developed to resolve conflicts between the solutions of the two models. Two groups of bidirectional pedestrian experiments are designed to verify the model. The accuracy of avoidance and grouping behavior is validated at the microscopic level, while the lane formation phenomenon and fundamental diagrams are validated at the macroscopic level. The experimental results show that our model is convincing and extends readily to three or more pedestrian groups.
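    The velocity obstacle method mentioned in this abstract has a simple geometric core: an agent's relative velocity causes a collision if it points into the cone subtended by the other agent's combined radius. A minimal sketch of that test (a generic VO check for illustration; function name, parameters and values are our own, not the paper's exact model):

```python
import math

def in_velocity_obstacle(p_a, v_a, p_b, v_b, r):
    """Velocity-obstacle test: does A's velocity, taken relative to B,
    point into the collision cone of combined radius r around B?"""
    rx, ry = p_b[0] - p_a[0], p_b[1] - p_a[1]   # line of centres, A -> B
    vx, vy = v_a[0] - v_b[0], v_a[1] - v_b[1]   # velocity of A relative to B
    dist = math.hypot(rx, ry)
    if dist <= r:                               # already within combined radius
        return True
    speed = math.hypot(vx, vy)
    if speed == 0.0:                            # no relative motion: no collision
        return False
    # angle between relative velocity and the line of centres
    cos_angle = (rx * vx + ry * vy) / (dist * speed)
    angle = math.acos(max(-1.0, min(1.0, cos_angle)))
    half_cone = math.asin(r / dist)             # half-angle of the collision cone
    return angle <= half_cone
```

    For example, an agent at (0, 0) moving at (1, 0) toward a standing pedestrian at (5, 0) with combined radius 0.5 lies inside the cone, while the same agent moving at (0, 1) does not.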

  20. Student perceptions of gamified audience response system interactions in large group lectures and via lecture capture technology.

    Science.gov (United States)

    Pettit, Robin K; McCoy, Lise; Kinney, Marjorie; Schwartz, Frederic N

    2015-05-22

    Higher education students have positive attitudes about the use of audience response systems (ARS), but even technology-enhanced lessons can become tiresome if the pedagogical approach is exactly the same with each implementation. Gamification is the notion that gaming mechanics can be applied to routine activities. In this study, TurningPoint (TP) ARS interactions were gamified and implemented in 22 large group medical microbiology lectures throughout an integrated year 1 osteopathic medical school curriculum. A 32-item questionnaire was used to measure students' perceptions of the gamified TP interactions at the end of their first year. The survey instrument generated both Likert-scale and open-ended response data that addressed game design and variety, engagement and learning features, use of TP questions after class, and any value of lecture capture technology for reviewing these interactive presentations. The chi-square test was used to analyze grouped responses to Likert-scale questions. Responses to open-ended prompts were categorized using open coding. Ninety-one students out of 106 (86%) responded to the survey. A significant majority of the respondents agreed or strongly agreed that the games were engaging and an effective learning tool. The questionnaire investigated the degree to which specific features of these interactions were engaging (nine items) and promoted learning (seven items). The most highly ranked engagement aspects were peer competition and focus on the activity (tied for highest ranking), and the most highly ranked learning aspect was applying theoretical knowledge to clinical scenarios. Another notable item was the variety of interactions, which ranked in the top three in both the engagement and learning categories. Open-ended comments shed light on how students use TP questions for exam preparation, and revealed engaging and non-engaging attributes of these interactive sessions for students who review them via lecture capture.

  1. Leader-based and self-organized communication: modelling group-mass recruitment in ants.

    Science.gov (United States)

    Collignon, Bertrand; Deneubourg, Jean Louis; Detrain, Claire

    2012-11-21

    For collective decisions to be made, the information acquired by experienced individuals about the location of resources has to be shared with naïve individuals through recruitment. Here, we investigate the properties of collective responses arising from leader-based recruitment and from self-organized communication by chemical trails. We develop a generalized model based on biological data drawn from the ant species Tetramorium caespitum, whose collective foraging relies on the coupling of group leading and trail recruitment. We show that for leader-based recruitment, small groups of recruits have to be guided very efficiently to allow collective exploitation of food, while large groups require less attention from their leader. In the case of self-organized recruitment through a chemical trail, a critical amount of trail has to be laid per forager in order to launch collective food exploitation. Thereafter, ants can maintain collective foraging by emitting signal intensity below this threshold. Finally, we demonstrate how the coupling of both recruitment mechanisms may benefit collectively foraging species. These theoretical results are then compared with experimental data from T. caespitum colonies performing group-mass recruitment towards a single food source. We evidence the key role of leaders as initiators and catalysts of recruitment before this leader-based process is overtaken by self-organized communication through trails. This model brings new insights as well as a theoretical background to empirical studies of cooperative foraging in group-living species. Copyright © 2012 Elsevier Ltd. All rights reserved.
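    The trail-laying threshold described in this abstract can be illustrated with a toy steady-state balance (our own sketch under simplified assumptions, not the paper's generalized model): trail pheromone decays at rate rho and is reinforced at rate q per forager by the x(c) = N·c/(c+K) foragers responding to trail amount c. Setting rho·c = q·x(c) gives a nonzero steady state only above a critical laying rate.

```python
def steady_trail(q, n=50, rho=0.2, k=10.0):
    """Nonzero steady state of rho*c = q * n * c / (c + k):
    c* = q*n/rho - k, positive only above the critical rate q_c = rho*k/n.
    Toy illustration of a trail-recruitment threshold; all values hypothetical."""
    c_star = q * n / rho - k
    return max(c_star, 0.0)

q_crit = 0.2 * 10.0 / 50           # rho*k/n = 0.04 units of trail per forager
print(steady_trail(0.03))          # below threshold: trail collapses, no collective foraging
print(steady_trail(0.08))          # above threshold: a sustained trail persists
```

    Below q_crit the only steady state is zero trail, matching the abstract's observation that collective exploitation is launched only once enough trail is laid per forager.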

  2. Induction of continuous expanding infrarenal aortic aneurysms in a large porcine animal model

    DEFF Research Database (Denmark)

    Kloster, Brian Ozeraitis; Lund, Lars; Lindholt, Jes S.

    2015-01-01

    Background: A large animal model with a continuously expanding infrarenal aortic aneurysm gives access to a more realistic AAA model with anatomy and physiology similar to humans, and thus allows for new experimental research in the natural history and treatment options of the disease. Methods: 10 pigs ..., hereafter the pigs were euthanized for inspection and AAA wall sampling for histological analysis. Results: In group A, all pigs developed continuously expanding AAAs with a mean increase in AP-diameter to 16.26 ± 0.93 mm, equivalent to a 57% increase. In group B the AP-diameters increased to 11.33 ± 0.13 mm ... The most frequent complication was a neurological deficit in the lower limbs. Conclusion: In pigs it is possible to induce continuously expanding AAAs based upon proteolytic degradation and pathological flow, resembling the real-life dynamics of human aneurysms. Because the lumbars are preserved, it is also a potential ...

  3. On the renormalization group flow in two dimensional superconformal models

    International Nuclear Information System (INIS)

    Ahn, Changrim; Stanishkov, Marian

    2014-01-01

    We extend the results on the RG flow in the next-to-leading order to the case of the supersymmetric minimal models SM_p for p≫1. We explain how to compute the conformal blocks of NS and Ramond fields in the leading order in 1/p and follow the renormalization scheme proposed in [1]. As a result we obtain the anomalous dimensions of certain NS and Ramond fields. It turns out that the linear combination expressing the infrared limit of these fields in terms of the IR theory SM_(p−2) is exactly the same as in the nonsupersymmetric minimal theory.

  4. Group Peer Mentoring: An Answer to the Faculty Mentoring Problem? A Successful Program at a Large Academic Department of Medicine.

    Science.gov (United States)

    Pololi, Linda H; Evans, Arthur T

    2015-01-01

    To address a dearth of mentoring and to avoid the pitfalls of dyadic mentoring, the authors implemented and evaluated a novel collaborative group peer mentoring program in a large academic department of medicine. The mentoring program aimed to facilitate faculty in their career planning, and targeted either early-career or midcareer faculty in 5 cohorts over 4 years, from 2010 to 2014. Each cohort of 9-12 faculty participated in a yearlong program with foundations in adult learning, relationship formation, mindfulness, and culture change. Participants convened for an entire day, once a month. Sessions incorporated facilitated stepwise and values-based career planning, skill development, and reflective practice. Early-career faculty participated in an integrated writing program and midcareer faculty in leadership development. Overall attendance of the 51 participants was 96%, and only 3 of 51 faculty who completed the program left the medical school during the 4 years. All faculty completed a written detailed structured academic development plan. Participants experienced an enhanced, inclusive, and appreciative culture; clarified their own career goals, values, strengths and priorities; enhanced their enthusiasm for collaboration; and developed skills. The program results highlight the need for faculty to personally experience the power of forming deep relationships with their peers for fostering successful career development and vitality. The outcomes of faculty humanity, vitality, professionalism, relationships, appreciation of diversity, and creativity are essential to the multiple missions of academic medicine. © 2015 The Alliance for Continuing Education in the Health Professions, the Society for Academic Continuing Medical Education, and the Council on Continuing Medical Education, Association for Hospital Medical Education.

  5. Left ventricular function during acute high-altitude exposure in a large group of healthy young Chinese men.

    Directory of Open Access Journals (Sweden)

    Mingyue Rao

    The purpose of this study was to observe left ventricular function during acute high-altitude exposure in a large group of healthy young males. A prospective trial was conducted in Szechwan and Tibet from June to August 2012. By Doppler echocardiography, left ventricular function was examined in 139 healthy young Chinese men at sea level; within 24 hours after arrival in Lhasa, Tibet, at 3700 m; and on day 7 following an ascent to Yangbajing at 4400 m after 7 days of acclimatization at 3700 m. The resting oxygen saturation (SaO2), heart rate (HR) and blood pressure (BP) were also measured at these three time points. Within 24 hours of arrival at 3700 m, the HR, ejection fraction (EF), fractional shortening (FS), stroke volume (SV), cardiac output (CO), and left ventricular (LV) Tei index were significantly increased, but the LV end-systolic dimension (ESD), end-systolic volume (ESV), SaO2, E/A ratio, and ejection time (ET) were significantly decreased compared to the baseline levels in all subjects. On day 7 at 4400 m, the SV and CO were significantly decreased; the EF, FS, and Tei index were not decreased compared with the values at 3700 m; the HR was further elevated; and the SaO2, ESV, ESD, and ET were further reduced. Additionally, the E/A ratio was significantly increased on day 7 but was still lower than at low altitude. Upon acute high-altitude exposure, left ventricular systolic function was elevated with increased stroke volume, but diastolic function was decreased in healthy young males. With higher altitude exposure and prolonged acclimatization, left ventricular systolic function was preserved with reduced stroke volume and improved diastolic function.

  6. Association of Stressful Life Events with Psychological Problems: A Large-Scale Community-Based Study Using Grouped Outcomes Latent Factor Regression with Latent Predictors

    Directory of Open Access Journals (Sweden)

    Akbar Hassanzadeh

    2017-01-01

    Objective. The current study is aimed at investigating the association between stressful life events and psychological problems in a large sample of Iranian adults. Method. In a cross-sectional large-scale community-based study, 4763 Iranian adults, living in Isfahan, Iran, were investigated. Grouped outcomes latent factor regression on latent predictors was used for modeling the association of psychological problems (depression, anxiety, and psychological distress), measured by the Hospital Anxiety and Depression Scale (HADS) and General Health Questionnaire (GHQ-12), as the grouped outcomes, and stressful life events, measured by a self-administered stressful life events (SLEs) questionnaire, as the latent predictors. Results. The results showed that the personal stressors domain has a significant positive association with psychological distress (β=0.19), anxiety (β=0.25), depression (β=0.15), and their collective profile score (β=0.20), with greater associations in females (β=0.28) than in males (β=0.13) (all P<0.001). In addition, in the adjusted models, the regression coefficients for the association of the social stressors domain and the psychological problems profile score were 0.37, 0.35, and 0.46 in the total sample, males, and females, respectively (P<0.001). Conclusion. Results of our study indicated that different stressors, particularly socioeconomic-related ones, have an effective impact on psychological problems. It is important to consider the social and cultural background of a population for managing the stressors as an effective approach for preventing and reducing the destructive burden of psychological problems.

  7. Association of Stressful Life Events with Psychological Problems: A Large-Scale Community-Based Study Using Grouped Outcomes Latent Factor Regression with Latent Predictors

    Science.gov (United States)

    Hassanzadeh, Akbar; Heidari, Zahra; Hassanzadeh Keshteli, Ammar; Afshar, Hamid

    2017-01-01

    Objective The current study is aimed at investigating the association between stressful life events and psychological problems in a large sample of Iranian adults. Method In a cross-sectional large-scale community-based study, 4763 Iranian adults, living in Isfahan, Iran, were investigated. Grouped outcomes latent factor regression on latent predictors was used for modeling the association of psychological problems (depression, anxiety, and psychological distress), measured by Hospital Anxiety and Depression Scale (HADS) and General Health Questionnaire (GHQ-12), as the grouped outcomes, and stressful life events, measured by a self-administered stressful life events (SLEs) questionnaire, as the latent predictors. Results The results showed that the personal stressors domain has significant positive association with psychological distress (β = 0.19), anxiety (β = 0.25), depression (β = 0.15), and their collective profile score (β = 0.20), with greater associations in females (β = 0.28) than in males (β = 0.13) (all P < 0.001). In addition, in the adjusted models, the regression coefficients for the association of social stressors domain and psychological problems profile score were 0.37, 0.35, and 0.46 in total sample, males, and females, respectively (P < 0.001). Conclusion Results of our study indicated that different stressors, particularly those socioeconomic related, have an effective impact on psychological problems. It is important to consider the social and cultural background of a population for managing the stressors as an effective approach for preventing and reducing the destructive burden of psychological problems. PMID:29312459

  8. Large-eddy simulation of the temporal mixing layer using the Clark model

    NARCIS (Netherlands)

    Vreman, A.W.; Geurts, B.J.; Kuerten, J.G.M.

    1996-01-01

    The Clark model for the turbulent stress tensor in large-eddy simulation is investigated from a theoretical and computational point of view. In order to be applicable to compressible turbulent flows, the Clark model has been reformulated. Actual large-eddy simulation of a weakly compressible,
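    As general background (standard LES literature, not quoted from this abstract), the Clark model is a gradient model: it approximates the subgrid-scale stress tensor from the gradients of the filtered velocity field ū, with Δ the filter width,

```latex
\tau_{ij} \;\approx\; \frac{\Delta^{2}}{12}\,
\frac{\partial \bar{u}_{i}}{\partial x_{k}}\,
\frac{\partial \bar{u}_{j}}{\partial x_{k}} .
```

    For compressible flow, a Favre- (density-weighted) filtered analogue of the filtered variables is typically used, which is the kind of reformulation the abstract refers to.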

  9. The Cauchy problem for a model of immiscible gas flow with large data

    Energy Technology Data Exchange (ETDEWEB)

    Sande, Hilde

    2008-12-15

    The thesis consists of an introduction and two papers: 1. The solution of the Cauchy problem with large data for a model of a mixture of gases. 2. Front tracking for a model of immiscible gas flow with large data. (AG) refs, figs

  10. Large-N limit of the two-Hermitian-matrix model by the hidden BRST method

    International Nuclear Information System (INIS)

    Alfaro, J.

    1993-01-01

    This paper discusses the large-N limit of the two-Hermitian-matrix model in zero dimensions, using the hidden Becchi-Rouet-Stora-Tyutin method. A system of integral equations previously found is solved, showing that it contained the exact solution of the model in leading order of large N

  11. Validation of CALMET/CALPUFF models simulations around a large power plant stack

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez-Garces, A.; Souto, J. A.; Rodriguez, A.; Saavedra, S.; Casares, J. J.

    2015-07-01

    CALMET/CALPUFF modeling system is frequently used in the study of atmospheric processes and pollution, and several validation tests have been performed to date; nevertheless, most of them were based on experiments with a large compilation of surface and aloft meteorological measurements, rarely available. At the same time, the use of a large operational smokestack as tracer/pollutant source is not usual. In this work, the CALMET meteorological diagnostic model is first nested to WRF meteorological prognostic model simulations (3x3 km2 horizontal resolution) over a complex-terrain and coastal domain at NW Spain, covering 100x100 km2, with a coal-fired power plant emitting SO2. Simulations were performed during three different periods when SO2 hourly ground-level concentration (glc) peaks were observed. NCEP reanalyses were applied as initial and boundary conditions. The Yonsei University-Pleim-Chang (YSU) PBL scheme was selected in the WRF model to provide the best input to three different CALMET horizontal resolutions, 1x1 km2, 0.5x0.5 km2, and 0.2x0.2 km2. The best results, very similar to each other, were achieved using the last two resolutions; therefore, the 0.5x0.5 km2 resolution was selected to test different CALMET meteorological inputs, using several combinations of WRF outputs and/or surface and upper-air measurements available in the simulation domain. With respect to the models' aloft output, CALMET PBL depth estimations are very similar to PBL depth estimations using upper-air measurements (rawinsondes), and significantly better than WRF PBL depth results. Regarding the models' surface output, the available meteorological sites were divided into two groups, one to provide meteorological input to CALMET (when applied), and the other for model validation. Comparing WRF and CALMET outputs against surface measurements (from the sites for model validation), the lowest RMSE was achieved using as CALMET input dataset WRF output combined with

  12. Validation of CALMET/CALPUFF models simulations around a large power plant stack

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez-Garces, A.; Souto Rodriguez, J.A.; Saavedra, S.; Casares, J.J.

    2015-07-01

    CALMET/CALPUFF modeling system is frequently used in the study of atmospheric processes and pollution, and several validation tests have been performed to date; nevertheless, most of them were based on experiments with a large compilation of surface and aloft meteorological measurements, rarely available. At the same time, the use of a large operational smokestack as tracer/pollutant source is not usual. In this work, the CALMET meteorological diagnostic model is first nested to WRF meteorological prognostic model simulations (3x3 km2 horizontal resolution) over a complex-terrain and coastal domain at NW Spain, covering 100x100 km2, with a coal-fired power plant emitting SO2. Simulations were performed during three different periods when SO2 hourly ground-level concentration (glc) peaks were observed. NCEP reanalyses were applied as initial and boundary conditions. The Yonsei University-Pleim-Chang (YSU) PBL scheme was selected in the WRF model to provide the best input to three different CALMET horizontal resolutions, 1x1 km2, 0.5x0.5 km2, and 0.2x0.2 km2. The best results, very similar to each other, were achieved using the last two resolutions; therefore, the 0.5x0.5 km2 resolution was selected to test different CALMET meteorological inputs, using several combinations of WRF outputs and/or surface and upper-air measurements available in the simulation domain. With respect to the models' aloft output, CALMET PBL depth estimations are very similar to PBL depth estimations using upper-air measurements (rawinsondes), and significantly better than WRF PBL depth results. Regarding the models' surface output, the available meteorological sites were divided into two groups, one to provide meteorological input to CALMET (when applied), and the other for model validation. Comparing WRF and CALMET outputs against surface measurements (from the sites for model validation), the lowest RMSE was achieved using as CALMET input dataset WRF output combined with surface measurements (from sites for CALMET model

  13. Validation of CALMET/CALPUFF models simulations around a large power plant stack

    International Nuclear Information System (INIS)

    Hernandez-Garces, A.; Souto, J. A.; Rodriguez, A.; Saavedra, S.; Casares, J. J.

    2015-01-01

    CALMET/CALPUFF modeling system is frequently used in the study of atmospheric processes and pollution, and several validation tests have been performed to date; nevertheless, most of them were based on experiments with a large compilation of surface and aloft meteorological measurements, rarely available. At the same time, the use of a large operational smokestack as tracer/pollutant source is not usual. In this work, the CALMET meteorological diagnostic model is first nested to WRF meteorological prognostic model simulations (3x3 km2 horizontal resolution) over a complex-terrain and coastal domain at NW Spain, covering 100x100 km2, with a coal-fired power plant emitting SO2. Simulations were performed during three different periods when SO2 hourly ground-level concentration (glc) peaks were observed. NCEP reanalyses were applied as initial and boundary conditions. The Yonsei University-Pleim-Chang (YSU) PBL scheme was selected in the WRF model to provide the best input to three different CALMET horizontal resolutions, 1x1 km2, 0.5x0.5 km2, and 0.2x0.2 km2. The best results, very similar to each other, were achieved using the last two resolutions; therefore, the 0.5x0.5 km2 resolution was selected to test different CALMET meteorological inputs, using several combinations of WRF outputs and/or surface and upper-air measurements available in the simulation domain. With respect to the models' aloft output, CALMET PBL depth estimations are very similar to PBL depth estimations using upper-air measurements (rawinsondes), and significantly better than WRF PBL depth results. Regarding the models' surface output, the available meteorological sites were divided into two groups, one to provide meteorological input to CALMET (when applied), and the other for model validation. Comparing WRF and CALMET outputs against surface measurements (from the sites for model validation), the lowest RMSE was achieved using as CALMET input dataset WRF output combined with surface measurements (from sites for
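    The RMSE used to rank the CALMET input configurations in these validation records is the standard root-mean-square error between paired model predictions and site measurements. A minimal sketch (the wind-speed values below are hypothetical, not the papers' data):

```python
import math

def rmse(predicted, observed):
    """Root-mean-square error between paired predictions and measurements."""
    assert len(predicted) == len(observed)
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(predicted))

# Hypothetical surface wind speeds [m/s] at four validation sites
obs = [2.1, 3.4, 1.8, 4.0]
sim = [2.5, 3.0, 2.0, 4.4]
print(round(rmse(sim, obs), 3))
```

    Computing this score per configuration and per variable (wind speed, direction, temperature) and picking the lowest is what "lowest RMSE" refers to above.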

  14. Public participation and marginalized groups: the community development model.

    Science.gov (United States)

    O'Keefe, Eileen; Hogg, Christine

    1999-12-01

    OBJECTIVES: To develop ways of reaching house-bound people and enabling them to give their views in planning and monitoring health and social care. STRATEGY: HealthLINK - a project based in a community health council - explored ways of involving older house-bound people in the London Borough of Camden in planning and monitoring health and social care, using community development techniques. RESULTS: HealthLINK set up an infrastructure that gave house-bound people access to information and enabled them to give their views. This gave health and local authorities access to the views of house-bound older people and increased the self-esteem and quality of life of those who became involved. CONCLUSIONS: Community development approaches that enable an infrastructure to be established may be an effective way of reaching marginalized communities. However, there are tensions in this approach between the different requirements for public involvement of statutory bodies and of users, and between representation of groups and listening to individual voices.

  15. Comparing the performance of SIMD computers by running large air pollution models

    DEFF Research Database (Denmark)

    Brown, J.; Hansen, Per Christian; Wasniewski, J.

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on these computers. Using a realistic large-scale model, we gained detailed insight about the performance of the computers involved when used to solve large-scale scientific problems that involve several types of numerical computations. The computers used in our study are the Connection Machines CM-200 and CM-5, and the MasPar MP-2216.

  16. Do 'school coaches' make a difference in school-based mental health promotion? Results from a large focus group study.

    Science.gov (United States)

    Corrieri, Sandro; Conrad, Ines; Riedel-Heller, Steffi G

    2014-12-01

    Mental disorders in children and adolescents are common and have serious consequences. Schools present a key opportunity to promote mental health and implement prevention measures. Four school coaches in five German schools were enlisted to engage students, teachers and parents in building a sustainably healthy school and classroom climate. Altogether, 58 focus groups with students (N=244), parents (N=54) and teachers (N=62) were conducted longitudinally. Topics included: (1) the development of the school and classroom climate, (2) the role of mental health in the regular curriculum, and (3) the role of school coaches in influencing these aspects. Over time, school coaches became trusted reference persons for an increasing number of school system members. They were able to positively influence the school and classroom climate by increasing the awareness of students, teachers and parents of mental health in daily routines. Nevertheless, topics like bullying and student inclusion remained an issue at follow-up. Overall, the school coach intervention is a good model for establishing the topic of mental health in everyday school life and increasing its importance. Future efforts will focus on building self-supporting structures and networks in order to make these efforts sustainable.

  17. Framing Negotiation: Dynamics of Epistemological and Positional Framing in Small Groups during Scientific Modeling

    Science.gov (United States)

    Shim, Soo-Yean; Kim, Heui-Baik

    2018-01-01

    In this study, we examined students' epistemological and positional framing during small group scientific modeling to explore their context-dependent perceptions about knowledge, themselves, and others. We focused on two small groups of Korean eighth-grade students who participated in six modeling activities about excretion. The two groups were…

  18. A friendly Maple module for one and two group reactor model

    International Nuclear Information System (INIS)

    Baptista, Camila O.; Pavan, Guilherme A.; Braga, Kelmo L.; Silva, Marcelo V.; Pereira, P.G.S.; Werner, Rodrigo; Antunes, Valdir; Vellozo, Sergio O.

    2015-01-01

    The well-known two-energy-group core reactor design model is revisited. A simple and friendly Maple module was built to cover the step-by-step calculations of a plate reactor in five situations: 1. one-group bare reactor, 2. two-group bare reactor, 3. one-group reflected reactor, 4. 1-1/2-group reflected reactor and 5. two-group reflected reactor. The results show the convergent path of the critical size, as it should be. (author)
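    For the one-group bare-reactor case listed above, criticality reduces to matching the geometric buckling of the slab to the material buckling of the fuel. A minimal sketch in Python (the one-group constants below are hypothetical illustration values, not taken from the Maple module):

```python
import math

# Illustrative one-group constants (hypothetical values)
nu_sigma_f = 0.157   # nu * Sigma_f, fission neutron production [1/cm]
sigma_a    = 0.120   # macroscopic absorption cross section [1/cm]
D          = 0.90    # diffusion coefficient [cm]

# Material buckling: B_m^2 = (nu*Sigma_f - Sigma_a) / D
b_m2 = (nu_sigma_f - sigma_a) / D

# A bare slab is critical when the geometric buckling (pi/a)^2 equals B_m^2
a_crit = math.pi / math.sqrt(b_m2)   # extrapolated critical thickness [cm]
print(f"critical slab thickness: {a_crit:.1f} cm")
```

    The two-group and reflected cases replace this single balance with coupled fast/thermal diffusion equations and interface conditions, which is what makes a worksheet-style Maple module convenient.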

  19. A friendly Maple module for one and two group reactor model

    Energy Technology Data Exchange (ETDEWEB)

    Baptista, Camila O.; Pavan, Guilherme A.; Braga, Kelmo L.; Silva, Marcelo V.; Pereira, P.G.S.; Werner, Rodrigo; Antunes, Valdir; Vellozo, Sergio O., E-mail: camila.oliv.baptista@gmail.com, E-mail: pavanguilherme@gmail.com, E-mail: kelmo.lins@gmail.com, E-mail: marcelovilelasilva@gmail.com, E-mail: rodrigowerner@hotmail.com, E-mail: neutron201566@yahoo.com, E-mail: vellozo@ime.eb.br [Instituto Militar de Engenharia (IME), Rio de Janeiro, RJ (Brazil). Departamento de Engenharia Nuclear

    2015-07-01

    The well-known two-energy-group core reactor design model is revisited. A simple and friendly Maple module was built to cover the step-by-step calculations of a plate reactor in five situations: 1. one-group bare reactor, 2. two-group bare reactor, 3. one-group reflected reactor, 4. 1-1/2-group reflected reactor and 5. two-group reflected reactor. The results show the convergent path of the critical size, as it should be. (author)

  20. Air quality models and unusually large ozone increases: Identifying model failures, understanding environmental causes, and improving modeled chemistry

    Science.gov (United States)

    Couzo, Evan A.

    Several factors combine to make ozone (O3) pollution in Houston, Texas, unique when compared to other metropolitan areas. These include complex meteorology, intense clustering of industrial activity, and significant precursor emissions from the heavily urbanized eight-county area. Decades of air pollution research have borne out two different causes, or conceptual models, of O3 formation. One conceptual model describes a gradual region-wide increase in O3 concentrations "typical" of many large U.S. cities. The other conceptual model links episodic emissions of volatile organic compounds to spatially limited plumes of high O3, which lead to large hourly increases that have exceeded 100 parts per billion (ppb) per hour. These large hourly increases are known to lead to violations of the federal O3 standard and impact Houston's status as a non-attainment area. There is a need to further understand and characterize the causes of peak O3 levels in Houston and simulate them correctly so that environmental regulators can find the most cost-effective pollution controls. This work provides a detailed understanding of unusually large O3 increases in the natural and modeled environments. First, we probe regulatory model simulations and assess their ability to reproduce the observed phenomenon. As configured for the purpose of demonstrating future attainment of the O3 standard, the model fails to predict the spatially limited O3 plumes observed in Houston. Second, we combine ambient meteorological and pollutant measurement data to identify the most likely geographic origins and preconditions of the concentrated O3 plumes. We find evidence that the O3 plumes are the result of photochemical activity accelerated by industrial emissions. And, third, we implement changes to the modeled chemistry to add missing formation mechanisms of nitrous acid, which is an important radical precursor. Radicals control the chemical reactivity of atmospheric systems, and perturbations to

  1. Muscle activation described with a differential equation model for large ensembles of locally coupled molecular motors.

    Science.gov (United States)

    Walcott, Sam

    2014-10-01

    Molecular motors, by turning chemical energy into mechanical work, are responsible for active cellular processes. Often groups of these motors work together to perform their biological role. Motors in an ensemble are coupled and exhibit complex emergent behavior. Although large motor ensembles can be modeled with partial differential equations (PDEs) by assuming that molecules function independently of their neighbors, this assumption is violated when motors are coupled locally. It is therefore unclear how to describe the ensemble behavior of the locally coupled motors responsible for biological processes such as calcium-dependent skeletal muscle activation. Here we develop a theory to describe locally coupled motor ensembles and apply the theory to skeletal muscle activation. The central idea is that a muscle filament can be divided into two phases: an active and an inactive phase. Dynamic changes in the relative size of these phases are described by a set of linear ordinary differential equations (ODEs). As the dynamics of the active phase are described by PDEs, muscle activation is governed by a set of coupled ODEs and PDEs, building on previous PDE models. With comparison to Monte Carlo simulations, we demonstrate that the theory captures the behavior of locally coupled ensembles. The theory also plausibly describes and predicts muscle experiments from molecular to whole muscle scales, suggesting that a micro- to macroscale muscle model is within reach.
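    The core of the two-phase description can be caricatured with a single linear ODE for the active fraction, integrated by explicit Euler; the rate constants and calcium dependence below are hypothetical placeholders, not the rate functions derived in the paper:

```python
# Minimal sketch of the two-phase idea: a filament is split into an active
# and an inactive phase, and the active fraction A(t) obeys a linear ODE.
k_on = 2.0    # activation rate (already scaled by calcium level) [1/s]
k_off = 1.0   # deactivation rate [1/s]

def step(A, dt):
    """One explicit Euler step of dA/dt = k_on*(1 - A) - k_off*A."""
    return A + dt * (k_on * (1.0 - A) - k_off * A)

A, dt = 0.0, 1e-3
for _ in range(20000):   # integrate 20 s, long enough to equilibrate
    A = step(A, dt)

print(f"steady-state active fraction ~ {A:.3f}")   # analytic value: k_on/(k_on+k_off) = 2/3
```

In the paper's full model this ODE layer is coupled to PDEs describing the motors inside the active phase; the sketch shows only the phase-fraction dynamics.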

  2. First Very Large Telescope/X-shooter spectroscopy of early-type stars outside the Local Group

    NARCIS (Netherlands)

    Hartoog, O.E.; Sana, H.; de Koter, A.; Kaper, L.

    2012-01-01

    As part of the Very Large Telescope (VLT)/X-shooter science verification, we obtained the first optical medium-resolution spectrum of a previously identified bright O-type object in NGC 55, a Large Magellanic Cloud (LMC)-like galaxy at a distance of ∼2.0 Mpc. Based on the stellar and nebular

  3. The Effects of Individual versus Group Incentive Systems on Student Learning and Attitudes in a Large Lecture Course

    Science.gov (United States)

    Shariff, Sya Azmeela Binti

    2012-01-01

    Promoting active learning among students may result in greater learning and more positive attitudes in university-level large lecture classes. One way of promoting active learning in large lecture classes is via the use of a think-pair-share instructional strategy, which combines student participation in class discussions via clicker technology…

  4. Large-group psychodynamics and massive violence Psicodinâmica da violência de grandes grupos e da violência de massas

    Directory of Open Access Journals (Sweden)

    Vamik D. Volkan

    2006-06-01

    Full Text Available Beginning with Freud, psychoanalytic theories concerning large groups have mainly focused on individuals' perceptions of what their large groups psychologically mean to them. This chapter examines some aspects of large-group psychology in its own right and studies the psychodynamics of ethnic, national, religious or ideological groups, the membership of which originates in childhood. I will compare the mourning process in individuals with the mourning process in large groups to illustrate why we need to study large-group psychology as a subject in itself. As part of this discussion I will also describe signs and symptoms of large-group regression. When there is a threat against a large group's identity, massive violence may be initiated, and this violence, in turn, has an obvious impact on public health.

  5. Psicodinâmica da violência de grandes grupos e da violência de massas Large-group psychodynamics and massive violence

    Directory of Open Access Journals (Sweden)

    Vamik D. Volkan

    2006-01-01

    Full Text Available Beginning with Freud, psychoanalytic theories concerning large groups have mainly focused on individuals' perceptions of what their large groups psychologically mean to them. This text examines some aspects of large-group psychology in its own right and studies the psychodynamics of ethnic, national, religious or ideological groups, the membership of which originates in childhood. I will compare the mourning process in individuals with the mourning process in large groups to illustrate why we need to study large-group psychology as a subject in itself. As part of this discussion I will also describe signs and symptoms of large-group regression. When there is a threat against a large group's identity, massive violence may be initiated, and this violence, in turn, has an obvious impact on public health.

  6. The Effects of Group Relaxation Training/Large Muscle Exercise, and Parental Involvement on Attention to Task, Impulsivity, and Locus of Control among Hyperactive Boys.

    Science.gov (United States)

    Porter, Sally S.; Omizo, Michael M.

    1984-01-01

    The study examined the effects of group relaxation training/large muscle exercise and parental involvement on attention to task, impulsivity, and locus of control among 34 hyperactive boys. Following treatment both experimental groups recorded significantly higher attention to task, lower impulsivity, and lower locus of control scores. (Author/CL)

  7. Estimation of group means when adjusting for covariates in generalized linear models.

    Science.gov (United States)

    Qu, Yongming; Luo, Junxiang

    2015-01-01

    Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for that treatment group in the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models can be seriously biased estimates of the true group means. We propose a new method to estimate the group means consistently, with a corresponding variance estimation. Simulations showed that the proposed method produces an unbiased estimator of the group means and provides the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes. Copyright © 2014 John Wiley & Sons, Ltd.
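    The distinction the abstract draws can be seen in a toy logistic model (with hypothetical coefficients): because the link function is nonlinear, the response evaluated at the mean covariate is not the mean of the individual responses:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy logistic model with hypothetical coefficients: P(Y=1|x) = sigmoid(b0 + b1*x).
b0, b1 = 0.0, 1.0
x = [0.0, 1.0, 2.0, 3.0]          # covariate values in one treatment group

# "Response at the mean covariate" (what many software packages report):
at_mean_x = sigmoid(b0 + b1 * (sum(x) / len(x)))

# Mean response for the group (average of the individual predictions):
mean_response = sum(sigmoid(b0 + b1 * xi) for xi in x) / len(x)

print(f"response at mean covariate: {at_mean_x:.3f}")     # ~0.818
print(f"mean response for group:    {mean_response:.3f}")  # ~0.766
```

The gap between the two numbers is the bias the proposed estimator is designed to avoid.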

  8. A wave propagation model of blood flow in large vessels using an approximate velocity profile function

    NARCIS (Netherlands)

    Bessems, D.; Rutten, M.C.M.; Vosse, van de F.N.

    2007-01-01

    Lumped-parameter models (zero-dimensional) and wave-propagation models (one-dimensional) for pressure and flow in large vessels, as well as fully three-dimensional fluid–structure interaction models for pressure and velocity, can contribute valuably to answering physiological and patho-physiological

  9. A Model for Behavioral Management and Relationship Training for Parents in Groups,

    Science.gov (United States)

    Behavior, Human relations, *Training, *Families(Human), Symposia, Models, Children, Psychotherapy, Problem solving, Management, Control, Learning, Skills, Decision making, Group dynamics, Military psychology, Military medicine

  10. Multiconformation, Density Functional Theory-Based pKa Prediction in Application to Large, Flexible Organic Molecules with Diverse Functional Groups.

    Science.gov (United States)

    Bochevarov, Art D; Watson, Mark A; Greenwood, Jeremy R; Philipp, Dean M

    2016-12-13

    We consider the conformational flexibility of molecules and its implications for micro- and macro-pKa. The corresponding formulas are derived and discussed against the background of a comprehensive scientific and algorithmic description of the latest version of our computer program Jaguar pKa, a density functional theory-based pKa predictor, which is now capable of acting on multiple conformations explicitly. Jaguar pKa is essentially a complex computational workflow incorporating research and technologies from the fields of cheminformatics, molecular mechanics, quantum mechanics, and implicit solvation models. The workflow also makes use of automatically applied empirical corrections which account for the systematic errors resulting from the neglect of explicit solvent interactions in the algorithm's implicit solvent model. Applications of our program to large, flexible organic molecules representing several classes of functional groups are shown, with particular emphasis on drug-like molecules. It is demonstrated that a combination of aggressive conformational search and an explicit consideration of multiple conformations nearly eliminates the dependence of the results on the initially chosen conformation. In certain cases this leads to unprecedented accuracy, which is sufficient for distinguishing stereoisomers that have slightly different pKa values. An application of Jaguar pKa to proton sponges, the pKa values of which are strongly influenced by steric effects, showcases the advantages that pKa predictors based on quantum mechanical calculations have over similar empirical programs.
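    One ingredient of any multi-conformation treatment is Boltzmann weighting of the conformer ensemble. A minimal sketch with hypothetical free energies (the actual Jaguar pKa workflow is far more involved than this single step):

```python
import math

# Boltzmann weighting of conformers: a conformation-dependent ensemble
# property is averaged with weights w_i proportional to exp(-G_i / RT).
# The free energies below are hypothetical placeholders.
R = 1.987204e-3            # gas constant [kcal/(mol*K)]
T = 298.15                 # temperature [K]
G = [0.0, 0.5, 1.2]        # conformer free energies relative to the lowest [kcal/mol]

boltz = [math.exp(-g / (R * T)) for g in G]
Z = sum(boltz)             # partition function over the conformer set
weights = [b / Z for b in boltz]

for i, w in enumerate(weights):
    print(f"conformer {i}: weight = {w:.3f}")
```

The lowest-energy conformer dominates but does not fully determine the ensemble average, which is why an aggressive conformational search matters.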

  11. Task complexity and task, goal, and reward interdependence in group performance management : A prescriptive model

    NARCIS (Netherlands)

    van Vijfeijken, H.; Kleingeld, A.; van Tuijl, H.; Algera, J.A.; Thierry, Hk.

    2002-01-01

    A prescriptive model on how to design effective combinations of goal setting and contingent rewards for group performance management is presented. The model incorporates the constructs task complexity, task interdependence, goal interdependence, and reward interdependence and specifies optimal fit

  12. Task complexity and task, goal, and reward interdependence in group performance : a prescriptive model

    NARCIS (Netherlands)

    Vijfeijken, van H.T.G.A.; Kleingeld, P.A.M.; Tuijl, van H.F.J.M.; Algera, J.A.; Thierry, H.

    2002-01-01

    A prescriptive model on how to design effective combinations of goal setting and contingent rewards for group performance management is presented. The model incorporates the constructs task complexity, task interdependence, goal interdependence, and reward interdependence and specifies optimal fit

  13. Does company size matter? Validation of an integrative model of safety behavior across small and large construction companies.

    Science.gov (United States)

    Guo, Brian H W; Yiu, Tak Wing; González, Vicente A

    2018-02-01

    Previous safety climate studies primarily focused on either large construction companies or the construction industry as a whole, while little is known about whether company size has significant effects on workers' understanding of safety climate measures and on the relationships between safety climate factors and safety behavior. Thus, this study aims to: (a) test the measurement equivalence (ME) of a safety climate measure across workers from small and large companies; (b) investigate whether company size alters the causal structure of the integrative model developed by Guo, Yiu, and González (2016). Data were collected from 253 construction workers in New Zealand using a safety climate measure. This study used multi-group confirmatory factor analyses (MCFA) to test the measurement equivalence of the safety climate measure and the structural invariance of the integrative model. Results indicate that workers from small and large companies understood the safety climate measure in a similar manner. In addition, it was suggested that company size does not change the causal structure and mediational processes of the integrative model. Both measurement equivalence of the safety climate measure and structural invariance of the integrative model were supported by this study. Practical applications: Findings of this study provide strong support for a meaningful use of the safety climate measure across construction companies of different sizes. Safety behavior promotion strategies designed based on the integrative model may be well suited for both large and small companies. Copyright © 2017 National Safety Council and Elsevier Ltd. All rights reserved.

  14. Comparison of hard scattering models for particle production at large transverse momentum. 2

    International Nuclear Information System (INIS)

    Schiller, A.; Ilgenfritz, E.M.; Kripfganz, J.; Moehring, H.J.; Ranft, G.; Ranft, J.

    1977-01-01

    Single particle distributions of π⁺ and π⁻ at large transverse momentum are analysed using various hard collision models: qq → qq, qq̄ → MM̄, qM → qM. The transverse momentum dependence at θ_cm = 90° is well described by all models except qq̄ → MM̄. This model has problems with the ratios (pp → π⁺ + X)/(π±p → π⁰ + X). Presently available data on rapidity distributions of pions in π⁻p and pp̄ collisions are at rather low transverse momentum (however large x⊥ = 2p⊥/√s), where it is not obvious that hard collision models should dominate. The data, in particular the π⁻/π⁺ asymmetry, are well described by all models except qM → Mq (CIM). At large values of transverse momentum, significant differences between the models are predicted. (author)

  15. Using radar altimetry to update a large-scale hydrological model of the Brahmaputra river basin

    DEFF Research Database (Denmark)

    Finsen, F.; Milzow, Christian; Smith, R.

    2014-01-01

    Measurements of river and lake water levels from space-borne radar altimeters (past missions include ERS, Envisat, Jason, Topex) are useful for calibration and validation of large-scale hydrological models in poorly gauged river basins. Altimetry data availability over the downstream reaches of the Brahmaputra is excellent (17 high-quality virtual stations from ERS-2, 6 from Topex and 10 from Envisat are available for the Brahmaputra). In this study, altimetry data are used to update a large-scale Budyko-type hydrological model of the Brahmaputra river basin in real time. Altimetry measurements improved model performance considerably. The Nash-Sutcliffe model efficiency increased from 0.77 to 0.83. Real-time river basin modelling using radar altimetry has the potential to improve the predictive capability of large-scale hydrological models elsewhere on the planet.
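    The Nash-Sutcliffe efficiency quoted above (0.77 to 0.83) is straightforward to compute; a minimal implementation with made-up discharge series for illustration:

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe model efficiency: 1 - SSE / variance of the observations.
    1.0 is a perfect fit; 0.0 means no better than predicting the observed mean."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    var = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / var

# Made-up discharge series [m^3/s], for illustration only.
obs = [10.0, 12.0, 15.0, 13.0, 11.0]
sim = [9.5, 12.5, 14.0, 13.5, 11.5]

print(f"NSE = {nash_sutcliffe(obs, sim):.3f}")
```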

  16. Introducing the fit-criteria assessment plot - A visualisation tool to assist class enumeration in group-based trajectory modelling.

    Science.gov (United States)

    Klijn, Sven L; Weijenberg, Matty P; Lemmens, Paul; van den Brandt, Piet A; Lima Passos, Valéria

    2017-10-01

    Background and objective: Group-based trajectory modelling is a model-based clustering technique applied for the identification of latent patterns of temporal changes. Despite its manifold applications in clinical and health sciences, potential problems of the model selection procedure are often overlooked. The choice of the number of latent trajectories (class enumeration), for instance, is to a large degree based on statistical criteria that are not fail-safe. Moreover, the process as a whole is not transparent. To facilitate class enumeration, we introduce a graphical summary display of several fit and model adequacy criteria, the fit-criteria assessment plot. Methods: An R-code that accepts universal data input is presented. The programme condenses the relevant model-fit information from group-based trajectory modelling output into automated graphical displays. Examples based on real and simulated data are provided to illustrate, assess and validate the fit-criteria assessment plot's utility. Results: The fit-criteria assessment plot provides an overview of fit criteria on a single page, placing users in an informed position to make a decision. The fit-criteria assessment plot does not automatically select the most appropriate model but eases the model assessment procedure. Conclusions: The fit-criteria assessment plot is an exploratory visualisation tool that can be employed to assist decisions in the initial and decisive phases of group-based trajectory modelling analysis. Considering group-based trajectory modelling's widespread resonance in medical and epidemiological sciences, a more comprehensive, easily interpretable and transparent display of the iterative process of class enumeration may foster its adequate use.
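    The statistical criteria behind class enumeration can be tabulated in a few lines. The sketch below compares BIC across candidate class counts using hypothetical log-likelihoods and parameter counts (the actual fit-criteria assessment plot is an R tool that displays several such criteria graphically):

```python
import math

# Hypothetical outputs of fitting 1..5 trajectory classes to n subjects:
# (log-likelihood, number of free parameters). Illustrative numbers only.
n = 500
fits = {
    1: (-2510.0, 3),
    2: (-2401.0, 7),
    3: (-2380.0, 11),
    4: (-2376.0, 15),
    5: (-2374.5, 19),
}

def bic(loglik, k, n):
    """Bayesian information criterion: -2*logL + k*ln(n); lower is better."""
    return -2.0 * loglik + k * math.log(n)

scores = {c: bic(ll, k, n) for c, (ll, k) in fits.items()}
best = min(scores, key=scores.get)
for c, s in sorted(scores.items()):
    print(f"{c} classes: BIC = {s:.1f}")
print(f"lowest BIC at {best} classes")
```

With these toy numbers the likelihood keeps improving with every added class, but the penalty term turns the BIC curve around; inspecting several criteria side by side, as the plot does, guards against trusting any single one.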

  17. Large Deviations for Stochastic Models of Two-Dimensional Second Grade Fluids

    International Nuclear Information System (INIS)

    Zhai, Jianliang; Zhang, Tusheng

    2017-01-01

    In this paper, we establish a large deviation principle for stochastic models of incompressible second grade fluids. The weak convergence method introduced by Budhiraja and Dupuis (Probab Math Statist 20:39–61, 2000) plays an important role.

  18. Large Deviations for Stochastic Models of Two-Dimensional Second Grade Fluids

    Energy Technology Data Exchange (ETDEWEB)

    Zhai, Jianliang, E-mail: zhaijl@ustc.edu.cn [University of Science and Technology of China, School of Mathematical Sciences (China); Zhang, Tusheng, E-mail: Tusheng.Zhang@manchester.ac.uk [University of Manchester, School of Mathematics (United Kingdom)

    2017-06-15

    In this paper, we establish a large deviation principle for stochastic models of incompressible second grade fluids. The weak convergence method introduced by Budhiraja and Dupuis (Probab Math Statist 20:39–61, 2000) plays an important role.

  19. Critical behavior in some D = 1 large-N matrix models

    International Nuclear Information System (INIS)

    Das, S.R.; Dhar, A.; Sengupta, A.M.; Wadia, D.R.

    1990-01-01

    The authors study the critical behavior in D = 1 large-N matrix models. The authors also look at the subleading terms in the susceptibility in order to find out the dimensions of some of the operators in the theory.

  20. Various approaches to the modelling of large scale 3-dimensional circulation in the Ocean

    Digital Repository Service at National Institute of Oceanography (India)

    Shaji, C.; Bahulayan, N.; Rao, A.D.; Dube, S.K.

    In this paper, the three different approaches to the modelling of large scale 3-dimensional flow in the ocean such as the diagnostic, semi-diagnostic (adaptation) and the prognostic are discussed in detail. Three-dimensional solutions are obtained...