WorldWideScience

Sample records for sophisticated multi-model combination

  1. Sophisticated Players and Sophisticated Agents

    NARCIS (Netherlands)

    Rustichini, A.

    1998-01-01

    A sophisticated player is an individual who takes the actions of the opponents in a strategic situation as determined by the decisions of rational opponents, and acts accordingly. A sophisticated agent is rational in the choice of his action, but ignores the fact that he is part of a strategic

  2. Multi-model ensemble combinations of the water budget in the East/Japan Sea

    Science.gov (United States)

    HAN, S.; Hirose, N.; Usui, N.; Miyazawa, Y.

    2016-02-01

    The water balance of the East/Japan Sea is determined mainly by inflow and outflow through the Korea/Tsushima, Tsugaru and Soya/La Perouse Straits. However, the volume transports measured at the three straits remain quantitatively unbalanced. This study examined the seasonal variation of the volume transport using multiple linear regression and ridge regression multi-model ensemble (MME) methods to estimate a physically consistent circulation in the East/Japan Sea, using four different data assimilation models. The MME outperformed all of the single models by reducing uncertainties, with the ridge regression in particular addressing the multicollinearity problem. However, the regression constants turned out to be inconsistent with each other when the MME was applied separately for each strait. The MME for a connected system was thus performed to find common constants for these straits. The estimate from this MME was found to be similar to the MME result for sea level difference (SLD). The estimated mean transport (2.42 Sv) was smaller than the measurement data at the Korea/Tsushima Strait, but the calibrated transport of the Tsugaru Strait (1.63 Sv) was larger than the observed data. The MME results for transport and SLD also suggested that the standard deviation (STD) of the Korea/Tsushima Strait is larger than the STD of the observation, whereas the estimated results were almost identical to those observed for the Tsugaru and Soya/La Perouse Straits. The similarity between the MME results enhances the reliability of the present MME estimation.
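The regression-based MME combination described in this record can be sketched with synthetic data (all numbers below are invented for illustration): each model's transport estimate becomes a predictor, and ridge regression shrinks the weights to tame multicollinearity among the strongly correlated models.

```python
import numpy as np

rng = np.random.default_rng(0)
n_months, n_models = 120, 4
# synthetic seasonal transport (Sv) standing in for the true circulation
truth = 2.5 + 0.8 * np.sin(2 * np.pi * np.arange(n_months) / 12)
# four strongly correlated (collinear) model estimates of the same transport
X = np.column_stack([truth + rng.normal(0, 0.3, n_months) for _ in range(n_models)])
y = truth + rng.normal(0, 0.1, n_months)   # "observed" transport

def ridge_weights(X, y, lam):
    """Closed-form ridge solution: w = (X'X + lam*I)^{-1} X'y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

w = ridge_weights(X, y, lam=10.0)          # lam > 0 stabilises collinear predictors
mme = X @ w                                # combined multi-model estimate
rmse = np.sqrt(np.mean((mme - y) ** 2))
print(w, rmse)
```

The combined estimate typically beats any single member because the independent model errors partially cancel, which is the effect the abstract reports.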

  3. Seeking for the rational basis of the Median Model: the optimal combination of multi-model ensemble results

    Directory of Open Access Journals (Sweden)

    A. Riccio

    2007-12-01

    Full Text Available In this paper we present an approach for the statistical analysis of multi-model ensemble results. The models considered here are operational long-range transport and dispersion models, also used for the real-time simulation of pollutant dispersion or the accidental release of radioactive nuclides.

    We first introduce the theoretical basis (with its roots sinking into the Bayes theorem) and then apply this approach to the analysis of model results obtained during the ETEX-1 exercise. We recover some interesting results, supporting the heuristic approach called the "median model", originally introduced in Galmarini et al. (2004a, b).

    This approach also provides a way to systematically reduce (and quantify) model uncertainties, thus supporting the decision-making process and/or regulatory-purpose activities in a very effective manner.
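The "median model" heuristic itself is simple to illustrate: at every receptor point, take the median of the ensemble members' predictions. A toy sketch with synthetic concentration fields (all values assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
truth = rng.gamma(2.0, 1.0, size=500)          # synthetic "true" concentrations
n_models = 10
# each dispersion model = truth scaled by its own bias, plus noise
preds = np.stack([truth * rng.uniform(0.7, 1.3) + rng.normal(0, 0.3, truth.size)
                  for _ in range(n_models)])
preds[0] *= 5.0                                # one badly biased member

median_model = np.median(preds, axis=0)        # the "median model", point by point
mean_model = preds.mean(axis=0)

def rmse(p):
    return np.sqrt(np.mean((p - truth) ** 2))

print(rmse(median_model), rmse(mean_model))    # the median resists the outlier
```

The design point is robustness: unlike the ensemble mean, the point-wise median is insensitive to a single wildly biased member.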

  4. In Praise of the Sophists.

    Science.gov (United States)

    Gibson, Walker

    1993-01-01

    Discusses the thinking of the Greek Sophist philosophers, particularly Gorgias and Protagoras, and their importance and relevance for contemporary English instructors. Considers the problem of language as signs of reality in the context of Sophist philosophy. (HB)

  5. Cumulative Dominance and Probabilistic Sophistication

    NARCIS (Netherlands)

    Wakker, P.P.; Sarin, R.H.

    2000-01-01

    Machina & Schmeidler (Econometrica, 60, 1992) gave preference conditions for probabilistic sophistication, i.e. decision making where uncertainty can be expressed in terms of (subjective) probabilities without commitment to expected utility maximization. This note shows that simpler and more general

  6. Cognitive Load and Strategic Sophistication

    OpenAIRE

    Allred, Sarah; Duffy, Sean; Smith, John

    2013-01-01

    We study the relationship between the cognitive load manipulation and strategic sophistication. The cognitive load manipulation is designed to reduce the subject's cognitive resources that are available for deliberation on a choice. In our experiment, subjects are placed under a large cognitive load (given a difficult number to remember) or a low cognitive load (given a number which is not difficult to remember). Subsequently, the subjects play a one-shot game, then they are asked to recall...

  7. Sophisticating a naive Liapunov function

    International Nuclear Information System (INIS)

    Smith, D.; Lewins, J.D.

    1985-01-01

    The art of the direct method of Liapunov to determine system stability is to construct a suitable Liapunov or V function, where V is to be positive definite (PD) and to shrink to a center, which may be conveniently chosen as the origin, and where its time derivative V̇ is to be negative definite (ND). One aid to the art is to solve an approximation to the system equations in order to provide a candidate V function. It can happen, however, that V̇ is not strictly ND but vanishes at a finite number of isolated points. Naively, one anticipates that stability has been demonstrated, since the trajectory of the system at such points is only momentarily tangential and immediately enters a region of inward-directed trajectories. To demonstrate stability rigorously requires the construction of a sophisticated Liapunov function from what can be called the naive original choice. In this paper, the authors demonstrate the method of perturbing the naive function in the context of the well-known second-order oscillator and then apply the method to a more complicated problem based on a prompt jump model for a nuclear fission reactor.

  8. Pension fund sophistication and investment policy

    NARCIS (Netherlands)

    de Dreu, J.|info:eu-repo/dai/nl/364537906; Bikker, J.A.|info:eu-repo/dai/nl/06912261X

    This paper assesses the sophistication of pension funds’ investment policies using data on 748 Dutch pension funds during the 1999–2006 period. We develop three indicators of sophistication: gross rounding of investment choices, investments in alternative sophisticated asset classes and ‘home bias’.

  9. Multi-model analysis in hydrological prediction

    Science.gov (United States)

    Lanthier, M.; Arsenault, R.; Brissette, F.

    2017-12-01

    Hydrologic modelling is, by nature, a simplification of the real-world hydrologic system. Ensemble hydrological predictions thus obtained do not present the full range of possible streamflow outcomes, producing ensembles that exhibit errors in variance such as under-dispersion. Past studies show that lumped models used in prediction mode can return satisfactory results, especially when there is not enough information available on the watershed to run a distributed model. But all lumped models greatly simplify the complex processes of the hydrologic cycle. To generate more spread in the hydrologic ensemble predictions, multi-model ensembles have been considered. In this study, the aim is to propose and analyse a method that gives an ensemble streamflow prediction that properly represents the forecast probabilities and reduces ensemble bias. To achieve this, three simple lumped models are used to generate an ensemble. These will also be combined using multi-model averaging techniques, which generally produce a more accurate hydrograph than the best of the individual models in simulation mode. This new combined predictive hydrograph is added to the ensemble, creating a larger ensemble which may improve the variability while also improving the ensemble mean bias. The quality of the predictions is then assessed on different periods: 2 weeks, 1 month, 3 months and 6 months, using a PIT histogram of the percentiles of the real observation volumes with respect to the volumes of the ensemble members. Initially, the models were run using historical weather data to generate synthetic flows. This worked for individual models, but not for the multi-model and for the large ensemble. Consequently, by performing data assimilation at each prediction period and thus adjusting the initial states of the models, the PIT histogram could be constructed using the observed flows while allowing the use of the multi-model predictions. The under-dispersion has been

  10. Improving Streamflow Simulation in Gaged and Ungaged Areas Using a Multi-Model Synthesis Combined with Remotely-Sensed Data and Estimates of Uncertainty

    Science.gov (United States)

    Lafontaine, J.; Hay, L.

    2015-12-01

    The United States Geological Survey (USGS) has developed a National Hydrologic Model (NHM) to support coordinated, comprehensive and consistent hydrologic model development, and facilitate the application of hydrologic simulations within the conterminous United States (CONUS). More than 1,700 gaged watersheds across the CONUS were modeled to test the feasibility of improving streamflow simulations in gaged and ungaged watersheds by linking statistically- and physically-based hydrologic models with remotely-sensed data products (i.e., snow water equivalent) and estimates of uncertainty. Initially, the physically-based models were calibrated to measured streamflow data to provide a baseline for comparison. As many stream reaches in the CONUS are either not gaged, or are substantially impacted by water use or flow regulation, ancillary information must be used to determine reasonable parameter estimations for streamflow simulations. In addition, not all ancillary datasets are appropriate for application to all parts of the CONUS (e.g., snow water equivalent in the southeastern U.S., where snow is a rarity). As it is not expected that any one data product or model simulation will be sufficient for representing hydrologic behavior across the entire CONUS, a systematic evaluation of which data products improve simulations of streamflow for various regions across the CONUS was performed. The resulting portfolio of calibration strategies can be used to guide selection of an appropriate combination of simulated and measured information for model development and calibration at a given location of interest. In addition, these calibration strategies have been developed to be flexible so that new data products or simulated information can be assimilated.
This analysis provides a foundation to understand how well models work when streamflow data is either not available or is limited and could be used to further inform hydrologic model parameter development for ungaged areas.

  11. Financial Literacy and Financial Sophistication in the Older Population

    Science.gov (United States)

    Lusardi, Annamaria; Mitchell, Olivia S.; Curto, Vilsa

    2017-01-01

    Using a special-purpose module implemented in the Health and Retirement Study, we evaluate financial sophistication in the American population over the age of 50. We combine several financial literacy questions into an overall index to highlight which questions best capture financial sophistication and examine the sensitivity of financial literacy responses to framing effects. Results show that many older respondents are not financially sophisticated: they fail to grasp essential aspects of risk diversification, asset valuation, portfolio choice, and investment fees. Subgroups with notable deficits include women, the least educated, non-Whites, and those over age 75. In view of the fact that retirees increasingly must take on responsibility for their own retirement security, such meager levels of knowledge have potentially serious and negative implications. PMID:28553191

  12. Financial Literacy and Financial Sophistication in the Older Population.

    Science.gov (United States)

    Lusardi, Annamaria; Mitchell, Olivia S; Curto, Vilsa

    2014-10-01

    Using a special-purpose module implemented in the Health and Retirement Study, we evaluate financial sophistication in the American population over the age of 50. We combine several financial literacy questions into an overall index to highlight which questions best capture financial sophistication and examine the sensitivity of financial literacy responses to framing effects. Results show that many older respondents are not financially sophisticated: they fail to grasp essential aspects of risk diversification, asset valuation, portfolio choice, and investment fees. Subgroups with notable deficits include women, the least educated, non-Whites, and those over age 75. In view of the fact that retirees increasingly must take on responsibility for their own retirement security, such meager levels of knowledge have potentially serious and negative implications.

  13. The value of multivariate model sophistication

    DEFF Research Database (Denmark)

    Rombouts, Jeroen; Stentoft, Lars; Violante, Francesco

    2014-01-01

    We assess the predictive accuracies of a large number of multivariate volatility models in terms of pricing options on the Dow Jones Industrial Average. We measure the value of model sophistication in terms of dollar losses by considering a set of 444 multivariate models that differ in their spec… In addition to investigating the value of model sophistication in terms of dollar losses directly, we also use the model confidence set approach to statistically infer the set of models that delivers the best pricing performances.

  14. Does underground storage still require sophisticated studies?

    International Nuclear Information System (INIS)

    Marsily, G. de

    1997-01-01

    Most countries agree on the necessity of burying high- or medium-level wastes in geological layers situated a few hundred meters below ground level. The advantages and disadvantages of different types of rock such as salt, clay, granite and volcanic material are examined. Sophisticated studies are conducted to determine the best geological confinement, but questions arise about the time for which safety must be ensured. France has chosen 3 possible sites. These sites are geologically described in the article. The final place will be proposed after a testing phase of about 5 years in an underground facility. (A.C.)

  15. Multi-Model Ensemble Wake Vortex Prediction

    Science.gov (United States)

    Koerner, Stephan; Holzaepfel, Frank; Ahmad, Nash'at N.

    2015-01-01

    Several multi-model ensemble methods are investigated for predicting wake vortex transport and decay. This study is a joint effort between National Aeronautics and Space Administration and Deutsches Zentrum fuer Luft- und Raumfahrt to develop a multi-model ensemble capability using their wake models. An overview of different multi-model ensemble methods and their feasibility for wake applications is presented. The methods include Reliability Ensemble Averaging, Bayesian Model Averaging, and Monte Carlo Simulations. The methodologies are evaluated using data from wake vortex field experiments.
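A toy version of the Bayesian Model Averaging step mentioned above (all data synthetic; the real wake models are far richer) weights each member model by its likelihood against observations, with equal priors and a fixed observation error assumed for simplicity:

```python
import numpy as np

rng = np.random.default_rng(4)
obs = rng.normal(0.0, 1.0, 50)                       # observed vortex behaviour (toy units)
preds = {"modelA": obs + rng.normal(0, 0.2, 50),     # three competing wake models
         "modelB": obs + rng.normal(0, 0.5, 50),
         "modelC": obs + rng.normal(0, 1.0, 50)}

def gauss_loglik(pred, obs, sigma):
    # Gaussian log-likelihood of the observations under one model
    return np.sum(-0.5 * ((obs - pred) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi)))

# BMA weights: posterior model probabilities under equal priors, fixed sigma
ll = np.array([gauss_loglik(p, obs, 0.5) for p in preds.values()])
w = np.exp(ll - ll.max())                            # subtract max for numerical stability
w /= w.sum()
print(dict(zip(preds, w)))
```

Because likelihoods enter exponentially, the weights concentrate sharply on the best-fitting member; full BMA treatments also estimate the error variances rather than fixing them.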

  16. Does Investors' Sophistication Affect Persistence and Pricing of Discretionary Accruals?

    OpenAIRE

    Lanfeng Kao

    2007-01-01

    This paper examines whether the sophistication of market investors influences management's strategy on discretionary accounting choice, and thus changes the persistence of discretionary accruals. The results show that the persistence of discretionary accruals for firms faced with naive investors is lower than that for firms faced with sophisticated investors. The results also demonstrate that sophisticated investors indeed incorporate the implications of current earnings components into future ...

  17. Roman sophisticated surface modification methods to manufacture silver counterfeited coins

    Science.gov (United States)

    Ingo, G. M.; Riccucci, C.; Faraldi, F.; Pascucci, M.; Messina, E.; Fierro, G.; Di Carlo, G.

    2017-11-01

    By means of the combined use of X-ray photoelectron spectroscopy (XPS), optical microscopy (OM) and scanning electron microscopy (SEM) coupled with energy dispersive X-ray spectroscopy (EDS), the surface and subsurface chemical and metallurgical features of silver counterfeited Roman Republican coins are investigated to decipher some aspects of the manufacturing methods and to evaluate the technological ability of the Roman metallurgists to produce thin silver coatings. The results demonstrate that over 2000 years ago important advances in the technology of thin layer deposition on metal substrates were attained by the Romans. The ancient metallurgists produced counterfeited coins by combining sophisticated micro-plating methods and tailored surface chemical modification based on the mercury-silvering process. The results reveal that the Romans were able to manipulate alloys chemically and metallurgically at a micro scale in a systematic way, producing adherent precious metal layers with a uniform thickness of up to a few micrometers. The results converge to reveal that the production of forgeries was aimed primarily at saving expensive metals as much as possible, allowing profitable large-scale production at a lower cost. The driving forces could have been a lack of precious metals, an unexpected need to circulate coins for trade, and/or a combination of social, political and economic factors that required a change in money supply. Finally, some information on corrosion products has been obtained, useful for selecting materials and methods for the conservation of these important witnesses of technology and economy.

  18. Multi-Model Adaptive Fuzzy Controller for a CSTR Process

    Directory of Open Access Journals (Sweden)

    Shubham Gogoria

    2015-09-01

    Full Text Available Continuous Stirred Tank Reactors are intensively used to control exothermic reactions in chemical industries. It is a very complex multi-variable system with non-linear characteristics. This paper deals with linearization of the mathematical model of a CSTR process. A multi-model adaptive fuzzy controller has been designed to control the reactor concentration and temperature of the CSTR process. This method combines the output of multiple fuzzy controllers, which are operated at various operating points. The proposed solution is a straightforward implementation of a fuzzy controller with a gain scheduler to control the linearly inseparable parameters of a highly non-linear process.
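The blending idea — combining local controllers with fuzzy weights over the operating point — can be sketched minimally. The operating points, gains, and triangular memberships below are made-up numbers, not the paper's design:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular fuzzy membership with support [a, c] and peak at b."""
    return float(np.clip(min((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0))

# hypothetical (operating temperature, local controller gain) pairs
operating_points = [(300.0, 2.0), (350.0, 3.5), (400.0, 5.0)]

def blended_gain(temp):
    # weight each local controller by its membership at the current temperature;
    # valid for temperatures within the scheduled range (here roughly 250-450)
    centers = [p for p, _ in operating_points]
    gains = np.array([g for _, g in operating_points])
    w = np.array([tri(temp, c - 50, c, c + 50) for c in centers])
    w = w / w.sum()
    return float(w @ gains)

print(blended_gain(325.0))  # -> 2.75, halfway between the first two local gains
```

At an operating point the blend reduces to that point's local controller; in between, the output interpolates smoothly, which is what makes gain scheduling well suited to a non-linear plant like a CSTR.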

  19. Automatically Assessing Lexical Sophistication: Indices, Tools, Findings, and Application

    Science.gov (United States)

    Kyle, Kristopher; Crossley, Scott A.

    2015-01-01

    This study explores the construct of lexical sophistication and its applications for measuring second language lexical and speaking proficiency. In doing so, the study introduces the Tool for the Automatic Analysis of LExical Sophistication (TAALES), which calculates text scores for 135 classic and newly developed lexical indices related to word…

  20. The Impact of Financial Sophistication on Adjustable Rate Mortgage Ownership

    Science.gov (United States)

    Smith, Hyrum; Finke, Michael S.; Huston, Sandra J.

    2011-01-01

    The influence of a financial sophistication scale on adjustable-rate mortgage (ARM) borrowing is explored. Descriptive statistics and regression analysis using recent data from the Survey of Consumer Finances reveal that ARM borrowing is driven by both the least and most financially sophisticated households but for different reasons. Less…

  1. The role of sophisticated accounting system in strategy management

    OpenAIRE

    Naranjo Gil, David

    2004-01-01

    Organizations are designing more sophisticated accounting information systems to meet the strategic goals and enhance their performance. This study examines the effect of accounting information system design on the performance of organizations pursuing different strategic priorities. The alignment between sophisticated accounting information systems and organizational strategy is analyzed. The enabling effect of the accounting information system on performance is also examined. Relationships ...

  2. Probabilistic Sophistication, Second Order Stochastic Dominance, and Uncertainty Aversion

    OpenAIRE

    Simone Cerreia-Vioglio; Fabio Maccheroni; Massimo Marinacci; Luigi Montrucchio

    2010-01-01

    We study the interplay of probabilistic sophistication, second order stochastic dominance, and uncertainty aversion, three fundamental notions in choice under uncertainty. In particular, our main result, Theorem 2, characterizes uncertainty averse preferences that satisfy second order stochastic dominance, as well as uncertainty averse preferences that are probabilistically sophisticated.

  3. The First Sophists and the Uses of History.

    Science.gov (United States)

    Jarratt, Susan C.

    1987-01-01

    Reviews the history of intellectual views on the Greek sophists in three phases: (1) their disparagement by Plato and Aristotle as the morally disgraceful "other"; (2) nineteenth century British positivists' reappraisal of these relativists as ethically and scientifically superior; and (3) twentieth century versions of the sophists as…

  4. Using synchronization in multi-model ensembles to improve prediction

    Science.gov (United States)

    Hiemstra, P.; Selten, F.

    2012-04-01

    In recent decades, many climate models have been developed to understand and predict the behavior of the Earth's climate system. Although these models are all based on the same basic physical principles, they still show different behavior. This is for example caused by the choice of how to parametrize sub-grid scale processes. One method to combine these imperfect models is to run a multi-model ensemble. The models are given identical initial conditions and are integrated forward in time. A multi-model estimate can for example be a weighted mean of the ensemble members. We propose to go a step further and try to obtain synchronization between the imperfect models by connecting the multi-model ensemble and exchanging information. The combined multi-model ensemble is also known as a supermodel. The supermodel has learned from observations how to optimally exchange information between the ensemble members. In this study we focused on the density and formulation of the connections within the supermodel. The main question was whether we could obtain synchronization between two climate models when connecting only a subset of their state spaces. Limiting the connected subspace has two advantages: 1) it limits the transfer of data (bytes) between the ensemble members, which can be a limiting factor in large scale climate models, and 2) learning the optimal connection strategy from observations is easier. To answer the research question, we connected two identical quasi-geostrophic (QG) atmospheric models to each other, where the models have different initial conditions. The QG model is a qualitatively realistic simulation of the winter flow on the Northern hemisphere, has three layers and uses a spectral implementation. We connected the models in the original spherical harmonical state space, and in linear combinations of these spherical harmonics, i.e. Empirical Orthogonal Functions (EOFs).
We show that when connecting through spherical harmonics, we only need to connect 28% of
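The partial-connection idea can be illustrated on a much smaller chaotic system: two identical Lorenz-63 models started from different initial conditions, nudged toward each other through only one of their three state variables, still synchronize. Everything below is an assumed toy stand-in for the QG supermodel, not the study's configuration:

```python
import numpy as np

def lorenz_step(s, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # one explicit-Euler step of the Lorenz-63 equations
    x, y, z = s
    return s + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

dt, K = 0.005, 20.0                  # K: nudging (connection) strength
a = np.array([1.0, 1.0, 1.0])        # same model, different initial conditions
b = np.array([-5.0, 0.0, 20.0])
for _ in range(10_000):
    a_new = lorenz_step(a, dt)
    b_new = lorenz_step(b, dt)
    # connect only the x-components: a subset of the state space
    a_new[0] += dt * K * (b[0] - a[0])
    b_new[0] += dt * K * (a[0] - b[0])
    a, b = a_new, b_new

print(np.abs(a - b).max())           # near zero: the two runs have synchronized
```

Coupling through x alone suffices here because the remaining error dynamics are contracting, the same qualitative reason a supermodel can exchange only part of its state and still synchronize.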

  5. PAUL AND SOPHISTIC RHETORIC: A PERSPECTIVE ON HIS ...

    African Journals Online (AJOL)

    use of modern rhetorical theories but analyses the letter in terms of the clas… If a critical reader would have had the traditional anti-sophistic arsenal… pressions and that 'rhetoric' is mainly a matter of communicating these thoughts.

  6. Sophistication and Performance of Italian Agri‐food Exports

    Directory of Open Access Journals (Sweden)

    Anna Carbone

    2012-06-01

    Full Text Available Nonprice competition is increasingly important in world food markets. Recently, the expression ‘export sophistication’ has been introduced in the economic literature to refer to a wide set of attributes that increase product value. An index has been proposed to measure sophistication in an indirect way through the per capita GDP of exporting countries (Lall et al., 2006; Haussmann et al., 2007). The paper applies the sophistication measure to the Italian food export sector, moving from an analysis of trends and performance of Italian food exports. An original way to disentangle different components in the temporal variation of the sophistication index is also proposed. Results show that the sophistication index offers original insights on recent trends in world food exports and with respect to Italian core food exports.
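The Lall/Hausmann-style index can be sketched with a toy trade matrix (all figures invented): PRODY assigns each product the trade-weighted per-capita GDP of its exporters, and a country's export sophistication (EXPY) is then the export-share-weighted average of PRODY over its basket.

```python
import numpy as np

# toy data: export values (rows = countries, columns = products)
exports = np.array([[30.0, 10.0],
                    [ 5.0, 40.0],
                    [20.0, 20.0]])
gdp_pc = np.array([40_000.0, 10_000.0, 25_000.0])        # per-capita GDP of each exporter

shares = exports / exports.sum(axis=1, keepdims=True)    # product shares within each country
rca = shares / shares.sum(axis=0, keepdims=True)         # normalised across countries

# PRODY_j: income level associated with product j
prody = (rca * gdp_pc[:, None]).sum(axis=0)
# EXPY_c: sophistication of country c's export basket
expy = shares @ prody
print(prody, expy)
```

Here the product exported mostly by the rich country gets the higher PRODY, so the country specialised in it scores the higher EXPY, which is the sense in which the index proxies sophistication through exporters' income.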

  7. Obfuscation, Learning, and the Evolution of Investor Sophistication

    OpenAIRE

    Bruce Ian Carlin; Gustavo Manso

    2011-01-01

    Investor sophistication has lagged behind the growing complexity of retail financial markets. To explore this, we develop a dynamic model to study the interaction between obfuscation and investor sophistication in mutual fund markets. Taking into account different learning mechanisms within the investor population, we characterize the optimal timing of obfuscation for financial institutions who offer retail products. We show that educational initiatives that are directed to facilitate learnin...

  8. The conceptualization and measurement of cognitive health sophistication.

    Science.gov (United States)

    Bodie, Graham D; Collins, William B; Jensen, Jakob D; Davis, Lashara A; Guntzviller, Lisa M; King, Andy J

    2013-01-01

    This article develops a conceptualization and measure of cognitive health sophistication--the complexity of an individual's conceptual knowledge about health. Study 1 provides initial validity evidence for the measure--the Healthy-Unhealthy Other Instrument--by showing its association with other cognitive health constructs indicative of higher health sophistication. Study 2 presents data from a sample of low-income adults to provide evidence that the measure does not depend heavily on health-related vocabulary or ethnicity. Results from both studies suggest that the Healthy-Unhealthy Other Instrument can be used to capture variability in the sophistication or complexity of an individual's health-related schematic structures on the basis of responses to two simple open-ended questions. Methodological advantages of the Healthy-Unhealthy Other Instrument and suggestions for future research are highlighted in the discussion.

  9. Finding the Fabulous Few: Why Your Program Needs Sophisticated Research.

    Science.gov (United States)

    Pfizenmaier, Emily

    1981-01-01

    Fund raising, it is argued, needs sophisticated prospect research. Professional prospect researchers play an important role in helping to identify prospective donors and also in helping to stimulate interest in gift giving. A sample of an individual work-up on a donor and a bibliography are provided. (MLW)

  10. Procles the Carthaginian: A North African Sophist in Pausanias’ Periegesis

    Directory of Open Access Journals (Sweden)

    Juan Pablo Sánchez Hernández

    2010-11-01

    Full Text Available Procles, cited by Pausanias (in the imperfect tense) about a display in Rome and for an opinion about Pyrrhus of Epirus, probably was not a historian of Hellenistic date, but a contemporary sophist whom Pausanias encountered in person in Rome.

  11. SMEs and new ventures need business model sophistication

    DEFF Research Database (Denmark)

    Kesting, Peter; Günzel-Jensen, Franziska

    2015-01-01

    , and Spreadshirt, this article develops a framework that introduces five business model sophistication strategies: (1) uncover additional functions of your product, (2) identify strategic benefits for third parties, (3) take advantage of economies of scope, (4) utilize cross-selling opportunities, and (5) involve...

  12. Development Strategies for Tourism Destinations: Tourism Sophistication vs. Resource Investments

    OpenAIRE

    Rainer Andergassen; Guido Candela

    2010-01-01

    This paper investigates the effectiveness of development strategies for tourism destinations. We argue that resource investments unambiguously increase tourism revenues and that increasing the degree of tourism sophistication, that is increasing the variety of tourism related goods and services, increases tourism activity and decreases the perceived quality of the destination's resource endowment, leading to an ambiguous effect on tourism revenues. We disentangle these two effects and charact...

  13. Sophisticated Search Capabilities in the ADS Abstract Service

    Science.gov (United States)

    Eichhorn, G.; Accomazzi, A.; Grant, C. S.; Henneken, E.; Kurtz, M. J.; Murray, S. S.

    2003-12-01

    The ADS provides access to over 940,000 references from astronomy and planetary sciences publications and 1.5 million records from physics publications. It is funded by NASA and provides free access to these references, as well as to 2.4 million scanned pages from the astronomical literature. These include most of the major astronomy and several planetary sciences journals, as well as many historical observatory publications. The references now include the abstracts from all volumes of the Journal of Geophysical Research (JGR) since the beginning of 2002. We get these abstracts on a regular basis. The Kluwer journal Solar Physics has been scanned back to volume 1 and is available through the ADS. We have extracted the reference lists from this and many other journals and included them in the reference and citation database of the ADS. We have recently scanned Earth, Moon and Planets, another Kluwer journal, and will scan other Kluwer journals in the future as well. We plan to extract references from these journals in the near future. The ADS has many sophisticated query features. These allow the user to formulate complex queries. Using results lists to get further information about the selected articles provides the means to quickly find important and relevant articles from the database. Three advanced feedback queries are available from the bottom of the ADS results list (in addition to regular feedback queries already available from the abstract page and from the bottom of the results list): 1. Get reference list for selected articles: This query returns all known references for the selected articles (or for all articles in the first list). The resulting list will be ranked according to how often each article is referred to and will show the most referenced articles in the field of study that created the first list. It presumably shows the most important articles in that field. 2. 
Get citation list for selected articles: This returns all known articles

  14. Multi-objective optimization for generating a weighted multi-model ensemble

    Science.gov (United States)

    Lee, H.

    2017-12-01

    Many studies have demonstrated that multi-model ensembles generally show better skill than each ensemble member. When generating weighted multi-model ensembles, the first step is measuring the performance of individual model simulations using observations. There is a consensus on the assignment of weighting factors based on a single evaluation metric. When considering only one evaluation metric, the weighting factor for each model is proportional to a performance score or inversely proportional to an error for the model. While this conventional approach can provide appropriate combinations of multiple models, the approach confronts a big challenge when there are multiple metrics under consideration. When considering multiple evaluation metrics, it is obvious that a simple averaging of multiple performance scores or model ranks does not address the trade-off problem between conflicting metrics. So far, there seems to be no best method to generate weighted multi-model ensembles based on multiple performance metrics. The current study applies the multi-objective optimization, a mathematical process that provides a set of optimal trade-off solutions based on a range of evaluation metrics, to combining multiple performance metrics for the global climate models and their dynamically downscaled regional climate simulations over North America and generating a weighted multi-model ensemble. NASA satellite data and the Regional Climate Model Evaluation System (RCMES) software toolkit are used for assessment of the climate simulations. Overall, the performance of each model differs markedly with strong seasonal dependence. Because of the considerable variability across the climate simulations, it is important to evaluate models systematically and make future projections by assigning optimized weighting factors to the models with relatively good performance. 
Our results indicate that the optimally weighted multi-model ensemble always shows better performance than an arithmetic
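The conventional single-metric baseline that the abstract contrasts with multi-objective optimization can be sketched as follows. This is an illustration of inverse-error weighting only, not the study's actual optimization; the error values and model count are hypothetical.

```python
import numpy as np

def inverse_error_weights(errors):
    """Weight each model inversely to its error, normalized to sum to 1."""
    errors = np.asarray(errors, dtype=float)
    inv = 1.0 / errors
    return inv / inv.sum()

def weighted_ensemble(predictions, weights):
    """Combine model predictions (rows = models, columns = grid points)."""
    return np.tensordot(weights, np.asarray(predictions, dtype=float), axes=1)

# Hypothetical example: three models with errors 1.0, 2.0 and 4.0
w = inverse_error_weights([1.0, 2.0, 4.0])
ens = weighted_ensemble([[10.0, 12.0], [14.0, 16.0], [18.0, 20.0]], w)
```

Under a single metric the best model simply receives the largest weight; the trade-off problem only appears once several conflicting metrics must be honoured at the same time.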

  15. Merging information from multi-model flood projections in a hierarchical Bayesian framework

    Science.gov (United States)

    Le Vine, Nataliya

    2016-04-01

    Multi-model ensembles are becoming widely accepted for flood frequency change analysis. The use of multiple models results in large uncertainty around estimates of flood magnitudes, due to both uncertainty in model selection and natural variability of river flow. The challenge is therefore to extract the most meaningful signal from the multi-model predictions, accounting for both model quality and uncertainties in individual model estimates. The study demonstrates the potential of a recently proposed hierarchical Bayesian approach to combine information from multiple models. The approach facilitates explicit treatment of shared multi-model discrepancy as well as the probabilistic nature of the flood estimates, by treating the available models as a sample from a hypothetical complete (but unobserved) set of models. The advantages of the approach are: 1) to ensure adequate 'baseline' conditions with which to compare future changes; 2) to reduce flood estimate uncertainty; 3) to maximize use of statistical information in circumstances where multiple weak predictions individually lack power, but collectively provide meaningful information; 4) to adjust multi-model consistency criteria when model biases are large; and 5) to explicitly consider the influence of the (model performance) stationarity assumption. Moreover, the analysis indicates that reducing shared model discrepancy is the key to further reduction of uncertainty in the flood frequency analysis. The findings are of value regarding how conclusions about changing exposure to flooding are drawn, and to flood frequency change attribution studies.

  16. The New Toxicology of Sophisticated Materials: Nanotoxicology and Beyond

    Science.gov (United States)

    Maynard, Andrew D.; Warheit, David B.; Philbert, Martin A.

    2011-01-01

    It has long been recognized that the physical form of materials can mediate their toxicity—the health impacts of asbestiform materials, industrial aerosols, and ambient particulate matter are prime examples. Yet over the past 20 years, toxicology research has suggested complex and previously unrecognized associations between material physicochemistry at the nanoscale and biological interactions. With the rapid rise of the field of nanotechnology and the design and production of increasingly complex nanoscale materials, it has become ever more important to understand how the physical form and chemical composition of these materials interact synergistically to determine toxicity. As a result, a new field of research has emerged—nanotoxicology. Research within this field is highlighting the importance of material physicochemical properties in how dose is understood, how materials are characterized in a manner that enables quantitative data interpretation and comparison, and how materials move within, interact with, and are transformed by biological systems. Yet many of the substances that are the focus of current nanotoxicology studies are relatively simple materials that are at the vanguard of a new era of complex materials. Over the next 50 years, there will be a need to understand the toxicology of increasingly sophisticated materials that exhibit novel, dynamic and multifaceted functionality. If the toxicology community is to meet the challenge of ensuring the safe use of this new generation of substances, it will need to move beyond “nano” toxicology and toward a new toxicology of sophisticated materials. Here, we present a brief overview of the current state of the science on the toxicology of nanoscale materials and focus on three emerging toxicology-based challenges presented by sophisticated materials that will become increasingly important over the next 50 years: identifying relevant materials for study, physicochemical characterization, and

  17. Strategic sophistication of individuals and teams. Experimental evidence

    Science.gov (United States)

    Sutter, Matthias; Czermak, Simon; Feri, Francesco

    2013-01-01

    Many important decisions require strategic sophistication. We examine experimentally whether teams act more strategically than individuals. We let individuals and teams make choices in simple games, and also elicit first- and second-order beliefs. We find that teams play the Nash equilibrium strategy significantly more often, and their choices are more often a best response to stated first-order beliefs. Distributional preferences make equilibrium play less likely. Using a mixture model, the estimated probability of playing strategically is 62% for teams, but only 40% for individuals. A model of noisy introspection reveals that teams differ from individuals in higher-order beliefs. PMID:24926100

  18. Few remarks on chiral theories with sophisticated topology

    International Nuclear Information System (INIS)

    Golo, V.L.; Perelomov, A.M.

    1978-01-01

    Two classes of two-dimensional Euclidean chiral field theories are singled out: 1) the field phi(x) takes values in a compact Hermitian symmetric space; 2) the field phi(x) takes values in an orbit of the adjoint representation of a compact Lie group. The theories have sophisticated topological and rich analytical structures. They are considered with the help of topological invariants (topological charges). Explicit formulae for the topological charges are given, and a lower-bound estimate for the action is obtained.

  19. STOCK EXCHANGE LISTING INDUCES SOPHISTICATION OF CAPITAL BUDGETING

    Directory of Open Access Journals (Sweden)

    Wesley Mendes-da-Silva

    2014-08-01

    This article compares capital budgeting techniques employed in listed and unlisted companies in Brazil. We surveyed the Chief Financial Officers (CFOs) of 398 listed companies and 300 large unlisted companies, and based on 91 respondents, the results suggest that the CFOs of listed companies tend to use less simplistic methods (for example, NPV and CAPM) more often, and that CFOs of unlisted companies are less likely to estimate the cost of equity, despite managing large companies. These findings indicate that stock exchange listing may require greater sophistication in the capital budgeting process.

  20. A Multi-Model Approach for System Diagnosis

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad; Bækgaard, Mikkel Ask Buur

    2007-01-01

    A multi-model approach for system diagnosis is presented in this paper. The relation to fault diagnosis as well as performance validation is considered. The approach is based on testing a number of pre-described models and finding which one fits best. It is based on an active approach, i.e. an auxiliary input to the system is applied. The multi-model approach is applied to a wind turbine system.

  1. Sweat loss prediction using a multi-model approach.

    Science.gov (United States)

    Xu, Xiaojiang; Santee, William R

    2011-07-01

    A new multi-model approach (MMA) for sweat loss prediction is proposed to improve prediction accuracy. MMA was computed as the average of sweat loss predicted by two existing thermoregulation models: i.e., the rational model SCENARIO and the empirical model Heat Strain Decision Aid (HSDA). Three independent physiological datasets, a total of 44 trials, were used to compare predictions by MMA, SCENARIO, and HSDA. The observed sweat losses were collected under different combinations of uniform ensembles, environmental conditions (15-40°C, RH 25-75%), and exercise intensities (250-600 W). Root mean square deviation (RMSD), residual plots, and paired t tests were used to compare predictions with observations. Overall, MMA reduced RMSD by 30-39% in comparison with either SCENARIO or HSDA, and increased the prediction accuracy to 66% from 34% or 55%. Of the MMA predictions, 70% fell within the range of mean observed value ± SD, while only 43% of SCENARIO and 50% of HSDA predictions fell within the same range. Paired t tests showed that differences between observations and MMA predictions were not significant, but differences between observations and SCENARIO or HSDA predictions were significant for two datasets. Thus, MMA predicted sweat loss more accurately than either of the two single models for the three datasets used. Future work will evaluate MMA using additional physiological data to expand the scope of populations and conditions.
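The MMA described here is simply the mean of the two models' predictions, scored by RMSD against observations. A minimal sketch of that combination (the observation and prediction values below are made up; in practice the two prediction vectors would come from SCENARIO and HSDA):

```python
import numpy as np

def multi_model_average(pred_a, pred_b):
    """MMA as described in the abstract: the plain average of two models."""
    return (np.asarray(pred_a, float) + np.asarray(pred_b, float)) / 2.0

def rmsd(pred, obs):
    """Root mean square deviation between predictions and observations."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

obs = [500.0, 600.0, 550.0]        # hypothetical observed sweat losses (g)
scenario = [550.0, 640.0, 600.0]   # hypothetical SCENARIO over-predictions
hsda = [430.0, 560.0, 520.0]       # hypothetical HSDA under-predictions
mma = multi_model_average(scenario, hsda)
```

When one model over-predicts and the other under-predicts, as in this toy example, the errors partially cancel and the averaged prediction lands closer to the observations than either member.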

  2. The sophisticated control of the tram bogie on track

    Directory of Open Access Journals (Sweden)

    Radovan DOLECEK

    2015-09-01

    The paper deals with routing control algorithms for a new conception of tram vehicle bogie. The main goals of these research activities are to reduce wear of rail wheels and tracks, to reduce traction energy losses, and to improve running comfort. An experimental tram vehicle with a special bogie construction, powered by a traction battery, is used for these purposes. This vehicle has a rotary bogie with independently rotating wheels driven by permanent-magnet synchronous motors and a solid axle. The wheel forces in the bogie are measured by a large number of sensors placed on the experimental tram vehicle. The designed control algorithms are now implemented in the vehicle's supervisory control system. Traction requirements and track characteristics influence these control algorithms. This control, including sophisticated routing, brings further improvements, which are verified and corrected according to individual traction and driving characteristics, and opens new possibilities.

  3. Sophisticated Approval Voting, Ignorance Priors, and Plurality Heuristics: A Behavioral Social Choice Analysis in a Thurstonian Framework

    Science.gov (United States)

    Regenwetter, Michel; Ho, Moon-Ho R.; Tsetlin, Ilia

    2007-01-01

    This project reconciles historically distinct paradigms at the interface between individual and social choice theory, as well as between rational and behavioral decision theory. The authors combine a utility-maximizing prescriptive rule for sophisticated approval voting with the ignorance prior heuristic from behavioral decision research and two…

  4. A multi-model ensemble approach to seabed mapping

    Science.gov (United States)

    Diesing, Markus; Stephens, David

    2015-06-01

    Seabed habitat mapping based on swath acoustic data and ground-truth samples is an emergent and active marine science discipline. Significant progress could be achieved by transferring techniques and approaches that have been successfully developed and employed in such fields as terrestrial land cover mapping. One such promising approach is the multiple classifier system, which aims at improving classification performance by combining the outputs of several classifiers. Here we present results of a multi-model ensemble applied to multibeam acoustic data covering more than 5000 km² of seabed in the North Sea, with the aim of deriving accurate spatial predictions of seabed substrate. A suite of six machine learning classifiers (k-Nearest Neighbour, Support Vector Machine, Classification Tree, Random Forest, Neural Network and Naïve Bayes) was trained with ground-truth sample data classified into seabed substrate classes, and their prediction accuracy was assessed with an independent set of samples. The three and five best performing models were combined into classifier ensembles. Both ensembles led to increased prediction accuracy as compared to the best performing single classifier. The improvements were however not statistically significant at the 5% level. Although the three-model ensemble did not perform significantly better than its individual component models, we noticed that the five-model ensemble did perform significantly better than three of the five component models. A classifier ensemble might therefore be an effective strategy to improve classification performance. Another advantage is the fact that the agreement in predicted substrate class between the individual models of the ensemble could be used as a measure of confidence. We propose a simple and spatially explicit measure of confidence that is based on model agreement and prediction accuracy.
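The core mechanism, combining class predictions by majority vote and using inter-model agreement as a confidence measure, can be sketched as follows. This is a generic illustration; the classifier outputs and substrate labels are invented, and the paper's exact confidence measure also folds in prediction accuracy.

```python
import numpy as np
from collections import Counter

def ensemble_vote(class_predictions):
    """Majority vote across classifiers, with the agreement fraction
    (votes for the winning class / number of models) as confidence.

    `class_predictions` has shape (n_models, n_samples)."""
    preds = np.asarray(class_predictions)
    labels, confidence = [], []
    for col in preds.T:                       # one column per sample
        cls, count = Counter(col).most_common(1)[0]
        labels.append(str(cls))
        confidence.append(count / preds.shape[0])
    return labels, confidence

# Three hypothetical classifiers, two map cells
labels, confidence = ensemble_vote(
    [["sand", "mud"], ["sand", "sand"], ["gravel", "sand"]])
```

Cells where the models disagree receive a lower confidence value, which can then be mapped spatially alongside the predicted substrate class.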

  5. Robust multi-model predictive control of multi-zone thermal plate system

    Directory of Open Access Journals (Sweden)

    Poom Jatunitanon

    2018-02-01

    A modern controller was designed using the mathematical model of a multi-zone thermal plate system. An important requirement for this type of controller is that it must be able to hold the temperature set-point of each thermal zone. The mathematical model used in the design was determined through a system identification process. The results showed that when the operating condition changes, the performance of the controller may degrade as a result of system parameter uncertainties. This paper proposes a weighting technique that combines the robust model predictive controllers for each operating condition into a single robust multi-model predictive controller. Simulation and experimental results showed that the proposed method performed better than conventional multi-model predictive control in the rise time of the transient response, when used in a system designed to work over a wide range of operating conditions.
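The weighting idea, blending the outputs of per-condition controllers into one control signal according to the current operating point, can be sketched roughly as below. The Gaussian weighting function and all numerical values are assumptions for illustration; the paper's specific weighting scheme is not reproduced here.

```python
import numpy as np

def blend_controls(operating_point, nominal_points, controller_outputs, width=1.0):
    """Blend per-condition controller outputs into one multi-model signal.

    Each sub-controller is weighted by a Gaussian of the distance between
    the current operating point and that controller's nominal condition."""
    d = np.asarray(nominal_points, float) - float(operating_point)
    w = np.exp(-0.5 * (d / width) ** 2)
    w /= w.sum()                      # normalize weights to sum to 1
    return float(w @ np.asarray(controller_outputs, float))

# Two hypothetical sub-controllers tuned at 50 °C and 100 °C set-points
u_mid = blend_controls(75.0, [50.0, 100.0], [1.0, 3.0], width=10.0)
```

Near a nominal condition the matching sub-controller dominates; between conditions the blend interpolates smoothly, which is what avoids the performance drop of switching a single fixed controller.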

  6. Systematization and sophistication of a comprehensive sensitivity analysis program. Phase 2

    International Nuclear Information System (INIS)

    Oyamada, Kiyoshi; Ikeda, Takao

    2004-02-01

    This study developed detailed estimates by applying a comprehensive sensitivity analysis program to the reliability of TRU waste repository concepts in a crystalline rock setting. We examined each component and groundwater scenario of the geological repository and prepared systematic bases to examine reliability from the standpoint of comprehensiveness. Models and data were refined to examine reliability. Based on an existing TRU waste repository concept, the effects of parameters on nuclide migration were quantitatively classified. These quantitatively determined parameters include the site characteristics of the natural barrier and the design specifications of the engineered barriers. Considering the feasibility of these specifications, reliability is re-examined for combinations of these parameters within a practical range. Future issues are: comprehensive representation of a hybrid geosphere model including fractured and permeable matrix media; refinement of tools to develop reliable combinations of parameters. It is important to continue this study because the disposal concepts and specifications for TRU-nuclide-containing waste at various sites should be determined rationally and safely through such studies. (author)

  7. Sophisticated Communication in the Brazilian Torrent Frog Hylodes japi.

    Science.gov (United States)

    de Sá, Fábio P; Zina, Juliana; Haddad, Célio F B

    2016-01-01

    Intraspecific communication in frogs plays an important role in the recognition of conspecifics in general and of potential rivals or mates in particular and therefore with relevant consequences for pre-zygotic reproductive isolation. We investigate intraspecific communication in Hylodes japi, an endemic Brazilian torrent frog with territorial males and an elaborate courtship behavior. We describe its repertoire of acoustic signals as well as one of the most complex repertoires of visual displays known in anurans, including five new visual displays. Previously unknown in frogs, we also describe a bimodal inter-sexual communication system where the female stimulates the male to emit a courtship call. As another novelty for frogs, we show that in addition to choosing which limb to signal with, males choose which of their two vocal sacs will be used for visual signaling. We explain how and why this is accomplished. Control of inflation also provides additional evidence that vocal sac movement and color must be important for visual communication, even while producing sound. Through the current knowledge on visual signaling in Neotropical torrent frogs (i.e. hylodids), we discuss and highlight the behavioral diversity in the family Hylodidae. Our findings indicate that communication in species of Hylodes is undoubtedly more sophisticated than we expected and that visual communication in anurans is more widespread than previously thought. This is especially true in tropical regions, most likely due to the higher number of species and phylogenetic groups and/or to ecological factors, such as higher microhabitat diversity.

  8. Sophisticated Communication in the Brazilian Torrent Frog Hylodes japi.

    Directory of Open Access Journals (Sweden)

    Fábio P de Sá

    Intraspecific communication in frogs plays an important role in the recognition of conspecifics in general and of potential rivals or mates in particular and therefore with relevant consequences for pre-zygotic reproductive isolation. We investigate intraspecific communication in Hylodes japi, an endemic Brazilian torrent frog with territorial males and an elaborate courtship behavior. We describe its repertoire of acoustic signals as well as one of the most complex repertoires of visual displays known in anurans, including five new visual displays. Previously unknown in frogs, we also describe a bimodal inter-sexual communication system where the female stimulates the male to emit a courtship call. As another novelty for frogs, we show that in addition to choosing which limb to signal with, males choose which of their two vocal sacs will be used for visual signaling. We explain how and why this is accomplished. Control of inflation also provides additional evidence that vocal sac movement and color must be important for visual communication, even while producing sound. Through the current knowledge on visual signaling in Neotropical torrent frogs (i.e., hylodids), we discuss and highlight the behavioral diversity in the family Hylodidae. Our findings indicate that communication in species of Hylodes is undoubtedly more sophisticated than we expected and that visual communication in anurans is more widespread than previously thought. This is especially true in tropical regions, most likely due to the higher number of species and phylogenetic groups and/or to ecological factors, such as higher microhabitat diversity.

  9. Library of sophisticated functions for analysis of nuclear spectra

    Science.gov (United States)

    Morháč, Miroslav; Matoušek, Vladislav

    2009-10-01

    In this paper we present a compact library for the analysis of nuclear spectra. The library consists of sophisticated functions for background elimination, smoothing, peak searching, deconvolution, and peak fitting. The functions can process one- and two-dimensional spectra. The software described in the paper comprises a number of conventional as well as newly developed methods needed to analyze experimental data.
    Program summary
    Program title: SpecAnalysLib 1.1
    Catalogue identifier: AEDZ_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDZ_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 42 154
    No. of bytes in distributed program, including test data, etc.: 2 379 437
    Distribution format: tar.gz
    Programming language: C++
    Computer: Pentium 3 PC 2.4 GHz or higher, Borland C++ Builder v. 6. A precompiled Windows version is included in the distribution package
    Operating system: Windows 32 bit versions
    RAM: 10 MB
    Word size: 32 bits
    Classification: 17.6
    Nature of problem: The demand for advanced, highly effective experimental data analysis functions is enormous. The library package represents one approach to giving physicists the possibility to use the advanced routines simply by calling them from their own programs. SpecAnalysLib is a collection of functions for the analysis of one- and two-parameter γ-ray spectra, but they can be used for other types of data as well. The library consists of sophisticated functions for background elimination, smoothing, peak searching, deconvolution, and peak fitting.
    Solution method: The algorithms of background estimation are based on the Sensitive Non-linear Iterative Peak (SNIP) clipping algorithm. The smoothing algorithms are based on the convolution of the original data with several types of filters and algorithms based on discrete

  10. Impact of sophisticated fog spray models on accident analyses

    International Nuclear Information System (INIS)

    Roblyer, S.P.; Owzarski, P.C.

    1978-01-01

    The N-Reactor confinement system release dose to the public in a postulated accident is reduced by washing the confinement atmosphere with fog sprays. This allows a low pressure release of confinement atmosphere containing fission products through filters and out an elevated stack. The current accident analysis required revision of the CORRAL code and other codes such as CONTEMPT to properly model the N Reactor confinement as a system of multiple fog-sprayed compartments. In revising these codes, more sophisticated models for the fog sprays and iodine plateout were incorporated to remove some of the conservatism in the steam condensation rate, fission product washout, and iodine plateout assumed in previous studies. The CORRAL code, which was used to describe the transport and deposition of airborne fission products in LWR containment systems for the Rasmussen Study, was revised to describe fog spray removal of molecular iodine (I 2 ) and particulates in multiple compartments for sprays having individual characteristics of on-off times, flow rates, fall heights, and drop sizes in changing containment atmospheres. During postulated accidents, the code determined the fission product removal rates internally rather than from input decontamination factors. A discussion is given of how the calculated plateout and washout rates vary with time throughout the analysis. The results of the accident analyses indicated that more credit could be given to fission product washout and plateout. An important finding was that the release of fission products to the atmosphere and adsorption of fission products on the filters were significantly lower than previous studies had indicated.

  11. Multi-model ensembles for assessment of flood losses and associated uncertainty

    Science.gov (United States)

    Figueiredo, Rui; Schröter, Kai; Weiss-Motz, Alexander; Martina, Mario L. V.; Kreibich, Heidi

    2018-05-01

    Flood loss modelling is a crucial part of risk assessments. However, it is subject to large uncertainty that is often neglected. Most models available in the literature are deterministic, providing only single point estimates of flood loss, and large disparities tend to exist among them. Adopting any one such model in a risk assessment context is likely to lead to inaccurate loss estimates and sub-optimal decision-making. In this paper, we propose the use of multi-model ensembles to address these issues. This approach, which has been applied successfully in other scientific fields, is based on the combination of different model outputs with the aim of improving the skill and usefulness of predictions. We first propose a model rating framework to support ensemble construction, based on a probability tree of model properties, which establishes relative degrees of belief between candidate models. Using 20 flood loss models in two test cases, we then construct numerous multi-model ensembles, based both on the rating framework and on a stochastic method, differing in terms of participating members, ensemble size and model weights. We evaluate the performance of ensemble means, as well as their probabilistic skill and reliability. Our results demonstrate that well-designed multi-model ensembles represent a pragmatic approach to consistently obtain more accurate flood loss estimates and reliable probability distributions of model uncertainty.
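The basic construction, a weighted combination of member loss estimates whose weights express relative degrees of belief in each model, can be sketched as below. This is an illustration of the general ensemble-mean idea only; the paper's rating framework and stochastic ensemble construction are more elaborate, and all numbers here are hypothetical.

```python
import numpy as np

def ensemble_loss(member_losses, weights=None):
    """Weighted ensemble mean of flood-loss estimates from several models,
    plus the weighted spread across members as a simple uncertainty proxy.

    `member_losses` has shape (n_models, n_assets); `weights` are degrees
    of belief (equal weights if None)."""
    losses = np.asarray(member_losses, float)
    n = losses.shape[0]
    w = np.full(n, 1.0 / n) if weights is None else np.asarray(weights, float) / np.sum(weights)
    mean = w @ losses
    spread = np.sqrt(np.average((losses - mean) ** 2, axis=0, weights=w))
    return mean, spread

# Three hypothetical loss models, two affected buildings (losses in k EUR)
mean_loss, spread = ensemble_loss([[100.0, 200.0], [140.0, 260.0], [120.0, 230.0]])
```

Reporting the spread alongside the mean is one pragmatic way to carry model uncertainty into the risk assessment instead of adopting a single deterministic loss model.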

  12. RSYST: From nuclear reactor calculations towards a highly sophisticated scientific software integration environment

    International Nuclear Information System (INIS)

    Noack, M.; Seybold, J.; Ruehle, R.

    1996-01-01

    The software environment RSYST was originally used to solve problems of reactor physics. The consideration of advanced scientific simulation requirements and the strict application of modern software design principles led to a system which is perfectly suitable for solving problems in various complex scientific problem domains. Starting with a review of the early days of RSYST, we describe its straight evolution, driven by the need for a software environment which combines the advantages of a high-performance database system with the capability to integrate sophisticated scientific technical applications. The RSYST architecture is presented and the data modelling capabilities are described. To demonstrate the powerful possibilities and flexibility of the RSYST environment, we describe a wide range of RSYST applications, e.g., mechanical simulations of multibody systems, which are used in biomechanical research, civil engineering and robotics. In addition, a hypermedia system which is used for scientific technical training and documentation is presented. (orig.) [de

  13. Projected changes in precipitation intensity and frequency over complex topography: a multi-model perspective

    Science.gov (United States)

    Fischer, Andreas; Keller, Denise; Liniger, Mark; Rajczak, Jan; Schär, Christoph; Appenzeller, Christof

    2014-05-01

    Fundamental changes in the hydrological cycle are expected in a future warmer climate. This is of particular relevance for the Alpine region, as a source and reservoir of several major rivers in Europe and an area prone to extreme events such as flooding. For this region, climate change assessments based on the ENSEMBLES regional climate models (RCMs) project a significant decrease in summer mean precipitation under the A1B emission scenario by the mid-to-end of this century, while winter mean precipitation is expected to rise slightly. From an impact perspective, however, projected changes in seasonal means are often insufficient to adequately address the multifaceted challenges of climate change adaptation. In this study, we revisit the full matrix of the ENSEMBLES RCM projections regarding changes in frequency and intensity, precipitation type (convective versus stratiform) and temporal structure (wet/dry spells and transition probabilities) over Switzerland and surroundings. As proxies for rain-type changes, we rely on the models' parameterized convective and large-scale precipitation components. Part of the analysis involves a Bayesian multi-model combination algorithm to infer changes from the multi-model ensemble. The analysis suggests a summer drying that is altitude-specific: over low-land regions it is associated with wet-day frequency decreases of convective and large-scale precipitation, while over elevated regions it is primarily associated with a decline in large-scale precipitation only. As a consequence, almost all the models project an increase in the convective fraction at elevated Alpine altitudes. The decrease in the number of wet days during summer is accompanied by decreases (increases) in multi-day wet (dry) spells. This shift in multi-day episodes also lowers the likelihood of short dry spell occurrence in all of the models. 
For spring and autumn the combined multi-model projections indicate higher mean precipitation intensity north of the

  14. Multi-model approach to characterize human handwriting motion.

    Science.gov (United States)

    Chihi, I; Abdelkrim, A; Benrejeb, M

    2016-02-01

    This paper deals with characterization and modelling of human handwriting motion from two forearm muscle activity signals, called electromyography signals (EMG). In this work, an experimental approach was used to record the coordinates of a pen tip moving on the (x, y) plane and EMG signals during the handwriting act. The main purpose is to design a new mathematical model which characterizes this biological process. Based on a multi-model approach, this system was originally developed to generate letters and geometric forms written by different writers. A Recursive Least Squares algorithm is used to estimate the parameters of each sub-model of the multi-model basis. Simulations show good agreement between predicted results and the recorded data.
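The Recursive Least Squares algorithm named in the abstract updates each sub-model's parameters one sample at a time. A generic RLS step is sketched below; the scalar-regressor example is an assumption for illustration and does not reproduce the paper's EMG-driven sub-model structure.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive-least-squares step for the model y = phi . theta.

    theta: parameter estimate (n,), P: covariance (n, n),
    phi: regressor (n,), y: new measurement, lam: forgetting factor."""
    phi = np.asarray(phi, float).reshape(-1, 1)
    K = P @ phi / (lam + phi.T @ P @ phi)            # gain vector
    theta = theta + (K * (y - float(phi.T @ theta))).ravel()
    P = (P - K @ phi.T @ P) / lam                    # covariance update
    return theta, P

# Toy identification of y = 2 * x from streaming samples
theta = np.zeros(1)
P = np.eye(1) * 1000.0                               # large initial uncertainty
for x, y in [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]:
    theta, P = rls_update(theta, P, np.array([x]), y)
```

Each new (regressor, measurement) pair refines the estimate without refitting the whole dataset, which is what makes RLS suitable for estimating the parameters of each sub-model as handwriting data arrive.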

  15. Multi-model-based Access Control in Construction Projects

    Directory of Open Access Journals (Sweden)

    Frank Hilbert

    2012-04-01

    During the execution of large-scale construction projects performed by Virtual Organizations (VO), relatively complex technical models have to be exchanged between the VO members. To link the trade and transfer of these models, a so-called multi-model container format was developed. Considering the different skills and tasks of the partners involved, it is not necessary for them to know all the models in every technical detail. Furthermore, the model size can lead to delays in communication. In this paper an approach is presented for defining model cut-outs according to the current project context. Dynamic dependencies on the project context as well as static dependencies on the organizational structure are mapped in a context-sensitive rule. As a result, an approach for dynamic filtering of multi-models is obtained which ensures, together with a filtering service, that the involved VO members get a simplified view of complex multi-models as well as sufficient permissions depending on their tasks.

  16. Sophistication and integration of plant engineering CAD-CAE systems

    International Nuclear Information System (INIS)

    Yoshinaga, Toshiaki; Hanyu, Masaharu; Ota, Yoshimi; Kobayashi, Yasuhiro.

    1995-01-01

    In the departments responsible for basic planning, design, manufacture, inspection, and construction of nuclear power plants, work has been made more efficient through active use of CAD/CAE systems. An integrated plant CAE system has now been developed that enhances the functions of these individual systems and makes work more efficient and advanced by unifying and integrating them. This system consists of a newly developed application system and a database system that enables unified management of engineering data and high-speed data conversion, in addition to the CAD system for three-dimensional plant layout planning. Based on extensive experience with applying the three-dimensional plant layout CAD system to actual plants, and on improvement proposals from designers, automation, faster processing, and visualization of input and output through a graphical user interface (GUI) were realized for the respective applications. As advancements of the plant CAE system, a scenic engineering system, an integrated layout CAE system, an electric instrumentation design CAE system, and a construction planning CAE system are described. Regarding the integration of plant CAE systems, the integrated engineering database, the combination of plant CAE systems, and operation management in a distributed network environment are reported. At present, Hitachi Ltd. is working on the construction of an integrated nuclear-product information management system as the second stage of integration. (K.I.)

  17. Lexical Sophistication as a Multidimensional Phenomenon: Relations to Second Language Lexical Proficiency, Development, and Writing Quality

    Science.gov (United States)

    Kim, Minkyung; Crossley, Scott A.; Kyle, Kristopher

    2018-01-01

    This study conceptualizes lexical sophistication as a multidimensional phenomenon by reducing numerous lexical features into 12 aggregated components (i.e., dimensions) via a principal component analysis approach. These components were then used to predict second language (L2) writing proficiency levels, holistic lexical…

  18. The predictors of economic sophistication: media, interpersonal communication and negative economic experiences

    NARCIS (Netherlands)

    Kalogeropoulos, A.; Albæk, E.; de Vreese, C.H.; van Dalen, A.

    2015-01-01

    In analogy to political sophistication, it is imperative that citizens have a certain level of economic sophistication, especially in times of heated debates about the economy. This study examines the impact of different influences (media, interpersonal communication and personal experiences) on

  19. Isocratean Discourse Theory and Neo-Sophistic Pedagogy: Implications for the Composition Classroom.

    Science.gov (United States)

    Blair, Kristine L.

    With the recent interest in the fifth century B.C. theories of Protagoras and Gorgias come assumptions about the philosophical affinity of the Greek educator Isocrates to this pair of older sophists. Isocratean education in discourse, with its emphasis on collaborative political discourse, falls within recent definitions of a sophist curriculum.…

  20. Aristotle and Social-Epistemic Rhetoric: The Systematizing of the Sophistic Legacy.

    Science.gov (United States)

    Allen, James E.

    While Aristotle's philosophical views are more foundational than those of many of the Older Sophists, Aristotle's rhetorical theories inherit and incorporate many of the central tenets ascribed to Sophistic rhetoric, albeit in a more systematic fashion, as represented in the "Rhetoric." However, Aristotle was more than just a rhetorical…

  1. Refining multi-model projections of temperature extremes by evaluation against land-atmosphere coupling diagnostics

    Science.gov (United States)

    Sippel, Sebastian; Zscheischler, Jakob; Mahecha, Miguel D.; Orth, Rene; Reichstein, Markus; Vogel, Martha; Seneviratne, Sonia I.

    2017-05-01

    The Earth's land surface and the atmosphere are strongly interlinked through the exchange of energy and matter. This coupled behaviour causes various land-atmosphere feedbacks, and an insufficient understanding of these feedbacks contributes to uncertain global climate model projections. For example, a crucial role of the land surface in exacerbating summer heat waves in midlatitude regions has been identified empirically for high-impact heat waves, but individual climate models differ widely in their respective representation of land-atmosphere coupling. Here, we compile an ensemble of 54 combinations of observations-based temperature (T) and evapotranspiration (ET) benchmarking datasets and investigate coincidences of T anomalies with ET anomalies as a proxy for land-atmosphere interactions during periods of anomalously warm temperatures. First, we demonstrate that a large fraction of state-of-the-art climate models from the Coupled Model Intercomparison Project (CMIP5) archive produces systematically too frequent coincidences of high T anomalies with negative ET anomalies in midlatitude regions during the warm season and in several tropical regions year-round. These coincidences (high T, low ET) are closely related to the representation of temperature variability and extremes across the multi-model ensemble. Second, we derive a land-coupling constraint based on the spread of the T-ET datasets and consequently retain only a subset of CMIP5 models that produce a land-coupling behaviour that is compatible with these benchmark estimates. The constrained multi-model simulations exhibit more realistic temperature extremes of reduced magnitude in present climate in regions where models show substantial spread in T-ET coupling, i.e. biases in the model ensemble are consistently reduced. Also the multi-model simulations for the coming decades display decreased absolute temperature extremes in the constrained ensemble. On the other hand, the differences between projected

  2. Visualizing projected Climate Changes - the CMIP5 Multi-Model Ensemble

    Science.gov (United States)

    Böttinger, Michael; Eyring, Veronika; Lauer, Axel; Meier-Fleischer, Karin

    2017-04-01

    Large ensembles add an additional dimension to climate model simulations. Internal variability of the climate system can be assessed for example by multiple climate model simulations with small variations in the initial conditions or by analyzing the spread in large ensembles made by multiple climate models under common protocols. This spread is often used as a measure of uncertainty in climate projections. In the context of the fifth phase of the WCRP's Coupled Model Intercomparison Project (CMIP5), more than 40 different coupled climate models were employed to carry out a coordinated set of experiments. Time series of the development of integral quantities such as the global mean temperature change for all models visualize the spread in the multi-model ensemble. A similar approach can be applied to 2D-visualizations of projected climate changes such as latitude-longitude maps showing the multi-model mean of the ensemble by adding a graphical representation of the uncertainty information. This has been demonstrated for example with static figures in chapter 12 of the last IPCC report (AR5) using different so-called stippling and hatching techniques. In this work, we focus on animated visualizations of multi-model ensemble climate projections carried out within CMIP5 as a way of communicating climate change results to the scientific community as well as to the public. We take a closer look at measures of robustness or uncertainty used in recent publications suitable for animated visualizations. Specifically, we use the ESMValTool [1] to process and prepare the CMIP5 multi-model data in combination with standard visualization tools such as NCL and the commercial 3D visualization software Avizo to create the animations. We compare different visualization techniques such as height fields or shading with transparency for creating animated visualization of ensemble mean changes in temperature and precipitation including corresponding robustness measures. 
[1] Eyring, V

  3. Mass discharge estimation from contaminated sites: Multi-model solutions for assessment of conceptual uncertainty

    Science.gov (United States)

    Thomsen, N. I.; Troldborg, M.; McKnight, U. S.; Binning, P. J.; Bjerg, P. L.

    2012-04-01

    the parametric uncertainty. To quantify the conceptual uncertainty for a given site, we combine the outputs from the different conceptual models using Bayesian model averaging. The weight for each model is obtained by integrating available data and expert knowledge using Bayesian belief networks. The multi-model approach is applied to a contaminated site. At the site, a DNAPL (dense non-aqueous phase liquid) spill consisting of PCE (perchloroethylene) has contaminated a fractured clay till aquitard overlying a limestone aquifer. The exact shape and nature of the source are unknown, as is the importance of transport in the fractures. The result of the multi-model approach is a visual representation of the uncertainty of the mass discharge estimates for the site, which can be used to support the management options.
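
The combination step described here can be sketched as a minimal Bayesian model averaging of per-model mass discharge estimates. The means, variances and weights below are invented for illustration; in the paper the weights come from Bayesian belief networks integrating site data and expert knowledge:

```python
def bma_combine(means, variances, weights):
    """BMA mean and total variance of estimates from several models."""
    mean = sum(w * m for w, m in zip(weights, means))
    # total variance = weighted within-model variance + between-model spread
    var = sum(w * (v + (m - mean) ** 2)
              for w, m, v in zip(weights, means, variances))
    return mean, var

# mass discharge estimates (invented, g/day) from three conceptual models,
# with weights standing in for the Bayesian-belief-network output
mean, var = bma_combine(means=[120.0, 80.0, 200.0],
                        variances=[400.0, 250.0, 900.0],
                        weights=[0.5, 0.3, 0.2])
```

The between-model term is what captures the conceptual uncertainty: even if each model were parametrically certain, disagreement between their means would keep the combined variance large.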

  4. Refining multi-model projections of temperature extremes by evaluation against land–atmosphere coupling diagnostics

    Directory of Open Access Journals (Sweden)

    S. Sippel

    2017-05-01

    Full Text Available The Earth's land surface and the atmosphere are strongly interlinked through the exchange of energy and matter. This coupled behaviour causes various land–atmosphere feedbacks, and an insufficient understanding of these feedbacks contributes to uncertain global climate model projections. For example, a crucial role of the land surface in exacerbating summer heat waves in midlatitude regions has been identified empirically for high-impact heat waves, but individual climate models differ widely in their respective representation of land–atmosphere coupling. Here, we compile an ensemble of 54 combinations of observations-based temperature (T) and evapotranspiration (ET) benchmarking datasets and investigate coincidences of T anomalies with ET anomalies as a proxy for land–atmosphere interactions during periods of anomalously warm temperatures. First, we demonstrate that a large fraction of state-of-the-art climate models from the Coupled Model Intercomparison Project (CMIP5) archive produces systematically too frequent coincidences of high T anomalies with negative ET anomalies in midlatitude regions during the warm season and in several tropical regions year-round. These coincidences (high T, low ET) are closely related to the representation of temperature variability and extremes across the multi-model ensemble. Second, we derive a land-coupling constraint based on the spread of the T–ET datasets and consequently retain only a subset of CMIP5 models that produce a land-coupling behaviour that is compatible with these benchmark estimates. The constrained multi-model simulations exhibit more realistic temperature extremes of reduced magnitude in present climate in regions where models show substantial spread in T–ET coupling, i.e. biases in the model ensemble are consistently reduced. Also the multi-model simulations for the coming decades display decreased absolute temperature extremes in the constrained ensemble. On the other hand

  5. The Impact of Services on Economic Complexity: Service Sophistication as Route for Economic Growth.

    Science.gov (United States)

    Stojkoski, Viktor; Utkovski, Zoran; Kocarev, Ljupco

    2016-01-01

    Economic complexity reflects the amount of knowledge that is embedded in the productive structure of an economy. By combining tools from network science and econometrics, a robust and stable relationship between a country's productive structure and its economic growth has been established. Here we report that not only goods but also services are important for predicting the rate at which countries will grow. By adopting a terminology which classifies manufactured goods and delivered services as products, we investigate the influence of services on the country's productive structure. In particular, we provide evidence that complexity indices for services are in general higher than those for goods, which is reflected in a general tendency to rank countries with developed service sector higher than countries with economy centred on manufacturing of goods. By focusing on country dynamics based on experimental data, we investigate the impact of services on the economic complexity of countries measured in the product space (consisting of both goods and services). Importantly, we show that diversification of service exports and its sophistication can provide an additional route for economic growth in both developing and developed countries.

  6. Time-dependent evolution of rock slopes by a multi-modelling approach

    Science.gov (United States)

    Bozzano, F.; Della Seta, M.; Martino, S.

    2016-06-01

    This paper presents a multi-modelling approach that incorporates contributions from morpho-evolutionary modelling, detailed engineering-geological modelling and time-dependent stress-strain numerical modelling to analyse the rheological evolution of a river valley slope over approximately 10² kyr. The slope is located in a transient, tectonically active landscape in southwestern Tyrrhenian Calabria (Italy), where gravitational processes drive failures in rock slopes. Constraints on the valley profile development were provided by a morpho-evolutionary model based on the correlation of marine and river strath terraces. Rock mass classes were identified through geomechanical parameters that were derived from engineering-geological surveys and outputs of a multi-sensor slope monitoring system. The rock mass classes were associated to lithotechnical units to obtain a high-resolution engineering-geological model along a cross section of the valley. Time-dependent stress-strain numerical modelling reproduced the main morpho-evolutionary stages of the valley slopes. The findings demonstrate that a complex combination of eustatism, uplift and Mass Rock Creep (MRC) deformations can lead to first-time failures of rock slopes when unstable conditions are encountered up to the generation of stress-controlled shear zones. The multi-modelling approach enabled us to determine that such complex combinations may have been sufficient for the first-time failure of the S. Giovanni slope at approximately 140 ka (MIS 7), even without invoking any trigger. Conversely, further reactivations of the landslide must be related to triggers such as earthquakes, rainfall and anthropogenic activities. This failure involved a portion of the slope where a plasticity zone resulted from mass rock creep that evolved with a maximum strain rate of 40% per thousand years, after the formation of a river strath terrace. This study demonstrates that the multi-modelling approach presented herein is a useful

  7. Credible baseline analysis for multi-model public policy studies

    Energy Technology Data Exchange (ETDEWEB)

    Parikh, S.C.; Gass, S.I.

    1981-01-01

    The nature of public decision-making and resource allocation is such that many complex interactions can best be examined and understood by quantitative analysis. Most organizations do not possess the totality of models and needed analytical skills to perform detailed and systematic quantitative analysis. Hence, the need for coordinated, multi-organization studies that support public decision-making has grown in recent years. This trend is expected not only to continue, but to increase. This paper describes the authors' views on the process of multi-model analysis based on their participation in an analytical exercise, the ORNL/MITRE Study. One of the authors was the exercise coordinator. During the study, the authors were concerned with the issue of measuring and conveying credibility of the analysis. This work led them to identify several key determinants, described in this paper, that could be used to develop a rating of credibility.

  8. Multi-model predictive control method for nuclear steam generator water level

    International Nuclear Information System (INIS)

    Hu Ke; Yuan Jingqi

    2008-01-01

    The dynamics of a nuclear steam generator (SG) vary greatly with power level and change over time. Therefore, it is an intractable and challenging task to improve the water level control system of the SG. In this paper, a robust model predictive control (RMPC) method is developed for the level control problem. Based on a multi-model framework, a combination of a local nominal model with a polytopic uncertain linear parameter varying (LPV) model is built to approximate the system's non-linear behavior. The optimization problem solved here is based on a receding horizon scheme involving the linear matrix inequality (LMI) technique. Closed-loop stability and constraint satisfaction over the entire operating range are guaranteed by the feasibility of the optimization problem. Finally, simulation results show the effectiveness and good performance of the proposed method.

  9. Breast cancer risk in atomic bomb survivors from multi-model inference with incidence data 1958-1998

    International Nuclear Information System (INIS)

    Kaiser, J.C.; Jacob, P.; Meckbach, R.; Cullings, H.M.

    2012-01-01

    Breast cancer risk from radiation exposure has been analyzed in the cohort of Japanese A-bomb survivors using empirical models and mechanistic two-step clonal expansion (TSCE) models with incidence data from 1958 to 1998. TSCE models rely on a phenomenological representation of cell transition processes on the path to cancer. They describe the data as well as empirical models do, and this fact has been exploited for risk assessment. Adequate models of both types have been selected with a statistical protocol based on parsimonious parameter deployment, and their risk estimates have been combined using multi-model inference techniques. TSCE models relate the radiation risk to cell processes which are controlled by age-increasing rates of initiating mutations and by changes in hormone levels due to menopause. For exposure at young age, they predict an enhanced excess relative risk (ERR), whereas the preferred empirical model shows no dependence on age at exposure. At attained age 70, the multi-model median of the ERR at 1 Gy decreases moderately from 1.2 Gy⁻¹ (90% CI 0.72; 2.1) for exposure at age 25 to a 30% lower value for exposure at age 55. For cohort strata with few cases, where model predictions diverge, uncertainty intervals from multi-model inference are enhanced by up to a factor of 1.6 compared to the preferred empirical model. Multi-model inference provides a joint risk estimate from several plausible models rather than relying on a single model of choice. It produces more reliable point estimates and improves the characterization of uncertainties. The method is recommended for risk assessment in practical radiation protection. (orig.)
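
Multi-model inference combines the estimates of several plausible models using model weights; a common variant derives those weights from an information criterion (Akaike weights). The sketch below uses hypothetical AIC values and ERR estimates purely for illustration; the paper's actual selection protocol and numbers differ:

```python
import math

def akaike_weights(aics):
    """Akaike weights: w_i proportional to exp(-delta_AIC_i / 2), summing to 1."""
    deltas = [a - min(aics) for a in aics]
    raw = [math.exp(-0.5 * d) for d in deltas]
    total = sum(raw)
    return [r / total for r in raw]

# hypothetical fits: two empirical models and one TSCE model
aics = [1000.0, 1002.0, 1001.0]
errs = [1.1, 1.3, 1.2]          # ERR at 1 Gy, attained age 70 (invented values)

weights = akaike_weights(aics)
combined_err = sum(w * e for w, e in zip(weights, errs))
```

The combined estimate is pulled toward the better-fitting models but never discards the others, which is what widens the uncertainty intervals in strata where the models disagree.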

  10. Breast cancer risk in atomic bomb survivors from multi-model inference with incidence data 1958-1998

    Energy Technology Data Exchange (ETDEWEB)

    Kaiser, J.C.; Jacob, P.; Meckbach, R. [Institute of Radiation Protection, Helmholtz-Zentrum Muenchen, German Research Centre for Environmental Health, Neuherberg (Germany); Cullings, H.M. [Radiation Effects Research Foundation, Department of Statistics, Hiroshima (Japan)

    2012-03-15

    Breast cancer risk from radiation exposure has been analyzed in the cohort of Japanese A-bomb survivors using empirical models and mechanistic two-step clonal expansion (TSCE) models with incidence data from 1958 to 1998. TSCE models rely on a phenomenological representation of cell transition processes on the path to cancer. They describe the data as well as empirical models do, and this fact has been exploited for risk assessment. Adequate models of both types have been selected with a statistical protocol based on parsimonious parameter deployment, and their risk estimates have been combined using multi-model inference techniques. TSCE models relate the radiation risk to cell processes which are controlled by age-increasing rates of initiating mutations and by changes in hormone levels due to menopause. For exposure at young age, they predict an enhanced excess relative risk (ERR), whereas the preferred empirical model shows no dependence on age at exposure. At attained age 70, the multi-model median of the ERR at 1 Gy decreases moderately from 1.2 Gy⁻¹ (90% CI 0.72; 2.1) for exposure at age 25 to a 30% lower value for exposure at age 55. For cohort strata with few cases, where model predictions diverge, uncertainty intervals from multi-model inference are enhanced by up to a factor of 1.6 compared to the preferred empirical model. Multi-model inference provides a joint risk estimate from several plausible models rather than relying on a single model of choice. It produces more reliable point estimates and improves the characterization of uncertainties. The method is recommended for risk assessment in practical radiation protection. (orig.)

  11. Multi-model Simulation for Optimal Control of Aeroacoustics.

    Energy Technology Data Exchange (ETDEWEB)

    Collis, Samuel Scott; Chen, Guoquan

    2005-05-01

    Flow-generated noise, especially rotorcraft noise, has been a serious concern for both commercial and military applications. A particularly important noise source for rotorcraft is Blade-Vortex-Interaction (BVI) noise, a high-amplitude, impulsive sound that often dominates other rotorcraft noise sources. Usually BVI noise is caused by the unsteady flow changes around the rotor blades due to interactions with vortices previously shed by the blades. A promising approach for reducing BVI noise is to use on-blade controls, such as suction/blowing, micro-flaps/jets, and smart structures. Because the design and implementation of experiments to evaluate such systems are very expensive, efficient computational tools coupled with optimal control systems are required to explore the relevant physics and evaluate the feasibility of using various micro-fluidic devices before committing to hardware. In this thesis the research is to formulate and implement efficient computational tools for the development and study of optimal control and design strategies for complex flow and acoustic systems, with emphasis on rotorcraft applications, especially the BVI noise control problem. The main purpose of aeroacoustic computations is to determine the sound intensity and directivity far away from the noise source. However, the computational cost of using a high-fidelity flow-physics model across the full domain is usually prohibitive, and it might also be less accurate because of numerical diffusion and other problems. Taking advantage of the multi-physics and multi-scale structure of this aeroacoustic problem, we develop a multi-model, multi-domain (near-field/far-field) method based on a discontinuous Galerkin discretization. In this approach the coupling of multi-domains and multi-models is achieved by weakly enforcing continuity of normal fluxes across a coupling surface. For our aeroacoustics control problem of interest, the adjoint equations that determine the sensitivity of the cost

  12. The musicality of non-musicians: an index for assessing musical sophistication in the general population.

    Directory of Open Access Journals (Sweden)

    Daniel Müllensiefen

    Full Text Available Musical skills and expertise vary greatly in Western societies. Individuals can differ in their repertoire of musical behaviours as well as in the level of skill they display for any single musical behaviour. The types of musical behaviours we refer to here are broad, ranging from performance on an instrument and listening expertise, to the ability to employ music in functional settings or to communicate about music. In this paper, we first describe the concept of 'musical sophistication' which can be used to describe the multi-faceted nature of musical expertise. Next, we develop a novel measurement instrument, the Goldsmiths Musical Sophistication Index (Gold-MSI) to assess self-reported musical skills and behaviours on multiple dimensions in the general population using a large Internet sample (n = 147,636). Thirdly, we report results from several lab studies, demonstrating that the Gold-MSI possesses good psychometric properties, and that self-reported musical sophistication is associated with performance on two listening tasks. Finally, we identify occupation, occupational status, age, gender, and wealth as the main socio-demographic factors associated with musical sophistication. Results are discussed in terms of theoretical accounts of implicit and statistical music learning and with regard to social conditions of sophisticated musical engagement.

  13. Moral foundations and political attitudes: The moderating role of political sophistication.

    Science.gov (United States)

    Milesi, Patrizia

    2016-08-01

    Political attitudes can be associated with moral concerns. This research investigated whether people's level of political sophistication moderates this association. Based on the Moral Foundations Theory, this article examined whether political sophistication moderates the extent to which reliance on moral foundations, as categories of moral concerns, predicts judgements about policy positions. With this aim, two studies examined four policy positions shown by previous research to be best predicted by the endorsement of Sanctity, that is, the category of moral concerns focused on the preservation of physical and spiritual purity. The results showed that reliance on Sanctity predicted political sophisticates' judgements, as opposed to those of unsophisticates, on policy positions dealing with equal rights for same-sex and unmarried couples and with euthanasia. Political sophistication also interacted with Fairness endorsement, which includes moral concerns for equal treatment of everybody and reciprocity, in predicting judgements about equal rights for unmarried couples, and interacted with reliance on Authority, which includes moral concerns for obedience and respect for traditional authorities, in predicting opposition to stem cell research. Those findings suggest that, at least for these particular issues, endorsement of moral foundations can be associated with political attitudes more strongly among sophisticates than unsophisticates. © 2015 International Union of Psychological Science.

  14. Reading wild minds: A computational assay of Theory of Mind sophistication across seven primate species.

    Directory of Open Access Journals (Sweden)

    Marie Devaine

    2017-11-01

    Full Text Available Theory of Mind (ToM), i.e. the ability to understand others' mental states, endows humans with highly adaptive social skills such as teaching or deceiving. Candidate evolutionary explanations have been proposed for the unique sophistication of human ToM among primates. For example, the Machiavellian intelligence hypothesis states that the increasing complexity of social networks may have induced a demand for sophisticated ToM. This type of scenario ignores neurocognitive constraints that may eventually be crucial limiting factors for ToM evolution. In contradistinction, the cognitive scaffolding hypothesis asserts that a species' opportunity to develop sophisticated ToM is mostly determined by its general cognitive capacity (on which ToM is scaffolded). However, the actual relationships between ToM sophistication and either brain volume (a proxy for general cognitive capacity) or social group size (a proxy for social network complexity) are unclear. Here, we let 39 individuals sampled from seven non-human primate species (lemurs, macaques, mangabeys, orangutans, gorillas and chimpanzees) engage in simple dyadic games against artificial ToM players (via a familiar human caregiver). Using computational analyses of primates' choice sequences, we found that the probability of exhibiting a ToM-compatible learning style is mainly driven by species' brain volume (rather than by social group size). Moreover, primates' social cognitive sophistication culminates in a precursor form of ToM, which still falls short of human fully-developed ToM abilities.

  15. The musicality of non-musicians: an index for assessing musical sophistication in the general population.

    Science.gov (United States)

    Müllensiefen, Daniel; Gingras, Bruno; Musil, Jason; Stewart, Lauren

    2014-01-01

    Musical skills and expertise vary greatly in Western societies. Individuals can differ in their repertoire of musical behaviours as well as in the level of skill they display for any single musical behaviour. The types of musical behaviours we refer to here are broad, ranging from performance on an instrument and listening expertise, to the ability to employ music in functional settings or to communicate about music. In this paper, we first describe the concept of 'musical sophistication' which can be used to describe the multi-faceted nature of musical expertise. Next, we develop a novel measurement instrument, the Goldsmiths Musical Sophistication Index (Gold-MSI) to assess self-reported musical skills and behaviours on multiple dimensions in the general population using a large Internet sample (n = 147,636). Thirdly, we report results from several lab studies, demonstrating that the Gold-MSI possesses good psychometric properties, and that self-reported musical sophistication is associated with performance on two listening tasks. Finally, we identify occupation, occupational status, age, gender, and wealth as the main socio-demographic factors associated with musical sophistication. Results are discussed in terms of theoretical accounts of implicit and statistical music learning and with regard to social conditions of sophisticated musical engagement.

  16. Cluster-based analysis of multi-model climate ensembles

    Science.gov (United States)

    Hyde, Richard; Hossaini, Ryan; Leeson, Amber A.

    2018-06-01

    Clustering - the automated grouping of similar data - can provide powerful and unique insight into large and complex data sets, in a fast and computationally efficient manner. While clustering has been used in a variety of fields (from medical image processing to economics), its application within atmospheric science has been fairly limited to date, and the potential benefits of the application of advanced clustering techniques to climate data (both model output and observations) has yet to be fully realised. In this paper, we explore the specific application of clustering to a multi-model climate ensemble. We hypothesise that clustering techniques can provide (a) a flexible, data-driven method of testing model-observation agreement and (b) a mechanism with which to identify model development priorities. We focus our analysis on chemistry-climate model (CCM) output of tropospheric ozone - an important greenhouse gas - from the recent Atmospheric Chemistry and Climate Model Intercomparison Project (ACCMIP). Tropospheric column ozone from the ACCMIP ensemble was clustered using the Data Density based Clustering (DDC) algorithm. We find that a multi-model mean (MMM) calculated using members of the most-populous cluster identified at each location offers a reduction of up to ˜ 20 % in the global absolute mean bias between the MMM and an observed satellite-based tropospheric ozone climatology, with respect to a simple, all-model MMM. On a spatial basis, the bias is reduced at ˜ 62 % of all locations, with the largest bias reductions occurring in the Northern Hemisphere - where ozone concentrations are relatively large. However, the bias is unchanged at 9 % of all locations and increases at 29 %, particularly in the Southern Hemisphere. The latter demonstrates that although cluster-based subsampling acts to remove outlier model data, such data may in fact be closer to observed values in some locations. We further demonstrate that clustering can provide a viable and
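
The cluster-based multi-model mean described above (averaging only the members of the most-populous cluster at each location) can be sketched as follows. A trivial 1-D gap-based grouping stands in for the paper's DDC algorithm, and the ozone values are invented:

```python
import numpy as np

def cluster_mmm(values, tol=1.0):
    """Mean over the largest group of mutually similar model values."""
    values = np.sort(np.asarray(values, dtype=float))
    # greedy 1-D clustering: start a new cluster where the gap exceeds tol
    splits = np.where(np.diff(values) > tol)[0] + 1
    clusters = np.split(values, splits)
    return max(clusters, key=len).mean()

# five models' tropospheric column ozone at one grid point (invented, DU)
ozone = [30.1, 30.4, 30.2, 36.0, 29.9]
subsampled_mean = cluster_mmm(ozone)   # the 36.0 outlier is excluded
```

Applied per grid point, this yields a map of cluster-based means; as the abstract notes, dropping outlier models can reduce the bias in most locations while occasionally discarding a model that was in fact closest to observations.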

  17. Clustering of European winter storms: A multi-model perspective

    Science.gov (United States)

    Renggli, Dominik; Buettner, Annemarie; Scherb, Anke; Straub, Daniel; Zimmerli, Peter

    2016-04-01

    The storm series over Europe in 1990 (Daria, Vivian, Wiebke, Herta) and 1999 (Anatol, Lothar, Martin) are very well known. Such clusters of severe events strongly affect the seasonally accumulated damage statistics. The (re)insurance industry has quantified clustering by using distribution assumptions deduced from the historical storm activity of the last 30 to 40 years. The use of storm series simulated by climate models has only started recently. Climate model runs can potentially represent 100s to 1000s of years, allowing a more detailed quantification of clustering than the history of the last few decades. However, it is unknown how sensitive the representation of clustering is to systematic biases. Using a multi-model ensemble allows quantifying that uncertainty. This work uses CMIP5 decadal ensemble hindcasts to study clustering of European winter storms from a multi-model perspective. An objective identification algorithm extracts winter storms (September to April) in the gridded 6-hourly wind data. Since the skill of European storm predictions is very limited on the decadal scale, the different hindcast runs are interpreted as independent realizations. As a consequence, the available hindcast ensemble represents several 1000 simulated storm seasons. The seasonal clustering of winter storms is quantified using the dispersion coefficient. The benchmark for the decadal prediction models is the 20th Century Reanalysis. The decadal prediction models are able to reproduce typical features of the clustering characteristics observed in the reanalysis data. Clustering occurs in all analyzed models over the North Atlantic and European region, in particular over Great Britain and Scandinavia as well as over Iberia (i.e. the exit regions of the North Atlantic storm track). Clustering is generally weaker in the models compared to reanalysis, although the differences between different models are substantial. 
In contrast to existing studies, clustering is driven by weak
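The dispersion coefficient used above to quantify seasonal clustering can be sketched in a few lines. A minimal sketch, assuming the common var/mean − 1 form (0 for a Poisson process, positive when storms cluster into a few active seasons); the toy season counts are illustrative, not the paper's data:

```python
from statistics import mean, pvariance

def dispersion_coefficient(counts):
    """var/mean - 1 of per-season event counts: 0 for a Poisson process,
    > 0 when events cluster into a few very active seasons."""
    m = mean(counts)
    return pvariance(counts, m) / m - 1.0

steady    = [1, 5, 2, 4, 3, 3, 0, 6]   # activity spread across seasons
clustered = [0, 0, 7, 1, 0, 6, 0, 8]   # most storms fall in a few seasons

psi_steady = dispersion_coefficient(steady)        # close to 0
psi_clustered = dispersion_coefficient(clustered)  # well above 0
```

Comparing the coefficient computed from model seasons against the reanalysis value is then a direct way to see the "weaker clustering in models" result.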

  18. Statistical post-processing of seasonal multi-model forecasts: Why is it so hard to beat the multi-model mean?

    Science.gov (United States)

    Siegert, Stefan

    2017-04-01

    Initialised climate forecasts on seasonal time scales, run several months or even years ahead, are now an integral part of the battery of products offered by climate services world-wide. The availability of seasonal climate forecasts from various modeling centres gives rise to multi-model ensemble forecasts. Post-processing such seasonal-to-decadal multi-model forecasts is challenging 1) because the cross-correlation structure between multiple models and observations can be complicated, 2) because the amount of training data to fit the post-processing parameters is very limited, and 3) because the forecast skill of numerical models tends to be low on seasonal time scales. In this talk I will review new statistical post-processing frameworks for multi-model ensembles. I will focus particularly on Bayesian hierarchical modelling approaches, which are flexible enough to capture commonly made assumptions about collective and model-specific biases of multi-model ensembles. Despite the advances in statistical methodology, it turns out to be very difficult to out-perform the simplest post-processing method, which just recalibrates the multi-model ensemble mean by linear regression. I will discuss reasons for this, which are closely linked to the specific characteristics of seasonal multi-model forecasts. I explore possible directions for improvements, for example using informative priors on the post-processing parameters, and jointly modelling forecasts and observations.
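The "hard to beat" baseline the abstract describes, recalibrating the multi-model ensemble mean against observations by linear regression, can be sketched as follows. The hindcast numbers are invented for illustration:

```python
from statistics import mean

def fit_linear(x, y):
    """Ordinary least squares for y ≈ a + b*x; returns (a, b)."""
    mx, my = mean(x), mean(y)
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Each row: one training year, columns are the individual models' forecasts.
ensemble = [[0.8, 1.2, 1.0], [1.9, 2.3, 2.1], [2.7, 3.3, 3.0], [4.1, 3.9, 4.0]]
obs      = [1.1, 2.0, 3.2, 4.1]

# Collapse the multi-model ensemble to its mean, then recalibrate it.
ens_mean = [mean(row) for row in ensemble]
a, b = fit_linear(ens_mean, obs)
recalibrated = [a + b * m for m in ens_mean]
```

With only a handful of training years, this two-parameter fit is hard to beat precisely because richer post-processing models have many more parameters than the data can constrain.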

  19. Shock Absorbers Multi-Modeling and Suspension Optimization

    Directory of Open Access Journals (Sweden)

    LUPU Ciprian

    2013-05-01

    Full Text Available The standard dampers used by more 90% of vehicles have damping coefficients constant along stroke, so they can’t solve simultaneous all of them, situation solving practically using a relative dampingcoefficient able to made compromise between them. This paper design and simulation testing multi-models of two types of Damp (DSA and VZN. To compare the two types of suspension they are simulated in various road and load conditions. Analysis of simulation results is presente a new VZN shock absorber. This is an invention of the Institute of Mechanics of the Romanian Academy, and patented at European and U.S. [1], [2]. This is Called VZN shock absorber, iscoming from Variable Zeta Necessary acronym, for well moving in all road and load Conditions, Where zeta Represents the relative damping, Which is Adjusted automatically, stepwise, According to the piston positions [3,4,5]. Suspension systems are used in all air and ground transportation to protect that building transportation and cargo transported around against shocks and vibrations induced in the systemfrom the road Modifying damping coefficients (Zeta function piston position, being correlated with vehicle load and road unevenness.

  20. Prediction of hourly solar radiation with multi-model framework

    International Nuclear Information System (INIS)

    Wu, Ji; Chan, Chee Keong

    2013-01-01

    Highlights: • A novel approach to predict solar radiation through the use of clustering paradigms. • Development of prediction models based on the intrinsic pattern observed in each cluster. • Prediction based on proper clustering and selection of model on current time provides better results than other methods. • Experiments were conducted on actual solar radiation data obtained from a weather station in Singapore. - Abstract: In this paper, a novel multi-model prediction framework for prediction of solar radiation is proposed. The framework starts from the assumption that there are several patterns embedded in the solar radiation series. To extract the underlying patterns, the solar radiation series is first segmented into smaller subsequences, and the subsequences are further grouped into different clusters. For each cluster, an appropriate prediction model is trained. A procedure for pattern identification is then developed to identify the pattern that fits the current period. Based on this pattern, the corresponding prediction model is applied to obtain the prediction value. The prediction result of the proposed framework is then compared to other techniques. It is shown that the proposed framework provides superior performance.
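The cluster-then-predict idea above can be sketched with a toy series. Everything here is a hypothetical stand-in: the "clustering" is reduced to two assumed centroids, and each cluster's "model" is just the mean of the values that follow its windows:

```python
def window(series, w):
    """All length-w subsequences that have a following value."""
    return [series[i:i + w] for i in range(len(series) - w)]

def nearest(centroids, seq):
    """Index of the centroid closest to seq (squared Euclidean distance)."""
    return min(range(len(centroids)),
               key=lambda k: sum((a - b) ** 2 for a, b in zip(centroids[k], seq)))

# Two embedded regimes: a flat pattern and a rising one.
series = [1, 1, 1, 1, 5, 6, 7, 8, 1, 1, 1, 1, 5, 6, 7, 8]
w = 3
centroids = [[1, 1, 1], [5, 6, 7]]   # assumed; stands in for a clustering step

# Train: for each cluster, average the value that follows its windows.
sums, counts = [0.0, 0.0], [0, 0]
for i, win in enumerate(window(series, w)):
    k = nearest(centroids, win)
    sums[k] += series[i + w]
    counts[k] += 1
models = [s / c for s, c in zip(sums, counts)]

# Predict: route the current window to a cluster, apply that cluster's model.
pred = models[nearest(centroids, [5, 6, 7])]
```

The framework in the paper replaces both stand-ins with a real clustering of subsequences and a trained predictor per cluster, but the routing logic is the same.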

  1. Skill and independence weighting for multi-model assessments

    International Nuclear Information System (INIS)

    Sanderson, Benjamin M.; Wehner, Michael; Knutti, Reto

    2017-01-01

    We present a weighting strategy for use with the CMIP5 multi-model archive in the fourth National Climate Assessment, which considers both skill in the climatological performance of models over North America and the inter-dependency of models arising from common parameterizations or tuning practices. The method exploits information relating to the climatological mean state of a number of projection-relevant variables as well as metrics representing long-term statistics of weather extremes. The weights, once computed, can be used to compute weighted means and significance information from an ensemble containing multiple initial-condition members from potentially co-dependent models of varying skill. Two parameters in the algorithm determine the degree to which model climatological skill and model uniqueness are rewarded; these parameters are explored and final values are defended for the assessment. The influence of model weighting on projected temperature and precipitation changes is found to be moderate, partly due to a compensating effect between model skill and uniqueness. However, more aggressive skill weighting and weighting by targeted metrics is found to have a more significant effect on inferred ensemble confidence in future patterns of change for a given projection.
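A sketch in the spirit of the skill-and-independence weighting described above: skill decays with a model's distance to observations, and uniqueness shrinks when other models sit nearby. The Gaussian form and the two radii `d_skill` and `d_indep` (the "two parameters" of the abstract) are assumptions for illustration, not the paper's calibrated values:

```python
from math import exp

def weights(dist_to_obs, dist_between, d_skill=1.0, d_indep=0.5):
    """Normalized weights combining model skill and model uniqueness."""
    n = len(dist_to_obs)
    skill = [exp(-(d / d_skill) ** 2) for d in dist_to_obs]
    unique = [1.0 / (1.0 + sum(exp(-(dist_between[i][j] / d_indep) ** 2)
                               for j in range(n) if j != i))
              for i in range(n)]
    raw = [s * u for s, u in zip(skill, unique)]
    total = sum(raw)
    return [r / total for r in raw]

# Three models: A and B are near-duplicates, C is independent but less skilful.
d_obs = [0.2, 0.2, 0.6]
d_mod = [[0.0, 0.05, 0.9],
         [0.05, 0.0, 0.9],
         [0.9, 0.9, 0.0]]
w = weights(d_obs, d_mod)
```

In this toy setting the near-duplicate pair A/B effectively split what would otherwise be one model's weight, which is the compensating effect between skill and uniqueness the abstract mentions.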

  2. Differential ethnic associations between maternal flexibility and play sophistication in toddlers born very low birth weight

    Science.gov (United States)

    Erickson, Sarah J.; Montague, Erica Q.; Maclean, Peggy C.; Bancroft, Mary E.; Lowe, Jean R.

    2013-01-01

    Children born very low birth weight (VLBW) face challenges in the development of self-regulation and effective functional skills, and play serves as an important avenue of early intervention. The current study investigated associations between maternal flexibility and toddler play sophistication in Caucasian, Spanish speaking Hispanic, English speaking Hispanic, and Native American toddlers (18-22 months adjusted age) in a cross-sectional cohort of 73 toddlers born VLBW and their mothers. We found that the association between maternal flexibility and toddler play sophistication differed by ethnicity (F(3,65) = 3.34, p = .02). In particular, Spanish speaking Hispanic dyads evidenced a significant positive association between maternal flexibility and play sophistication of medium effect size. Results for Native Americans were parallel to those of Spanish speaking Hispanic dyads: the relationship between flexibility and play sophistication was positive and of small-medium effect size. Findings indicate that for Caucasians and English speaking Hispanics, flexibility evidenced a non-significant (negative and small effect size) association with toddler play sophistication. Significant follow-up contrasts revealed that the associations for Caucasian and English speaking Hispanic dyads were significantly different from those of the other two ethnic groups. Results remained unchanged after adjusting for the amount of maternal language, an index of maternal engagement and stimulation; and after adjusting for birth weight, gestational age, gender, test age, cognitive ability, as well as maternal age, education, and income. Our results provide preliminary evidence that ethnicity and acculturation may mediate the association between maternal interactive behavior such as flexibility and toddler developmental outcomes, as indexed by play sophistication. 
Addressing these association differences is particularly important in children born VLBW because interventions targeting parent interaction strategies such as

  3. The Value of Multivariate Model Sophistication: An Application to pricing Dow Jones Industrial Average options

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V.K.; Stentoft, Lars; Violante, Francesco

    We assess the predictive accuracy of a large number of multivariate volatility models in terms of pricing options on the Dow Jones Industrial Average. We measure the value of model sophistication in terms of dollar losses by considering a set of 248 multivariate models that differ… innovation for a Laplace innovation assumption improves the pricing in a smaller way. Apart from investigating directly the value of model sophistication in terms of dollar losses, we also use the model confidence set approach to statistically infer the set of models that delivers the best pricing performance.

  4. Cognitive ability rivals the effect of political sophistication on ideological voting

    DEFF Research Database (Denmark)

    Hebbelstrup Rye Rasmussen, Stig

    2016-01-01

    This article examines the impact of cognitive ability on ideological voting. We find, using a US sample and a Danish sample, that the effect of cognitive ability rivals the effect of the traditionally strongest predictor of ideological voting: political sophistication. Furthermore, the results are consistent with the effect of cognitive ability being partly mediated by political sophistication. Much of the effect of cognitive ability nevertheless remains, and is not explained by differences in education or Openness to experience either. The implications of these results for democratic theory are discussed.

  5. Assessing Epistemic Sophistication by Considering Domain-Specific Absolute and Multiplicistic Beliefs Separately

    Science.gov (United States)

    Peter, Johannes; Rosman, Tom; Mayer, Anne-Kathrin; Leichner, Nikolas; Krampen, Günter

    2016-01-01

    Background: Particularly in higher education, not only a view of science as a means of finding absolute truths (absolutism), but also a view of science as generally tentative (multiplicism) can be unsophisticated and obstructive for learning. Most quantitative epistemic belief inventories neglect this and understand epistemic sophistication as…

  6. The Relationship between Logistics Sophistication and Drivers of the Outsourcing of Logistics Activities

    Directory of Open Access Journals (Sweden)

    Peter Wanke

    2008-10-01

    Full Text Available A strong link has been established between operational excellence and the degree of sophistication of logistics organization, a function of factors such as performance monitoring, investment in Information Technology [IT] and the formalization of logistics organization, as proposed in the Bowersox, Daugherty, Dröge, Germain and Rogers (1992) Leading Edge model. At the same time, shippers have been increasingly outsourcing their logistics activities to third party providers. This paper, based on a survey with large Brazilian shippers, addresses a gap in the literature by investigating the relationship between dimensions of logistics organization sophistication and drivers of logistics outsourcing. To this end, the dimensions behind the logistics sophistication construct were first investigated. Results from factor analysis led to the identification of six dimensions of logistics sophistication. By means of multivariate logistic regression analyses it was possible to relate some of these dimensions, such as the formalization of the logistics organization, to certain drivers of the outsourcing of logistics activities of Brazilian shippers, such as cost savings. These results indicate the possibility of segmenting shippers according to characteristics of their logistics organization, which may be particularly useful to logistics service providers.

  7. Reacting to Neighborhood Cues?: Political Sophistication Moderates the Effect of Exposure to Immigrants

    DEFF Research Database (Denmark)

    Danckert, Bolette; Dinesen, Peter Thisted; Sønderskov, Kim Mannemar

    2017-01-01

    is founded on politically sophisticated individuals having a greater comprehension of news and other mass-mediated sources, which makes them less likely to rely on neighborhood cues as sources of information relevant for political attitudes. Based on a unique panel data set with fine-grained information...

  8. Sophistic Ethics in the Technical Writing Classroom: Teaching "Nomos," Deliberation, and Action.

    Science.gov (United States)

    Scott, J. Blake

    1995-01-01

    Claims that teaching ethics is particularly important to technical writing. Outlines a classical, sophistic approach to ethics based on the theories and pedagogies of Protagoras, Gorgias, and Isocrates, which emphasizes the Greek concept of "nomos," internal and external deliberation, and responsible action. Discusses problems and…

  9. Close to the Clothes : Materiality and Sophisticated Archaism in Alexander van Slobbe’s Design Practices

    NARCIS (Netherlands)

    Baronian, M.-A.

    This article looks at the work of contemporary Dutch fashion designer Alexander van Slobbe (1959) and examines how, since the 1990s, his fashion practices have consistently and consciously put forward a unique reflection on questions related to materiality, sophisticated archaism, luxury,

  11. Lexical Complexity Development from Dynamic Systems Theory Perspective: Lexical Density, Diversity, and Sophistication

    Directory of Open Access Journals (Sweden)

    Reza Kalantari

    2017-10-01

    Full Text Available This longitudinal case study explored Iranian EFL learners’ lexical complexity (LC) through the lenses of Dynamic Systems Theory (DST). Fifty independent essays written by five intermediate to advanced female EFL learners in a TOEFL iBT preparation course over six months constituted the corpus of this study. Three Coh-Metrix indices (Graesser, McNamara, Louwerse, & Cai, 2004; McNamara & Graesser, 2012), three Lexical Complexity Analyzer indices (Lu, 2010, 2012; Lu & Ai, 2011), and four Vocabprofile indices (Cobb, 2000) were selected to measure different dimensions of LC. Results of repeated measures analysis of variance (RM ANOVA) indicated an improvement with regard to only lexical sophistication. Positive and significant relationships were found between time and mean values in Academic Word List and Beyond-2000 as indicators of lexical sophistication. The remaining seven indices of LC, falling short of significance, tended to flatten over the course of this writing program. Correlation analyses among LC indices indicated that lexical density enjoyed positive correlations with lexical sophistication. However, lexical diversity revealed no significant correlations with either lexical density or lexical sophistication. This study suggests that the DST perspective provides a viable foundation for analyzing lexical complexity.
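Toy versions of the three LC dimensions named above, density (content words per token), diversity (type-token ratio), and sophistication (share of tokens beyond a frequent-word list), can be computed as below. The word lists are tiny stand-ins, not the Coh-Metrix or Vocabprofile inventories:

```python
# Hypothetical mini word lists; real indices use curated inventories
# (e.g. a function-word list and the most frequent 2000 word families).
FUNCTION_WORDS = {"the", "a", "of", "and", "to", "is", "in"}
FREQUENT_WORDS = {"the", "a", "of", "and", "to", "is", "in", "study", "result"}

def lexical_indices(text):
    """Return (density, diversity, sophistication) for a whitespace-tokenized text."""
    tokens = text.lower().split()
    types = set(tokens)
    density = sum(t not in FUNCTION_WORDS for t in tokens) / len(tokens)
    diversity = len(types) / len(tokens)
    sophistication = sum(t not in FREQUENT_WORDS for t in tokens) / len(tokens)
    return density, diversity, sophistication

d, v, s = lexical_indices("the longitudinal corpus study measures lexical sophistication")
```

Tracking these three numbers essay-by-essay over the six months is the kind of trajectory a DST analysis inspects.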

  12. Does a more sophisticated storm erosion model improve probabilistic erosion estimates?

    NARCIS (Netherlands)

    Ranasinghe, R.W.M.R.J.B.; Callaghan, D.; Roelvink, D.

    2013-01-01

    The dependency between the accuracy/uncertainty of storm erosion exceedance estimates obtained via a probabilistic model and the level of sophistication of the structural function (storm erosion model) embedded in the probabilistic model is assessed via the application of Callaghan et al.'s (2008)

  13. A multi-model analysis of vertical ozone profiles

    Directory of Open Access Journals (Sweden)

    J. E. Jonson

    2010-06-01

    Full Text Available A multi-model study of the long-range transport of ozone and its precursors from major anthropogenic source regions was coordinated by the Task Force on Hemispheric Transport of Air Pollution (TF HTAP) under the Convention on Long-range Transboundary Air Pollution (LRTAP). Vertical profiles of ozone at 12-h intervals from 2001 are available from twelve of the models contributing to this study and are compared here with observed profiles from ozonesondes. The contributions from each major source region are analysed for selected sondes, and this analysis is supplemented by retroplume calculations using the FLEXPART Lagrangian particle dispersion model to provide insight into the origin of ozone transport events and the cause of differences between the models and observations.

    In the boundary layer ozone levels are in general strongly affected by regional sources and sinks. With a considerably longer lifetime in the free troposphere, ozone here is to a much larger extent affected by processes on a larger scale such as intercontinental transport and exchange with the stratosphere. Such individual events are difficult to trace over several days or weeks of transport. This may explain why statistical relationships between models and ozonesonde measurements are far less satisfactory than shown in previous studies for surface measurements at all seasons. The lowest bias between model-calculated ozone profiles and the ozonesonde measurements is seen in the winter and autumn months. Following the increase in photochemical activity in the spring and summer months, the spread in model results increases, and the agreement between ozonesonde measurements and the individual models deteriorates further.

    At selected sites calculated contributions to ozone levels in the free troposphere from intercontinental transport are shown. Intercontinental transport is identified based on differences in model calculations with unperturbed emissions and

  14. A multi-model approach to X-ray pulsars

    Directory of Open Access Journals (Sweden)

    Schönherr G.

    2014-01-01

    Full Text Available The emission characteristics of X-ray pulsars are governed by magnetospheric accretion within the Alfvén radius, leading to a direct coupling of accretion column properties and interactions at the magnetosphere. The complexity of the physical processes governing the formation of radiation within the accreted, strongly magnetized plasma has led to several sophisticated theoretical modelling efforts over the last decade, dedicated to either the formation of the broad-band continuum, the formation of cyclotron resonance scattering features (CRSFs), or the formation of pulse profiles. While these individual approaches are powerful in themselves, they quickly reach their limits when aiming at a quantitative comparison to observational data. Too many fundamental parameters describing the formation of the accretion columns and the systems’ overall geometry are unconstrained, and different models are often based on different fundamental assumptions, while everything is intertwined in the observed, highly phase-dependent spectra and energy-dependent pulse profiles. To name just one example: the (phase-variable) line width of the CRSFs is highly dependent on the plasma temperature, the existence of B-field gradients (geometry), and the observation angle, parameters which, in turn, drive the continuum radiation and are driven by the overall two-pole geometry for the light bending model, respectively. This renders a parallel assessment of all available spectral and timing information by a compatible across-models approach indispensable. In a collaboration of theoreticians and observers, we have been working on a model unification project over the last years, bringing together theoretical calculations of the Comptonized continuum, Monte Carlo simulations and Radiation Transfer calculations of CRSFs as well as a General Relativity (GR) light bending model for ray tracing of the incident emission pattern from both magnetic poles. The ultimate goal is to implement a

  15. Sophisticated Fowl: The Complex Behaviour and Cognitive Skills of Chickens and Red Junglefowl

    Directory of Open Access Journals (Sweden)

    Laura Garnham

    2018-01-01

    Full Text Available The world’s most numerous bird, the domestic chicken, and their wild ancestor, the red junglefowl, have long been used as model species for animal behaviour research. Recently, this research has advanced our understanding of the social behaviour, personality, and cognition of fowl, and demonstrated their sophisticated behaviour and cognitive skills. Here, we overview some of this research, starting with research investigating the well-developed senses of fowl, before presenting how socially and cognitively complex they can be. The realisation that domestic chickens, our most abundant production animal, are behaviourally and cognitively sophisticated should encourage greater appreciation of and fascination with them. In turn, this should inspire increased use of them as both research and hobby animals, as well as improvements in their unfortunately often poor welfare.

  16. The relation between maturity and sophistication shall be properly dealt with in nuclear power development

    International Nuclear Information System (INIS)

    Li Yongjiang

    2009-01-01

    The paper analyses the advantages and disadvantages, in terms of safety and economy, of the second-generation improved technologies and the third-generation technologies mainly developed in China. It also discusses the maturity of the second-generation improved technologies and the sophistication of the third-generation technologies. The paper proposes that, at the current stage, the advantages and disadvantages of both should be weighed carefully and the relationship between maturity and sophistication properly handled. A two-step strategy is suggested to address the shortage of nuclear power capacity while tracking and developing the third-generation technologies, so as to ensure the sound and fast development of nuclear power. (authors)

  17. Financial Sophistication and the Distribution of the Welfare Cost of Inflation

    OpenAIRE

    Paola Boel; Gabriele Camera

    2009-01-01

    The welfare cost of anticipated inflation is quantified in a calibrated model of the U.S. economy that exhibits tractable equilibrium dispersion in wealth and earnings. Inflation does not generate large losses in societal welfare, yet its impact varies noticeably across segments of society depending also on the financial sophistication of the economy. If money is the only asset, then inflation hurts mostly the wealthier and more productive agents, while those poorer and less productive may ev...

  18. Putin’s Russia: Russian Mentality and Sophisticated Imperialism in Military Policies

    OpenAIRE

    Szénási, Lieutenant-Colonel Endre

    2016-01-01

    According to my experiences, the Western world hopelessly fails to understand Russian mentality, or misinterprets it. During my analysis of the Russian way of thinking I devoted special attention to the examination of military mentality. I have connected the issue of the Russian way of thinking to the contemporary imperial policies of Putin’s Russia.  I have also attempted to prove the level of sophistication of both. I hope that a better understanding of both the Russian mentality and imperi...

  19. Musical Sophistication and the Effect of Complexity on Auditory Discrimination in Finnish Speakers

    Science.gov (United States)

    Dawson, Caitlin; Aalto, Daniel; Šimko, Juraj; Vainio, Martti; Tervaniemi, Mari

    2017-01-01

    Musical experiences and native language are both known to affect auditory processing. The present work aims to disentangle the influences of native language phonology and musicality on behavioral and subcortical sound feature processing in a population of musically diverse Finnish speakers as well as to investigate the specificity of enhancement from musical training. Finnish speakers are highly sensitive to duration cues since in Finnish, vowel and consonant duration determine word meaning. Using a correlational approach with a set of behavioral sound feature discrimination tasks, brainstem recordings, and a musical sophistication questionnaire, we find no evidence for an association between musical sophistication and more precise duration processing in Finnish speakers either in the auditory brainstem response or in behavioral tasks, but they do show an enhanced pitch discrimination compared to Finnish speakers with less musical experience and show greater duration modulation in a complex task. These results are consistent with a ceiling effect set for certain sound features which corresponds to the phonology of the native language, leaving an opportunity for music experience-based enhancement of sound features not explicitly encoded in the language (such as pitch, which is not explicitly encoded in Finnish). Finally, the pattern of duration modulation in more musically sophisticated Finnish speakers suggests integrated feature processing for greater efficiency in a real world musical situation. These results have implications for research into the specificity of plasticity in the auditory system as well as to the effects of interaction of specific language features with musical experiences. PMID:28450829

  1. Aerosols at the poles: an AeroCom Phase II multi-model evaluation

    Directory of Open Access Journals (Sweden)

    M. Sand

    2017-10-01

    Full Text Available Atmospheric aerosols from anthropogenic and natural sources reach the polar regions through long-range transport and affect the local radiation balance. Such transport is, however, poorly constrained in present-day global climate models, and few multi-model evaluations of polar anthropogenic aerosol radiative forcing exist. Here we compare the aerosol optical depth (AOD) at 550 nm from simulations with 16 global aerosol models from the AeroCom Phase II model intercomparison project with available observations at both poles. We show that the annual mean multi-model median is representative of the observations in the Arctic, but that the intermodel spread is large. We also document the geographical distribution and seasonal cycle of the AOD for the individual aerosol species: black carbon (BC) from fossil fuel and biomass burning, sulfate, organic aerosols (OAs), dust, and sea salt. For a subset of models that represent nitrate and secondary organic aerosols (SOAs), we document the role of these aerosols at high latitudes. The seasonal dependence of natural and anthropogenic aerosols differs, with natural aerosols peaking in winter (sea salt) and spring (dust), whereas AOD from anthropogenic aerosols peaks in late spring and summer. The models produce a median annual mean AOD of 0.07 in the Arctic (defined here as north of 60° N). The models also predict a noteworthy aerosol transport to the Antarctic (south of 70° S), with a resulting AOD varying between 0.01 and 0.02. The models have estimated the shortwave anthropogenic radiative forcing contributions to the direct aerosol effect (DAE) associated with BC and OA from fossil fuel and biofuel (FF), sulfate, SOAs, nitrate, and combined biomass-burning BC and OA emissions. The Arctic modelled annual mean DAE is slightly negative (−0.12 W m−2), dominated by a positive BC FF DAE in spring and a negative sulfate DAE in summer. The Antarctic DAE is governed by BC FF. We perform sensitivity

  2. MMA, A Computer Code for Multi-Model Analysis

    Science.gov (United States)

    Poeter, Eileen P.; Hill, Mary C.

    2007-01-01

    This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations. 
Many applications of MMA will
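The four default criteria and the model weights derived from them can be sketched for least-squares models as follows. This is a minimal illustration of information-criterion weighting, not the MMA code itself; the function names and the toy RSS/parameter-count values are ours:

```python
import math

def aic(rss, n, k):
    # AIC for a least-squares model: n*ln(RSS/n) + 2k (k = number of parameters)
    return n * math.log(rss / n) + 2 * k

def aicc(rss, n, k):
    # Second-order bias-corrected AIC; valid for n > k + 1
    return aic(rss, n, k) + 2 * k * (k + 1) / (n - k - 1)

def bic(rss, n, k):
    # Bayesian Information Criterion: stronger penalty on parameters for large n
    return n * math.log(rss / n) + k * math.log(n)

def model_weights(criteria):
    # Convert criterion values into normalized model weights
    # (smaller criterion value -> larger weight)
    best = min(criteria)
    rel = [math.exp(-0.5 * (c - best)) for c in criteria]
    total = sum(rel)
    return [r / total for r in rel]

# Three alternative calibrated models of one system, same 20 observations,
# differing in fit (RSS) and complexity (k)
models = [(12.0, 20, 3), (10.0, 20, 5), (9.5, 20, 8)]
weights = model_weights([aicc(rss, n, k) for rss, n, k in models])
```

The weights sum to one and can then be used for model-averaged parameter estimates and predictions, as described above.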

  3. Do organizations adopt sophisticated capital budgeting practices to deal with uncertainty in the investment decision? : A research note

    NARCIS (Netherlands)

    Verbeeten, Frank H M

    This study examines the impact of uncertainty on the sophistication of capital budgeting practices. While the theoretical applications of sophisticated capital budgeting practices (defined as the use of real option reasoning and/or game theory decision rules) have been well documented, empirical

  4. "SOCRATICS" AS ADDRESSEES OF ISOCRATES’ EPIDEICTIC SPEECHES (Against the Sophists, Encomium of Helen, Busiris)

    Directory of Open Access Journals (Sweden)

    Anna Usacheva

    2012-06-01

    Full Text Available This article analyses the three epideictic orations of Isocrates, which are in themselves a precious testimony to the quality of intellectual life at the close of the fourth century before Christ. To this period also belong the Socratics, who are generally seen as an important link between Socrates and Plato. The author of this article proposes a more productive approach to the study of Antisthenes, Euclid of Megara and the other so-called Socratics, revealing them not as independent thinkers but rather as adherents of the sophistic school and as teachers, thereby including them among those who took part in the educative activity of their time.

  5. Low Level RF Including a Sophisticated Phase Control System for CTF3

    CERN Document Server

    Mourier, J; Nonglaton, J M; Syratchev, I V; Tanner, L

    2004-01-01

    CTF3 (CLIC Test Facility 3), currently under construction at CERN, is a test facility designed to demonstrate the key feasibility issues of the CLIC (Compact LInear Collider) two-beam scheme. When completed, this facility will consist of a 150 MeV linac followed by two rings for bunch-interleaving, and a test stand where 30 GHz power will be generated. In this paper, the work that has been carried out on the linac's low power RF system is described. This includes, in particular, a sophisticated phase control system for the RF pulse compressor to produce a flat-top rectangular pulse over 1.4 µs.

  6. Predicting Power Outages Using Multi-Model Ensemble Forecasts

    Science.gov (United States)

    Cerrai, D.; Anagnostou, E. N.; Yang, J.; Astitha, M.

    2017-12-01

    Every year, power outages affect millions of people in the United States, damaging the economy and disrupting everyday life. An Outage Prediction Model (OPM) has been developed at the University of Connecticut to help utilities restore outages quickly and limit their adverse consequences for the population. The OPM, operational since 2015, combines several non-parametric machine learning (ML) models that use historical storm simulations, high-resolution weather forecasts, satellite remote sensing data, and infrastructure and land cover data to predict the number and spatial distribution of power outages. This study presents a new methodology, developed to improve outage model performance, that combines weather- and soil-related variables from three different weather models (WRF 3.7, WRF 3.8 and RAMS/ICLAMS). First, we present a performance evaluation of each model variable, comparing historical weather analyses with station data or reanalyses over the entire storm data set. Each variable of the new outage model version is then extracted from the weather model that performs best for that variable, and sensitivity tests are performed to identify the most efficient variable combination for outage prediction. Although the final variable combination draws on different weather models, this power outage prediction based on multi-weather forcing and multiple statistical models outperforms the currently operational OPM version, which is based on a single weather forcing (WRF 3.7), because each model component is closest to the actual atmospheric state.

  7. Multi-model ensemble hydrological simulation using a BP Neural Network for the upper Yalongjiang River Basin, China

    Science.gov (United States)

    Li, Zhanjie; Yu, Jingshan; Xu, Xinyi; Sun, Wenchao; Pang, Bo; Yue, Jiajia

    2018-06-01

    Hydrological models are important and effective tools for representing complex hydrological processes, and different models have different strengths in capturing their various aspects. Relying on a single model usually leads to simulation uncertainty; ensemble approaches based on multi-model hydrological simulations can improve performance over single models. In this study, the upper Yalongjiang River Basin was selected as the case study area. Three commonly used hydrological models (SWAT, VIC, and BTOPMC) were run independently with the same inputs and initial values, and a BP neural network was then employed to combine the results of the three models. The results show that the BP ensemble simulation is more accurate than any of the single models.
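As a rough illustration of the combination step, the following sketch trains a small backpropagation (BP) network to merge three simulated discharge series into one ensemble estimate. All data here are synthetic stand-ins; the actual study used calibrated SWAT, VIC, and BTOPMC output and a trained BP configuration not described in detail in the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: one "observed" discharge series and three noisy,
# differently skilled model simulations (hypothetical proxies for SWAT,
# VIC, and BTOPMC output)
obs = rng.uniform(50.0, 500.0, size=(200, 1))
sims = obs + rng.normal(0.0, [30.0, 45.0, 60.0], size=(200, 3))

# Scale inputs and targets to [0, 1] for stable training
lo, hi = sims.min(), sims.max()
X = (sims - lo) / (hi - lo)
y = (obs - lo) / (hi - lo)

# One-hidden-layer network trained by backpropagation (BP)
W1 = rng.normal(0.0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    h = sigmoid(X @ W1 + b1)                 # forward pass
    pred = h @ W2 + b2
    err = pred - y                           # MSE gradient at the output
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * h * (1.0 - h)        # backpropagate through hidden layer
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W1 -= 0.5 * gW1; b1 -= 0.5 * gb1
    W2 -= 0.5 * gW2; b2 -= 0.5 * gb2

ensemble = sigmoid(X @ W1 + b1) @ W2 + b2
rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
single_rmses = [rmse(X[:, i:i + 1], y) for i in range(3)]
ensemble_rmse = rmse(ensemble, y)
```

In this toy setting the trained ensemble fits the observations more closely than any single simulation, mirroring the qualitative finding of the study.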

  8. Initial assessment of a multi-model approach to spring flood forecasting in Sweden

    Science.gov (United States)

    Olsson, J.; Uvo, C. B.; Foster, K.; Yang, W.

    2015-06-01

    Hydropower is a major energy source in Sweden, and proper reservoir management prior to spring flood onset is crucial for optimal production. This requires useful forecasts of the accumulated discharge in the spring flood period (i.e. the spring-flood volume, SFV). Today's SFV forecasts are generated using a model-based climatological ensemble approach, where time series of precipitation and temperature from historical years are used to force a calibrated and initialised set-up of the HBV model. In this study, a number of new approaches to spring flood forecasting that reflect the latest developments in analysis and modelling on seasonal time scales are presented and evaluated. Three main approaches, represented by specific methods, are evaluated in SFV hindcasts for three main Swedish rivers over a 10-year period with lead times between 0 and 4 months. In the first approach, historically analogous years with respect to the climate in the period preceding the spring flood are identified and used to compose a reduced ensemble. In the second, seasonal meteorological ensemble forecasts are used to drive the HBV model over the spring flood period. In the third approach, statistical relationships between SFV and the large-scale atmospheric circulation are used to build forecast models. None of the new approaches consistently outperforms the climatological ensemble approach, but for specific locations and lead times improvements of 20-30 % are found. When all forecasts were combined in a weighted multi-model approach, a mean improvement over all locations and lead times of nearly 10 % was indicated. This demonstrates the potential of the approach, and further development and optimisation into an operational system are ongoing.
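A minimal sketch of a weighted multi-model combination step, assuming weights inversely proportional to each method's historical mean squared error. The abstract does not specify the study's actual weighting scheme, so both the scheme and the method names below are illustrative:

```python
import numpy as np

def weighted_combination(forecasts, hindcast_errors):
    # forecasts: {method: SFV forecast}; hindcast_errors: {method: array of
    # historical forecast errors}. Weights are inverse mean squared errors,
    # so historically better methods contribute more to the combination.
    w = {m: 1.0 / np.mean(np.square(e)) for m, e in hindcast_errors.items()}
    total = sum(w.values())
    return sum(forecasts[m] * w[m] for m in forecasts) / total

# Two hypothetical methods: a climatological ensemble and an analogue-year
# approach, with the latter twice as erroneous in hindcasts
sfv = weighted_combination(
    {"climatology": 100.0, "analogue": 110.0},
    {"climatology": np.array([1.0, -1.0]), "analogue": np.array([2.0, -2.0])},
)
```

Here the combined SFV lands much closer to the historically more accurate method, which is the intended behaviour of error-based weighting.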

  9. An application of ensemble/multi model approach for wind power production forecasting

    Science.gov (United States)

    Alessandrini, S.; Pinson, P.; Hagedorn, R.; Decimi, G.; Sperati, S.

    2011-02-01

    Wind power forecasts for the 3-day-ahead period are becoming increasingly useful and important for reducing grid integration problems and for energy price trading as wind power penetration increases. The accuracy of these forecasts is therefore one of the most important requirements for a successful application. The wind power forecast applied in this study is based on meteorological models that provide 3-day-ahead wind data. A Model Output Statistics correction is then performed to reduce systematic errors caused, for instance, by a poor representation of surface roughness or topography in the meteorological models. For this purpose, a Neural Network (NN) was trained to link the forecasted meteorological data directly to the power data. One wind farm, located in a mountainous area in the south of Italy (Sicily), has been examined. First, we compare the performance of a prediction based on meteorological data from a single model with that obtained by combining models (RAMS, ECMWF deterministic, LAMI). It is shown that the multi-model approach reduces the day-ahead normalized RMSE forecast error (normalized by nominal power) by at least 1% compared with the single-model approach. Finally, we focus on the possibility of using the ensemble model system (EPS by ECMWF) to estimate the accuracy of the hourly, 3-day-ahead power forecast. Contingency diagrams between the RMSE of the deterministic power forecast and the spread of the ensemble members' wind forecasts have been produced. This first analysis suggests that ensemble spread could be used as an indicator of forecast accuracy, at least for the first three days ahead.

  10. Sophisticated approval voting, ignorance priors, and plurality heuristics: a behavioral social choice analysis in a Thurstonian framework.

    Science.gov (United States)

    Regenwetter, Michel; Ho, Moon-Ho R; Tsetlin, Ilia

    2007-10-01

    This project reconciles historically distinct paradigms at the interface between individual and social choice theory, as well as between rational and behavioral decision theory. The authors combine a utility-maximizing prescriptive rule for sophisticated approval voting with the ignorance prior heuristic from behavioral decision research and two types of plurality heuristics to model approval voting behavior. When using a sincere plurality heuristic, voters simplify their decision process by voting for their single favorite candidate. When using a strategic plurality heuristic, voters strategically focus their attention on the 2 front-runners and vote for their preferred candidate among these 2. Using a hierarchy of Thurstonian random utility models, the authors implemented these different decision rules and tested them statistically on 7 real world approval voting elections. They cross-validated their key findings via a psychological Internet experiment. Although a substantial number of voters used the plurality heuristic in the real elections, they did so sincerely, not strategically. Moreover, even though Thurstonian models do not force such agreement, the results show, in contrast to common wisdom about social choice rules, that the sincere social orders by Condorcet, Borda, plurality, and approval voting are identical in all 7 elections and in the Internet experiment. PsycINFO Database Record (c) 2007 APA, all rights reserved.

  11. Medium-range reference evapotranspiration forecasts for the contiguous United States based on multi-model numerical weather predictions

    Science.gov (United States)

    Medina, Hanoi; Tian, Di; Srivastava, Puneet; Pelosi, Anna; Chirico, Giovanni B.

    2018-07-01

    Reference evapotranspiration (ET0) plays a fundamental role in agronomic, forestry, and water resources management. Estimating and forecasting ET0 have long been recognized as a major challenge for researchers and practitioners in these communities. This work explored the potential of multiple leading numerical weather predictions (NWPs) for estimating and forecasting summer ET0 at 101 U.S. Regional Climate Reference Network stations over nine climate regions across the contiguous United States (CONUS). Three leading global NWP model forecasts from the THORPEX Interactive Grand Global Ensemble (TIGGE) dataset were used in this study: the single-model ensemble forecasts from the European Centre for Medium-Range Weather Forecasts (EC), the National Centers for Environmental Prediction Global Forecast System (NCEP), and the United Kingdom Meteorological Office (MO), as well as multi-model ensemble forecasts from combinations of these NWP models. A regression calibration was employed to bias-correct the ET0 forecasts. The impact of individual forecast variables on ET0 forecasts was also evaluated. The results showed that the EC forecasts provided the least error and the highest skill and reliability, followed by the MO and NCEP forecasts. The multi-model ensembles constructed from the combination of EC and MO forecasts performed slightly better than the single-model EC forecasts. The regression process greatly improved ET0 forecast performance, particularly for regions with stations near the coast or with complex orography. The performance of the EC forecasts was only slightly influenced by the number of ensemble members, particularly at short lead times; even with fewer ensemble members, EC still performed better than the other two NWPs. Errors in the radiation forecasts, followed by those in the wind forecasts, had the most detrimental effects on ET0 forecast performance.
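The regression calibration step can be sketched as a simple linear fit of observations on raw forecasts over a training period, then applied to new forecasts. This is a minimal illustration; the study's actual calibration may differ in detail, and the numbers below are invented:

```python
import numpy as np

def regression_calibrate(train_fcst, train_obs, new_fcst):
    # Fit obs ~= intercept + slope * forecast over a training period, then
    # apply the fitted line to new raw forecasts to remove systematic bias
    slope, intercept = np.polyfit(train_fcst, train_obs, 1)
    return intercept + slope * np.asarray(new_fcst)

# Raw ET0 forecasts (mm/day) with a systematic multiplicative/additive bias
raw = np.array([4.0, 5.0, 6.0, 7.0])
observed = 0.8 * raw + 0.5
calibrated = regression_calibrate(raw, observed, [8.0])
```

With a noise-free training set the fit recovers the bias exactly; in practice the regression is estimated from noisy station data, as in the study.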

  12. An Interactive Multi-Model for Consensus on Climate Change

    Energy Technology Data Exchange (ETDEWEB)

    Kocarev, Ljupco [University of California, San Diego

    2014-07-02

    This project aims to develop a new scheme for forming consensus among alternative climate models, which give widely divergent projections of the details of climate change; the scheme is more intelligent than simply averaging the model outputs, or averaging with ex post facto weighting factors. The method under development effectively allows models to assimilate data from one another at run time, with weights chosen in an adaptive training phase using 20th century data, so that the models synchronize with one another as well as with reality. An alternative approach being explored in parallel is the automated combination of equations from different models in an expert-system-like framework.

  13. [Research progress of multi-model medical image fusion and recognition].

    Science.gov (United States)

    Zhou, Tao; Lu, Huiling; Chen, Zhiqiang; Ma, Jingxian

    2013-10-01

    Medical image fusion and recognition have a wide range of applications, such as focal location, cancer staging, and treatment effect assessment. Multi-model medical image fusion and recognition are analyzed and summarized in this paper. First, the problem of multi-model medical image fusion and recognition is introduced, and its advantages and key steps are discussed. Second, three fusion strategies are reviewed from an algorithmic point of view, and four fusion recognition structures are discussed. Third, difficulties, challenges, and possible future research directions are discussed.

  14. xSyn: A Software Tool for Identifying Sophisticated 3-Way Interactions From Cancer Expression Data

    Directory of Open Access Journals (Sweden)

    Baishali Bandyopadhyay

    2017-08-01

    Full Text Available Background: Constructing gene co-expression networks from cancer expression data is important for investigating the genetic mechanisms underlying cancer. However, correlation coefficients and linear regression models cannot capture sophisticated relationships among gene expression profiles. Here, we address the 3-way interaction in which the expression levels of 2 genes cluster in different locations of the expression space under the control of a third gene's expression level. Results: We present xSyn, a software tool for identifying such 3-way interactions from cancer gene expression data, based on an optimization procedure involving UPGMA (Unweighted Pair Group Method with Arithmetic Mean) and synergy. Its effectiveness is demonstrated by application to 2 real gene expression data sets. Conclusions: xSyn is a useful tool for decoding the complex relationships among gene expression profiles. xSyn is available at http://www.bdxconsult.com/xSyn.html.
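The UPGMA step underlying xSyn's optimization can be sketched as follows. This is a textbook implementation run on a toy distance matrix, not xSyn's code, and the synergy computation itself is not shown:

```python
import numpy as np

def upgma(dist):
    # UPGMA: repeatedly merge the two clusters whose average inter-cluster
    # distance (mean of all original pairwise distances between members)
    # is smallest. Returns the list of merges with their heights.
    clusters = [[i] for i in range(dist.shape[0])]
    merges = []
    while len(clusters) > 1:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                avg = np.mean([dist[i, j] for i in clusters[a] for j in clusters[b]])
                if best is None or avg < best[0]:
                    best = (avg, a, b)
        avg, a, b = best
        merges.append((clusters[a][:], clusters[b][:], avg))
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return merges

# Toy distance matrix for three expression profiles
D = np.array([[0.0, 1.0, 4.0],
              [1.0, 0.0, 5.0],
              [4.0, 5.0, 0.0]])
merges = upgma(D)
```

On this matrix, profiles 0 and 1 merge first at height 1.0, and the pair then joins profile 2 at the average distance 4.5.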

  15. When not to copy: female fruit flies use sophisticated public information to avoid mated males

    Science.gov (United States)

    Loyau, Adeline; Blanchet, Simon; van Laere, Pauline; Clobert, Jean; Danchin, Etienne

    2012-10-01

    Semen limitation (lack of semen to fertilize all of a female's eggs) imposes high fitness costs on female partners. Females should therefore avoid mating with semen-limited males. This can be achieved by using public information extracted from watching individual males' previous copulating activities. This adaptive preference should be flexible, given that semen limitation is temporary. We first demonstrate that the number of offspring produced by male Drosophila melanogaster gradually decreases over successive copulations. We then show that females avoid mating with males they have just watched copulating and that visual public cues are sufficient to elicit this response. Finally, after males were given time to replenish their sperm reserves, females no longer avoided the males they had previously seen copulating. These results suggest that female fruit flies may have evolved sophisticated behavioural processes of resistance to semen-limited males, and demonstrate unsuspected adaptive context-dependent mate choice in an invertebrate.

  16. Classifying Multi-Model Wheat Yield Impact Response Surfaces Showing Sensitivity to Temperature and Precipitation Change

    Science.gov (United States)

    Fronzek, Stefan; Pirttioja, Nina; Carter, Timothy R.; Bindi, Marco; Hoffmann, Holger; Palosuo, Taru; Ruiz-Ramos, Margarita; Tao, Fulu; Trnka, Miroslav; Acutis, Marco

    2017-01-01

    application of the EDA and SDA approaches revealed their capability to distinguish: (i) stronger yield responses to precipitation for winter wheat than spring wheat; (ii) differing strengths of response to climate changes for years with anomalous weather conditions compared to period-average conditions; (iii) the influence of site conditions on yield patterns; (iv) similarities in IRS patterns among models with related genealogy; (v) similarities in IRS patterns for models with simpler process descriptions of root growth and water uptake compared to those with more complex descriptions; and (vi) a closer correspondence of IRS patterns in models using partitioning schemes to represent yield formation than in those using a harvest index. Such results can inform future crop modelling studies that seek to exploit the diversity of multi-model ensembles, by distinguishing ensemble members that span a wide range of responses as well as those that display implausible behaviour or strong mutual similarities.

  17. Multi-model ensemble projections of European river floods and high flows at 1.5, 2, and 3 degrees global warming

    Science.gov (United States)

    Thober, Stephan; Kumar, Rohini; Wanders, Niko; Marx, Andreas; Pan, Ming; Rakovec, Oldrich; Samaniego, Luis; Sheffield, Justin; Wood, Eric F.; Zink, Matthias

    2018-01-01

    Severe river floods often result in huge economic losses and fatalities. Since 1980, almost 1500 such events have been reported in Europe. This study investigates climate change impacts on European floods under 1.5, 2, and 3 K global warming. The impacts are assessed employing a multi-model ensemble containing three hydrologic models (HMs: mHM, Noah-MP, PCR-GLOBWB) forced by five CMIP5 general circulation models (GCMs) under three Representative Concentration Pathways (RCPs 2.6, 6.0, and 8.5). This multi-model ensemble is unprecedented with respect to the combination of its size (45 realisations) and its spatial resolution, which is 5 km over the entirety of Europe. Climate change impacts are quantified for high flows and flood events, represented by 10% exceedance probability and annual maxima of daily streamflow, respectively. The multi-model ensemble points to the Mediterranean region as a hotspot of changes with significant decrements in high flows from -11% at 1.5 K up to -30% at 3 K global warming mainly resulting from reduced precipitation. Small changes (impacts of global warming could be similar under 1.5 K and 2 K global warming, but have to account for significantly higher changes under 3 K global warming.
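The two streamflow indicators used above, the 10% exceedance probability for high flows and the annual maximum of daily streamflow for floods, can be computed directly from a daily series. A minimal sketch with synthetic data:

```python
import numpy as np

def high_flow_threshold(daily_q, p=0.10):
    # Streamflow exceeded on a fraction p of days (10% exceedance probability)
    return np.quantile(daily_q, 1.0 - p)

def annual_maxima(daily_q, years):
    # Flood indicator: the maximum daily streamflow of each year
    return {int(y): float(daily_q[years == y].max()) for y in np.unique(years)}

q = np.arange(1.0, 101.0)            # 100 days of synthetic streamflow
thr = high_flow_threshold(q)         # flow exceeded on 10% of days
am = annual_maxima(np.array([1.0, 5.0, 3.0, 9.0]),
                   np.array([2000, 2000, 2001, 2001]))
```

Changes in these indicators between warming levels give the percentage changes reported in the abstract.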

  18. Uncertainty Analysis of Multi-Model Flood Forecasts

    Directory of Open Access Journals (Sweden)

    Erich J. Plate

    2015-12-01

    Full Text Available This paper demonstrates, by means of a systematic uncertainty analysis, that the use of outputs from more than one model can significantly improve conditional forecasts of discharges or water stages, provided the models are structurally different. Discharge forecasts from two models and the actual forecasted discharge are assumed to form a three-dimensional joint probability density function (jpdf), calibrated on long time series of data. The jpdf is decomposed into conditional probability density functions (cpdfs) by means of Bayes' formula, as suggested and explored by Krzysztofowicz in a series of papers. In this paper his approach is simplified to optimize conditional forecasts for any pair of forecast models. Its application is demonstrated by means of models developed in a study of flood forecasting for station Stung Treng on the middle reach of the Mekong River in South-East Asia. Four different forecast models were used and pairwise combined: forecast with no model, with a persistence model, with a regression model, and with a rainfall-runoff model. Working with cpdfs requires determination of the dependency among variables, for which linear regressions are required, as was done by Krzysztofowicz. His Bayesian approach, based on transforming observed probability distributions of discharges and forecasts into normal distributions, is also explored. Results obtained with his method for normal prior and likelihood distributions are identical to results from direct multiple regressions. Furthermore, it is shown that in the present case forecast accuracy is only marginally improved if Weibull-distributed basic data are converted into normally distributed variables.
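The direct multiple regression that the paper reports as equivalent to the normal-normal Bayesian combination can be sketched as follows. The data and the choice of which two models are paired are illustrative (noise-free, so the fit is exact):

```python
import numpy as np

def fit_combination(f1, f2, obs):
    # Least-squares fit obs ~= a + b1*f1 + b2*f2 over the calibration record;
    # with normal priors and likelihoods this matches the Bayesian combination
    A = np.column_stack([np.ones_like(f1), f1, f2])
    coef, *_ = np.linalg.lstsq(A, obs, rcond=None)
    return coef  # (a, b1, b2)

f1 = np.array([1.0, 2.0, 3.0, 4.0])   # e.g. persistence-model forecasts
f2 = np.array([2.0, 1.0, 4.0, 3.0])   # e.g. rainfall-runoff forecasts
obs = 1.0 + 0.5 * f1 + 0.3 * f2       # constructed observations, no noise
a, b1, b2 = fit_combination(f1, f2, obs)
```

The fitted coefficients then define the mean of the conditional forecast distribution given the two model outputs.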

  19. Multi-Model Assessment of Global Hydropower and Cooling Water Discharge Potential Under Climate Change

    Science.gov (United States)

    van Vliet, M. T. H.; van Beek, L. P. H.; Eisener, S.; Wada, Y.; Bierkens, M. F. P.

    2016-01-01

    Worldwide, 98% of total electricity is currently produced by thermoelectric power and hydropower. Climate change is expected to directly impact electricity supply, in terms of both water availability for hydropower generation and cooling water usage for thermoelectric power. Improved understanding of how climate change may impact the availability and temperature of water resources is therefore of major importance. Here we use a multi-model ensemble to show the potential impacts of climate change on global hydropower and cooling water discharge potential. For the first time, combined projections of streamflow and water temperature were produced with three global hydrological models (GHMs) to account for uncertainties in the structure and parametrization of these GHMs in both water availability and water temperature. The GHMs were forced with bias-corrected output of five general circulation models (GCMs) for both the lowest and highest representative concentration pathways (RCP2.6 and RCP8.5). The ensemble projections of streamflow and water temperature were then used to quantify impacts on gross hydropower potential and cooling water discharge capacity of rivers worldwide. We show that global gross hydropower potential is expected to increase between +2.4% (GCM-GHM ensemble mean for RCP 2.6) and +6.3% (RCP 8.5) for the 2080s compared to 1971-2000. The strongest increases in hydropower potential are expected for Central Africa, India, central Asia and the northern high-latitudes, with 18-33% of the world population living in these areas by the 2080s. Global mean cooling water discharge capacity is projected to decrease by 4.5-15% (2080s). The largest reductions are found for the United States, Europe, eastern Asia, and southern parts of South America, Africa and Australia, where strong water temperature increases are projected combined with reductions in mean annual streamflow. These regions are expected to affect 11-14% (for RCP2.6 and the shared socioeconomic
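Gross hydropower potential rests on the standard relation P = ρgQH. A minimal sketch of that relation only; the study aggregates it from projected streamflow over river reaches worldwide, which is not shown here, and the flow and head values are invented:

```python
RHO_WATER = 1000.0  # water density, kg/m^3
G = 9.81            # gravitational acceleration, m/s^2

def gross_hydropower_potential_mw(streamflow_m3s, head_m):
    # Theoretical (gross) hydropower potential P = rho * g * Q * H, in MW
    return RHO_WATER * G * streamflow_m3s * head_m / 1e6

p = gross_hydropower_potential_mw(100.0, 50.0)  # 100 m^3/s over a 50 m head
```

Because P scales linearly with streamflow Q, the percentage changes in hydropower potential reported above follow directly from the projected streamflow changes.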

  20. Experimental real-time multi-model ensemble (MME) prediction of ...

    Indian Academy of Sciences (India)

    calibration (training) has to be of good quality. Otherwise, it might degrade the MME results. Early works by ... ECMWF ensemble data (Evans et al 2000), and they showed the superiority of the multi-model system over the ..... eral idea of the quality of rainfall forecasts in terms of error statistics for monsoon for the member.

  1. Reproducing multi-model ensemble average with Ensemble-averaged Reconstructed Forcings (ERF) in regional climate modeling

    Science.gov (United States)

    Erfanian, A.; Fomenko, L.; Wang, G.

    2016-12-01

    Multi-model ensemble (MME) averaging is considered the most reliable approach for simulating both present-day and future climates, and it has been a primary reference for drawing conclusions in major coordinated studies, e.g. the IPCC Assessment Reports and CORDEX. The biases of individual models cancel each other out in the MME average, enabling the ensemble mean to outperform individual members in simulating the mean climate. This enhancement, however, comes at tremendous computational cost, which is especially inhibiting for regional climate modeling, as model uncertainties can originate from both RCMs and the driving GCMs. Here we propose the Ensemble-based Reconstructed Forcings (ERF) approach to regional climate modeling, which achieves a similar level of bias reduction at a fraction of the cost of the conventional MME approach. The new method constructs a single set of initial and boundary conditions (IBCs) by averaging the IBCs of multiple GCMs, and drives the RCM with this ensemble average of IBCs in a single run. Using a regional climate model (RegCM4.3.4-CLM4.5), we tested the method over West Africa for multiple combinations of (up to six) GCMs. Our results indicate that the performance of the ERF method is comparable to that of the MME average in simulating the mean climate. The bias reduction seen in ERF simulations is achieved by using more realistic IBCs in solving the system of equations underlying the RCM physics and dynamics. This endows the new method with a theoretical advantage in addition to its reduced computational cost. The ERF output is an unaltered solution of the RCM, as opposed to a climate state that might not be physically plausible due to the averaging of multiple solutions in the conventional MME approach. The ERF approach should be considered for use in major international efforts such as CORDEX. Key words: Multi-model ensemble, ensemble analysis, ERF, regional climate modeling
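The core ERF construction, averaging the IBC fields of several GCMs into a single forcing before the one RCM run, can be sketched as below. The toy 2x2 fields stand in for real IBCs, which are multi-variable, time-evolving 3-D fields regridded to a common grid:

```python
import numpy as np

def ensemble_reconstructed_forcing(gcm_fields):
    # gcm_fields: arrays of identical shape (e.g. time x lat x lon), one per
    # GCM, already interpolated to a common grid; the ERF is their mean,
    # used as the single IBC set driving the RCM
    return np.mean(np.stack(gcm_fields), axis=0)

gcm_a = np.array([[280.0, 282.0], [284.0, 286.0]])   # e.g. surface temperature (K)
gcm_b = np.array([[282.0, 284.0], [286.0, 288.0]])
erf = ensemble_reconstructed_forcing([gcm_a, gcm_b])
```

The cost saving is then immediate: one RCM integration driven by `erf` replaces one integration per GCM in the conventional MME approach.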

  2. Exploring the predictive power of interaction terms in a sophisticated risk equalization model using regression trees.

    Science.gov (United States)

    van Veen, S H C M; van Kleef, R C; van de Ven, W P M M; van Vliet, R C J A

    2018-02-01

    This study explores the predictive power of interaction terms between the risk adjusters in the Dutch risk equalization (RE) model of 2014. Due to the sophistication of this RE-model and the complexity of the associations in the dataset (N = ~16.7 million), there are theoretically more than a million interaction terms. We used regression tree modelling, which has been applied rarely within the field of RE, to identify interaction terms that statistically significantly explain variation in observed expenses that is not already explained by the risk adjusters in this RE-model. The interaction terms identified were used as additional risk adjusters in the RE-model. We found evidence that interaction terms can improve the prediction of expenses overall and for specific groups in the population. However, the prediction of expenses for some other selective groups may deteriorate. Thus, interactions can reduce financial incentives for risk selection for some groups but may increase them for others. Furthermore, because regression trees are not robust, additional criteria are needed to decide which interaction terms should be used in practice. These criteria could be the right incentive structure for risk selection and efficiency or the opinion of medical experts. Copyright © 2017 John Wiley & Sons, Ltd.
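The effect the study tests for, an interaction term explaining expense variation beyond the main-effect risk adjusters, can be illustrated on simulated data. All variable names, coefficients, and the simple OLS comparison below are invented for illustration and do not reproduce the Dutch RE-model or its regression-tree search:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
age_risk = rng.integers(0, 2, n).astype(float)   # illustrative binary risk adjusters
chronic = rng.integers(0, 2, n).astype(float)
# Simulated annual expenses with a genuine age x chronic-condition interaction
expenses = (100.0 + 50.0 * age_risk + 80.0 * chronic
            + 120.0 * age_risk * chronic + rng.normal(0.0, 20.0, n))

def r_squared(columns, y):
    # Share of expense variation explained by an OLS fit on the given columns
    X = np.column_stack([np.ones(len(y))] + list(columns))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1.0 - np.var(y - X @ beta) / np.var(y)

r2_main = r_squared([age_risk, chronic], expenses)
r2_interaction = r_squared([age_risk, chronic, age_risk * chronic], expenses)
```

When the data truly contain an interaction, adding the product term raises the explained variance, which is the kind of improvement the regression-tree search is meant to surface among the millions of candidate terms.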

  3. The tool for the automatic analysis of lexical sophistication (TAALES): version 2.0.

    Science.gov (United States)

    Kyle, Kristopher; Crossley, Scott; Berger, Cynthia

    2017-07-11

    This study introduces the second release of the Tool for the Automatic Analysis of Lexical Sophistication (TAALES 2.0), a freely available and easy-to-use text analysis tool. TAALES 2.0 is housed on a user's hard drive (allowing for secure data processing) and is available on most operating systems (Windows, Mac, and Linux). TAALES 2.0 adds 316 indices to the original tool. These indices are related to word frequency, word range, n-gram frequency, n-gram range, n-gram strength of association, contextual distinctiveness, word recognition norms, semantic network, and word neighbors. In this study, we validated TAALES 2.0 by investigating whether its indices could be used to model both holistic scores of lexical proficiency in free writes and word choice scores in narrative essays. The results indicated that the TAALES 2.0 indices could be used to explain 58% of the variance in lexical proficiency scores and 32% of the variance in word-choice scores. Newly added TAALES 2.0 indices, including those related to n-gram association strength, word neighborhood, and word recognition norms, featured heavily in these predictor models, suggesting that TAALES 2.0 represents a substantial upgrade.

  4. The State of Nursing Home Information Technology Sophistication in Rural and Nonrural US Markets.

    Science.gov (United States)

    Alexander, Gregory L; Madsen, Richard W; Miller, Erin L; Wakefield, Douglas S; Wise, Keely K; Alexander, Rachel L

    2017-06-01

    To test for significant differences in information technology sophistication (ITS) in US nursing homes (NHs) based on location. We administered a primary survey from January 2014 to July 2015 to NHs in each US state. The survey was cross-sectional and examined 3 dimensions (IT capabilities, extent of IT use, degree of IT integration) across 3 domains (resident care, clinical support, administrative activities) of ITS. ITS was broken down by NH location, and mean responses were compared across 4 NH categories (Metropolitan, Micropolitan, Small Town, and Rural) for all 9 ITS dimension-domain combinations. Least squares means and Tukey's method were used for multiple comparisons. These methods yielded 815/1,799 surveys (45% response rate). In every health care domain (resident care, clinical support, and administrative activities), statistically significant differences in facility ITS occurred between more populated (Metropolitan or Micropolitan) and less populated (Small Town or Rural) areas. This study represents the most current national assessment of NH IT since 2004. Historically, NH IT has been used solely for administrative activities and much less for resident care and clinical support. However, the results are encouraging, as ITS in the other domains appears to be greater than previously imagined. © 2016 National Rural Health Association.

  5. Simple Plans or Sophisticated Habits? State, Transition and Learning Interactions in the Two-Step Task.

    Science.gov (United States)

    Akam, Thomas; Costa, Rui; Dayan, Peter

    2015-12-01

    The recently developed 'two-step' behavioural task promises to differentiate model-based from model-free reinforcement learning, while generating neurophysiologically-friendly decision datasets with parametric variation of decision variables. These desirable features have prompted its widespread adoption. Here, we analyse the interactions between a range of different strategies and the structure of transitions and outcomes in order to examine constraints on what can be learned from behavioural performance. The task involves a trade-off between the need for stochasticity, to allow strategies to be discriminated, and a need for determinism, so that it is worth subjects' investment of effort to exploit the contingencies optimally. We show through simulation that under certain conditions model-free strategies can masquerade as being model-based. We first show that seemingly innocuous modifications to the task structure can induce correlations between action values at the start of the trial and the subsequent trial events in such a way that analysis based on comparing successive trials can lead to erroneous conclusions. We confirm the power of a suggested correction to the analysis that can alleviate this problem. We then consider model-free reinforcement learning strategies that exploit correlations between where rewards are obtained and which actions have high expected value. These generate behaviour that appears model-based under these, and also more sophisticated, analyses. Exploiting the full potential of the two-step task as a tool for behavioural neuroscience requires an understanding of these issues.

  6. Simple Plans or Sophisticated Habits? State, Transition and Learning Interactions in the Two-Step Task

    Science.gov (United States)

    Akam, Thomas; Costa, Rui; Dayan, Peter

    2015-01-01

    The recently developed ‘two-step’ behavioural task promises to differentiate model-based from model-free reinforcement learning, while generating neurophysiologically-friendly decision datasets with parametric variation of decision variables. These desirable features have prompted its widespread adoption. Here, we analyse the interactions between a range of different strategies and the structure of transitions and outcomes in order to examine constraints on what can be learned from behavioural performance. The task involves a trade-off between the need for stochasticity, to allow strategies to be discriminated, and a need for determinism, so that it is worth subjects’ investment of effort to exploit the contingencies optimally. We show through simulation that under certain conditions model-free strategies can masquerade as being model-based. We first show that seemingly innocuous modifications to the task structure can induce correlations between action values at the start of the trial and the subsequent trial events in such a way that analysis based on comparing successive trials can lead to erroneous conclusions. We confirm the power of a suggested correction to the analysis that can alleviate this problem. We then consider model-free reinforcement learning strategies that exploit correlations between where rewards are obtained and which actions have high expected value. These generate behaviour that appears model-based under these, and also more sophisticated, analyses. Exploiting the full potential of the two-step task as a tool for behavioural neuroscience requires an understanding of these issues. PMID:26657806

  7. Simple Plans or Sophisticated Habits? State, Transition and Learning Interactions in the Two-Step Task.

    Directory of Open Access Journals (Sweden)

    Thomas Akam

    2015-12-01

    Full Text Available The recently developed 'two-step' behavioural task promises to differentiate model-based from model-free reinforcement learning, while generating neurophysiologically-friendly decision datasets with parametric variation of decision variables. These desirable features have prompted its widespread adoption. Here, we analyse the interactions between a range of different strategies and the structure of transitions and outcomes in order to examine constraints on what can be learned from behavioural performance. The task involves a trade-off between the need for stochasticity, to allow strategies to be discriminated, and a need for determinism, so that it is worth subjects' investment of effort to exploit the contingencies optimally. We show through simulation that under certain conditions model-free strategies can masquerade as being model-based. We first show that seemingly innocuous modifications to the task structure can induce correlations between action values at the start of the trial and the subsequent trial events in such a way that analysis based on comparing successive trials can lead to erroneous conclusions. We confirm the power of a suggested correction to the analysis that can alleviate this problem. We then consider model-free reinforcement learning strategies that exploit correlations between where rewards are obtained and which actions have high expected value. These generate behaviour that appears model-based under these, and also more sophisticated, analyses. Exploiting the full potential of the two-step task as a tool for behavioural neuroscience requires an understanding of these issues.

  8. A sophisticated simulation for the fracture behavior of concrete material using XFEM

    Science.gov (United States)

    Zhai, Changhai; Wang, Xiaomin; Kong, Jingchang; Li, Shuang; Xie, Lili

    2017-10-01

    The development of a powerful numerical model to simulate the fracture behavior of concrete material has long been one of the dominant research areas in earthquake engineering. A reliable model should be able to adequately represent the discontinuous characteristics of cracks and simulate various failure behaviors under complicated loading conditions. In this paper, a numerical formulation, which incorporates a sophisticated rigid-plastic interface constitutive model coupling cohesion softening, contact, friction and shear dilatation into the XFEM, is proposed to describe various crack behaviors of concrete material. An effective numerical integration scheme for accurately assembling the contribution to the weak form on both sides of the discontinuity is introduced. The effectiveness of the proposed method has been assessed by simulating several well-known experimental tests. It is concluded that the numerical method can successfully capture the crack paths and accurately predict the fracture behavior of concrete structures. The influence of mode-II parameters on the mixed-mode fracture behavior is further investigated to better determine these parameters.

  9. Nurturing Opportunity Identification for Business Sophistication in a Cross-disciplinary Study Environment

    Directory of Open Access Journals (Sweden)

    Karine Oganisjana

    2012-12-01

    Full Text Available Opportunity identification is the key element of the entrepreneurial process; therefore the issue of developing this skill in students is a crucial task in contemporary European education, which has recognized entrepreneurship as one of the key lifelong learning competences. The earlier opportunity identification becomes a habitual way of thinking and behavior across a broad range of contexts, the more likely it is that an entrepreneurial disposition will steadily reside in students. In order to nurture opportunity identification in students so that they are able to organize sophisticated businesses in the future, certain demands must also be made of the teacher – the person who is to promote these qualities in their students. The paper reflects some findings of research conducted within the framework of a workplace learning project for the teachers of one of Riga's secondary schools (Latvia). The main goal of the project was to teach the teachers to identify hidden inner links between apparently unrelated things, phenomena and events within the 10th grade study curriculum, connect them together and create new opportunities. The creation and solution of cross-disciplinary tasks were the means for achieving this goal.

  10. Ranking network of a captive rhesus macaque society: a sophisticated corporative kingdom.

    Science.gov (United States)

    Fushing, Hsieh; McAssey, Michael P; Beisner, Brianne; McCowan, Brenda

    2011-03-15

    We develop a three-step computing approach to explore a hierarchical ranking network for a society of captive rhesus macaques. The computed network is sufficiently informative to address the question: Is the ranking network for a rhesus macaque society more like a kingdom or a corporation? Our computations are based on a three-step approach. These steps are devised to deal with the tremendous challenges stemming from the transitivity of dominance as a necessary constraint on the ranking relations among all individual macaques, and the very high sampling heterogeneity in the behavioral conflict data. The first step simultaneously infers the ranking potentials among all network members, which requires accommodation of heterogeneous measurement error inherent in behavioral data. Our second step estimates the social rank for all individuals by minimizing the network-wide errors in the ranking potentials. The third step provides a way to compute confidence bounds for selected empirical features in the social ranking. We apply this approach to two sets of conflict data pertaining to two captive societies of adult rhesus macaques. The resultant ranking network for each society is found to be a sophisticated mixture of both a kingdom and a corporation. Also, for validation purposes, we reanalyze conflict data from twenty longhorn sheep and demonstrate that our three-step approach is capable of correctly computing a ranking network by eliminating all ranking error.

  11. Ranking network of a captive rhesus macaque society: a sophisticated corporative kingdom.

    Directory of Open Access Journals (Sweden)

    Hsieh Fushing

    2011-03-01

    Full Text Available We develop a three-step computing approach to explore a hierarchical ranking network for a society of captive rhesus macaques. The computed network is sufficiently informative to address the question: Is the ranking network for a rhesus macaque society more like a kingdom or a corporation? Our computations are based on a three-step approach. These steps are devised to deal with the tremendous challenges stemming from the transitivity of dominance as a necessary constraint on the ranking relations among all individual macaques, and the very high sampling heterogeneity in the behavioral conflict data. The first step simultaneously infers the ranking potentials among all network members, which requires accommodation of heterogeneous measurement error inherent in behavioral data. Our second step estimates the social rank for all individuals by minimizing the network-wide errors in the ranking potentials. The third step provides a way to compute confidence bounds for selected empirical features in the social ranking. We apply this approach to two sets of conflict data pertaining to two captive societies of adult rhesus macaques. The resultant ranking network for each society is found to be a sophisticated mixture of both a kingdom and a corporation. Also, for validation purposes, we reanalyze conflict data from twenty longhorn sheep and demonstrate that our three-step approach is capable of correctly computing a ranking network by eliminating all ranking error.

  12. Background field removal technique using regularization enabled sophisticated harmonic artifact reduction for phase data with varying kernel sizes.

    Science.gov (United States)

    Kan, Hirohito; Kasai, Harumasa; Arai, Nobuyuki; Kunitomo, Hiroshi; Hirose, Yasujiro; Shibamoto, Yuta

    2016-09-01

    An effective background field removal technique is desired for more accurate quantitative susceptibility mapping (QSM) prior to dipole inversion. The aim of this study was to evaluate the accuracy of the regularization enabled sophisticated harmonic artifact reduction for phase data with varying spherical kernel sizes (REV-SHARP) method using a three-dimensional head phantom and human brain data. The proposed REV-SHARP method used the spherical mean value operation and Tikhonov regularization in the deconvolution process, with kernel sizes varying from 2 to 14 mm. The kernel sizes were gradually reduced, similar to the SHARP with varying spherical kernel (VSHARP) method. We determined the relative errors and relationships between the true local field and estimated local field in REV-SHARP, VSHARP, projection onto dipole fields (PDF), and regularization enabled SHARP (RESHARP). A human experiment was also conducted using REV-SHARP, VSHARP, PDF, and RESHARP. The relative errors in the numerical phantom study were 0.386, 0.448, 0.838, and 0.452 for REV-SHARP, VSHARP, PDF, and RESHARP, respectively. The REV-SHARP result exhibited the highest correlation between the true local field and estimated local field. The linear regression slopes were 1.005, 1.124, 0.988, and 0.536 for REV-SHARP, VSHARP, PDF, and RESHARP in regions of interest on the three-dimensional head phantom. In human experiments, no obvious errors due to artifacts were present in REV-SHARP. The proposed REV-SHARP is a new method that combines variable spherical kernel sizes with Tikhonov regularization. This technique may enable more accurate background field removal and help achieve better QSM accuracy. Copyright © 2016 Elsevier Inc. All rights reserved.
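A minimal sketch of the core SHARP-style operation (spherical mean value filtering followed by Tikhonov-regularised deconvolution) is given below. It is a toy periodic-domain version, not the REV-SHARP implementation, and omits brain masking and the varying-kernel schedule:

```python
import numpy as np

def sharp_like(field, radius, lam=0.01):
    """Toy SHARP-style background removal on a periodic 3-D grid.

    A harmonic background field is invariant under the spherical mean
    value (SMV) operation, so (delta - SMV) annihilates it.  The local
    field is then recovered by Tikhonov-regularised deconvolution
    instead of truncated division.
    """
    n = field.shape[0]
    # Normalised spherical kernel of the given radius.
    ax = np.arange(n) - n // 2
    X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
    sphere = (X**2 + Y**2 + Z**2 <= radius**2).astype(float)
    sphere /= sphere.sum()
    # delta - SMV in Fourier space (kernel centred via ifftshift).
    K = 1.0 - np.fft.fftn(np.fft.ifftshift(sphere))
    smv_filtered = np.fft.fftn(field) * K
    # Tikhonov-regularised inverse: lam damps near-zero kernel values.
    local = np.fft.ifftn(smv_filtered * np.conj(K) / (np.abs(K)**2 + lam))
    return local.real

# Synthetic example: slowly varying "background" plus a compact source.
n = 32
x = np.linspace(-1, 1, n)
bg = 5.0 * x[:, None, None]
local_src = np.zeros((n, n, n)); local_src[16, 16, 16] = 1.0
est = sharp_like(bg + local_src, radius=4)
```

REV-SHARP's refinement, per the abstract, is to repeat this with kernel radii shrinking from 14 mm down to 2 mm near the brain boundary while keeping the Tikhonov term in the inversion.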

  13. Sophisticated Online Learning Scheme for Green Resource Allocation in 5G Heterogeneous Cloud Radio Access Networks

    KAUST Repository

    Alqerm, Ismail

    2018-01-23

    5G is the upcoming evolution for the current cellular networks that aims at satisfying the future demand for data services. Heterogeneous cloud radio access networks (H-CRANs) are envisioned as a new trend of 5G that exploits the advantages of heterogeneous and cloud radio access networks to enhance spectral and energy efficiency. Remote radio heads (RRHs) are small cells utilized to provide high data rates for users with high quality of service (QoS) requirements, while a high power macro base station (BS) is deployed for coverage maintenance and low QoS user service. Inter-tier interference between macro BSs and RRHs and energy efficiency are critical challenges that accompany resource allocation in H-CRANs. Therefore, we propose an efficient resource allocation scheme using online learning, which mitigates interference and maximizes energy efficiency while maintaining QoS requirements for all users. The resource allocation includes resource blocks (RBs) and power. The proposed scheme is implemented using two approaches: centralized, where the resource allocation is processed at a controller integrated with the baseband processing unit, and decentralized, where macro BSs cooperate to achieve the optimal resource allocation strategy. To foster the performance of such a sophisticated scheme with model-free learning, we consider users' priority in RB allocation and a compact state representation learning methodology to improve the speed of convergence and account for the curse of dimensionality during the learning process. The proposed scheme, including both approaches, is implemented using a software defined radio testbed. The obtained testbed and simulation results confirm that the proposed resource allocation solution in H-CRANs increases the energy efficiency significantly and maintains users' QoS.
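The online-learning idea can be sketched in its simplest stateless (bandit) form. The per-RB efficiencies below are invented, and the paper's actual scheme additionally handles interference, QoS constraints, power allocation, and compact state representations:

```python
import random

# Toy online-learning sketch: an agent repeatedly picks a resource block
# (RB) and learns which one yields the best reward.  The reward model is
# a stand-in for energy efficiency, not the scheme from the paper.
N_RB = 4
true_efficiency = [0.2, 0.5, 0.9, 0.4]  # hypothetical per-RB efficiency
q = [0.0] * N_RB                        # running value estimate per RB
alpha, eps = 0.1, 0.1                   # learning rate, exploration rate

random.seed(42)
for t in range(5000):
    # Epsilon-greedy: mostly exploit the best-known RB, sometimes explore.
    if random.random() < eps:
        a = random.randrange(N_RB)
    else:
        a = max(range(N_RB), key=q.__getitem__)
    reward = true_efficiency[a] + random.gauss(0, 0.05)  # noisy feedback
    q[a] += alpha * (reward - q[a])     # stateless Q-update (bandit form)

best = max(range(N_RB), key=q.__getitem__)
print(best)  # converges to RB 2, the most energy-efficient block
```

The full scheme generalises this loop to a state-dependent Q-function shared across BSs (decentralized) or held at the cloud controller (centralized).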

  14. Multi-disciplinary communication networks for skin risk assessment in nursing homes with high IT sophistication.

    Science.gov (United States)

    Alexander, Gregory L; Pasupathy, Kalyan S; Steege, Linsey M; Strecker, E Bradley; Carley, Kathleen M

    2014-08-01

    The role of nursing home (NH) information technology (IT) in quality improvement has not been clearly established, and its impacts on communication between caregivers and patient outcomes in these settings deserve further attention. In this research, we describe a mixed-method approach to explore communication strategies used by healthcare providers for resident skin risk in NHs with high IT sophistication (ITS). The sample included NHs participating in the statewide survey of ITS. We incorporated rigorous observation of 8- and 12-h shifts, and focus groups to identify how NH IT and a range of synchronous and asynchronous tools are used. Social network analysis tools and qualitative analysis were used to analyze data and identify relationships between ITS dimensions and communication interactions between care providers. Two of the nine ITS dimensions (resident care-technological and administrative activities-technological) and total ITS were significantly negatively correlated with the number of unique interactions. The more processes in resident care and administrative activities are supported by technology, the lower the number of observed unique interactions. Additionally, four thematic areas emerged from staff focus groups that demonstrate how important IT is to resident care in these facilities, including providing resident-centered care, teamwork and collaboration, maintaining safety and quality, and using standardized information resources. Our findings in this study confirm prior research that as technology support (resident care and administrative activities) and overall ITS increase, observed interactions between staff members decrease. Conversations during staff interviews focused on how technology facilitated resident-centered care through enhanced information sharing, greater virtual collaboration between team members, and improved care delivery.
These results provide evidence for improving the design and implementation of IT in long term care systems to support

  15. Robust multi-model control of an autonomous wind power system

    Energy Technology Data Exchange (ETDEWEB)

    Cutululis, Nicolas Antonio; Hansen, Anca Daniela; Soerensen, Poul [Risoe National Lab., Wind Energy Dept., Roskilde (Denmark); Ceanga, Emil [' Dunarea de Jos' Univ., Faculty of Electrical Engineering, Galati (Romania)

    2006-07-01

    This article presents a robust multi-model control structure for a wind power system that uses a variable speed wind turbine (VSWT) driving a permanent magnet synchronous generator (PMSG) connected to a local grid. The control problem consists in maximizing the energy captured from the wind for varying wind speeds. The VSWT-PMSG linearized model analysis reveals the resonant nature of its dynamic at points on the optimal regimes characteristic (ORC). The natural frequency of the system and the damping factor are strongly dependent on the operating point on the ORC. Under these circumstances a robust multi-model control structure is designed. The simulation results prove the viability of the proposed control structure. (Author)

  16. Robust multi-model control of an autonomous wind power system

    Science.gov (United States)

    Cutululis, Nicolas Antonio; Ceanga, Emil; Hansen, Anca Daniela; Sørensen, Poul

    2006-09-01

    This article presents a robust multi-model control structure for a wind power system that uses a variable speed wind turbine (VSWT) driving a permanent magnet synchronous generator (PMSG) connected to a local grid. The control problem consists in maximizing the energy captured from the wind for varying wind speeds. The VSWT-PMSG linearized model analysis reveals the resonant nature of its dynamic at points on the optimal regimes characteristic (ORC). The natural frequency of the system and the damping factor are strongly dependent on the operating point on the ORC. Under these circumstances a robust multi-model control structure is designed. The simulation results prove the viability of the proposed control structure.

  17. A Multi-Model Stereo Similarity Function Based on Monogenic Signal Analysis in Poisson Scale Space

    Directory of Open Access Journals (Sweden)

    Jinjun Li

    2011-01-01

    Full Text Available A stereo similarity function based on local multi-model monogenic image feature descriptors (LMFD) is proposed to match interest points and estimate the disparity map for stereo images. Local multi-model monogenic image features include the local orientation and instantaneous phase of the gray monogenic signal, the local color phase of the color monogenic signal, and local mean colors in the multiscale color monogenic signal framework. The gray monogenic signal, which is the extension of the analytic signal to gray level images using the Dirac operator and Laplace equation, consists of the local amplitude, local orientation, and instantaneous phase of a 2D image signal. The color monogenic signal is the extension of the monogenic signal to color images based on Clifford algebras. The local color phase can be estimated by computing the geometric product between the color monogenic signal and a unit reference vector in RGB color space. Experimental results on synthetic and natural stereo images demonstrate the performance of the proposed approach.
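The gray monogenic signal described above can be sketched with an FFT-based Riesz transform. This is a bare-bones illustration that omits the Poisson scale space band-pass filtering and the Clifford-algebra colour extension:

```python
import numpy as np

def monogenic(img):
    """Gray monogenic signal of a 2-D image via the Riesz transform.

    Returns local amplitude, instantaneous phase, and local orientation.
    Band-pass pre-filtering and the colour extension are omitted.
    """
    rows, cols = img.shape
    u = np.fft.fftfreq(rows)[:, None]
    v = np.fft.fftfreq(cols)[None, :]
    radius = np.sqrt(u**2 + v**2)
    radius[0, 0] = 1.0                  # avoid division by zero at DC
    F = np.fft.fft2(img)
    # Riesz transform pair: frequency responses -i*u/|w| and -i*v/|w|.
    r1 = np.real(np.fft.ifft2(F * (-1j * u / radius)))
    r2 = np.real(np.fft.ifft2(F * (-1j * v / radius)))
    amplitude = np.sqrt(img**2 + r1**2 + r2**2)
    phase = np.arctan2(np.hypot(r1, r2), img)   # instantaneous phase
    orientation = np.arctan2(r2, r1)            # local orientation
    return amplitude, phase, orientation

# Horizontal sinusoidal grating as a simple test image.
grating = np.sin(np.linspace(0, 8 * np.pi, 64))[None, :] * np.ones((64, 1))
amp, ph, ori = monogenic(grating)
```

The descriptor in the abstract stacks these quantities (plus their colour counterparts) per pixel and compares them between the left and right images to score candidate matches.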

  18. MVL spatiotemporal analysis for model intercomparison in EPS: application to the DEMETER multi-model ensemble

    Science.gov (United States)

    Fernández, J.; Primo, C.; Cofiño, A. S.; Gutiérrez, J. M.; Rodríguez, M. A.

    2009-08-01

    In a recent paper, Gutiérrez et al. (Nonlinear Process Geophys 15(1):109-114, 2008) introduced a new characterization of spatiotemporal error growth—the so called mean-variance logarithmic (MVL) diagram—and applied it to study ensemble prediction systems (EPS); in particular, they analyzed single-model ensembles obtained by perturbing the initial conditions. In the present work, the MVL diagram is applied to multi-model ensembles analyzing also the effect of model formulation differences. To this aim, the MVL diagram is systematically applied to the multi-model ensemble produced in the EU-funded DEMETER project. It is shown that the shared building blocks (atmospheric and ocean components) impose similar dynamics among different models and, thus, contribute to poorly sampling the model formulation uncertainty. This dynamical similarity should be taken into account, at least as a pre-screening process, before applying any objective weighting method.

  19. Mass discharge estimation from contaminated sites: Multi-model solutions for assessment of conceptual uncertainty

    DEFF Research Database (Denmark)

    Thomsen, Nanna Isbak; Troldborg, Mads; McKnight, Ursula S.

    2012-01-01

    …the appropriate management option. The uncertainty of mass discharge estimates depends greatly on the extent of the site characterization. A good approach for uncertainty estimation will be flexible with respect to the investigation level, and account for both parameter and conceptual model uncertainty. We propose a method for quantifying the uncertainty of dynamic mass discharge estimates from contaminant point sources on the local scale. The method considers both parameter and conceptual uncertainty through a multi-model approach. The multi-model approach evaluates multiple conceptual models for the same site. The different conceptual models consider different source characterizations and hydrogeological descriptions. The idea is to include a set of essentially different conceptual models, where each model is believed to be a realistic representation of the given site, based on the current level…

  20. LGM permafrost distribution: how well can the latest PMIP multi-model ensembles perform reconstruction?

    OpenAIRE

    Saito, K.; Sueyoshi, T.; Marchenko, S.; Romanovsky, V.; Otto-Bliesner, B.; Walsh, J.; Bigelow, N.; Hendricks, A.; Yoshikawa, K.

    2013-01-01

    Here, global-scale frozen ground distribution from the Last Glacial Maximum (LGM) has been reconstructed using multi-model ensembles of global climate models, and then compared with evidence-based knowledge and earlier numerical results. Modeled soil temperatures, taken from Paleoclimate Modelling Intercomparison Project phase III (PMIP3) simulations, were used to diagnose the subsurface thermal regime and determine underlying frozen ground types for the present day (pre-industrial; 0 kya) an...

  1. LGM permafrost distribution: how well can the latest PMIP multi-model ensembles reconstruct?

    OpenAIRE

    K. Saito; T. Sueyoshi; S. Marchenko; V. Romanovsky; B. Otto-Bliesner; J. Walsh; N. Bigelow; A. Hendricks; K. Yoshikawa

    2013-01-01

    Global-scale frozen ground distribution during the Last Glacial Maximum (LGM) was reconstructed using multi-model ensembles of global climate models, and then compared with evidence-based knowledge and earlier numerical results. Modeled soil temperatures, taken from Paleoclimate Modelling Intercomparison Project Phase III (PMIP3) simulations, were used to diagnose the subsurface thermal regime and determine underlying frozen ground types for the present-day (pre-industrial; 0 k) and the LGM (...

  2. Predicting lymphatic filariasis transmission and elimination dynamics using a multi-model ensemble framework

    Directory of Open Access Journals (Sweden)

    Morgan E. Smith

    2017-03-01

    Full Text Available Mathematical models of parasite transmission provide powerful tools for assessing the impacts of interventions. Owing to complexity and uncertainty, no single model may capture all features of transmission and elimination dynamics. Multi-model ensemble modelling offers a framework to help overcome biases of single models. We report on the development of a first multi-model ensemble of three lymphatic filariasis (LF) models (EPIFIL, LYMFASIM, and TRANSFIL), and evaluate its predictive performance in comparison with that of the constituents using calibration and validation data from three case study sites, one each from the three major LF endemic regions: Africa, Southeast Asia and Papua New Guinea (PNG). We assessed the performance of the respective models for predicting the outcomes of annual MDA strategies for various baseline scenarios thought to exemplify the current endemic conditions in the three regions. The results show that the constructed multi-model ensemble outperformed the single models when evaluated across all sites. Single models that best fitted calibration data tended to do less well in simulating the out-of-sample, or validation, intervention data. Scenario modelling results demonstrate that the multi-model ensemble is able to compensate for variance between single models in order to produce more plausible predictions of intervention impacts. Our results highlight the value of an ensemble approach to modelling parasite control dynamics. However, its optimal use will require further methodological improvements as well as consideration of the organizational mechanisms required to ensure that modelling results and data are shared effectively between all stakeholders.

  3. Agent Based Fuzzy T-S Multi-Model System and Its Applications

    Directory of Open Access Journals (Sweden)

    Xiaopeng Zhao

    2015-11-01

    Full Text Available Based on the basic concepts of agent and the fuzzy T-S model, an agent based fuzzy T-S multi-model (ABFT-SMM) system is proposed in this paper. Different from the traditional method, the parameters and the membership value of the agent can be adjusted along with the process. In this system, each agent can be described as a dynamic equation, which can be seen as the local part of the multi-model, and it can execute the task alone or collaborate with other agents to accomplish a fixed goal. It is proved in this paper that the agent based fuzzy T-S multi-model system can approximate any linear or nonlinear system at arbitrary accuracy. The applications to the benchmark problem of chaotic time series prediction, a water heater system and a waste heat utilizing process illustrate the viability and the efficiency of the mentioned approach. At the same time, the method can easily be applied to a number of engineering fields, including identification, nonlinear control, fault diagnostics and performance analysis.
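The blending of local linear models by normalised membership functions, which underlies any Takagi-Sugeno multi-model, can be sketched as follows (the rule centres and consequent parameters are illustrative, not taken from the paper):

```python
import numpy as np

# Minimal Takagi-Sugeno sketch: a nonlinear map approximated by fuzzily
# blended local linear models.  Three rules with Gaussian memberships.
centers = np.array([-2.0, 0.0, 2.0])

def memberships(x, width=1.0):
    """Normalised firing strengths of the three rules at input x."""
    w = np.exp(-((x - centers) ** 2) / (2 * width ** 2))
    return w / w.sum()

# Local linear consequents y_i = a_i * x + b_i, one per rule.
a = np.array([-1.0, 0.0, 1.0])
b = np.array([-1.0, 0.0, -1.0])

def ts_output(x):
    """T-S output: membership-weighted blend of the local linear models."""
    w = memberships(x)
    return float(np.dot(w, a * x + b))

y = ts_output(1.0)
```

In the agent-based version described by the abstract, each local model would be owned by an agent whose parameters and membership value adapt online instead of being fixed as here.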

  4. Reactive polymer coatings: A robust platform towards sophisticated surface engineering for biotechnology

    Science.gov (United States)

    Chen, Hsien-Yeh

    Functionalized poly(p-xylylenes) or so-called reactive polymers can be synthesized via chemical vapor deposition (CVD) polymerization. The resulting ultra-thin coatings are pinhole-free and can be conformally deposited to a wide range of substrates and materials. More importantly, the equipped functional groups can serve as anchoring sites for tailoring the surface properties, making these reactive coatings a robust platform that can deal with sophisticated challenges faced in biointerfaces. In the work presented herein, surface coatings presenting various functional groups were prepared by the CVD process. Such surfaces include an aldehyde-functionalized coating to precisely immobilize saccharide molecules onto well-defined areas and an alkyne-functionalized coating to click azide-modified molecules via the Huisgen 1,3-dipolar cycloaddition reaction. Moreover, CVD copolymerization has been conducted to prepare multifunctional coatings, and their specific functions were demonstrated by the immobilization of biotin and NHS-ester molecules. By using a photodefinable coating, polyethylene oxides were immobilized onto a wide range of substrates through photo-immobilization. Spatially controlled protein resistant properties were characterized by selective adsorption of fibrinogen and bovine serum albumin as model systems. Alternatively, surface initiator coatings were used for polymer grafting of poly(ethylene glycol) methyl ether methacrylate, and the resultant protein- and cell-resistant properties were characterized by adsorption of kinesin motor proteins, fibrinogen, and murine fibroblasts (NIH3T3). Accessibility of reactive coatings within confined microgeometries was systematically studied, and the preparation of homogeneous polymer thin films within the inner surface of microchannels was demonstrated. Moreover, these advanced coatings were applied to develop a dry adhesion process for microfluidic devices. 
This process provides (i) excellent bonding strength, (ii) extended

  5. Purification through Emotions: The Role of Shame in Plato's "Sophist" 230B4-E5

    Science.gov (United States)

    Candiotto, Laura

    2018-01-01

    This article proposes an analysis of Plato's "Sophist" (230b4--e5) that underlines the bond between the logical and the emotional components of the Socratic "elenchus", with the aim of depicting the social valence of this philosophical practice. The use of emotions characterizing the 'elenctic' method described by Plato is…

  6. Applying a Multi-Model Ensemble Method for Long-Term Runoff Prediction under Climate Change Scenarios for the Yellow River Basin, China

    Directory of Open Access Journals (Sweden)

    Linus Zhang

    2018-03-01

    Full Text Available Given the substantial impacts that are expected due to climate change, it is crucial that accurate rainfall–runoff results are provided for various decision-making purposes. However, these modeling results often generate uncertainty or bias due to the imperfect character of individual models. In this paper, a genetic algorithm together with a Bayesian model averaging method are employed to provide a multi-model ensemble (MME) and combined runoff prediction under climate change scenarios produced from eight rainfall–runoff models for the Yellow River Basin. The results show that the multi-model ensemble method, especially the genetic algorithm method, can produce more reliable predictions than the other considered rainfall–runoff models. These results show that it is possible to reduce the uncertainty and thus improve the accuracy of future projections using different models, because an MME approach evens out the bias involved in the individual models. For the study area, the final combined predictions reveal that less runoff is expected under most climatic scenarios, which will threaten the water security of the basin.
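The weighting step of such a multi-model combination can be sketched with a simple Gaussian-likelihood variant of Bayesian model averaging. The runoff series below are synthetic, and the genetic-algorithm tuning used in the paper is omitted:

```python
import numpy as np

# Hedged sketch of combining member runoff predictions into one ensemble.
# Weights are likelihood-proportional (a simple Gaussian-error BMA); all
# numbers are synthetic stand-ins for observed and modelled runoff.
rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 50.0, 120)                  # "observed" runoff series
preds = np.stack([obs + rng.normal(0, s, 120)    # three imperfect "models"
                  for s in (5.0, 15.0, 40.0)])

# Better-fitting members (lower MSE) receive exponentially more weight.
mse = ((preds - obs) ** 2).mean(axis=1)
loglik = -0.5 * len(obs) * np.log(mse)
w = np.exp(loglik - loglik.max())                # stabilised exponentials
w /= w.sum()                                     # normalise to sum to 1

ensemble = w @ preds                             # weighted MME prediction
```

In the paper's setup, the eight rainfall–runoff models play the role of `preds`, and a genetic algorithm searches for weights directly instead of (or alongside) deriving them from the model likelihoods.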

  7. Multi-model ensemble projections of European river floods and high flows at 1.5, 2, and 3 degree global warming

    Science.gov (United States)

    Thober, S.; Kumar, R.; Wanders, N.; Marx, A.; Pan, M.; Rakovec, O.; Samaniego, L. E.; Sheffield, J.; Wood, E. F.; Zink, M.

    2017-12-01

    Severe river floods often result in huge economic losses and fatalities. Since 1980, almost 1500 such events have been reported in Europe. This study investigates climate change impacts on European floods under 1.5, 2, and 3 K global warming. The impacts are assessed employing a multi-model ensemble containing three hydrologic models (HMs: mHM, Noah-MP, PCR-GLOBWB) forced by five CMIP5 General Circulation Models (GCMs) under three Representative Concentration Pathways (RCPs 2.6, 6.0, and 8.5). This multi-model ensemble is unprecedented with respect to the combination of its size (45 realisations) and its spatial resolution, which is 5 km over the whole of Europe. Climate change impacts are quantified for high flows and flood events, represented by the 10% exceedance probability and annual maxima of daily streamflow, respectively. The multi-model ensemble points to the Mediterranean region as a hotspot of changes, with significant decrements in high flows from -11% at 1.5 K up to -30% at 3 K global warming mainly resulting from reduced precipitation. Small changes (< ±10%) are observed for river basins in Central Europe and the British Isles under different levels of warming. Projected higher annual precipitation increases high flows in Scandinavia, but reduced snow water equivalent decreases flood events in this region. The contribution by the GCMs to the overall uncertainties of the ensemble is in general higher than that by the HMs. The latter, however, have a substantial share of the overall uncertainty and exceed GCM uncertainty in the Mediterranean and Scandinavia. Adaptation measures for limiting the impacts of global warming could be similar under 1.5 K and 2 K global warming, but have to account for significantly higher changes under 3 K global warming.

  8. EURODELTA-Trends, a multi-model experiment of air quality hindcast in Europe over 1990–2010

    Directory of Open Access Journals (Sweden)

    A. Colette

    2017-09-01

    Full Text Available The EURODELTA-Trends multi-model chemistry-transport experiment has been designed to facilitate a better understanding of the evolution of air pollution and its drivers for the period 1990–2010 in Europe. The main objective of the experiment is to assess the efficiency of air pollutant emissions mitigation measures in improving regional-scale air quality. The present paper formulates the main scientific questions and policy issues being addressed by the EURODELTA-Trends modelling experiment with an emphasis on how the design and technical features of the modelling experiment answer these questions. The experiment is designed in three tiers, with increasing degrees of computational demand in order to facilitate the participation of as many modelling teams as possible. The basic experiment consists of simulations for the years 1990, 2000, and 2010. Sensitivity analysis for the same three years using various combinations of (i) anthropogenic emissions, (ii) chemical boundary conditions, and (iii) meteorology complements it. The most demanding tier consists of two complete time series from 1990 to 2010, simulated using either time-varying emissions for corresponding years or constant emissions. Eight chemistry-transport models have contributed calculation results to at least one experiment tier, and five models have – to date – completed the full set of simulations (21-year trend calculations have been performed by four models). The modelling results are publicly available for further use by the scientific community. The main expected outcomes are (i) an evaluation of the models' performances for the three reference years, (ii) an evaluation of the skill of the models in capturing observed air pollution trends for the 1990–2010 time period, (iii) attribution analyses of the respective roles of driving factors (e.g. emissions, boundary conditions, meteorology), (iv) a dataset based on a multi-model approach, to provide more robust model

  9. Risk assessment of agricultural water requirement based on a multi-model ensemble framework, southwest of Iran

    Science.gov (United States)

    Zamani, Reza; Akhond-Ali, Ali-Mohammad; Roozbahani, Abbas; Fattahi, Rouhollah

    2017-08-01

    Water shortage and climate change are the most important issues for sustainable agricultural and water resources development. Given the importance of water availability in crop production, the present study focused on risk assessment of the climate change impact on agricultural water requirement in southwest Iran, under two emission scenarios (A2 and B1) for the future period (2025-2054). A multi-model ensemble framework based on the mean observed temperature-precipitation (MOTP) method and a combined probabilistic approach of the Long Ashton Research Station Weather Generator (LARS-WG) and change factor (CF) downscaling have been used to manage the uncertainty of the outputs of 14 general circulation models (GCMs). The results showed increasing temperature in all months and irregular changes of precipitation (either increasing or decreasing) in the future period. In addition, the calculated annual net water requirement for all crops affected by climate change indicated an increase between 4 and 10 %. Furthermore, the required water demand volume is also expected to increase. The largest and smallest expected increases in the water demand volume are about 13 and 5 % under the A2 and B1 scenarios, respectively. Considering these results and the limited water resources in the study area, water resources planning is crucial in order to reduce the negative effects of climate change. Therefore, adaptation scenarios related to crop pattern and water consumption should be taken into account.
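
    The change factor (CF) step named above can be sketched in a few lines: the observed baseline is perturbed by each GCM's simulated change, conventionally additively for temperature and multiplicatively for precipitation. All values and names below are invented for illustration, not taken from the study:

```python
# Change-factor (delta-change) downscaling sketch: observed baseline
# perturbed by one GCM's simulated future-minus-baseline change.
# Additive for temperature, multiplicative for precipitation.

obs_temp_c = [10.0, 15.0, 22.0]        # observed monthly mean temperature
obs_precip_mm = [80.0, 40.0, 10.0]     # observed monthly precipitation

gcm = {"temp_base": [9.0, 14.0, 21.0], "temp_fut": [10.5, 15.2, 23.0],
       "pr_base": [70.0, 45.0, 12.0],  "pr_fut":  [63.0, 49.5, 9.0]}

temp_future = [o + (f - b) for o, f, b in
               zip(obs_temp_c, gcm["temp_fut"], gcm["temp_base"])]
precip_future = [o * (f / b) for o, f, b in
                 zip(obs_precip_mm, gcm["pr_fut"], gcm["pr_base"])]
```

    Repeating this for each of the 14 GCMs yields the ensemble of perturbed climates from which the water-requirement risk is assessed.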

  10. Design and implementation of space physics multi-model application integration based on web

    Science.gov (United States)

    Jiang, Wenping; Zou, Ziming

    With the development of research on the space environment and space science, providing an online network computing environment for space weather, space environment, and space physics models has become increasingly important for the Chinese scientific community in recent years. Currently, there are two software modes for a space physics multi-model application integrated system (SPMAIS): C/S and B/S. The traditional, stand-alone C/S mode demands a team or workshop drawn from many disciplines and specialties to build its own multi-model application integrated system, and requires the client to be deployed in different physical regions whenever a user visits the integrated system. This requirement brings two shortcomings: it reduces the efficiency of researchers who use the models to compute, and it makes accessing the data inconvenient. Therefore, it is necessary to create a shared network resource access environment that helps users reach the computing resources of space physics models quickly through a terminal, for conducting space science research and forecasting the space environment. The SPMAIS implements high-performance, first-principles computational models of the space environment in B/S mode and uses these models to predict space weather, to interpret space mission data, and to further our understanding of the solar system. The main goal of the SPMAIS is to provide an easy, convenient, user-driven online model operating environment. To date, the SPMAIS contains dozens of space environment models, including the international AP8/AE8, IGRF, and T96 models, as well as a solar proton prediction model, a geomagnetic transmission model, etc., developed by Chinese scientists. Another function of the SPMAIS is to integrate space observation data sets, which provide input data for online high-speed model computing. In this paper, the service-oriented architecture (SOA) concept that divides the system into

  11. Multi-model forecast skill for mid-summer rainfall over southern Africa

    CSIR Research Space (South Africa)

    Landman, WA

    2012-02-01

    Full Text Available … the multi-model forecasts outperform the single-model forecasts, the two multi-model schemes produce about equally skilful forecasts, and the forecasts perform better during El Niño and La Niña seasons than during neutral years. … to be anomalously dry during El Niño years and anomalously wet during La Niña years, although wet El Niño seasons and dry La Niña seasons are not uncommon. Indian and Atlantic Ocean SST also have a statistically detectable influence on South…

  12. A Case Study on E - Banking Security – When Security Becomes Too Sophisticated for the User to Access Their Information

    OpenAIRE

    Aaron M. French

    2012-01-01

    While eBanking security continues to increase in sophistication to protect against threats, the usability of eBanking decreases, resulting in poor security behaviors by users. The current research evaluates security risks and measures taken for eBanking solutions. A case study is presented describing how increased complexity decreases vulnerabilities online but increases vulnerabilities from internal threats and eBanking users.

  13. Multi-Model Projections of River Flood Risk in Europe under Global Warming

    Directory of Open Access Journals (Sweden)

    Lorenzo Alfieri

    2018-01-01

    Full Text Available Knowledge on the costs of natural disasters under climate change is key information for planning adaptation and mitigation strategies of future climate policies. Impact models for large scale flood risk assessment have made leaps forward in the past few years, thanks to the increased availability of high resolution climate projections and of information on local exposure and vulnerability to river floods. Yet, state-of-the-art flood impact models rely on a number of input data and techniques that can substantially influence their results. This work compares estimates of river flood risk in Europe from three recent case studies, assuming global warming scenarios of 1.5, 2, and 3 degrees Celsius from pre-industrial levels. The assessment is based on comparing ensemble projections of expected damage and population affected at country level. Differences and common points between the three cases are shown, to point out main sources of uncertainty, strengths, and limitations. In addition, the multi-model comparison helps identify regions with the largest agreement on specific changes in flood risk. Results show that global warming is linked to substantial increase in flood risk over most countries in Central and Western Europe at all warming levels. In Eastern Europe, the average change in flood risk is smaller and the multi-model agreement is poorer.

  14. A diagnosis method for physical systems using a multi-modeling approach

    International Nuclear Information System (INIS)

    Thetiot, R.

    2000-01-01

    In this thesis we propose a method for diagnosis problem solving. This method is based on a multi-modeling approach describing both the normal and abnormal behavior of a system. This modeling approach allows a system to be represented at different abstraction levels (behavioral, functional and teleological). Fundamental knowledge is described according to a bond-graph representation. We show that the bond-graph representation can be exploited to generate, completely or partially, the functional models. The different models of the multi-modeling approach allow the functional state of a system to be defined at different abstraction levels. We exploit this property to exonerate sub-systems for which the expected behavior is observed. The behavioral and functional descriptions of the remaining sub-systems are exploited hierarchically in a two-step process. In the first step, the abnormal behaviors explaining some observations are identified. In the second step, the remaining unexplained observations are used to generate conflict sets and thus the consistency-based diagnoses. The modeling method and the diagnosis process have been applied to a Reactor Coolant Pump Set. This application illustrates the concepts described in this thesis and shows their potentialities. (authors)

  15. A meteo-hydrological prediction system based on a multi-model approach for precipitation forecasting

    Directory of Open Access Journals (Sweden)

    S. Davolio

    2008-02-01

    Full Text Available The precipitation forecasted by a numerical weather prediction model, even at high resolution, suffers from errors which can be considerable at the scales of interest for hydrological purposes. In the present study, a fraction of the uncertainty related to meteorological prediction is taken into account by implementing a multi-model forecasting approach, aimed at providing multiple precipitation scenarios driving the same hydrological model. Therefore, the estimation of that uncertainty associated with the quantitative precipitation forecast (QPF), conveyed by the multi-model ensemble, can be exploited by the hydrological model, propagating the error into the hydrological forecast.

    The proposed meteo-hydrological forecasting system is implemented and tested in a real-time configuration for several episodes of intense precipitation affecting the Reno river basin, a medium-sized basin located in northern Italy (Apennines. These episodes are associated with flood events of different intensity and are representative of different meteorological configurations responsible for severe weather affecting northern Apennines.

    The simulation results show that the coupled system is promising for the prediction of discharge peaks (both in terms of amount and timing) for warning purposes. The ensemble hydrological forecasts provide a range of possible flood scenarios that proved to be useful for supporting civil protection authorities in their decisions.

  16. Multi-model attribution of upper-ocean temperature changes using an isothermal approach

    Science.gov (United States)

    Weller, Evan; Min, Seung-Ki; Palmer, Matthew D.; Lee, Donghyun; Yim, Bo Young; Yeh, Sang-Wook

    2016-06-01

    Both air-sea heat exchanges and changes in ocean advection have contributed to observed upper-ocean warming most evident in the late-twentieth century. However, it is predominantly via changes in air-sea heat fluxes that human-induced climate forcings, such as increasing greenhouse gases, and other natural factors such as volcanic aerosols, have influenced global ocean heat content. The present study builds on previous work using two different indicators of upper-ocean temperature changes for the detection of both anthropogenic and natural external climate forcings. Using simulations from phase 5 of the Coupled Model Intercomparison Project, we compare mean temperatures above a fixed isotherm with the more widely adopted approach of using a fixed depth. We present the first multi-model ensemble detection and attribution analysis using the fixed isotherm approach to robustly detect both anthropogenic and natural external influences on upper-ocean temperatures. Although contributions from multidecadal natural variability cannot be fully removed, both the large multi-model ensemble size and properties of the isotherm analysis reduce internal variability of the ocean, resulting in better observation-model comparison of temperature changes since the 1950s. We further show that the high temporal resolution afforded by the isotherm analysis is required to detect natural external influences such as volcanic cooling events in the upper-ocean because the radiative effect of volcanic forcings is short-lived.
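
    The difference between the two indicators can be made concrete with a toy profile: the fixed-depth average integrates over a prescribed depth range, while the isotherm-based average follows the water warmer than a chosen temperature. The profile, grid and isotherm value below are illustrative only, not from the study:

```python
# Sketch of the two upper-ocean temperature indicators compared above:
# mean temperature above a fixed depth vs. above a fixed isotherm.
# Linear toy profile on a 10 m grid; all numbers are invented.

depths = list(range(0, 500, 10))             # layer depths, 10 m spacing
temps = [20.0 - 0.03 * d for d in depths]    # linearly cooling profile

def mean_above_fixed_depth(depths, temps, z_ref):
    layer = [t for d, t in zip(depths, temps) if d < z_ref]
    return sum(layer) / len(layer)

def mean_above_isotherm(depths, temps, t_iso):
    # average over all layers warmer than the isotherm temperature,
    # i.e. the layer boundary moves with the water, not with the grid
    layer = [t for t in temps if t > t_iso]
    return sum(layer) / len(layer)

m_depth = mean_above_fixed_depth(depths, temps, z_ref=300.0)
m_iso = mean_above_isotherm(depths, temps, t_iso=14.0)
```

    Because the isotherm boundary moves with the water column, vertical displacements (e.g. adiabatic heave) perturb the isotherm-based mean less than the fixed-depth mean, which is one reason the isotherm analysis reduces internal variability.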

  17. A deep learning-based multi-model ensemble method for cancer prediction.

    Science.gov (United States)

    Xiao, Yawen; Wu, Jun; Lin, Zongli; Zhao, Xiaodong

    2018-01-01

    Cancer is a complex worldwide health problem associated with high mortality. With the rapid development of the high-throughput sequencing technology and the application of various machine learning methods that have emerged in recent years, progress in cancer prediction has been increasingly made based on gene expression, providing insight into effective and accurate treatment decision making. Thus, developing machine learning methods, which can successfully distinguish cancer patients from healthy persons, is of great current interest. However, among the classification methods applied to cancer prediction so far, no one method outperforms all the others. In this paper, we demonstrate a new strategy, which applies deep learning to an ensemble approach that incorporates multiple different machine learning models. We supply informative gene data selected by differential gene expression analysis to five different classification models. Then, a deep learning method is employed to ensemble the outputs of the five classifiers. The proposed deep learning-based multi-model ensemble method was tested on three public RNA-seq data sets of three kinds of cancers, Lung Adenocarcinoma, Stomach Adenocarcinoma and Breast Invasive Carcinoma. The test results indicate that it increases the prediction accuracy of cancer for all the tested RNA-seq data sets as compared to using a single classifier or the majority voting algorithm. By taking full advantage of different classifiers, the proposed deep learning-based multi-model ensemble method is shown to be accurate and effective for cancer prediction. Copyright © 2017 Elsevier B.V. All rights reserved.
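
    The core idea — feeding the outputs of several base classifiers into a learned combiner — can be sketched with a one-layer logistic meta-learner standing in for the paper's deep network. The data, skill levels, and learning rate below are synthetic assumptions, not the paper's setup:

```python
# Stacking sketch: five base "classifiers" of varying quality emit
# probabilities; a logistic meta-learner (stand-in for the deep network)
# learns how to weight them. Everything here is synthetic.
import math
import random

random.seed(0)

def sigmoid(z):
    z = max(-30.0, min(30.0, z))     # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

labels = [random.randint(0, 1) for _ in range(200)]

def base_output(y, skill):
    """Noisy probability from a base model; skill=0 is pure noise."""
    raw = skill * y + (1 - skill) * 0.5 + random.gauss(0, 0.25)
    return min(1.0, max(0.0, raw))

X = [[base_output(y, s) for s in (0.9, 0.8, 0.7, 0.6, 0.0)] for y in labels]

# Train the meta-learner by stochastic gradient descent.
w, b = [0.0] * 5, 0.0
for _ in range(200):
    for xi, y in zip(X, labels):
        g = sigmoid(sum(wi * v for wi, v in zip(w, xi)) + b) - y
        w = [wi - 0.1 * g * v for wi, v in zip(w, xi)]
        b -= 0.1 * g

def meta_predict(xi):
    return sigmoid(sum(wi * v for wi, v in zip(w, xi)) + b) > 0.5

ensemble_acc = sum(meta_predict(xi) == bool(y)
                   for xi, y in zip(X, labels)) / len(labels)
majority_acc = sum((sum(v > 0.5 for v in xi) >= 3) == bool(y)
                   for xi, y in zip(X, labels)) / len(labels)
```

    Unlike majority voting, the learned combiner can down-weight the noise model, which is the advantage the abstract attributes to the deep ensembling step.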

  18. A note on the multi model super ensemble technique for reducing forecast errors

    International Nuclear Information System (INIS)

    Kantha, L.; Carniel, S.; Sclavo, M.

    2008-01-01

    The multi-model superensemble (SE) technique has been used with considerable success to improve meteorological forecasts and is now being applied to ocean models. Although the technique has been shown to produce deterministic forecasts that can be superior to the individual models in the ensemble or a simple multi-model ensemble forecast, there is a clear need to understand its strengths and limitations. This paper is an attempt to do so in simple, easily understood contexts. The results demonstrate that the SE forecast is almost always better than the simple ensemble forecast, the degree of improvement depending on the properties of the models in the ensemble. However, the skill of the SE forecast with respect to the true forecast depends on a number of factors, principal among which is the skill of the models in the ensemble. As can be expected, if the ensemble consists of models with poor skill, the SE forecast will also be poor, although better than the ensemble forecast. On the other hand, the inclusion of even a single skillful model in the ensemble increases the forecast skill significantly.
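
    In the same simple spirit, the superensemble's regression step can be illustrated in a few lines: member anomalies are regressed against the observed series, and the fitted weights replace the equal weights of a plain ensemble mean. The synthetic "truth" and member error structure below are invented for illustration:

```python
# Superensemble sketch: regress observations on member anomalies, then
# forecast with the fitted weights. Two synthetic members with bias,
# scale error and member-specific error terms (all invented).
import math

truth = [math.sin(0.1 * t) for t in range(100)]
m1 = [1.2 * x + 0.3 + 0.2 * math.cos(0.3 * t) for t, x in enumerate(truth)]
m2 = [0.8 * x - 0.2 + 0.2 * math.sin(0.7 * t) for t, x in enumerate(truth)]

def mean(v):
    return sum(v) / len(v)

mu1, mu2, muo = mean(m1), mean(m2), mean(truth)
a1 = [x - mu1 for x in m1]                 # member anomalies
a2 = [x - mu2 for x in m2]
ao = [x - muo for x in truth]              # observed anomalies

# Least-squares weights from the 2x2 normal equations.
s11 = sum(x * x for x in a1)
s22 = sum(y * y for y in a2)
s12 = sum(x * y for x, y in zip(a1, a2))
r1 = sum(x * o for x, o in zip(a1, ao))
r2 = sum(y * o for y, o in zip(a2, ao))
det = s11 * s22 - s12 * s12
w1 = (r1 * s22 - r2 * s12) / det
w2 = (r2 * s11 - r1 * s12) / det

se = [muo + w1 * x + w2 * y for x, y in zip(a1, a2)]   # superensemble
ens = [(x + y) / 2 for x, y in zip(m1, m2)]            # simple ensemble mean

def rmse(f):
    return math.sqrt(mean([(a - b) ** 2 for a, b in zip(f, truth)]))

se_rmse, ens_rmse = rmse(se), rmse(ens)
```

    Because the equal-weight mean is itself one member of the affine family the regression searches over, the fitted SE can never do worse than the simple ensemble mean on the training period — the note's "almost always better" result in miniature.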

  19. Multi-Model Prediction for Demand Forecast in Water Distribution Networks

    Directory of Open Access Journals (Sweden)

    Rodrigo Lopez Farias

    2018-03-01

    Full Text Available This paper presents a multi-model predictor called the Qualitative Multi-Model Predictor Plus (QMMP+) for demand forecast in water distribution networks. QMMP+ is based on the decomposition of the quantitative and qualitative information of the time-series. The quantitative component (i.e., the daily consumption) is forecasted and the pattern mode estimated using a Nearest Neighbor (NN) classifier and a Calendar. The patterns are updated via a simple Moving Average scheme. The NN classifier and the Calendar are executed simultaneously every period and the most suited model for prediction is selected using a probabilistic approach. The proposed solution for water demand forecast is compared against Radial Basis Function Artificial Neural Networks (RBF-ANN), the statistical Autoregressive Integrated Moving Average (ARIMA), and Double Seasonal Holt-Winters (DSHW) approaches, providing the best results when applied to real demand of the Barcelona Water Distribution Network. QMMP+ has demonstrated that the special modelling treatment of water consumption patterns improves the forecasting accuracy.
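
    The decomposition idea — a forecasted daily total distributed over hours by a stored consumption pattern that is refreshed with a moving average — can be sketched as follows. The update constant and the synthetic observations are assumptions, not values from the paper:

```python
# QMMP+-style decomposition sketch: quantitative part = daily total,
# qualitative part = normalised hourly pattern, refreshed by an
# exponential moving-average blend. All numbers are invented.

HOURS = 24

def normalise(p):
    s = sum(p)
    return [x / s for x in p]

def update_pattern(pattern, observed_day, alpha=0.2):
    """Blend the latest observed hourly shape into the stored pattern."""
    shape = normalise(observed_day)
    return normalise([(1 - alpha) * p + alpha * s
                      for p, s in zip(pattern, shape)])

def hourly_forecast(daily_total, pattern):
    """Distribute a forecasted daily total over the hours of the day."""
    return [daily_total * p for p in pattern]

flat = [1.0 / HOURS] * HOURS                                     # initial pattern
observed = [2.0 if 7 <= h <= 9 else 1.0 for h in range(HOURS)]   # morning peak
pattern = update_pattern(flat, observed)
forecast = hourly_forecast(2400.0, pattern)
```

    In the full method a classifier picks *which* stored pattern (working day, weekend, holiday, …) to distribute the total with; the sketch shows only the update and distribution steps.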

  20. Kinetic theory of beam-induced plasmas generalised to sophisticated atomic structures

    International Nuclear Information System (INIS)

    Peyraud-Cuenca, Nelly

    1987-01-01

    We present an analytic kinetic model available for all particle-beam-induced atomic plasmas, without any restriction on the distribution of electronic levels. The method is an iteration of the already known solution available only for the distribution of atomic levels as in the rare gases. We recall a universal atomic kinetic model which, independently of its applications to the study of efficient laser systems, might be a first step in the analytic investigation of molecular problems. Then, the iteration is systematically applied to all possible atomic structures whose number is increased by the non-local character of inelastic processes. We deduce a general analytic representation of the 'tail' of the electron distribution function as a ratio between non-local source terms and a combination of inelastic cross sections, from which we exhibit a physical interpretation and essential scaling laws. The theory is applied to sodium which is an important element in the research of efficient laser systems. (author)

  1. Engineering FKBP-Based Destabilizing Domains to Build Sophisticated Protein Regulation Systems.

    Directory of Open Access Journals (Sweden)

    Wenlin An

    Full Text Available Targeting protein stability with small molecules has emerged as an effective tool to control protein abundance in a fast, scalable and reversible manner. The technique involves tagging a protein of interest (POI) with a destabilizing domain (DD) specifically controlled by a small molecule. The successful construction of such fusion proteins may, however, be limited by functional interference of the DD epitope with electrostatic interactions required for full biological function of proteins. Another drawback of this approach is the remaining endogenous protein. Here, we combined the Cre-LoxP system with an advanced DD and generated a protein regulation system in which the loss of an endogenous protein, in our case the tumor suppressor PTEN, can be coupled directly with a conditionally fine-tunable DD-PTEN. This new system will consolidate and extend the use of DD-technology to control protein function precisely in living cells and animal models.

  2. On the relevance of sophisticated structural annotations for disulfide connectivity pattern prediction.

    Directory of Open Access Journals (Sweden)

    Julien Becker

    Full Text Available Disulfide bridges strongly constrain the native structure of many proteins and predicting their formation is therefore a key sub-problem of protein structure and function inference. Most recently proposed approaches for this prediction problem adopt the following pipeline: first they enrich the primary sequence with structural annotations, second they apply a binary classifier to each candidate pair of cysteines to predict disulfide bonding probabilities and finally, they use a maximum weight graph matching algorithm to derive the predicted disulfide connectivity pattern of a protein. In this paper, we adopt this three-step pipeline and propose an extensive study of the relevance of various structural annotations and feature encodings. In particular, we consider five kinds of structural annotations, among which three are novel in the context of disulfide bridge prediction. So as to be usable by machine learning algorithms, these annotations must be encoded into features. For this purpose, we propose four different feature encodings based on local windows and on different kinds of histograms. The combination of structural annotations with these possible encodings leads to a large number of possible feature functions. In order to identify a minimal subset of relevant feature functions among those, we propose an efficient and interpretable feature function selection scheme, designed so as to avoid any form of overfitting. We apply this scheme on top of three supervised learning algorithms: k-nearest neighbors, support vector machines and extremely randomized trees. Our results indicate that the use of only the PSSM (position-specific scoring matrix) together with the CSP (cysteine separation profile) are sufficient to construct a high performance disulfide pattern predictor and that extremely randomized trees reach a disulfide pattern prediction accuracy of [Formula: see text] on the benchmark dataset SPX[Formula: see text], which corresponds to
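
    The final step of the pipeline described above — turning pairwise bonding probabilities into a connectivity pattern via maximum weight matching — can be sketched by brute force, which is feasible for the handful of cysteines in a single protein. The probabilities below are invented for illustration:

```python
# Maximum-weight perfect matching by exhaustive enumeration:
# pick the cysteine pairing with the largest total bonding probability.
# The probability table is invented.

def perfect_matchings(items):
    """Yield every way to pair up an even-sized list."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for i, partner in enumerate(rest):
        for tail in perfect_matchings(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + tail

def best_pattern(cysteines, prob):
    return max(perfect_matchings(cysteines),
               key=lambda m: sum(prob[p] for p in m))

cys = ["C1", "C2", "C3", "C4"]
prob = {("C1", "C2"): 0.2, ("C1", "C3"): 0.9, ("C1", "C4"): 0.3,
        ("C2", "C3"): 0.1, ("C2", "C4"): 0.8, ("C3", "C4"): 0.4}
pattern = best_pattern(cys, prob)
```

    Production predictors use a polynomial-time matching algorithm (e.g. blossom-based) instead of enumeration, but the objective is the same.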

  3. Radiative forcing and climate metrics for ozone precursor emissions: the impact of multi-model averaging

    Directory of Open Access Journals (Sweden)

    C. R. MacIntosh

    2015-04-01

    Full Text Available Multi-model ensembles are frequently used to assess understanding of the response of ozone and methane lifetime to changes in emissions of ozone precursors such as NOx, VOCs (volatile organic compounds) and CO. When these ozone changes are used to calculate radiative forcing (RF) and climate metrics such as the global warming potential (GWP) and global temperature-change potential (GTP), there is a methodological choice, determined partly by the available computing resources, as to whether the mean ozone (and methane) concentration changes are input to the radiation code, or whether each model's ozone and methane changes are used as input, with the average RF computed from the individual model RFs. We use data from the Task Force on Hemispheric Transport of Air Pollution source–receptor global chemical transport model ensemble to assess the impact of this choice for emission changes in four regions (East Asia, Europe, North America and South Asia). We conclude that using the multi-model mean ozone and methane responses is accurate for calculating the mean RF, with differences up to 0.6% for CO, 0.7% for VOCs and 2% for NOx. Differences of up to 60% for NOx, 7% for VOCs and 3% for CO are introduced into the 20 year GWP. The differences for the 20 year GTP are smaller than for the GWP for NOx, and similar for the other species. However, estimates of the standard deviation calculated from the ensemble-mean input fields (where the standard deviation at each point on the model grid is added to or subtracted from the mean field) are almost always substantially larger in RF, GWP and GTP metrics than the true standard deviation, and can be larger than the model range for short-lived ozone RF, and for the 20 and 100 year GWP and 100 year GTP. The order of averaging has most impact on the metrics for NOx, as the net values for these quantities are the residual of the sum of terms of opposing signs. For example, the standard deviation for the 20 year GWP is 2–3
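
    The order-of-averaging effect is easy to reproduce with a toy GWP-like metric: because the metric is a ratio, and because the NOx forcing is a small residual of opposing ozone and methane terms, the metric of the mean fields differs from the mean of the per-model metrics. All numbers below are invented:

```python
# Toy demonstration of the averaging-order issue for a ratio metric.
# Per-model forcing terms (units arbitrary) and per-model reference-gas
# forcing; the NOx-like net forcing is a small residual of opposing terms.

models = [
    {"ozone": 1.00, "methane": -0.90, "ref": 0.010},
    {"ozone": 0.80, "methane": -0.85, "ref": 0.012},
    {"ozone": 1.20, "methane": -0.95, "ref": 0.009},
]

def gwp_like(ozone, methane, ref):
    return (ozone + methane) / ref      # net residual scaled by a reference

n = len(models)
mean_o = sum(m["ozone"] for m in models) / n
mean_m = sum(m["methane"] for m in models) / n
mean_r = sum(m["ref"] for m in models) / n

# Order 1: compute the metric from the multi-model mean fields.
metric_of_mean = gwp_like(mean_o, mean_m, mean_r)

# Order 2: compute the metric per model, then average.
mean_of_metric = sum(gwp_like(m["ozone"], m["methane"], m["ref"])
                     for m in models) / n
```

    With a strictly linear metric the two orders would agree exactly; the ratio (and the cancellation of opposing terms) is what drives them apart, which is why the effect is largest for NOx.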

  4. Close-range geophotogrammetric mapping of trench walls using multi-model stereo restitution software

    International Nuclear Information System (INIS)

    Coe, J.A.; Taylor, E.M.; Schilling, S.P.

    1991-01-01

    Methods for mapping geologic features exposed on trench walls have advanced from conventional gridding and sketch mapping to precise close-range photogrammetric mapping. In our study, two strips of small-format (60 x 60) stereo pairs, each containing 42 photos and covering approximately 60 m of nearly vertical trench wall (2-4 m high), were contact printed onto eight 205 x 255-mm transparent film sheets. Each strip was oriented in a Kern DSR15 analytical plotter using the bundle adjustment module of Multi-Model Stereo Restitution Software (MMSRS). We experimented with several systematic-control-point configurations to evaluate orientation accuracies as a function of the number and position of control points. We recommend establishing control-point columns (each containing 2-3 points) in every 5th photo to achieve the 7-mm Root Mean Square Error (RMSE) accuracy required by our trench-mapping project. 7 refs., 8 figs., 1 tab

  5. Close-range geophotogrammetric mapping of trench walls using multi-model stereo restitution software

    Energy Technology Data Exchange (ETDEWEB)

    Coe, J.A.; Taylor, E.M.; Schilling, S.P.

    1991-06-01

    Methods for mapping geologic features exposed on trench walls have advanced from conventional gridding and sketch mapping to precise close-range photogrammetric mapping. In our study, two strips of small-format (60 × 60) stereo pairs, each containing 42 photos and covering approximately 60 m of nearly vertical trench wall (2-4 m high), were contact printed onto eight 205 × 255-mm transparent film sheets. Each strip was oriented in a Kern DSR15 analytical plotter using the bundle adjustment module of Multi-Model Stereo Restitution Software (MMSRS). We experimented with several systematic-control-point configurations to evaluate orientation accuracies as a function of the number and position of control points. We recommend establishing control-point columns (each containing 2-3 points) in every 5th photo to achieve the 7-mm Root Mean Square Error (RMSE) accuracy required by our trench-mapping project. 7 refs., 8 figs., 1 tab.

  6. Spatially distinct response of rice yield to autonomous adaptation under the CMIP5 multi-model projections

    Science.gov (United States)

    Shin, Yonghee; Lee, Eun-Jeong; Im, Eun-Soon; Jung, Il-Won

    2017-02-01

    Rice (Oryza sativa L.) is a very important staple crop, as it feeds more than half of the world's population. Numerous studies have focused on the negative impacts of climate change on rice production. However, there is little debate on which region of the world is more vulnerable to climate change and how adaptation to this change can mitigate the negative impacts on rice production. We investigated the impacts of climate change on rice yield, based on simulations combining a global crop model, M-GAZE, and Coupled Model Intercomparison Project Phase 5 (CMIP5) multi-model projections. Our focus was the impact of mitigating emission forcings (representative concentration pathway RCP 4.5 vs. RCP 8.5) and autonomous adaptation (i.e., changing crop variety and planting date) on rice yield. In general, our results showed that climate change due to anthropogenic warming leads to a significant reduction in rice yield. However, autonomous adaptation provides the potential to reduce the negative impact of global warming on rice yields in a spatially distinct manner. The adaptation was less beneficial for countries located at low latitudes (e.g., Cambodia, Thailand, Brazil) compared to mid-latitude countries (e.g., USA, China, Pakistan), as regional climates at the lower latitudes are already near the upper temperature thresholds for acceptable rice growth. These findings suggest that the socioeconomic effects from rice production in low-latitude countries can be highly vulnerable to anthropogenic global warming. Therefore, these countries need to develop transformative adaptation strategies, such as adopting (or developing) heat-tolerant varieties, and/or improving irrigation systems and fertilizer use efficiency.

  7. Pauci ex tanto numero: reduce redundancy in multi-model ensembles

    Science.gov (United States)

    Solazzo, E.; Riccio, A.; Kioutsioukis, I.; Galmarini, S.

    2013-08-01

    We explicitly address the fundamental issue of member diversity in multi-model ensembles. To date, no attempts in this direction have been documented within the air quality (AQ) community despite the extensive use of ensembles in this field. Common biases and redundancy are the two issues directly deriving from lack of independence, undermining the significance of a multi-model ensemble, and are the subject of this study. Shared, dependent biases among models do not cancel out but will instead determine a biased ensemble. Redundancy derives from having too large a portion of common variance among the members of the ensemble, producing overconfidence in the predictions and underestimation of the uncertainty. The two issues of common biases and redundancy are analysed in detail using the AQMEII ensemble of AQ model results for four air pollutants in two European regions. We show that models share large portions of bias and variance, extending well beyond those induced by common inputs. We make use of several techniques to further show that subsets of models can explain the same amount of variance as the full ensemble with the advantage of being poorly correlated. Selecting the members for generating skilful, non-redundant ensembles from such subsets proved, however, non-trivial. We propose and discuss various methods of member selection and rate the ensemble performance they produce. In most cases, the full ensemble is outscored by the reduced ones. We conclude that, although independence of outputs may not always guarantee enhancement of scores (but this depends upon the skill being investigated), we discourage selecting the members of the ensemble simply on the basis of scores; that is, independence and skills need to be considered disjointly.
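
    One way to operationalise "poorly correlated subsets" is a greedy filter that skips any candidate member too strongly correlated with a member already chosen. This is a simplified stand-in for the selection methods compared in the paper; the series and the correlation threshold below are illustrative:

```python
# Greedy redundancy filter: keep a member only if it is weakly
# correlated with every member already selected. Synthetic series.
import math

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def select_members(members, threshold=0.9):
    chosen = []
    for name, series in members.items():
        if all(abs(corr(series, members[c])) < threshold for c in chosen):
            chosen.append(name)
    return chosen

base = [math.sin(0.2 * t) for t in range(50)]
members = {
    "m1": base,
    "m2": [x + 0.01 for x in base],                 # m1 plus a bias: redundant
    "m3": [math.cos(0.2 * t) for t in range(50)],   # independent signal
}
subset = select_members(members)
```

    The redundant member m2 carries no variance beyond m1 and is dropped, shrinking the ensemble without losing explanatory power — the effect the paper quantifies with more rigorous techniques.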

  8. Pauci ex tanto numero: reducing redundancy in multi-model ensembles

    Science.gov (United States)

    Solazzo, E.; Riccio, A.; Kioutsioukis, I.; Galmarini, S.

    2013-02-01

    We explicitly address the fundamental issue of member diversity in multi-model ensembles. To date, no attempts in this direction have been documented within the air quality (AQ) community despite the extensive use of ensembles in this field. Common biases and redundancy are the two issues directly deriving from lack of independence, undermining the significance of a multi-model ensemble, and are the subject of this study. Shared biases among models will determine a biased ensemble, making it essential that the errors of the ensemble members be independent so that biases can cancel out. Redundancy derives from having too large a portion of common variance among the members of the ensemble, producing overconfidence in the predictions and underestimation of the uncertainty. The two issues of common biases and redundancy are analysed in detail using the AQMEII ensemble of AQ model results for four air pollutants in two European regions. We show that models share large portions of bias and variance, extending well beyond those induced by common inputs. We make use of several techniques to further show that subsets of models can explain the same amount of variance as the full ensemble with the advantage of being poorly correlated. Selecting the members for generating skilful, non-redundant ensembles from such subsets proved, however, non-trivial. We propose and discuss various methods of member selection and rate the ensemble performance they produce. In most cases, the full ensemble is outscored by the reduced ones. We conclude that, although independence of outputs may not always guarantee enhancement of scores (but this depends upon the skill being investigated), we discourage selecting the members of the ensemble simply on the basis of scores; that is, independence and skills need to be considered disjointly.

  9. A Simple Approach to Account for Climate Model Interdependence in Multi-Model Ensembles

    Science.gov (United States)

    Herger, N.; Abramowitz, G.; Angelil, O. M.; Knutti, R.; Sanderson, B.

    2016-12-01

    Multi-model ensembles are an indispensable tool for future climate projection and its uncertainty quantification. Ensembles containing multiple climate models generally have increased skill, consistency and reliability. Due to the lack of agreed-on alternatives, most scientists use the equally weighted multi-model mean, subscribing to model democracy ("one model, one vote"). Different research groups are known to share sections of code, parameterizations in their model, literature, or even whole model components. Therefore, individual model runs do not represent truly independent estimates. Ignoring this dependence structure can lead to a false model consensus and to incorrect estimates of uncertainty and of the effective number of independent models. Here, we present a way to partially address this problem by selecting a subset of CMIP5 model runs such that its climatological mean minimizes the RMSE relative to a given observational product. Because errors cancel out, regional biases in the ensemble mean are reduced significantly. Using a model-as-truth experiment, we demonstrate that this bias reduction persists into the future and that we are not fitting noise, thus providing improved observationally constrained projections of the 21st century. The optimally selected ensemble shows significantly higher global mean surface temperature projections than the original ensemble in which all model runs are considered. Moreover, the spread is decreased well beyond what would be expected from the reduced ensemble size. Several previous studies have recommended an ensemble selection approach based on performance ranking of the model runs. Here, we show that this approach can perform even worse than randomly selecting ensemble members and can thus be harmful. We suggest that accounting for interdependence in the ensemble selection process is a necessary step toward robust projections for use in impact assessments, adaptation and mitigation of climate change.
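
    The subset-selection step can be illustrated with a brute-force search for the k-member mean closest to observations. This is an assumed minimal form; the paper's actual selection over CMIP5 runs is not reproduced here:

```python
import numpy as np
from itertools import combinations

def best_subset_mean(models, obs, k):
    """Exhaustively find the k-member subset whose ensemble mean has lowest RMSE.

    models: (n_models, n_points); obs: (n_points,). Sketch of RMSE-based
    subset selection against an observational product (assumed procedure).
    """
    best, best_rmse = None, np.inf
    for subset in combinations(range(models.shape[0]), k):
        mean = models[list(subset)].mean(axis=0)
        rmse = np.sqrt(np.mean((mean - obs) ** 2))
        if rmse < best_rmse:
            best, best_rmse = subset, rmse
    return best, best_rmse
```

    Models with opposite-signed biases can pair up to cancel errors, which is exactly why the selected subset can beat both the full ensemble mean and any performance-ranked subset.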

  10. LGM permafrost distribution: how well can the latest PMIP multi-model ensembles reconstruct?

    Science.gov (United States)

    Saito, K.; Sueyoshi, T.; Marchenko, S.; Romanovsky, V.; Otto-Bliesner, B.; Walsh, J.; Bigelow, N.; Hendricks, A.; Yoshikawa, K.

    2013-03-01

    Global-scale frozen ground distribution during the Last Glacial Maximum (LGM) was reconstructed using multi-model ensembles of global climate models, and then compared with evidence-based knowledge and earlier numerical results. Modeled soil temperatures, taken from Paleoclimate Modelling Intercomparison Project Phase III (PMIP3) simulations, were used to diagnose the subsurface thermal regime and determine underlying frozen ground types for the present day (pre-industrial; 0 k) and the LGM (21 k). This direct method was then compared to the earlier indirect method, which categorizes the underlying frozen ground type from surface air temperature, applied to both the PMIP2 (phase II) and PMIP3 products. Both direct and indirect diagnoses for 0 k showed strong agreement with the present-day observation-based map, although the soil temperature ensemble showed a higher diversity among the models, partly due to the varying complexity of the implemented subsurface processes. The area of continuous permafrost estimated by the multi-model analysis was 25.6 million km2 for the LGM, in contrast to 12.7 million km2 for the pre-industrial control, whereas seasonally frozen ground increased from 22.5 million km2 to 32.6 million km2. These changes in area resulted mainly from the cooler climate at the LGM, but also from other factors, such as the presence of huge land ice sheets and the consequent expansion of total land area due to sea-level change. LGM permafrost boundaries modeled by the PMIP3 ensemble (improved over those of the PMIP2 due to higher spatial resolutions and improved climatology) also compared better to previous knowledge derived from geomorphological and geocryological evidence.
Combinatorial applications of coupled climate models and detailed stand-alone physical-ecological models for the cold-region terrestrial, paleo-, and modern climates will advance our understanding of the functionality and variability of the frozen ground subsystem in the global eco-climate system.
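
    The direct diagnosis of frozen-ground type from modeled soil temperatures can be caricatured with simple annual max/min thresholds. The real classification uses the full subsurface thermal regime, so treat this as a hedged stand-in:

```python
import numpy as np

def classify_frozen_ground(monthly_soil_temp):
    """Diagnose frozen-ground type from monthly soil temperatures (deg C).

    monthly_soil_temp: (12, n_cells). Classifies each cell as 'permafrost'
    (soil never thaws), 'seasonal' (soil freezes part of the year) or
    'unfrozen'. Simplified illustration of the direct method; the actual
    PMIP3 diagnosis is more elaborate.
    """
    tmax = monthly_soil_temp.max(axis=0)
    tmin = monthly_soil_temp.min(axis=0)
    return np.where(tmax <= 0, "permafrost",
                    np.where(tmin <= 0, "seasonal", "unfrozen"))
```

    Applying such a per-cell rule to each ensemble member and then pooling the maps is what allows the areal totals above to be reported with an inter-model spread.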

  11. Projected 21st century decrease in marine productivity: a multi-model analysis

    Directory of Open Access Journals (Sweden)

    M. Steinacher

    2010-03-01

    Full Text Available Changes in marine net primary productivity (PP) and export of particulate organic carbon (EP) are projected over the 21st century with four global coupled carbon cycle-climate models. These include representations of marine ecosystems and the carbon cycle of different structure and complexity. All four models show a decrease in global mean PP and EP of between 2 and 20% by 2100 relative to preindustrial conditions for the SRES A2 emission scenario. Two different regimes of productivity change are consistently identified in all models. The first chain of mechanisms is dominant in the low- and mid-latitude ocean and in the North Atlantic: reduced input of macro-nutrients into the euphotic zone, related to enhanced stratification, reduced mixed layer depth, and slowed circulation, causes a decrease in macro-nutrient concentrations and in PP and EP. The second regime is projected for parts of the Southern Ocean: an alleviation of light and/or temperature limitation leads to an increase in PP and EP as productivity is fueled by a sustained nutrient input. A region of disagreement among the models is the Arctic, where three models project an increase in PP while one model projects a decrease. Projected changes in seasonal and interannual variability are modest in most regions. Regional model skill metrics are proposed to generate multi-model mean fields that show improved skill in representing observation-based estimates compared to a simple multi-model average. Model results are compared to recent productivity projections with three different algorithms usually applied to infer net primary production from satellite observations.
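
    The proposed skill-weighted multi-model mean can be sketched with inverse-MSE weights against an observation-based reference. The actual regional skill metrics in the study are more elaborate; this is an assumed minimal form:

```python
import numpy as np

def skill_weighted_mean(models, obs_ref):
    """Combine model fields with weights derived from a simple skill metric.

    models: (n_models, n_points); obs_ref: (n_points,) observation-based
    reference over a training region/period. Uses inverse-MSE weights,
    normalized to sum to one (the exact metric is an assumption here).
    """
    mse = np.mean((models - obs_ref) ** 2, axis=1)
    w = (1.0 / mse) / np.sum(1.0 / mse)       # normalize weights
    return w, np.tensordot(w, models, axes=1)  # weights, combined field
```

    A model that tracks the reference closely dominates the combination, which is how a skill-weighted field can outperform the simple multi-model average.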

  12. Learning Strategic Sophistication

    NARCIS (Netherlands)

    Blume, A.; DeJong, D.V.; Maier, M.

    2005-01-01

    We experimentally investigate coordination games in which cognition plays an important role, i.e. where outcomes are affected by the agents' level of understanding of the game and the beliefs they form about each other's understanding. We ask whether and when repeated exposure permits agents to learn

  13. A Multi-Model Study of Energy Supply Investments in Latin America under Climate Control Policy Energy Economics

    NARCIS (Netherlands)

    Kober, T.; Falzon, J.; van der Zwaan, B.; Calvin, K.; Kanudia, A.; Kitous, A.; Labriet, M.

    In this paper we investigate energy supply investment requirements in Latin America until 2050 through a multi-model approach as jointly applied in the CLIMACAP-LAMP research project. We compare a business-as-usual scenario needed to satisfy anticipated future energy demand with a set of scenarios

  14. EURODELTA-Trends, a multi-model experiment of air quality hindcast in Europe over 1990-2010

    NARCIS (Netherlands)

    Colette, A.; Andersson, C.; Manders, A.; Mar, K.; Mircea, M.; Pay, M.T.; Raffort, V.; Tsyro, S.; Cuvelier, C.; Adani, M.; Bessagnet, B.; Bergström, R.; Briganti, G.; Butler, T.; Cappelletti, A.; Couvidat, F.; Isidoro, M. d; Doumbia, T.; Fagerli, H.; Granier, C.; Heyes, C.; Klimont, Z.; Ojha, N.; Otero, N.; Schaap, M.; Sindelarova, K.; Stegehuis, A.I.; Roustan, Y.; Vautard, R.; Meijgaard, E. van; Garcia Vivanco, M.; Wind, P.

    2017-01-01

    The EURODELTA-Trends multi-model chemistry-transport experiment has been designed to facilitate a better understanding of the evolution of air pollution and its drivers for the period 1990-2010 in Europe. The main objective of the experiment is to assess the efficiency of air pollutant emissions

  15. A multi-model ensemble of downscaled spatial climate change scenarios for the Dommel catchment, Western Europe

    NARCIS (Netherlands)

    Vliet, M.T.H. van; Blenkinsop, S.; Burton, A.; Harpham, C.; Broers, H.P.; Fowler, H.J.

    2012-01-01

    Regional or local scale hydrological impact studies require high resolution climate change scenarios which should incorporate some assessment of uncertainties in future climate projections. This paper describes a method used to produce a multi-model ensemble of multivariate weather simulations

  16. Grand European and Asian-Pacific multi-model seasonal forecasts: maximization of skill and of potential economical value to end-users

    Science.gov (United States)

    Alessandri, Andrea; Felice, Matteo De; Catalano, Franco; Lee, June-Yi; Wang, Bin; Lee, Doo Young; Yoo, Jin-Ho; Weisheimer, Antje

    2018-04-01

    Multi-model ensembles (MMEs) are powerful tools in dynamical climate prediction as they account for the overconfidence and the uncertainties related to single-model ensembles. Previous works suggested that the potential benefit that can be expected from using an MME amplifies with the increase of the independence of the contributing Seasonal Prediction Systems. In this work we combine the two MME Seasonal Prediction Systems (SPSs) independently developed by the European (ENSEMBLES) and Asian-Pacific (APCC/CliPAS) communities. To this aim, all the possible multi-model combinations obtained by putting together the 5 models from ENSEMBLES and the 11 models from APCC/CliPAS have been evaluated. The grand ENSEMBLES-APCC/CliPAS MME significantly enhances the skill in predicting 2m temperature and precipitation compared to previous estimates from the contributing MMEs. Our results show that, in general, the better combinations of SPSs are obtained by mixing ENSEMBLES and APCC/CliPAS models, and that only a limited number of SPSs is required to obtain the maximum performance. The number and selection of the best-performing models usually differ depending on the region/phenomenon under consideration, so that all models are useful in some cases. The incremental performance contribution tends to be higher when adding one model from ENSEMBLES to APCC/CliPAS MMEs and vice versa, confirming that the benefit of using MMEs amplifies with the independence of the contributing models. To verify the above results in a real-world application, the grand ENSEMBLES-APCC/CliPAS MME is used to predict retrospective energy demand over Italy as provided by TERNA (the Italian Transmission System Operator) for the period 1990-2007. The results demonstrate the useful application of MME seasonal predictions for energy demand forecasting over Italy. A significant enhancement of the potential economic value of forecasting energy demand is shown when using the
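
    Evaluating every possible multi-model combination, as done for the 16 ENSEMBLES and APCC/CliPAS models, can be sketched by enumerating subsets and scoring each subset's mean forecast. The correlation score used here is an illustrative stand-in for the study's skill metrics:

```python
import numpy as np
from itertools import combinations

def rank_combinations(forecasts, obs):
    """Score every model combination by the correlation of its mean forecast
    with observations (a stand-in skill metric; the study uses several).

    forecasts: (n_models, n_cases); obs: (n_cases,).
    Returns (score, combo) pairs, best combinations first.
    """
    n = forecasts.shape[0]
    scored = []
    for k in range(1, n + 1):
        for combo in combinations(range(n), k):
            mme = forecasts[list(combo)].mean(axis=0)
            r = np.corrcoef(mme, obs)[0, 1]
            scored.append((r, combo))
    scored.sort(reverse=True)
    return scored
```

    With 16 models this enumerates 2^16 - 1 = 65535 combinations, which is still cheap for seasonal-mean fields and makes it possible to see that a small mixed subset often already attains the maximum skill.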

  17. Sophisticated Calculation of the 1oo4-architecture for Safety-related Systems Conforming to IEC61508

    International Nuclear Information System (INIS)

    Hayek, A; Al Bokhaiti, M; Schwarz, M H; Boercsoek, J

    2012-01-01

    With the publication and enforcement of the standard IEC 61508 for safety-related systems, recent system architectures have been presented and evaluated. Among a number of techniques and measures for the evaluation of the safety integrity level (SIL) of safety-related systems, measures such as reliability block diagrams and Markov models are used to analyze the probability of failure on demand (PFD) and the mean time to failure (MTTF) in conformance with IEC 61508. The current paper deals with the quantitative analysis of the novel 1oo4 (one out of four) architecture presented in recent work. Therefore, sophisticated calculations for the required parameters are introduced. The provided 1oo4 architecture represents an advanced safety architecture based on on-chip redundancy, which is 3-failure safe. This means that at least one of the four channels has to work correctly in order to trigger the safety function.
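
    For orientation, the textbook 1ooN approximation of the average probability of failure on demand reduces to a one-liner. It ignores common-cause failures, diagnostic coverage and repair, all of which the paper's sophisticated Markov calculation does account for:

```python
def pfd_avg_1ooN(lambda_du, proof_test_interval, n):
    """Simplified average PFD for a 1ooN architecture with identical,
    independent channels and dangerous-undetected failure rate lambda_du.

    PFDavg ~= (lambda_du * T)**n / (n + 1). Textbook approximation only:
    no common-cause factor, diagnostics or repair are modeled.
    """
    return (lambda_du * proof_test_interval) ** n / (n + 1)
```

    For example, with lambda_du = 1e-6/h and a one-year proof-test interval, going from 1oo1 to 1oo4 drops the PFDavg by roughly six orders of magnitude, which is why common-cause failure (ignored above) dominates real 1oo4 figures.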

  18. LGM permafrost distribution: how well can the latest PMIP multi-model ensembles perform reconstruction?

    Directory of Open Access Journals (Sweden)

    K. Saito

    2013-08-01

    Full Text Available Here, global-scale frozen ground distribution from the Last Glacial Maximum (LGM) has been reconstructed using multi-model ensembles of global climate models, and then compared with evidence-based knowledge and earlier numerical results. Modeled soil temperatures, taken from Paleoclimate Modelling Intercomparison Project phase III (PMIP3) simulations, were used to diagnose the subsurface thermal regime and determine underlying frozen ground types for the present day (pre-industrial; 0 kya) and the LGM (21 kya). This direct method was then compared to an earlier indirect method, which categorizes underlying frozen ground type from surface air temperature, applied to both the PMIP2 (phase II) and PMIP3 products. Both direct and indirect diagnoses for 0 kya showed strong agreement with the present-day observation-based map. The soil temperature ensemble showed a higher diversity among the models around the border between permafrost and seasonally frozen ground, partly due to varying subsurface processes, implementation, and settings. The area of continuous permafrost estimated by the PMIP3 multi-model analysis through the direct (indirect) method was 26.0 (17.7) million km2 for the LGM, in contrast to 15.1 (11.2) million km2 for the pre-industrial control, whereas seasonally frozen ground decreased from 34.5 (26.6) million km2 to 18.1 (16.0) million km2. These changes in area resulted mainly from a cooler climate at the LGM, but from other factors as well, such as the presence of huge land ice sheets and the consequent expansion of total land area due to sea-level change. LGM permafrost boundaries modeled by the PMIP3 ensemble (improved over those of the PMIP2 due to higher spatial resolutions and improved climatology) also compared better to previous knowledge derived from geomorphological and geocryological evidence. Combinatorial applications of coupled climate models and detailed stand-alone physical-ecological models for the cold-region terrestrial

  19. LGM permafrost distribution: how well can the latest PMIP multi-model ensembles perform reconstruction?

    Science.gov (United States)

    Saito, K.; Sueyoshi, T.; Marchenko, S.; Romanovsky, V.; Otto-Bliesner, B.; Walsh, J.; Bigelow, N.; Hendricks, A.; Yoshikawa, K.

    2013-08-01

    Here, global-scale frozen ground distribution from the Last Glacial Maximum (LGM) has been reconstructed using multi-model ensembles of global climate models, and then compared with evidence-based knowledge and earlier numerical results. Modeled soil temperatures, taken from Paleoclimate Modelling Intercomparison Project phase III (PMIP3) simulations, were used to diagnose the subsurface thermal regime and determine underlying frozen ground types for the present day (pre-industrial; 0 kya) and the LGM (21 kya). This direct method was then compared to an earlier indirect method, which categorizes underlying frozen ground type from surface air temperature, applied to both the PMIP2 (phase II) and PMIP3 products. Both direct and indirect diagnoses for 0 kya showed strong agreement with the present-day observation-based map. The soil temperature ensemble showed a higher diversity among the models around the border between permafrost and seasonally frozen ground, partly due to varying subsurface processes, implementation, and settings. The area of continuous permafrost estimated by the PMIP3 multi-model analysis through the direct (indirect) method was 26.0 (17.7) million km2 for the LGM, in contrast to 15.1 (11.2) million km2 for the pre-industrial control, whereas seasonally frozen ground decreased from 34.5 (26.6) million km2 to 18.1 (16.0) million km2. These changes in area resulted mainly from a cooler climate at the LGM, but from other factors as well, such as the presence of huge land ice sheets and the consequent expansion of total land area due to sea-level change. LGM permafrost boundaries modeled by the PMIP3 ensemble (improved over those of the PMIP2 due to higher spatial resolutions and improved climatology) also compared better to previous knowledge derived from geomorphological and geocryological evidence. 
Combinatorial applications of coupled climate models and detailed stand-alone physical-ecological models for the cold-region terrestrial, paleo-, and modern

  20. Multi-Model Estimation Based Moving Object Detection for Aerial Video

    Directory of Open Access Journals (Sweden)

    Yanning Zhang

    2015-04-01

    Full Text Available With the rapid development of UAV (Unmanned Aerial Vehicle) technology, moving-target detection for aerial video has become a popular research topic in computer vision. Most existing methods follow the registration-detection framework and can only deal with simple background scenes. They tend to fail in complex multi-background scenarios, such as viaducts, buildings and trees. In this paper, we break through the single-background constraint and perceive complex scenes accurately by automatically estimating multiple background models. First, we segment the scene into several color blocks and estimate the dense optical flow. Then, we calculate an affine transformation model for each large-area block and merge consistent models. Finally, we compute, pixel by pixel, the degree of membership of all small-area blocks to the multiple background models. Moving objects are segmented by an energy optimization method solved via graph cuts. Extensive experimental results on public aerial videos show that, thanks to multi-background model estimation and the analysis of each pixel's membership to the multiple models by energy minimization, our method can effectively remove buildings, trees and other false alarms and detect moving objects correctly.
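
    The per-block affine motion estimation step can be sketched as a least-squares fit of the dense optical flow within a block. The function name and data layout are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def fit_affine(points, flow):
    """Least-squares affine motion model for one color block.

    points: (n, 2) pixel coordinates; flow: (n, 2) optical-flow vectors.
    Solves [x y 1] @ A = destination for the 2x3 affine parameters,
    mirroring the per-block model estimation step (details assumed).
    """
    src = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
    dst = points + flow                                   # flow endpoints
    A, *_ = np.linalg.lstsq(src, dst, rcond=None)
    return A  # shape (3, 2): linear part in rows 0-1, translation in row 2
```

    Blocks whose fitted parameters nearly agree can then be merged into one background model, which is what lets viaducts, buildings and trees each claim their own motion model.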

  1. Multi-model comparison of CO2 emissions peaking in China: Lessons from CEMF01 study

    Directory of Open Access Journals (Sweden)

    Oleg Lugovoy

    2018-03-01

    Full Text Available The paper summarizes the results of the China Energy Modeling Forum's (CEMF) first study. Carbon emission peaking scenarios consistent with China's Paris commitment have been simulated with seven national and industry-level energy models and compared. The CO2 emission trends in the considered scenarios peak between 2015 and 2030 at a level of 9–11 Gt. Sector-level analysis suggests that total emission pathways before 2030 will be determined mainly by the dynamics of emissions in the electric power industry and the transportation sector. Both sectors will experience a significant increase in demand but have low-carbon alternative options for development. Based on a side-by-side comparison of modeling inputs and results, conclusions have been drawn regarding the sources of differences in emission projections, which include data, views on economic perspectives, and model structure and theoretical framework. Some suggestions are made regarding energy model development priorities for further research. Keywords: Carbon emissions projections, Climate change, CO2 emissions peak, China's Paris commitment, Top-Down energy models, Bottom-Up energy models, Multi-model comparative study, China Energy Modeling Forum (CEMF)

  2. Reliability of multi-model and structurally different single-model ensembles

    Energy Technology Data Exchange (ETDEWEB)

    Yokohata, Tokuta [National Institute for Environmental Studies, Center for Global Environmental Research, Tsukuba, Ibaraki (Japan); Annan, James D.; Hargreaves, Julia C. [Japan Agency for Marine-Earth Science and Technology, Research Institute for Global Change, Yokohama, Kanagawa (Japan); Collins, Matthew [University of Exeter, College of Engineering, Mathematics and Physical Sciences, Exeter (United Kingdom); Jackson, Charles S.; Tobis, Michael [The University of Texas at Austin, Institute of Geophysics, 10100 Burnet Rd., ROC-196, Mail Code R2200, Austin, TX (United States); Webb, Mark J. [Met Office Hadley Centre, Exeter (United Kingdom)

    2012-08-15

    The performance of several state-of-the-art climate model ensembles, including two multi-model ensembles (MMEs) and four structurally different (perturbed parameter) single-model ensembles (SMEs), is investigated for the first time using the rank histogram approach. In this method, the reliability of a model ensemble is evaluated from the point of view of whether the observations can be regarded as being sampled from the ensemble. Our analysis reveals that, in the MMEs, the climate variables we investigated are broadly reliable on the global scale, with a tendency towards overdispersion. On the other hand, in the SMEs, the reliability differs depending on the ensemble and variable field considered. In general, the mean state and historical trend of surface air temperature, and the mean state of precipitation, are reliable in the SMEs. However, variables such as sea level pressure or top-of-atmosphere clear-sky shortwave radiation do not cover a sufficiently wide range in some ensembles. It is not possible to assess whether this is a fundamental feature of SMEs generated with a particular model, or a consequence of the algorithm used to select and perturb the parameter values. As under-dispersion is a potentially more serious issue when using ensembles to make projections, we recommend the application of rank histograms to assess reliability when designing and running perturbed physics SMEs. (orig.)

  3. A brief introduction to mixed effects modelling and multi-model inference in ecology.

    Science.gov (United States)

    Harrison, Xavier A; Donaldson, Lynda; Correa-Cano, Maria Eugenia; Evans, Julian; Fisher, David N; Goodwin, Cecily E D; Robinson, Beth S; Hodgson, David J; Inger, Richard

    2018-01-01

    The use of linear mixed effects models (LMMs) is increasingly common in the analysis of biological data. Whilst LMMs offer a flexible approach to modelling a broad range of data types, ecological data are often complex and require complex model structures, and the fitting and interpretation of such models is not always straightforward. The ability to achieve robust biological inference requires that practitioners know how and when to apply these tools. Here, we provide a general overview of current methods for the application of LMMs to biological data, and highlight the typical pitfalls that can be encountered in the statistical modelling process. We tackle several issues regarding methods of model selection, with particular reference to the use of information theory and multi-model inference in ecology. We offer practical solutions and direct the reader to key references that provide further technical detail for those seeking a deeper understanding. This overview should serve as a widely accessible code of best practice for applying LMMs to complex biological problems and model structures, and in doing so improve the robustness of conclusions drawn from studies investigating ecological and evolutionary questions.
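
    The information-theoretic multi-model inference discussed above is commonly operationalized with Akaike weights, which turn a set of AIC values into relative model support. A minimal sketch of that standard calculation:

```python
import numpy as np

def akaike_weights(aic):
    """Akaike weights for multi-model inference.

    Given AIC values for candidate models, returns each model's relative
    support: w_i = exp(-0.5 * delta_i) / sum_j exp(-0.5 * delta_j), where
    delta_i is the AIC difference from the best model.
    """
    delta = np.asarray(aic, dtype=float) - np.min(aic)
    rel = np.exp(-0.5 * delta)
    return rel / rel.sum()
```

    The weights can then be used to average predictions across candidate mixed models rather than committing to a single "best" structure.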

  4. Calibrating a multi-model approach to defect production in high energy collision cascades

    International Nuclear Information System (INIS)

    Heinisch, H.L.; Singh, B.N.; Diaz de la Rubia, T.

    1994-01-01

    A multi-model approach to simulating defect production processes at the atomic scale is described that incorporates molecular dynamics (MD), binary collision approximation (BCA) calculations and stochastic annealing simulations. The central hypothesis is that the simple, fast computer codes capable of simulating large numbers of high energy cascades (e.g., BCA codes) can be made to yield the correct defect configurations when their parameters are calibrated using the results of the more physically realistic MD simulations. The calibration procedure is investigated using results of MD simulations of 25 keV cascades in copper. The configurations of point defects are extracted from the MD cascade simulations at the end of the collisional phase, thus providing information similar to that obtained with a binary collision model. The MD collisional phase defect configurations are used as input to the ALSOME annealing simulation code, and values of the ALSOME quenching parameters are determined that yield the best fit to the post-quenching defect configurations of the MD simulations. (orig.)

  5. A multi-model assessment of regional climate disparities caused by solar geoengineering

    International Nuclear Information System (INIS)

    Kravitz, Ben; Rasch, Philip J; Singh, Balwinder; Yoon, Jin-Ho; MacMartin, Douglas G; Robock, Alan; Ricke, Katharine L; Cole, Jason N S; Curry, Charles L; Irvine, Peter J; Ji, Duoying; Moore, John C; Keith, David W; Egill Kristjánsson, Jón; Muri, Helene; Tilmes, Simone; Watanabe, Shingo; Yang, Shuting

    2014-01-01

    Global-scale solar geoengineering is the deliberate modification of the climate system to offset some amount of anthropogenic climate change by reducing the amount of incident solar radiation at the surface. These changes to the planetary energy budget result in differential regional climate effects. For the first time, we quantitatively evaluate the potential for regional disparities in a multi-model context using results from a model experiment that offsets the forcing from a quadrupling of CO2 via reduction in solar irradiance. We evaluate temperature and precipitation changes in 22 geographic regions spanning most of Earth's continental area. Moderate amounts of solar reduction (up to 85% of the amount that returns global mean temperatures to preindustrial levels) result in regional temperature values that are closer to preindustrial levels than an un-geoengineered, high-CO2 world for all regions and all models. However, in all but one model, there is at least one region for which no amount of solar reduction can restore precipitation toward its preindustrial value. For most metrics considering simultaneous changes in both variables, temperature and precipitation values in all regions are closer to the preindustrial climate for a moderate amount of solar reduction than for no solar reduction. (letter)

  6. A domain-decomposed multi-model plasma simulation of collisionless magnetic reconnection

    Science.gov (United States)

    Datta, I. A. M.; Shumlak, U.; Ho, A.; Miller, S. T.

    2017-10-01

    Collisionless magnetic reconnection is a process relevant to many areas of plasma physics in which energy stored in magnetic fields within highly conductive plasmas is rapidly converted into kinetic and thermal energy. Both in natural phenomena such as solar flares and terrestrial aurora as well as in magnetic confinement fusion experiments, the reconnection process is observed on timescales much shorter than those predicted by a resistive MHD model. As a result, this topic is an active area of research in which plasma models of varying fidelity have been tested in order to understand the physics underlying the reconnection process. In this research, a hybrid multi-model simulation employing the Hall-MHD and two-fluid plasma models on a decomposed domain is used to study this problem. The simulation is set up using the WARPXM code developed at the University of Washington, which uses a discontinuous Galerkin Runge-Kutta finite element algorithm and implements boundary conditions between models in the domain to couple their variable sets. The goal of the current work is to determine the parameter regimes most appropriate for each model to maintain sufficient physical fidelity over the whole domain while minimizing computational expense. This work is supported by a grant from the US AFOSR.

  7. An application of a multi model approach for solar energy prediction in Southern Italy

    Science.gov (United States)

    Avolio, Elenio; Lo Feudo, Teresa; Calidonna, Claudia Roberta; Contini, Daniele; Torcasio, Rosa Claudia; Tiriolo, Luca; Montesanti, Stefania; Transerici, Claudio; Federico, Stefano

    2015-04-01

    The accuracy of short- and medium-range forecasts of solar irradiance is very important for solar energy integration into the grid. This issue is particularly important for Southern Italy, where a significant availability of solar energy is associated with a poorly developed grid. In this work we analyse the performance of two deterministic models for the prediction of surface temperature and short-wavelength radiance at two sites in southern Italy. Both parameters are needed to forecast the power production of solar power plants, so the performance of the forecast of these meteorological parameters is of paramount importance. The models considered in this work are RAMS (Regional Atmospheric Modeling System) and WRF (Weather Research and Forecasting Model), and they were run for the summer of 2013 at 4 km horizontal resolution over Italy. The forecast lasts three days. Initial and dynamic boundary conditions are given by the 12 UTC deterministic forecast of the ECMWF-IFS (European Centre for Medium-Range Weather Forecasts - Integrated Forecasting System) model, and were available every 6 hours. Verification is performed against two surface stations located in Southern Italy, Lamezia Terme and Lecce, and is based on hourly output of the model forecasts. Results for the whole period for temperature show a positive bias for the RAMS model and a negative bias for the WRF model. RMSE is between 1 and 2 °C for both models. Results for the whole period for short-wavelength radiance show a positive bias for both models (about 30 W/m2) and an RMSE of 100 W/m2. To reduce the model errors, a statistical post-processing technique, i.e. the multi-model, is adopted. In this approach the two models' outputs are weighted with an adequate set of weights computed over a training period. In general, the performance is improved by the application of the technique, and the RMSE is reduced by a sizeable fraction (i.e. larger than 10% of the initial RMSE
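
    The multi-model weighting described above can be sketched as a least-squares fit of combination weights (plus a bias term) over the training period. The exact weighting scheme used operationally is not specified here, so treat this as an assumed minimal form:

```python
import numpy as np

def train_multimodel_weights(f1, f2, obs):
    """Least-squares weights and bias combining two model forecasts.

    Solves obs ~= w1*f1 + w2*f2 + b over a training period; a minimal
    version of the multi-model post-processing idea (assumed form).
    """
    X = np.column_stack([f1, f2, np.ones_like(f1)])
    coef, *_ = np.linalg.lstsq(X, obs, rcond=None)
    return coef  # (w1, w2, bias)

def combine(coef, f1, f2):
    """Apply trained weights to new forecasts from the two models."""
    return coef[0] * f1 + coef[1] * f2 + coef[2]
```

    The bias term absorbs the systematic offsets noted above (warm RAMS, cool WRF), while the weights favor whichever model tracks the observations better during training.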

  8. Multi model and data analysis of terrestrial carbon cycle in Asia: From 2001 to 2006

    Science.gov (United States)

    Ichii, K.; Takahashi, K.; Suzuki, T.; Ueyama, M.; Sasai, T.; Hirata, R.; Saigusa, N.

    2009-12-01

    Accurate monitoring and modeling of the current status of the terrestrial carbon cycle, and of the causes of its interannual variations, are important. Recently, many studies have combined multiple methods (e.g. satellite data and ecosystem models) to clarify the underlying mechanisms and recent trends, since each single methodology contains its own biases. The multi-model and data ensemble approach is a powerful method to clarify the current status and the underlying mechanisms. So far, many studies using multiple sources of data and models have been conducted in North America, Europe, Africa, Amazonia, and Japan; however, studies in monsoon Asia are lacking. In this study, we analyzed interannual variations in the terrestrial carbon cycle in monsoon Asia and evaluated the current capability of remote sensing and ecosystem models to capture them, based on multiple models and data sources: flux observations, remote sensing (e.g. MODIS, AVHRR, and VGT), and ecosystem models (e.g. SVM, BEAMS, CASA, Biome-BGC, LPJ, and TRIFFID). The satellite observations and ecosystem models show clear characteristics of interannual variability in satellite-based NDVI and model-based GPP. These are characterized by (1) spring NDVI and modeled GPP anomalies related to temperature anomalies in mid- and high-latitude areas (positive anomalies in 2002 and 2005 and negative ones in 2006), (2) NDVI and GPP anomalies in southeastern and central Asia related to precipitation (e.g. India from 2003-2006), and (3) summer NDVI and GPP anomalies in 2003 related to strong anomalies in solar radiation. NDVI anomalies related to radiation anomalies (summer 2003) were not accurately captured by the terrestrial ecosystem models. For example, the LPJ model shows positive GPP anomalies in Far East Asia, probably caused by positive precipitation anomalies. Further analysis requires improving the models to reproduce more consistent spatial patterns in the NDVI anomalies, and longer-term analysis (e.g. after 1982).

  9. Seasonal prediction of East Asian summer rainfall using a multi-model ensemble system

    Science.gov (United States)

    Ahn, Joong-Bae; Lee, Doo-Young; Yoo, Jin‑Ho

    2015-04-01

    Using the retrospective forecasts of seven state-of-the-art coupled models and their multi-model ensemble (MME) for boreal summers, the prediction skills of climate models in the western tropical Pacific (WTP) and the East Asian region are assessed. The prediction of summer rainfall anomalies in East Asia is difficult, whereas in the WTP there is a strong correlation between model prediction and observation. We focus on developing a new approach to further enhance the seasonal prediction skill for summer rainfall in East Asia and investigate the influence of convective activity in the WTP on East Asian summer rainfall. By analyzing the characteristics of WTP convection, two distinct patterns associated with the El Niño-Southern Oscillation developing and decaying modes are identified. Based on the multiple linear regression method, the East Asia Rainfall Index (EARI) is developed using the interannual variability of the normalized Maritime Continent-WTP Indices (MPIs), obtained from the above two main patterns, as potentially useful predictors for rainfall over East Asia. For East Asian summer rainfall, the EARI shows superior performance to the East Asian summer monsoon index or either MPI alone. Accordingly, the rainfall regressed from the EARI also shows a strong relationship with the observed East Asian summer rainfall pattern. In addition, we evaluate the prediction skill of East Asian rainfall reconstructed by a hybrid dynamical-statistical approach using the cross-validated EARI from the individual models and their MME. The results show that the rainfall reconstructed from the simulations captures the general features of observed precipitation in East Asia quite well. This study convincingly demonstrates that rainfall prediction skill is considerably improved by using a hybrid dynamical-statistical approach compared to the dynamical forecast alone. Acknowledgements This work was carried out with the support of Rural Development Administration Cooperative Research
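
    The index construction above combines normalized predictor indices through multiple linear regression. A minimal sketch of that step, assuming two generic normalized predictors standing in for the MPIs (the data values are synthetic illustrations, not the paper's):

```python
def solve3(a, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with partial pivoting."""
    m = [row[:] + [rhs] for row, rhs in zip(a, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [v - f * w for v, w in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]


def fit_index(x1, x2, y):
    """Regress y on [1, x1, x2] via the normal equations; returns (b0, b1, b2).

    y would be observed summer rainfall and x1, x2 the two predictor indices."""
    n = len(y)
    x12 = sum(a * b for a, b in zip(x1, x2))
    xtx = [[n, sum(x1), sum(x2)],
           [sum(x1), sum(a * a for a in x1), x12],
           [sum(x2), x12, sum(b * b for b in x2)]]
    xty = [sum(y),
           sum(a * c for a, c in zip(x1, y)),
           sum(b * c for b, c in zip(x2, y))]
    return solve3(xtx, xty)
```

    The fitted coefficients define the regression index, which is then evaluated against observations in cross-validation as described above.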

  10. Crop Model Improvement Reduces the Uncertainty of the Response to Temperature of Multi-Model Ensembles

    Science.gov (United States)

    Maiorano, Andrea; Martre, Pierre; Asseng, Senthold; Ewert, Frank; Mueller, Christoph; Roetter, Reimund P.; Ruane, Alex C.; Semenov, Mikhail A.; Wallach, Daniel; Wang, Enli

    2016-01-01

    To improve climate change impact estimates and to quantify their uncertainty, multi-model ensembles (MMEs) have been suggested. Model improvements can increase the accuracy of simulations and reduce the uncertainty of climate change impact assessments. Furthermore, they can reduce the number of models needed in an MME. Herein, 15 wheat growth models of a larger MME were improved through re-parameterization and/or incorporating or modifying heat stress effects on phenology, leaf growth and senescence, biomass growth, and grain number and size, using detailed field experimental data from the USDA Hot Serial Cereal experiment (calibration data set). Simulation results from before and after model improvement were then evaluated with independent field experiments from a CIMMYT worldwide field trial network (evaluation data set). Model improvements decreased the variation (10th to 90th model ensemble percentile range) of grain yields simulated by the MME on average by 39% in the calibration data set and by 26% in the independent evaluation data set for crops grown at mean seasonal temperatures greater than 24 °C. The MME mean squared error in simulating grain yield decreased by 37%. A 27% reduction in the MME uncertainty range increased MME prediction skill by 47%. The results suggest that the mean level of variation observed in field experiments, used here as a benchmark, can be reached with half the number of models in the MME. Improving crop models is therefore important for increasing the certainty of model-based impact assessments and allows more practical, i.e. smaller, MMEs to be used effectively.
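
    The ensemble spread metric used above, the 10th-to-90th percentile range of yields simulated by the MME members, reduces to a simple percentile computation. A sketch, assuming a generic linear-interpolation percentile convention (the abstract does not specify which convention was used):

```python
def percentile(values, p):
    """Linear-interpolated percentile (p in [0, 100]) of a list of values."""
    xs = sorted(values)
    k = (len(xs) - 1) * p / 100.0
    lo = int(k)
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + (xs[hi] - xs[lo]) * (k - lo)


def ensemble_range(yields):
    """10th-90th percentile range across ensemble members: the MME spread metric."""
    return percentile(yields, 90) - percentile(yields, 10)
```
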

  11. Yersinia virulence factors - a sophisticated arsenal for combating host defences [version 1; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Steve Atkinson

    2016-06-01

    Full Text Available The human pathogens Yersinia pseudotuberculosis and Yersinia enterocolitica cause enterocolitis, while Yersinia pestis is responsible for pneumonic, bubonic, and septicaemic plague. All three share an infection strategy that relies on a virulence factor arsenal to enable them to enter, adhere to, and colonise the host while evading host defences to avoid untimely clearance. Their arsenal includes a number of adhesins that allow the invading pathogens to establish a foothold in the host and to adhere to specific tissues later during infection. When the host innate immune system has been activated, all three pathogens produce a structure analogous to a hypodermic needle. In conjunction with the translocon, which forms a pore in the host membrane, the channel that is formed enables the transfer of six ‘effector’ proteins into the host cell cytoplasm. These proteins mimic host cell proteins but are more efficient than their native counterparts at modifying the host cell cytoskeleton, triggering the host cell suicide response. Such a sophisticated arsenal ensures that yersiniae maintain the upper hand despite the best efforts of the host to counteract the infecting pathogen.

  12. A Snapshot of Serial Rape: An Investigation of Criminal Sophistication and Use of Force on Victim Injury and Severity of the Assault.

    Science.gov (United States)

    de Heer, Brooke

    2016-02-01

    Prior research on rapes reported to law enforcement has identified criminal sophistication and the use of force against the victim as possible unique identifiers of serial rape versus one-time rape. This study sought to contribute to the current literature on reported serial rape by investigating how the level of criminal sophistication of the rapist and the use of force were associated with two important outcomes of rape: victim injury and overall severity of the assault. In addition, it was evaluated whether rapist and victim ethnicity affected these relationships. A nationwide sample of serial rape cases reported to law enforcement collected by the Federal Bureau of Investigation (FBI) was analyzed (108 rapists, 543 victims). Results indicated that serial rapists typically used a limited amount of force against the victim and displayed a high degree of criminal sophistication. In addition, the more criminally sophisticated the perpetrator was, the more sexual acts he performed on his victim. Finally, rapes involving a White rapist and a White victim were found to exhibit higher levels of criminal sophistication and were more severe in terms of the number and types of sexual acts committed. These findings provide a more in-depth understanding of serial rape that can inform both academics and practitioners in the field about contributors to victim injury and severity of the assault. © The Author(s) 2014.

  13. Building Models in the Classroom: Taking Advantage of Sophisticated Geomorphic Numerical Tools Using a Simple Graphical User Interface

    Science.gov (United States)

    Roy, S. G.; Koons, P. O.; Gerbi, C. C.; Capps, D. K.; Tucker, G. E.; Rogers, Z. A.

    2014-12-01

    Sophisticated numerical tools exist for modeling geomorphic processes and linking them to tectonic and climatic systems, but they are often seen as inaccessible for users with an exploratory level of interest. We have improved the accessibility of landscape evolution models by producing a simple graphical user interface (GUI) that takes advantage of the Channel-Hillslope Integrated Landscape Development (CHILD) model. Model access is flexible: the user can edit values for basic geomorphic, tectonic, and climate parameters, or obtain greater control by defining the spatiotemporal distributions of those parameters. Users can make educated predictions by choosing their own parametric values for the governing equations and interpreting the results immediately through model graphics. This method of modeling allows users to iteratively build their understanding through experimentation. Use of this GUI is intended for inquiry- and discovery-based learning activities. We discuss a number of examples of how the GUI can be used at the upper high school, introductory university, and advanced university levels. Effective teaching modules initially focus on an inquiry-based example guided by the instructor. As students become familiar with the GUI and the CHILD model, the class can shift to more student-centered exploration and experimentation. To make model interpretations more robust, digital elevation models can be imported and direct comparisons can be made between CHILD model results and natural topography. The GUI is available online through the University of Maine's Earth and Climate Sciences website, through the Community Surface Dynamics Modeling System (CSDMS) model repository, or by contacting the corresponding author.

  14. Asia-MIP: Multi Model-data Synthesis of Terrestrial Carbon Cycles in Asia

    Science.gov (United States)

    Ichii, K.; Kondo, M.; Ito, A.; Kang, M.; Sasai, T.; SATO, H.; Ueyama, M.; Kobayashi, H.; Saigusa, N.; Kim, J.

    2013-12-01

    Asia, which is characterized by a monsoon climate and intense human activities, is one of the most understudied regions in terms of terrestrial carbon budgets and mechanisms of carbon exchange. To better understand the terrestrial carbon cycle in Asia, we initiated a multi-model and data intercomparison project in Asia (Asia-MIP). We analyzed outputs from multiple approaches: satellite-based observations (AVHRR and MODIS) and related products, empirically upscaled estimates (Support Vector Regression) using the eddy-covariance observation networks in Asia (AsiaFlux, CarboEastAsia, FLUXNET), ~10 terrestrial biosphere models (e.g. BEAMS, Biome-BGC, LPJ, SEIB-DGVM, TRIFFID, and VISIT), and atmospheric inversion analyses (e.g. TransCom models). We focused on two different temporal coverages: long-term (30 years; 1982-2011) and decadal (10 years; 2001-2010; the data-intensive period) scales. The region covering Siberia, Far East Asia, East Asia, Southeast Asia and South Asia (60-80E, 10S-80N) was analyzed in this study to assess the magnitudes, interannual variability, and key driving factors of carbon cycles. We will report the progress of this synthesis effort to quantify the terrestrial carbon budget in Asia. First, we analyzed the recent trends in Gross Primary Productivity (GPP) using satellite-based observations (AVHRR) and multiple terrestrial biosphere models. We found that both the model outputs and the satellite-based observations consistently show an increasing trend in GPP in most regions of Asia. The mechanisms of the GPP increase were analyzed using the models: changes in temperature and precipitation play dominant roles in the GPP increase in boreal and temperate regions, whereas changes in atmospheric CO2 and precipitation are important in tropical regions. However, their relative contributions differed. Second, in the decadal analysis (2001-2010), we found that the negative GPP and carbon uptake anomaly in the summer of 2003 in Far East Asia is one of the largest

  15. Multi-model study of HTAP II on sulfur and nitrogen deposition

    Science.gov (United States)

    Tan, Jiani; Fu, Joshua S.; Dentener, Frank; Sun, Jian; Emmons, Louisa; Tilmes, Simone; Sudo, Kengo; Flemming, Johannes; Eiof Jonson, Jan; Gravel, Sylvie; Bian, Huisheng; Davila, Yanko; Henze, Daven K.; Lund, Marianne T.; Kucsera, Tom; Takemura, Toshihiko; Keating, Terry

    2018-05-01

    This study uses multi-model ensemble results of 11 models from the second phase of the Task Force on Hemispheric Transport of Air Pollution (HTAP II) to calculate the global sulfur (S) and nitrogen (N) deposition in 2010. Modeled wet deposition is evaluated with observation networks in North America, Europe and East Asia. The modeled results agree well with observations, with 76-83 % of stations being predicted within ±50 % of observations. The models underestimate SO42-, NO3- and NH4+ wet deposition at some European and East Asian stations but overestimate NO3- wet deposition in the eastern United States. Intercomparison with previous projects (PhotoComp, ACCMIP and HTAP I) shows that HTAP II has considerably improved the estimation of deposition at European and East Asian stations. Modeled dry deposition is generally higher than the inferential data calculated from observed concentrations and modeled velocities in North America, but the inferential data have high uncertainty, too. The global S deposition is 84 Tg(S) in 2010, with 49 % in continental regions and 51 % in the ocean (19 % of which coastal). The global N deposition consists of 59 Tg(N) oxidized nitrogen (NOy) deposition and 64 Tg(N) reduced nitrogen (NHx) deposition in 2010. About 65 % of N is deposited in continental regions, and 35 % in the ocean (15 % of which coastal). The estimated outflow of pollution from land to ocean is about 4 Tg(S) for S deposition and 18 Tg(N) for N deposition. Comparing our results to the results in 2001 from HTAP I, we find that the global distributions of S and N deposition have changed considerably during the last 10 years. The global S deposition decreases by 2 Tg(S) (3 %) from 2001 to 2010, with significant decreases in Europe (5 Tg(S) and 55 %), North America (3 Tg(S) and 29 %) and Russia (2 Tg(S) and 26 %), and increases in South Asia (2 Tg(S) and 42 %) and the Middle East (1 Tg(S) and 44 %). The global N deposition increases by 7 Tg(N) (6 %), mainly contributed by South Asia

  16. Climate change under aggressive mitigation: the ENSEMBLES multi-model experiment

    Energy Technology Data Exchange (ETDEWEB)

    Johns, T.C.; Hewitt, C.D. [Met Office, Hadley Centre, Exeter (United Kingdom); Royer, J.F.; Salas y. Melia, D. [Centre National de Recherches Meteorologiques-Groupe d' Etude de l' Atmosphere Meteorologique (CNRM-GAME Meteo-France CNRS), Toulouse (France); Hoeschel, I.; Koerper, J. [Freie Universitaet Berlin, Institute for Meteorology, Berlin (Germany); Huebener, H. [Hessian Agency for the Environment and Geology, Wiesbaden (Germany); Roeckner, E.; Giorgetta, M.A. [Max Planck Institute for Meteorology, Hamburg (Germany); Manzini, E. [Max Planck Institute for Meteorology, Hamburg (Germany); Istituto Nazionale di Geofisica e Vulcanologia, Bologna (Italy); Centro Euro-Mediterraneo per i Cambiamenti Climatici (CMCC), Bologna (Italy); May, W.; Yang, S. [Danish Meteorological Institute, Danish Climate Centre, Copenhagen (Denmark); Dufresne, J.L. [Laboratoire de Meteorologie Dynamique (LMD/IPSL), UMR 8539 CNRS, ENS, UPMC, Ecole Polytechnique, Paris Cedex 05 (France); Otteraa, O.H. [Nansen Environmental and Remote Sensing Center, Bergen (Norway); Bjerknes Centre for Climate Research, Bergen (Norway); Uni. Bjerknes Centre, Bergen (Norway); Vuuren, D.P. van [Utrecht University, Utrecht (Netherlands); Planbureau voor de Leefomgeving (PBL), Bilthoven (Netherlands); Denvil, S. [Institut Pierre Simon Laplace (IPSL), FR 636 CNRS, UVSQ, UPMC, Paris Cedex 05 (France); Fogli, P.G. [Centro Euro-Mediterraneo per i Cambiamenti Climatici (CMCC), Bologna (Italy); Tjiputra, J.F. [University of Bergen, Department of Geophysics, Bergen (Norway); Bjerknes Centre for Climate Research, Bergen (Norway); Stehfest, E. [Planbureau voor de Leefomgeving (PBL), Bilthoven (Netherlands)

    2011-11-15

    to 1990, with further large reductions needed beyond that to achieve the E1 concentrations pathway. Negative allowable anthropogenic carbon emissions at and beyond 2100 cannot be ruled out for the E1 scenario. There is self-consistency between the multi-model ensemble of allowable anthropogenic carbon emissions and the E1 scenario emissions from IMAGE 2.4. (orig.)

  17. An application of ensemble/multi model approach for wind power production forecast.

    Science.gov (United States)

    Alessandrini, S.; Decimi, G.; Hagedorn, R.; Sperati, S.

    2010-09-01

    Wind power forecasts for the 3-day-ahead period are becoming ever more useful and important for reducing grid-integration problems and for energy price trading as wind power penetration increases. It is therefore clear that the accuracy of these forecasts is one of the most important requirements for a successful application. The wind power forecast is based on mesoscale meteorological models that provide the 3-day-ahead wind data. A Model Output Statistics correction is then performed to reduce systematic errors caused, for instance, by a wrong representation of surface roughness or topography in the meteorological models. The corrected wind data are then used as input to the wind farm power curve to obtain the power forecast. These computations require historical time series of measured wind data (from an anemometer located in the wind farm or on the nacelle) and power data in order to perform the statistical analysis on the past. For this purpose a Neural Network (NN) is trained on the past data and then applied in the forecast task. Considering that anemometer measurements are not always available in a wind farm, a different approach has also been adopted: training the NN to link the forecasted meteorological data directly to the power data. The normalized RMSE forecast error seems to be lower in most cases with the second approach. We examined two wind farms, one located in Denmark on flat terrain and one located in a mountainous area in the south of Italy (Sicily). In both cases we compare the performance of a prediction based on meteorological data coming from a single model with that obtained by using two or more models (RAMS, ECMWF deterministic, LAMI, HIRLAM). It is shown that the multi-model approach reduces the day-ahead normalized RMSE forecast error by at least 1% compared to the single-model approach. Moreover, the use of a deterministic global model (e.g. ECMWF deterministic
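
    The Model Output Statistics step described above corrects systematic forecast errors against historical measurements. As a minimal illustration of the idea (the paper uses a neural network; a simple linear fit is shown here purely as a hedged stand-in):

```python
def fit_linear_mos(forecast_ws, observed_ws):
    """Fit a linear MOS correction obs ~ a + b * forecast from a training period.

    Returns (a, b): intercept and slope of the ordinary least-squares line."""
    n = len(forecast_ws)
    mx = sum(forecast_ws) / n
    my = sum(observed_ws) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(forecast_ws, observed_ws))
    var = sum((x - mx) ** 2 for x in forecast_ws)
    b = cov / var
    return my - b * mx, b


def correct(a, b, raw_forecast):
    """Apply the fitted MOS correction to a raw model wind-speed forecast."""
    return a + b * raw_forecast
```

    The corrected wind speed would then be passed through the wind farm power curve, as the abstract describes.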

  18. Post-processing of multi-model ensemble river discharge forecasts using censored EMOS

    Science.gov (United States)

    Hemri, Stephan; Lisniak, Dmytro; Klein, Bastian

    2014-05-01

    When forecasting water levels and river discharge, ensemble weather forecasts are used as meteorological input to hydrologic process models. As hydrologic models are imperfect and the input ensembles tend to be biased and underdispersed, the output ensemble forecasts for river runoff typically are biased and underdispersed, too. Thus, statistical post-processing is required in order to achieve calibrated and sharp predictions. Standard post-processing methods such as Ensemble Model Output Statistics (EMOS) that have their origins in meteorological forecasting are now increasingly being used in hydrologic applications. Here we consider two sub-catchments of River Rhine, for which the forecasting system of the Federal Institute of Hydrology (BfG) uses runoff data that are censored below predefined thresholds. To address this methodological challenge, we develop a censored EMOS method that is tailored to such data. The censored EMOS forecast distribution can be understood as a mixture of a point mass at the censoring threshold and a continuous part based on a truncated normal distribution. Parameter estimates of the censored EMOS model are obtained by minimizing the Continuous Ranked Probability Score (CRPS) over the training dataset. Model fitting on Box-Cox transformed data allows us to take account of the positive skewness of river discharge distributions. In order to achieve realistic forecast scenarios over an entire range of lead-times, there is a need for multivariate extensions. To this end, we smooth the marginal parameter estimates over lead-times. In order to obtain realistic scenarios of discharge evolution over time, the marginal distributions have to be linked with each other. To this end, the multivariate dependence structure can either be adopted from the raw ensemble like in Ensemble Copula Coupling (ECC), or be estimated from observations in a training period. The censored EMOS model has been applied to multi-model ensemble forecasts issued on a
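
    The censored forecast distribution described above is a mixture of a point mass at the censoring threshold and a normal part above it. A sketch of that distribution's CDF, assuming the location and scale parameters mu and sigma have already been fitted (the CRPS minimization itself is omitted):

```python
import math


def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))


def censored_normal_cdf(y, mu, sigma, threshold):
    """Predictive CDF of a normal distribution censored below `threshold`:
    all probability mass below the threshold is lumped onto the threshold."""
    if y < threshold:
        return 0.0
    return norm_cdf((y - mu) / sigma)


def point_mass_at_threshold(mu, sigma, threshold):
    """Probability assigned to the censoring threshold itself."""
    return norm_cdf((threshold - mu) / sigma)
```
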

  19. Multi-Model Assessment of Trends and Variability in Terrestrial Carbon Uptake in India

    Science.gov (United States)

    Rao, A. S.; Bala, G.; Ravindranath, N. H.

    2015-12-01

    The Indian terrestrial ecosystem exhibits large temporal and spatial variability in carbon sources and sinks due to its monsoon-based climate system, diverse land use and land cover distribution, and cultural practices. In this study, a multi-model assessment is made of the trends and variability in the land carbon uptake for India over the 20th century. Data from nine models that are part of a recent land surface model intercomparison project called TRENDY are used for the study. These models are driven with common forcing data over the period 1901-2010. The model output variables assessed include gross primary production (GPP), heterotrophic respiration (Rh), autotrophic respiration (Ra) and net primary production (NPP). The net ecosystem productivity (NEP) for the Indian region was calculated as the difference between NPP and Rh, and it was found that NEP for the region indicates an estimated increase in uptake over the century of 0.6 TgC/year per year. NPP for India also shows an increasing trend of 2.03% per decade from 1901-2010. The seasonal variation in the multi-model mean NPP peaks during the southwest monsoon period (JJA), followed by the post-monsoon period (SON), and is attributed to the rainfall maximum for the region during the months of JJA. To attribute the changes seen in the land carbon variables, the influence of climatic drivers such as precipitation and temperature, and the remote influence of large-scale phenomena such as ENSO, on the land carbon of the region are also estimated in the study. It is found that although changes in precipitation show a good correlation with the changes seen in NEP, remote drivers like ENSO do not have much effect on them. The net ecosystem exchange (NEE) is calculated with the inclusion of the land use change flux and fire flux from the models. NEE suggests that the region behaves as a small sink for carbon, with a net uptake of 5 GtC over the past hundred years.

  20. Multi-model ensemble simulations of low flows in Europe under a 1.5, 2, and 3 degree global warming

    Science.gov (United States)

    Marx, A.; Kumar, R.; Thober, S.; Zink, M.; Wanders, N.; Wood, E. F.; Pan, M.; Sheffield, J.; Samaniego, L. E.

    2017-12-01

    There is growing evidence that climate change will alter water availability in Europe. Here, we investigate how hydrological low flows are affected under different levels of future global warming (i.e., 1.5, 2 and 3 K). The analysis is based on a multi-model ensemble of 45 hydrological simulations built from three RCPs (rcp2p6, rcp6p0, rcp8p5), five CMIP5 GCMs (GFDL-ESM2M, HadGEM2-ES, IPSL-CM5A-LR, MIROC-ESM-CHEM, NorESM1-M) and three state-of-the-art hydrological models (HMs: mHM, Noah-MP, and PCR-GLOBWB). High-resolution model results are available at the unprecedented spatial resolution of 5 km across the pan-European domain at daily temporal resolution. Low river flow is described as the percentile of daily streamflow that is exceeded 90% of the time. It is determined separately for each GCM/HM combination and warming scenario. The results show that the change signal amplifies with increasing warming level. Low flows decrease in the Mediterranean, while they increase in the Alpine and Northern regions. In the Mediterranean, the level of warming amplifies the signal from -12% under 1.5 K to -35% under 3 K global warming, largely due to the projected decreases in annual precipitation. In the Alpine and Northern regions, in contrast, the signal is amplified from +22% (1.5 K) to +45% (3 K) because of the reduced snow-melt contribution. The changes in low flows are significant for regions with relatively large change signals and under higher levels of warming. Nevertheless, it is not possible to distinguish climate-induced differences in low flows between 1.5 and 2 K warming because of the large variability inherent in the multi-model ensemble. The GCMs dominate the uncertainty contribution in the Alpine and Northern regions as well as the Mediterranean; the uncertainty contribution by the HMs is partly higher than that by the GCMs due to different representations of processes such as snow, soil moisture and evapotranspiration.
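
    The low-flow index used above, the daily streamflow exceeded 90% of the time, is simply the 10th percentile of the daily flow series. A sketch, where the linear-interpolation convention is an assumption:

```python
def q90_low_flow(daily_flows):
    """Flow exceeded on 90% of days, i.e. the 10th percentile of daily streamflow."""
    xs = sorted(daily_flows)
    k = (len(xs) - 1) * 0.10
    lo = int(k)
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + (xs[hi] - xs[lo]) * (k - lo)


def relative_change(q_scenario, q_reference):
    """Percent change in the low-flow index between a warming scenario
    and the reference climate, as reported in the regional signals above."""
    return 100.0 * (q_scenario - q_reference) / q_reference
```
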

  1. A multi-model fusion strategy for multivariate calibration using near and mid-infrared spectra of samples from brewing industry

    Science.gov (United States)

    Tan, Chao; Chen, Hui; Wang, Chao; Zhu, Wanping; Wu, Tong; Diao, Yuanbo

    2013-03-01

    Near- and mid-infrared (NIR/MIR) spectroscopy techniques have gained great acceptance in industry due to their multiple applications and versatility. However, successful application often depends heavily on the construction of accurate and stable calibration models. For this purpose, a simple multi-model fusion strategy is proposed. It is a combination of the Kohonen self-organizing map (KSOM), mutual information (MI) and partial least squares (PLS), and is therefore named KMICPLS. It works as follows: first, the original training set is fed into a KSOM for unsupervised clustering of samples, from which a series of training subsets are constructed. Thereafter, on each training subset, an MI spectrum is calculated and only the variables with MI values higher than the mean value are retained, based on which a candidate PLS model is constructed. Finally, a fixed number of PLS models are selected to produce a consensus model. Two NIR/MIR spectral datasets from the brewing industry are used for the experiments. The results confirm its superior performance compared to two reference algorithms, i.e., conventional PLS and genetic algorithm-PLS (GAPLS). It can build more accurate and stable calibration models without increasing complexity, and can be generalized to other NIR/MIR applications.
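
    Two of the steps above reduce to very small routines: retaining only variables whose MI score exceeds the mean score, and averaging the member PLS models into a consensus prediction. A sketch, where the MI computation and PLS fitting themselves are omitted and any callables stand in for the fitted sub-models:

```python
def select_variables(scores):
    """Indices of variables whose relevance score exceeds the mean score,
    mirroring the MI-based retention rule described above."""
    mean = sum(scores) / len(scores)
    return [i for i, s in enumerate(scores) if s > mean]


def consensus_predict(models, x):
    """Consensus model: average the predictions of the member models.
    Each 'model' is any callable mapping a spectrum to a prediction."""
    preds = [m(x) for m in models]
    return sum(preds) / len(preds)
```
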

  2. Sophisticated Epistemologies of Physics versus High-Stakes Tests: How Do Elite High School Students Respond to Competing Influences about How to Learn Physics?

    Science.gov (United States)

    Yerdelen-Damar, Sevda; Elby, Andrew

    2016-01-01

    This study investigates how elite Turkish high school physics students claim to approach learning physics when they are simultaneously (i) engaged in a curriculum that led to significant gains in their epistemological sophistication and (ii) subject to a high-stakes college entrance exam. Students reported taking surface (rote) approaches to…

  3. A data-driven multi-model methodology with deep feature selection for short-term wind forecasting

    International Nuclear Information System (INIS)

    Feng, Cong; Cui, Mingjian; Hodge, Bri-Mathias; Zhang, Jie

    2017-01-01

    Highlights: • An ensemble model is developed to produce both deterministic and probabilistic wind forecasts. • A deep feature selection framework is developed to optimally determine the inputs to the forecasting methodology. • The developed ensemble methodology has improved the forecasting accuracy by up to 30%. - Abstract: With growing wind penetration into power systems worldwide, improving wind power forecasting accuracy is becoming increasingly important to ensure continued economic and reliable power system operations. In this paper, a data-driven multi-model wind forecasting methodology is developed with a two-layer ensemble machine learning technique. The first layer is composed of multiple machine learning models that generate individual forecasts. A deep feature selection framework is developed to determine the most suitable inputs to the first-layer machine learning models. Then, a blending algorithm is applied in the second layer to create an ensemble of the forecasts produced by the first-layer models and to generate both deterministic and probabilistic forecasts. This two-layer model seeks to utilize the statistically different characteristics of each machine learning algorithm. A number of machine learning algorithms are selected and compared in both layers. The developed multi-model wind forecasting methodology is compared to several benchmarks. The effectiveness of the proposed methodology is evaluated for 1-hour-ahead wind speed forecasting at seven locations of the Surface Radiation network. Numerical results show that, compared to the single-algorithm models, the developed multi-model framework with the deep feature selection procedure improves the forecasting accuracy by up to 30%.
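
    The second-layer blending step can be illustrated with one simple rule, inverse-MSE weighting of the first-layer forecasts. This is an assumption for illustration only; the abstract does not specify the actual blending algorithm:

```python
def inverse_mse_weights(first_layer_errors):
    """Weight each first-layer model by the inverse of its training
    mean-squared error, then normalize so the weights sum to one."""
    inv = [1.0 / e for e in first_layer_errors]
    total = sum(inv)
    return [w / total for w in inv]


def blend(forecasts, weights):
    """Second-layer deterministic forecast: weighted sum of member forecasts."""
    return sum(f * w for f, w in zip(forecasts, weights))
```

    A more accurate first-layer model (lower MSE) thus contributes more to the blended forecast.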

  4. A short-range weather prediction system for South Africa based on a multi-model approach

    CSIR Research Space (South Africa)

    Landman, S

    2012-10-01

    Full Text Available A short... to be skillful. Moreover, the system outscores the forecast skill of the individual models. Keywords: short-range, ensemble, forecasting, precipitation, multi-model, verification

  5. A diagnosis method for physical systems using a multi-modeling approach; Utilisation de l'approche multi-modeles pour l'aide au diagnostic d'installations industrielles

    Energy Technology Data Exchange (ETDEWEB)

    Thetiot, R

    2000-07-01

    In this thesis we propose a method for diagnosis problem solving. The method is based on a multi-modeling approach describing both the normal and abnormal behavior of a system. This modeling approach makes it possible to represent a system at different abstraction levels (behavioral, functional and teleological). Fundamental knowledge is described using a bond-graph representation. We show that the bond-graph representation can be exploited to generate (completely or partially) the functional models. The different models of the multi-modeling approach allow the functional state of a system to be defined at different abstraction levels. We exploit this property to exonerate sub-systems for which the expected behavior is observed. The behavioral and functional descriptions of the remaining sub-systems are exploited hierarchically in a two-step process. In the first step, the abnormal behaviors explaining some observations are identified. In the second step, the remaining unexplained observations are used to generate conflict sets and thus the consistency-based diagnoses. The modeling method and the diagnosis process have been applied to Reactor Coolant Pump Sets. This application illustrates the concepts described in this thesis and shows its potential. (authors)

  7. A case for multi-model and multi-approach based event attribution: The 2015 European drought

    Science.gov (United States)

    Hauser, Mathias; Gudmundsson, Lukas; Orth, René; Jézéquel, Aglaé; Haustein, Karsten; Seneviratne, Sonia Isabelle

    2017-04-01

    Science on the role of anthropogenic influence on extreme weather events such as heat waves or droughts has evolved rapidly in recent years. The approach of "event attribution" compares the occurrence probability of an event in the present, factual world with the probability of the same event in a hypothetical, counterfactual world without human-induced climate change. Every such analysis necessarily faces multiple methodological choices, including but not limited to the event definition, the climate model configuration, and the design of the counterfactual world. Here, we explore the role of such choices for an attribution analysis of the 2015 European summer drought (Hauser et al., in preparation). While some GCMs suggest that anthropogenic forcing made the 2015 drought more likely, others suggest no impact, or even a decrease in the event probability. These results additionally differ for single GCMs, depending on the reference used for the counterfactual world. Observational results do not suggest a historical tendency towards more drying, but the record may be too short to provide robust assessments because of the large interannual variability of drought occurrence. These results highlight the need for a multi-model and multi-approach framework in event attribution research. This is especially important for events with a low signal-to-noise ratio and high model dependency, such as regional droughts. Hauser, M., L. Gudmundsson, R. Orth, A. Jézéquel, K. Haustein, S.I. Seneviratne, in preparation. A case for multi-model and multi-approach based event attribution: The 2015 European drought.
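The factual-vs-counterfactual comparison described above reduces to estimating the event's occurrence probability in each world and comparing them. A minimal sketch, with purely illustrative Gaussian ensembles standing in for GCM output (the drought index, threshold, and distribution shift are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensembles of a summer drought index (lower = drier):
# "factual" world with anthropogenic forcing, "counterfactual" without.
factual = rng.normal(loc=-0.2, scale=1.0, size=10_000)
counterfactual = rng.normal(loc=0.0, scale=1.0, size=10_000)

threshold = -1.5  # event definition: index below this value

p1 = np.mean(factual < threshold)         # occurrence probability, factual world
p0 = np.mean(counterfactual < threshold)  # occurrence probability, counterfactual

risk_ratio = p1 / p0      # > 1: human influence made the event more likely
far = 1.0 - p0 / p1       # fraction of attributable risk
```

The sign and magnitude of the shift between the two worlds is exactly where the GCMs and counterfactual designs in this study disagree, which is what motivates the multi-model, multi-approach framework.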

  8. A method for state-of-charge estimation of Li-ion batteries based on multi-model switching strategy

    International Nuclear Information System (INIS)

    Wang, Yujie; Zhang, Chenbin; Chen, Zonghai

    2015-01-01

    Highlights: • Build a multi-model switching SOC estimation method for Li-ion batteries. • Build an improved interpretative structural modeling method for model switching. • A bus-delay feedback strategy is applied to improve real-time performance. • The EKF method is used for SOC estimation to improve accuracy. - Abstract: Accurate state-of-charge (SOC) estimation and real-time performance are critical evaluation indexes for Li-ion battery management systems (BMS). High-accuracy algorithms often require long program execution times (PET) in resource-constrained embedded application systems, which reduces the time slots available to other processes and thereby degrades the overall performance of the BMS. Considering resource optimization and computational load balance, this paper proposes a multi-model switching SOC estimation method for Li-ion batteries. Four typical battery models are employed to build a closed-loop SOC estimation system. The extended Kalman filter (EKF) method is employed to eliminate the effect of current noise and improve the accuracy of the SOC estimate. Experiments under dynamic current conditions are conducted to verify the accuracy and real-time performance of the proposed method. The experimental results indicate that accurate estimation results and reasonable PET can be obtained by the proposed method.
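The abstract does not give the filter equations; the following is a minimal single-state sketch of EKF-based SOC estimation of the kind described (the battery constants, the linear OCV curve, and the noise covariances are hypothetical placeholders, not values from the paper):

```python
import numpy as np

# Hypothetical battery constants (illustrative only)
Q_AS = 2.0 * 3600.0   # capacity in ampere-seconds (2 Ah)
R_INT = 0.05          # internal resistance (ohm)
DT = 1.0              # sample period (s)

def ocv(soc):
    """Toy open-circuit-voltage curve, linear in SOC."""
    return 3.0 + 1.2 * soc

def ekf_soc(current, voltage, soc0=0.5, p0=0.1, q=1e-7, r=1e-3):
    """Scalar EKF: coulomb-counting prediction, voltage-based correction."""
    soc, p = soc0, p0
    estimates = []
    for i, v in zip(current, voltage):
        # Predict: coulomb counting (discharge current positive)
        soc -= i * DT / Q_AS
        p += q
        # Update: linearize the measurement around the predicted SOC
        h = 1.2                          # d(ocv)/d(soc) for the toy curve
        v_pred = ocv(soc) - R_INT * i
        k = p * h / (h * p * h + r)      # Kalman gain
        soc += k * (v - v_pred)
        p = (1.0 - k * h) * p
        estimates.append(soc)
    return np.array(estimates)

# Synthetic 1 A discharge with noisy voltage readings
rng = np.random.default_rng(1)
n = 600
true_soc = 0.8 - np.arange(n) * DT / Q_AS
volts = ocv(true_soc) - R_INT * 1.0 + rng.normal(0.0, 0.02, n)
est = ekf_soc(np.ones(n), volts, soc0=0.6)  # deliberately wrong initial SOC
```

The voltage correction pulls the estimate from the wrong initial guess onto the true trajectory within a few samples, which is the property that lets the paper trade model fidelity against execution time when switching models.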

  9. Factors affecting infestation by Triatoma infestans in a rural area of the humid Chaco in Argentina: a multi-model inference approach.

    Science.gov (United States)

    Gurevitz, Juan M; Ceballos, Leonardo A; Gaspe, María Sol; Alvarado-Otegui, Julián A; Enríquez, Gustavo F; Kitron, Uriel; Gürtler, Ricardo E

    2011-10-01

    Transmission of Trypanosoma cruzi by Triatoma infestans remains a major public health problem in the Gran Chaco ecoregion, where understanding of the determinants of house infestation is limited. We conducted a cross-sectional study to model factors affecting bug presence and abundance at sites within house compounds in a well-defined rural area in the humid Argentine Chaco. Triatoma infestans bugs were found in 45.9% of 327 inhabited house compounds but only in 7.4% of the 2,584 sites inspected systematically on these compounds, even though the last insecticide spraying campaign was conducted 12 years before. Infested sites were significantly aggregated at distances of 0.8-2.5 km. The most frequently infested ecotopes were domiciles, kitchens, storerooms, chicken coops and nests; corrals were rarely infested. Domiciles with mud walls and roofs of thatch or corrugated tarred cardboard were more often infested (32.2%) than domiciles with brick-and-cement walls and corrugated metal-sheet roofs (15.1%). A multi-model inference approach using Akaike's information criterion was applied to assess the relative importance of each variable by running all possible (17,406) models resulting from all combinations of variables. Availability of refuges for bugs, construction with tarred cardboard, and host abundance (humans, dogs, cats, and poultry) per site were positively associated with infestation and abundance, whereas reported insecticide use showed a negative association. Ethnic background (Creole or Toba) adjusted for other factors showed little or no association. Promotion and effective implementation of housing improvement (including key peridomestic structures) combined with appropriate insecticide use and host management practices are needed to eliminate infestations. Fewer refuges are likely to result in fewer residual foci after insecticide spraying, and will facilitate community-based vector surveillance. 
A more integrated perspective that considers simultaneously
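The multi-model inference step above, ranking candidate models by Akaike's information criterion and summing Akaike weights to score each variable, can be sketched as follows (the model names and AIC values are invented for illustration; the study ran all 17,406 candidate models):

```python
import math

# Hypothetical AIC values for a handful of candidate infestation models
aic = {"refuge": 210.3, "refuge+hosts": 208.1, "hosts": 215.7, "null": 222.4}

best = min(aic.values())
# Relative likelihood of each model: exp(-0.5 * delta_AIC)
rel_lik = {m: math.exp(-0.5 * (a - best)) for m, a in aic.items()}
total = sum(rel_lik.values())
akaike_weights = {m: l / total for m, l in rel_lik.items()}

# A variable's relative importance is the summed weight of all models
# that contain it.
var_importance_refuge = sum(
    w for m, w in akaike_weights.items() if "refuge" in m
)
```

Weights sum to one across the candidate set, so the summed weight for a variable such as refuge availability directly measures how strongly the data support including it.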

  10. Grand European and Asian-Pacific multi-model seasonal forecasts: maximization of skill and of potential economical value to end-users

    Science.gov (United States)

    Alessandri, A.; De Felice, M.; Catalano, F.; Lee, J. Y.; Wang, B.; Lee, D. Y.; Yoo, J. H.; Weisheimer, A.

    2017-12-01

    By initiating a novel cooperation between the European and the Asian-Pacific climate-prediction communities, this work demonstrates the potential of gathering together their Multi-Model Ensembles (MMEs) to obtain useful climate predictions at the seasonal time-scale. MMEs are powerful tools in dynamical climate prediction as they account for the overconfidence and the uncertainties related to single-model ensembles, and increasing benefit is expected with increasing independence of the contributing Seasonal Prediction Systems (SPSs). In this work we combine the two MME SPSs independently developed by the European (ENSEMBLES) and by the Asian-Pacific (APCC/CliPAS) communities by establishing an unprecedented partnership. To this aim, all the possible MME combinations obtained by putting together the 5 models from ENSEMBLES and the 11 models from APCC/CliPAS have been evaluated. The Grand ENSEMBLES-APCC/CliPAS MME significantly enhances the skill in predicting 2m temperature and precipitation. Our results show that, in general, the best combinations of SPSs are obtained by mixing ENSEMBLES and APCC/CliPAS models and that only a limited number of SPSs is required to obtain the maximum performance. The selection of models that perform better usually differs depending on the region/phenomenon under consideration, so that all models are useful in some cases. It is shown that the incremental performance contribution tends to be higher when adding one model from ENSEMBLES to APCC/CliPAS MMEs and vice versa, confirming that the benefit of using MMEs amplifies with increasing independence of the contributing models. To verify the above results for a real-world application, the Grand MME is used to predict energy demand over Italy as provided by TERNA (Italian Transmission System Operator) for the period 1990-2007. The results demonstrate the useful application of MME seasonal predictions for energy demand forecasting over Italy. It is shown a significant
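Exhaustively scoring every subset of contributing SPSs, as done for the Grand MME, can be sketched with synthetic hindcasts (equal-weight MME mean, anomaly correlation as the skill metric; all data below are illustrative, not ENSEMBLES/APCC-CliPAS output):

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
n_years, n_models = 20, 5
truth = rng.normal(size=n_years)
# Hypothetical hindcasts: each model sees the signal plus its own noise
forecasts = truth + rng.normal(scale=1.0, size=(n_models, n_years))

def skill(subset):
    """Anomaly correlation of the equal-weight MME mean with observations."""
    mme = forecasts[list(subset)].mean(axis=0)
    return np.corrcoef(mme, truth)[0, 1]

# Evaluate every non-empty combination of models and keep the best
best = max(
    (c for k in range(1, n_models + 1)
     for c in itertools.combinations(range(n_models), k)),
    key=skill,
)
```

With 5 + 11 = 16 systems the real study faces 2¹⁶ − 1 = 65,535 combinations, which is why it matters that near-maximum skill is reached with only a limited number of well-chosen, mutually independent SPSs.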

  11. A comparative analysis of 9 multi-model averaging approaches in hydrological continuous streamflow simulation

    Science.gov (United States)

    Arsenault, Richard; Gatien, Philippe; Renaud, Benoit; Brissette, François; Martel, Jean-Luc

    2015-10-01

    This study aims to test whether a weighted combination of several hydrological models can simulate flows more accurately than the models taken individually. In addition, the project attempts to identify the most efficient model averaging method and the optimal number of models to include in the weighting scheme. To address the first objective, streamflow was simulated using four lumped hydrological models (HSAMI, HMETS, MOHYSE and GR4J-6), each of which was calibrated with three different objective functions on 429 watersheds. The resulting 12 hydrographs (4 models × 3 metrics) were weighted and combined using nine averaging methods: the simple arithmetic mean (SAM), Akaike information criterion averaging (AICA), Bates-Granger averaging (BGA), Bayes information criterion averaging (BICA), Bayesian model averaging (BMA), Granger-Ramanathan averaging variants A, B and C (GRA, GRB and GRC), and averaging by SCE-UA optimization (SCA). The same weights were then applied to the hydrographs in validation mode, and the Nash-Sutcliffe Efficiency metric was measured between the averaged and observed hydrographs. Statistical analyses were performed to compare the accuracy of the weighting methods to that of the individual models. A Kruskal-Wallis test and a multi-objective optimization algorithm were then used to identify the most efficient weighting method and the optimal number of models to integrate. Results suggest that the GRA, GRB, GRC and SCA weighting methods perform better than the individual members. Model averages from these four methods were superior to the best of the individual members in 76% of the cases. Optimal combinations on all watersheds included at least one of each of the four hydrological models. None of the optimal combinations included all members of the ensemble of 12 hydrographs. The Granger-Ramanathan average variant C (GRC) is recommended as the best compromise between accuracy, speed of execution, and simplicity.
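As a concrete example of the family of averaging methods compared here, a Granger-Ramanathan-style combination amounts to least-squares regression of observed flows on the member simulations during calibration; a sketch with synthetic flows (the member biases and noise levels are invented, and this is only one unconstrained variant, not the exact GRA/GRB/GRC formulations of the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 365
obs = rng.gamma(2.0, 5.0, size=n)   # synthetic "observed" daily flow
# Hypothetical member simulations: biased/noisy versions of the observations
members = np.stack([
    0.8 * obs + rng.normal(0.0, 2.0, n),
    1.3 * obs + rng.normal(0.0, 3.0, n),
    obs + rng.normal(0.0, 1.0, n),
])

# Least-squares combination weights fitted in "calibration"
w, *_ = np.linalg.lstsq(members.T, obs, rcond=None)
combined = w @ members

def nse(sim, o):
    """Nash-Sutcliffe efficiency: 1 is a perfect simulation."""
    return 1.0 - np.sum((o - sim) ** 2) / np.sum((o - o.mean()) ** 2)
```

Because any single member is itself a feasible weight vector, the fitted combination can never score worse than the best member on the calibration data; the open question the study answers is whether that advantage survives in validation.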

  12. Multi-modeling assessment of recent changes in groundwater resource: application to the semi-arid Haouz plain (Central Morocco)

    Science.gov (United States)

    Fakir, Younes; Brahim, Berjamy; Page Michel, Le; Fathallah, Sghrer; Houda, Nassah; Lionel, Jarlan; Raki Salah, Er; Vincent, Simonneaux; Said, Khabba

    2015-04-01

    The Haouz plain (6000 km2) is part of the Tensift basin in Central Morocco. The plain has a semi-arid climate (250 mm/y of rainfall) and is bordered to the south by the High Atlas mountains. Because the plain is highly anthropized, its water resources face heavy demands from various competing sectors, including agriculture (more than 273,000 ha of irrigated area) and water supply for more than 2 million inhabitants and about 2 million tourists annually. Consequently, the groundwater is being depleted over a large area of the plain, and the resulting water scarcity poses serious threats to water supplies and to sustainable development. The groundwater in the Haouz plain was previously modeled with MODFLOW (the USGS groundwater numerical model) at annual time steps. In the present study a multi-modeling approach is applied. The aim is to enhance the evaluation of groundwater pumping for irrigation, one of the most difficult quantities to estimate, and to improve the water balance assessment. For this purpose, two other models were added: SAMIR (satellite-based estimation of agricultural water demand) and WEAP (integrated water resources planning). The three models are implemented at a monthly time step and calibrated over the 2001-2011 period, corresponding to 120 time steps. This multi-modeling approach allows the evolution of water resources to be assessed both in time and space. The results show deep changes during recent years that affect water resources in general and groundwater in particular. These changes are driven by rapid urban development, a succession of droughts, intensive agricultural activity and weak management of irrigation and water resources. Some indicators of these changes are as follows: (i) the groundwater table declines by 1 to 3 m/year; (ii) the groundwater depletion during the last ten years is equivalent to 50% of the reserves lost over 40 years; (iii) the annual groundwater deficit is about 100 hm3; (iv) the renewable

  13. A multi-model assessment of the co-benefits of climate mitigation for global air quality

    Energy Technology Data Exchange (ETDEWEB)

    Rao, Shilpa; Klimont, Zbigniew; Leitao, Joana; Riahi, Keywan; van Dingenen, Rita; Reis, Lara Aleluia; Calvin, Katherine; Dentener, Frank; Drouet, Laurent; Fujimori, Shinichiro; Harmsen, Mathijs; Luderer, Gunnar; Heyes, Chris; Strefler, Jessica; Tavoni, Massimo; van Vuuren, Detlef P.

    2016-12-01

    The recent Intergovernmental Panel on Climate Change (IPCC) report identifies significant co-benefits of climate policies for near-term ambient air pollution and related human health outcomes [1]. This is increasingly relevant for policy making, as the health impacts of air pollution are a major global concern: the Global Burden of Disease (GBD) study identifies outdoor air pollution as the sixth major cause of death globally [2]. Integrated assessment models (IAMs) are an effective tool to evaluate future air pollution outcomes across a wide range of assumptions on socio-economic development and policy regimes. The Representative Concentration Pathways (RCPs) [3] were the first set of long-term global scenarios developed across multiple integrated assessment models that provided detailed estimates of a number of air pollutants until 2100. However, these scenarios were primarily designed to cover a defined range of radiative forcing outcomes and thus did not specifically focus on the interactions of long-term climate goals with near-term air pollution impacts. More recently, [4] used the RCP4.5 scenario to evaluate the co-benefits of global GHG reductions for air quality and human health in 2030. [5-7] have further examined the interactions of more diverse pollution control regimes with climate policies. This paper extends the listed studies in a number of ways. First, it uses multiple IAMs to examine the co-benefits of a global climate policy for ambient air pollution under harmonized assumptions on near-term air pollution control. Multi-model frameworks have been used extensively in the analysis of climate change mitigation pathways and of the structural uncertainties regarding the underlying mechanisms (see for example [8-10]). This is to our knowledge the first time that a multi-model evaluation has been specifically designed and applied to analyze the co-benefits of climate change policy on ambient air quality, thus enabling a better understanding of at a detailed

  14. Effective Multi-Model Motion Tracking Under Multiple Team Member Actuators

    OpenAIRE

    Gu, Yang; Veloso, Manuela

    2009-01-01

    Motivated by the interactions between a team and the tracked target, we contribute a method to achieve efficient tracking by using a play-based motion model and combined vision and infrared sensory information. This method gives the robot a more exact, task-specific motion model when executing different tactics over the tracked target (e.g. the ball) or collaborating with the tracked target (e.g. a team member). We then represent the system in a compact dynamic Bayesian network and use ...

  15. Framework for Infectious Disease Analysis: A comprehensive and integrative multi-modeling approach to disease prediction and management.

    Science.gov (United States)

    Erraguntla, Madhav; Zapletal, Josef; Lawley, Mark

    2017-12-01

    The impact of infectious disease on human populations is a function of many factors including environmental conditions, vector dynamics, transmission mechanics, social and cultural behaviors, and public policy. A comprehensive framework for disease management must fully connect the complete disease lifecycle, including emergence from reservoir populations, zoonotic vector transmission, and impact on human societies. The Framework for Infectious Disease Analysis is a software environment and conceptual architecture for data integration, situational awareness, visualization, prediction, and intervention assessment. Framework for Infectious Disease Analysis automatically collects biosurveillance data using natural language processing, integrates structured and unstructured data from multiple sources, applies advanced machine learning, and uses multi-modeling for analyzing disease dynamics and testing interventions in complex, heterogeneous populations. In the illustrative case studies, natural language processing from social media, news feeds, and websites was used for information extraction, biosurveillance, and situation awareness. Classification machine learning algorithms (support vector machines, random forests, and boosting) were used for disease predictions.

  16. Comparing reconstructed past variations and future projections of the Baltic Sea ecosystem—first results from multi-model ensemble simulations

    International Nuclear Information System (INIS)

    Meier, H E Markus; Andersson, Helén C; Arheimer, Berit; Donnelly, Chantal; Eilola, Kari; Höglund, Anders; Kuznetsov, Ivan; Blenckner, Thorsten; Gustafsson, Bo G; Müller-Karulis, Bärbel; Niiranen, Susa; Chubarenko, Boris; Hansson, Anders; Havenhand, Jonathan; MacKenzie, Brian R; Neumann, Thomas; Piwowarczyk, Joanna; Raudsepp, Urmas; Reckermann, Marcus; Ruoho-Airola, Tuija

    2012-01-01

    Multi-model ensemble simulations for the marine biogeochemistry and food web of the Baltic Sea were performed for the period 1850–2098, and projected changes in the future climate were compared with the past climate environment. For the past period 1850–2006, atmospheric, hydrological and nutrient forcings were reconstructed, based on historical measurements. For the future period 1961–2098, scenario simulations were driven by regionalized global general circulation model (GCM) data and forced by various future greenhouse gas emission and air- and riverborne nutrient load scenarios (ranging from a pessimistic ‘business-as-usual’ case to the most optimistic one). To estimate uncertainties, different models for the various parts of the Earth system were applied. Assuming the IPCC greenhouse gas emission scenarios A1B or A2, we found that water temperatures at the end of this century may be higher, and salinities and oxygen concentrations lower, than ever measured since 1850. There is also a tendency towards increased eutrophication in the future, depending on the nutrient load scenario. Although cod biomass is mainly controlled by fishing mortality, climate change together with eutrophication may result in a biomass decline during the latter part of this century, even when combined with lower fishing pressure. Despite considerable shortcomings of state-of-the-art models, this study suggests that the future Baltic Sea ecosystem may change in ways unprecedented over the past 150 years. As stakeholders today pay little attention to adaptation and mitigation strategies, more information is needed to raise public awareness of the possible impacts of climate change on marine ecosystems. (letter)

  17. Identifying and Assessing Gaps in Subseasonal to Seasonal Prediction Skill using the North American Multi-model Ensemble

    Science.gov (United States)

    Pegion, K.; DelSole, T. M.; Becker, E.; Cicerone, T.

    2016-12-01

    Predictability represents the upper limit of prediction skill if we had an infinite member ensemble and a perfect model. It is an intrinsic limit of the climate system associated with the chaotic nature of the atmosphere. Producing a forecast system that can make predictions very near to this limit is the ultimate goal of forecast system development. Estimates of predictability together with calculations of current prediction skill are often used to define the gaps in our prediction capabilities on subseasonal to seasonal timescales and to inform the scientific issues that must be addressed to build the next forecast system. Quantification of the predictability is also important for providing a scientific basis for relaying to stakeholders what kind of climate information can be provided to inform decision-making and what kind of information is not possible given the intrinsic predictability of the climate system. One challenge with predictability estimates is that different prediction systems can give different estimates of the upper limit of skill. How do we know which estimate of predictability is most representative of the true predictability of the climate system? Previous studies have used the spread-error relationship and the autocorrelation to evaluate the fidelity of the signal and noise estimates. Using a multi-model ensemble prediction system, we can quantify whether these metrics accurately indicate an individual model's ability to properly estimate the signal, noise, and predictability. We use this information to identify the best estimates of predictability for 2-meter temperature, precipitation, and sea surface temperature from the North American Multi-model Ensemble and compare with current skill to indicate the regions with potential for improving skill.
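The signal, noise, and predictability estimates discussed above can be illustrated with a toy decomposition of synthetic hindcasts: the variance of the ensemble mean (corrected for the finite ensemble size) estimates the predictable signal, the mean within-ensemble variance estimates the noise, and their ratio bounds attainable skill. All numbers below are synthetic, not NMME output:

```python
import numpy as np

rng = np.random.default_rng(4)
n_years, n_members = 30, 10

# Synthetic hindcasts: a predictable signal shared by all members of a
# given year's ensemble, plus unpredictable member-specific noise.
signal = rng.normal(size=(n_years, 1))
hindcasts = signal + rng.normal(scale=0.7, size=(n_years, n_members))

noise_var = hindcasts.var(axis=1, ddof=1).mean()   # within-ensemble spread
# The ensemble-mean variance overestimates the signal by noise/M; correct it.
sig_var = hindcasts.mean(axis=1).var(ddof=1) - noise_var / n_members
predictability = sig_var / (sig_var + noise_var)   # R^2-like upper skill bound
```

Comparing such an estimate against the skill actually achieved by a forecast system is the "gap" analysis the abstract describes; the difficulty is that each model in a multi-model ensemble yields its own, possibly biased, version of `sig_var` and `noise_var`.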

  18. The multi temporal/multi-model approach to predictive uncertainty assessment in real-time flood forecasting

    Science.gov (United States)

    Barbetta, Silvia; Coccia, Gabriele; Moramarco, Tommaso; Brocca, Luca; Todini, Ezio

    2017-08-01

    This work extends the multi-temporal approach of the Model Conditional Processor (MCP-MT) to the multi-model case and to the four Truncated Normal Distributions (TNDs) approach, demonstrating the improvement over the single-temporal approach. The study is framed in the context of probabilistic Bayesian decision-making, which is appropriate for taking rational decisions on uncertain future outcomes. As opposed to the direct use of deterministic forecasts, a probabilistic forecast identifies a predictive probability density function that represents fundamental knowledge on future occurrences. The added value of MCP-MT is the identification of the probability that a critical situation will happen within the forecast lead time and when, most likely, it will occur. MCP-MT is thoroughly tested for both single-model and multi-model configurations at a gauged site on the Tiber River, central Italy. The stages forecasted by two operational deterministic models, STAFOM-RCM and MISDc, are considered for the study. The dataset used for the analysis consists of hourly data from 34 flood events selected from a six-year time series. MCP-MT improves over the original models' forecasts: the peak overestimation characteristic of MISDc and the delayed rising-limb forecast characteristic of STAFOM-RCM are significantly mitigated, with the mean error on peak stage reduced from 45 to 5 cm and the coefficient of persistence increased from 0.53 up to 0.75. The results show that MCP-MT outperforms the single-temporal approach and is potentially useful for supporting decision-making, because the exceedance probability of hydrometric thresholds within a forecast horizon and the most probable flooding time can be estimated.
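The decision-relevant outputs of such a processor, the probability that a hydrometric threshold is exceeded at each lead time and the most probable flooding time, can be sketched assuming Gaussian predictive distributions (the means, spreads, and threshold below are invented, and the actual method uses truncated normal distributions rather than plain Gaussians):

```python
import math

def exceedance_prob(mean, std, threshold):
    """P(stage > threshold) for a Gaussian predictive distribution."""
    z = (threshold - mean) / std
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Hypothetical predictive means/stds (m) for lead times 1..6 h,
# with a hydrometric alert threshold at 4.0 m
leads = [1, 2, 3, 4, 5, 6]
means = [3.2, 3.6, 3.9, 4.1, 4.0, 3.7]
stds = [0.10, 0.15, 0.20, 0.25, 0.30, 0.35]

probs = [exceedance_prob(m, s, 4.0) for m, s in zip(means, stds)]
most_probable_hour = leads[probs.index(max(probs))]
```

The per-lead exceedance probabilities give a decision-maker the chance of crossing the alert level within the horizon, and the lead time with the highest probability is the most likely flooding time.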

  19. Multi-model comparison highlights consistency in predicted effect of warming on a semi-arid shrub

    Science.gov (United States)

    Renwick, Katherine M.; Curtis, Caroline; Kleinhesselink, Andrew R.; Schlaepfer, Daniel R.; Bradley, Bethany A.; Aldridge, Cameron L.; Poulter, Benjamin; Adler, Peter B.

    2018-01-01

    A number of modeling approaches have been developed to predict the impacts of climate change on species distributions, performance, and abundance. The stronger the agreement from models that represent different processes and are based on distinct and independent sources of information, the greater the confidence we can have in their predictions. Evaluating the level of confidence is particularly important when predictions are used to guide conservation or restoration decisions. We used a multi-model approach to predict climate change impacts on big sagebrush (Artemisia tridentata), the dominant plant species on roughly 43 million hectares in the western United States and a key resource for many endemic wildlife species. To evaluate the climate sensitivity of A. tridentata, we developed four predictive models, two based on empirically derived spatial and temporal relationships, and two that applied mechanistic approaches to simulate sagebrush recruitment and growth. This approach enabled us to produce an aggregate index of climate change vulnerability and uncertainty based on the level of agreement between models. Despite large differences in model structure, predictions of sagebrush response to climate change were largely consistent. Performance, as measured by change in cover, growth, or recruitment, was predicted to decrease at the warmest sites, but increase throughout the cooler portions of sagebrush's range. A sensitivity analysis indicated that sagebrush performance responds more strongly to changes in temperature than precipitation. Most of the uncertainty in model predictions reflected variation among the ecological models, raising questions about the reliability of forecasts based on a single modeling approach. Our results highlight the value of a multi-model approach in forecasting climate change impacts and uncertainties and should help land managers to maximize the value of conservation investments.

  20. Multi-model Estimates of Intercontinental Source-Receptor Relationships for Ozone Pollution

    Energy Technology Data Exchange (ETDEWEB)

    Fiore, A M; Dentener, F J; Wild, O; Cuvelier, C; Schultz, M G; Hess, P; Textor, C; Schulz, M; Doherty, R; Horowitz, L W; MacKenzie, I A; Sanderson, M G; Shindell, D T; Stevenson, D S; Szopa, S; Van Dingenen, R; Zeng, G; Atherton, C; Bergmann, D; Bey, I; Carmichael, G; Collins, W J; Duncan, B N; Faluvegi, G; Folberth, G; Gauss, M; Gong, S; Hauglustaine, D; Holloway, T; Isaksen, I A; Jacob, D J; Jonson, J E; Kaminski, J W; Keating, T J; Lupu, A; Marmer, E; Montanaro, V; Park, R; Pitari, G; Pringle, K J; Pyle, J A; Schroeder, S; Vivanco, M G; Wind, P; Wojcik, G; Wu, S; Zuber, A

    2008-10-16

    Understanding the surface O₃ response over a 'receptor' region to emission changes over a foreign 'source' region is key to evaluating the potential gains from an international approach to abate ozone (O₃) pollution. We apply an ensemble of 21 global and hemispheric chemical transport models to estimate the spatial average surface O₃ response over East Asia (EA), Europe (EU), North America (NA) and South Asia (SA) to 20% decreases in anthropogenic emissions of the O₃ precursors NOₓ, NMVOC, and CO (individually and combined) from each of these regions. We find that the ensemble mean surface O₃ concentrations in the base case (year 2001) simulation match available observations throughout the year over EU but overestimate them by >10 ppb during summer and early fall over the eastern U.S. and Japan. The sum of the O₃ responses to NOₓ, CO, and NMVOC decreases separately is approximately equal to that from a simultaneous reduction of all precursors. We define a continental-scale 'import sensitivity' as the ratio of the O₃ response to the 20% reductions in foreign versus 'domestic' (i.e., over the source region itself) emissions. For example, the combined reduction of emissions from the 3 foreign regions produces an ensemble spatial mean decrease of 0.6 ppb over EU (0.4 ppb from NA), less than the 0.8 ppb from the reduction of EU emissions, leading to an import sensitivity ratio of 0.7. The ensemble mean surface O₃ response to foreign emissions is largest in spring and late fall (0.7-0.9 ppb decrease in all regions from the combined precursor reductions in the 3 foreign regions), with import sensitivities ranging from 0.5 to 1.1 (responses to domestic emission reductions are 0.8-1.6 ppb). High O₃ values are much more sensitive to domestic emissions than to foreign emissions, as indicated by lower import sensitivities of 0.2 to 0.3 during July in EA, EU, and NA
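The import sensitivity defined in the abstract is a simple ratio; using the EU numbers quoted above (a 0.6 ppb response to the combined foreign cuts versus 0.8 ppb for the domestic cut, which the authors report as a ratio of 0.7):

```python
# Ensemble spatial-mean surface O3 responses over the EU receptor, in ppb,
# as quoted in the abstract (20% emission reductions in each case).
response_foreign = 0.6    # combined cuts in the 3 foreign source regions
response_domestic = 0.8   # cut in the EU's own emissions

# Import sensitivity: foreign response relative to domestic response.
import_sensitivity = response_foreign / response_domestic  # 0.75, ~0.7 rounded
```

A ratio near 1 means imported pollution matters almost as much as domestic emissions; the low July values (0.2-0.3) show why peak-season O₃ control remains primarily a domestic problem.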

  1. A multi-model assessment of terrestrial biosphere model data needs

    Science.gov (United States)

    Gardella, A.; Cowdery, E.; De Kauwe, M. G.; Desai, A. R.; Duveneck, M.; Fer, I.; Fisher, R.; Knox, R. G.; Kooper, R.; LeBauer, D.; McCabe, T.; Minunno, F.; Raiho, A.; Serbin, S.; Shiklomanov, A. N.; Thomas, A.; Walker, A.; Dietze, M.

    2017-12-01

    Terrestrial biosphere models provide us with the means to simulate the impacts of climate change and their uncertainties. Going beyond direct observation and experimentation, models synthesize our current understanding of ecosystem processes and can give us insight into the data needed to constrain model parameters. In previous work, we leveraged the Predictive Ecosystem Analyzer (PEcAn) to assess the contribution of different parameters to the uncertainty of Ecosystem Demography model v2 (ED) outputs across various North American biomes (Dietze et al., JGR-G, 2014). While this analysis identified key research priorities, the extent to which these priorities were model- and/or biome-specific was unclear. Furthermore, because the analysis only studied one model, we were unable to comment on the effect of variability in model structure on overall predictive uncertainty. Here, we expand this analysis to all biomes globally and a wide sample of models that vary in complexity: BioCro, CABLE, CLM, DALEC, ED2, FATES, G'DAY, JULES, LANDIS, LINKAGES, LPJ-GUESS, MAESPA, PRELES, SDGVM, SIPNET, and TEM. Prior to performing uncertainty analyses, model parameter uncertainties were assessed by assimilating all available trait data from the combination of the BETYdb and TRY trait databases, using an updated multivariate version of PEcAn's Hierarchical Bayesian meta-analysis. Next, sensitivity analyses were performed for all models across a range of sites globally to assess sensitivities for a range of different outputs (GPP, ET, SH, Ra, NPP, Rh, NEE, LAI) at multiple time scales from the sub-annual to the decadal. Finally, parameter uncertainties and model sensitivities were combined to evaluate the fractional contribution of each parameter to the predictive uncertainty for a specific variable at a specific site and timescale. Facilitated by PEcAn's automated workflows, this analysis represents the broadest assessment of the sensitivities and uncertainties in terrestrial
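The final step described above, combining parameter uncertainties with model sensitivities into fractional contributions to predictive uncertainty, can be sketched with a toy model (the trait names, values, and the linear "model" are placeholders for illustration, not PEcAn code):

```python
# Toy stand-in for a terrestrial biosphere model: annual GPP as a function
# of three uncertain traits (all names and values hypothetical).
def gpp(slai, vcmax, q_eff):
    return 0.4 * slai + 0.02 * vcmax + 30.0 * q_eff

base = {"slai": 4.0, "vcmax": 60.0, "q_eff": 0.06}       # trait means
param_sd = {"slai": 1.0, "vcmax": 10.0, "q_eff": 0.01}   # trait uncertainties

contrib = {}
for name, sd in param_sd.items():
    hi = dict(base); hi[name] += sd
    lo = dict(base); lo[name] -= sd
    sens = (gpp(**hi) - gpp(**lo)) / (2.0 * sd)   # finite-difference sensitivity
    contrib[name] = (sens * sd) ** 2              # first-order variance contribution

total = sum(contrib.values())
frac = {k: v / total for k, v in contrib.items()}  # fractional contributions
```

Ranking parameters by `frac` is what turns a sensitivity analysis into research priorities: the parameter with the largest fraction is the one where better trait data would most reduce predictive uncertainty.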

  2. Optical Alignment of the Chromospheric Lyman-Alpha SpectroPolarimeter using Sophisticated Methods to Minimize Activities under Vacuum

    Science.gov (United States)

    Giono, G.; Katsukawa, Y.; Ishikawa, R.; Narukage, N.; Kano, R.; Kubo, M.; Ishikawa, S.; Bando, T.; Hara, H.; Suematsu, Y.

    2016-01-01

    The Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP) is a sounding-rocket instrument developed at the National Astronomical Observatory of Japan (NAOJ) as part of an international collaboration. The instrument's main scientific goal is to achieve polarization measurement of the Lyman-alpha line at 121.56 nm, emitted from the solar upper chromosphere and transition region, with an unprecedented 0.1% accuracy. For this purpose, the optics are composed of a Cassegrain telescope coated with a "cold mirror" coating optimized for UV reflection and a dual-channel spectrograph allowing simultaneous observation of the two orthogonal states of polarization. Although polarization sensitivity is the most important aspect of the instrument, the spatial and spectral resolutions are also crucial to observe chromospheric features and resolve the Lyman-alpha profiles. A precise alignment of the optics is required to ensure these resolutions, but experiments under vacuum conditions are needed since Lyman-alpha is absorbed by air, making alignment experiments difficult. To bypass this issue, we developed methods to align the telescope and the spectrograph separately in visible light. We explain these methods and present the results of the optical alignment of the CLASP telescope and spectrograph. We then discuss the combined performance of both parts to derive the expected resolutions of the instrument, and compare them with the flight observations performed on September 3rd, 2015.

  3. Application of wavelet-based multi-model Kalman filters to real-time flood forecasting

    Science.gov (United States)

    Chou, Chien-Ming; Wang, Ru-Yih

    2004-04-01

    This paper presents the application of a multimodel method using a wavelet-based Kalman filter (WKF) bank to simultaneously estimate decomposed state variables and unknown parameters for real-time flood forecasting. Applying the Haar wavelet transform alters the state vector and input vector of the state space. In this way, an overall detail plus approximation describes each new state vector and input vector, which allows the WKF to simultaneously estimate and decompose state variables. The wavelet-based multimodel Kalman filter (WMKF) is a multimodel Kalman filter (MKF) in which a WKF is substituted for each Kalman filter. The WMKF then obtains M estimated state vectors. Next, the M state estimates, each weighted by a probability that is determined on-line, are combined to form an optimal estimate. Validations conducted for the Wu-Tu watershed, a small watershed in Taiwan, have demonstrated that the method is effective because of the decomposition of the wavelet transform, the adaptation of the time-varying Kalman filter and the characteristics of the multimodel method. Validation results also reveal that the resulting method enhances the accuracy of the runoff prediction of the rainfall-runoff process in the Wu-Tu watershed.
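    The on-line weighting of the M state estimates follows the standard multiple-model filtering recipe; a minimal sketch, assuming Gaussian innovation likelihoods (a generic illustration, not the paper's exact wavelet-based formulation):

```python
import numpy as np

def combine_estimates(estimates, innovations, innovation_vars, prior_weights):
    """Combine M filter estimates into one optimal estimate: each filter is
    weighted by the Gaussian likelihood of its latest innovation (residual),
    updated recursively from the prior weights."""
    innovations = np.asarray(innovations, float)
    innovation_vars = np.asarray(innovation_vars, float)
    lik = (np.exp(-0.5 * innovations ** 2 / innovation_vars)
           / np.sqrt(2.0 * np.pi * innovation_vars))
    w = np.asarray(prior_weights, float) * lik
    w /= w.sum()                                  # posterior model weights
    return w @ np.asarray(estimates, float), w    # weighted combination
```

    A filter whose recent innovations are small relative to its predicted variance gains weight, so the bank adapts on-line as flood conditions change.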

  4. EU-Korea FTA and Its Impact on V4 Economies. A Comparative Analysis of Trade Sophistication and Intra-Industry Trade

    Directory of Open Access Journals (Sweden)

    Michalski Bartosz

    2018-03-01

    This paper investigates selected short- and mid-term effects in trade in goods between the Visegrad countries (V4: the Czech Republic, Hungary, Poland and the Slovak Republic) and the Republic of Korea under the framework of the Free Trade Agreement between the European Union and the Republic of Korea. This Agreement is described in the “Trade for All” strategy (2015: 9) as the most ambitious trade deal ever implemented by the EU. The primary purpose of our analysis is to identify, compare, and evaluate the evolution of the technological sophistication of bilateral exports and imports. Another dimension of the paper concentrates on the developments within intra-industry trade. Moreover, these objectives are approached taking into account the context of the South Korean direct investment inflow to the V4. The evaluation of technological sophistication is based on UNCTAD’s methodology, while the intensity of intra-industry trade is measured by the GL-index and identification of its subcategories (horizontal and vertical trade). The analysis covers the timespan 2001–2015. The novelty of the paper lies in the fact that the study of South Korean-V4 trade relations has not so far been carried out from this perspective. Thus, this paper investigates interesting phenomena identified in the trade between the Republic of Korea (ROK) and the V4 economies. The main findings imply an impact of South Korean direct investments on trade. This is represented by the trade deficit of the V4 with the ROK and the structure of bilateral trade in terms of its technological sophistication. South Korean investments might also have had positive consequences for the evolution of IIT, particularly in the machinery sector. The political interpretation indicates that they may strengthen common threats associated with the middle-income trap, particularly the technological gap and the emphasis placed on lower costs of production.

  5. Localized Multi-Model Extremes Metrics for the Fourth National Climate Assessment

    Science.gov (United States)

    Thompson, T. R.; Kunkel, K.; Stevens, L. E.; Easterling, D. R.; Biard, J.; Sun, L.

    2017-12-01

    We have performed localized analysis of scenario-based datasets for the Fourth National Climate Assessment (NCA4). These datasets include CMIP5-based Localized Constructed Analogs (LOCA) downscaled simulations at daily temporal resolution and 1/16th-degree spatial resolution. Over 45 temperature and precipitation extremes metrics have been processed using LOCA data, including threshold, percentile, and degree-days calculations. The localized analysis calculates trends in the temperature and precipitation extremes metrics for relatively small regions such as counties, metropolitan areas, climate zones, administrative areas, or economic zones. For NCA4, we are currently addressing metropolitan areas as defined by U.S. Census Bureau Metropolitan Statistical Areas. Such localized analysis provides essential information for adaptation planning at scales relevant to local planning agencies and businesses. Nearly 30 such regions have been analyzed to date. Each locale is defined by a closed polygon that is used to extract LOCA-based extremes metrics specific to the area. For each metric, single-model data at each LOCA grid location are first averaged over several 30-year historical and future periods. Then, for each metric, the spatial average across the region is calculated using model weights based on both model independence and reproducibility of current climate conditions. The range of single-model results is also captured on the same localized basis, and then combined with the weighted ensemble average for each region and each metric. For example, Boston-area cooling degree days and maximum daily temperature are shown below for the RCP8.5 (red) and RCP4.5 (blue) scenarios. We also discuss inter-regional comparison of these metrics, as well as their relevance to risk analysis for adaptation planning.

  6. Impacts of Climate Change on Surface Ozone and Intercontinental Ozone Pollution: A Multi-Model Study

    Science.gov (United States)

    Doherty, R. M.; Wild, O.; Shindell, D. T.; Zeng, G.; MacKenzie, I. A.; Collins, W. J.; Fiore, A. M.; Stevenson, D. S.; Dentener, F. J.; Schultz, M. G.; hide

    2013-01-01

    The impact of climate change between the 2000 and 2095 SRES A2 climates on surface ozone (O3) and on O3 source-receptor (S-R) relationships is quantified using three coupled climate-chemistry models (CCMs). The CCMs exhibit considerable variability in the spatial extent and location of surface O3 increases that occur within parts of high-NOx emission source regions (up to 6 ppbv in the annual average and up to 14 ppbv in the season of maximum O3). In these source regions, all three CCMs show a positive relationship between surface O3 change and temperature change. Sensitivity simulations show that a combination of three individual chemical processes, (i) enhanced PAN decomposition, (ii) higher water vapor concentrations, and (iii) enhanced isoprene emission, largely reproduces the global spatial pattern of annual-mean surface O3 response due to climate change (R2 = 0.52). Changes in climate are found to exert a stronger control on the annual-mean surface O3 response through changes in climate-sensitive O3 chemistry than through changes in transport as evaluated from idealized CO-like tracer concentrations. All three CCMs exhibit a similar spatial pattern of annual-mean surface O3 change to 20% regional O3 precursor emission reductions under future climate compared to the same emission reductions applied under present-day climate. The surface O3 response to emission reductions is larger over the source region and smaller downwind in the future than under present-day conditions. All three CCMs show areas within Europe where regional emission reductions larger than 20% are required to compensate for climate change impacts on annual-mean surface O3.

  7. Pathways to Mexico’s climate change mitigation targets: A multi-model analysis

    International Nuclear Information System (INIS)

    Veysey, Jason; Octaviano, Claudia; Calvin, Katherine; Martinez, Sara Herreras; Kitous, Alban; McFarland, James; Zwaan, Bob van der

    2016-01-01

    Mexico’s climate policy sets ambitious national greenhouse gas (GHG) emission reduction targets—30% versus a business-as-usual baseline by 2020, 50% versus 2000 by 2050. However, these goals are at odds with recent energy and emission trends in the country. Both energy use and GHG emissions in Mexico have grown substantially over the last two decades. We investigate how Mexico might reverse current trends and reach its mitigation targets by exploring results from energy system and economic models involved in the CLIMACAP-LAMP project. To meet Mexico’s emission reduction targets, all modeling groups agree that decarbonization of electricity is needed, along with changes in the transport sector, either to more efficient vehicles or a combination of more efficient vehicles and lower carbon fuels. These measures reduce GHG emissions as well as emissions of other air pollutants. The models find different energy supply pathways, with some solutions based on renewable energy and others relying on biomass or fossil fuels with carbon capture and storage. The economy-wide costs of deep mitigation could range from 2% to 4% of GDP in 2030, and from 7% to 15% of GDP in 2050. Our results suggest that Mexico has some flexibility in designing deep mitigation strategies, and that technological options could allow Mexico to achieve its emission reduction targets, albeit at a cost to the country. - Highlights: • We explore paths to deep mitigation for Mexico (50% cut in GHG emissions by 2050). • We present results from six models and compare them with Mexican climate policy. • We find a range of potential paths and costs, implying options for policy makers. • An important commonality between the paths is a decarbonized electricity supply. • Estimated mitigation costs vary but are higher than official published estimates.

  8. Upper Blue Nile basin water budget from a multi-model perspective

    Science.gov (United States)

    Jung, Hahn Chul; Getirana, Augusto; Policelli, Frederick; McNally, Amy; Arsenault, Kristi R.; Kumar, Sujay; Tadesse, Tsegaye; Peters-Lidard, Christa D.

    2017-12-01

    Improved understanding of the water balance in the Blue Nile is of critical importance because of increasingly frequent hydroclimatic extremes under a changing climate. The intercomparison and evaluation of multiple land surface models (LSMs) associated with different meteorological forcing and precipitation datasets can offer a moderate range of water budget variable estimates. In this context, two LSMs, Noah version 3.3 (Noah3.3) and Catchment LSM version Fortuna 2.5 (CLSMF2.5) coupled with the Hydrological Modeling and Analysis Platform (HyMAP) river routing scheme are used to produce hydrological estimates over the region. The two LSMs were forced with different combinations of two reanalysis-based meteorological datasets from the Modern-Era Retrospective analysis for Research and Applications datasets (i.e., MERRA-Land and MERRA-2) and three observation-based precipitation datasets, generating a total of 16 experiments. Modeled evapotranspiration (ET), streamflow, and terrestrial water storage estimates were evaluated against the Atmosphere-Land Exchange Inverse (ALEXI) ET, in-situ streamflow observations, and NASA Gravity Recovery and Climate Experiment (GRACE) products, respectively. Results show that CLSMF2.5 provided better representation of the water budget variables than Noah3.3 in terms of Nash-Sutcliffe coefficient when considering all meteorological forcing datasets and precipitation datasets. The model experiments forced with observation-based products, the Climate Hazards group Infrared Precipitation with Stations (CHIRPS) and the Tropical Rainfall Measuring Mission (TRMM) Multi-Satellite Precipitation Analysis (TMPA), outperform those run with MERRA-Land and MERRA-2 precipitation. The results presented in this paper would suggest that the Famine Early Warning Systems Network (FEWS NET) Land Data Assimilation System incorporate CLSMF2.5 and HyMAP routing scheme to better represent the water balance in this region.
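    The Nash-Sutcliffe coefficient used above to rank the model experiments is straightforward to reproduce; a minimal sketch:

```python
import numpy as np

def nash_sutcliffe(sim, obs):
    """Nash-Sutcliffe efficiency: 1 = perfect match, 0 = no better than
    predicting the observed mean, negative = worse than the mean."""
    sim = np.asarray(sim, float)
    obs = np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
```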

  9. Rapid, Efficient and Versatile Strategies for Functionally Sophisticated Polymers and Nanoparticles: Degradable Polyphosphoesters and Anisotropic Distribution of Chemical Functionalities

    Science.gov (United States)

    Zhang, Shiyi

    conjugate by densely attaching the polyphosphoester block with azide-functionalized Paclitaxel by azide-alkyne Huisgen cycloaddition. This Paclitaxel drug conjugate provides a powerful platform for combinational cancer therapy and bioimaging due to its ultra-high Paclitaxel loading (>65 wt%), high water solubility (>6.2 mg/mL for PTX) and easy functionalization. Another polyphosphoester-based nanoparticle system has been developed by a programmable process for the rapid and facile preparation of a family of nanoparticles with different surface charges and functionalities. The non-ionic, anionic, cationic and zwitterionic nanoparticles with hydrodynamic diameters between 13 nm and 21 nm and great size uniformity could be rapidly prepared from small molecules in 6 h or 2 days. The anionic and zwitterionic nanoparticles were designed to load silver ions to treat pulmonary infections, while the cationic nanoparticles are being applied to regulate lung injuries by serving as degradable iNOS inhibitor conjugates. In addition, a direct synthesis of acid-labile polyphosphoramidate by organobase-catalyzed ring-opening polymerization and an improved two-step preparation of polyphosphoester ionomer by acid-assisted cleavage of phosphoramidate bonds on polyphosphoramidate were developed. Polyphosphoramidates and polyphosphoester ionomers may find use in many applications, due to their unique chemical and physical properties.

  10. Factors Affecting Infestation by Triatoma infestans in a Rural Area of the Humid Chaco in Argentina: A Multi-Model Inference Approach

    Science.gov (United States)

    Gurevitz, Juan M.; Ceballos, Leonardo A.; Gaspe, María Sol; Alvarado-Otegui, Julián A.; Enríquez, Gustavo F.; Kitron, Uriel; Gürtler, Ricardo E.

    2011-01-01

    Background: Transmission of Trypanosoma cruzi by Triatoma infestans remains a major public health problem in the Gran Chaco ecoregion, where understanding of the determinants of house infestation is limited. We conducted a cross-sectional study to model factors affecting bug presence and abundance at sites within house compounds in a well-defined rural area in the humid Argentine Chaco. Methodology/Principal Findings: Triatoma infestans bugs were found in 45.9% of 327 inhabited house compounds but only in 7.4% of the 2,584 sites inspected systematically on these compounds, even though the last insecticide spraying campaign was conducted 12 years before. Infested sites were significantly aggregated at distances of 0.8–2.5 km. The most frequently infested ecotopes were domiciles, kitchens, storerooms, chicken coops and nests; corrals were rarely infested. Domiciles with mud walls and roofs of thatch or corrugated tarred cardboard were more often infested (32.2%) than domiciles with brick-and-cement walls and corrugated metal-sheet roofs (15.1%). A multi-model inference approach using Akaike's information criterion was applied to assess the relative importance of each variable by running all possible (17,406) models resulting from all combinations of variables. Availability of refuges for bugs, construction with tarred cardboard, and host abundance (humans, dogs, cats, and poultry) per site were positively associated with infestation and abundance, whereas reported insecticide use showed a negative association. Ethnic background (Creole or Toba) adjusted for other factors showed little or no association. Conclusions/Significance: Promotion and effective implementation of housing improvement (including key peridomestic structures) combined with appropriate insecticide use and host management practices are needed to eliminate infestations. Fewer refuges are likely to result in fewer residual foci after insecticide spraying, and will facilitate community-based vector
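    The model-averaging arithmetic behind this multi-model inference approach is standard; a sketch of Akaike weights (the generic formula, not the authors' code; the relative importance of a variable is then the summed weight of all candidate models that contain it):

```python
import numpy as np

def akaike_weights(aic_values):
    """Akaike weights: exp(-delta_i / 2) normalized over the model set,
    where delta_i is each model's AIC minus the minimum AIC.
    Lower AIC -> higher relative support."""
    aic = np.asarray(aic_values, float)
    delta = aic - aic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()
```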

  11. Climate change effects on wildland fire risk in the Northeastern and Great Lakes states predicted by a downscaled multi-model ensemble

    NARCIS (Netherlands)

    Kerr, Gaige Hunter; DeGaetano, Arthur T.; Stoof, Cathelijne R.; Ward, Daniel

    2018-01-01

    This study is among the first to investigate wildland fire risk in the Northeastern and the Great Lakes states under a changing climate. We use a multi-model ensemble (MME) of regional climate models from the Coordinated Regional Downscaling Experiment (CORDEX) together with the Canadian Forest

  12. The North American Multi-Model Ensemble (NMME): Phase-1 Seasonal to Interannual Prediction, Phase-2 Toward Developing Intra-Seasonal Prediction

    Science.gov (United States)

    Kirtman, Ben P.; Min, Dughong; Infanti, Johnna M.; Kinter, James L., III; Paolino, Daniel A.; Zhang, Qin; vandenDool, Huug; Saha, Suranjana; Mendez, Malaquias Pena; Becker, Emily; hide

    2013-01-01

    The recent US National Academies report "Assessment of Intraseasonal to Interannual Climate Prediction and Predictability" was unequivocal in recommending the need for the development of a North American Multi-Model Ensemble (NMME) operational predictive capability. Indeed, this effort is required to meet the specific tailored regional prediction and decision support needs of a large community of climate information users. The multi-model ensemble approach has proven extremely effective at quantifying prediction uncertainty due to uncertainty in model formulation, and has proven to produce better prediction quality (on average) than any single model ensemble. This multi-model approach is the basis for several international collaborative prediction research efforts and an operational European system, and there are numerous examples of how this multi-model ensemble approach yields superior forecasts compared to any single model. Based on two NOAA Climate Test Bed (CTB) NMME workshops (February 18 and April 8, 2011) a collaborative and coordinated implementation strategy for a NMME prediction system has been developed and is currently delivering real-time seasonal-to-interannual predictions on the NOAA Climate Prediction Center (CPC) operational schedule. The hindcast and real-time prediction data are readily available (e.g., http://iridl.ldeo.columbia.edu/SOURCES/.Models/.NMME/) and in graphical format from CPC (http://origin.cpc.ncep.noaa.gov/products/people/wd51yf/NMME/index.html). Moreover, the NMME forecasts are already being used as guidance for operational forecasters. This paper describes the new NMME effort, presents an overview of the multi-model forecast quality, and the complementary skill associated with individual models.

  13. Multi-model ensemble estimation of volume transport through the straits of the East/Japan Sea

    Science.gov (United States)

    Han, Sooyeon; Hirose, Naoki; Usui, Norihisa; Miyazawa, Yasumasa

    2016-01-01

    The volume transports measured at the Korea/Tsushima, Tsugaru, and Soya/La Perouse Straits remain quantitatively inconsistent. However, data assimilation models at least provide a self-consistent budget despite subtle differences among the models. This study examined the seasonal variation of the volume transport using the multiple linear regression and ridge regression of multi-model ensemble (MME) methods to more accurately estimate the transport at these straits by using four different data assimilation models. The MME outperformed all of the single models by reducing uncertainties, especially the multicollinearity problem with the ridge regression. However, the regression constants turned out to be inconsistent with each other if the MME was applied separately for each strait. The MME for a connected system was thus performed to find common constants for these straits. The estimation of this MME was found to be similar to the MME result of sea level difference (SLD). The estimated mean transport (2.43 Sv) was smaller than the measurement data at the Korea/Tsushima Strait, but the calibrated transport of the Tsugaru Strait (1.63 Sv) was larger than the observed data. The MME results of transport and SLD also suggested that the standard deviation (STD) of the Korea/Tsushima Strait is larger than the STD of the observation, whereas the estimated results were almost identical to that observed for the Tsugaru and Soya/La Perouse Straits. The similarity between MME results enhances the reliability of the present MME estimation.
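    The ridge-regression combination of model outputs described here can be sketched as follows (a generic illustration; the penalty value and the data layout are assumptions of this sketch, not the authors' settings):

```python
import numpy as np

def ridge_mme_weights(X, y, alpha=1.0):
    """Ridge multi-model ensemble: columns of X hold the individual model
    estimates, y the reference series. Solves (X'X + alpha*I) b = X'y;
    the penalty tames multicollinearity among highly correlated models."""
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(k), X.T @ y)
```

    The combined estimate is then `X_new @ b`. With `alpha = 0` this reduces to ordinary least squares; increasing `alpha` shrinks the weights, which stabilizes them when the model outputs are nearly collinear, as the abstract notes.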

  14. Model predictive controller-based multi-model control system for longitudinal stability of distributed drive electric vehicle.

    Science.gov (United States)

    Shi, Ke; Yuan, Xiaofang; Liu, Liang

    2018-01-01

    Distributed drive electric vehicle (DDEV) has been widely researched recently; its longitudinal stability is a very important research topic. Conventional wheel slip ratio control strategies are usually designed for one special operating mode, and the optimal performance cannot be obtained as the DDEV works under various operating modes. In this paper, a novel model predictive controller-based multi-model control system (MPC-MMCS) is proposed to solve the longitudinal stability problem of the DDEV. Firstly, the operation state of the DDEV is summarized as three kinds of typical operating modes. A submodel set is established to accurately represent the state value of the corresponding operating mode. Secondly, the matching degree between the state of the actual DDEV and each submodel is analyzed. The matching degree is expressed as a weight coefficient and calculated by a modified recursive Bayes theorem. Thirdly, a nonlinear MPC is designed to achieve the optimal wheel slip ratio for each submodel. The optimal design of the MPC is realized by a parallel chaos optimization algorithm (PCOA) with computational accuracy and efficiency. Finally, the control output of the MPC-MMCS is computed by the weighted output of each MPC to achieve smooth switching between operating modes. The proposed MPC-MMCS is evaluated on an eight-degrees-of-freedom (8DOF) DDEV model simulation platform, and simulation results under different conditions show the benefits of the proposed control system. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
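    The matching-degree weighting and the blended control output can be sketched with a generic recursive Bayes update (illustrative only; the paper's modified formula and the `sigma`/`floor` values here are assumptions of this sketch):

```python
import numpy as np

def update_weights(w_prev, pred_errors, sigma=1.0, floor=1e-3):
    """Posterior submodel weights from one-step prediction errors:
    smaller error -> larger Gaussian likelihood -> larger weight.
    The floor keeps currently mismatched submodels recoverable."""
    lik = np.exp(-0.5 * (np.asarray(pred_errors, float) / sigma) ** 2)
    w = np.maximum(np.asarray(w_prev, float) * lik, floor)
    return w / w.sum()

def blended_control(weights, controller_outputs):
    """Smooth mode switching: weighted sum of each submodel MPC's output."""
    return float(np.dot(weights, controller_outputs))
```

    Because the weights vary continuously with the prediction errors, the control output transitions smoothly as the vehicle moves between operating modes instead of switching abruptly.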

  15. SNAVA-A real-time multi-FPGA multi-model spiking neural network simulation architecture.

    Science.gov (United States)

    Sripad, Athul; Sanchez, Giovanny; Zapata, Mireya; Pirrone, Vito; Dorta, Taho; Cambria, Salvatore; Marti, Albert; Krishnamourthy, Karthikeyan; Madrenas, Jordi

    2018-01-01

    Spiking Neural Networks (SNN) for Versatile Applications (SNAVA) simulation platform is a scalable and programmable parallel architecture that supports real-time, large-scale, multi-model SNN computation. This parallel architecture is implemented in modern Field-Programmable Gate Array (FPGA) devices to provide high-performance execution and the flexibility to support large-scale SNN models. Flexibility is defined in terms of programmability, which allows easy synapse and neuron implementation. This has been achieved by using special-purpose Processing Elements (PEs) for computing SNNs, and by analyzing and customizing the instruction set according to the processing needs to achieve maximum performance with minimum resources. The parallel architecture is interfaced with customized Graphical User Interfaces (GUIs) to configure the SNN's connectivity, to compile the neuron-synapse model and to monitor the SNN's activity. Our contribution intends to provide a tool that allows SNNs to be prototyped faster than on CPU/GPU architectures but significantly more cheaply than fabricating a customized neuromorphic chip. This could be potentially valuable to the computational neuroscience and neuromorphic engineering communities. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. A novel multi-model neuro-fuzzy-based MPPT for three-phase grid-connected photovoltaic system

    Energy Technology Data Exchange (ETDEWEB)

    Chaouachi, Aymen; Kamel, Rashad M.; Nagasaka, Ken [Department of Electronic and Information Engineering, Tokyo University of Agriculture and Technology, Nakamachi (Japan)

    2010-12-15

    This paper presents a novel methodology for Maximum Power Point Tracking (MPPT) of a grid-connected 20 kW photovoltaic (PV) system using a neuro-fuzzy network. The proposed method predicts the reference PV voltage guaranteeing optimal power transfer between the PV generator and the main utility grid. The neuro-fuzzy network is composed of a fuzzy rule-based classifier and three multi-layered feed-forward Artificial Neural Networks (ANNs). Inputs of the network (irradiance and temperature) are classified before they are fed into the appropriate ANN for either the training or the estimation process, while the output is the reference voltage. The main advantage of the proposed methodology, compared to a conventional single neural network-based approach, is the distinct generalization ability with respect to the nonlinear and dynamic behavior of a PV generator. In fact, the neuro-fuzzy network is a neural network-based multi-model machine learning approach that defines a set of local models emulating the complex and nonlinear behavior of a PV generator under a wide range of operating conditions. Simulation results under several rapid irradiance variations proved that the proposed MPPT method achieved the highest efficiency compared to a conventional single neural network and the Perturb and Observe (P and O) algorithm. (author)

  17. Predictability of Precipitation Over the Conterminous U.S. Based on the CMIP5 Multi-Model Ensemble

    Science.gov (United States)

    Jiang, Mingkai; Felzer, Benjamin S.; Sahagian, Dork

    2016-01-01

    Characterizing precipitation seasonality and variability in the face of future uncertainty is important for a well-informed climate change adaptation strategy. Using the Colwell index of predictability and monthly normalized precipitation data from the Coupled Model Intercomparison Project Phase 5 (CMIP5) multi-model ensembles, this study identifies spatial hotspots of changes in precipitation predictability in the United States under various climate scenarios. Over the historic period (1950–2005), the recurrent pattern of precipitation is highly predictable in the East and along the coastal Northwest, and is less so in the arid Southwest. Comparing the future (2040–2095) to the historic period, larger changes in precipitation predictability are observed under Representative Concentration Pathways (RCP) 8.5 than those under RCP 4.5. Finally, there are region-specific hotspots of future changes in precipitation predictability, and these hotspots often coincide with regions of little projected change in total precipitation, with exceptions along the wetter East and parts of the drier central West. Therefore, decision-makers are advised to not rely on future total precipitation as an indicator of water resources. Changes in precipitation predictability and the subsequent changes on seasonality and variability are equally, if not more, important factors to be included in future regional environmental assessment. PMID:27425819
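    Colwell's predictability is computed from a contingency table of discretized precipitation states by month; below is a sketch following the entropy definitions in Colwell (1974) (the choice of state discretization is an assumption of any such illustration, not specified by this abstract):

```python
import numpy as np

def colwell_predictability(counts):
    """Colwell's predictability P for a states-by-months count matrix N:
    P = 1 - (H_XY - H_X) / log(s), with s the number of states, H_X the
    entropy of the monthly totals and H_XY the entropy of the full table.
    P = 1: each month always falls in the same state; P = 0: no pattern."""
    N = np.asarray(counts, float)
    Z = N.sum()
    def entropy(totals):
        p = totals[totals > 0] / Z
        return -np.sum(p * np.log(p))
    H_X = entropy(N.sum(axis=0))   # entropy over months
    H_XY = entropy(N.ravel())      # entropy over the whole table
    s = N.shape[0]                 # number of precipitation states
    return 1.0 - (H_XY - H_X) / np.log(s)
```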

  18. Multi-model assessment of the impact of soil moisture initialization on mid-latitude summer predictability

    Science.gov (United States)

    Ardilouze, Constantin; Batté, L.; Bunzel, F.; Decremer, D.; Déqué, M.; Doblas-Reyes, F. J.; Douville, H.; Fereday, D.; Guemas, V.; MacLachlan, C.; Müller, W.; Prodhomme, C.

    2017-12-01

    Land surface initial conditions have been recognized as a potential source of predictability in sub-seasonal to seasonal forecast systems, at least for near-surface air temperature prediction over the mid-latitude continents. Yet, few studies have systematically explored such an influence over a sufficient hindcast period and in a multi-model framework to produce a robust quantitative assessment. Here, a dedicated set of twin experiments has been carried out with boreal summer retrospective forecasts over the 1992-2010 period performed by five different global coupled ocean-atmosphere models. The impact of a realistic versus climatological soil moisture initialization is assessed in two regions with high potential previously identified as hotspots of land-atmosphere coupling, namely the North American Great Plains and South-Eastern Europe. Over the latter region, temperature predictions show a significant improvement, especially over the Balkans. Forecast systems better simulate the warmest summers if they follow pronounced dry initial anomalies. It is hypothesized that models manage to capture a positive feedback between high temperature and low soil moisture content prone to dominate over other processes during the warmest summers in this region. Over the Great Plains, however, improving the soil moisture initialization does not lead to any robust gain of forecast quality for near-surface temperature. It is suggested that model biases prevent the forecast systems from making the most of the improved initial conditions.

  19. Robust driver heartbeat estimation: A q-Hurst exponent based automatic sensor change with interactive multi-model EKF.

    Science.gov (United States)

    Vrazic, Sacha

    2015-08-01

    Preventing car accidents by monitoring the driver's physiological parameters is of high importance. However, existing measurement methods are not robust to driver's body movements. In this paper, a system that estimates the heartbeat from the seat embedded piezoelectric sensors, and that is robust to strong body movements is presented. Multifractal q-Hurst exponents are used within a classifier to predict the most probable best sensor signal to be used in an Interactive Multi-Model Extended Kalman Filter pulsation estimation procedure. The car vibration noise is reduced using an autoregressive exogenous model to predict the noise on sensors. The performance of the proposed system was evaluated on real driving data up to 100 km/h and with slaloms at high speed. It is shown that this method improves by 36.7% the pulsation estimation under strong body movement compared to static sensor pulsation estimation and appears to provide reliable pulsation variability information for top-level analysis of drowsiness or other conditions.
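    The interactive multi-model EKF cycles its filter bank through a Markov model-switching step; a generic sketch of that mixing stage (standard IMM algebra, not the author's tuned filter bank):

```python
import numpy as np

def imm_mix(means, model_probs, transition):
    """IMM mixing step. transition[i, j] = P(switch to model j | model i).
    Returns each filter's mixed initial state for the next cycle and the
    predicted model probabilities."""
    mu = np.asarray(model_probs, float)
    Pi = np.asarray(transition, float)
    c = Pi.T @ mu                       # predicted model probabilities
    mix = (Pi * mu[:, None]) / c        # mixing probabilities mu_{i|j}
    mixed = mix.T @ np.asarray(means, float)
    return mixed, c
```

    With an identity transition matrix the filters never exchange information; off-diagonal mass lets a filter tracking the wrong regime (e.g. during strong body movement) be re-seeded from the better-matched one.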

  20. Plato's patricide in the sophist

    Directory of Open Access Journals (Sweden)

    Deretić Irina J.

    2012-01-01

    In this paper, the author attempts to elucidate the validity of Plato's criticism of Parmenides' simplified monistic ontology, as well as his concept of non-being. In contrast to Parmenides, Plato introduces a more complex ontology of the megista gene and redefines Parmenides' concept of non-being as something absolutely different from being. According to Plato, not all things are in the same sense, i.e., they have different ontological statuses. Additionally, he redefines Parmenides' concept of absolute non-being as 'difference' or 'otherness'.

  1. Sophisticated fuel handling system evolved

    International Nuclear Information System (INIS)

    Ross, D.A.

    1988-01-01

    The control systems at Sellafield fuel handling plant are described. The requirements called for built-in diagnostic features as well as the ability to handle a large sequencing application. Speed was also important; responses better than 50ms were required. The control systems are used to automate operations within each of the three main process caves - two Magnox fuel decanners and an advanced gas-cooled reactor fuel dismantler. The fuel route within the fuel handling plant is illustrated and described. ASPIC (Automated Sequence Package for Industrial Control) which was developed as a controller for the plant processes is described. (U.K.)

  2. A framework for the cross-sectoral integration of multi-model impact projections: land use decisions under climate impacts uncertainties

    Science.gov (United States)

    Frieler, K.; Levermann, A.; Elliott, J.; Heinke, J.; Arneth, A.; Bierkens, M. F. P.; Ciais, P.; Clark, D. B.; Deryng, D.; Döll, P.; Falloon, P.; Fekete, B.; Folberth, C.; Friend, A. D.; Gellhorn, C.; Gosling, S. N.; Haddeland, I.; Khabarov, N.; Lomas, M.; Masaki, Y.; Nishina, K.; Neumann, K.; Oki, T.; Pavlick, R.; Ruane, A. C.; Schmid, E.; Schmitz, C.; Stacke, T.; Stehfest, E.; Tang, Q.; Wisser, D.; Huber, V.; Piontek, F.; Warszawski, L.; Schewe, J.; Lotze-Campen, H.; Schellnhuber, H. J.

    2015-07-01

    Climate change and its impacts already pose considerable challenges for societies that will further increase with global warming (IPCC, 2014a, b). Uncertainties of the climatic response to greenhouse gas emissions include the potential passing of large-scale tipping points (e.g. Lenton et al., 2008; Levermann et al., 2012; Schellnhuber, 2010) and changes in extreme meteorological events (Field et al., 2012) with complex impacts on societies (Hallegatte et al., 2013). Thus climate change mitigation is considered a necessary societal response for avoiding uncontrollable impacts (Conference of the Parties, 2010). On the other hand, large-scale climate change mitigation itself implies fundamental changes in, for example, the global energy system. The associated challenges come on top of others that derive from equally important ethical imperatives like the fulfilment of increasing food demand that may draw on the same resources. For example, ensuring food security for a growing population may require an expansion of cropland, thereby reducing natural carbon sinks or the area available for bio-energy production. So far, available studies addressing this problem have relied on individual impact models, ignoring uncertainty in crop model and biome model projections. Here, we propose a probabilistic decision framework that allows for an evaluation of agricultural management and mitigation options in a multi-impact-model setting. Based on simulations generated within the Inter-Sectoral Impact Model Intercomparison Project (ISI-MIP), we outline how cross-sectorally consistent multi-model impact simulations could be used to generate the information required for robust decision making. Using an illustrative future land use pattern, we discuss the trade-off between potential gains in crop production and associated losses in natural carbon sinks in the new multiple crop- and biome-model setting. In addition, crop and water model simulations are combined to explore irrigation

  3. A Framework for the Cross-Sectoral Integration of Multi-Model Impact Projections: Land Use Decisions Under Climate Impacts Uncertainties

    Science.gov (United States)

    Frieler, K.; Elliott, Joshua; Levermann, A.; Heinke, J.; Arneth, A.; Bierkens, M. F. P.; Ciais, P.; Clark, D. B.; Deryng, D.; Döll, P.; et al.

    2015-01-01

    Climate change and its impacts already pose considerable challenges for societies that will further increase with global warming (IPCC, 2014a, b). Uncertainties of the climatic response to greenhouse gas emissions include the potential passing of large-scale tipping points (e.g. Lenton et al., 2008; Levermann et al., 2012; Schellnhuber, 2010) and changes in extreme meteorological events (Field et al., 2012) with complex impacts on societies (Hallegatte et al., 2013). Thus climate change mitigation is considered a necessary societal response for avoiding uncontrollable impacts (Conference of the Parties, 2010). On the other hand, large-scale climate change mitigation itself implies fundamental changes in, for example, the global energy system. The associated challenges come on top of others that derive from equally important ethical imperatives like the fulfilment of increasing food demand that may draw on the same resources. For example, ensuring food security for a growing population may require an expansion of cropland, thereby reducing natural carbon sinks or the area available for bio-energy production. So far, available studies addressing this problem have relied on individual impact models, ignoring uncertainty in crop model and biome model projections. Here, we propose a probabilistic decision framework that allows for an evaluation of agricultural management and mitigation options in a multi-impact-model setting. Based on simulations generated within the Inter-Sectoral Impact Model Intercomparison Project (ISI-MIP), we outline how cross-sectorally consistent multi-model impact simulations could be used to generate the information required for robust decision making. Using an illustrative future land use pattern, we discuss the trade-off between potential gains in crop production and associated losses in natural carbon sinks in the new multiple crop- and biome-model setting. In addition, crop and water model simulations are combined to explore irrigation

  4. It's the parameters, stupid! Moving beyond multi-model and multi-physics approaches to characterize and reduce predictive uncertainty in process-based hydrological models

    Science.gov (United States)

    Clark, Martyn; Samaniego, Luis; Freer, Jim

    2014-05-01

    Multi-model and multi-physics approaches are a popular tool in environmental modelling, with many studies focusing on optimally combining output from multiple model simulations to reduce predictive errors and better characterize predictive uncertainty. However, a careful and systematic analysis of different hydrological models reveals that individual models are simply small permutations of a master modeling template, and inter-model differences are overwhelmed by uncertainty in the choice of the parameter values in the model equations. Furthermore, inter-model differences do not explicitly represent the uncertainty in modeling a given process, leading to many situations where different models provide the wrong results for the same reasons. In other cases, the available morphological data does not support the very fine spatial discretization of the landscape that typifies many modern applications of process-based models. To make the uncertainty characterization problem worse, the uncertain parameter values in process-based models are often fixed (hard-coded), and the models lack the agility necessary to represent the tremendous heterogeneity in natural systems. This presentation summarizes results from a systematic analysis of uncertainty in process-based hydrological models, where we explicitly analyze the myriad of subjective decisions made throughout both the model development and parameter estimation process. Results show that much of the uncertainty is aleatory in nature - given a "complete" representation of dominant hydrologic processes, uncertainty in process parameterizations can be represented using an ensemble of model parameters. Epistemic uncertainty associated with process interactions and scaling behavior is still important, and these uncertainties can be represented using an ensemble of different spatial configurations. Finally, uncertainty in forcing data can be represented using ensemble methods for spatial meteorological analysis. 
Our systematic

  5. A novel multi-model probability battery state of charge estimation approach for electric vehicles using H-infinity algorithm

    International Nuclear Information System (INIS)

    Lin, Cheng; Mu, Hao; Xiong, Rui; Shen, Weixiang

    2016-01-01

    Highlights: • A novel multi-model probability battery SOC fusion estimation approach was proposed. • The linear matrix inequality-based H∞ technique is employed to estimate the SOC. • The Bayes theorem has been employed to realize the optimal weight for the fusion. • The robustness of the proposed approach is verified by different batteries. • The results show that the proposed method can promote global estimation accuracy. - Abstract: Due to the strong nonlinearity and complex time-variant property of batteries, the existing state of charge (SOC) estimation approaches based on a single equivalent circuit model (ECM) cannot provide the accurate SOC for the entire discharging period. This paper aims to present a novel SOC estimation approach based on a multiple ECMs fusion method for improving the practical application performance. In the proposed approach, three battery ECMs, namely the Thevenin model, the double polarization model and the 3rd order RC model, are selected to describe the dynamic voltage of lithium-ion batteries and the genetic algorithm is then used to determine the model parameters. The linear matrix inequality-based H-infinity technique is employed to estimate the SOC from the three models and the Bayes theorem-based probability method is employed to determine the optimal weights for synthesizing the SOCs estimated from the three models. Two types of lithium-ion batteries are used to verify the feasibility and robustness of the proposed approach. The results indicate that the proposed approach can improve the accuracy and reliability of the SOC estimation against uncertain battery materials and inaccurate initial states.
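
    The Bayes-theorem weighting step can be illustrated with a minimal sketch. The paper fuses H-infinity filter outputs; here the per-model SOC estimates, the voltage residuals, and the noise scale sigma are made-up placeholders, and a simple Gaussian likelihood stands in for the paper's probability computation:

```python
import numpy as np

def bayes_weights(residuals, sigma):
    """Posterior model probabilities from Gaussian voltage-residual likelihoods."""
    ll = -0.5 * (np.asarray(residuals) / sigma) ** 2   # log-likelihood per model
    w = np.exp(ll - ll.max())                          # subtract max for stability
    return w / w.sum()

# Hypothetical outputs of the three ECM-based estimators at one time step.
soc_estimates = np.array([0.62, 0.58, 0.61])           # Thevenin, DP, 3rd-order RC
voltage_residuals = np.array([0.004, 0.020, 0.006])    # measured - predicted V

w = bayes_weights(voltage_residuals, sigma=0.01)
soc_fused = float(w @ soc_estimates)                   # probability-weighted SOC
```

    The model whose predicted terminal voltage best matches the measurement receives the largest weight, so the fused SOC leans toward the currently best-performing ECM.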

  6. Multi-model analysis of terrestrial carbon cycles in Japan: limitations and implications of model calibration using eddy flux observations

    Directory of Open Access Journals (Sweden)

    K. Ichii

    2010-07-01

    Terrestrial biosphere models show large differences when simulating carbon and water cycles, and reducing these differences is a priority for developing more accurate estimates of the condition of terrestrial ecosystems and future climate change. To reduce uncertainties and improve the understanding of their carbon budgets, we investigated the utility of the eddy flux datasets to improve model simulations and reduce variabilities among multi-model outputs of terrestrial biosphere models in Japan. Using 9 terrestrial biosphere models (Support Vector Machine - based regressions, TOPS, CASA, VISIT, Biome-BGC, DAYCENT, SEIB, LPJ, and TRIFFID), we conducted two simulations: (1) point simulations at four eddy flux sites in Japan and (2) spatial simulations for Japan with a default model (based on original settings) and a modified model (based on model parameter tuning using eddy flux data). Generally, models using default model settings showed large deviations in model outputs from observation with large model-by-model variability. However, after we calibrated the model parameters using eddy flux data (GPP, RE and NEP), most models successfully simulated seasonal variations in the carbon cycle, with less variability among models. We also found that interannual variations in the carbon cycle are mostly consistent among models and observations. Spatial analysis also showed a large reduction in the variability among model outputs. This study demonstrated that careful validation and calibration of models with available eddy flux data reduced model-by-model differences. Yet, site history, analysis of model structure changes, and a more objective procedure of model calibration should be included in further analysis.

  7. Multi-model analysis of terrestrial carbon cycles in Japan: limitations and implications of model calibration using eddy flux observations

    Science.gov (United States)

    Ichii, K.; Suzuki, T.; Kato, T.; Ito, A.; Hajima, T.; Ueyama, M.; Sasai, T.; Hirata, R.; Saigusa, N.; Ohtani, Y.; Takagi, K.

    2010-07-01

    Terrestrial biosphere models show large differences when simulating carbon and water cycles, and reducing these differences is a priority for developing more accurate estimates of the condition of terrestrial ecosystems and future climate change. To reduce uncertainties and improve the understanding of their carbon budgets, we investigated the utility of the eddy flux datasets to improve model simulations and reduce variabilities among multi-model outputs of terrestrial biosphere models in Japan. Using 9 terrestrial biosphere models (Support Vector Machine - based regressions, TOPS, CASA, VISIT, Biome-BGC, DAYCENT, SEIB, LPJ, and TRIFFID), we conducted two simulations: (1) point simulations at four eddy flux sites in Japan and (2) spatial simulations for Japan with a default model (based on original settings) and a modified model (based on model parameter tuning using eddy flux data). Generally, models using default model settings showed large deviations in model outputs from observation with large model-by-model variability. However, after we calibrated the model parameters using eddy flux data (GPP, RE and NEP), most models successfully simulated seasonal variations in the carbon cycle, with less variability among models. We also found that interannual variations in the carbon cycle are mostly consistent among models and observations. Spatial analysis also showed a large reduction in the variability among model outputs. This study demonstrated that careful validation and calibration of models with available eddy flux data reduced model-by-model differences. Yet, site history, analysis of model structure changes, and more objective procedure of model calibration should be included in the further analysis.

  8. An automated multi-model based evapotranspiration estimation framework for understanding crop-climate interactions in India

    Science.gov (United States)

    Bhattarai, N.; Jain, M.; Mallick, K.

    2017-12-01

    A remote-sensing-based multi-model evapotranspiration (ET) estimation framework is developed using MODIS and NASA Merra-2 reanalysis data for data-poor regions, and we apply this framework to the Indian subcontinent. The framework eliminates the need for in-situ calibration data, hence estimates ET completely from space, and is replicable across all regions in the world. Currently, six surface energy balance models ranging from the widely used SEBAL, METRIC, and SEBS to the moderately used S-SEBI, SSEBop, and a relatively new model, STIC1.2, are being integrated and validated. Preliminary analysis suggests good predictability of the models for estimating near-real-time ET under clear-sky conditions from various crop types in India, with coefficients of determination of 0.32-0.55 and percent bias of -15% to 28% when compared against Bowen-ratio-based ET estimates. The results are particularly encouraging given that no direct ground input data were used in the analysis. The framework is currently being extended to estimate seasonal ET across the Indian subcontinent using a model-ensemble approach that uses all available MODIS 8-day datasets since 2000. These ET products are being used to monitor inter-seasonal and inter-annual dynamics of ET and crop water use across different crop and irrigation practices in India. In particular, the potential impacts of changes in precipitation patterns and extreme heat (e.g., extreme degree days) on seasonal crop water consumption are being studied. Our ET products are able to locate the water stress hotspots that need to be targeted with water saving interventions to maintain agricultural production in the face of climate variability and change.
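
    The two skill scores quoted above (coefficient of determination and percent bias) can be computed as in this sketch; the ET values below are invented for illustration, and the PBIAS sign convention (positive = overestimation) is one common choice, not necessarily the study's:

```python
import numpy as np

def r_squared(obs, sim):
    """Coefficient of determination between observed and simulated ET."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    return r ** 2

def percent_bias(obs, sim):
    """PBIAS in percent; positive values indicate overestimation here."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * (sim - obs).sum() / obs.sum()

# Hypothetical 8-day ET composites (mm) vs Bowen-ratio observations.
obs = [3.1, 4.0, 4.8, 5.2, 4.4, 3.6]
sim = [3.5, 4.2, 4.4, 5.6, 4.9, 3.9]
r2 = r_squared(obs, sim)
pb = percent_bias(obs, sim)
```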

  9. Similarity-based multi-model ensemble approach for 1-15-day advance prediction of monsoon rainfall over India

    Science.gov (United States)

    Jaiswal, Neeru; Kishtawal, C. M.; Bhomia, Swati

    2018-04-01

    The southwest (SW) monsoon season (June, July, August and September) is the major period of rainfall over the Indian region. The present study focuses on the development of a new multi-model ensemble approach based on the similarity criterion (SMME) for the prediction of SW monsoon rainfall in the extended range. This approach is based on the assumption that training on days with similar conditions may provide better forecasts than the sequential training used in conventional MME approaches. In this approach, the training dataset was selected by matching the present-day condition to the archived dataset; the days with the most similar conditions were identified and used for training the model. The coefficients thus generated were used for the rainfall prediction. The precipitation forecasts from four general circulation models (GCMs), viz. European Centre for Medium-Range Weather Forecasts (ECMWF), United Kingdom Meteorological Office (UKMO), National Centre for Environment Prediction (NCEP) and China Meteorological Administration (CMA), have been used for developing the SMME forecasts. The forecasts of 1-5, 6-10 and 11-15 days were generated using the newly developed approach for each pentad of June-September during the years 2008-2013, and the skill of the model was analysed using verification scores, viz. equitable skill score (ETS), mean absolute error (MAE), Pearson's correlation coefficient and the Nash-Sutcliffe model efficiency index. Statistical analysis of SMME forecasts shows superior forecast skill compared to the conventional MME and the individual models for all the pentads, viz. 1-5, 6-10 and 11-15 days.
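
    A minimal sketch of the similarity-based selection and combination idea, under the assumption that "similarity" is plain Euclidean distance on some condition features and that the combination is an ordinary least-squares blend of the GCM forecasts; the synthetic archive below is purely illustrative:

```python
import numpy as np

def smme_forecast(F_hist, y_hist, f_today, c_hist, c_today, k=30):
    """Similarity-based MME: fit combination weights on the k archived days
    whose conditions are most similar to today's, not on a sequential window."""
    d = np.linalg.norm(c_hist - c_today, axis=1)     # similarity = Euclidean distance
    idx = np.argsort(d)[:k]                          # k analog days
    A = np.column_stack([F_hist[idx], np.ones(k)])   # GCM forecasts + intercept
    coef, *_ = np.linalg.lstsq(A, y_hist[idx], rcond=None)
    return float(np.append(f_today, 1.0) @ coef)

# Synthetic archive: 4 GCMs forecasting pentad rainfall (mm).
rng = np.random.default_rng(1)
n, m = 300, 4
truth = rng.gamma(2.0, 4.0, n)                       # "observed" rainfall
gcms = truth[:, None] + rng.normal(0.0, 3.0, (n, m)) # biased, noisy model forecasts
cond = np.column_stack([gcms.mean(axis=1), gcms.std(axis=1)])  # condition features

pred = smme_forecast(gcms[:-1], truth[:-1], gcms[-1], cond[:-1], cond[-1])
```

    Sequential MME would instead fit `coef` on the most recent days regardless of how similar their conditions were; the SMME idea is that analog days give more relevant training pairs.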

  10. High-resolution multi-model projections of onshore wind resources over Portugal under a changing climate

    Science.gov (United States)

    Nogueira, Miguel; Soares, Pedro M. M.; Tomé, Ricardo; Cardoso, Rita M.

    2018-05-01

    We present a detailed evaluation of wind energy density (WED) over Portugal, based on the EURO-CORDEX database of high-resolution regional climate model (RCM) simulations. Most RCMs showed reasonable accuracy in reproducing the observed near-surface wind speed. The climatological patterns of WED displayed large sub-regional heterogeneity, with higher values over coastal regions and steep orography. Subsequently, we investigated the future changes of WED throughout the twenty-first century, considering mid- and end-century periods, and two emission scenarios (RCP4.5 and RCP8.5). On the yearly average, the multi-model ensemble WED changes were below 10% (15%) under RCP4.5 (RCP8.5). However, the projected WED anomalies displayed strong seasonality, dominated by low positive values in summer (< 10% for both scenarios), negative values in winter and spring (up to −10% (−20%) under RCP4.5 (RCP8.5)), and stronger negative anomalies in autumn (up to −25% (−35%) under RCP4.5 (RCP8.5)). These projected WED anomalies displayed large sub-regional variability. The largest reductions (and lowest increases) are linked to the northern and central-eastern elevated terrain, and the southwestern coast. In contrast, the largest increases (and lowest reductions) are linked to the central-western orographic features of moderate elevation. The projections also showed changes in inter-annual variability of WED, with small increases for annual averages, but with distinct behavior when considering year-to-year variability over a specific season: small increases in winter, larger increases in summer, slight decrease in autumn, and no relevant change in spring. The changes in inter-annual variability also displayed strong dependence on the underlying terrain. Finally, we found significant model spread in the magnitude of projected WED anomalies and inter-annual variability, affecting even the signal of the changes.

  11. Diagnosing sea ice from the North American Multi-Model Ensemble and implications on mid-latitude winter climate

    Science.gov (United States)

    Elders, Akiko; Pegion, Kathy

    2017-12-01

    Arctic sea ice plays an important role in the climate system, moderating the exchange of energy and moisture between the ocean and the atmosphere. An emerging area of research investigates how changes, particularly declines, in sea ice extent (SIE) impact climate in regions local to and remote from the Arctic. Therefore, both observations and model estimates of sea ice become important. This study investigates the skill of sea ice predictions from models participating in the North American Multi-Model Ensemble (NMME) project. Three of the models in this project provide sea-ice predictions. The ensemble average of these models is used to determine seasonal climate impacts on surface air temperature (SAT) and sea level pressure (SLP) in remote regions such as the mid-latitudes. It is found that declines in fall SIE are associated with cold temperatures in the mid-latitudes and pressure patterns across the Arctic and mid-latitudes similar to the negative phase of the Arctic Oscillation (AO). These findings are consistent with other studies that have investigated the relationship between declines in SIE and mid-latitude weather and climate. In an attempt to include additional NMME models for sea-ice predictions, a proxy for SIE is used to estimate ice extent in the remaining models, using sea surface temperature (SST). It is found that SST is a reasonable proxy for SIE estimation when compared to model SIE forecasts and observations. The proxy sea-ice estimates also show similar relationships to mid-latitude temperature and pressure as the actual sea-ice predictions.

  12. Impacts of C-uptake by plants on the spatial distribution of 14C accumulated in vegetation around a nuclear facility-Application of a sophisticated land surface 14C model to the Rokkasho reprocessing plant, Japan.

    Science.gov (United States)

    Ota, Masakazu; Katata, Genki; Nagai, Haruyasu; Terada, Hiroaki

    2016-10-01

    The impacts of carbon uptake by plants on the spatial distribution of radiocarbon (14C) accumulated in vegetation around a nuclear facility were investigated by numerical simulations using a sophisticated land surface 14C model (SOLVEG-II). In the simulation, SOLVEG-II was combined with a mesoscale meteorological model and an atmospheric dispersion model. The model combination was applied to simulate the transfer of 14CO2 and to assess the radiological impact of 14C accumulation in rice grains during test operations of the Rokkasho reprocessing plant (RRP), Japan, in 2007. The calculated 14C-specific activities in rice grains agreed with the observed activities in paddy fields around the RRP within a factor of four. The annual effective dose delivered from 14C in the rice grain was estimated to be less than 0.7 μSv, only 0.07% of the annual effective dose limit of 1 mSv for the public. Numerical experiments of hypothetical continuous atmospheric 14CO2 release from the RRP showed that the 14C-specific activities of rice plants at harvest differed from the annual mean activities in the air. The difference was attributed to seasonal variations in the atmospheric 14CO2 concentration and the growth of the rice plant. Accumulation of 14C in the rice plant increased significantly when 14CO2 releases were limited to daytime hours, compared with releases during the nighttime. These results indicate that plant growth stages and diurnal photosynthesis should be considered in predicting the ingestion dose of 14C for long-term chronic releases and short-term diurnal releases of 14CO2, respectively.

  13. Forecast Combinations

    OpenAIRE

    Timmermann, Allan G

    2005-01-01

    Forecast combinations have frequently been found in empirical studies to produce better forecasts on average than methods based on the ex-ante best individual forecasting model. Moreover, simple combinations that ignore correlations between forecast errors often dominate more refined combination schemes aimed at estimating the theoretically optimal combination weights. In this paper we analyse theoretically the factors that determine the advantages from combining forecasts (for example, the d...

  14. Stochastic identification of temperature effects on the dynamics of a smart composite beam: assessment of multi-model and global model approaches

    International Nuclear Information System (INIS)

    Hios, J D; Fassois, S D

    2009-01-01

    The temperature effects on the dynamics of a smart composite beam are experimentally studied via conventional multi-model and novel global model identification approaches. The multi-model approaches are based on non-parametric and parametric VARX representations, whereas the global model approaches are based on novel constant coefficient pooled (CCP) and functionally pooled (FP) VARX parametric representations. The analysis indicates that the obtained multi-model and global model representations are in rough overall agreement. Nevertheless, the latter simultaneously use all available data records, offering more compact descriptions of the dynamics, improved numerical robustness and estimation accuracy, which is reflected in significantly reduced modal parameter uncertainties. Although the CCP-VARX representations provide only 'averaged' descriptions of the structural dynamics over temperature, their FP-VARX counterparts allow for the explicit, analytical modeling of temperature dependence, exhibiting a 'smooth' deterministic dependence of the dynamics on temperature, which is compatible with the physics of the problem. In accordance with previous studies, the obtained natural frequencies decrease with temperature in a weakly nonlinear or approximately linear fashion. The damping factors are less affected, although their dependence on temperature may be of a potentially more complex nature.

  15. Validation of precipitation over Japan during 1985-2004 simulated by three regional climate models and two multi-model ensemble means

    Energy Technology Data Exchange (ETDEWEB)

    Ishizaki, Yasuhiro [Meteorological Research Institute, Tsukuba (Japan); National Institute for Environmental Studies, Tsukuba (Japan); Nakaegawa, Toshiyuki; Takayabu, Izuru [Meteorological Research Institute, Tsukuba (Japan)

    2012-07-15

    We dynamically downscaled Japanese reanalysis data (JRA-25) for 60 regions of Japan using three regional climate models (RCMs): the Non-Hydrostatic Regional Climate Model (NHRCM), modified RAMS version 4.3 (NRAMS), and modified Weather Research and Forecasting model (TWRF). We validated their simulations of the precipitation climatology and interannual variations of summer and winter precipitation. We also validated precipitation for two multi-model ensemble means: the arithmetic ensemble mean (AEM) and an ensemble mean weighted according to model reliability. In the 60 regions NRAMS simulated both the winter and summer climatological precipitation better than JRA-25, and NHRCM simulated the wintertime precipitation better than JRA-25. TWRF, however, overestimated precipitation in the 60 regions in both the winter and summer, and NHRCM overestimated precipitation in the summer. The three RCMs simulated interannual variations, particularly summer precipitation, better than JRA-25. AEM simulated both climatological precipitation and interannual variations during the two seasons more realistically than JRA-25 and the three RCMs overall, but the best RCM was often superior to the AEM result. In contrast, the weighted ensemble mean skills were usually superior to those of the best RCM. Thus, both RCMs and multi-model ensemble means, especially multi-model ensemble means weighted according to model reliability, are powerful tools for simulating seasonal and interannual variability of precipitation in Japan under the current climate. (orig.)
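
    One plausible form of a reliability-weighted ensemble mean, with weights inversely proportional to each RCM's mean squared error against observations; the actual weighting scheme of the study may differ, and the precipitation numbers below are invented:

```python
import numpy as np

def weighted_ensemble(sims, obs):
    """Weight each RCM by inverse mean-squared error against observations
    (one simple 'reliability' weighting); returns weights and weighted mean."""
    sims = np.asarray(sims, float)          # shape (n_models, n_times)
    mse = ((sims - obs) ** 2).mean(axis=1)
    w = (1.0 / mse) / (1.0 / mse).sum()
    return w, w @ sims

obs = np.array([120., 80., 150., 60., 100.])    # regional precip (mm/month)
sims = np.array([
    [130., 85., 160., 65., 108.],   # RCM with small errors
    [100., 70., 120., 50., 85.],    # RCM with a dry bias
    [170., 120., 210., 95., 150.],  # RCM with a wet bias
])
w, wmean = weighted_ensemble(sims, obs)
aem = sims.mean(axis=0)                          # arithmetic ensemble mean (AEM)
```

    Because the weights down-weight the strongly biased models, the weighted mean tracks the observations more closely than the AEM, mirroring the paper's finding that the reliability-weighted ensemble usually beats even the best single RCM.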

  16. Carbon dioxide and climate impulse response functions for the computation of greenhouse gas metrics: a multi-model analysis

    Directory of Open Access Journals (Sweden)

    F. Joos

    2013-03-01

    The responses of carbon dioxide (CO2) and other climate variables to an emission pulse of CO2 into the atmosphere are often used to compute the Global Warming Potential (GWP) and Global Temperature change Potential (GTP), to characterize the response timescales of Earth System models, and to build reduced-form models. In this carbon cycle-climate model intercomparison project, which spans the full model hierarchy, we quantify responses to emission pulses of different magnitudes injected under different conditions. The CO2 response shows the known rapid decline in the first few decades followed by a millennium-scale tail. For a 100 Gt-C emission pulse added to a constant CO2 concentration of 389 ppm, 25 ± 9% is still found in the atmosphere after 1000 yr; the ocean has absorbed 59 ± 12% and the land the remainder (16 ± 14%). The response in global mean surface air temperature is an increase by 0.20 ± 0.12 °C within the first twenty years; thereafter and until year 1000, temperature decreases only slightly, whereas ocean heat content and sea level continue to rise. Our best estimate for the Absolute Global Warming Potential, given by the time-integrated response in CO2 at year 100 multiplied by its radiative efficiency, is 92.5 × 10⁻¹⁵ yr W m⁻² per kg-CO2. This value very likely (5 to 95% confidence) lies within the range of (68 to 117) × 10⁻¹⁵ yr W m⁻² per kg-CO2. Estimates for the time-integrated response in CO2 published in the IPCC First, Second, and Fourth Assessments and our multi-model best estimate all agree within 15% during the first 100 yr. The integrated CO2 response, normalized by the pulse size, is lower for pre-industrial conditions, compared to present day, and lower for smaller pulses than larger pulses. In contrast, the response in temperature, sea level and ocean heat content is less sensitive to these choices. Although choices in pulse size, background concentration, and model lead to uncertainties, the most important and
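
    The Absolute Global Warming Potential described above (time-integrated CO2 response multiplied by radiative efficiency) can be reproduced approximately with a multi-exponential impulse response function. The amplitudes and timescales below are the commonly cited fit from this line of work, and the radiative efficiency is an approximate present-day value; treat all numbers as indicative, not as the paper's exact inputs:

```python
import math

# Multi-exponential fit to the CO2 pulse response: one constant term plus
# three decaying modes (coefficients as commonly cited for this fit).
A = [0.2173, 0.2240, 0.2824, 0.2763]    # amplitudes (A[0] has no decay)
TAU = [None, 394.4, 36.54, 4.304]       # e-folding timescales (yr)
RE_CO2 = 1.77e-15                       # radiative efficiency, W m^-2 per kg-CO2 (approx.)

def irf_integral(t_horizon):
    """Analytic time integral of the CO2 impulse response function (in yr)."""
    total = A[0] * t_horizon
    for a, tau in zip(A[1:], TAU[1:]):
        total += a * tau * (1.0 - math.exp(-t_horizon / tau))
    return total

# AGWP at the 100-yr horizon: integrated airborne fraction times radiative efficiency.
agwp_100 = RE_CO2 * irf_integral(100.0)   # yr W m^-2 per kg-CO2
```

    With these indicative inputs the result lands inside the abstract's very-likely range of (68 to 117) × 10⁻¹⁵ yr W m⁻² per kg-CO2.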

  17. Multi-model assessment of health impacts of air pollution in Europe and the U.S.

    Science.gov (United States)

    Im, Ulas; Brandt, Jørgen; Christensen, Jesper H.; Geels, Camilla; Hansen, Kaj M.; Andersen, Mikael S.; Solazzo, Efisio; Hogrefe, Christian; Galmarini, Stefano

    2017-04-01

    According to the World Health Organization (WHO), air pollution is now the world's largest single environmental health risk. Assessments of health impacts and the associated external costs related to air pollution are estimated based on observed and/or modelled air pollutant levels. Chemistry and transport models (CTMs) are useful tools to calculate the concentrations of health-related pollutants, taking into account the non-linearities in the chemistry and the complex interactions between meteorology and chemistry. However, the CTMs include different chemical and aerosol schemes that introduce differences in the representation of the processes. Likewise, differences in the emissions and boundary conditions used in the models will add to the overall uncertainties. These uncertainties are also introduced into the health impact estimates that use output from the CTMs. Multi-model (MM) ensembles can be useful to minimize these uncertainties introduced by the individual CTMs. In the present study, the simulated surface concentrations of health-related air pollutants for the year 2010 from fifteen modelling groups participating in the AQMEII exercise serve as input to the Economic Valuation of Air Pollution model (EVA), in order to calculate the impacts of these pollutants on human health and the associated external costs in Europe and the U.S. In addition, the impacts of a 20% global emission reduction scenario on human health and the associated costs have been calculated. Preliminary results show that in Europe and the U.S., the MM mean number of premature deaths due to air pollution is calculated to be 400 000 and 160 000, respectively. Estimated health impacts among different models can vary by up to a factor of 3 and 1.2 in Europe and the U.S., respectively. PM is calculated to be the major pollutant affecting the health impacts, and the differences among models in the treatment of aerosol composition, physics and dynamics are a key factor. The total MM mean costs due to health
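
    Health impact assessments of this kind typically rest on a log-linear concentration-response function; the sketch below shows the generic formula with invented population and relative-risk inputs, not the EVA model's actual parameterization:

```python
import math

def attributable_deaths(pop, baseline_rate, beta, delta_c):
    """Premature deaths attributable to a pollutant increment delta_c (ug/m3),
    using a log-linear concentration-response function:
    deaths = pop * baseline_rate * (1 - exp(-beta * delta_c))."""
    return pop * baseline_rate * (1.0 - math.exp(-beta * delta_c))

# Illustrative inputs (NOT the study's): relative risk 1.06 per 10 ug/m3 PM2.5,
# exposed population 500 million, baseline mortality rate 1% per year,
# modelled PM2.5 increment 8 ug/m3.
beta = math.log(1.06) / 10.0
deaths = attributable_deaths(pop=5.0e8, baseline_rate=0.01, beta=beta, delta_c=8.0)
```

    Because deaths scale with the modelled concentration increment, the factor-of-3 spread in CTM-simulated PM translates almost directly into the spread of health impact estimates, which is what the MM mean is meant to dampen.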

  18. Assessment of structural model and parameter uncertainty with a multi-model system for soil water balance models

    Science.gov (United States)

    Michalik, Thomas; Multsch, Sebastian; Frede, Hans-Georg; Breuer, Lutz

    2016-04-01

    Water for agriculture is strongly limited in arid and semi-arid regions and often of low quality in terms of salinity. The application of saline waters for irrigation increases the salt load in the rooting zone and has to be managed by leaching to maintain a healthy soil, i.e. to wash out salts by additional irrigation. Dynamic simulation models are helpful tools to calculate the root zone water fluxes and soil salinity content in order to investigate best management practices. However, there is little information on structural and parameter uncertainty for simulations regarding the water and salt balance of saline irrigation. Hence, we established a multi-model system with four different models (AquaCrop, RZWQM, SWAP, Hydrus1D/UNSATCHEM) to analyze the structural and parameter uncertainty by using the Generalized Likelihood Uncertainty Estimation (GLUE) method. Hydrus1D/UNSATCHEM and SWAP were set up with multiple sets of different implemented functions (e.g. matric and osmotic stress for root water uptake), which results in a broad range of different model structures. The simulations were evaluated against soil water and salinity content observations. The posterior distribution from the GLUE analysis yields behavioral parameter sets and reveals parameter uncertainty intervals. Throughout all of the model sets, most parameters accounting for the soil water balance show a low uncertainty; only one or two out of five to six parameters in each model set display a high uncertainty (e.g. the pore-size distribution index in SWAP and Hydrus1D/UNSATCHEM). The differences between the models and model setups reveal the structural uncertainty. The highest structural uncertainty is observed for deep percolation fluxes between the model sets of Hydrus1D/UNSATCHEM (~200 mm) and RZWQM (~500 mm), which are more than twice as high for the latter.
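
    The GLUE procedure (Monte Carlo sampling, a likelihood score such as Nash-Sutcliffe efficiency, a behavioral threshold, and percentile-based uncertainty intervals) can be sketched on a toy one-parameter model; everything below is a schematic stand-in for the actual soil water balance models:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency used here as the informal GLUE likelihood."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - ((obs - sim) ** 2).sum() / ((obs - obs.mean()) ** 2).sum()

def toy_model(theta, x):
    """Stand-in for a soil water balance model with one uncertain parameter."""
    return theta * x

rng = np.random.default_rng(42)
x = np.linspace(0.1, 1.0, 25)
obs = toy_model(0.7, x) + rng.normal(0.0, 0.02, x.size)  # synthetic observations

# GLUE: sample the prior, score each parameter set, keep 'behavioral' sets
# above the likelihood threshold, and read off uncertainty intervals.
samples = rng.uniform(0.0, 2.0, 5000)
scores = np.array([nash_sutcliffe(obs, toy_model(th, x)) for th in samples])
behavioral = samples[scores > 0.5]
lo, hi = np.percentile(behavioral, [2.5, 97.5])          # parameter uncertainty band
```

    Running GLUE separately for each model structure (as in the study's four-model system) then lets the spread between the models' behavioral predictions be read as structural uncertainty, on top of the within-model parameter bands.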
The model sets show a high variation in uncertainty intervals for deep percolation as well, with an interquartile range (IQR) of
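The GLUE workflow summarised above (sample many parameter sets, score each against observations with an informal likelihood, keep the "behavioural" sets above a threshold, and read uncertainty intervals off their distribution) can be sketched in a few lines. The two-parameter drainage model, the uniform priors and the NSE threshold below are illustrative assumptions, not the models or settings used in the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "soil water" model: exponential drainage with two parameters.
def simulate(theta0, k, t):
    return theta0 * np.exp(-k * t)

t = np.arange(10.0)
obs = simulate(0.30, 0.15, t) + rng.normal(0, 0.01, t.size)  # synthetic observations

# Monte Carlo sampling of the parameter space (uniform priors, assumed ranges)
n = 5000
theta0_s = rng.uniform(0.1, 0.5, n)
k_s = rng.uniform(0.01, 0.5, n)

# Informal likelihood: Nash-Sutcliffe efficiency of each candidate run
sims = simulate(theta0_s[:, None], k_s[:, None], t)
nse = 1 - np.sum((sims - obs) ** 2, axis=1) / np.sum((obs - obs.mean()) ** 2)

# Behavioural sets: likelihood above a subjective threshold
behavioural = nse > 0.9
k_post = k_s[behavioural]

# 95% uncertainty interval for the drainage parameter k
lo, hi = np.percentile(k_post, [2.5, 97.5])
print(f"{behavioural.sum()} behavioural sets, k in [{lo:.3f}, {hi:.3f}]")
```

A stricter threshold shrinks the behavioural set and narrows the intervals; the subjective choice of likelihood and threshold is a well-known feature of GLUE.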

  19. A study on the influence of eWOM using content analysis: how do comments on value for money, product sophistication and experiential feeling affect our choices?

    Science.gov (United States)

    Cho, Vincent; Chan, Alpha

    2017-07-01

The influence of electronic word of mouth (eWOM) has been heavily investigated in relation to online ratings. However, only a few studies have examined the content of eWOM. From the perspective of the consideration sets model, consumers formulate an awareness set, a consideration set and a choice set before making a purchase. We argue that the formulation of these sets is influenced by eWOM based on its volume, valence and content relating to product attributes such as value for money, product sophistication and experiential feeling. In this study, the content of posts relating to Shure professional earphones in the online forum Mingo (www.mingo-hmw.com/forum) was captured and annotated. During the data collection period, Mingo was the sole online forum relating to professional earphones. Without much interference from other online forums, the circumstances of this study closely approximate a laboratory setting. In addition, we collected the actual sales, marketing costs, fault rates and number of retail stores selling the Shure professional earphones for 126 weeks. Our findings show that the weekly volume of posts, their relative number of positive (negative) comments, especially regarding value for money and sound quality, and posts from the preceding week impinged strongly on weekly sales of Shure products. In the regression models, the explained variance in sales jumps from 0.236 to 0.732 once the influence of eWOM is included.

  20. Critical confrontation of standard and more sophisticated methods for modelling the dispersion in air of heavy gas clouds; evaluation and illustration of the intrinsic limitations of both categories

    International Nuclear Information System (INIS)

    Riethmuller, M.L.

    1983-01-01

Mathematical models of gas dispersion have evolved drastically since the 1930s. For a long time, the most widely used approaches were the so-called Gaussian model, as described in practical terms by Turner, and box models, each of which has shown relative merits. In the field of heavy gas dispersion, such approaches appeared somewhat limited, and therefore new models have been proposed. Some of these new-generation models made use of the latest progress in turbulence modelling, derived from laboratory work as well as numerical advances. The advent of faster and larger computers made possible the development of three-dimensional codes that compute both the flow field and gas dispersion, taking into account details of the ground obstacles, heat exchange and possibly phase changes as well. The description of these new types of models makes them appear a considerable improvement over the simpler approaches. However, recent comparisons between many of them have led to the conclusion that the scatter between predictions attained with sophisticated models was just as large as with the simpler ones. It seems, therefore, that current researchers might have fallen into the trap of confusing mathematical precision with accuracy. It is therefore felt necessary to shed light on this question through an investigation which, rather than comparing individual models, analyses the key features of both approaches and puts in evidence their relative merits and degree of realism when actually applied.

  1. Global water balances reconstructed by multi-model offline simulations of land surface models under GSWP3 (Invited)

    Science.gov (United States)

    Oki, T.; KIM, H.; Ferguson, C. R.; Dirmeyer, P.; Seneviratne, S. I.

    2013-12-01

As the climate warms, the frequency and severity of flood and drought events are projected to increase. Understanding the role that the land surface will play in reinforcing or diminishing these extremes at regional scales will become critical. In fact, the current development path from atmospheric (GCM) to coupled atmosphere-ocean (AOGCM) to fully coupled dynamic earth system models (ESMs) has brought new awareness to the climate modeling community of the abundance of uncertainty in land surface parameterizations. One way to test the representativeness of a land surface scheme is to do so in off-line (uncoupled) mode with controlled, high-quality meteorological forcing. When multiple land schemes are run in parallel (with the same forcing data), an inter-comparison of their outputs can provide the basis for model confidence estimates and future model refinements. In 2003, the Global Soil Wetness Project Phase 2 (GSWP2) provided the first global multi-model analysis of land surface state variables and fluxes. It spanned the decade of 1986-1995. While it was state-of-the-art at the time, physical schemes have since been enhanced, a number of additional processes and components in the water-energy-ecosystems nexus can now be simulated, and the availability of global, long-term observationally based datasets that can be used for forcing and validating models has grown. Today, the data exist to support century-scale off-line experiments. The ongoing follow-on to GSWP2, named GSWP3, capitalizes on these new possibilities and model capabilities. The project's cornerstone is its century-scale (1901-2010), 3-hourly, 0.5° meteorological forcing dataset, dynamically downscaled from the Twentieth Century Reanalysis and bias-corrected using monthly Climate Research Unit (CRU) temperature and Global Precipitation Climatology Centre (GPCC) precipitation data. However, GSWP3 also has an important long-term future climate component that spans the 21st century

  2. Multi-model Mean Nitrogen and Sulfur Deposition from the Atmospheric Chemistry and Climate Model Intercomparison Project (ACCMIP): Evaluation of Historical and Projected Changes

    Science.gov (United States)

    Lamarque, J.-F.; Dentener, F.; McConnell, J.; Ro, C.-U.; Shaw, M.; Vet, R.; Bergmann, D.; Cameron-Smith, P.; Doherty, R.; Faluvegi, G.; hide

    2013-01-01

We present multi-model global datasets of nitrogen and sulfate deposition covering time periods from 1850 to 2100, calculated within the Atmospheric Chemistry and Climate Model Intercomparison Project (ACCMIP). The computed deposition fluxes are compared to surface wet deposition and ice-core measurements. We use a new dataset of wet deposition for 2000-2002 based on critical assessment of the quality of existing regional network data. We show that for present-day (year 2000 ACCMIP time-slice), the ACCMIP results perform similarly to previously published multi-model assessments. For this time slice, we find a multi-model mean deposition of 50 Tg(N) yr⁻¹ from nitrogen oxide emissions, 60 Tg(N) yr⁻¹ from ammonia emissions, and 83 Tg(S) yr⁻¹ from sulfur emissions. The analysis of changes between 1980 and 2000 indicates significant differences between model and measurements over the United States but less so over Europe. This difference points towards misrepresentation of 1980 NH3 emissions over North America. Based on ice-core records, the 1850 deposition fluxes agree well with Greenland ice cores but the change between 1850 and 2000 seems to be overestimated in the Northern Hemisphere for both nitrogen and sulfur species. Using the Representative Concentration Pathways to define the projected climate and atmospheric chemistry related emissions and concentrations, we find large regional nitrogen deposition increases in 2100 in Latin America, Africa and parts of Asia under some of the scenarios considered. Increases in South Asia are especially large, and are seen in all scenarios, with 2100 values more than double those of 2000 in some scenarios and reaching 1300 mg(N) m⁻² yr⁻¹ averaged over regional to continental scale regions in RCP 2.6 and 8.5, 30-50% larger than the values in any region currently (2000). The new ACCMIP deposition dataset provides novel, consistent and evaluated global gridded deposition fields for use in a wide range of climate and ecological studies.

  3. combination Dictionary

    African Journals Online (AJOL)

    rbr

    of the idiomatic expression as a whole" (Crystal 2003: 225-226). Idiomatic .... nations and idioms.11 Nonetheless, free combinations of words have not been .... those who thronged Emmett place last night wanted to see the film, and they.

  4. Winning Combinations

    DEFF Research Database (Denmark)

    Criscuolo, Paola; Laursen, Keld; Reichstein, Toke

    2018-01-01

Searching for the most rewarding sources of innovative ideas remains a key challenge in the management of technological innovation. Yet, little is known about which combinations of internal and external knowledge sources are triggers for innovation. Extending theories about searching for innovation, we examine the effectiveness of different combinations of knowledge sources for achieving innovative performance. We suggest that combinations involving integrative search strategies – combining internal and external knowledge – are the most likely to generate product and process innovation. In this context, we present the idea that cognitively distant knowledge sources are helpful for innovation only when used in conjunction with knowledge sources that are closer to the focal firm. We also find important differences between product and process innovation, with the former associated with broader searches.

  5. Future Temperatures and Precipitations in the Arid Northern-Central Chile: A Multi-Model Downscaling Approach

    Science.gov (United States)

    Souvignet, M.; Heinrich, J.

    2010-03-01

Downscaling of global climate outputs is necessary to transfer projections of potential climate change scenarios to local levels. This is of special interest for dry mountainous areas, which are particularly vulnerable to climate change due to the risk of reduced freshwater availability. These areas play a key role in hydrology since they usually receive the highest local precipitation rates, stored in the form of snow and glaciers. In central-northern Chile (Norte Chico, 26-33°S), where agriculture still serves as a backbone of the economy and ensures the well-being of the population, knowledge of water resource availability is essential. The region is characterised by a semi-arid climate with a mean annual precipitation below 100 mm. Moreover, the local climate is highly influenced by the ENSO phenomenon, which accounts for the strong inter-annual variability in precipitation patterns. Although historical and spatially extensive precipitation data in the headwaters of the basins in this region are not readily available, records at coastal stations show worrisome trends. For instance, the average precipitation in La Serena, the most important city in the Coquimbo Region, has decreased dramatically in the past 100 years: the 30-year monthly average has decreased from 170 mm in the early 20th century to less than 80 mm today. Climate change is expected to strengthen this pattern in the region and therefore strongly influence local hydrological patterns. The objectives of this study are i) to develop climate change scenarios (2046-2099) for the Norte Chico using multi-model predictions of temperature and precipitation, and ii) to compare the efficiency of two downscaling techniques in arid mountainous regions. In addition, this study aims iii) to provide decision makers with a sound analysis of the potential impact of climate change on streamflow in the region. For the present study, future local climate scenarios were developed

  6. A multi-model assessment of the co-benefits of climate mitigation for global air quality

    NARCIS (Netherlands)

    Rao, Shilpa; Klimont, Zbigniew; Leitao, Joana; Riahi, Keywan; van Dingenen, Rita; Aleluia Reis, Lara; Calvin, Katherine; Dentener, Frank; Drouet, Laurent; Fujimori, Shinichiro; Harmsen, Mathijs; Luderer, Gunnar; Heyes, Chris; Strefler, Jessica; Tavoni, Massimo; van Vuuren, Detlef P.

    2016-01-01

    We present a model comparison study that combines multiple integrated assessment models with a reduced-form global air quality model to assess the potential co-benefits of global climate mitigation policies in relation to the World Health Organization (WHO) goals on air quality and health. We

  7. Report on activities and findings under DOE grant “Collaborative research. An Interactive Multi-Model for Consensus on Climate Change”

    Energy Technology Data Exchange (ETDEWEB)

    Duane, Gregory S. [Univ. of Colorado, Boulder, CO (United States); Tsonis, Anastasios [Univ. of Wisconsin, Madison, WI (United States); Kocarev, Ljupco [Univ. of California, San Diego, CA (United States); Tribbia, Joseph [National Center for Atmospheric Research, Boulder, CO (United States)

    2015-10-30

for inter-model nudging using the DART (Data Assimilation Research Testbed) capability to stop and re-start models in synchrony. It was clearly established that the inter-model nudging adds almost no computational burden to the runs, but there appears to be a problem with the re-initialization software that is still being debugged. Publications: Several papers were published on the basic idea of the interactive multi-model (supermodel), including demonstrations with low-order ODEs. The last of these, a semi-philosophical review paper on the relevance of synchronization generally, encountered considerable resistance but was finally published in Entropy [Duane 2015]. A paper on the ECHAM/COSMOS supermodel, containing the most promising results so far [Shen et al. 2015], is presently under review.

  8. Large-scale, high-performance and cloud-enabled multi-model analytics experiments in the context of the Earth System Grid Federation

    Science.gov (United States)

    Fiore, S.; Płóciennik, M.; Doutriaux, C.; Blanquer, I.; Barbera, R.; Williams, D. N.; Anantharaj, V. G.; Evans, B. J. K.; Salomoni, D.; Aloisio, G.

    2017-12-01

The increased model resolution in the development of comprehensive Earth System Models is rapidly leading to very large climate simulation outputs that pose significant scientific data management challenges in terms of data sharing, processing, analysis, visualization, preservation, curation, and archiving. Large-scale global experiments for Climate Model Intercomparison Projects (CMIP) have led to the development of the Earth System Grid Federation (ESGF), a federated data infrastructure which has been serving the CMIP5 experiment, providing access to 2 PB of data for the IPCC Assessment Reports. In such a context, running a multi-model data analysis experiment is very challenging, as it requires the availability of a large amount of data related to multiple climate model simulations and of scientific data management tools for large-scale data analytics. To address these challenges, a case study on climate model intercomparison data analysis has been defined and implemented in the context of the EU H2020 INDIGO-DataCloud project. The case study has been tested and validated on CMIP5 datasets, in the context of a large-scale, international testbed involving several ESGF sites (LLNL, ORNL and CMCC), one orchestrator site (PSNC) and one more site hosting INDIGO PaaS services (UPV). Additional ESGF sites, such as NCI (Australia) and a couple more in Europe, are also joining the testbed. The added value of the proposed solution is summarized in the following: it implements a server-side paradigm which limits data movement; it relies on a High-Performance Data Analytics (HPDA) stack to address performance; it exploits the INDIGO PaaS layer to support flexible, dynamic and automated deployment of software components; it provides user-friendly web access based on the INDIGO Future Gateway; and finally it integrates, complements and extends the support currently available through ESGF. Overall it provides a new "tool" for climate scientists to run multi-model experiments.
At the

  9. Large scale and cloud-based multi-model analytics experiments on climate change data in the Earth System Grid Federation

    Science.gov (United States)

    Fiore, Sandro; Płóciennik, Marcin; Doutriaux, Charles; Blanquer, Ignacio; Barbera, Roberto; Donvito, Giacinto; Williams, Dean N.; Anantharaj, Valentine; Salomoni, Davide D.; Aloisio, Giovanni

    2017-04-01

In many scientific domains, such as climate science, data are often n-dimensional and require tools that support specialized data types and primitives to be properly stored, accessed, analysed and visualized. Moreover, new challenges arise in large-scale scenarios and ecosystems where petabytes (PB) of data can be available and data can be distributed and/or replicated, such as the Earth System Grid Federation (ESGF) serving the Coupled Model Intercomparison Project, Phase 5 (CMIP5) experiment, providing access to 2.5 PB of data for the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5). A case study on climate model intercomparison data analysis addressing several classes of multi-model experiments is being implemented in the context of the EU H2020 INDIGO-DataCloud project. Such experiments require the availability of a large amount of data (of multi-terabyte order) related to the output of several climate model simulations, as well as the exploitation of scientific data management tools for large-scale data analytics. More specifically, the talk discusses in detail a use case on precipitation trend analysis in terms of requirements, architectural design solution, and infrastructural implementation. The experiment has been tested and validated on CMIP5 datasets, in the context of a large-scale distributed testbed across the EU and US involving three ESGF sites (LLNL, ORNL, and CMCC) and one central orchestrator site (PSNC). The general "environment" of the case study relates to: (i) multi-model data analysis inter-comparison challenges; (ii) addressed on CMIP5 data; and (iii) which are made available through the IS-ENES/ESGF infrastructure. The added value of the solution proposed in the INDIGO-DataCloud project is summarized in the following: (i) it implements a different paradigm (from client- to server-side); (ii) it intrinsically reduces data movement; (iii) it makes the end-user setup lightweight; (iv) it fosters re-usability (of data, final

  10. Comparing reconstructed past variations and future projections of the Baltic Sea ecosystem—first results from multi-model ensemble simulations

    DEFF Research Database (Denmark)

    Meier, H E Markus; Andersson, Helén C; Arheimer, Berit

    2012-01-01

    Multi-model ensemble simulations for the marine biogeochemistry and food web of the Baltic Sea were performed for the period 1850–2098, and projected changes in the future climate were compared with the past climate environment. For the past period 1850–2006, atmospheric, hydrological and nutrient...... forcings were reconstructed, based on historical measurements. For the future period 1961–2098, scenario simulations were driven by regionalized global general circulation model (GCM) data and forced by various future greenhouse gas emission and air- and riverborne nutrient load scenarios (ranging from...... a pessimistic ‘business-as-usual’ to the most optimistic case). To estimate uncertainties, different models for the various parts of the Earth system were applied. Assuming the IPCC greenhouse gas emission scenarios A1B or A2, we found that water temperatures at the end of this century may be higher...

  11. Forecast combinations

    OpenAIRE

    Aiolfi, Marco; Capistrán, Carlos; Timmermann, Allan

    2010-01-01

We consider combinations of subjective survey forecasts and model-based forecasts from linear and non-linear univariate specifications as well as multivariate factor-augmented models. Empirical results suggest that a simple equal-weighted average of survey forecasts outperforms the best model-based forecasts for a majority of macroeconomic variables and forecast horizons. Additional improvements can in some cases be gained by using a simple equal-weighted average of survey and model-based fore...
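The equal-weighted averaging result summarised above is easy to reproduce on synthetic data. The three "forecasters" below are invented for illustration and merely stand in for the survey and model-based forecasts discussed in the paper: each sees the same target with its own bias and noise, and the simple average beats every individual forecaster.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500
truth = rng.normal(0, 1, T)  # the target variable

# Three imperfect "forecasters" of the same target (bias, noise level assumed)
forecasts = np.stack([
    truth + rng.normal(0.3, 1.0, T),   # biased
    truth + rng.normal(-0.2, 0.8, T),  # less noisy
    truth + rng.normal(0.0, 1.2, T),   # unbiased but noisy
])

combo = forecasts.mean(axis=0)  # simple equal-weighted average

rmse = lambda f: np.sqrt(np.mean((f - truth) ** 2))
individual = [rmse(f) for f in forecasts]
print([round(r, 3) for r in individual], round(rmse(combo), 3))
```

Because the forecasters' errors are only weakly correlated, averaging cancels much of the noise, which is the intuition behind the "forecast combination puzzle" that simple equal weights are hard to beat.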

  12. Combined homicide

    Directory of Open Access Journals (Sweden)

    Slović Živana

    2017-01-01

Full Text Available Introduction: A combined homicide involves a combination of two or more different modes of killing. Such homicides occur when multiple perpetrators use different modes of killing, when the killer seeks to hide the true manner of death, when an initially unsuccessful attack with one weapon is abandoned in favor of a more successful mode, or due to the availability of weapons at the scene, the unexpected appearance of a possible eyewitness, or other circumstances. Case report: This case concerns a 65-year-old woman who was found in her residence lying on her back on the floor next to the bed, with two kitchen knives in her neck. Autopsy revealed an abrasion on the frontal part of the neck and bruising of the soft tissues of the neck, with a double fracture of both greater horns of the hyoid bone and a fracture of both superior horns of the thyroid cartilage. The cause of death was exsanguination into the right half of the thoracic cavity from the left subclavian artery, which was cut at the site of a stab wound in the neck. Conclusion: Hemorrhage in the soft tissue near the broken hyoid bone and thyroid cartilage indicates that the victim was first strangled and then stabbed with the kitchen knives. Combined homicides are committed by one or more killers in order to accelerate the killing or to ensure a fatal outcome. This case is also interesting because the killer left the weapons in the victim's neck.

  13. Hydrometeorological multi-model ensemble simulations of the 4 November 2011 flash flood event in Genoa, Italy, in the framework of the DRIHM project

    Directory of Open Access Journals (Sweden)

    A. Hally

    2015-03-01

Full Text Available The e-Science environment developed in the framework of the EU-funded DRIHM project was used to demonstrate its ability to provide relevant, meaningful hydrometeorological forecasts. This was illustrated for the tragic case of 4 November 2011, when Genoa, Italy, was flooded as the result of heavy convective precipitation that inundated the Bisagno catchment. The Meteorological Model Bridge (MMB), an innovative software component developed within the DRIHM project for the interoperability of meteorological and hydrological models, is a key component of the DRIHM e-Science environment. The MMB allowed three different rainfall-discharge models (DRiFt, RIBS and HBV) to be driven by four mesoscale limited-area atmospheric models (WRF-NMM, WRF-ARW, Meso-NH and AROME) and a downscaling algorithm (RainFARM) in a seamless fashion. In addition to this multi-model configuration, some of the models were run in probabilistic mode, thus giving a comprehensive account of modelling errors and a very large number of likely hydrometeorological scenarios (> 1500). The multi-model approach proved to be necessary because, whilst various aspects of the event were successfully simulated by different models, none of the models reproduced all of these aspects correctly. It was shown that the resulting set of simulations helped identify key atmospheric processes responsible for the large rainfall accumulations over the Bisagno basin. The DRIHM e-Science environment facilitated an evaluation of the sensitivity to atmospheric and hydrological modelling errors. This showed that both had a significant impact on predicted discharges, with the former being larger than the latter. Finally, the usefulness of the set of hydrometeorological simulations was assessed from a flash flood early-warning perspective.

  14. Multi-model Mean Nitrogen and Sulfur Deposition from the Atmospheric Chemistry and Climate Model Intercomparison Project (ACCMIP): Evaluation of Historical and Projected Future Changes

    Energy Technology Data Exchange (ETDEWEB)

    Lamarque, Jean-Francois; Dentener, Frank; McConnell, J.R.; Ro, C-U; Shaw, Mark; Vet, Robert; Bergmann, D.; Cameron-Smith, Philip; Dalsoren, S.; Doherty, R.; Faluvegi, G.; Ghan, Steven J.; Josse, B.; Lee, Y. H.; MacKenzie, I. A.; Plummer, David; Shindell, Drew; Skeie, R. B.; Stevenson, D. S.; Strode, S.; Zeng, G.; Curran, M.; Dahl-Jensen, D.; Das, S.; Fritzsche, D.; Nolan, M.

    2013-08-20

We present multi-model global datasets of nitrogen and sulfate deposition covering time periods from 1850 to 2100, calculated within the Atmospheric Chemistry and Climate Model Intercomparison Project (ACCMIP). The computed deposition fluxes are compared to surface wet deposition and ice-core measurements. We use a new dataset of wet deposition for 2000-2002 based on critical assessment of the quality of existing regional network data. We show that for present-day (year 2000 ACCMIP time-slice), the ACCMIP results perform similarly to previously published multi-model assessments. The analysis of changes between 1980 and 2000 indicates significant differences between model and measurements over the United States, but less so over Europe. This difference points towards misrepresentation of 1980 NH3 emissions over North America. Based on ice-core records, the 1850 deposition fluxes agree well with Greenland ice cores but the change between 1850 and 2000 seems to be overestimated in the Northern Hemisphere for both nitrogen and sulfur species. Using the Representative Concentration Pathways to define the projected climate and atmospheric chemistry related emissions and concentrations, we find large regional nitrogen deposition increases in 2100 in Latin America, Africa and parts of Asia under some of the scenarios considered. Increases in South Asia are especially large, and are seen in all scenarios, with 2100 values more than double those of 2000 in some scenarios and reaching >1300 mg(N) m⁻² yr⁻¹ averaged over regional to continental scale regions in RCP 2.6 and 8.5, ~30-50% larger than the values in any region currently (2000). Despite known issues, the new ACCMIP deposition dataset provides novel, consistent and evaluated global gridded deposition fields for use in a wide range of climate and ecological studies.

  15. Estudo teórico das transições eletrônicas usando métodos simples e sofisticados Theoretical study of electronic transitions using simple and sophisticated methods

    Directory of Open Access Journals (Sweden)

    Nelson H. Morgon

    2013-01-01

Full Text Available In this paper, the use of both simple and sophisticated models in the study of electronic transitions was explored for a set of molecular systems: C2H4, C4H4, C4H6, C6H6, C6H8, "C8", C60, and [H2NCHCH(CHCHkCHNH2]+, where k = 0 to 4. The simple model of the free particle (1D, 2D, and 3D boxes, rings or spherical surfaces), considering the boundary conditions, was found to yield results similar to those of sophisticated theoretical methods such as EOM-CCSD/6-311++G** or TD(NStates=5,Root=1)-M06-2X/6-311++G**.
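The "simple model" the abstract refers to is the free particle in a box, for which the transition energy follows from E_n = n²h²/(8mL²). As a rough, hedged illustration (the box length and electron count below are standard textbook values for butadiene, not parameters taken from the paper), the HOMO-LUMO wavelength can be estimated in a few lines:

```python
# Physical constants (CODATA values)
h = 6.62607015e-34      # Planck constant, J s
c = 2.99792458e8        # speed of light, m/s
m_e = 9.1093837015e-31  # electron mass, kg

def box_transition_wavelength_nm(n_pi, L_nm):
    """HOMO->LUMO wavelength for n_pi free electrons in a 1-D box of length L.

    Electrons fill levels pairwise, so the HOMO is n = n_pi/2 and the
    transition energy is E_{n+1} - E_n = h^2 (n_pi + 1) / (8 m L^2).
    """
    L = L_nm * 1e-9
    dE = h**2 * (n_pi + 1) / (8 * m_e * L**2)  # transition energy, J
    return h * c / dE * 1e9                    # wavelength, nm

# Butadiene (C4H6): 4 pi electrons in a box of ~0.58 nm (an assumed length)
lam = box_transition_wavelength_nm(4, 0.58)
print(f"{lam:.0f} nm")
```

The estimate lands in the ultraviolet, in the right neighbourhood of butadiene's observed absorption, which is exactly the kind of qualitative agreement with sophisticated methods that the paper exploits pedagogically.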

  16. A Sophisticated Architecture Is Indeed Necessary for the Implementation of Health in All Policies but not Enough Comment on "Understanding the Role of Public Administration in Implementing Action on the Social Determinants of Health and Health Inequities".

    Science.gov (United States)

    Breton, Eric

    2016-02-29

    In this commentary, I argue that beyond a sophisticated supportive architecture to facilitate implementation of actions on the social determinants of health (SDOH) and health inequities, the Health in All Policies (HiAP) project faces two main barriers: lack of awareness within policy networks on the social determinants of population health, and a tendency of health actors to neglect investing in other sectors' complex problems. © 2016 by Kerman University of Medical Sciences.

  17. Multi-model ensemble simulations of olive pollen distribution in Europe in 2014: current status and outlook

    Directory of Open Access Journals (Sweden)

    M. Sofiev

    2017-10-01

Full Text Available The paper presents the first modelling experiment of European-scale olive pollen dispersion, analyses the quality of the predictions, and outlines the research needs. An ensemble of six Copernicus Atmosphere Monitoring Service (CAMS) models was run throughout the olive season of 2014, computing the olive pollen distribution. The simulations were compared with observations in eight countries that are members of the European Aeroallergen Network (EAN). Analysis was performed for individual models, the ensemble mean and median, and for a dynamically optimised combination of the ensemble members obtained via fusion of the model predictions with observations. The models, while generally reproducing the olive season of 2014, showed noticeable deviations from both the observations and each other. In particular, the simulated season started about 8 days too early, and for some models the error amounted to almost 2 weeks. For the end of the season, the disagreement between the models and the observations varied from a nearly perfect match up to 2 weeks too late. A series of sensitivity studies carried out to understand the origin of the disagreements revealed the crucial role of ambient temperature and the consistency of its representation by the meteorological models and the heat-sum-based phenological model. In particular, a simple correction to the heat-sum threshold eliminated the shift of the start of the season, but its validity in other years remains to be checked. The short-term features of the concentration time series were reproduced better, suggesting that precipitation events, cold/warm spells and large-scale transport were represented rather well. Ensemble averaging led to more robust results. The best skill scores were obtained with data fusion, which used the previous days' observations to identify the optimal weighting coefficients of the individual model forecasts. Such combinations were tested for the forecasting
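The data-fusion step described above (using recent observations to find optimal weighting coefficients for the ensemble members) can be sketched as a least-squares fit. The six synthetic "models" and the 20-day training window below are assumptions for illustration only, not the CAMS models or the fitting scheme actually used.

```python
import numpy as np

rng = np.random.default_rng(1)
days = 30
truth = 50 + 30 * np.sin(np.linspace(0, 3, days))  # synthetic pollen "season"

# Six imperfect "models": scaled/shifted/noisy copies of the truth
models = np.stack([a * truth + b + rng.normal(0, s, days)
                   for a, b, s in [(1.2, 5, 2), (0.8, -3, 3), (1.0, 10, 1.5),
                                   (0.6, 0, 4), (1.1, -5, 2.5), (0.9, 2, 3.5)]])

# Fit weights on the first 20 days (the "previous days' observations"),
# then apply them to the remaining 10 forecast days.
train, test = slice(0, 20), slice(20, 30)
w, *_ = np.linalg.lstsq(models[:, train].T, truth[train], rcond=None)
fused = w @ models[:, test]

rmse = lambda f, o: np.sqrt(np.mean((f - o) ** 2))
ens_mean = models[:, test].mean(axis=0)
print("fused RMSE:", round(rmse(fused, truth[test]), 2),
      "ensemble-mean RMSE:", round(rmse(ens_mean, truth[test]), 2))
```

On the training window the fitted combination can never do worse than the plain ensemble mean, since equal weights are one point in the search space; the interesting question, as in the paper, is how well the weights generalise to the forecast days.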

  18. A multi-model approach to monitor emissions of CO2 and CO from an urban-industrial complex

    Science.gov (United States)

    Super, Ingrid; Denier van der Gon, Hugo A. C.; van der Molen, Michiel K.; Sterk, Hendrika A. M.; Hensen, Arjan; Peters, Wouter

    2017-11-01

Monitoring urban-industrial emissions is often challenging because observations are scarce and regional atmospheric transport models are too coarse to represent the high spatiotemporal variability in the resulting concentrations. In this paper we apply a new combination of an Eulerian model (Weather Research and Forecasting, WRF, with chemistry) and a Gaussian plume model (Operational Priority Substances - OPS). The modelled mixing ratios are compared to observed CO2 and CO mole fractions at four sites along a transect from an urban-industrial complex (Rotterdam, the Netherlands) towards rural conditions for October-December 2014. Urban plumes are well mixed at our semi-urban location, making this location suited for an integrated emission estimate over the whole study area. The signals at our urban measurement site (with average enhancements of 11 ppm CO2 and 40 ppb CO over the baseline) are highly variable due to the presence of distinct source areas dominated by road traffic/residential heating emissions or industrial activities. This causes different emission signatures that are translated into a large variability in observed ΔCO : ΔCO2 ratios, which can be used to identify dominant source types. We find that WRF-Chem is able to represent synoptic variability in CO2 and CO (e.g. the median CO2 mixing ratio is 9.7 ppm, observed, against 8.8 ppm, modelled), but it fails to reproduce the hourly variability of daytime urban plumes at the urban site (R2 up to 0.05). For the urban site, adding a plume model to the model framework is beneficial to adequately represent plume transport, especially from stack emissions. The explained variance in hourly, daytime CO2 enhancements from point source emissions increases from 30 % with WRF-Chem to 52 % with WRF-Chem in combination with the most detailed OPS simulation. The simulated variability in ΔCO : ΔCO2 ratios decreases drastically from 1.5 to 0.6 ppb ppm⁻¹, which agrees better with the observed standard
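The source-type attribution via ΔCO : ΔCO2 ratios mentioned above can be illustrated with a few invented numbers: subtract a background from the observed mole fractions and take the ratio of the enhancements. The mole fractions and the 9 ppb ppm⁻¹ cut-off below are placeholders for illustration, not values from the paper.

```python
import numpy as np

# Hourly "urban" mole fractions and an assumed background (synthetic values)
co2_urban, co2_bg = np.array([415.0, 421.0, 418.5]), 410.0  # ppm
co_urban, co_bg = np.array([0.20, 0.45, 0.16]), 0.12        # ppm

dco2 = co2_urban - co2_bg           # CO2 enhancement, ppm
dco = (co_urban - co_bg) * 1000.0   # CO enhancement, converted ppm -> ppb

ratios = dco / dco2  # ppb CO per ppm CO2
for r in ratios:
    # Illustrative threshold only: traffic/residential plumes tend to show
    # higher CO:CO2 ratios than efficient industrial combustion.
    label = "traffic/residential-like" if r > 9 else "industrial-like"
    print(f"{r:5.1f} ppb/ppm -> {label}")
```

In practice the ratio is usually derived from a regression of ΔCO on ΔCO2 over many hours rather than from single samples, which damps the influence of background errors.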

  19. Constrained quadratic stabilization of discrete-time uncertain nonlinear multi-model systems using piecewise affine state-feedback

    Directory of Open Access Journals (Sweden)

    Olav Slupphaug

    1999-07-01

    Full Text Available In this paper a method for nonlinear robust stabilization based on solving a bilinear matrix inequality (BMI) feasibility problem is developed. Robustness against model uncertainty is handled. In different non-overlapping regions of the state-space called clusters the plant is assumed to be an element in a polytope whose vertices (local models) are affine systems. In the clusters containing the origin in their closure, the local models are restricted to be linear systems. The clusters cover the region of interest in the state-space. An affine state-feedback is associated with each cluster. By utilizing the affinity of the local models and the state-feedback, a set of linear matrix inequalities (LMIs) combined with a single nonconvex BMI are obtained which, if feasible, guarantee quadratic stability of the origin of the closed-loop. The feasibility problem is attacked by a branch-and-bound based global approach. If the feasibility check is successful, the Lyapunov matrix and the piecewise affine state-feedback are given directly by the feasible solution. Control constraints are shown to be representable by LMIs or BMIs, and an application of the control design method to robustify constrained nonlinear model predictive control is presented. Also, the control design method is applied to a simple example.
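
    The abstract's BMI machinery is for synthesizing the feedback; the underlying stability certificate itself is easy to check numerically. The sketch below (a hypothetical 2x2 discrete-time system, deadbeat feedback, and Lyapunov matrix chosen purely for illustration, not from the paper) verifies quadratic stability of the closed loop x+ = (A + BK)x by testing that P - Acl'·P·Acl is positive definite.

```python
# Discrete-time quadratic stability check: V(x) = x' P x is a Lyapunov
# function for x+ = (A + B K) x iff  P - Acl' P Acl  is positive definite.
# Hypothetical 2x2 example; matrices as nested lists, no external libraries.

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def mat_sub(X, Y):
    return [[X[i][j] - Y[i][j] for j in range(len(X[0]))] for i in range(len(X))]

def transpose(X):
    return [list(r) for r in zip(*X)]

def is_pos_def_2x2(S):
    # Sylvester's criterion for a symmetric 2x2 matrix.
    return S[0][0] > 0 and S[0][0] * S[1][1] - S[0][1] * S[1][0] > 0

A = [[1.0, 0.5], [0.0, 1.0]]   # discretized double integrator (marginally stable)
B = [[0.0], [1.0]]
K = [[-2.0, -2.0]]             # deadbeat state feedback (chosen for illustration)
P = [[6.0, 2.5], [2.5, 2.25]]  # solves the discrete Lyapunov equation P = Acl'P Acl + I

Acl = [[A[i][j] + B[i][0] * K[0][j] for j in range(2)] for i in range(2)]
S = mat_sub(P, mat_mul(transpose(Acl), mat_mul(P, Acl)))
print(is_pos_def_2x2(S))  # True
```

    The paper's contribution is finding K and P jointly (the BMI part); once candidates are in hand, the check above is a plain matrix inequality.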

  20. Application of multi-model control with fuzzy switching to a micro hydro-electrical power plant

    Energy Technology Data Exchange (ETDEWEB)

    Salhi, Issam; Doubabi, Said [Laboratory of Electric Systems and Telecommunications (LEST), Faculty of Sciences and Technologies of Marrakesh, Cadi Ayyad University, BP 549, Av Abdelkarim Elkhattabi, Gueliz, Marrakesh (Morocco); Essounbouli, Najib; Hamzaoui, Abdelaziz [CReSTIC, Reims University, 9, rue de Quebec B.P. 396, F-10026 Troyes cedex (France)

    2010-09-15

    Modelling hydraulic turbine generating systems is not an easy task because they are non-linear and uncertain, with time-varying operating points. One way to overcome this problem is to use Takagi-Sugeno (TS) models, which make it possible to apply tools from linear control theory, since such models are composed of linear models connected by a fuzzy activation function. This paper presents an approach to model and control a micro hydro power plant considered as a non-linear system using TS fuzzy systems. A TS fuzzy system with local models is used to obtain a global model of the studied plant. Then, to combine efficiency and simplicity of design, PI controllers are synthesised for each considered operating point and used as the rule conclusions of an electrical-load TS fuzzy controller. The latter ensures global stability and the desired performance despite changes of operating point. The proposed approach (model and controller) is tested on a laboratory prototype, where the obtained results show its efficiency and its capability to ensure good performance despite the non-linear nature of the plant. (author)
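
    The core mechanism — local PI gains blended by fuzzy activation functions over the operating point — can be sketched in a few lines. The membership shapes, load ranges, and gain values below are invented for illustration; they are not the paper's tuned controller.

```python
# Sketch of Takagi-Sugeno blending of local PI controllers: each operating
# point (here, electrical load) has its own PI gain pair, and a triangular
# fuzzy activation function weights the rule conclusions.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Two local models: low load (around 2 kW) and high load (around 8 kW).
RULES = [
    {"mf": (0.0, 2.0, 8.0), "kp": 1.2, "ki": 0.4},   # low-load PI (assumed)
    {"mf": (2.0, 8.0, 10.0), "kp": 0.6, "ki": 0.15}, # high-load PI (assumed)
]

def blended_gains(load_kw):
    w = [tri(load_kw, *r["mf"]) for r in RULES]
    s = sum(w) or 1.0
    kp = sum(wi * r["kp"] for wi, r in zip(w, RULES)) / s
    ki = sum(wi * r["ki"] for wi, r in zip(w, RULES)) / s
    return kp, ki

print(blended_gains(2.0))   # pure low-load rule
print(blended_gains(5.0))   # smooth mixture of both rules
```

    Between operating points the effective gains interpolate smoothly, which is what lets the scheme keep performance as the load changes.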

  1. A multi-model approach to monitor emissions of CO2 and CO from an urban–industrial complex

    Directory of Open Access Journals (Sweden)

    I. Super

    2017-11-01

    Full Text Available Monitoring urban–industrial emissions is often challenging because observations are scarce and regional atmospheric transport models are too coarse to represent the high spatiotemporal variability in the resulting concentrations. In this paper we apply a new combination of an Eulerian model (Weather Research and Forecast, WRF, with chemistry) and a Gaussian plume model (Operational Priority Substances – OPS). The modelled mixing ratios are compared to observed CO2 and CO mole fractions at four sites along a transect from an urban–industrial complex (Rotterdam, the Netherlands) towards rural conditions for October–December 2014. Urban plumes are well-mixed at our semi-urban location, making this location suited for an integrated emission estimate over the whole study area. The signals at our urban measurement site (with average enhancements of 11 ppm CO2 and 40 ppb CO over the baseline) are highly variable due to the presence of distinct source areas dominated by road traffic/residential heating emissions or industrial activities. This causes different emission signatures that are translated into a large variability in observed ΔCO : ΔCO2 ratios, which can be used to identify dominant source types. We find that WRF-Chem is able to represent synoptic variability in CO2 and CO (e.g. the median CO2 mixing ratio is 9.7 ppm, observed, against 8.8 ppm, modelled), but it fails to reproduce the hourly variability of daytime urban plumes at the urban site (R2 up to 0.05). For the urban site, adding a plume model to the model framework is beneficial to adequately represent plume transport especially from stack emissions. The explained variance in hourly, daytime CO2 enhancements from point source emissions increases from 30 % with WRF-Chem to 52 % with WRF-Chem in combination with the most detailed OPS simulation. The simulated variability in ΔCO : ΔCO2 ratios decreases drastically from 1.5 to 0.6 ppb ppm−1, which agrees better with the observed standard
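
    The ΔCO : ΔCO2 ratio used in the abstract for source identification is simply the ratio of enhancements over a baseline. The helper below computes it; the classification cutoff is hypothetical (the paper only states that combustion-rich traffic/heating sources and efficient industrial stacks have different signatures, not a specific threshold).

```python
# Illustrative source-type screening from dCO : dCO2 enhancement ratios.
# Enhancements are measured over a background (baseline) mole fraction.

def dco_dco2_ratio(co_ppb, co2_ppm, co_base_ppb, co2_base_ppm):
    d_co2 = co2_ppm - co2_base_ppm
    if d_co2 <= 0:
        return None  # no CO2 enhancement; ratio undefined
    return (co_ppb - co_base_ppb) / d_co2   # ppb per ppm

def dominant_source(ratio, traffic_cutoff=4.0):
    # Traffic/residential heating emits more CO per unit CO2 than large,
    # efficient industrial stacks (cutoff value is an assumption).
    if ratio is None:
        return "undefined"
    return "traffic/heating" if ratio >= traffic_cutoff else "industrial"

# Enhancements matching the abstract's urban-site averages: 40 ppb CO, 11 ppm CO2.
r = dco_dco2_ratio(co_ppb=440.0, co2_ppm=411.0, co_base_ppb=400.0, co2_base_ppm=400.0)
print(r, dominant_source(r))
```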

  2. Regional impacts of climate change and atmospheric CO2 on future ocean carbon uptake: a multi model linear feedback analysis

    International Nuclear Information System (INIS)

    Roy, Tilla; Bopp, Laurent; Gehlen, Marion; Cadule, Patricia; Schneider, Birgit; Frolicher, Thomas L.; Segschneider, Joachim; Tjiputra, Jerry; Heinze, Christoph; Joos, Fortunat

    2011-01-01

    The increase in atmospheric CO2 over this century depends on the evolution of the oceanic air-sea CO2 uptake, which will be driven by the combined response to rising atmospheric CO2 itself and climate change. Here, the future oceanic CO2 uptake is simulated using an ensemble of coupled climate-carbon cycle models. The models are driven by CO2 emissions from historical data and the Special Report on Emissions Scenarios (SRES) A2 high-emission scenario. A linear feedback analysis successfully separates the regional future (2010-2100) oceanic CO2 uptake into a CO2-induced component, due to rising atmospheric CO2 concentrations, and a climate-induced component, due to global warming. The models capture the observation-based magnitude and distribution of anthropogenic CO2 uptake. The distributions of the climate-induced component are broadly consistent between the models, with reduced CO2 uptake in the sub-polar Southern Ocean and the equatorial regions, owing to decreased CO2 solubility; and reduced CO2 uptake in the mid-latitudes, owing to decreased CO2 solubility and increased vertical stratification. The magnitude of the climate-induced component is sensitive to local warming in the southern extra-tropics, to large freshwater fluxes in the extra-tropical North Atlantic Ocean, and to small changes in the CO2 solubility in the equatorial regions. In key anthropogenic CO2 uptake regions, the climate-induced component offsets the CO2-induced component at a constant proportion up until the end of this century. This amounts to approximately 50% in the northern extra-tropics and 25% in the southern extra-tropics and equatorial regions. Consequently, the detection of climate change impacts on anthropogenic CO2 uptake may be difficult without monitoring additional tracers, such as oxygen. (authors)
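
    The linear feedback separation amounts to regressing the uptake anomaly on two drivers, atmospheric CO2 rise and warming, so that beta·dCO2 is the CO2-induced component and gamma·dT the climate-induced one. The sketch below recovers the two coefficients from synthetic data (all values invented; the real analysis works on gridded model output).

```python
# Linear feedback separation sketch: uptake ~ beta * dCO2 + gamma * dT.
# Coefficients recovered with a 2x2 normal-equations least-squares fit.

def fit_two_regressors(x1, x2, y):
    s11 = sum(a * a for a in x1)
    s22 = sum(a * a for a in x2)
    s12 = sum(a * b for a, b in zip(x1, x2))
    sy1 = sum(a * b for a, b in zip(x1, y))
    sy2 = sum(a * b for a, b in zip(x2, y))
    det = s11 * s22 - s12 * s12
    b1 = (sy1 * s22 - sy2 * s12) / det
    b2 = (s11 * sy2 - s12 * sy1) / det
    return b1, b2

dco2 = [10.0, 20.0, 30.0, 40.0]   # atmospheric CO2 rise (ppm), synthetic
dtemp = [0.1, 0.3, 0.4, 0.7]      # regional warming (K), synthetic
beta_true, gamma_true = 0.05, -2.0  # uptake sensitivities (assumed)
uptake = [beta_true * c + gamma_true * t for c, t in zip(dco2, dtemp)]

beta, gamma = fit_two_regressors(dco2, dtemp, uptake)
print(round(beta, 6), round(gamma, 6))
```

    A negative gamma, as here, reproduces the abstract's finding that warming offsets part of the CO2-driven uptake.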

  3. Regional impacts of climate change and atmospheric CO2 on future ocean carbon uptake: a multi model linear feedback analysis

    International Nuclear Information System (INIS)

    Roy, Tilla; Bopp, Laurent; Gehlen, Marion; Cadule, Patricia

    2011-01-01

    The increase in atmospheric CO2 over this century depends on the evolution of the oceanic air-sea CO2 uptake, which will be driven by the combined response to rising atmospheric CO2 itself and climate change. Here, the future oceanic CO2 uptake is simulated using an ensemble of coupled climate-carbon cycle models. The models are driven by CO2 emissions from historical data and the Special Report on Emissions Scenarios (SRES) A2 high-emission scenario. A linear feedback analysis successfully separates the regional future (2010-2100) oceanic CO2 uptake into a CO2-induced component, due to rising atmospheric CO2 concentrations, and a climate-induced component, due to global warming. The models capture the observation-based magnitude and distribution of anthropogenic CO2 uptake. The distributions of the climate-induced component are broadly consistent between the models, with reduced CO2 uptake in the sub-polar Southern Ocean and the equatorial regions, owing to decreased CO2 solubility; and reduced CO2 uptake in the mid-latitudes, owing to decreased CO2 solubility and increased vertical stratification. The magnitude of the climate-induced component is sensitive to local warming in the southern extra-tropics, to large freshwater fluxes in the extra-tropical North Atlantic Ocean, and to small changes in the CO2 solubility in the equatorial regions. In key anthropogenic CO2 uptake regions, the climate-induced component offsets the CO2-induced component at a constant proportion up until the end of this century. This amounts to approximately 50% in the northern extra-tropics and 25% in the southern extra-tropics and equatorial regions. Consequently, the detection of climate change impacts on anthropogenic CO2 uptake may be difficult without monitoring additional tracers, such as oxygen. (authors)

  4. Energy technology roll-out for climate change mitigation: A multi-model study for Latin America

    Energy Technology Data Exchange (ETDEWEB)

    van der Zwaan, Bob; Kober, Tom; Calderon, Silvia; Clarke, Leon; Daenzer, Katie; Kitous, Alban; Labriet, Maryse; Lucena, André F. P.; Octaviano, Claudia; Di Sbroiavacca, Nicolas

    2016-05-01

    In this paper we investigate opportunities for energy technology deployment under climate change mitigation efforts in Latin America. Through several carbon tax and CO2 abatement scenarios until 2050 we analyze what resources and technologies, notably for electricity generation, could be cost-optimal in the energy sector to significantly reduce CO2 emissions in the region. By way of a sensitivity test we perform a cross-model comparison study and inspect whether robust conclusions can be drawn across results from different models as well as different types of models (general versus partial equilibrium). Given the abundance of biomass resources in Latin America, biomass plays a large role in energy supply in all scenarios we inspect. This is especially true for stringent climate policy scenarios, for instance because the use of biomass in power plants in combination with CCS can yield negative CO2 emissions. We find that hydropower, which today contributes about 800 TWh to overall power production in Latin America, could be significantly expanded to meet the climate policies we investigate, typically by about 50%, but potentially by as much as 75%. According to all models, electricity generation increases exponentially with a two- to three-fold expansion between 2010 and 2050. We find that in our climate policy scenarios renewable energy overall expands typically at double-digit growth rates annually, but there is substantial spread in model results for specific options such as wind and solar power: the climate policies that we simulate raise wind power in 2050 on average to half the production level that hydropower provides today, while they raise solar power to either a substantially higher or a much lower level than hydropower supplies at present, depending on which model is used. Also for CCS we observe large diversity in model outcomes, which reflects the uncertainties with regard to its future implementation potential as a result of
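
    A quick plausibility check on the abstract's growth figures (a back-of-envelope sketch, not from the paper): a two- to three-fold expansion of total generation over 2010-2050 corresponds to a compound annual growth rate of only a few percent, consistent with "double-digit" annual growth applying to individual renewables starting from a small base rather than to total generation.

```python
# Compound annual growth rate implied by an N-fold expansion over 40 years.

def cagr(start, end, years):
    return (end / start) ** (1.0 / years) - 1.0

for factor in (2.0, 3.0):
    print(f"{factor:.0f}x over 40 years -> {cagr(1.0, factor, 40) * 100:.2f}%/yr")
```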

  5. Assessment and economic valuation of air pollution impacts on human health over Europe and the United States as calculated by a multi-model ensemble in the framework of AQMEII3

    Science.gov (United States)

    The impact of air pollution on human health and the associated external costs in Europe and the United States (US) for the year 2010 are modeled by a multi-model ensemble of regional models in the frame of the third phase of the Air Quality Modelling Evaluation International Init...

  6. Otolith reading and multi-model inference for improved estimation of age and growth in the gilthead seabream Sparus aurata (L.)

    Science.gov (United States)

    Mercier, Lény; Panfili, Jacques; Paillon, Christelle; N'diaye, Awa; Mouillot, David; Darnaude, Audrey M.

    2011-05-01

    Accurate knowledge of fish age and growth is crucial for species conservation and management of exploited marine stocks. In exploited species, age estimation based on otolith reading is routinely used for building growth curves that are used to implement fishery management models. However, the universal fit of the von Bertalanffy growth function (VBGF) on data from commercial landings can lead to uncertainty in growth parameter inference, preventing accurate comparison of growth-based life-history traits between fish populations. In the present paper, we used a comprehensive annual sample of wild gilthead seabream (Sparus aurata L.) in the Gulf of Lions (France, NW Mediterranean) to test a methodology improving growth modelling for exploited fish populations. After validating the timing for otolith annual increment formation for all life stages, a comprehensive set of growth models (including VBGF) was fitted to the obtained age-length data, used as a whole or sub-divided between group 0 individuals and those coming from commercial landings (ages 1-6). Comparisons of growth model accuracy based on the Akaike Information Criterion allowed assessment of the best model for each dataset and, when no model correctly fitted the data, a multi-model inference (MMI) based on model averaging was carried out. The results provided evidence that growth parameters inferred with the VBGF must be used with high caution. Indeed, the VBGF turned out to be among the least accurate for growth prediction irrespective of the dataset, and its fits to the whole population, the juvenile or the adult datasets provided different growth parameters. The best models for growth prediction were the Tanaka model, for group 0 juveniles, and the MMI, for the older fish, confirming that growth differs substantially between juveniles and adults. All asymptotic models failed to correctly describe the growth of adult S. aurata, probably because of the poor representation of old individuals in the dataset. Multi-model
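
    The AIC-based multi-model inference described above can be sketched end to end: fit competing growth models, convert AIC differences to Akaike weights, and let the weights drive model averaging. The age-length data, the coarse grid fit, and the second candidate model below are all invented for illustration (the paper compares a larger model set fitted properly).

```python
# AIC-based multi-model comparison on synthetic age-length data: a
# von Bertalanffy fit (coarse grid search) versus a straight line.
import math

ages = [1, 2, 3, 4, 5, 6]
lengths = [18.0, 26.0, 31.0, 34.5, 36.5, 38.0]  # synthetic, saturating

def rss(pred):
    return sum((p - o) ** 2 for p, o in zip(pred, lengths))

def aic(rss_val, n, k):
    # Least-squares AIC: n * ln(RSS/n) + 2k, with k fitted parameters.
    return n * math.log(rss_val / n) + 2 * k

# Model 1: VBGF L(t) = Linf * (1 - exp(-K*(t - t0))), coarse grid fit.
best_vb = None
for linf in [38 + 0.5 * i for i in range(13)]:          # 38 .. 44
    for K in [0.2 + 0.05 * i for i in range(13)]:       # 0.2 .. 0.8
        for t0 in [-1.0 + 0.1 * i for i in range(21)]:  # -1 .. 1
            pred = [linf * (1 - math.exp(-K * (t - t0))) for t in ages]
            r = rss(pred)
            if best_vb is None or r < best_vb[0]:
                best_vb = (r, linf, K, t0)

# Model 2: straight line L(t) = a + b*t, closed-form least squares.
n = len(ages)
xbar, ybar = sum(ages) / n, sum(lengths) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(ages, lengths)) / \
    sum((x - xbar) ** 2 for x in ages)
a = ybar - b * xbar
rss_lin = rss([a + b * t for t in ages])

aics = [aic(best_vb[0], n, 3), aic(rss_lin, n, 2)]
deltas = [v - min(aics) for v in aics]
w = [math.exp(-0.5 * d) for d in deltas]
w = [wi / sum(w) for wi in w]
print("Akaike weights (VBGF, linear):", [round(wi, 3) for wi in w])
```

    With weights in hand, an MMI prediction is just the weight-averaged prediction of the candidate models, which is what the paper falls back on when no single model fits.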

  7. Measured and simulated effects of sophisticated drainage techniques on groundwater level and runoff hydrochemistry in areas of boreal acid sulphate soils

    Directory of Open Access Journals (Sweden)

    I. BÄRLUND

    2008-12-01

    Full Text Available To abate the environmental problems caused by the severe acidity and high metal concentrations in rivers draining acid sulphate (AS) soils of Western Finland, control drainage (CD) and lime filter drainage (LFD), and their combination, were investigated. The effectiveness of these best management practices (BMPs) on drainage water quality was studied on plot scale in two locations. In Ilmajoki, where the sulphidic materials are more than 2 m below the soil surface, CD efficiently reduced the concentrations of sulphate, aluminium, manganese and iron and to some extent also increased the pH of the drainage waters. LFD, in contrast, effectively reduced the drainage water acidity and raised the pH level. Decrease of the groundwater level owing to strong evapotranspiration in summer could, however, not be properly prevented by CD. In Mustasaari, where sulphidic materials were as shallow as 1 m below the soil surface, the positive effects of LFD recognised in Ilmajoki were hardly seen. This shows that the tested BMPs work properly, and can thus be recommended, for intensively artificially drained AS soils like in Ilmajoki where most of the acidity has already been transported to watercourses. LFD can, however, not be recommended for as yet poorly leached and thus particularly problematic AS soils like in Mustasaari. This is, of course, a drawback of the tested BMP, as it is not effective for the soils which would need it most. The field data were tentatively utilised to test the performance of the HAPSU (Ionic Flow Model for Acid Sulphate Soils) simulation model developed to estimate the loads of harmful substances from AS soils.

  8. Making of epistemologically sophisticated physics teachers: A cross-sequential study of epistemological progression from preservice to in-service teachers

    Science.gov (United States)

    Ding, Lin; Zhang, Ping

    2016-12-01

    Previous literature on learners' epistemological beliefs about physics has almost exclusively focused on analysis of university classroom instruction and its effects on students' views. However, little is known about other populations, or about how factors other than classroom instruction shape learners' epistemologies. In this study, we used a cross-sequential method, combining both longitudinal and cross-sectional designs, to investigate an epistemological progression trend from preservice to in-service teachers. Six cohorts of participants were studied, who either were then attending or had completed an undergraduate teacher preparation program in physics at a major Chinese university. These cohorts were incoming freshmen, end-of-year freshmen, end-of-year sophomores, end-of-year juniors, end-of-year seniors, and 1st-year high school physics teachers who were about to enter the 2nd year of teaching. We used the Colorado Learning Attitudes about Science Survey (CLASS) as both a pretest and a post-test to gauge the changes in the participants' epistemological views over an entire academic year. Follow-up interviews were also conducted to explore factors responsible for such changes. Results showed that the epistemological trend as measured by CLASS did not increase monotonically. Instead, there was a decrease in the epistemological trend among the incoming freshmen in their first year undergraduate studies, followed by a long stasis until the end of the senior year. Then, there was a rebound for the end-of-year seniors in their 1st year of teaching, followed by another plateau. Interviews revealed that the competitive learning environment, increased content difficulty, and unfamiliar pedagogies in college were major factors that negatively influenced incoming freshmen's views about physics. Conversely, a role change from student to teacher and relatively easy content in high school positively impacted end-of-year seniors' views about physics and learning.

  9. Making of epistemologically sophisticated physics teachers: A cross-sequential study of epistemological progression from preservice to in-service teachers

    Directory of Open Access Journals (Sweden)

    Lin Ding

    2016-11-01

    Full Text Available Previous literature on learners’ epistemological beliefs about physics has almost exclusively focused on analysis of university classroom instruction and its effects on students’ views. However, little is known about other populations, or about how factors other than classroom instruction shape learners’ epistemologies. In this study, we used a cross-sequential method, combining both longitudinal and cross-sectional designs, to investigate an epistemological progression trend from preservice to in-service teachers. Six cohorts of participants were studied, who either were then attending or had completed an undergraduate teacher preparation program in physics at a major Chinese university. These cohorts were incoming freshmen, end-of-year freshmen, end-of-year sophomores, end-of-year juniors, end-of-year seniors, and 1st-year high school physics teachers who were about to enter the 2nd year of teaching. We used the Colorado Learning Attitudes about Science Survey (CLASS) as both a pretest and a post-test to gauge the changes in the participants’ epistemological views over an entire academic year. Follow-up interviews were also conducted to explore factors responsible for such changes. Results showed that the epistemological trend as measured by CLASS did not increase monotonically. Instead, there was a decrease in the epistemological trend among the incoming freshmen in their first year undergraduate studies, followed by a long stasis until the end of the senior year. Then, there was a rebound for the end-of-year seniors in their 1st year of teaching, followed by another plateau. Interviews revealed that the competitive learning environment, increased content difficulty, and unfamiliar pedagogies in college were major factors that negatively influenced incoming freshmen’s views about physics. Conversely, a role change from student to teacher and relatively easy content in high school positively impacted end-of-year seniors’ views about physics and

  10. Climate change effects on wildland fire risk in the Northeastern and Great Lakes states predicted by a downscaled multi-model ensemble

    Science.gov (United States)

    Kerr, Gaige Hunter; DeGaetano, Arthur T.; Stoof, Cathelijne R.; Ward, Daniel

    2018-01-01

    This study is among the first to investigate wildland fire risk in the Northeastern and the Great Lakes states under a changing climate. We use a multi-model ensemble (MME) of regional climate models from the Coordinated Regional Downscaling Experiment (CORDEX) together with the Canadian Forest Fire Weather Index System (CFFWIS) to understand changes in wildland fire risk through differences between historical simulations and future projections. Our results are relatively homogeneous across the focus region and indicate modest increases in the magnitude of fire weather indices (FWIs) during northern hemisphere summer. The most pronounced changes occur in the date of the initialization of CFFWIS and peak of the wildland fire season, which in the future are trending earlier in the year, and in the significant increases in the length of high-risk episodes, defined by the number of consecutive days with FWIs above the current 95th percentile. Further analyses show that these changes are most closely linked to expected changes in the focus region's temperature and precipitation. These findings relate to the current understanding of particulate matter vis-à-vis wildfires and have implications for human health and local and regional changes in radiative forcings. When considering current fire management strategies which could be challenged by increasing wildland fire risk, fire management agencies could adopt new strategies to improve awareness, prevention, and resilience to mitigate potential impacts to critical infrastructure and population.
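
    The "high-risk episode" metric above is concrete enough to sketch: find the series' 95th percentile and measure the longest run of consecutive days above it. The daily FWI values below are synthetic, and the linear-interpolation percentile is one common convention (the paper does not specify its estimator).

```python
# Longest run of consecutive days with FWI above the 95th percentile.

def percentile(values, q):
    # Linear interpolation between closest ranks (one common convention).
    s = sorted(values)
    idx = (len(s) - 1) * q
    lo = int(idx)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (idx - lo)

def longest_high_risk_run(fwi, q=0.95):
    threshold = percentile(fwi, q)
    best = run = 0
    for v in fwi:
        run = run + 1 if v > threshold else 0
        best = max(best, run)
    return threshold, best

fwi = [5, 8, 12, 9, 7, 6, 10, 4, 3, 11,
       13, 6, 5, 9, 8, 7, 6, 10, 12, 14,
       9, 8, 7, 11, 13, 12, 10, 9, 8, 7,
       6, 5, 30, 31, 33, 12, 9, 8, 7, 6]   # synthetic daily FWI series
thr, days = longest_high_risk_run(fwi)
print(f"95th percentile = {thr:.2f}, longest episode = {days} days")
```

    In the study, the threshold is fixed from the historical (current-climate) simulation and future series are measured against it, so projected episode lengths can grow even if future percentiles shift.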

  11. Future changes in the climatology of the Great Plains low-level jet derived from fine resolution multi-model simulations.

    Science.gov (United States)

    Tang, Ying; Winkler, Julie; Zhong, Shiyuan; Bian, Xindi; Doubler, Dana; Yu, Lejiang; Walters, Claudia

    2017-07-10

    The southerly Great Plains low-level jet (GPLLJ) is one of the most significant circulation features of the central U.S., linking large-scale atmospheric circulation with the regional climate. GPLLJs transport heat and moisture, contribute to thunderstorm and severe weather formation, provide a corridor for the springtime migration of birds and insects, enhance wind energy availability, and disperse air pollution. We assess future changes in GPLLJ frequency using an eight-member ensemble of dynamically downscaled climate simulations for the mid-21st century. Nocturnal GPLLJ frequency is projected to increase in the southern plains in spring and in the central plains in summer, whereas current climatological patterns persist into the future for daytime and cool season GPLLJs. The relationship between future GPLLJ frequency and the extent and strength of anticyclonic airflow over eastern North America varies with season. Most simulations project a westward shift of anticyclonic airflow in summer, but uncertainty is larger for spring with only half of the simulations suggesting a westward expansion. The choice of regional climate model and the driving lateral boundary conditions have a large influence on the projected future changes in GPLLJ frequency and highlight the importance of multi-model ensembles to estimate the uncertainty surrounding the future GPLLJ climatology.

  12. Overview of the Special Issue: A Multi-Model Framework to Achieve Consistent Evaluation of Climate Change Impacts in the United States

    Energy Technology Data Exchange (ETDEWEB)

    Waldhoff, Stephanie T.; Martinich, Jeremy; Sarofim, Marcus; DeAngelo, B. J.; McFarland, Jim; Jantarasami, Lesley; Shouse, Kate C.; Crimmins, Allison; Ohrel, Sara; Li, Jia

    2015-07-01

    The Climate Change Impacts and Risk Analysis (CIRA) modeling exercise is a unique contribution to the scientific literature on climate change impacts, economic damages, and risk analysis that brings together multiple, national-scale models of impacts and damages in an integrated and consistent fashion to estimate climate change impacts, damages, and the benefits of greenhouse gas (GHG) mitigation actions in the United States. The CIRA project uses three consistent socioeconomic, emissions, and climate scenarios across all models to estimate the benefits of GHG mitigation policies: a Business As Usual (BAU) scenario and two policy scenarios with radiative forcing (RF) stabilization targets of 4.5 W/m2 and 3.7 W/m2 in 2100. CIRA was also designed to specifically examine the sensitivity of results to uncertainties around climate sensitivity and differences in model structure. The goals of the CIRA project are to 1) build a multi-model framework to produce estimates of multiple risks and impacts in the U.S., 2) determine to what degree risks and damages across sectors may be lowered from the BAU to the policy scenarios, 3) evaluate key sources of uncertainty along the causal chain, and 4) provide information for multiple audiences and clearly communicate the risks and damages of climate change and the potential benefits of mitigation. This paper describes the motivations, goals, and design of the CIRA modeling exercise and introduces the subsequent papers in this special issue.

  13. Skill of real-time operational forecasts with the APCC multi-model ensemble prediction system during the period 2008-2015

    Science.gov (United States)

    Min, Young-Mi; Kryjov, Vladimir N.; Oh, Sang Myeong; Lee, Hyun-Ju

    2017-12-01

    This paper assesses the real-time 1-month lead forecasts of 3-month (seasonal) mean temperature and precipitation issued on a monthly basis by the Asia-Pacific Economic Cooperation Climate Center (APCC) for 2008-2015 (8 years, 96 forecasts). It shows the current level of performance of the APCC operational multi-model prediction system. The skill of the APCC forecasts depends strongly on season and region: it is higher for the tropics and boreal winter than for the extratropics and boreal summer, owing to direct effects of, and remote teleconnections from, boundary forcings. There is a negative relationship between the forecast skill and its interseasonal variability for both variables, and the forecast skill for precipitation is more seasonally and regionally dependent than that for temperature. The APCC operational probabilistic forecasts during this period show a cold bias (underforecasting of above-normal temperature and overforecasting of below-normal temperature), underestimating the long-term warming trend. A wet bias is evident for precipitation, particularly in the extratropical regions. The skill of both temperature and precipitation forecasts strongly depends upon the ENSO strength. In particular, the highest forecast skill, noted in the 2015/2016 boreal winter, is associated with the strong forcing of an extreme El Niño event, while the relatively low skill is associated with the transition and/or continuous ENSO-neutral phases of 2012-2014. As a result, the skill of the real-time forecasts for the boreal winter season is higher than that of the hindcasts. However, on average, the level of forecast skill during the period 2008-2015 is similar to that of the hindcasts.
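
    Probabilistic seasonal forecasts of the kind discussed here are typically verified with the ranked probability skill score (RPSS) against a climatological reference. The sketch below computes RPSS for tercile (below/near/above normal) forecasts; the forecast probabilities and outcomes are synthetic.

```python
# Ranked probability skill score (RPSS) for tercile category forecasts.

def rps(forecast_probs, observed_category):
    # Ranked probability score: squared error of cumulative probabilities.
    obs = [1.0 if i == observed_category else 0.0
           for i in range(len(forecast_probs))]
    score, cf, co = 0.0, 0.0, 0.0
    for f, o in zip(forecast_probs, obs):
        cf += f
        co += o
        score += (cf - co) ** 2
    return score

def rpss(forecasts, observations, climatology=(1/3, 1/3, 1/3)):
    rps_f = sum(rps(f, o) for f, o in zip(forecasts, observations))
    rps_c = sum(rps(climatology, o) for o in observations)
    return 1.0 - rps_f / rps_c   # 1 = perfect, 0 = no skill over climatology

# Three seasons: (below, near, above normal) probabilities and observed category.
forecasts = [(0.2, 0.3, 0.5), (0.5, 0.3, 0.2), (0.1, 0.3, 0.6)]
observations = [2, 0, 2]   # above, below, above normal
print(round(rpss(forecasts, observations), 3))
```

    A positive RPSS means the forecast beats the equal-odds climatological forecast; the cold/wet biases noted in the abstract would show up as systematically misplaced probability mass.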

  14. Multi-model analysis of terrestrial carbon cycles in Japan: reducing uncertainties in model outputs among different terrestrial biosphere models using flux observations

    Science.gov (United States)

    Ichii, K.; Suzuki, T.; Kato, T.; Ito, A.; Hajima, T.; Ueyama, M.; Sasai, T.; Hirata, R.; Saigusa, N.; Ohtani, Y.; Takagi, K.

    2009-08-01

    Terrestrial biosphere models show large uncertainties when simulating carbon and water cycles, and reducing these uncertainties is a priority for developing more accurate estimates of both terrestrial ecosystem statuses and future climate changes. To reduce uncertainties and improve the understanding of these carbon budgets, we investigated the ability of flux datasets to improve model simulations and reduce variabilities among multi-model outputs of terrestrial biosphere models in Japan. Using 9 terrestrial biosphere models (Support Vector Machine-based regressions, TOPS, CASA, VISIT, Biome-BGC, DAYCENT, SEIB, LPJ, and TRIFFID), we conducted two simulations: (1) point simulations at four flux sites in Japan and (2) spatial simulations for Japan with a default model (based on original settings) and an improved model (based on calibration using flux observations). Generally, models using default model settings showed large deviations in model outputs from observation with large model-by-model variability. However, after we calibrated the model parameters using flux observations (GPP, RE and NEP), most models successfully simulated seasonal variations in the carbon cycle, with less variability among models. We also found that interannual variations in the carbon cycle are mostly consistent among models and observations. Spatial analysis also showed a large reduction in the variability among model outputs, and model calibration using flux observations significantly improved the model outputs. These results show that to reduce uncertainties among terrestrial biosphere models, we need to conduct careful validation and calibration with available flux observations. Flux observation data significantly improved terrestrial biosphere models, not only on a point scale but also on spatial scales.
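
    The calibration step described above, adjusting model parameters until simulated fluxes match flux-tower observations, can be illustrated with a toy example: choose the parameter value minimizing RMSE at a site. The one-parameter light-use-efficiency model and all numbers below are invented for illustration; they are not taken from any of the nine models.

```python
# Toy calibration of a single parameter against "observed" GPP.
import math

par = [200.0, 400.0, 600.0, 800.0]   # absorbed light, synthetic units
gpp_obs = [3.1, 6.2, 8.8, 12.1]      # synthetic flux observations

def gpp_model(lue, par_t):
    return lue * par_t               # minimal linear light-use-efficiency model

def rmse(lue):
    errs = [(gpp_model(lue, p) - o) ** 2 for p, o in zip(par, gpp_obs)]
    return math.sqrt(sum(errs) / len(errs))

# Default (uncalibrated) parameter versus a simple grid calibration.
default_lue = 0.010
calibrated_lue = min((0.005 + 0.0005 * i for i in range(21)), key=rmse)
print(f"default RMSE = {rmse(default_lue):.3f}")
print(f"calibrated LUE = {calibrated_lue:.4f}, RMSE = {rmse(calibrated_lue):.3f}")
```

    The abstract's finding is essentially this effect at scale: after site-level calibration, both the model-observation mismatch and the spread among the nine models shrink.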

  15. Using constructed analogs to improve the skill of National Multi-Model Ensemble March–April–May precipitation forecasts in equatorial East Africa

    International Nuclear Information System (INIS)

    Shukla, Shraddhanand; Funk, Christopher; Hoell, Andrew

    2014-01-01

    In this study we implement and evaluate a simple ‘hybrid’ forecast approach that uses constructed analogs (CA) to improve the National Multi-Model Ensemble’s (NMME) March–April–May (MAM) precipitation forecasts over equatorial eastern Africa (hereafter referred to as EA, 2°S to 8°N and 36°E to 46°E). Due to recent declines in MAM rainfall, increases in population, land degradation, and limited technological advances, this region has become an epicenter of food insecurity. Timely and skillful precipitation forecasts for EA could help decision makers better manage their limited resources, mitigate socio-economic losses, and potentially save human lives. The ‘hybrid approach’ described in this study uses the CA method to translate dynamical precipitation and sea surface temperature (SST) forecasts over the Indian and Pacific Oceans (specifically 30°S to 30°N and 30°E to 270°E) into terrestrial MAM precipitation forecasts over the EA region. In doing so, this approach benefits from the post-1999 teleconnection that exists between precipitation and SSTs over the Indian and tropical Pacific Oceans (Indo-Pacific) and EA MAM rainfall. The coupled atmosphere-ocean dynamical forecasts used in this study were drawn from the NMME. We demonstrate that while the skill of the NMME models' MAM precipitation forecasts (initialized in February) over the EA region itself is negligible, the ranked probability skill score of hybrid CA forecasts based on Indo-Pacific NMME precipitation and SST forecasts reaches up to 0.45. (letter)
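
    The constructed-analog idea reduces to a least-squares problem: approximate the current predictor pattern as a linear combination of historical patterns, then apply the fitted weights to the predictand observed in those historical years. The tiny fields and rainfall values below are synthetic stand-ins for the Indo-Pacific SST/precipitation fields and EA MAM rainfall.

```python
# Minimal constructed-analog sketch with two historical analogs.

analog_fields = [
    [1.0, 0.0, 2.0, 1.0],   # historical predictor pattern, year A (synthetic)
    [0.0, 1.0, 1.0, 2.0],   # historical predictor pattern, year B (synthetic)
]
analog_rainfall = [80.0, 120.0]   # EA MAM rainfall in those years (mm, synthetic)

target_field = [0.5, 0.5, 1.5, 1.5]   # current forecast pattern (synthetic)

# Least-squares weights via the 2x2 normal equations.
g11 = sum(a * a for a in analog_fields[0])
g22 = sum(a * a for a in analog_fields[1])
g12 = sum(a * b for a, b in zip(analog_fields[0], analog_fields[1]))
b1 = sum(a * t for a, t in zip(analog_fields[0], target_field))
b2 = sum(a * t for a, t in zip(analog_fields[1], target_field))
det = g11 * g22 - g12 * g12
w1 = (b1 * g22 - b2 * g12) / det
w2 = (g11 * b2 - g12 * b1) / det

# Apply the same weights to the analogs' rainfall to get the hybrid forecast.
forecast_rain = w1 * analog_rainfall[0] + w2 * analog_rainfall[1]
print(w1, w2, forecast_rain)
```

    The real method uses many analog years and high-dimensional fields (often in a truncated EOF space), but the weight-transfer step is exactly this.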

  16. High accuracy navigation information estimation for inertial system using the multi-model EKF fusing Adams explicit formula applied to underwater gliders.

    Science.gov (United States)

    Huang, Haoqian; Chen, Xiyuan; Zhang, Bo; Wang, Jian

    2017-01-01

    The underwater navigation system, mainly consisting of MEMS inertial sensors, is a key technology for the wide application of underwater gliders and plays an important role in achieving high accuracy navigation and positioning over long periods. However, navigation errors accumulate over time because of the inherent errors of the inertial sensors, especially for the MEMS-grade IMUs (Inertial Measurement Units) generally used in gliders. A dead-reckoning module is added to compensate for these errors. In the complicated underwater environment, the performance of MEMS sensors degrades sharply and the errors become much larger. It is difficult to establish an accurate, fixed error model for the inertial sensors, and therefore very hard to improve the accuracy of the navigation information they provide. To solve this problem, a more suitable filter that integrates the multi-model method with an EKF approach can be designed according to the different error models, giving an optimal estimate of the state; the key parameters of the error models determine the corresponding filter. The Adams explicit formula, which offers high-precision prediction, is fused into this filter to further improve the accuracy of attitude estimation. The proposed algorithm has been supported by theoretical analyses and tested in both vehicle experiments and lake trials. Results show that the proposed method achieves better accuracy and effectiveness in attitude estimation than the other methods discussed in the paper for inertial navigation applied to underwater gliders. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
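    The Adams explicit formula referred to above is the Adams-Bashforth family of multi-step predictors. A minimal sketch of the 2-step variant on a toy ODE (dy/dt = -y), not the glider filter itself:

```python
# 2-step Adams-Bashforth predictor: extrapolates the state using the current
# and previous derivative evaluations. Toy problem, not the EKF integration.

def adams_bashforth2(f, y0, t0, h, n):
    """Integrate dy/dt = f(t, y) with the 2-step Adams-Bashforth formula."""
    ys = [y0]
    f_prev = f(t0, y0)
    ys.append(y0 + h * f_prev)            # bootstrap with one Euler step
    for k in range(1, n):
        f_curr = f(t0 + k * h, ys[-1])
        ys.append(ys[-1] + h * (1.5 * f_curr - 0.5 * f_prev))
        f_prev = f_curr
    return ys

ys = adams_bashforth2(lambda t, y: -y, 1.0, 0.0, 0.01, 100)
# ys[-1] approximates exp(-1) at t = 1.0
```

    The appeal in a navigation filter is that the higher-order prediction reuses already-computed derivative values, improving accuracy at essentially no extra cost per step.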

  17. Purchasing portfolio usage and purchasing sophistication

    NARCIS (Netherlands)

    Gelderman, C.J.; Weele, van A.J.

    2005-01-01

    Purchasing portfolio models have caused considerable controversy in literature. Many advantages and disadvantages have been put forward, revealing a strong disagreement on the merits of portfolio models. This study addresses the question whether or not the use of purchasing portfolio models should

  18. STAF: A Powerful and Sophisticated CAI System.

    Science.gov (United States)

    Loach, Ken

    1982-01-01

    Describes the STAF (Science Teacher's Authoring Facility) computer-assisted instruction system developed at Leeds University (England), focusing on STAF language and major program features. Although programs for the system emphasize physical chemistry and organic spectroscopy, the system and language are general purpose and can be used in any…

  19. Butler's sophisticated constructivism: A critical assessment

    NARCIS (Netherlands)

    Vasterling, V.L.M.

    1999-01-01

    This paper aims to investigate whether and in what respects the conceptions of the body and of agency that Judith Butler develops in Bodies That Matter are useful contributions to feminist theory. The discussion focuses on the clarification and critical assessment of the arguments Butler presents to

  20. Rising Trend: Complex and sophisticated attack methods

    Indian Academy of Sciences (India)

    Stux, DuQu, Nitro, Luckycat, Exploit Kits, FLAME. ADSL/SoHo Router Compromise. Botnets of compromised ADSL/SoHo Routers; User Redirection via malicious DNS entry. Web Application attacks. SQL Injection, RFI etc. More and more Webshells. More utility to hackers; Increasing complexity and evading mechanisms.

  1. Sophisticated digestive systems in early arthropods.

    Science.gov (United States)

    Vannier, Jean; Liu, Jianni; Lerosey-Aubril, Rudy; Vinther, Jakob; Daley, Allison C

    2014-05-02

    Understanding the way in which animals diversified and radiated during their early evolutionary history remains one of the most captivating of scientific challenges. Integral to this is the 'Cambrian explosion', which records the rapid emergence of most animal phyla, and for which the triggering and accelerating factors, whether environmental or biological, are still unclear. Here we describe exceptionally well-preserved complex digestive organs in early arthropods from the early Cambrian of China and Greenland with functional similarities to certain modern crustaceans and trace these structures through the early evolutionary lineage of fossil arthropods. These digestive structures are assumed to have allowed for more efficient digestion and metabolism, promoting carnivory and macrophagy in early arthropods via predation or scavenging. This key innovation may have been of critical importance in the radiation and ecological success of Arthropoda, which has been the most diverse and abundant invertebrate phylum since the Cambrian.

  2. Endothelial microparticles: Sophisticated vesicles modulating vascular function

    Science.gov (United States)

    Curtis, Anne M; Edelberg, Jay; Jonas, Rebecca; Rogers, Wade T; Moore, Jonni S; Syed, Wajihuddin; Mohler, Emile R

    2015-01-01

    Endothelial microparticles (EMPs) belong to a family of extracellular vesicles that are dynamic, mobile, biological effectors capable of mediating vascular physiology and function. The release of EMPs can impart autocrine and paracrine effects on target cells through surface interaction, cellular fusion, and, possibly, the delivery of intra-vesicular cargo. A greater understanding of the formation, composition, and function of EMPs will broaden our understanding of endothelial communication and may expose new pathways amenable for therapeutic manipulation. PMID:23892447

  3. Rising Trend: Complex and sophisticated attack methods

    Indian Academy of Sciences (India)

    Increased frequency and intensity of DoS/DDoS. Few Gbps is now normal; Anonymous VPNs being used; Botnets being used as a vehicle for launching DDoS attacks. Large scale booking of domain names. Hundred thousands of domains registered in short duration via few registrars; Single registrant; Most of the domains ...

  4. A multi-model intercomparison of halogenated very short-lived substances (TransCom-VSLS): linking oceanic emissions and tropospheric transport for a reconciled estimate of the stratospheric source gas injection of bromine

    OpenAIRE

    Hossaini, R.; Patra, P. K.; Leeson, A. A.; Krysztofiak, G.; Abraham, N. L.; Andrews, S. J.; Archibald, A. T.; Aschmann, J.; Atlas, E. L.; Belikov, D. A.; Bonisch, H.; Carpenter, L. J.; Dhomse, S.; Dorf, M.; Engel, A.

    2016-01-01

    The first concerted multi-model intercomparison of halogenated very short-lived substances (VSLS) has been performed, within the framework of the ongoing Atmospheric Tracer Transport Model Intercomparison Project (TransCom). Eleven global models or model variants participated (nine chemical transport models and two chemistry–climate models) by simulating the major natural bromine VSLS, bromoform (CHBr3) and dibromomethane (CH2Br2), over a 20-year period (1993–2012). Except f...

  5. Combining Conversation Analysis and Nexus Analysis to explore hospital practices

    DEFF Research Database (Denmark)

    Paasch, Bettina Sletten

    , ethnographic observations, interviews, photos and documents were obtained. Inspired by the analytical manoeuvre of zooming in and zooming out proposed by Nicolini (Nicolini, 2009; Nicolini, 2013), the present study uses Conversation Analysis (Sacks, Schegloff, & Jefferson, 1974) and Embodied Interaction ... of interaction. In the conducted interviews, nurses report that mobile work phones disturb interactions with patients when they ring; however, analysing the recorded interactions with tools from Conversation Analysis and Embodied Interaction Analysis displays how nurses demonstrate sophisticated awareness ... interrelationships influencing it. The present study thus showcases how Conversation Analysis and Nexus Analysis can be combined to achieve a multi-layered perspective on interactions between nurses, patients and mobile work phones.

  6. Towards uncertainty estimation for operational forecast products - a multi-model-ensemble approach for the North Sea and the Baltic Sea

    Science.gov (United States)

    Golbeck, Inga; Li, Xin; Janssen, Frank

    2014-05-01

    Several independent operational ocean models provide forecasts of the ocean state (e.g. sea level, temperature, salinity and ice cover) in the North Sea and the Baltic Sea on a daily basis. These forecasts are the primary source of information for a variety of information and emergency response systems used, e.g., to issue sea level warnings or carry out oil drift forecasts. The forecasts are of course highly valuable as such, but often suffer from a lack of information on their uncertainty. With the aim of augmenting the existing operational ocean forecasts in the North Sea and the Baltic Sea with a measure of uncertainty, a multi-model-ensemble (MME) system for sea surface temperature (SST), sea surface salinity (SSS) and water transports has been set up in the framework of the MyOcean-2 project. Members of the MyOcean-2, NOOS and HIROMB/BOOS communities provide 48h-forecasts serving as inputs. Different variables are processed separately due to their different physical characteristics. Based on the daily MME products of SST and SSS collected so far, a statistical method, Empirical Orthogonal Function (EOF) analysis, is applied to assess their spatial and temporal variability. For sea surface currents, progressive vector diagrams at specific points are consulted to estimate the performance of the circulation models, especially in hydrodynamically important areas, e.g. the inflow/outflow of the Baltic Sea, the Norwegian Trench and the English Channel. For further versions of the MME system, it is planned to extend the MME to other variables, e.g. sea level, ocean currents or ice cover, based on the needs of the model providers and their customers. It is also planned to include in-situ data to augment the uncertainty information and for validation purposes. Additionally, weighting methods will be implemented into the MME system to develop more complex uncertainty measures. The methodology used to create the MME will be outlined and different ensemble products will be presented. In
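    The EOF analysis mentioned above can be sketched via a singular value decomposition of the anomaly matrix; the data below are synthetic stand-ins for the daily MME fields.

```python
import numpy as np

# EOF analysis sketch: EOFs are the right singular vectors of the
# (time x space) anomaly matrix, and the explained-variance fractions come
# from the squared singular values. Synthetic SST-like data.

rng = np.random.default_rng(1)
field = rng.normal(size=(120, 40))   # 120 daily maps x 40 grid points
anom = field - field.mean(axis=0)    # anomalies: remove the time mean

u, s, vt = np.linalg.svd(anom, full_matrices=False)
eofs = vt                            # rows: spatial patterns (EOFs)
pcs = u * s                          # columns: principal-component time series
explained = s**2 / (s**2).sum()      # variance fraction per mode
```

    Keeping only the leading few rows of `eofs` (and columns of `pcs`) gives a compact description of where and when the ensemble members disagree most.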

  7. Best convective parameterization scheme within RegCM4 to downscale CMIP5 multi-model data for the CORDEX-MENA/Arab domain

    Science.gov (United States)

    Almazroui, Mansour; Islam, Md. Nazrul; Al-Khalaf, A. K.; Saeed, Fahad

    2016-05-01

    temperature (reaching up to -1.16 °C). Overall, a suitable option (GLEO wet) is recommended for downscaling the Coupled Model Intercomparison Project Phase 5 (CMIP5) multi-model database using RegCM4 for the CORDEX-MENA/Arab domain for its use in future climate change impact studies.

  8. The discount framing in different pricing schemes: Combined versus partitioned pricing

    OpenAIRE

    Matthew Lee; Dr. Frankie Law

    2015-01-01

    Pricing is one of the most sophisticated and critical issues which managers have to face. It is obvious that managers have been undervaluing the behavioural and psychological perspective of pricing for many years. With a clear understanding of behavioural pricing, managers are able to make extra profit for their firms. The current study investigates exactly how the manipulation of discounts in the combined pricing scheme and the partitioned pricing scheme affects the purcha...

  9. Impactos da sofisticação logística de empresas industriais nas motivações para terceirização Impact of industrial companies' sophisticated logistics on outsourcing

    Directory of Open Access Journals (Sweden)

    Peter Wanke

    2004-12-01

    Full Text Available An evaluation was made to identify how the different degrees of sophistication in the logistical organization of Brazilian industrial companies affect their decision to outsource logistic services. To this end, based on a review of the literature, variables relating to the sophistication of logistic organization and the main reasons for deciding to outsource were defined. 218 questionnaires were mailed to industrial companies listed in Exame magazine. The 93 companies that filled out the questionnaire were divided into two groups based on their reasons for outsourcing logistic services: (1) companies with high levels of formal organization and low levels of IT use, and (2) companies with low levels of formal organization but high levels of IT use. The findings are discussed from the standpoint of

  10. The 1-way on-line coupled atmospheric chemistry model system MECO(n) – Part 2: On-line coupling with the Multi-Model-Driver (MMD)

    Directory of Open Access Journals (Sweden)

    A. Kerkweg

    2012-01-01

    Full Text Available A new, highly flexible model system for the seamless dynamical down-scaling of meteorological and chemical processes from the global to the meso-γ scale is presented. A global model and a cascade of an arbitrary number of limited-area model instances run concurrently in the same parallel environment, in which the coarser grained instances provide the boundary data for the finer grained instances. Thus, disk-space intensive and time consuming intermediate and pre-processing steps are entirely avoided and the time interpolation errors of common off-line nesting approaches are minimised. More specifically, the regional model COSMO of the German Weather Service (DWD) is nested on-line into the atmospheric general circulation model ECHAM5 within the Modular Earth Submodel System (MESSy) framework. ECHAM5 and COSMO have previously been equipped with the MESSy infrastructure, implying that the same process formulations (MESSy submodels) are available for both models. This guarantees the highest degree of achievable consistency, both between the meteorological and chemical conditions at the domain boundaries of the nested limited-area model, and between the process formulations on all scales.

    The on-line nesting of the different models is established by a client-server approach with the newly developed Multi-Model-Driver (MMD), an additional component of the MESSy infrastructure. With MMD an arbitrary number of model instances can be run concurrently within the same message passing interface (MPI) environment; the respective coarser model (either global or regional) is the server for the nested finer (regional) client model, i.e. it provides the data required to calculate the initial and boundary fields for the client model. On-line nesting means that the coupled client-server models exchange their data via the computer memory, in contrast to the data exchange via files on disk in common off-line nesting approaches. MMD consists of a library
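    The contrast between on-line (in-memory) and off-line (file-based) nesting can be caricatured in a few lines. This is a conceptual sketch with toy placeholder models, not the MMD library:

```python
# Conceptual sketch of on-line client-server nesting: the coarse "server"
# model advances one step and hands boundary data to the finer "client"
# model directly in memory, with no disk I/O between steps. Both toy models
# below are hypothetical placeholders.

def coarse_server(steps):
    state = 0.0
    for _ in range(steps):
        state += 1.0                      # advance the global model one step
        yield state                       # boundary data for the client

def fine_client(boundary_stream, substeps=4):
    trace = []
    for boundary in boundary_stream:      # received via memory, not files
        local = boundary
        for _ in range(substeps):         # the finer model takes smaller steps
            local += 0.1
            trace.append(local)
    return trace

trace = fine_client(coarse_server(steps=3))
```

    In the real system the exchange is done with MPI between concurrently running executables, but the control flow is the same: the client blocks on fresh boundary data from its server each coupling interval.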

  11. Combining data in non-destructive testing

    International Nuclear Information System (INIS)

    Lavayssiere, B.

    1994-03-01

    Non-destructive testing of some components quite often requires the use of several methods, such as X-ray, ultrasonics and eddy currents. The efficiency of an NDT method depends strongly on choosing the method best suited to detecting flaws in a given specimen. Moreover, many inspection issues could benefit from the use of more than one test method, as each NDT method has its own physical properties and technological limits. Some questions still remain: how to combine data, at what level and for what functionality. Simple mono-method processes are now well known. They include techniques such as reconstruction, which belongs to the class of so-called ill-posed problems in mathematics; for NDT data processing, reconstruction makes it possible to estimate the real data from the distorted data delivered by a probe. However, up to now there have been very few approaches to the computer-aided combination of results from different advanced techniques. This report presents the various mathematical fields involved in reaching that goal (statistical decision theory, which allows the use of multiple hypotheses; non-linear decision theory, for its ability to classify and discriminate; graph theory, to find the optimal path in a hypothesis graph; as well as fuzzy logic, multiple resolution analysis, artificial intelligence, ...) and discusses which combinations of methods are useful. Some images illustrate this topic, in which EDF is involved, and explain the major goals of this work. Combining is not only an improvement in 3D visualisation, which would allow CAD and NDT data to be displayed simultaneously, for example; it consists in exploiting multisensor data collected via a variety of sophisticated techniques and presenting this information to the operator, without overloading the operator's or system's capacities, in order to reduce the uncertainty and resolve the ambiguity inherent in mono-method inspection. (author). 7 figs., 35 refs
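    As a tiny illustration of the statistical decision theory listed above, indications from two independent NDT methods can be fused with Bayes' rule; the prior and likelihood ratios below are hypothetical numbers, not EDF data.

```python
# Bayesian fusion of independent sensor evidence (illustrative assumptions):
# each method contributes a likelihood ratio P(indication|flaw)/P(indication|no flaw),
# and the ratios multiply into the prior odds of a flaw being present.

def fuse(prior, likelihood_ratios):
    """Combine independent likelihood ratios with a prior flaw probability."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)           # back to a posterior probability

# Hypothetical: ultrasonic indication is 9x more likely given a flaw,
# eddy-current indication 4x more likely; prior flaw probability 10%.
posterior = fuse(prior=0.1, likelihood_ratios=[9.0, 4.0])
```

    The independence assumption is the weak point in practice; the report's interest in graph theory and fuzzy logic stems precisely from cases where sensor evidence is correlated or only vaguely quantified.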

  12. Drug-device combination products: regulatory landscape and market growth.

    Science.gov (United States)

    Bayarri, L

    2015-08-01

    Combination products are therapeutic and diagnostic products that combine drugs, devices and/or biological products, leading to safer and more effective treatments thanks to careful and precise drug targeting, local administration and individualized therapy. These technologies can especially benefit patients suffering from serious diseases and conditions such as cancer, heart disease, multiple sclerosis and diabetes, among others. On the other hand, drug-device combination products have also introduced a new dynamic in medical product development, regulatory approval and corporate interaction. Due to the increasing integration of drugs and devices observed in the latest generation of combination products, regulatory agencies have developed specific competences and regulations over the last decade. Manufacturers are required to fully understand the specific requirements in each country in order to ensure timely and accurate market access of new combination products, and the development of combination products involves a very specific pattern of interactions between manufacturers and regulatory agencies. The increased sophistication of the products brought to market over the last couple of decades has accentuated the need to develop drugs and devices collaboratively using resources from both industries, fostering the need for business partnering and technology licensing. This review will provide a global overview of the market trends, as well as (in the last section) an analysis of the drug-device combination products approved by the FDA during the last 5 years. Copyright 2015 Prous Science, S.A.U. or its licensors. All rights reserved.

  13. Hadronic energy reconstruction in the CALICE combined calorimeter system

    Energy Technology Data Exchange (ETDEWEB)

    Israeli, Yasmine [Max-Planck-Institut fuer Physik, Foehringer Ring 6, 80805 Muenchen (Germany); Collaboration: CALICE-D-Collaboration

    2016-07-01

    Future linear electron-positron colliders, ILC and CLIC, aim for precision measurements and discoveries beyond and complementary to the program of the LHC. For this purpose, detectors with the capability for sophisticated reconstruction of final states with energy resolutions substantially beyond the current state of the art are being designed. The CALICE collaboration develops highly granular calorimeters for future colliders, among them silicon-tungsten electromagnetic calorimeters and hadronic calorimeters with scintillators read out by SiPMs. Such a combined system was tested with hadrons at CERN as well as at Fermilab. In this contribution, we report on the energy reconstruction in the combined setup, which requires different intercalibration factors to account for the varying longitudinal sampling of sub-detectors. Software compensation methods are applied to improve the energy resolution and to compensate for the different energy deposit of hadronic and electromagnetic showers.

  14. Design of a multi-model observer-based estimator for Fault Detection and Isolation (FDI) strategy: application to a chemical reactor

    Directory of Open Access Journals (Sweden)

    Y. Chetouani

    2008-12-01

    Full Text Available This study presents an FDI strategy for nonlinear dynamic systems. It shows a methodology for tackling the fault detection and isolation issue by combining a technique based on the residual signal with a technique using multiple Kalman filters. The usefulness of this combination is that the set of models, which represents the normal mode and the dynamics of all faults, is only implemented on-line if the residual exceeds a fixed statistical decision threshold; in other cases, one Extended Kalman Filter (EKF) is enough to estimate the process state. After describing the system architecture and the proposed FDI methodology, we present a realistic application in order to show the technique's potential. An algorithm is described and applied to a chemical process: a perfectly stirred chemical reactor operating in semi-batch mode. The chemical reaction used is an oxidation-reduction one, the oxidation of sodium thiosulfate by hydrogen peroxide.
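    The residual-threshold decision structure can be sketched as follows. This is a deliberately simplified stand-in, with scalar "models" in place of a bank of Kalman filters and a hypothetical threshold:

```python
# Sketch of residual-based fault detection and isolation: the full model bank
# (normal mode plus fault modes) is consulted only when the nominal residual
# exceeds a fixed threshold. Models and threshold are hypothetical scalars.

def detect_and_isolate(measurement, models, threshold):
    """Return 'normal' while the nominal residual stays below the threshold,
    otherwise the name of the best-matching mode in the bank."""
    nominal_residual = abs(measurement - models["normal"])
    if nominal_residual <= threshold:
        return "normal"
    # Residual exceeded: evaluate the whole bank and pick the closest mode.
    return min(models, key=lambda name: abs(measurement - models[name]))

models = {"normal": 10.0, "sensor_bias": 12.5, "actuator_fault": 7.0}
mode_a = detect_and_isolate(10.2, models, threshold=0.5)   # within threshold
mode_b = detect_and_isolate(12.4, models, threshold=0.5)   # isolates the fault
```

    In the paper's setting each "model" is an EKF propagating a fault-specific dynamic model, and the residual is a statistically normalised innovation rather than a raw absolute difference.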

  15. Birth control pills - combination

    Science.gov (United States)

    Birth control pills - combination (//medlineplus.gov/ency/patientinstructions/000655.htm): combination birth control pills contain both progestin and estrogen. Birth control pills help keep you from ...

  16. Modifiable Combining Functions

    OpenAIRE

    Cohen, Paul; Shafer, Glenn; Shenoy, Prakash P.

    2013-01-01

    Modifiable combining functions are a synthesis of two common approaches to combining evidence. They offer many of the advantages of these approaches and avoid some disadvantages. Because they facilitate the acquisition, representation, explanation, and modification of knowledge about combinations of evidence, they are proposed as a tool for knowledge engineers who build systems that reason under uncertainty, not as a normative theory of evidence.

  17. Regional impacts of climate change and atmospheric CO2 on future ocean carbon uptake: A multi-model linear feedback analysis

    OpenAIRE

    Roy Tilla; Bopp Laurent; Gehlen Marion; Schneider Birgitt; Cadule Patricia; Frölicher Thomas; Segschneider Jochen; Tijputra Jerry; Heinze Christoph; Joos Fortunat

    2011-01-01

    The increase in atmospheric CO2 over this century depends on the evolution of the oceanic air–sea CO2 uptake which will be driven by the combined response to rising atmospheric CO2 itself and climate change. Here the future oceanic CO2 uptake is simulated using an ensemble of coupled climate–carbon cycle models. The models are driven by CO2 emissions from historical data and the Special Report on Emissions Scenarios (SRES) A2 high emission scenario. A linear feedback analysis successfully sep...

  18. Pneumatic-Combustion Hybrid Engine: A Study of the Effect of the Valvetrain Sophistication on Pneumatic Modes

    Directory of Open Access Journals (Sweden)

    Brejaud P.

    2009-09-01

    Full Text Available Although internal combustion engines display high overall maximum efficiencies, this potential cannot be fully exploited in automotive applications: in real conditions the average engine load (and thus efficiency) is quite low, and the kinetic energy of a braking phase is lost. This work presents a hybrid pneumatic-combustion engine and the associated thermodynamic cycles, able to store and recover energy in the form of compressed air. The study focuses on the two major pneumatic modes: pneumatic pump mode and pneumatic motor mode. For each of them, three valvetrain technologies are considered: 4-stroke mode, 4-stroke mode with one camshaft disengaged, and 2-stroke fully variable. The concept can be adapted to SI or CI engines; in either case the valvetrain technology is the key to best fuel economy. A kinematic model of the charging valve's actuator is introduced and implemented in a quasi-dimensional model of the pneumatic-combustion hybrid engine. Simulation results are presented for each pneumatic mode and each valvetrain technology, in order to determine the best valvetrain configuration and to show the impact of the kinematic valve actuator on the performance of the engine. The tradeoffs between valvetrain sophistication and fuel economy are presented for each case.

  19. Practical, general parser combinators

    NARCIS (Netherlands)

    A. Izmaylova (Anastasia); A. Afroozeh (Ali); T. van der Storm (Tijs)

    2016-01-01

    Parser combinators are a popular approach to parsing where context-free grammars are represented as executable code. However, conventional parser combinators do not support left recursion, and can have worst-case exponential runtime. These limitations hinder the expressivity and
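    The core idea, that parsers are values combined by higher-order functions, can be sketched in a few lines of Python. This is the conventional (non-left-recursive) formulation the paper generalises, not the paper's own combinators:

```python
# Minimal parser-combinator sketch: a parser maps (text, position) to a list
# of (value, new_position) results; combinators build parsers from parsers.

def char(c):
    def parse(s, i):
        return [(c, i + 1)] if i < len(s) and s[i] == c else []
    return parse

def seq(p, q):                     # sequence: run p, then q on the rest
    def parse(s, i):
        return [((v1, v2), k) for v1, j in p(s, i) for v2, k in q(s, j)]
    return parse

def alt(p, q):                     # alternation: try both branches
    def parse(s, i):
        return p(s, i) + q(s, i)
    return parse

ab = seq(char('a'), char('b'))
ab_or_a = alt(ab, char('a'))
r1 = ab('ab', 0)
r2 = ab_or_a('a', 0)
```

    A grammar rule that refers to itself on the left (`E -> E '+' T`) would make such a parser recurse forever; handling that, with good worst-case complexity, is exactly the gap the paper addresses.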

  20. Trendwatch combining expert opinion

    NARCIS (Netherlands)

    Hendrix, E.M.T.; Kornelis, M.; Pegge, S.M.; Galen, van M.A.

    2006-01-01

    This study focuses on a systematic way to detect future changes in trends that may affect the dynamics in the agro-food sector, and on the combination of opinions of experts. For the combination of expert opinions, the usefulness of multilevel models is investigated. Bayesian data analysis is
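    As a hedged illustration of combining expert opinions (a much simpler scheme than the multilevel models investigated in the study): precision-weighted linear pooling, where each expert is weighted by the inverse of a stated variance. All numbers are synthetic.

```python
# Precision-weighted pooling of expert opinions (illustrative, not the
# study's multilevel model): weights are inverse variances, and the pooled
# variance shrinks as experts are added.

def pool_experts(estimates, variances):
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    mean = sum(w * e for w, e in zip(weights, estimates)) / total
    pooled_variance = 1.0 / total
    return mean, pooled_variance

# Three hypothetical experts forecast next year's demand growth (%).
mean, var = pool_experts([2.0, 3.0, 4.0], [1.0, 0.5, 1.0])
```

    The more confident middle expert (variance 0.5) pulls the pooled estimate toward 3.0; a multilevel model additionally lets expert-specific biases and correlations be estimated from data rather than asserted.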

  1. Effective Nutritional Supplement Combinations

    Science.gov (United States)

    Cooke, Matt; Cribb, Paul J.

    Few supplement combinations that are marketed to athletes are supported by scientific evidence of their effectiveness. Quite often, under the rigor of scientific investigation, the patented combination fails to provide any greater benefit than a group given the active (generic) ingredient. The focus of this chapter is supplement combinations and dosing strategies that are effective at promoting an acute physiological response that may improve/enhance exercise performance or influence chronic adaptations desired from training. In recent years, there has been a particular focus on two nutritional ergogenic aids—creatine monohydrate and protein/amino acids—in combination with specific nutrients in an effort to augment or add to their already established independent ergogenic effects. These combinations and others are discussed in this chapter.

  2. Secondary combined suicide pact.

    Science.gov (United States)

    Jayanth, S H; Girish Chandra, Y P; Hugar, Basappa S; Kainoor, Sunilkumar

    2014-03-01

    This article reports a combined suicide pact, wherein a young couple, a 26-year-old male and a 20-year-old female, committed suicide using two methods. The couple resorted to hanging and self-immolation to prevent the failure of a single method alone. In secondary combined suicides, other methods of suicide are tried after the first method chosen has failed; it is a primary combined suicide only when two or more methods are used simultaneously. Both types of combined suicide by one individual are well reported in the literature, whereas the same by two persons together is rare. In this report, the deceased were disappointed lovers, they were poor, and their family members were against the marriage. The investigation of the scene, the methods employed to commit suicide, the autopsy findings and the interviews with relatives altogether suggested that this was a secondary combined suicide pact. Copyright © 2014 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  3. PUVA combination therapy.

    Science.gov (United States)

    Morison, W L

    1985-08-01

    Various adjunctive treatments are now frequently used in combination with PUVA therapy with the aims of limiting adverse effects, improving efficacy and decreasing the cost of treatment. In the management of psoriasis, PUVA plus retinoids, PUVA plus methotrexate and PUVA plus UVB phototherapy are the most frequently used combinations. PUVA plus topical corticosteroids and PUVA plus anthralin are also efficacious, but adverse effects and poor acceptance by patients are limiting factors. Combinations of PUVA plus nitrogen mustard and ionizing radiation are used in mycosis fungoides to treat tumors and residual disease in secluded sites. In the management of photodermatoses with PUVA therapy, prednisone is often required to prevent exacerbation of disease. A combination of prednisone and PUVA therapy can also be useful in lichen planus and atopic eczema. The selection of a suitable combination treatment will depend upon the preferences of the clinician, the disease being treated, and the characteristics of the patient.

  4. Evaluation of Hydrologic Simulations Developed Using Multi-Model Synthesis and Remotely-Sensed Data within a Portfolio of Calibration Strategies

    Science.gov (United States)

    Lafontaine, J.; Hay, L.; Markstrom, S. L.

    2016-12-01

    The United States Geological Survey (USGS) has developed a National Hydrologic Model (NHM) to support coordinated, comprehensive and consistent hydrologic model development, and to facilitate the application of hydrologic simulations within the conterminous United States (CONUS). As many stream reaches in the CONUS are either not gaged, or are substantially impacted by water use or flow regulation, ancillary information must be used to determine reasonable parameter estimates for streamflow simulations. Hydrologic models for 1,576 gaged watersheds across the CONUS were developed to test the feasibility of improving streamflow simulations by linking physically-based hydrologic models with remotely-sensed data products (e.g. snow water equivalent). Initially, the physically-based models were calibrated to measured streamflow data to provide a baseline for comparison across multiple calibration strategy tests. In addition, not all ancillary datasets are appropriate for application to all parts of the CONUS (e.g. snow water equivalent in the southeastern U.S., where snow is a rarity). As it is not expected that any one data product or model simulation will be sufficient for representing hydrologic behavior across the entire CONUS, a systematic evaluation of which data products improve hydrologic simulations for various regions across the CONUS was performed. The resulting portfolio of calibration strategies can be used to guide selection of an appropriate combination of modeled and measured information for hydrologic model development and calibration. In addition, these calibration strategies have been developed to be flexible so that new data products can be assimilated. This analysis provides a foundation to understand how well models work when sufficient streamflow data are not available and could be used to further inform hydrologic model parameter development for ungaged areas.
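    One common measure for comparing simulated and measured streamflow during calibration is the Nash-Sutcliffe efficiency; the abstract does not name its objective function, so this is an illustrative assumption with synthetic flows.

```python
# Nash-Sutcliffe efficiency (NSE): 1 for a perfect fit, 0 when the model is
# no better than the mean of the observations, negative when it is worse.
# The flow values below are synthetic.

def nse(obs, sim):
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

obs = [10.0, 12.0, 8.0, 14.0, 11.0]
good = nse(obs, [10.5, 11.5, 8.5, 13.5, 11.0])
poor = nse(obs, [11.0] * 5)   # predicting the mean gives NSE = 0
```

    Calibration then amounts to searching parameter space for the simulation maximising such a score, per watershed and per calibration strategy.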

  5. Analysis of the impact of climate change on groundwater related hydrological fluxes: a multi-model approach including different downscaling methods

    Directory of Open Access Journals (Sweden)

    S. Stoll

    2011-01-01

    Full Text Available Climate change related modifications in the spatio-temporal distribution of precipitation and evapotranspiration will have an impact on groundwater resources. This study presents a modelling approach exploiting the advantages of integrated hydrological modelling and a broad climate model basis. We applied the integrated MIKE SHE model to a small perialpine catchment in northern Switzerland near Zurich. To examine the impact of climate change we forced the hydrological model with data from eight GCM-RCM combinations showing systematic biases, which are corrected by three different statistical downscaling methods, not only for precipitation but also for the variables that govern potential evapotranspiration. The downscaling methods are evaluated in a split sample test and the sensitivity of the hydrological fluxes to the downscaling procedure is analyzed. The RCMs resulted in very different projections of potential evapotranspiration and, especially, precipitation. All three downscaling methods reduced the differences between the predictions of the RCMs, and none of the corrected predictions showed future groundwater stress, which can be related to an expected increase in precipitation during winter. It turned out that especially the timing of the precipitation, and thus of recharge, is very important for the future development of the groundwater levels. However, the simulation experiments revealed weaknesses of the downscaling methods which directly influence the predicted hydrological fluxes, and thus also the predicted groundwater levels. The downscaling process is identified as an important source of uncertainty in hydrological impact studies, which has to be accounted for. It is therefore strongly recommended to test different downscaling methods against verification data before applying them to climate model data.
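
    Statistical bias correction of GCM-RCM output, as discussed above, can be as simple as linear scaling: multiplying simulated future values by the ratio of observed to simulated means over a common historical period. The abstract does not specify which three methods were used, so this is only an illustrative sketch with hypothetical names:

```python
import numpy as np

def linear_scaling(sim_hist, obs_hist, sim_future):
    """Correct simulated future precipitation by the ratio of observed
    to simulated means over a shared historical period."""
    factor = np.mean(obs_hist) / np.mean(sim_hist)
    return np.asarray(sim_future, dtype=float) * factor
```

    A split sample test, as in the study, would fit the factor on one part of the historical record and verify the corrected series on the rest.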

  6. Optical Communications Channel Combiner

    Science.gov (United States)

    Quirk, Kevin J.; Nguyen, Danh H.; Nguyen, Huy

    2012-01-01

    NASA has identified deep-space optical communications links as an integral part of a unified space communication network in order to provide data rates in excess of 100 Mb/s. The distances and limited power inherent in a deep-space optical downlink necessitate the use of photon-counting detectors and a power-efficient modulation such as pulse position modulation (PPM). For the output of each photodetector, whether from a separate telescope or a portion of the detection area, a communication receiver estimates a log-likelihood ratio (LLR) for each PPM slot. To realize the full effective aperture of these receivers, their outputs must be combined prior to information decoding. A channel combiner was therefore developed to synchronize the LLR sequences of multiple receivers and then combine them into a single LLR sequence for information decoding. The channel combiner synchronizes the LLR sequences of up to three receivers. It has three channel inputs, each of which accepts a sequence of four-bit LLRs for each PPM slot in a codeword via a XAUI 10 Gb/s quad optical fiber interface. The cross-correlations between the channels' LLR time series are calculated and used to synchronize the sequences prior to combining. The output of the channel combiner is a sequence of four-bit LLRs for each PPM slot in a codeword via a XAUI 10 Gb/s quad optical fiber interface. The unit is controlled through a 1 Gb/s Ethernet UDP/IP interface. A deep-space optical communication link has not yet been demonstrated; this ground-station channel combiner was developed to demonstrate that capability and is unique in its ability to process such a signal.
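
    The synchronize-then-combine step described above can be sketched in software: align each channel's LLR sequence to a reference channel by the lag that maximizes their cross-correlation, then sum the aligned LLRs slot by slot (summing LLRs is the optimal combination for independent channels). This is a toy illustration of the principle, not the described hardware; the function name and the circular-shift assumption are mine:

```python
import numpy as np

def align_and_combine(llr_channels, ref=0):
    """Align each LLR sequence to a reference channel using the lag
    that maximizes cross-correlation, then sum them slot-by-slot."""
    ref_seq = np.asarray(llr_channels[ref], dtype=float)
    n = len(ref_seq)
    combined = ref_seq.copy()
    for i, ch in enumerate(llr_channels):
        if i == ref:
            continue
        ch = np.asarray(ch, dtype=float)
        # full cross-correlation; index of maximum gives the lag
        corr = np.correlate(ch, ref_seq, mode="full")
        lag = np.argmax(corr) - (n - 1)
        combined += np.roll(ch, -lag)   # undo the lag, then add LLRs
    return combined
```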

  7. Impact of West African Monsoon convective transport and lightning NOx production upon the upper tropospheric composition: a multi-model study

    Directory of Open Access Journals (Sweden)

    H. Schlager

    2010-06-01

    observations of flash frequency. Combined with comparisons of in-situ NO measurements, we show that the models producing the highest amounts of LiNOx over Africa during the WAM (INCA and p-TOMCAT) capture observed NO profiles with the best accuracy, although they both overestimate lightning activity over the Sahel.

  8. Enhancing global climate policy ambition towards a 1.5 °C stabilization: a short-term multi-model assessment

    Science.gov (United States)

    Vrontisi, Zoi; Luderer, Gunnar; Saveyn, Bert; Keramidas, Kimon; Reis Lara, Aleluia; Baumstark, Lavinia; Bertram, Christoph; Sytze de Boer, Harmen; Drouet, Laurent; Fragkiadakis, Kostas; Fricko, Oliver; Fujimori, Shinichiro; Guivarch, Celine; Kitous, Alban; Krey, Volker; Kriegler, Elmar; Broin, Eoin Ó.; Paroussos, Leonidas; van Vuuren, Detlef

    2018-04-01

    The Paris Agreement is a milestone in international climate policy as it establishes a global mitigation framework towards 2030 and sets the ground for a potential 1.5 °C climate stabilization. To provide useful insights for the 2018 UNFCCC Talanoa facilitative dialogue, we use eight state-of-the-art climate-energy-economy models to assess the effectiveness of the Intended Nationally Determined Contributions (INDCs) in meeting high probability 1.5 and 2 °C stabilization goals. We estimate that the implementation of conditional INDCs in 2030 leaves an emissions gap from least-cost 2 °C and 1.5 °C pathways for year 2030 equal to 15.6 (9.0–20.3) and 24.6 (18.5–29.0) GtCO2eq, respectively. The immediate transition to a more efficient and low-carbon energy system is key to achieving the Paris goals. The decarbonization of the power supply sector delivers half of total CO2 emission reductions in all scenarios, primarily through high penetration of renewables and energy efficiency improvements. In combination with an increased electrification of final energy demand, low-carbon power supply is the main short-term abatement option. We find that the global macroeconomic cost of mitigation efforts does not reduce the 2020–2030 annual GDP growth rates in any model by more than 0.1 percentage points in the INDC scenario, or 0.3 and 0.5 percentage points in the 2 °C and 1.5 °C scenarios respectively, even without accounting for potential co-benefits and avoided climate damages. Accordingly, the median GDP reductions across all models in 2030 are 0.4%, 1.2% and 3.3% of reference GDP for each respective scenario. Costs go up with increasing mitigation efforts, but fragmented action, as implied by the INDCs, results in higher costs per unit of abated emissions. On a regional level, the cost distribution differs across scenarios, while fossil fuel exporters see the highest GDP reductions in all INDC, 2 °C and 1.5 °C scenarios.

  9. Wood-plastic combination

    International Nuclear Information System (INIS)

    Schaudy, R.

    1978-02-01

    A review on wood-plastic combinations is given including the production (wood and plastic component, radiation hardening, curing), the obtained properties, present applications and prospects for the future of these materials. (author)

  10. Resonant High Power Combiners

    CERN Document Server

    Langlois, Michel; Peillex-Delphe, Guy

    2005-01-01

    Particle accelerators need radio frequency sources. Above 300 MHz, the amplifiers used have mostly been high-power klystrons developed for this sole purpose. As with military equipment, users are drawn to buy "off the shelf" components rather than dedicated devices. Inductive output tubes (IOTs) have replaced most klystrons in TV transmitters and are finding their way into particle accelerators. They are less bulky, easier to replace and more efficient at reduced power. They are also far less powerful. What is the benefit of very compact sources if huge 3 dB couplers are needed to combine the power? To alleviate this drawback, we investigated a resonant combiner, operating in the TM010 mode, able to combine 3 to 5 IOTs. Our IOTs being able to deliver 80 kW C.W. apiece, the combined power would reach 400 kW minus the minor insertion loss. Values for matching and insertion loss are given. The behavior of the system in case of IOT failure is analyzed.

  11. Assessment and economic valuation of air pollution impacts on human health over Europe and the United States as calculated by a multi-model ensemble in the framework of AQMEII3

    Science.gov (United States)

    Im, Ulas; Brandt, Jørgen; Geels, Camilla; Mantzius Hansen, Kaj; Heile Christensen, Jesper; Skou Andersen, Mikael; Solazzo, Efisio; Kioutsioukis, Ioannis; Alyuz, Ummugulsum; Balzarini, Alessandra; Baro, Rocio; Bellasio, Roberto; Bianconi, Roberto; Bieser, Johannes; Colette, Augustin; Curci, Gabriele; Farrow, Aidan; Flemming, Johannes; Fraser, Andrea; Jimenez-Guerrero, Pedro; Kitwiroon, Nutthida; Liang, Ciao-Kai; Nopmongcol, Uarporn; Pirovano, Guido; Pozzoli, Luca; Prank, Marje; Rose, Rebecca; Sokhi, Ranjeet; Tuccella, Paolo; Unal, Alper; Garcia Vivanco, Marta; West, Jason; Yarwood, Greg; Hogrefe, Christian; Galmarini, Stefano

    2018-04-01

    The impact of air pollution on human health and the associated external costs in Europe and the United States (US) for the year 2010 are modeled by a multi-model ensemble of regional models in the frame of the third phase of the Air Quality Modelling Evaluation International Initiative (AQMEII3). The modeled surface concentrations of O3, CO, SO2 and PM2.5 are used as input to the Economic Valuation of Air Pollution (EVA) system to calculate the resulting health impacts and the associated external costs from each individual model. Along with a base case simulation, additional runs were performed introducing 20 % anthropogenic emission reductions both globally and regionally in Europe, North America and east Asia, as defined by the second phase of the Task Force on Hemispheric Transport of Air Pollution (TF-HTAP2). Health impacts estimated by using concentration inputs from different chemistry-transport models (CTMs) to the EVA system can vary up to a factor of 3 in Europe (12 models) and the United States (3 models). In Europe, the multi-model mean total number of premature deaths (acute and chronic) is calculated to be 414 000, while in the US, it is estimated to be 160 000, in agreement with previous global and regional studies. The economic valuation of these health impacts is calculated to be EUR 300 billion and 145 billion in Europe and the US, respectively. A subset of models that produce the smallest error compared to the surface observations at each time step against an all-model mean ensemble results in an increase of health impacts by up to 30 % in Europe, while in the US, the optimal ensemble mean led to a decrease in the calculated health impacts by ˜ 11 %. A total of 54 000 and 27 500 premature deaths can be avoided by a 20 % reduction of global anthropogenic emissions in Europe and the US, respectively. A 20 % reduction of North American anthropogenic emissions avoids a total of ˜ 1000 premature deaths in Europe and 25 000 total premature deaths in the US.
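
    The "optimal ensemble" idea above, keeping at each time step only the models closest to the surface observations rather than averaging all of them, can be sketched as follows. The function name and the use of a fixed subset size k are illustrative assumptions, not the AQMEII3 procedure:

```python
import numpy as np

def optimal_subset_mean(preds, obs, k):
    """At each time step, average the k models with the smallest
    absolute error against the observation; compare with the plain
    all-model mean, preds.mean(axis=0)."""
    preds = np.asarray(preds, dtype=float)
    err = np.abs(preds - np.asarray(obs, dtype=float))  # models x time
    idx = np.argsort(err, axis=0)[:k]                   # k best per step
    return np.take_along_axis(preds, idx, axis=0).mean(axis=0)
```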

  12. Assessment and economic valuation of air pollution impacts on human health over Europe and the United States as calculated by a multi-model ensemble in the framework of AQMEII3

    Directory of Open Access Journals (Sweden)

    U. Im

    2018-04-01

    Full Text Available The impact of air pollution on human health and the associated external costs in Europe and the United States (US) for the year 2010 are modeled by a multi-model ensemble of regional models in the frame of the third phase of the Air Quality Modelling Evaluation International Initiative (AQMEII3). The modeled surface concentrations of O3, CO, SO2 and PM2.5 are used as input to the Economic Valuation of Air Pollution (EVA) system to calculate the resulting health impacts and the associated external costs from each individual model. Along with a base case simulation, additional runs were performed introducing 20 % anthropogenic emission reductions both globally and regionally in Europe, North America and east Asia, as defined by the second phase of the Task Force on Hemispheric Transport of Air Pollution (TF-HTAP2). Health impacts estimated by using concentration inputs from different chemistry–transport models (CTMs) to the EVA system can vary up to a factor of 3 in Europe (12 models) and the United States (3 models). In Europe, the multi-model mean total number of premature deaths (acute and chronic) is calculated to be 414 000, while in the US, it is estimated to be 160 000, in agreement with previous global and regional studies. The economic valuation of these health impacts is calculated to be EUR 300 billion and 145 billion in Europe and the US, respectively. A subset of models that produce the smallest error compared to the surface observations at each time step against an all-model mean ensemble results in an increase of health impacts by up to 30 % in Europe, while in the US, the optimal ensemble mean led to a decrease in the calculated health impacts by  ∼  11 %. A total of 54 000 and 27 500 premature deaths can be avoided by a 20 % reduction of global anthropogenic emissions in Europe and the US, respectively. A 20 % reduction of North American anthropogenic emissions avoids a total of  ∼  1000 premature

  13. Multilevel Regulation of Bacterial Gene Expression with the Combined STAR and Antisense RNA System.

    Science.gov (United States)

    Lee, Young Je; Kim, Soo-Jung; Moon, Tae Seok

    2018-03-16

    Synthetic small RNA regulators have emerged as a versatile tool to predictably control bacterial gene expression. Owing to their simple design principles, small size, and highly orthogonal behavior, these engineered genetic parts have been incorporated into genetic circuits. However, efforts to achieve more sophisticated cellular functions using RNA regulators have been hindered by our limited ability to integrate different RNA regulators into complex circuits. Here, we present a combined RNA regulatory system in Escherichia coli that uses small transcription activating RNA (STAR) and antisense RNA (asRNA) to activate or deactivate target gene expression in a programmable manner. Specifically, we demonstrated that the activated target output by the STAR system can be deactivated by expressing two different types of asRNAs: one binds to and sequesters the STAR regulator, affecting the transcription process, while the other binds to the target mRNA, affecting the translation process. We improved deactivation efficiencies (up to 96%) by optimizing each type of asRNA and then integrating the two optimized asRNAs into a single circuit. Furthermore, we demonstrated that the combined STAR and asRNA system can control gene expression in a reversible way and can regulate expression of a gene in the genome. Lastly, we constructed and simultaneously tested two A AND NOT B logic gates in the same cell to show sophisticated multigene regulation by the combined system. Our approach establishes a methodology for integrating multiple RNA regulators to rationally control multiple genes.

  14. Changes in extremely hot days under stabilized 1.5 and 2.0 °C global warming scenarios as simulated by the HAPPI multi-model ensemble

    Directory of Open Access Journals (Sweden)

    M. Wehner

    2018-03-01

    Full Text Available The half a degree additional warming, prognosis and projected impacts (HAPPI) experimental protocol provides a multi-model database to compare the effects of stabilizing anthropogenic global warming of 1.5 °C over preindustrial levels to 2.0 °C over these levels. The HAPPI experiment is based upon large ensembles of global atmospheric models forced by sea surface temperature and sea ice concentrations plausible for these stabilization levels. This paper examines changes in extremes of high temperatures averaged over three consecutive days. Changes in this measure of extreme temperature are also compared to changes in hot season temperatures. We find that over land this measure of extreme high temperature increases from about 0.5 to 1.5 °C over present-day values in the 1.5 °C stabilization scenario, depending on location and model. We further find an additional 0.25 to 1.0 °C increase in extreme high temperatures over land in the 2.0 °C stabilization scenario. Results from the HAPPI models are consistent with similar results from the one available fully coupled climate model. However, a complicating factor in interpreting extreme temperature changes across the HAPPI models is their diversity of aerosol forcing changes.
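
    The extreme-heat measure examined above, the maximum of temperatures averaged over three consecutive days, reduces to a running mean followed by a maximum. A minimal sketch (the function name is mine, not HAPPI code):

```python
import numpy as np

def max_3day_mean(daily_t):
    """Maximum of the 3-day running-mean temperature over a record."""
    daily_t = np.asarray(daily_t, dtype=float)
    running = np.convolve(daily_t, np.ones(3) / 3.0, mode="valid")
    return running.max()
```

    The change reported in the abstract is then the difference of this statistic between a stabilization scenario and the present-day ensemble.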

  15. Changes in extremely hot days under stabilized 1.5 and 2.0 °C global warming scenarios as simulated by the HAPPI multi-model ensemble

    Science.gov (United States)

    Wehner, Michael; Stone, Dáithí; Mitchell, Dann; Shiogama, Hideo; Fischer, Erich; Graff, Lise S.; Kharin, Viatcheslav V.; Lierhammer, Ludwig; Sanderson, Benjamin; Krishnan, Harinarayan

    2018-03-01

    The half a degree additional warming, prognosis and projected impacts (HAPPI) experimental protocol provides a multi-model database to compare the effects of stabilizing anthropogenic global warming of 1.5 °C over preindustrial levels to 2.0 °C over these levels. The HAPPI experiment is based upon large ensembles of global atmospheric models forced by sea surface temperature and sea ice concentrations plausible for these stabilization levels. This paper examines changes in extremes of high temperatures averaged over three consecutive days. Changes in this measure of extreme temperature are also compared to changes in hot season temperatures. We find that over land this measure of extreme high temperature increases from about 0.5 to 1.5 °C over present-day values in the 1.5 °C stabilization scenario, depending on location and model. We further find an additional 0.25 to 1.0 °C increase in extreme high temperatures over land in the 2.0 °C stabilization scenario. Results from the HAPPI models are consistent with similar results from the one available fully coupled climate model. However, a complicating factor in interpreting extreme temperature changes across the HAPPI models is their diversity of aerosol forcing changes.

  16. Application of the North American Multi-Model Ensemble to seasonal water supply forecasting in the Great Lakes basin through the use of the Great Lakes Seasonal Climate Forecast Tool

    Science.gov (United States)

    Gronewold, A.; Apps, D.; Fry, L. M.; Bolinger, R.

    2017-12-01

    The U.S. Army Corps of Engineers (USACE) contribution to the internationally coordinated 6-month forecast of Great Lakes water levels relies on several water supply models, including a regression model relating a coming month's water supply to past water supplies, previous months' precipitation and temperature, and forecasted precipitation and temperature. Probabilistic forecasts of precipitation and temperature depicted in the Climate Prediction Center's seasonal outlook maps are considered to be standard for use in operational forecasting for seasonal time horizons, and have provided the basis for computing a coming month's precipitation and temperature for use in the USACE water supply regression models. The CPC outlook maps are a useful forecast product offering insight into interpretation of climate models through the prognostic discussion and graphical forecasts. However, recent evolution of USACE forecast procedures to accommodate automated data transfer and manipulation offers a new opportunity for direct incorporation of ensemble climate forecast data into probabilistic outlooks of water supply using existing models that have previously been implemented in a deterministic fashion. We will present results from a study investigating the potential for applying data from the North American Multi-Model Ensemble to operational water supply forecasts. The use of NMME forecasts is facilitated by a new, publicly available, Great Lakes Seasonal Climate Forecast Tool that provides operational forecasts of monthly average temperatures and monthly total precipitation summarized for each lake basin.

  17. Combining in Theory Building

    Directory of Open Access Journals (Sweden)

    Uolevi Lehtinen

    2013-07-01

    Full Text Available This article describes the idea and rationale of combining, i.e. why, when and how to develop theoretically new combined approaches. Business administration, especially marketing, is then used as a theoretical and empirical illustrative area. The methodology is inductive and deductive logic; the empirical examples use surveys, case analysis and secondary data. The article introduces a promising way to develop, in the long run, new comprehensive approaches and even paradigms for different disciplines, subdisciplines and their branches. The ultimate message of the article is therefore to challenge researchers to put the idea and rationale of combining to the test in their own research fields and, where possible, to build new combined and comprehensive approaches. This message is multidisciplinary, concerning, for example, economics, the social sciences and political science in addition to business administration.

  18. Combinators for Paraconsistent Attitudes

    DEFF Research Database (Denmark)

    Villadsen, Jørgen

    2001-01-01

    In order to analyse the semantics of natural language sentences a translation into a partial type logic using lexical and logical combinators is presented. The sentences cover a fragment of English with propositional attitudes like knowledge, belief and assertion. A combinator is a closed term...... used for embedded sentences expressing propositional attitudes, thereby allowing for inconsistency without explosion (also called paraconsistency), and is based on a few key equalities for the connectives giving four truth values (truth, falsehood, and undefinedness with negative and positive polarity...

  19. Combined PET/MRI

    DEFF Research Database (Denmark)

    Bailey, D. L.; Pichler, B. J.; Gückel, B.

    2015-01-01

    This paper summarises key themes and discussions from the 4th international workshop dedicated to the advancement of the technical, scientific and clinical applications of combined positron emission tomography (PET)/magnetic resonance imaging (MRI) systems that was held in Tübingen, Germany, from...

  20. Combine Harvester Simulator

    DEFF Research Database (Denmark)

    Vilmann, Ole; Sørlie, James Arnold

    1999-01-01

    A simulator for training pilots in the operation of a modern high-tech combine harvester is presented. The new simulator application is based on DMI´s well-known DMS maritime simulator architecture. Two major challenges have been encountered in the development of the simulator: 1) interfacing the...

  1. ILSE combiner study

    International Nuclear Information System (INIS)

    Hahn, K.

    1994-03-01

    In a heavy ion inertial fusion (HIF) driver, the beam energy and current are increased several orders of magnitude from the injector to the final focus system. At low and high energy stages of the driver, electrostatic and magnetic focusing transport channels, respectively, can be used. At the electric-to-magnetic transition point, the beams may be combined to reduce the transverse dimensions of the system, which could have significant impact on the driver cost. In a presently envisioned combiner, four beams are brought together transversely into a single transport channel. A matching section follows the combiner in order to provide a smooth transition to the subsequent magnetic transport channel. This report summarizes a conceptual design study of possible combiner configurations for the proposed Induction Linac Systems Experiment (ILSE). The study covers the expected technical difficulties, predicted emittance growth, particle loss, the effect of geometric and chromatic aberrations, and the sensitivity of emittance growth to initial beam position and angle errors.

  2. Combined-cycle plants

    International Nuclear Information System (INIS)

    Valenti, M.

    1991-01-01

    This paper reports that as tougher emissions standards take hold throughout the industrialized world, manufacturers such as GE, Siemens, Foster Wheeler, and Asea Brown Boveri are designing advanced combined-cycle equipment that offers improved environmental performance without sacrificing power efficiency

  3. Multi-model MPC with output feedback

    Directory of Open Access Journals (Sweden)

    J. M. Perez

    2014-03-01

    Full Text Available In this work, a new formulation is presented for the model predictive control (MPC) of a process system that is represented by a finite set of models, each one corresponding to a different operating point. The general case is considered of systems with stable and integrating outputs in closed-loop with output feedback. For this purpose, the controller is based on a non-minimal order model where the state is built with the measured outputs and the manipulated inputs of the control system. Therefore, the state can be considered as perfectly known and, consequently, there is no need to include a state observer in the control loop. This property of the proposed modeling approach makes it convenient to extend previous stability results for closed-loop systems with robust MPC controllers based on state feedback. The controller proposed here is based on the solution of two optimization problems that are solved sequentially at the same time step. The method is illustrated with a simulated example from the process industry: the rigorous simulation of the control of an adiabatic flash of a multi-component hydrocarbon mixture illustrates the application of the robust controller. The dynamic simulation of this process is performed using EMSO (Environment for Modeling, Simulation and Optimization). Finally, a comparison with a linear MPC using a single model is presented.
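
    The non-minimal state idea above, building the state entirely from measured outputs and past inputs so that no observer is needed, can be illustrated for a second-order ARX model. This realization is a textbook sketch under my own notation, not the paper's formulation:

```python
import numpy as np

def arx_to_nonminimal(a1, a2, b1, b2):
    """Realize y[k+1] = a1*y[k] + a2*y[k-1] + b1*u[k] + b2*u[k-1]
    with the state x[k] = [y[k], y[k-1], u[k-1]]: every component is
    a measured or logged signal, so the state is perfectly known."""
    A = np.array([[a1, a2, b2],
                  [1.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])
    B = np.array([[b1], [0.0], [1.0]])
    C = np.array([[1.0, 0.0, 0.0]])
    return A, B, C
```

    An MPC built on (A, B, C) then predicts outputs directly from logged data, which is what removes the observer from the loop.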

  4. Calibration and combination of dynamical seasonal forecasts to enhance the value of predicted probabilities for managing risk

    Science.gov (United States)

    Dutton, John A.; James, Richard P.; Ross, Jeremy D.

    2013-06-01

    Seasonal probability forecasts produced with numerical dynamics on supercomputers offer great potential value in managing risk and opportunity created by seasonal variability. The skill and reliability of contemporary forecast systems can be increased by calibration methods that use the historical performance of the forecast system to improve the ongoing real-time forecasts. Two calibration methods are applied to seasonal surface temperature forecasts of the US National Weather Service, the European Centre for Medium Range Weather Forecasts, and to a World Climate Service multi-model ensemble created by combining those two forecasts with Bayesian methods. As expected, the multi-model is somewhat more skillful and more reliable than the original models taken alone. The potential value of the multi-model in decision making is illustrated with the profits achieved in simulated trading of a weather derivative. In addition to examining the seasonal models, the article demonstrates that calibrated probability forecasts of weekly average temperatures for leads of 2-4 weeks are also skillful and reliable. The conversion of ensemble forecasts into probability distributions of impact variables is illustrated with degree days derived from the temperature forecasts. Some issues related to loss of stationarity owing to long-term warming are considered. The main conclusion of the article is that properly calibrated probabilistic forecasts possess sufficient skill and reliability to contribute to effective decisions in government and business activities that are sensitive to intraseasonal and seasonal climate variability.
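
    Combining two forecast systems according to their historical performance, as described above, can be sketched by weighting each model's probability forecast by its inverse historical Brier score. This is a simple stand-in for the Bayesian method the article uses, with hypothetical names:

```python
import numpy as np

def brier(p, outcome):
    """Mean squared error of probability forecasts against 0/1 outcomes."""
    return np.mean((np.asarray(p, float) - np.asarray(outcome, float)) ** 2)

def combine_forecasts(p_a, p_b, hist_a, hist_b, hist_outcomes):
    """Weight two probability forecasts by inverse historical Brier
    score, so the historically sharper model dominates the blend."""
    wa = 1.0 / brier(hist_a, hist_outcomes)
    wb = 1.0 / brier(hist_b, hist_outcomes)
    return (wa * np.asarray(p_a, float) + wb * np.asarray(p_b, float)) / (wa + wb)
```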

  5. Propagating Class and Method Combination

    DEFF Research Database (Denmark)

    Ernst, Erik

    1999-01-01

    number of implicit combinations. For example, it is possible to specify separate aspects of a family of classes, and then combine several aspects into a full-fledged class family. The combination expressions would explicitly combine whole-family aspects, and by propagation implicitly combine the aspects...

  6. Combined radiotherapy-chemotherapy

    International Nuclear Information System (INIS)

    Steel, G.G.

    1989-01-01

    This paper presents the clinically confirmed benefits of combined chemotherapy-radiotherapy. They have been found in a small group of diseases that respond to chemotherapy alone. According to the author, only when a drug or drug combination has the ability to eradicate occult disease, or substantially to reduce the size of objectively measurable disease, is there likely to be a demonstrable benefit from its use in conjunction with radiotherapy. It is the author's belief that the immediate future lies in selecting drugs and patients in which a good chemotherapeutic response can be expected, avoiding drugs that seriously enhance radiation damage to normal tissues, and keeping drug and radiation treatments far enough apart in time to minimize interactions.

  7. Combination Chemotherapy for Influenza

    Directory of Open Access Journals (Sweden)

    Robert G. Webster

    2010-07-01

    Full Text Available The emergence of pandemic H1N1 influenza viruses in April 2009 and the continuous evolution of highly pathogenic H5N1 influenza viruses underscore the urgency of novel approaches to chemotherapy for human influenza infection. Anti-influenza drugs are currently limited to the neuraminidase inhibitors (oseltamivir and zanamivir) and to M2 ion channel blockers (amantadine and rimantadine), although resistance to the latter class develops rapidly. Potential targets for the development of new anti-influenza agents include the viral polymerase (and endonuclease), the hemagglutinin, and the non-structural protein NS1. The limitations of monotherapy and the emergence of drug-resistant variants make combination chemotherapy the logical therapeutic option. Here we review the experimental data on combination chemotherapy with currently available agents and the development of new agents and therapy targets.

  8. Combined XRD and XAS

    International Nuclear Information System (INIS)

    Ehrlich, S.N.; Hanson, J.C.; Lopez Camara, A.; Barrio, L.; Estrella, M.; Zhou, G.; Si, R.; Khalid, S.; Wang, Q.

    2011-01-01

    X-ray diffraction (XRD) and X-ray absorption fine structure (XAFS) are complementary techniques for investigating the structure of materials. XRD probes long range order and XAFS probes short range order. We have combined the two techniques at one synchrotron beamline, X18A at the NSLS, allowing samples to be studied in a single experiment. This beamline will allow for coordinated measurements of local and long range structural changes in chemical transformations and phase transitions using both techniques.

  9. Transfer function combinations

    KAUST Repository

    Zhou, Liang; Schott, Mathias; Hansen, Charles

    2012-01-01

    Direct volume rendering has been an active area of research for over two decades. Transfer function design remains a difficult task since current methods, such as traditional 1D and 2D transfer functions, are not always effective for all data sets. Various 1D or 2D transfer function spaces have been proposed to improve classification exploiting different aspects, such as using the gradient magnitude for boundary location and statistical, occlusion, or size metrics. In this paper, we present a novel transfer function method which can provide more specificity for data classification by combining different transfer function spaces. In this work, a 2D transfer function can be combined with 1D transfer functions which improve the classification. Specifically, we use the traditional 2D scalar/gradient magnitude, 2D statistical, and 2D occlusion spectrum transfer functions and combine these with occlusion and/or size-based transfer functions to provide better specificity. We demonstrate the usefulness of the new method by comparing to the following previous techniques: 2D gradient magnitude, 2D occlusion spectrum, 2D statistical transfer functions and 2D size based transfer functions. © 2012 Elsevier Ltd.
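
    Combining transfer function spaces as described above can be reduced to multiplying the opacities that each space assigns to a sample, so a voxel is kept only if every classifier selects it. A minimal sketch under that assumption (the lookup-table indices and names are illustrative, not the paper's implementation):

```python
import numpy as np

def combined_opacity(tf2d, tf1d, scalar_idx, grad_idx, size_idx):
    """Combine a 2D scalar/gradient-magnitude transfer function with a
    1D size-based transfer function by multiplying their opacities."""
    return tf2d[scalar_idx, grad_idx] * tf1d[size_idx]
```

    Multiplication makes the 1D function act as a mask that refines the 2D classification, which is the extra specificity the paper is after.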

  11. A coal combine

    Energy Technology Data Exchange (ETDEWEB)

    Wlachovsky, I; Bartos, J

    1980-02-15

    A design is presented for a coal combine equipped with two drum operating units. At both ends of the upper surface of the body, two coal saws are mounted by means of a lever system. In their operating position these saws cut a slot in the part of the coal block that is not reached by the drum operating units. The coal between the slot and the support falls down onto the longwall scraper conveyor. The lever system of each coal saw is controlled by two hydraulic jacks. One jack is mounted vertically on the end wall of the combine body and raises the horizontal arm of the lever, one end of which is hinged to the body of the combine, to the required height. The coal saw is hinged to the ''free'' end of that lever and is connected by the second hydraulic jack to the horizontal arm of the lever system; this jack clamps the coal saw against the face.

  12. COMBINE/PC - a portable neutron spectrum and cross-section generation program

    International Nuclear Information System (INIS)

    Nigg, D.W.; Grimesey, R.A.; Curtis, R.L.

    1990-01-01

    Use of personal computers and engineering workstations for complex scientific computations has expanded rapidly in the past few years. This trend is expected to continue in the future with the introduction of increasingly sophisticated microprocessors and microcomputer systems. In response to this, an integrated system of neutronics and radiation transport software suitable for operation in an IBM personal computer (PC)-class environment has been under development at the Idaho National Engineering Laboratory (INEL) for the past 3 years. A key component of this system will be a module to produce application-specific multigroup cross-section libraries that can be used in various neutron transport and diffusion theory code modules. This software module, referred to as COMBINE/PC, was recently completed at INEL and is the subject of this paper. COMBINE/PC was developed to provide an ENDF/B-based neutron cross-section generation capability of sufficient sophistication to handle a wide variety of practical fission and fusion-related applications while maintaining a compact machine-independent structure.

  13. Climatological attribution of wind power ramp events in East Japan and their probabilistic forecast based on multi-model ensembles downscaled by analog ensemble using self-organizing maps

    Science.gov (United States)

    Ohba, Masamichi; Nohara, Daisuke; Kadokura, Shinji

    2016-04-01

    Severe storms and other extreme weather events can interrupt the operation of wind turbines on a large scale, causing unexpected "wind ramp events". In this study, we present an application of self-organizing maps (SOMs) to the climatological attribution of wind ramp events and their probabilistic prediction. The SOM is an automatic data-mining clustering technique that allows us to summarize a high-dimensional data space in terms of a set of reference vectors. The SOM is applied to analyze and connect the relationship between atmospheric patterns over Japan and wind power generation. The SOM is trained on sea level pressure derived from the JRA55 reanalysis over the target area (the Tohoku region in Japan), whereby a two-dimensional lattice of weather patterns (WPs) classified during the 1977-2013 period is obtained. For comparison with the atmospheric data, long-term wind power generation is reconstructed by using the high-resolution surface observation network AMeDAS (Automated Meteorological Data Acquisition System) in Japan. Our analysis extracts seven typical WPs that are linked to frequent occurrences of wind ramp events. Probabilistic forecasts of wind power generation and ramps are conducted by using the obtained SOM. The probabilities are derived from the multiple SOM lattices by matching output from the TIGGE multi-model global forecast to the WPs on the lattices. Since this method effectively accounts for the empirical uncertainties in the historical data, wind power generation and ramps are probabilistically forecast from the output of the global models. The forecasts of wind power generation and ramp events show relatively good skill scores under this downscaling technique. It is expected that the results of this study will provide better guidance to the user community and contribute to the future development of a system operation model for the transmission grid operator.
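
The SOM clustering step described in this abstract can be sketched as follows (illustrative only; the data, lattice size, and schedules are invented stand-ins for the JRA55 sea-level-pressure fields and the WP lattice, not the authors' setup):

```python
import numpy as np

# Minimal self-organizing map sketch. Each sample stands in for a
# flattened sea-level-pressure field; the trained lattice of reference
# vectors plays the role of the weather patterns (WPs).

rng = np.random.default_rng(42)
n_samples, n_features = 200, 16      # e.g. flattened pressure grids
data = rng.normal(size=(n_samples, n_features))

rows, cols = 3, 3                    # 2D lattice of reference vectors
weights = rng.normal(size=(rows * cols, n_features))
grid = np.array([(r, c) for r in range(rows) for c in range(cols)])

n_iter, lr0, sigma0 = 2000, 0.5, 1.5
for t in range(n_iter):
    x = data[rng.integers(n_samples)]
    bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
    frac = t / n_iter
    lr = lr0 * (1 - frac)                # decaying learning rate
    sigma = sigma0 * (1 - frac) + 1e-3   # decaying neighborhood width
    dist2 = ((grid - grid[bmu]) ** 2).sum(axis=1)
    h = np.exp(-dist2 / (2 * sigma ** 2))   # neighborhood function
    weights += lr * h[:, None] * (x - weights)

# Classify each sample to its nearest reference vector (its WP).
labels = np.argmin(
    ((data[:, None, :] - weights[None, :, :]) ** 2).sum(axis=2), axis=1)
print(np.bincount(labels, minlength=rows * cols))
```

Matching an ensemble forecast member to its nearest WP on the lattice (as the paper does with TIGGE output) is the same nearest-reference-vector lookup used for `labels`.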

  14. Belladonna Alkaloid Combinations and Phenobarbital

    Science.gov (United States)

    Donnatal® Elixir (as a combination product containing Atropine, Hyoscyamine, Phenobarbital, Scopolamine) ... PB Hyos® Elixir (as a combination product containing Atropine, Hyoscyamine, Phenobarbital, Scopolamine)

  15. Coherent laser beam combining

    CERN Document Server

    Brignon, Arnaud

    2013-01-01

    Recently, the improvement of diode pumping in solid state lasers and the development of double clad fiber lasers have made it possible to maintain excellent laser beam quality with single mode fibers. However, the fiber output power is often limited below a power damage threshold. Coherent laser beam combining (CLBC) brings a solution to these limitations by identifying the most efficient architectures and allowing for excellent spectral and spatial quality. This knowledge will become critical for the design of the next generation of high-power lasers and is of major interest to many industrial, environme…

  16. Combined radiochemotherapy. Review

    Energy Technology Data Exchange (ETDEWEB)

    Konecny, M; Mechl, Z [Onkologicky Ustav, Brno (Czechoslovakia). Betatronove Pracoviste

    1978-09-01

    Physical, chemical, biochemical and biological modifications of the radiation reaction are described. Biochemical modification with antimetabolites has so far been the one most frequently used in clinical oncology. It has not yet been clarified whether treatment should begin with irradiation or with chemotherapy. Conclusions are presented from the study of simultaneous chemotherapy and radiotherapy, i.e., the attempt at synchronization of the tumor population. The present existence of a great number of combined treatment plans is largely a consequence of empirical data which have not yet been clinically confirmed.

  17. Structural load combinations

    International Nuclear Information System (INIS)

    Hwang, H.; Reich, M.; Ellingwood, B.; Shinozuka, M.

    1985-01-01

    This paper presents the latest results of the program entitled ''Probability Based Load Combinations For Design of Category I Structures''. In FY 85, a probability-based reliability analysis method has been developed to evaluate the safety of shear wall structures. The shear walls are analyzed using stick models with beam elements and may be subjected to dead load, live load and in-plane earthquake. Both shear and flexure limit states are defined analytically. The limit state probabilities can be evaluated on the basis of these limit states. Utilizing the reliability analysis method mentioned above, load combinations for the design of shear wall structures have been established. The proposed design criteria are in the load and resistance factor design (LRFD) format. In this study, the resistance factors for shear and flexure and the load factors for dead and live loads are preassigned, while the load factor for SSE is determined for a specified target limit state probability of 1.0 x 10^-6 or 1.0 x 10^-5 during a lifetime of 40 years. 23 refs., 9 tabs
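
The target limit state probabilities quoted above can be related to an equivalent reliability (safety) index, a standard interpretation in structural reliability (not necessarily the paper's own computation): P_f = Φ(−β), so β follows from the inverse standard normal CDF.

```python
from statistics import NormalDist

# Standard relation between a lifetime limit-state probability P_f and
# the reliability index beta: P_f = Phi(-beta), Phi the standard normal
# CDF. Illustrative only; not the paper's reliability analysis.

std_normal = NormalDist()

for p_f in (1.0e-6, 1.0e-5):
    beta = -std_normal.inv_cdf(p_f)
    print(f"P_f = {p_f:.0e}  ->  beta = {beta:.2f}")
    # beta is about 4.75 for 1e-6 and 4.26 for 1e-5
```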

  18. Combining Boosted Global

    Directory of Open Access Journals (Sweden)

    Szidónia Lefkovits

    2011-06-01

    The domain of object detection attracts wide interest due to its numerous application possibilities, especially real-time applications. All of them require a high detection rate combined with short processing time. One of the most efficient systems working with visual information was presented in the publications of Viola et al. [1], [2]. This detection system uses classifiers based on Haar-like separating features combined with the AdaBoost learning algorithm. The most important bottleneck of the system is the large number of false detections at high hit rates. In this paper we propose to overcome this disadvantage by using specialized part classifiers. This aim comes from the observation that the target object does not resemble the false detections at all. The reason for this is the way Haar-like features are coded: they handle image patches while neglecting edges and contours. In order to obtain a more robust classifier, a global-aspect method is combined with a part-based method, with the goal of improving the performance of the detector without a significant increase in detection time.
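
A toy sketch of the AdaBoost scheme the abstract builds on (illustrative only: the data, thresholds, and feature columns are invented stand-ins for Haar-like feature responses, not the authors' detector):

```python
import numpy as np

# AdaBoost with threshold "stumps" standing in for Haar-like feature
# classifiers, on synthetic data.

rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 4))        # each column: one feature's response
y = np.where(X[:, 0] + 0.5 * X[:, 2] > 0, 1, -1)   # ground-truth labels

w = np.full(n, 1.0 / n)            # sample weights
stumps, alphas = [], []
for _ in range(20):                # 20 boosting rounds
    best = None
    # pick the stump (feature, threshold, sign) with lowest weighted error
    for f in range(X.shape[1]):
        for thr in np.quantile(X[:, f], np.linspace(0.05, 0.95, 19)):
            for sign in (1, -1):
                pred = np.where(sign * (X[:, f] - thr) > 0, 1, -1)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, f, thr, sign)
    err, f, thr, sign = best
    err = min(max(err, 1e-10), 1 - 1e-10)
    alpha = 0.5 * np.log((1 - err) / err)    # stump vote weight
    pred = np.where(sign * (X[:, f] - thr) > 0, 1, -1)
    w *= np.exp(-alpha * y * pred)           # re-weight: focus on mistakes
    w /= w.sum()
    stumps.append((f, thr, sign))
    alphas.append(alpha)

# Strong classifier: sign of the weighted vote of all stumps.
score = sum(a * np.where(s * (X[:, f] - t) > 0, 1, -1)
            for a, (f, t, s) in zip(alphas, stumps))
print("training accuracy:", (np.sign(score) == y).mean())
```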

  19. Structural load combinations

    International Nuclear Information System (INIS)

    Hwang, H.; Reich, M.; Ellingwood, B.; Shinozuka, M.

    1986-01-01

    This paper presents the latest results of the program entitled, ''Probability Based Load Combinations For Design of Category I Structures''. In FY 85, a probability-based reliability analysis method has been developed to evaluate safety of shear wall structures. The shear walls are analyzed using stick models with beam elements and may be subjected to dead load, live load and in-plane earthquake. Both shear and flexure limit states are defined analytically. The limit state probabilities can be evaluated on the basis of these limit states. Utilizing the reliability analysis method mentioned above, load combinations for the design of shear wall structures have been established. The proposed design criteria are in the load and resistance factor design (LRFD) format. In this study, the resistance factors for shear and flexure and load factors for dead and live loads are preassigned, while the load factor for SSE is determined for a specified target limit state probability of 1.0 x 10^-6 or 1.0 x 10^-5 during a lifetime of 40 years

  20. Combined Heat and Power

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2008-07-01

    At their 2007 Summit in Heiligendamm, G8 leaders called on countries to 'adopt instruments and measures to significantly increase the share of combined heat and power (CHP) in the generation of electricity.' As a result, energy, economic, environmental and utility regulators are looking for tools and information to understand the potential of CHP and to identify appropriate policies for their national circumstances. This report forms the first part of the response. It includes answers to policy makers' questions about the potential economic, energy and environmental benefits of an increased policy commitment to CHP. It also includes for the first time integrated IEA data on global CHP installations, and analyses the benefits of increased CHP investment in the G8+5 countries. A companion report will be produced later in 2008 to document best practice policy approaches that have been used to expand the use of CHP in a variety of countries.

  1. Biomass Gasification Combined Cycle

    Energy Technology Data Exchange (ETDEWEB)

    Judith A. Kieffer

    2000-07-01

    Gasification combined cycle continues to represent an important defining technology area for the forest products industry. The ''Forest Products Gasification Initiative'', organized under the Industry's Agenda 2020 technology vision and supported by the DOE ''Industries of the Future'' program, is well positioned to guide these technologies to commercial success within a five- to ten-year timeframe, given supportive federal budgets and public policy. Commercial success will advance significant environmental and renewable energy goals that are shared by the Industry and the Nation. The Battelle/FERCO LIVG technology, which is the technology of choice for the application reported here, remains of high interest due to characteristics that make it well suited for integration with the infrastructure of a pulp production facility. The capital cost, operating economics and long-term demonstration of this technology are key inputs to future economically sustainable projects and must be verified by the 200 BDT/day demonstration facility currently operating in Burlington, Vermont. The New Bern application that was the initial objective of this project is not currently economically viable and will not be implemented at this time due to several changes at and around the mill which have occurred since the inception of the project in 1995. The analysis shows that for this technology, and likely other gasification technologies as well, the first few installations will require unique circumstances, or supportive public policies, or both to attract host sites and investors.

  2. Combined approach for gynecomastia

    Directory of Open Access Journals (Sweden)

    El-Sabbagh, Ahmed Hassan

    2016-02-01

    Background: Gynecomastia is a deformity of the male chest. Treatment of gynecomastia has varied from direct surgical excision to other techniques (mainly liposuction) to a combination of both. Skin excision is done according to the grade. In this study, experience with using liposuction as an adjuvant to surgical excision is described. Patients and methods: Between September 2012 and April 2015, a total of 14 patients were treated with liposuction and surgical excision through a periareolar incision. Preoperative evaluation was done in all cases to exclude any underlying cause of gynecomastia. Results: All fourteen patients were treated bilaterally (28 breast tissues). Their ages ranged between 13 and 33 years. Two patients were classified as grade I, and four as grade IIa, IIb or III, respectively. The first 3 patients showed seroma. Partial superficial epidermolysis of the areola occurred in 2 cases. Superficial infection of the incision occurred in one case and was treated conservatively. Conclusion: All grades of gynecomastia were managed by the same approach. Skin excision was added for a patient who had severe skin excess with limited activity and poor skin complexion. No cases required another operative setting or asked for a 2nd opinion.

  3. Combined approach for gynecomastia.

    Science.gov (United States)

    El-Sabbagh, Ahmed Hassan

    2016-01-01

    Gynecomastia is a deformity of the male chest. Treatment of gynecomastia has varied from direct surgical excision to other techniques (mainly liposuction) to a combination of both. Skin excision is done according to the grade. In this study, experience with using liposuction as an adjuvant to surgical excision is described. Between September 2012 and April 2015, a total of 14 patients were treated with liposuction and surgical excision through a periareolar incision. Preoperative evaluation was done in all cases to exclude any underlying cause of gynecomastia. All fourteen patients were treated bilaterally (28 breast tissues). Their ages ranged between 13 and 33 years. Two patients were classified as grade I, and four as grade IIa, IIb or III, respectively. The first 3 patients showed seroma. Partial superficial epidermolysis of the areola occurred in 2 cases. Superficial infection of the incision occurred in one case and was treated conservatively. All grades of gynecomastia were managed by the same approach. Skin excision was added for a patient who had severe skin excess with limited activity and poor skin complexion. No cases required another operative setting or asked for a 2nd opinion.

  4. Challenges of Combining Perspectives.

    Science.gov (United States)

    Sundvall, Maria; Titelman, David; Bäärnhielm, Sofie

    2018-02-23

    Asylum seekers have an increased risk of suicide and suicidal behavior, with differences related to origin, gender, and age. There are barriers to communication in clinical encounters between asylum seekers and clinicians, and there is insufficient knowledge about how communication in the clinical encounter affects suicide risk in female asylum seekers. The aim was to explore the documented communication between female asylum-seeking suicide attempters and clinicians and how it affects treatment. The medical records of 18 asylum-seeking women who had attempted suicide were analyzed with content analysis. Communication between patients and clinicians was affected by: the unbearable realities of the women; difficulties for clinicians in decoding languages of distress and in understanding trauma and the subjective meanings of suicide; challenges of combining patients' and clinicians' perspectives; and a sense of shared powerlessness. The medical records did not give direct access to the patient's experience, only to the patient as documented by the clinician. The results suggest that clinicians working with asylum seekers who have attempted suicide need to develop an understanding of social and cultural factors and of trauma issues. A question for further study is how an enhanced integration of context and subjectivity in psychiatric practice would equip clinicians for the specific challenges encountered.

  5. A sophisticated programmable miniaturised pump for insulin delivery.

    Science.gov (United States)

    Klein, J C; Slama, G

    1980-09-01

    We have conceived a truly pre-programmable infusion system usable for the intravenous administration of insulin in diabetic subjects. The original system has been built into a small, commercially available syringe-pump of which only the case and the mechanical parts have been kept. The computing unit has a timer, a programmable memory of 512 words by 8 bits and a digital-to-frequency converter to run the motor which drives the syringe. The memory contains 8 profiles of insulin injection, each stored in digital form over 64 words. Each profile is selected by the patient before eating according to the carbohydrate content of the planned meal and lasts about two hours, starting from and returning to the basal rate of insulin, at which the pump remains until the next profile selection. Amounts, profiles and durations of insulin injection are either mean values deduced from previous studies with a closed-loop artificial pancreas or personally fitted values; they are stored in an instantly replaceable memory cell. This device allows the patient to choose the time, nature and amount of his food intake.
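
The memory layout described in this abstract (8 profiles of 64 eight-bit words = 512 words) can be sketched as follows; the rate values, step length, and profile shape are invented for illustration, not taken from the device:

```python
# Hypothetical sketch of the profile memory: 8 profiles x 64 one-byte
# entries. The patient selects a profile; each entry sets the motor
# rate for one step, so 64 steps of ~2 min span about two hours,
# starting from and returning to the basal rate.

BASAL = 10                                   # arbitrary basal rate code
memory = [[BASAL] * 64 for _ in range(8)]    # 8 profiles, 64 words each

# Example profile 3: a bolus rising from basal to a peak and decaying
# back to basal, then holding basal for the rest of the two hours.
memory[3] = [BASAL + max(0, 60 - abs(i - 10) * 6) if i < 21 else BASAL
             for i in range(64)]

def run_profile(profile_id, step_minutes=2):
    """Yield (minute, rate) pairs; 64 steps of ~2 min span ~2 hours."""
    for i, word in enumerate(memory[profile_id]):
        yield i * step_minutes, word

schedule = list(run_profile(3))
print(len(schedule), schedule[0], schedule[-1])
```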

  6. Sophistication of burnup analysis system for fast reactor

    International Nuclear Information System (INIS)

    Yokoyama, Kenji; Hirai, Yasushi; Hyoudou, Hideaki; Tatsumi, Masahiro

    2010-02-01

    Improvement of the prediction accuracy for the neutronics properties of fast reactor cores is one of the most important study domains, both for achieving high economical plant efficiency based on reasonably advanced designs and for increased reliability and safety margins. In a former study, considerable improvement in the prediction accuracy of neutronics design was achieved through the development of the unified constants library, a fruit of a series of critical experiments such as JUPITER, applied through reactor constant adjustment. For the design of fast reactor cores, however, improvement of not only static properties but also burnup properties is very important. For this purpose, it is necessary to improve the prediction accuracy of burnup properties using actual burnup data from 'JOYO' and 'MONJU', the experimental and prototype fast reactors. Recently, the study of effective burnup methods for minor actinides has also become an important theme. However, analysis work tends to become inefficient for lack of functionality suitable for analyzing composition changes due to burnup, since the conventional analysis system is targeted at critical assemblies. Therefore a burnup analysis system for fast reactors with modularity and flexibility is being developed, which would contribute to actual core design work and to the improvement of prediction accuracy. In previous research, we developed a prototype system with functions for performing core and burnup calculations from given constant files (PDS files) and simple user input data, as well as functions for fuel shuffling, which are indispensable for production systems. In the present study, we implemented functions for cell calculations and burnup calculations, so that all steps of an analysis can be carried out within this system alone. In addition, we modified the specification of the user input to improve the convenience of the system.
    Since the implementation done so far had some bottlenecks, we have also improved efficiency and memory usage by modifying the implementation. (author)

  7. Information flows at OS level unmask sophisticated Android malware

    OpenAIRE

    Viet Triem Tong , Valérie; Trulla , Aurélien; Leslous , Mourad; Lalande , Jean-François

    2017-01-01

    The detection of new Android malware is far from a relaxing job: each day new Android malware appear on the market and it remains difficult to identify them quickly. Unfortunately, users still suffer from the lack of truly efficient tools able to detect zero-day malware that has no known signature. The difficulty is that most of the existing approaches rely on static analysis, which malware can defeat by hiding their malicious code. Thus, we believe that i…

  8. Hi-tech in space - Rosetta - a space sophisticate

    Science.gov (United States)

    2004-02-01

    The European Space Agency’s Rosetta mission will be getting under way in February 2004. The Rosetta spacecraft will be pairing up with Comet 67P/Churyumov-Gerasimenko and accompanying it on its journey, investigating the comet’s composition and the dynamic processes at work as it flies sunwards. The spacecraft will even deposit a lander on the comet. “This will be our first direct contact with the surface of a comet,” said Dr Manfred Warhaut, Operations Manager for the Rosetta mission at ESA's European Space Operations Centre (ESOC) in Darmstadt, Germany. The trip is certainly not short: Rosetta will need ten years just to reach the comet. This places extreme demands on its hardware; when the probe meets up with the comet, all instruments must be fully operational, especially since it will have been in “hibernation” for two and a half years of its journey. During this ‘big sleep’, all systems, scientific instruments included, are turned off. Only the on-board computer remains active. Twelve cubic metres of technical wizardry: Rosetta’s hardware fits into a sort of aluminium box measuring just 12 cubic metres. The scientific payload is mounted in the upper part, while the subsystems - on-board computer, transmitter and propulsion system - are housed below. The lander is fixed to the opposite side of the probe from the steerable antenna. As the spacecraft orbits the comet, the scientific instruments will at all times be pointed towards its surface; the antenna and solar panels will point towards the Earth and Sun respectively. For trajectory and attitude control and for the major braking manœuvres, Rosetta is equipped with 24 thrusters each delivering 10 N. That corresponds to the force needed here on Earth to hold a bag containing 10 apples. Rosetta sets off with 1650 kg of propellant on board, accounting for more than half its mass at lift-off. Just 20% of total mass is available for scientific purposes.
    So when developing the research instruments the same rule applied as for supermodels: make every gram count. The calculation seems to have worked out right: the main probe will be carrying 11 scientific instruments and the Rosetta lander a further ten. They will analyse the composition and structure of the comet’s nucleus and study its interaction with the solar wind and the interplanetary plasma. Rosetta - unplugged: “To provide the probe with the power it needs in space, we have given it the biggest solar panels ever carried by a European satellite,” Manfred Warhaut explained. “These cells are its only source of electricity.” They span 32 metres tip to tip while, at 64 m2, the surface area is comparable to that of a two-bedroom flat. The panels may be rotated through 180° to catch the maximum amount of sunlight. These dimensions are also essential because when Rosetta meets Churyumov-Gerasimenko it will be 675 million kilometres away from the Sun. At that distance solar radiation is very weak and the solar collectors will supply only 440 W of power - compared with 8000 W towards the end of the mission when the two companions come closest to the Sun (some 150 million kilometres from our star). “The probe is also equipped with a set of four 10-amp-hour batteries to maintain the power supply while Rosetta flies in the shadow of the comet.” Rosetta lander - standing on its own three legs: The Rosetta lander is another of the mission’s technical highlights. Using its scientific instruments, its job will be to investigate the comet’s surface on location. Thanks to a mechanical arm, the lander will operate in a two-metre radius. The soft landing is a particular problem given the extremely weak gravitational force exerted by the very small comet nucleus; the lander, weighing in at 100 kg on Earth, will on the comet be as light as a sheet of paper. If there were the slightest recoil, it would bounce back uncontrollably like a rubber ball.
To make sure this doesn’t happen, the lander’s three legs are equipped with special shock-absorbers which take up most of the kinetic energy. The legs are also fitted with ice pitons; these bore into the ground immediately on touchdown. At the same moment, the lander fires a harpoon to anchor it to the ground - an opportunity also to investigate the mechanical properties of the surface. “If everything goes according to plan, the mission results could well fundamentally expand our knowledge of comets, just as the Rosetta Stone, after which the probe is named, helped unravel the mystery of Egyptian hieroglyphics,” said Manfred Warhaut. For further information on Rosetta and ESA projects, please consult our portal at: http://www.esa.int/science or directly at http://www.esa.int/rosetta
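
The two power figures quoted in this release are roughly consistent with a simple inverse-square scaling of solar flux (an illustrative check, not from the article; the small remaining gap is plausibly panel temperature and degradation effects):

```python
# Solar flux falls off with the inverse square of distance, so the
# panel output quoted near the Sun (~8000 W at ~150 million km) can be
# scaled to the comet-encounter distance (~675 million km).

p_near, d_near, d_far = 8000.0, 150e6, 675e6   # watts, kilometres
p_far = p_near * (d_near / d_far) ** 2
print(round(p_far))   # ~395 W, the same order as the quoted 440 W
```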

  9. A Proposal for More Sophisticated Normative Principles in Introductory Economics

    Science.gov (United States)

    Schmidt, Stephen

    2017-01-01

    Introductory textbooks teach a simple normative story about the importance of maximizing economic surplus that supports common policy claims. There is little defense of the claim that maximizing surplus is normatively important, which is not obvious to non-economists. Difficulties with the claim that society should maximize surplus are generally…

  10. Sophistication of operator training using BWR plant simulator

    International Nuclear Information System (INIS)

    Ohshiro, Nobuo; Endou, Hideaki; Fujita, Eimitsu; Miyakita, Kouji

    1986-01-01

    In Japanese nuclear power stations, a high capacity factor has been attained owing to improved fuel management, thorough maintenance and inspection, and improvement of facilities. The thorough training of operators in nuclear power stations also contributes substantially to it. The BWR operator training center was established in 1971 and started the training of operators in April 1974. As of the end of March 1986, more than 1800 trainees had completed training. At present, the BWR operator training center operates the No. 1 simulator of the 800 MW class and the No. 2 simulator of the 1100 MW class for training. This report describes newly adopted methods that have given good results: a method of introducing the feeling of being present on the spot into the training environment, and a new testing method introduced into the retraining course. In simulator training, which tends to place emphasis on the central control room, a method of stimulating trainees by having them play the part of on-the-spot correspondents, thereby heightening the training effect of multiple monitoring, was tried, and its effect was confirmed. A confirmation test on the control board was also added. (Kako, I.)

  11. Constructing a Sophistication Index as a Method of Market ...

    African Journals Online (AJOL)

    This study investigates the process of index construction as a means of measuring a hypothetical construct that can typically not be measured by a single question or item and applying it as a method of market segmentation. The availability of incidental secondary data provided a relevant quantitative basis to illustrate this ...

  12. Solving Real-Life Problems: Future Mobile Technology Sophistication

    Directory of Open Access Journals (Sweden)

    FARHAN SHAFIQ

    2016-07-01

    Almost all domains of real life are taking advantage of the latest technologies to enhance their processes, procedures and operations. This integration of technological innovations provides ease of access, flexibility, transparency, reliability and speed for the processes and procedures concerned. The rapid growth of ICT (Information and Communication Technology) and MT (Mobile Technology) provides an opportunity to redesign and re-engineer the processes and procedures of routine life activities. Technology integration and adoption in routine life activities may serve as a compensatory mechanism that assists the population in different ways, such as monitoring older adults and children at home, providing security assistance, monitoring and recording patients' vital signs automatically, controlling and monitoring equipment and devices, and providing assistance in shopping, banking and education as well. Disasters happen suddenly and destroy everything indiscriminately. Adoption and integration of the latest technologies, including ICT and MT, can enhance current disaster management processes, procedures and operations. This research study focuses on the impact of the latest and emerging technology trends on routine life activities and their potential to improve and enhance disaster management activities. MT provides a promising platform for facilitating people in enhancing their routine life activities. This research argues that integration and adoption of mobile computing in the disaster management domain can enhance disaster management activities, promising reduced error, quick information gathering, quick technology-based response and better prioritization of action.

  13. Sophisticated visualization algorithms for analysis of multidimensional experimental nuclear spectra

    International Nuclear Information System (INIS)

    Morhac, M.; Kliman, J.; Matousek, V.; Turzo, I.

    2004-01-01

    This paper describes graphical models for the visualization of 2-, 3- and 4-dimensional scalar data used in the nuclear data acquisition, processing and visualization system developed at the Institute of Physics, Slovak Academy of Sciences. It focuses on the presentation of nuclear spectra (histograms); however, it can equally well be applied to the visualization of arrays of other data types. In the paper we present both conventional and newly developed surface- and volume-rendering visualization techniques. (Authors)

  14. Modern devices the simple physics of sophisticated technology

    CERN Document Server

    Joseph, Charles L

    2016-01-01

    This book discusses the principles of physics through applications of state-of-the-art technologies and advanced instruments. The authors use diagrams, sketches, and graphs coupled with equations and mathematical analysis to enhance the reader's understanding of modern devices. Readers will learn to identify common underlying physical principles that govern several types of devices, while gaining an understanding of the performance trade-off imposed by the physical limitations of various processing methods. The topics discussed in the book assume readers have taken an introductory physics course, college algebra, and have a basic understanding of calculus. * Describes the basic physics behind a large number of devices encountered in everyday life, from the air conditioner to Blu-ray discs * Covers state-of-the-art devices such as spectrographs, photoelectric image sensors, spacecraft systems, astronomical and planetary observatories, biomedical imaging instruments, particle accelerators, and jet engines * Inc...

  15. More Sophisticated Fits of the Orbits of Haumea's Interacting Moons

    Science.gov (United States)

    Oldroyd, William Jared; Ragozzine, Darin; Porter, Simon

    2018-04-01

    Since the discovery of Haumea's moons, Hi’iaka and Namaka, it has been a challenge to model their orbits. With many precision HST observations, Ragozzine & Brown 2009 succeeded in calculating a three-point-mass model, which was essential because Keplerian orbits were not a statistically acceptable fit. New data obtained in 2010 could be fit by adding a J2 and a spin pole to Haumea, but new data from 2015 fell far from the predicted locations, even after an extensive exploration using Bayesian Markov Chain Monte Carlo methods (using emcee). Here we report on continued investigations into why our model cannot fit the full 10-year baseline of data. We note that ignoring Haumea and instead examining the relative motion of the two moons in a Hi’iaka-centered frame leads to adequate fits to the data. This suggests that a full model will require additional parameters connected to Haumea. These parameters are potentially related to photocenter-barycenter shifts, which could be significant enough to affect the fitting process; they are unlikely to be caused by the newly discovered ring (Ortiz et al. 2017) or by unknown satellites (Burkhart et al. 2016). Additionally, we have developed a new SPIN+N-bodY integrator called SPINNY that self-consistently calculates the interactions between n quadrupoles and is designed to test the importance of other possible effects (Haumea C22, satellite torques on the spin pole, the Sun, etc.) on our astrometric fits. By correctly determining the orbits of Haumea’s satellites we develop a better understanding of the physical properties of each of the objects, with implications for the formation of Haumea, its moons, and its collisional family.

  16. Particle tracking in sophisticated CAD models for simulation purposes

    International Nuclear Information System (INIS)

    Sulkimo, J.; Vuoskoski, J.

    1995-01-01

    The transfer of physics detector models from computer aided design systems to physics simulation packages like GEANT suffers from certain limitations. In addition, GEANT is not able to perform particle tracking in CAD models. We describe an application which is able to perform particle tracking in boundary models constructed in CAD systems. The transfer file format used is the new international standard, STEP. The design and implementation of the application was carried out using object-oriented techniques. It will be integrated in the future object-oriented version of GEANT. (orig.)

  17. Particle tracking in sophisticated CAD models for simulation purposes

    Science.gov (United States)

    Sulkimo, J.; Vuoskoski, J.

    1996-02-01

    The transfer of physics detector models from computer aided design systems to physics simulation packages like GEANT suffers from certain limitations. In addition, GEANT is not able to perform particle tracking in CAD models. We describe an application which is able to perform particle tracking in boundary models constructed in CAD systems. The transfer file format used is the new international standard, STEP. The design and implementation of the application was carried out using object-oriented techniques. It will be integrated in the future object-oriented version of GEANT.

  18. Sophistication of computational science and fundamental physics simulations

    International Nuclear Information System (INIS)

    Ishiguro, Seiji; Ito, Atsushi; Usami, Shunsuke; Ohtani, Hiroaki; Sakagami, Hitoshi; Toida, Mieko; Hasegawa, Hiroki; Horiuchi, Ritoku; Miura, Hideaki

    2016-01-01

    Numerical experimental reactor research project is composed of the following studies: (1) nuclear fusion simulation research with a focus on specific physical phenomena of specific equipment, (2) research on advanced simulation method to increase predictability or expand its application range based on simulation, (3) visualization as the foundation of simulation research, (4) research for advanced computational science such as parallel computing technology, and (5) research aiming at elucidation of fundamental physical phenomena not limited to specific devices. Specifically, a wide range of researches with medium- to long-term perspectives are being developed: (1) virtual reality visualization, (2) upgrading of computational science such as multilayer simulation method, (3) kinetic behavior of plasma blob, (4) extended MHD theory and simulation, (5) basic plasma process such as particle acceleration due to interaction of wave and particle, and (6) research related to laser plasma fusion. This paper reviews the following items: (1) simultaneous visualization in virtual reality space, (2) multilayer simulation of collisionless magnetic reconnection, (3) simulation of microscopic dynamics of plasma coherent structure, (4) Hall MHD simulation of LHD, (5) numerical analysis for extension of MHD equilibrium and stability theory, (6) extended MHD simulation of 2D RT instability, (7) simulation of laser plasma, (8) simulation of shock wave and particle acceleration, and (9) study on simulation of homogeneous isotropic MHD turbulent flow. (A.O.)

  19. Solving real-life problems: future mobile technology sophistication

    International Nuclear Information System (INIS)

    Shafiq, F.; Ahsan, K.; Nadeem, A.

    2016-01-01

    Almost all domains of human life now take advantage of the latest technologies to enhance their processes, procedures and operations. The integration of technological innovations brings ease of access, flexibility, transparency, reliability and speed to those processes and procedures. The rapid growth of ICT (Information and Communication Technology) and MT (Mobile Technology) provides an opportunity to redesign and re-engineer the processes and procedures of routine human life activities. Technology integrated and adopted into routine life activities can serve as a compensatory mechanism that assists the population in different ways, such as monitoring older adults and children at home, providing security assistance, automatically monitoring and recording patients' vital signs, controlling and monitoring equipment and devices, and providing assistance in shopping, banking and education as well. Disasters happen suddenly and destroy everything indiscriminately. Adoption and integration of the latest technologies, including ICT and MT, can enhance current disaster management processes, procedures and operations. This research study focuses on the impact of the latest and emerging technology trends on routine life activities and their potential to improve and enhance disaster management activities. MT provides a promising platform that helps people enhance their routine life activities. This research argues that the integration and adoption of mobile computing in the disaster management domain can enhance disaster management activities, promising fewer errors, quicker information gathering, quicker technology-assisted responses and better prioritization of actions. (author)

  20. Sophistication of burnup analysis system for fast reactor (2)

    International Nuclear Information System (INIS)

    Yokoyama, Kenji; Hirai, Yasushi; Tatsumi, Masahiro

    2010-10-01

    Improving the prediction accuracy of the neutronics characteristics of fast reactor cores is one of the most important study domains, both for achieving high plant economy based on reasonably advanced designs and for increasing reliability and safety margins. In earlier work, considerable improvement in neutronics design prediction accuracy was achieved through the development of the unified cross-section set, the fruit of a series of critical experiments such as JUPITER, applied via reactor constant adjustment. For fast reactor core design, improvement of not only static characteristics but also burnup characteristics is very important. For this purpose it is necessary to improve the prediction accuracy of burnup characteristics using actual burnup data from 'JOYO' and 'MONJU', the experimental and prototype fast reactors. Recently, the study of effective burnup methods for minor actinides has also become an important theme. However, analysis work tends to be inefficient because the conventional analysis system, targeted at critical assembly systems, lacks functionality suitable for analysing composition changes due to burnup. Therefore, a burnup analysis system for fast reactors with modularity and flexibility is being developed that will contribute to actual core design work and to improved prediction accuracy. In the previous study, we developed a prototype system which performs core and burnup calculations from given constant files (PDS files) and simple user input data. It also provides the fuel-shuffling functions indispensable to power reactor analysis systems. In the present study, by extending the prototype system, features for the handling of control rods and for the energy collapse of group constants have been designed and implemented.
    Computational results from the present analysis system are stored in restart files that users can access to retrieve detailed information. However, data management has been difficult in practice because of the complex data-access procedure; it was quite hard for anyone other than experts to retrieve data. In the present study, as a remedy for this difficulty, a database management mechanism has been developed by extending the idea of restart files, helping users easily access arbitrary data in the results. (author)

  1. The Necessity of Linguistic Sophistication for Social Workers

    Science.gov (United States)

    Cormican, Elin J.; Cormican, John D.

    1977-01-01

    English language study should be introduced into the social work curriculum since various social judgments people make about each other on the basis of dialectal differences may interfere with communication between social workers and their clients, coworkers, or the general community. (Author/LBH)

  2. Capital Gains Realizations of the Rich and Sophisticated

    OpenAIRE

    Alan J. Auerbach; Jonathan M. Siegel

    2000-01-01

    This paper attempts to bring theoretical and empirical research on capital gains realization behavior closer together by considering whether investors who appear to engage more in strategic tax avoidance activity also respond differently to tax rates. We find that such investors exhibit significantly smaller responses to permanent tax rate changes than other investors. Put another way, a larger part of their response to capital gains tax rates reflects timing, consistent with their closer adh...

  3. Few Governing Boards Engage in Sophisticated Financial Planning, Experts Say

    Science.gov (United States)

    Fain, Paul

    2009-01-01

    Financial stewardship by college governing boards too often stops at balancing the budget. That was the message two finance experts presented last week during the annual meeting of the Association of Governing Boards of Universities and Colleges. Furthermore, the yearly budget exercise can give trustees a misperception of their institutions'…

  4. Constructing a sophistication index as a method of market ...

    African Journals Online (AJOL)

    segmentation method offers researchers and marketing practitioners a ..... Pallant (2010) recommends a minimum value of 0.6 for a good analysis. .... a means of profiling segments, stock farmers are not classified as unsophisticated,.

  5. Simplicity, inference and modelling: keeping it sophisticatedly simple

    National Research Council Canada - National Science Library

    Zellner, Arnold; Keuzenkamp, Hugo A; McAleer, Michael

    2001-01-01

    .... What is meant by simplicity? How is simplicity measured? Is there an optimum trade-off between simplicity and goodness-of-fit? What is the relation between simplicity and empirical modelling? What is the relation between simplicity and prediction? What is the connection between simplicity and convenience? The book concludes with reflect...

  6. Solving the Sophistication-Population Paradox of Game Refinement Theory

    OpenAIRE

    Xiong , Shuo; Tiwary , Parth ,; Iida , Hiroyuki

    2016-01-01

    Part 4: Short Papers; International audience; A mathematical model of game refinement was proposed based on uncertainty of game outcome. This model has been shown to be useful in measuring the entertainment element in the domains such as boardgames and sport games. However, game refinement theory has not been able to explain the correlation between the popularity of a game and the game refinement value. This paper introduces another aspect in the study of game entertainment, the concept of “a...

  7. Present status and future of the sophisticated work station

    Science.gov (United States)

    Ishida, Haruhisa

    The advantages of the workstation are explained by comparing its software and hardware functions with those of the personal computer. Desktop publishing is described as one example that exploits the workstation's capabilities. The future of UNIX as the workstation operating system is predicted by describing the competition between the AT&T/Sun Microsystems group, which intends to take the leadership by integrating the currently most popular Berkeley version with System V, and the group led by IBM. The development of RISC processors, the TRON Plan and the Sigma Project by MITI are also mentioned as background.

  8. LIDAR COMBINED SCANNING UNIT

    Directory of Open Access Journals (Sweden)

    V. V. Elizarov

    2016-11-01

    Full Text Available Subject of Research. The results of the development of a lidar combined scanning unit for locating hydrocarbon leaks are presented. The unit performs high-speed scanning of the investigated space in both wide- and narrow-angle fields. Method. Scanning in the wide angular field is produced along a one-line scanning path by means of a movable aluminum mirror with a frequency of 20 Hz and a swing amplitude of 20 degrees. Narrow-field scanning is performed along a spiral path by the deflector; the beam is deflected by rotating the optical wedges that form part of the deflector at an angle of ±50. The scanning node is controlled by a specialized software product written in the C# programming language. Main Results. This scanning unit allows scanning of the investigated area at a distance of 50-100 m with spatial resolution at the level of 3 cm. The positioning accuracy of the laser beam in space is 15'. The developed scanning unit makes it possible to sweep the entire investigated area in no more than 1 ms at wedge rotation frequencies from 50 to 200 Hz. The problem of unambiguously determining the geographical coordinates of the beam in space is solved at the software level from the rotation angles of the mirrors and optical wedges; the lidar system's own coordinates are determined by means of GPS. Practical Relevance. The development results open the possibility of increasing the spatial resolution of the scanning systems of a wide range of lidars and can provide high positioning accuracy of the laser beam in space.
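The spiral path produced by a pair of rotating optical wedges can be sketched with a small-angle Risley-pair model: each wedge deflects the beam by a fixed angle in the direction of its current rotation angle, and the two deflections add as 2-D vectors, so unequal rotation rates sweep the radius and trace a spiral. This is an illustrative approximation, not the unit's actual optical design; the function names and the identical-wedge assumption are the author's.

```python
import math

def risley_deflection(delta1, delta2, theta1, theta2):
    """Small-angle Risley-pair model (an assumption, not the paper's exact
    optics): wedge i deflects the beam by a fixed angle delta_i in the
    direction of its rotation angle theta_i; the deflections add as vectors."""
    x = delta1 * math.cos(theta1) + delta2 * math.cos(theta2)
    y = delta1 * math.sin(theta1) + delta2 * math.sin(theta2)
    return x, y

def spiral_scan(delta, f1, f2, t_end, steps):
    """Trace the scan pattern of two identical wedges rotating at
    frequencies f1 and f2 (Hz): the radius oscillates between 0 and
    2*delta, giving the spiral-like coverage described in the abstract."""
    path = []
    for k in range(steps):
        t = t_end * k / steps
        path.append(risley_deflection(delta, delta,
                                      2 * math.pi * f1 * t,
                                      2 * math.pi * f2 * t))
    return path
```

With the wedges phase-aligned the deflections add (radius 2*delta); with opposite phases they cancel, which is why differential rotation covers the whole narrow field.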

  9. Combined dyslipidemia in childhood.

    Science.gov (United States)

    Kavey, Rae-Ellen W

    2015-01-01

    Combined dyslipidemia (CD) is now the predominant dyslipidemic pattern in childhood, characterized by moderate-to-severe elevation in triglycerides and non-high-density lipoprotein cholesterol (non-HDL-C), minimal elevation in low-density lipoprotein cholesterol (LDL-C), and reduced HDL-C. Nuclear magnetic resonance spectroscopy shows that the CD pattern is represented at the lipid subpopulation level as an increase in small, dense LDL and in overall LDL particle number plus a reduction in total HDL-C and large HDL particles, a highly atherogenic pattern. In youth, CD occurs almost exclusively with obesity and is highly prevalent, seen in more than 40% of obese adolescents. CD in childhood predicts pathologic evidence of atherosclerosis and vascular dysfunction in adolescence and young adulthood, and early clinical cardiovascular events in adult life. There is a tight connection between CD, visceral adiposity, insulin resistance, nonalcoholic fatty liver disease, and the metabolic syndrome, suggesting an integrated pathophysiological response to excessive weight gain. Weight loss, changes in dietary composition, and increases in physical activity have all been shown to improve CD significantly in children and adolescents in short-term studies. Most importantly, even small amounts of weight loss are associated with significant decreases in triglyceride levels and increases in HDL-C levels with improvement in lipid subpopulations. Diet change focused on limitation of simple carbohydrate intake with specific elimination of all sugar-sweetened beverages is very effective. Evidence-based recommendations for initiating diet and activity change are provided. Rarely, drug therapy is needed, and the evidence for drug treatment of CD in childhood is reviewed. Copyright © 2015 National Lipid Association. Published by Elsevier Inc. All rights reserved.

  10. A combined PLC and CPU approach to multiprocessor control

    International Nuclear Information System (INIS)

    Harris, J.J.; Broesch, J.D.; Coon, R.M.

    1995-10-01

    A sophisticated multiprocessor control system has been developed for use in the E-Power Supply System Integrated Control (EPSSIC) on the DIII-D tokamak. EPSSIC provides control and interlocks for the ohmic heating coil power supply and its associated systems. Of particular interest is the architecture of this system: both a Programmable Logic Controller (PLC) and a Central Processor Unit (CPU) have been combined on a standard VME bus. The PLC and CPU input and output signals are routed through signal conditioning modules, which provide the necessary voltage and ground isolation. Additionally these modules adapt the signal levels to that of the VME I/O boards. One set of I/O signals is shared between the two processors. The resulting multiprocessor system provides a number of advantages: redundant operation for mission critical situations, flexible communications using conventional TCP/IP protocols, the simplicity of ladder logic programming for the majority of the control code, and an easily maintained and expandable non-proprietary system

  11. Primate empathy: three factors and their combinations for empathy-related phenomena.

    Science.gov (United States)

    Yamamoto, Shinya

    2017-05-01

    Empathy as a research topic is receiving increasing attention, although there seems to be some confusion over the definition of empathy across different fields. Frans de Waal (de Waal FBM. Putting the altruism back into altruism: the evolution of empathy. Annu Rev Psychol 2008, 59:279-300. doi:10.1146/annurev.psych.59.103006.093625) used empathy as an umbrella term and proposed a comprehensive model for the evolution of empathy, with some of its basic elements present in nonhuman animals. In de Waal's model, empathy consists of several layers distinguished by the cognitive levels they require; the perception-action mechanism plays the core role in connecting ourselves and others. Human-like empathy such as perspective-taking then develops in the outer layers according to cognitive sophistication, leading to prosocial acts such as targeted helping. I agree that animals demonstrate many empathy-related phenomena; however, the species differences and the level of cognitive sophistication of the phenomena might be interpreted in another way than by this simple linearly developing model. Our recent studies with chimpanzees showed that their perspective-taking ability does not necessarily lead to proactive helping behavior. Herein, as a springboard for further studies, I reorganize the empathy-related phenomena by proposing a combination model instead of the linear development model. This combination model is composed of three organizing factors: matching with others, understanding of others, and prosociality. With these three factors and their combinations, most empathy-related matters can be categorized and mapped to the appropriate context; this may be a good first step in discussing the evolution of empathy in relation to the neural connections in human and nonhuman animal brains. I would like to propose further comparative studies, especially from the viewpoint of Homo-Pan (chimpanzee and bonobo) comparison. WIREs Cogn Sci 2017, 8:e1431. doi: 10.1002/wcs.1431

  12. Philosophical rhetoric and sophistical dialectic: some implications of Plato’s critique of rhetoric in the Phaedrus and the Sophist

    NARCIS (Netherlands)

    Wagemans, J.H.M.; Blair, J.A.; Farr, D.; Hansen, H.V.; Johnson, R.H.; Tindale, C.W.

    2003-01-01

    My PhD research concentrates on the philosophical backgrounds of the relationship between dialectic and rhetoric. In order to pinpoint the discord between both disciplines, I studied their genesis and early history. In this paper, some characteristics of both disciplines will be outlined by

  13. Conceptual Combination During Sentence Comprehension

    Science.gov (United States)

    Swinney, David; Love, Tracy; Walenski, Matthew; Smith, Edward E.

    2008-01-01

    This experiment examined the time course of integration of modifier-noun (conceptual) combinations during auditory sentence comprehension using cross-modal lexical priming. The study revealed that during ongoing comprehension, there is initial activation of features of the noun prior to activation of (emergent) features of the entire conceptual combination. These results support compositionality in conceptual combination; that is, they indicate that features of the individual words constituting a conceptual combination are activated prior to combination of the words into a new concept. PMID:17576278

  14. Generator of combined logical signals

    International Nuclear Information System (INIS)

    Laviron, Andre; Berard, Claude.

    1982-01-01

    The invention concerns a generator of combined logical signals that forms combinations of two outputs at logical level 1 and N-2 outputs at logical level 0 among the N generator outputs. The generator is characterized in that it includes a set of N means for storing combinations. Means enable the N storage means to be loaded with the logical levels corresponding to a preset starting combination, control the shifting of the contents of the storage means, and control, by transfer facilities, the transfers of contents between these storage means. Controls enable the storage means to be actuated in order to obtain combinations of logical levels 1 and 0. The generation of combinations can be stopped after another preset combination. The application is the testing of safety circuits for nuclear power stations [fr]
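The behaviour described in the abstract — stepping through every N-bit pattern with exactly two outputs at level 1, from a preset start combination to a preset stop combination — can be sketched in software. This is only an illustrative model of the patented shift-register hardware; the function name and the lexicographic walk order are the author's assumptions.

```python
from itertools import combinations, dropwhile, takewhile

def two_hot_patterns(n, start, stop):
    """Yield N-bit patterns with exactly two bits at logical 1, walking in
    lexicographic order of the 1-positions from the preset `start` pair up
    to and including the preset `stop` pair (a software sketch of what the
    patent realises with storage means, shifts and transfers)."""
    seq = dropwhile(lambda c: c < tuple(start), combinations(range(n), 2))
    for pair in takewhile(lambda c: c <= tuple(stop), seq):
        bits = [0] * n
        for i in pair:
            bits[i] = 1
        yield bits
```

For N outputs there are N(N-1)/2 such two-hot combinations in a full cycle.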

  15. The on-line coupled atmospheric chemistry model system MECO(n) - Part 5: Expanding the Multi-Model-Driver (MMD v2.0) for 2-way data exchange including data interpolation via GRID (v1.0)

    Science.gov (United States)

    Kerkweg, Astrid; Hofmann, Christiane; Jöckel, Patrick; Mertens, Mariano; Pante, Gregor

    2018-03-01

    As part of the Modular Earth Submodel System (MESSy), the Multi-Model-Driver (MMD v1.0) was developed to couple online the regional Consortium for Small-scale Modeling (COSMO) model into a driving model, which can be either the regional COSMO model or the global European Centre Hamburg general circulation model (ECHAM) (see Part 2 of the model documentation). The coupled system is called MECO(n), i.e., MESSy-fied ECHAM and COSMO models nested n times. In this article, which is part of the model documentation of the MECO(n) system, the second generation of MMD is introduced. MMD comprises the message-passing infrastructure required for the parallel execution (multiple programme multiple data, MPMD) of different models and the communication of the individual model instances, i.e. between the driving and the driven models. Initially, the MMD library was developed for a one-way coupling between the global chemistry-climate ECHAM/MESSy atmospheric chemistry (EMAC) model and an arbitrary number of (optionally cascaded) instances of the regional chemistry-climate model COSMO/MESSy. Thus, MMD (v1.0) provided only functions for unidirectional data transfer, i.e. from the larger-scale to the smaller-scale models. Soon, extended applications requiring data transfer from the small-scale model back to the larger-scale model became of interest. For instance, the original fields of the larger-scale model can directly be compared to the upscaled small-scale fields to analyse the improvements gained through the small-scale calculations, after the results are upscaled. Moreover, the fields originating from the two different models might be fed into the same diagnostic tool, e.g. the online calculation of the radiative forcing calculated consistently with the same radiation scheme.
    Last but not least, enabling the two-way data transfer between two models is the first important step on the way to a fully dynamical and chemical two-way coupling of the various model instances. In MMD (v1
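The two transfer directions described above can be pictured with a toy 1-D exchange: the driving model's coarse field is refined for the driven model, and the driven model's fine field is block-averaged back so it can be compared with (or fed into the same diagnostics as) the coarse field. This is a minimal stdlib sketch under assumed piecewise-constant and conservative schemes; the names `upscale`/`downscale` are illustrative and are not the MMD or GRID API.

```python
def upscale(fine, factor):
    """Coarse-grain a 1-D fine-grid field by block averaging; the scheme is
    conservative, i.e. the domain mean is preserved (the transfer direction
    added by the 2-way mode)."""
    assert len(fine) % factor == 0
    return [sum(fine[i:i + factor]) / factor
            for i in range(0, len(fine), factor)]

def downscale(coarse, factor):
    """Refine a 1-D coarse-grid field by piecewise-constant injection, the
    simplest stand-in for the driving-to-driven interpolation."""
    return [v for v in coarse for _ in range(factor)]
```

Round-tripping a coarse field through `downscale` and `upscale` returns it unchanged, which is the consistency one would want before comparing upscaled small-scale results against the driving model's own field.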

  16. The on-line coupled atmospheric chemistry model system MECO(n) – Part 5: Expanding the Multi-Model-Driver (MMD v2.0) for 2-way data exchange including data interpolation via GRID (v1.0)

    Directory of Open Access Journals (Sweden)

    A. Kerkweg

    2018-03-01

    Full Text Available As part of the Modular Earth Submodel System (MESSy), the Multi-Model-Driver (MMD v1.0) was developed to couple online the regional Consortium for Small-scale Modeling (COSMO) model into a driving model, which can be either the regional COSMO model or the global European Centre Hamburg general circulation model (ECHAM) (see Part 2 of the model documentation). The coupled system is called MECO(n), i.e., MESSy-fied ECHAM and COSMO models nested n times. In this article, which is part of the model documentation of the MECO(n) system, the second generation of MMD is introduced. MMD comprises the message-passing infrastructure required for the parallel execution (multiple programme multiple data, MPMD) of different models and the communication of the individual model instances, i.e. between the driving and the driven models. Initially, the MMD library was developed for a one-way coupling between the global chemistry–climate ECHAM/MESSy atmospheric chemistry (EMAC) model and an arbitrary number of (optionally cascaded) instances of the regional chemistry–climate model COSMO/MESSy. Thus, MMD (v1.0) provided only functions for unidirectional data transfer, i.e. from the larger-scale to the smaller-scale models. Soon, extended applications requiring data transfer from the small-scale model back to the larger-scale model became of interest. For instance, the original fields of the larger-scale model can directly be compared to the upscaled small-scale fields to analyse the improvements gained through the small-scale calculations, after the results are upscaled. Moreover, the fields originating from the two different models might be fed into the same diagnostic tool, e.g. the online calculation of the radiative forcing calculated consistently with the same radiation scheme. Last but not least, enabling the two-way data transfer between two models is the first important step on the way to a fully dynamical and chemical two-way coupling of the various model

  17. A multi-model intercomparison of halogenated very short-lived substances (TransCom-VSLS: linking oceanic emissions and tropospheric transport for a reconciled estimate of the stratospheric source gas injection of bromine

    Directory of Open Access Journals (Sweden)

    R. Hossaini

    2016-07-01

    Full Text Available The first concerted multi-model intercomparison of halogenated very short-lived substances (VSLS) has been performed within the framework of the ongoing Atmospheric Tracer Transport Model Intercomparison Project (TransCom). Eleven global models or model variants participated (nine chemical transport models and two chemistry–climate models) by simulating the major natural bromine VSLS, bromoform (CHBr3) and dibromomethane (CH2Br2), over a 20-year period (1993–2012). Except for three model simulations, all others were driven offline by (or nudged to) reanalysed meteorology. The overarching goal of TransCom-VSLS was to provide a reconciled model estimate of the stratospheric source gas injection (SGI) of bromine from these gases, to constrain the current measurement-derived range, and to investigate inter-model differences due to emissions and transport processes. Models ran with standardised idealised chemistry, to isolate differences due to transport, and we investigated the sensitivity of results to a range of VSLS emission inventories. Models were tested in their ability to reproduce the observed seasonal and spatial distribution of VSLS at the surface, using measurements from NOAA's long-term global monitoring network, and in the tropical troposphere, using recent aircraft measurements – including high-altitude observations from the NASA Global Hawk platform. The models generally capture the observed seasonal cycle of surface CHBr3 and CH2Br2 well, with a strong model–measurement correlation (r ≥ 0.7) at most sites. In a given model, the absolute model–measurement agreement at the surface is highly sensitive to the choice of emissions. Large inter-model differences are apparent when using the same emission inventory, highlighting the challenges faced in evaluating such inventories at the global scale. Across the ensemble, most consistency is found within the tropics, where most of the models (8 out of 11) achieve best agreement to
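The surface evaluation above scores each model by its model–measurement correlation (r ≥ 0.7 at most sites). For reference, the Pearson correlation coefficient used for such scoring is, in its generic textbook form (this is not TransCom-VSLS code):

```python
import math

def pearson_r(model, obs):
    """Pearson correlation between a model time series and co-located
    observations: covariance of the two series divided by the product of
    their standard deviations."""
    n = len(model)
    mm = sum(model) / n
    mo = sum(obs) / n
    cov = sum((x - mm) * (y - mo) for x, y in zip(model, obs))
    var_m = sum((x - mm) ** 2 for x in model)
    var_o = sum((y - mo) ** 2 for y in obs)
    return cov / math.sqrt(var_m * var_o)
```

A value of 1.0 means the simulated seasonal cycle is a perfect linear rescaling of the observed one; note that r says nothing about absolute agreement, which is why the abstract treats amplitude (emission-dependent) separately from phase.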

  18. Packet reversed packet combining scheme

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2006-07-01

    The packet combining scheme is a well-defined, simple error-correction scheme that exploits erroneous copies at the receiver. Combined with ARQ protocols, it offers higher throughput in networks than basic ARQ protocols alone. But the packet combining scheme fails to correct errors when the errors occur in the same bit locations of two erroneous copies. In the present work, we propose a scheme that corrects errors even when they occur at the same bit locations of the erroneous copies. The proposed scheme, when combined with an ARQ protocol, will offer higher throughput. (author)
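
The basic packet-combining idea that the letter builds on can be sketched as follows. This is a generic illustration under assumed framing (bit-list packets, a CRC-32 integrity check, invented names), not the author's proposed scheme; as the abstract notes, this basic form fails when both copies err in the same bit positions:

```python
import zlib
from itertools import product

def crc_ok(bits, expected_crc):
    # Integrity check: CRC-32 over the received bit values.
    return zlib.crc32(bytes(bits)) == expected_crc

def packet_combine(copy1, copy2, expected_crc):
    """Basic packet combining: bit positions where the two erroneous
    copies disagree are the candidate error locations; search over
    flip patterns of those positions until the CRC passes."""
    diff = [i for i, (a, b) in enumerate(zip(copy1, copy2)) if a != b]
    for pattern in product([0, 1], repeat=len(diff)):
        candidate = list(copy1)
        for pos, flip in zip(diff, pattern):
            candidate[pos] ^= flip
        if crc_ok(candidate, expected_crc):
            return candidate
    return None  # fails when both copies are wrong in the same positions

original = [1, 0, 1, 1, 0, 0, 1, 0]
crc = zlib.crc32(bytes(original))
copy1 = list(original); copy1[2] ^= 1   # error at bit 2
copy2 = list(original); copy2[5] ^= 1   # error at bit 5
print(packet_combine(copy1, copy2, crc) == original)  # True
```

The exhaustive flip search is exponential in the number of disagreeing bits, which is why practical schemes restrict it to a few candidate positions.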

  19. Combined Environment Acoustic Chamber (CEAC)

    Data.gov (United States)

    Federal Laboratory Consortium — Purpose: The CEAC imposes combined acoustic, thermal and mechanical loads on aerospace structures. The CEAC is employed to measure structural response and determine...

  20. Unique molecular landscapes in cancer: implications for individualized, curated drug combinations.

    Science.gov (United States)

    Wheler, Jennifer; Lee, J Jack; Kurzrock, Razelle

    2014-12-15

    With increasingly sophisticated technologies in molecular biology and "omic" platforms to analyze patients' tumors, more molecular diversity and complexity in cancer are being observed. Recently, we noted unique genomic profiles in a group of patients with metastatic breast cancer based on an analysis with next-generation sequencing. Among 57 consecutive patients, no two had the same molecular portfolio. Applied genomics therefore appears to represent a disruptive innovation in that it unveils a heterogeneity to metastatic cancer that may be ill-suited to canonical clinical trials and practice paradigms. Upon recognizing that patients have unique tumor landscapes, it is possible that there may be a "mismatch" between our traditional clinical trials system that selects patients based on common characteristics to evaluate a drug (drug-centric approach) and optimal treatment based on curated, individualized drug combinations for each patient (patient-centric approach). ©2014 American Association for Cancer Research.

  1. Space nuclear-power reactor design based on combined neutronic and thermal-fluid analyses

    International Nuclear Information System (INIS)

    Koenig, D.R.; Gido, R.G.; Brandon, D.I.

    1985-01-01

    The design and performance analysis of a space nuclear-power system requires sophisticated analytical capabilities such as those developed during the nuclear rocket propulsion (Rover) program. In particular, optimizing the size of a space nuclear reactor for a given power level requires satisfying the conflicting requirements of nuclear criticality and heat removal. The optimization involves the determination of the coolant void (volume) fraction for which the reactor diameter is a minimum and temperature and structural limits are satisfied. A minimum exists because the critical diameter increases with increasing void fraction, whereas the reactor diameter needed to remove a specified power decreases with void fraction. The purpose of this presentation is to describe and demonstrate our analytical capability for the determination of minimum reactor size. The analysis is based on combining neutronic criticality calculations with OPTION-code thermal-fluid calculations
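
The sizing trade-off described above can be illustrated with a toy calculation. The two functional forms below are invented stand-ins (not Rover or OPTION-code results), chosen only to show the crossing behaviour: the criticality-limited diameter grows with coolant void fraction while the heat-removal-limited diameter shrinks, so the design diameter, the larger of the two, has a minimum:

```python
def critical_diameter(void_frac):
    # Assumed illustrative trend: criticality needs a larger core
    # as more volume is given over to coolant.
    return 0.30 / (1.0 - void_frac) ** 0.5   # metres

def thermal_diameter(void_frac):
    # Assumed illustrative trend: removing a fixed power needs less
    # core diameter as coolant flow area grows.
    return 0.25 / void_frac ** 0.5           # metres

# Sweep void fraction and take the binding (larger) requirement at each point
alphas = [0.05 + i * (0.75 / 499) for i in range(500)]
design = [max(critical_diameter(a), thermal_diameter(a)) for a in alphas]
d_min = min(design)
alpha_opt = alphas[design.index(d_min)]
print(f"optimum void fraction ~ {alpha_opt:.2f}, minimum diameter ~ {d_min:.3f} m")
```

With these assumed curves the minimum sits where the two requirements cross, which is exactly the optimisation the combined neutronic/thermal-fluid analysis performs with real physics in place of the toy formulas.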

  2. Time, dose and volume factors in interstitial brachytherapy combined with external irradiation for oral tongue carcinoma

    International Nuclear Information System (INIS)

    Yorozu, Atsunori

    1996-01-01

    This is a retrospective analysis of 136 patients with squamous cell carcinoma of stages I and II of the oral tongue who were treated with interstitial brachytherapy alone or in combination with external irradiation between 1976 and 1991. Control of the primary lesion and the occurrence of late complications were analyzed with respect to dose, time and tumor size with the Cox hazard model. The 5-year survival rates for stages I and II were 84.5% and 75.6%. The 5-year primary control rate was 91.3% for stage I and 77.3% for stage II (p 50 Gy compared with a brachytherapy dose 30 mm. Late complications should be reduced by using a spacer, improvements in dental and oral hygiene, and a sophisticated implant method. (author)

  3. Symbol Stream Combining Versus Baseband Combining for Telemetry Arraying

    Science.gov (United States)

    Divsalar, D.

    1983-01-01

    The objectives of this article are to investigate and analyze the problem of combining symbol streams from many Deep Space Network stations to enhance bit signal-to-noise ratio, and to compare the performance of this combining technique with baseband combining. Symbol stream combining (SSC) has some advantages and some disadvantages over baseband combining (BBC). SSC suffers almost no loss in combining the digital data and no loss due to the transmission of the digital data by microwave links between the stations. BBC suffers 0.2 dB loss due to alignment and combining of the IF signals and 0.2 dB loss due to transmission of signals by microwave links. On the other hand, the losses in the subcarrier demodulation assembly (SDA) and in the symbol synchronization assembly (SSA) for SSC are greater than the losses in the SDA and SSA for BBC. It is shown that SSC outperforms BBC by about 0.35 dB (in terms of the required bit energy-to-noise spectral density for a bit error rate of 1 in 1,000) for an array of three DSN antennas, namely 64 m, 34 m (T/R) and 34 m (R).

  4. Revised Accounting for Business Combinations

    Science.gov (United States)

    Wilson, Arlette C.; Key, Kimberly

    2008-01-01

    The Financial Accounting Standards Board (FASB) has recently issued Statement of Financial Accounting Standards No. 141 (Revised 2007) Business Combinations. The objective of this Statement is to improve the relevance, representational faithfulness, and comparability of reported information about a business combination and its effects. This Statement…

  5. Modified Aggressive Packet Combining Scheme

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2010-06-01

    In this letter, a few schemes are presented to improve the performance of the aggressive packet combining scheme (APC). To combat errors in computer/data communication networks, ARQ (Automatic Repeat Request) techniques are used. Several modifications to improve the performance of ARQ have been suggested by recent research and are found in the literature. The important modifications are the majority packet combining scheme (MjPC, proposed by Wicker), the packet combining scheme (PC, proposed by Chakraborty), the modified packet combining scheme (MPC, proposed by Bhunia), and the packet reversed packet combining (PRPC, proposed by Bhunia) scheme. These modifications are appropriate for improving the throughput of conventional ARQ protocols. Leung proposed the idea of APC for error control in wireless networks, with the basic objective of error control in uplink wireless data networks. We suggest a few modifications of APC to improve its performance in terms of higher throughput, lower delay and higher error correction capability. (author)

  6. Minilaparoscopic technique for inguinal hernia repair combining transabdominal pre-peritoneal and totally extraperitoneal approaches.

    Science.gov (United States)

    Carvalho, Gustavo L; Loureiro, Marcelo P; Bonin, Eduardo A; Claus, Christiano P; Silva, Frederico W; Cury, Antonio M; Fernandes, Flavio A M

    2012-01-01

    Endoscopic surgical repair of inguinal hernia is currently conducted using 2 techniques: the totally extraperitoneal (TEP) and the transabdominal (TAPP) hernia repair. The TEP procedure is technically advantageous because it uses no mesh fixation and eliminates the peritoneal flap, leading to less postoperative pain and faster recovery. The drawback is that TEP is not performed as frequently, because of its complexity and longer learning curve. In this study, we propose a hybrid technique that could potentially become the gold standard of minimally invasive inguinal hernia surgery. This will be achieved by combining established advantages of TEP and TAPP associated with the precision and cosmetics of minilaparoscopy (MINI). Between January and July 2011, 22 patients were admitted for endoscopic inguinal hernia repair. The combined technique was initiated with TAPP inspection and direct visualization of a minilaparoscopic trocar dissection of the preperitoneum space. A 10-mm trocar was then placed inside the previously dissected preperitoneal space, using the same umbilical TAPP skin incision. Minilaparoscopic retroperitoneal dissection was completed by TEP, and the surgical procedure was finalized with intraperitoneal review and correction of the preperitoneal work. The minilaparoscopic TEP-TAPP combined approach for inguinal hernia is feasible, safe, and allows a simple endoscopic repair. This is achieved by combining features and advantages of both TAPP and TEP techniques using precise and sophisticated MINI instruments. Minilaparoscopic preperitoneal dissection allows a faster and easier creation of the preperitoneal space for the TEP component of the procedure.

  7. Uniportal anatomic combined unusual segmentectomies.

    Science.gov (United States)

    González-Rivas, Diego; Lirio, Francisco; Sesma, Julio

    2017-01-01

    Nowadays, sublobar anatomic resections are gaining momentum as a valid alternative for early stage lung cancer. Despite being technically demanding, anatomic segmentectomies can be performed by a uniportal video-assisted thoracic surgery (VATS) approach to combine the benefits of minimal invasiveness with maximum lung sparing. This procedure can be even more complex if a combined resection of multiple segments from different lobes has to be done. Here we report five cases of combined and unusual segmentectomies done by the same experienced surgeon in high-volume institutions, to show that uniportal VATS is a feasible approach for these complex resections and to share an excellent educational resource.

  8. Combination contraceptives: effects on weight.

    Science.gov (United States)

    Gallo, Maria F; Lopez, Laureen M; Grimes, David A; Carayon, Florence; Schulz, Kenneth F; Helmerhorst, Frans M

    2014-01-29

    Weight gain is often considered a side effect of combination hormonal contraceptives, and many women and clinicians believe that an association exists. Concern about weight gain can limit the use of this highly effective method of contraception by deterring the initiation of its use and causing early discontinuation among users. However, a causal relationship between combination contraceptives and weight gain has not been established. The aim of the review was to evaluate the potential association between combination contraceptive use and changes in weight. In November 2013, we searched the computerized databases CENTRAL (The Cochrane Library), MEDLINE, POPLINE, EMBASE, and LILACS for studies of combination contraceptives, as well as ClinicalTrials.gov and International Clinical Trials Registry Platform (ICTRP). For the initial review, we also wrote to known investigators and manufacturers to request information about other published or unpublished trials not discovered in our search. All English-language, randomized controlled trials were eligible if they had at least three treatment cycles and compared a combination contraceptive to a placebo or to a combination contraceptive that differed in drug, dosage, regimen, or study length. All titles and abstracts located in the literature searches were assessed. Data were entered and analyzed with RevMan. A second author verified the data entered. For continuous data, we calculated the mean difference and 95% confidence interval (CI) for the mean change in weight between baseline and post-treatment measurements using a fixed-effect model. For categorical data, such as the proportion of women who gained or lost more than a specified amount of weight, the Peto odds ratio with 95% CI was calculated. We found 49 trials that met our inclusion criteria. The trials included 85 weight change comparisons for 52 distinct contraceptive pairs (or placebos). The four trials with a placebo or no intervention group did not find
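
The fixed-effect pooling of mean differences described in the methods is standard inverse-variance weighting; a minimal sketch with made-up trial numbers (not data from the review):

```python
import math

def fixed_effect_md(mds, ses):
    """Inverse-variance fixed-effect pooled mean difference with 95% CI.
    mds: per-trial mean differences; ses: their standard errors."""
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * md for w, md in zip(weights, mds)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

# Hypothetical trials: mean difference in weight change (kg) and its SE
mds = [0.4, -0.1, 0.3]
ses = [0.30, 0.25, 0.40]
pooled, ci = fixed_effect_md(mds, ses)
print(f"pooled MD = {pooled:.2f} kg, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")
```

RevMan performs this same computation (among others) for continuous outcomes; here a confidence interval spanning zero would indicate no demonstrated weight effect.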

  9. Mode Combinations and International Operations

    DEFF Research Database (Denmark)

    Benito, Gabriel R. G.; Petersen, Bent; Welch, Lawrence S.

    2011-01-01

    reveals that companies tend to combine modes of operation; thereby producing unique foreign operation mode “packages” for given activities and/or countries, and that the packages are liable to be modified over time – providing a potentially important optional path for international expansion. Our data...... key markets (China, UK and USA) as the basis for an exploration of the extent to which, and how and why, companies combine clearly different foreign operation modes. We examine their use of foreign operation mode combinations within given value activities as well as within given countries. The study...

  11. Combined keratoplasty and cataract extraction.

    Science.gov (United States)

    Demeler, U; Hinzpeter, E N

    1977-04-01

    A short film showing our technique of combined penetrating keratoplasty and intracapsular cataract extraction was shown, and the postoperative results in 72 eyes after an average of 3 years were reported.

  12. Atypical combinations and scientific impact.

    Science.gov (United States)

    Uzzi, Brian; Mukherjee, Satyam; Stringer, Michael; Jones, Ben

    2013-10-25

    Novelty is an essential feature of creative ideas, yet the building blocks of new ideas are often embodied in existing knowledge. From this perspective, balancing atypical knowledge with conventional knowledge may be critical to the link between innovativeness and impact. Our analysis of 17.9 million papers spanning all scientific fields suggests that science follows a nearly universal pattern: The highest-impact science is primarily grounded in exceptionally conventional combinations of prior work yet simultaneously features an intrusion of unusual combinations. Papers of this type were twice as likely to be highly cited works. Novel combinations of prior work are rare, yet teams are 37.7% more likely than solo authors to insert novel combinations into familiar knowledge domains.

  13. Autonomous grain combine control system

    Science.gov (United States)

    Hoskinson, Reed L.; Kenney, Kevin L.; Lucas, James R.; Prickel, Marvin A.

    2013-06-25

    A system for controlling a grain combine having a rotor/cylinder, a sieve, a fan, a concave, a feeder, a header, an engine, and a control system. The feeder of the grain combine is engaged and the header is lowered. A separator loss target, engine load target, and a sieve loss target are selected. Grain is harvested with the lowered header passing the grain through the engaged feeder. Separator loss, sieve loss, engine load and ground speed of the grain combine are continuously monitored during the harvesting. If the monitored separator loss exceeds the selected separator loss target, the speed of the rotor/cylinder, the concave setting, the engine load target, or a combination thereof is adjusted. If the monitored sieve loss exceeds the selected sieve loss target, the speed of the fan, the size of the sieve openings, or the engine load target is adjusted.
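
The monitor-and-adjust logic described above can be sketched as a single feedback step. All setting names, units, and step sizes below are hypothetical illustrations, not the patented control law:

```python
def control_step(separator_loss, sieve_loss, targets, settings):
    """One pass of the feedback logic: compare monitored losses against
    their targets and nudge machine settings accordingly (illustrative
    adjustments; a real controller would also weigh engine load)."""
    new = dict(settings)
    if separator_loss > targets["separator_loss"]:
        new["rotor_speed"] += 10          # rpm: thresh more aggressively
        new["concave_setting"] -= 1       # close the concave slightly
    if sieve_loss > targets["sieve_loss"]:
        new["fan_speed"] -= 10            # rpm: blow out less grain
        new["sieve_opening"] += 0.5       # mm: let more grain fall through
    return new

targets = {"separator_loss": 1.0, "sieve_loss": 0.5}   # percent
settings = {"rotor_speed": 900, "concave_setting": 20,
            "fan_speed": 1100, "sieve_opening": 12.0}
print(control_step(1.4, 0.3, targets, settings))
```

Run continuously against the monitored losses, a loop like this converges the machine toward the selected loss targets while harvesting.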

  14. Combining supramolecular chemistry with biology

    NARCIS (Netherlands)

    Uhlenheuer, D.A.; Petkau - Milroy, K.; Brunsveld, L.

    2010-01-01

    Supramolecular chemistry has primarily found its inspiration in biological molecules, such as proteins and lipids, and their interactions. Currently the supramolecular assembly of designed compounds can be controlled to great extent. This provides the opportunity to combine these synthetic

  15. Tariff Model for Combined Transport

    Directory of Open Access Journals (Sweden)

    Velimir Kolar

    2002-11-01

    Full Text Available By analysing the current situation on the Croatian transportation market, and considering all parameters needed for the development of combined transport, measures are suggested in order to improve and stimulate its development. One of the first measures is the standardisation and introduction of unique tariffs for combined transport, and then government incentives for the organisation and development of combined transport means and equipment. A significant role in this should be played by an adequately defined transport policy.

  16. Combining norms to prove termination

    DEFF Research Database (Denmark)

    Genaim, S.; Codish, M.; Gallagher, John Patrick

    2002-01-01

    Automatic termination analysers typically measure the size of terms by applying norms, which are mappings from terms to the natural numbers. This paper illustrates how to enable the use of size functions defined as tuples of these simpler norm functions. This approach enables us to simplify the probl...... of the recursive data-types in the program, is often a suitable choice. We first demonstrate the power of combining norm functions and then the adequacy of combining norms based on regular types....
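
The idea of measuring terms by a tuple of simple norms can be sketched as follows; the two norms and the example term are illustrative, not taken from the paper:

```python
def list_length(t):
    """Norm: length of the outermost list (0 for non-lists)."""
    return len(t) if isinstance(t, list) else 0

def term_size(t):
    """Norm: total number of subterms in the (nested) term."""
    return 1 + sum(term_size(x) for x in t) if isinstance(t, list) else 1

def combined_norm(t):
    # A tuple of simple norms, compared lexicographically: showing that
    # every recursive call strictly decreases this tuple is a way to
    # prove termination.
    return (list_length(t), term_size(t))

t = [1, [2], 3]
print(combined_norm(t))      # (3, 5)
print(combined_norm(t[1:]))  # (2, 4): strictly smaller, so recursing on the tail is safe
```

A single norm can fail to decrease on some call while another does; the tuple lets the analyser use whichever component decreases first.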

  17. Combined radar and telemetry system

    Energy Technology Data Exchange (ETDEWEB)

    Rodenbeck, Christopher T.; Young, Derek; Chou, Tina; Hsieh, Lung-Hwa; Conover, Kurt; Heintzleman, Richard

    2017-08-01

    A combined radar and telemetry system is described. The combined radar and telemetry system includes a processing unit that executes instructions, where the instructions define a radar waveform and a telemetry waveform. The processor outputs a digital baseband signal based upon the instructions, where the digital baseband signal is based upon the radar waveform and the telemetry waveform. A radar and telemetry circuit transmits, simultaneously, a radar signal and telemetry signal based upon the digital baseband signal.
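
A minimal sketch of the central idea, one digital baseband carrying both waveforms simultaneously, with all waveform parameters (chirp sweep, BPSK rate, sample rate) assumed for illustration rather than taken from the patent:

```python
import cmath
import math
import random

fs = 1_000_000                       # sample rate in Hz (assumed)
n = 2048                             # samples in one baseband buffer
duration = n / fs

# Radar waveform: linear FM chirp sweeping 0 -> 100 kHz (illustrative)
chirp = [cmath.exp(1j * math.pi * (100e3 / duration) * (k / fs) ** 2)
         for k in range(n)]

# Telemetry waveform: BPSK at 50 kbit/s on random bits (illustrative)
random.seed(0)
samples_per_bit = fs // 50_000       # 20 samples per bit
bits = [random.randint(0, 1) for _ in range(-(-n // samples_per_bit))]
telemetry = [float(2 * bits[k // samples_per_bit] - 1) for k in range(n)]

# The processing unit's output: one digital baseband combining both
baseband = [0.5 * c + 0.5 * s for c, s in zip(chirp, telemetry)]
```

The radar and telemetry circuit would then upconvert and transmit this single stream, so both functions share one RF chain.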

  18. Combining Vertex-centric Graph Processing with SPARQL for Large-scale RDF Data Analytics

    KAUST Repository

    Abdelaziz, Ibrahim

    2017-06-27

    Modern applications, such as drug repositioning, require sophisticated analytics on RDF graphs that combine structural queries with generic graph computations. Existing systems support either declarative SPARQL queries, or generic graph processing, but not both. We bridge the gap by introducing Spartex, a versatile framework for complex RDF analytics. Spartex extends SPARQL to support programs that seamlessly combine generic graph algorithms (e.g., PageRank, Shortest Paths, etc.) with SPARQL queries. Spartex builds on existing vertex-centric graph processing frameworks, such as Graphlab or Pregel. It implements a generic SPARQL operator as a vertex-centric program that interprets SPARQL queries and executes them efficiently using a built-in optimizer. In addition, any graph algorithm implemented in the underlying vertex-centric framework can be executed in Spartex. We present various scenarios where our framework significantly simplifies the implementation of complex RDF data analytics programs. We demonstrate that Spartex scales to datasets with billions of edges, and show that our core SPARQL engine is at least as fast as the state-of-the-art specialized RDF engines. For complex analytical tasks that combine generic graph processing with SPARQL, Spartex is at least an order of magnitude faster than existing alternatives.
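
Spartex's own API is not shown in the abstract, but the vertex-centric model it builds on can be illustrated with a toy synchronous (Pregel-style) PageRank: in each superstep every vertex scatters its rank to its out-neighbours, then all vertices aggregate incoming messages.

```python
def pagerank_vertex_centric(edges, num_iters=50, d=0.85):
    """Toy synchronous vertex-centric PageRank. Assumes, for brevity,
    that every vertex has at least one out-edge (no dangling mass)."""
    vertices = {v for e in edges for v in e}
    out = {v: [] for v in vertices}
    for src, dst in edges:
        out[src].append(dst)
    rank = {v: 1.0 / len(vertices) for v in vertices}
    for _ in range(num_iters):
        messages = {v: 0.0 for v in vertices}
        for v in vertices:                   # "compute" phase per vertex
            for nbr in out[v]:
                messages[nbr] += rank[v] / len(out[v])
        rank = {v: (1 - d) / len(vertices) + d * messages[v]
                for v in vertices}
    return rank

ranks = pagerank_vertex_centric([("a", "b"), ("b", "c"), ("c", "a"), ("a", "c")])
print(max(ranks, key=ranks.get))  # → 'c'
```

In a framework like Pregel or GraphLab the inner loops become per-vertex compute functions and message passing; Spartex's contribution is letting such programs interoperate with SPARQL queries over the same RDF graph.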

  19. Food irradiation and combination processes

    International Nuclear Information System (INIS)

    Campbell-Platt, G.; Grandison, A.S.

    1990-01-01

    International approval of food irradiation is being given for the use of low and medium doses. Uses are being permitted for different categories of foods with maximum levels being set between 1 and 10 kGy. To maximize the effectiveness of these mild irradiation treatments while minimizing any organoleptic quality changes, combination processes of other technologies with irradiation will be useful. Combinations most likely to be exploited in optimal food processing include the use of heat, low temperature, and modified-atmosphere packaging. Because irradiation does not have a residual effect, the food packaging itself becomes an important component of a successful process. These combination processes provide promising alternatives to the use of chemical preservatives or harsher processing techniques. (author)

  20. Combined oral contraceptives: venous thrombosis.

    Science.gov (United States)

    de Bastos, Marcos; Stegeman, Bernardine H; Rosendaal, Frits R; Van Hylckama Vlieg, Astrid; Helmerhorst, Frans M; Stijnen, Theo; Dekkers, Olaf M

    2014-03-03

    Combined oral contraceptive (COC) use has been associated with venous thrombosis (VT) (i.e., deep venous thrombosis and pulmonary embolism). The VT risk has been evaluated for many estrogen doses and progestagen types contained in COC, but no comprehensive comparison involving commonly used COC is available. To provide a comprehensive overview of the risk of venous thrombosis in women using different combined oral contraceptives. Electronic databases (Pubmed, Embase, Web of Science, Cochrane, CINAHL, Academic Search Premier and ScienceDirect) were searched on 22 April 2013 for eligible studies, without language restrictions. We selected studies including healthy women taking COC with VT as outcome. The primary outcome of interest was a fatal or non-fatal first event of venous thrombosis, with the main focus on deep venous thrombosis or pulmonary embolism. Publications with at least 10 events in total were eligible. The network meta-analysis was performed using an extension of frequentist random effects models for mixed multiple treatment comparisons. Unadjusted relative risks with 95% confidence intervals were reported. Two independent reviewers extracted data from selected studies. 3110 publications were retrieved through a search strategy; 25 publications reporting on 26 studies were included. Incidence of venous thrombosis in non-users from two included cohorts was 0.19 and 0.37 per 1 000 person years, in line with previously reported incidences of 0.16 per 1 000 person years. Use of combined oral contraceptives increased the risk of venous thrombosis compared with non-use (relative risk 3.5, 95% confidence interval 2.9 to 4.3). The relative risks of venous thrombosis for combined oral contraceptives with 30-35 μg ethinylestradiol and gestodene, desogestrel, cyproterone acetate, or drospirenone were similar and about 50-80% higher than for combined oral contraceptives with levonorgestrel. A dose-related effect of ethinylestradiol was observed for gestodene
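
The building block that a network meta-analysis generalises is the indirect comparison of two pill types through a common comparator (here non-use), computed on the log scale. The numbers below are illustrative, not estimates from the review:

```python
import math

# COC "type A" and "type B" versus non-use: illustrative figures only
log_rr_a = math.log(3.5)   # log relative risk, A vs non-use
se_a = 0.10                # standard error of that log RR (assumed)
log_rr_b = math.log(2.0)   # log relative risk, B vs non-use
se_b = 0.12                # standard error of that log RR (assumed)

# Indirect comparison of A vs B through the common comparator:
# log RRs subtract, and their variances add.
log_rr_ab = log_rr_a - log_rr_b
se_ab = math.sqrt(se_a ** 2 + se_b ** 2)
rr = math.exp(log_rr_ab)
ci = (math.exp(log_rr_ab - 1.96 * se_ab),
      math.exp(log_rr_ab + 1.96 * se_ab))
print(f"indirect RR, A vs B: {rr:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f})")
```

A network meta-analysis chains and pools many such direct and indirect contrasts at once, which is how the review can rank contraceptives that were never compared head-to-head.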

  1. Planned combined radiotherapy and surgery

    International Nuclear Information System (INIS)

    Silverman, C.L.; Marks, J.E.

    1987-01-01

    Though the planned combined use of surgery and radiation has been shown to be beneficial for other tumors, the authors feel that the present evidence is far from persuasive in demonstrating a definite superiority of combined therapy over surgery or radiation alone for advanced laryngeal tumors. The actuarial or disease-free survival rates for patients treated with combined therapy have not been significantly increased over those obtained with a single modality in any randomized, well-controlled study, although the trend is toward improved local regional control. Many of the retrospective studies are probably flawed by selection bias; the patients selected for combined treatment generally have more advanced cancers and represent a worse prognostic group. It is clear from this review that the positive value of irradiation for advanced transglottic and supraglottic tumors needs to be documented by a controlled study that compares surgery alone with salvage radiation at time of recurrence to surgery plus adjuvant radiation. The authors feel that such a study is needed to put to rest the present controversy before they can advocate a course of treatment that is expensive, time-consuming, and difficult for the patients to tolerate owing to severe acute side effects and potentially morbid late effects (xerostomia, necrosis) that can greatly lessen the quality of life for these patients

  2. SUPPLEMENTARY INFORMATION A combined Electrochemical ...

    Indian Academy of Sciences (India)

    A combined Electrochemical and Theoretical study of pyridine-based Schiff bases as novel corrosion inhibitors for mild steel in hydrochloric acid medium. PARUL DOHAREa, M A QURAISHIb* and I B OBOTb. aDepartment of Chemistry, Indian Institute of Technology, Banaras Hindu University, Varanasi, Uttar Pradesh 221 ...

  3. Combination Chemoprevention with Grape Antioxidants

    OpenAIRE

    Singh, Chandra K.; Siddiqui, Imtiaz A.; El-Abd, Sabah; Mukhtar, Hasan; Ahmad, Nihal

    2016-01-01

    Antioxidant ingredients present in grape have been extensively investigated for their cancer chemopreventive effects. However, much of the work has been done on individual ingredients, especially focusing on resveratrol and quercetin. Phytochemically, whole grape represents a combination of numerous phytonutrients. Limited research has been done on the possible synergistic/additive/antagonistic interactions among the grape constituents. Among these phytochemical constituents of grapes, resver...

  4. Combining Paraconsistent Logic with Argumentation

    NARCIS (Netherlands)

    Grooters, Diana; Prakken, Hendrik

    2014-01-01

    One tradition in the logical study of argumentation is to allow for arguments that combine strict and defeasible inference rules, and to derive the strict inference rules from a logic at least as strong as classical logic. An unsolved problem in this tradition is how the trivialising effect of the

  5. Combination treatment of neuropathic pain

    DEFF Research Database (Denmark)

    Holbech, Jakob Vormstrup; Jung, Anne; Jonsson, Torsten

    2017-01-01

    BACKGROUND: Current Danish treatment algorithms for pharmacological treatment of neuropathic pain (NeP) are tricyclic antidepressants (TCA), gabapentin and pregabalin as first-line treatment for the most common NeP conditions. Many patients have insufficient pain relief on monotherapy, but combin...

  6. Methodology for combining dynamic responses

    International Nuclear Information System (INIS)

    Cudlin, R.; Hosford, S.; Mattu, R.; Wichman, K.

    1978-09-01

    The NRC has historically required that the structural/mechanical responses due to various accident loads and loads caused by natural phenomena (such as earthquakes) be combined when analyzing structures, systems, and components important to safety. Several approaches to account for the potential interaction of loads resulting from accidents and natural phenomena have been used. One approach, the so-called absolute or linear summation (ABS) method, linearly adds the peak structural responses due to the individual dynamic loads. In general, the ABS method has also reflected the staff's conservative preference for the combination of dynamic load responses. A second approach, referred to as SRSS, yields a combined response equal to the square root of the sum of the squares of the peak responses due to the individual dynamic loads. The lack of a physical relationship between some of the loads has raised questions as to the proper methodology to be used in the design of nuclear power plants. An NRR Working Group was constituted to examine load combination methodologies and to develop a recommendation concerning criteria or conditions for their application. Evaluations of and recommendations on the use of the ABS and SRSS methods are provided in the report
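
The two combination rules under discussion are simple to state. For two hypothetical peak responses of 120 and 50 (same units), ABS and SRSS give:

```python
import math

def abs_combination(peaks):
    """Absolute (linear) summation: conservatively add the peak responses,
    as if all loads peaked at the same instant."""
    return sum(abs(p) for p in peaks)

def srss_combination(peaks):
    """Square root of the sum of the squares of the peak responses,
    appropriate when the peaks are unlikely to coincide."""
    return math.sqrt(sum(p * p for p in peaks))

# Hypothetical peak responses (e.g., kN) from an accident load and a
# seismic load acting on the same component:
peaks = [120.0, 50.0]
print(abs_combination(peaks))   # 170.0
print(srss_combination(peaks))  # 130.0
```

The gap between the two results (170 vs 130 here) is exactly the design margin at stake in the ABS-versus-SRSS debate the report evaluates.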

  7. Combined enteral and parenteral nutrition.

    Science.gov (United States)

    Wernerman, Jan

    2012-03-01

    To review and discuss the evidence and arguments to combine enteral nutrition and parenteral nutrition in the ICU, in particular with reference to the Early Parenteral Nutrition Completing Enteral Nutrition in Adult Critically Ill Patients (EPaNIC) study. The EPaNIC study shows an advantage in terms of discharges alive from the ICU when parenteral nutrition is delayed to day 8 as compared with combining enteral nutrition and parenteral nutrition from day 3 of ICU stay. The difference between the guidelines from the European Society of Enteral and Parenteral Nutrition in Europe and American Society for Parenteral and Enteral Nutrition/Society of Critical Care Medicine in North America concerning the combination of enteral nutrition and parenteral nutrition during the initial week of ICU stay was reviewed. The EPaNIC study clearly demonstrates that early parenteral nutrition in the ICU is not in the best interests of most patients. Exactly at what time point the combination of enteral nutrition and parenteral nutrition should be considered is still an open question.

  8. "Smart" nickel oxide based core–shell nanoparticles for combined chemo and photodynamic cancer therapy

    Directory of Open Access Journals (Sweden)

    Bano S

    2016-07-01

    Full Text Available Shazia Bano,1–3,* Samina Nazir,2,* Saeeda Munir,3 Mohamed Fahad AlAjmi,4 Muhammad Afzal,1 Kehkashan Mazhar3 1Department of Physics, The Islamia University of Bahawalpur, 2Nanosciences and Technology Department, National Centre for Physics, Islamabad, 3Institute of Biomedical and Genetic Engineering, Islamabad, Pakistan; 4College of Pharmacy, King Saud University, Riyadh, Kingdom of Saudi Arabia *These authors contributed equally to this work Abstract: We report “smart” nickel oxide nanoparticles (NOPs) as a multimodal cancer therapy agent. A water-dispersible and light-sensitive NiO core was synthesized with a folic acid (FA)-connected bovine serum albumin (BSA) shell over entrapped doxorubicin (DOX). The entrapped drug from NOP-DOX@BSA-FA was released in a sustained way (64 hours, pH=5.5, dark conditions), while a robust release was found under red light exposure (in 1/2 hour under λmax=655 nm, 50 mW/cm2, at pH=5.5). The cell viability, thiobarbituric acid reactive substances and diphenylisobenzofuran assays conducted under light and dark conditions revealed a high photodynamic therapy potential of our construct. Furthermore, we found that the combined effect of DOX and NOPs from NOP-DOX@BSA-FA resulted in cell death approximately eightfold higher than with free DOX alone. We propose that NOP-DOX@BSA-FA is a potential photodynamic therapy agent and a collective drug delivery system for the systemic administration of cancer chemotherapeutics, resulting in combination therapy. Keywords: light-triggered drug release, cancer, bovine serum albumin, multi-model therapy

  9. Airbreathing combined cycle engine systems

    Science.gov (United States)

    Rohde, John

    1992-01-01

    The Air Force and NASA share a common interest in developing advanced propulsion systems for commercial and military aerospace vehicles which require efficient acceleration and cruise operation in the Mach 4 to 6 flight regime. The principal engine of interest is the turboramjet; however, other combined cycles such as the turboscramjet, air turborocket, supercharged ejector ramjet, ejector ramjet, and air liquefaction based propulsion are also of interest. Over the past months, careful planning and program implementation have resulted in a number of development efforts that will lead to a broad technology base for those combined cycle propulsion systems. Individual development programs are underway in thermal management, controls, materials, endothermic hydrocarbon fuels, air intake systems, nozzle exhaust systems, gas turbines and ramjet ramburners.

  10. Combined Surgical Treatment of Gynecomastia

    Directory of Open Access Journals (Sweden)

    Yordanov Y.

    2015-05-01

    Full Text Available Surgical treatment of gynecomastia can present unique challenges for the plastic surgeon. Achieving a good balance between the effectiveness of the selected approach and a satisfactory aesthetic outcome is often a difficult endeavor. Optimal surgical treatment involves a combination of liposuction and direct excision. In the present study, the charts of 11 patients treated with suction-assisted liposuction and direct surgical excision were retrospectively reviewed; special emphasis is placed on the surgical technique. The mean follow-up period was 11.6 months. No infection, hematoma, nipple-areola complex necrosis or nipple retraction was encountered in this series. The combined surgical treatment of gynecomastia has been shown to be a reliable technique in both small and moderate breast enlargement, including cases with skin excess.

  11. Chemical and natural stressors combined:

    DEFF Research Database (Denmark)

    Gergs, André; Zenker, Armin; Grimm, Volker

    2013-01-01

    In addition to natural stressors, populations are increasingly exposed to chemical pollutants released into the environment. We experimentally demonstrate the loss of resilience for Daphnia magna populations that are exposed to a combination of natural and chemical stressors even though effects...... demonstrates that population size can be a poor endpoint for risk assessments of chemicals and that ignoring disturbance interactions can lead to severe underestimation of extinction risk...

  12. Combination chemoprevention with grape antioxidants.

    Science.gov (United States)

    Singh, Chandra K; Siddiqui, Imtiaz A; El-Abd, Sabah; Mukhtar, Hasan; Ahmad, Nihal

    2016-06-01

    Antioxidant ingredients present in grape have been extensively investigated for their cancer chemopreventive effects. However, much of the work has been done on individual ingredients, especially focusing on resveratrol and quercetin. Phytochemically, whole grape represents a combination of numerous phytonutrients. Limited research has been done on the possible synergistic/additive/antagonistic interactions among the grape constituents. Among these phytochemical constituents of grapes, resveratrol, quercetin, kaempferol, catechin, epicatechin, and anthocyanins (cyanidin and malvidin) constitute more than 70% of the grape polyphenols. Therefore, these have been relatively well studied for their chemopreventive effects against a variety of cancers. While a wealth of information is available individually on the cancer chemopreventive/anti-proliferative effects of resveratrol and quercetin, limited information is available regarding the other major constituents of grape. Studies have also suggested that multiple grape antioxidants, when used in combination, alone or with other agents/drugs, show a synergistic or additive anti-proliferative response. Based on the strong rationale emanating from published studies, it seems probable that a combination of multiple grape ingredients, alone or together with other agents, could impart 'additive synergism' against cancer. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Imagining a Stata / Python Combination

    Science.gov (United States)

    Fiedler, James

    2012-01-01

    There are occasions when a task is difficult in Stata, but fairly easy in a more general programming language. Python is a popular language for a range of uses. It is easy to use, has many high-quality packages, and programs can be written relatively quickly. Is there any advantage in combining Stata and Python within a single interface? Stata already offers support for user-written programs, which allow extensive control over calculations, but somewhat less control over graphics. Also, except for specifying output, the user has minimal programmatic control over the user interface. Python can be used in a way that allows more control over the interface and graphics, and in so doing provides a roundabout method for satisfying some user requests (e.g., transparency levels in graphics and the ability to clear the results window). My talk will explore these ideas, present a possible method for combining Stata and Python, and give examples to demonstrate how this combination might be useful.
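One concrete way to bridge the two tools, independent of the interface the talk proposes, is to drive Stata in batch mode from Python. A minimal sketch, assuming a Unix-style installation where the executable is called `stata-mp` (the name varies by edition and platform):

```python
from pathlib import Path

def stata_batch_command(dofile, stata_exe="stata-mp"):
    """Build the command line for running a do-file in Stata's Unix-style
    batch mode (`stata -b do file.do`).  The executable name `stata-mp`
    is an assumption: it varies by edition and platform."""
    return [stata_exe, "-b", "do", str(dofile)]

cmd = stata_batch_command(Path("analysis.do"))
print(cmd)  # → ['stata-mp', '-b', 'do', 'analysis.do']
```

The list can be handed to `subprocess.run(cmd)`; batch mode typically leaves a log file next to the do-file, which the Python side can then parse — one crude way to get results flowing between the two environments.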

  14. Combining Orthogonal Chain-End Deprotections and Thiol-Maleimide Michael Coupling: Engineering Discrete Oligomers by an Iterative Growth Strategy.

    Science.gov (United States)

    Huang, Zhihao; Zhao, Junfei; Wang, Zimu; Meng, Fanying; Ding, Kunshan; Pan, Xiangqiang; Zhou, Nianchen; Li, Xiaopeng; Zhang, Zhengbiao; Zhu, Xiulin

    2017-10-23

    Orthogonal maleimide and thiol deprotections were combined with thiol-maleimide coupling to synthesize discrete oligomers/macromolecules on a gram scale with molecular weights up to 27.4 kDa (128mer, 7.9 g) using an iterative exponential growth strategy with a degree of polymerization (DP) of 2^n − 1. Using the same chemistry, a "readable" sequence-defined oligomer and a discrete cyclic topology were also created. Furthermore, uniform dendrons were fabricated using sequential growth (DP = 2^n − 1) or double exponential dendrimer growth approaches (DP = 2^(2^n) − 1) with significantly accelerated growth rates. A versatile, efficient, and metal-free method for construction of discrete oligomers with tailored structures and a high growth rate would greatly facilitate research into the structure-property relationships of sophisticated polymeric materials. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
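The growth laws quoted in the abstract can be sanity-checked with a few lines of arithmetic. A minimal sketch, taking sequential dendron growth as DP = 2^n − 1 and double exponential growth as DP = 2^(2^n) − 1 (the formulas as reconstructed here, with n the number of growth cycles):

```python
def dp_sequential(n):
    """Repeat units in a perfect binary dendron after n growth generations:
    1 + 2 + 4 + ... + 2**(n-1) = 2**n - 1."""
    return 2**n - 1

def dp_double_exponential(n):
    """Double exponential dendrimer growth after n cycles: 2**(2**n) - 1."""
    return 2**(2**n) - 1

for n in range(1, 5):
    print(n, dp_sequential(n), dp_double_exponential(n))
```

Three cycles already reach a 255-unit dendron under double exponential growth, versus 7 units sequentially — the "significantly accelerated growth rate" the abstract refers to.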

  15. Future directions in psychological assessment: combining evidence-based medicine innovations with psychology's historical strengths to enhance utility.

    Science.gov (United States)

    Youngstrom, Eric A

    2013-01-01

    Assessment has been a historical strength of psychology, with sophisticated traditions of measurement, psychometrics, and theoretical underpinnings. However, training, reimbursement, and utilization of psychological assessment have been eroded in many settings. Evidence-based medicine (EBM) offers a different perspective on evaluation that complements traditional strengths of psychological assessment. EBM ties assessment directly to clinical decision making about the individual, uses simplified Bayesian methods explicitly to integrate assessment data, and solicits patient preferences as part of the decision-making process. Combining the EBM perspective with psychological assessment creates a hybrid approach that is more client centered, and it defines a set of applied research topics that are highly clinically relevant. This article offers a sequence of a dozen facets of the revised assessment process, along with examples of corollary research studies. An eclectic integration of EBM and evidence-based assessment generates a powerful hybrid that is likely to have broad applicability within clinical psychology and enhance the utility of psychological assessments.
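The "simplified Bayesian methods" referred to here usually mean updating a pretest probability with a diagnostic likelihood ratio. A minimal sketch (the numbers are illustrative, not from the article):

```python
def posterior_prob(prior_prob, likelihood_ratio):
    """One simplified Bayesian updating step as used in evidence-based
    medicine: posterior odds = prior odds x likelihood ratio."""
    prior_odds = prior_prob / (1 - prior_prob)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# a 10% pretest probability and a positive test with LR+ = 9
print(round(posterior_prob(0.10, 9), 2))  # → 0.5
```

A negative result is applied the same way with an LR− below 1, shrinking the probability; chaining such updates across tests is what lets assessment data be integrated explicitly in the clinical decision.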

  16. Combination of Deterministic and Probabilistic Meteorological Models to enhance Wind Farm Power Forecasts

    International Nuclear Information System (INIS)

    Bremen, Lueder von

    2007-01-01

    Large-scale wind farms will play an important role in the future worldwide energy supply. However, with increasing wind power penetration, all stakeholders on the electricity market will ask for more skilful wind power predictions, both for safe grid integration and to increase the economic value of wind power. A Neural Network is used to calculate Model Output Statistics (MOS) for each individual forecast model (ECMWF and HIRLAM) and to model the aggregated power curve of the Middelgrunden offshore wind farm. We showed that the combination of two NWP models clearly outperforms the better single model. The normalized day-ahead RMSE forecast error for Middelgrunden can be reduced by 1% compared to single ECMWF, a relative improvement of 6%. For lead times >24h it is worthwhile to use a more sophisticated model combination approach than simple linear weighting. The investigated principal component regression is able to extract the uncorrelated information from two NWP forecasts. The spread of Ensemble Predictions is related to the skill of wind power forecasts. Simple contingency diagrams show that low spread is more often related to low forecast errors and high spread to large forecast errors.
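The core claim — that a fitted linear combination of two imperfect forecasts beats the better single forecast — can be reproduced on synthetic data. A sketch using ordinary least squares on stand-in "ECMWF-like" and "HIRLAM-like" series (simple linear weighting, not the paper's Neural Network MOS or principal component regression):

```python
import math
import random

# synthetic demonstration that a fitted linear combination of two imperfect
# forecasts beats either single forecast
random.seed(0)
n = 500
truth = [random.gauss(0, 1) for _ in range(n)]   # "observed" power
f1 = [t + random.gauss(0, 0.4) for t in truth]   # ECMWF-like forecast
f2 = [t + random.gauss(0, 0.5) for t in truth]   # HIRLAM-like forecast

def ols(X, y):
    """Ordinary least squares via the normal equations and Gaussian
    elimination with partial pivoting (k parameters, len(X) samples)."""
    k = len(X[0])
    A = [[sum(xi[r] * xi[c] for xi in X) for c in range(k)] for r in range(k)]
    b = [sum(xi[r] * yi for xi, yi in zip(X, y)) for r in range(k)]
    for i in range(k):
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * ai for a, ai in zip(A[r], A[i])]
            b[r] -= f * b[i]
    w = [0.0] * k
    for i in reversed(range(k)):
        w[i] = (b[i] - sum(A[i][j] * w[j] for j in range(i + 1, k))) / A[i][i]
    return w

w = ols([[1.0, a, b] for a, b in zip(f1, f2)], truth)
combined = [w[0] + w[1] * a + w[2] * b for a, b in zip(f1, f2)]

def rmse(f):
    return math.sqrt(sum((a - t) ** 2 for a, t in zip(f, truth)) / n)

print(rmse(f1), rmse(f2), rmse(combined))  # combined has the smallest error
```

Because the fit can always reproduce either input forecast exactly (weights (0, 1, 0) or (0, 0, 1)), the in-sample error of the combination is guaranteed not to exceed that of the better single model.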

  17. Combining the AFLOW GIBBS and elastic libraries to efficiently and robustly screen thermomechanical properties of solids

    Science.gov (United States)

    Toher, Cormac; Oses, Corey; Plata, Jose J.; Hicks, David; Rose, Frisco; Levy, Ohad; de Jong, Maarten; Asta, Mark; Fornari, Marco; Buongiorno Nardelli, Marco; Curtarolo, Stefano

    2017-06-01

    Thorough characterization of the thermomechanical properties of materials requires difficult and time-consuming experiments. This severely limits the availability of data and is one of the main obstacles for the development of effective accelerated materials design strategies. The rapid screening of new potential materials requires highly integrated, sophisticated, and robust computational approaches. We tackled the challenge by developing an automated, integrated workflow with robust error-correction within the AFLOW framework which combines the newly developed "Automatic Elasticity Library" with the previously implemented GIBBS method. The first extracts the mechanical properties from automatic self-consistent stress-strain calculations, while the latter employs those mechanical properties to evaluate the thermodynamics within the Debye model. This new thermoelastic workflow is benchmarked against a set of 74 experimentally characterized systems to pinpoint a robust computational methodology for the evaluation of bulk and shear moduli, Poisson ratios, Debye temperatures, Grüneisen parameters, and thermal conductivities of a wide variety of materials. The effect of different choices of equations of state and exchange-correlation functionals is examined and the optimum combination of properties for the Leibfried-Schlömann prediction of thermal conductivity is identified, leading to better agreement with experimental results than the GIBBS-only approach. The framework has been applied to the AFLOW.org data repositories to compute the thermoelastic properties of over 3500 unique materials. The results are now available online by using an expanded version of the REST-API described in the Appendix.
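The Leibfried-Schlömann prediction mentioned above rests on the proportionality κ ∝ M δ θ_D³ / (γ² T). A minimal sketch with the prefactor set to 1, so only ratios of outputs are meaningful; the silicon-like inputs are illustrative, not AFLOW values:

```python
def kappa_ls(M, delta, theta_D, gamma, T, A=1.0):
    """Leibfried-Schlomann scaling of the lattice thermal conductivity:
    kappa = A * M * delta * theta_D**3 / (gamma**2 * T),
    with M the average atomic mass, delta the cube root of the atomic
    volume, theta_D the Debye temperature and gamma the Grueneisen
    parameter.  A is a units-dependent prefactor, set to 1 here."""
    return A * M * delta * theta_D**3 / (gamma**2 * T)

# silicon-like inputs (illustrative values, not AFLOW data)
si_300 = kappa_ls(M=28.09, delta=2.71, theta_D=640, gamma=1.06, T=300)
si_600 = kappa_ls(M=28.09, delta=2.71, theta_D=640, gamma=1.06, T=600)
print(si_300 / si_600)  # → 2.0  (kappa falls as 1/T above the Debye temperature)
```

The scaling makes the sensitivity of the prediction obvious: a stiff lattice (high θ_D) helps cubically, while anharmonicity (high γ) hurts quadratically — which is why getting θ_D and γ right from the elastic workflow matters so much.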

  18. Combinations of complex dynamical systems

    CERN Document Server

    Pilgrim, Kevin M

    2003-01-01

    This work is a research-level monograph whose goal is to develop a general combination, decomposition, and structure theory for branched coverings of the two-sphere to itself, regarded as the combinatorial and topological objects which arise in the classification of certain holomorphic dynamical systems on the Riemann sphere. It is intended for researchers interested in the classification of those complex one-dimensional dynamical systems which are in some loose sense tame. The program is motivated by the dictionary between the theories of iterated rational maps and Kleinian groups.

  19. Combining Alphas via Bounded Regression

    Directory of Open Access Journals (Sweden)

    Zura Kakushadze

    2015-11-01

    Full Text Available We give an explicit algorithm and source code for combining alpha streams via bounded regression. In practical applications, typically, there is insufficient history to compute a sample covariance matrix (SCM) for a large number of alphas. To compute alpha allocation weights, one then resorts to (weighted) regression over SCM principal components. Regression often produces alpha weights with insufficient diversification and/or a skewed distribution against, e.g., turnover. This can be rectified by imposing bounds on alpha weights within the regression procedure. Bounded regression can also be applied to stock and other asset portfolio construction. We discuss illustrative examples.
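The idea of imposing bounds on the weights inside the regression can be sketched with a small projected-gradient solver (an illustrative solver, not the authors' algorithm or source code; the two-"alpha" example is made up):

```python
# minimal sketch of regression with bounds on the weights: minimize
# ||A w - y||^2 subject to lb <= w_i <= ub via projected gradient descent
def bounded_regression(A, y, lb, ub, iters=5000):
    m, k = len(A), len(A[0])
    # crude safe step size: 1 / trace(A^T A) bounds 1 / largest eigenvalue
    lr = 1.0 / sum(A[i][j] ** 2 for i in range(m) for j in range(k))
    w = [min(max(0.0, lb), ub)] * k          # start at 0 projected into the box
    for _ in range(iters):
        r = [sum(A[i][j] * w[j] for j in range(k)) - y[i] for i in range(m)]
        g = [2 * sum(A[i][j] * r[i] for i in range(m)) for j in range(k)]
        w = [min(max(w[j] - lr * g[j], lb), ub) for j in range(k)]
    return w

# two toy "alpha" streams fitted to returns; unconstrained OLS would give
# weights (2, -1), but the box [0, 1] pulls the solution to (1, 0)
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
y = [2.0, -1.0, 1.0]
w = bounded_regression(A, y, lb=0.0, ub=1.0)
print([round(x, 3) for x in w])  # → [1.0, 0.0]
```

The clamping step is exactly the diversification device the abstract describes: no single alpha can take an extreme (or negative) allocation, whatever the unconstrained fit would prefer.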

  20. Radiation, chemicals and combined effects

    International Nuclear Information System (INIS)

    Sinclair, W.K.

    1991-01-01

    A brief background is provided on current carcinogenic risks from ionizing radiation and their magnitude in background circumstances. The magnitude of the risks from possibly carcinogenic chemicals at background levels in air, water and food is surprisingly similar. The exception is, perhaps, the single source of radon which, while variable, on average stands out above all other sources. Some basic principles concerning the interaction of combined radiation and chemicals, and some practical examples where the two interact synergistically to enhance radiation effects, are also provided. Areas for future human research are discussed. (Author)

  1. [Combination chemotherapy of experimental leukemia].

    Science.gov (United States)

    Emanuel', N M; Konovalova, N P; D'iachkovskaia, R F

    1977-01-01

    In the present work an attempt was made to gain a greater therapeutic effect of diazane coupled with adriamycin and sarcolysin. Leukemias L-1210 and La served as models. In leukemia La, diazane was injected once every 5 days. Either an additional injection of adriamycin two days prior to the diazane injection or sarcolysin injected simultaneously with diazane enabled the authors to obtain a distinct synergistic effect. In leukemia L-1210, simultaneous administration of diazane and sarcolysin also contributed to considerably longer survival of leukemic animals. Such combinations are likely to be promising in clinical use.

  2. Malignant peritoneal pseudomyxoma: Combined treatment

    International Nuclear Information System (INIS)

    Martin, A.; Alvarado, E.; Marcos, A.; Palacios, E.; Gomez, A.

    1993-01-01

    We describe a special treatment for malignant peritoneal pseudomyxoma, as suggested by Sugarbaker. Briefly, it is a combination of surgical cytoreduction with a curative aim, completed with immediate postoperative intraperitoneal chemotherapy. Bearing in mind the lack of metastatic danger of these tumours, as well as their lack of infiltrative character, this surgical technique, which consists of five different "peritonectomies", may free the patient of macroscopic tumour. The additional intraperitoneal chemotherapy might contribute to increased survival of these patients and, perhaps, even to their cure. (Author) 12 refs

  3. Strategies for combinational cancer therapies

    International Nuclear Information System (INIS)

    Khleif, Samir

    2014-01-01

    Countless pre-clinical studies and many clinical trials have applied tumor antigen-based therapies to cancer treatment, and although the necessary tumor-specific immune response may be elicited in tumor-bearing hosts, this has not been sufficient for a positive therapeutic outcome, since tumors develop multiple mechanisms to escape immune surveillance. The tumor-mediated inhibitory mechanisms involve co-inhibitory receptor-ligand interactions, such as PD-1/PD-L1, secretion of inhibitory molecules, such as TGFb, and recruitment of suppressive cells, such as regulatory T cells (Treg), myeloid-derived suppressor cells (MDSC), etc. Therefore, we hypothesized that successful cancer immunotherapy requires not only induction and enhancement of the effector immune response but also simultaneous targeting of the suppressor arm of the immune system; thus, in addition to enhancing antigen-specific immunity using vaccines or radiation therapy, one should also target tumor-mediated immune suppression to improve the overall efficacy of therapy. We developed multiple strategies to target various tumor-mediated immune inhibitory mechanisms that can enhance anti-tumor immunity and restructure the tumor microenvironment to allow effector cells generated by vaccination or radiation therapy to function potently. We evaluated the immune and therapeutic efficacy of multiple combinational therapies, including blocking and agonist antibodies to co-inhibitory/co-stimulatory molecules, such as PD-1, PD-L1, OX40, CTLA-4, GITR, inhibitors and neutralizing antibodies to inhibitory cytokines/molecules, such as IL-10, TGFb, IDO, and small molecules for selective inhibition of Tregs. In addition to evaluating anti-tumor efficacy, we also investigated cellular and molecular mechanisms of action for these agents when combined with vaccine or radiation therapy, and explored the interactions between compounds within combinational therapies in animal tumor models. We are

  4. Radiation tolerant combinational logic cell

    Science.gov (United States)

    Maki, Gary R. (Inventor); Gambles, Jody W. (Inventor); Whitaker, Sterling (Inventor)

    2009-01-01

    A system has a reduced sensitivity to Single Event Upset and/or Single Event Transient(s) compared to traditional logic devices. In a particular embodiment, the system includes an input, a logic block, a bias stage, a state machine, and an output. The logic block is coupled to the input. The logic block is for implementing a logic function, receiving a data set via the input, and generating a result f by applying the data set to the logic function. The bias stage is coupled to the logic block. The bias stage is for receiving the result from the logic block and presenting it to the state machine. The state machine is coupled to the bias stage. The state machine is for receiving, via the bias stage, the result generated by the logic block. The state machine is configured to retain a state value for the system. The state value is typically based on the result generated by the logic block. The output is coupled to the state machine. The output is for providing the value stored by the state machine. Some embodiments of the invention produce dual rail outputs Q and Q'. The logic block typically contains combinational logic and is similar, in size and transistor configuration, to a conventional CMOS combinational logic design. However, only a very small portion of the circuits of these embodiments, is sensitive to Single Event Upset and/or Single Event Transients.
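A toy behavioral model conveys the idea of a state machine that filters single-event transients out of a combinational result (an illustrative simulation only, not the patented circuit; the two-sample stability rule is an assumed simplification):

```python
# toy model of the described cell: a combinational logic block feeds a
# state machine that only commits a new value after two consecutive
# agreeing samples, so a single-event transient on one evaluation cannot
# flip the stored state
class SeuTolerantCell:
    def __init__(self, logic):
        self.logic = logic     # the combinational logic function f
        self.state = 0         # value retained by the state machine
        self._last = None      # previous raw result, via the bias stage

    def clock(self, inputs):
        result = self.logic(inputs)
        if result == self._last:           # stable for two samples: commit
            self.state = result
        self._last = result
        return self.state, 1 - self.state  # dual-rail outputs Q and Q'

cell = SeuTolerantCell(lambda x: x[0] & x[1])
cell.clock((1, 1))
cell.clock((1, 1))                 # state settles to 1
q, qbar = cell.clock((0, 1))       # one-cycle transient on an input
print(q, qbar)                     # → 1 0  (the glitch is filtered out)
```

A genuine input change, sustained for more than one clock, does propagate: a second consecutive `(0, 1)` sample updates the stored state to 0.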

  5. [Familial combined hyperlipidemia: consensus document].

    Science.gov (United States)

    Mata, Pedro; Alonso, Rodrigo; Ruíz-Garcia, Antonio; Díaz-Díaz, Jose L; González, Noemí; Gijón-Conde, Teresa; Martínez-Faedo, Ceferino; Morón, Ignacio; Arranz, Ezequiel; Aguado, Rocío; Argueso, Rosa; Perez de Isla, Leopoldo

    2014-10-01

    Familial combined hyperlipidemia (FCH) is a frequent disorder associated with premature coronary artery disease. It is transmitted in an autosomal dominant manner, although there is not a unique gene involved. The diagnosis is made using clinical criteria; variability in the lipid phenotype and a family history of hyperlipidemia are necessary. Frequently, the disorder is associated with type 2 diabetes mellitus, arterial hypertension and central obesity. Patients with FCH are considered at high cardiovascular risk, and the lipid target is an LDL-cholesterol <100 mg/dL, or <70 mg/dL if cardiovascular disease or type 2 diabetes is present. Patients with FCH require lipid-lowering treatment using potent statins and, sometimes, combined lipid-lowering treatment. Identification and management of other cardiovascular risk factors, such as type 2 diabetes and hypertension, are fundamental to reducing the cardiovascular disease burden. This document gives recommendations for the diagnosis and global treatment of patients with FCH, directed to specialists and general practitioners. Copyright © 2014 Sociedad Española de Médicos de Atención Primaria (SEMERGEN). Published by Elsevier España. All rights reserved.

  6. Combined trauma in peaceful time

    Directory of Open Access Journals (Sweden)

    Chaika V.A.

    2014-06-01

    Full Text Available In this article, the epidemiological features of combined trauma (CT) characteristic of an industrial region are summarized. 486 cases of CT from 2010 to 2012 were analyzed. Male patients dominated. 267 (54.9%) patients were aged 25 to 44 years. Most often, the damage occurred in 2 anatomic regions (AR) - 224 (46.1%); in 3 AR - 177 (36.4%); and in 4 or more - 85 (17.5%). Traumatic brain injury (94.2%), skeletal trauma (70.6%), and trauma of the chest and abdomen (68.4% and 35.7%, respectively) prevailed. Injury of the abdominal cavity was the most frequent dominant injury - 148 cases (30.5%). In 17 cases (3.5%) it was impossible to establish the dominant damage. The mortality rate was directly dependent on the type of trauma and the patient's age. Maximum values were found for combined injury of the brain and abdominal organs (28.6%) and in the group of patients older than 60 years (35.1%). From 2010 to 2012 the overall mortality decreased by 3.5%.

  7. Combining data in non-destructive testing; Fusion de donnees en CND pour le projet pace

    Energy Technology Data Exchange (ETDEWEB)

    Lavayssiere, B

    1994-03-01

    Non-destructive testing of some components quite often requires the use of several methods such as X-ray, ultrasonics, and eddy currents. The efficiency of an NDT method depends strongly on choosing the method best able to detect the flaws in a given specimen. Moreover, many inspection issues could benefit from the use of more than one test method, as each NDT method has its own physical properties and technological limits. Some questions remain: how to combine data, at what level, and for what functionality. Simple monomethod processes are well known now. They include techniques like reconstruction, which belongs to the so-called ill-posed problems in the field of mathematics. For NDT data processing, it has the ability to estimate real data from distorted data coming from a probe. Up to now, however, there have been very few approaches to computer-aided combination of results from different advanced techniques. This report presents the various mathematical fields involved towards that goal (statistical decision theory, which allows the use of multiple hypotheses; non-linear decision theory, for its capability to classify and discriminate; graph theory, to find the optimal path in a hypothesis graph; and also fuzzy logic, multiple resolution analysis, artificial intelligence, ...) and which combinations of methods are useful. Some images illustrate this topic, in which EDF is involved, and explain the major goals of this work. Combining is not only an improvement of 3D visualisation, which would allow CAD and NDT data to be displayed simultaneously, for example; it consists of exploiting multisensor data collected via a variety of sophisticated techniques and presenting this information to the operator without overloading the operator/system capacities, in order to reduce the uncertainty and resolve the ambiguity inherent in monomethod inspection. (author). 7 figs., 35 refs.

  8. Combining Renewable Energy With Coal

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2013-09-01

    There are various possibilities for incorporating biomass into coal-fuelled processes and a number of these are already being deployed commercially. Others are the focus of ongoing research and development. Biomass materials can vary widely, although the present report concentrates mainly on the use of woody biomass in the form of forest residues. Potentially, large amounts are available in some parts of the world. However, not all forested regions are very productive, and the degree of commercial exploitation varies considerably between individual countries. The level of wastage associated with timber production and associated downstream processing is frequently high and considerable quantities of potentially useful materials are often discarded. Overall, forest residues are a largely underexploited resource. Combining the use of biomass with coal can be beneficial, particularly from an environmental standpoint, although any such process may have its limitations or drawbacks. Each coal type and biomass feedstock has different characteristics although by combining the two, it may be possible to capitalise on the advantages of each, and minimise their individual disadvantages. An effective way is via cogasification, and useful operating experience has been achieved in a number of large-scale coal-fuelled gasification and IGCC plants. Cogasification can be the starting point for producing a range of products that include synthetic natural gas, chemicals, fertilisers and liquid transport fuels. It also has the potential to form the basis of systems that combine coal and biomass use with other renewable energy technologies to create clean, efficient energy-production systems. Thus, various hybrid energy concepts, some based on coal/biomass cogasification, have been proposed or are in the process of being developed or trialled. Some propose to add yet another element of renewable energy to the system, generally by incorporating electricity generated by intermittent

  9. Combined Shape and Topology Optimization

    DEFF Research Database (Denmark)

    Christiansen, Asger Nyman

    Shape and topology optimization seeks to compute the optimal shape and topology of a structure such that one or more properties, for example stiffness, balance or volume, are improved. The goal of the thesis is to develop a method for shape and topology optimization which uses the Deformable...... Simplicial Complex (DSC) method. Consequently, we present a novel method which combines current shape and topology optimization methods. This method represents the surface of the structure explicitly and discretizes the structure into non-overlapping elements, i.e. a simplicial complex. An explicit surface...... representation usually limits the optimization to minor shape changes. However, the DSC method uses a single explicit representation and still allows for large shape and topology changes. It does so by constantly applying a set of mesh operations during deformations of the structure. Using an explicit instead...

  10. How rats combine temporal cues.

    Science.gov (United States)

    Guilhardi, Paulo; Keen, Richard; MacInnis, Mika L M; Church, Russell M

    2005-05-31

    The procedures for classical and operant conditioning, and many timing procedures, involve the delivery of reinforcers that may be related to the time of previous reinforcers and responses, and to the times of onset and termination of stimuli. The behavior resulting from such procedures can be described as bouts of responding that occur in some pattern at some rate. A packet theory of timing and conditioning is described that accounts for such behavior under a wide range of procedures. Applications include food searching by rats in Skinner boxes under conditions of fixed and random reinforcement, brief and sustained stimuli, and several response-food contingencies. The approach is used to describe how multiple cues from reinforcers and stimuli combine to determine the rate and pattern of response bouts.

  11. Factorized combinations of Virasoro characters

    International Nuclear Information System (INIS)

    Bytsko, A.G.; Fring, A.

    2000-01-01

    We investigate linear combinations of characters for minimal Virasoro models which are representable as a product of several basic blocks. Our analysis is based on consideration of the asymptotic behaviour of the characters in the quasi-classical limit. In particular, we introduce the notion of a secondary effective central charge. We find all possible cases for which factorization occurs on the basis of the Gauss-Jacobi or the Watson identities. Exploiting these results, we establish various types of identities between different characters. In particular, we present several identities generalizing the Rogers-Ramanujan identities. Applications to quasi-particle representations, modular invariant partition functions, super-conformal theories and conformal models with boundaries are briefly discussed. (orig.)
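The classical Rogers-Ramanujan identities that these character identities generalize can be checked numerically as formal power series; a sketch for the first identity, truncated at order q^29:

```python
# verify the first Rogers-Ramanujan identity as formal power series in q:
#   sum_{n>=0} q^(n^2) / ((1-q)(1-q^2)...(1-q^n))
#     = prod_{n>=0} 1 / ((1-q^(5n+1))(1-q^(5n+4)))
N = 30  # truncation order

def mul(a, b):
    """Multiply two truncated power series (coefficient lists of length N)."""
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j in range(N - i):
                c[i + j] += ai * b[j]
    return c

def geometric(k):
    """Series 1/(1 - q^k) = 1 + q^k + q^(2k) + ..., truncated at order N."""
    return [1 if m % k == 0 else 0 for m in range(N)]

lhs = [0] * N
term = [1] + [0] * (N - 1)          # running product 1/((1-q)...(1-q^n))
n = 0
while n * n < N:
    if n > 0:
        term = mul(term, geometric(n))
    for i in range(N - n * n):
        lhs[i + n * n] += term[i]   # add q^(n^2) * term
    n += 1

rhs = [1] + [0] * (N - 1)
for k in range(1, N):
    if k % 5 in (1, 4):
        rhs = mul(rhs, geometric(k))

print(lhs == rhs)  # → True
```

The right-hand side counts partitions into parts congruent to 1 or 4 mod 5, so the first few coefficients 1, 1, 1, 1, 2, 2, 3, ... can also be checked by hand.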

  12. Combining Narrative and Numerical Simulation

    DEFF Research Database (Denmark)

    Hansen, Mette Sanne; Ladeby, Klaes Rohde; Rasmussen, Lauge Baungaard

    2011-01-01

    for decision makers to systematically test several different outputs of possible solutions in order to prepare for future consequences. The CSA can be a way to evaluate risks and address possible unforeseen problems in a more methodical way than either guessing or forecasting. This paper contributes...... to the decision making in operations and production management by providing new insights into modelling and simulation based on the combined narrative and numerical simulation approach as a tool for strategy making. The research question asks, “How can the CSA be applied in a practical context to support strategy...... making?” The paper uses a case study where interviews and observations were carried out in a Danish corporation. The CSA is a new way to address decision making and has both practical value and further expands the use of strategic simulation as a management tool....

  13. Severe combined immune deficiency syndrome

    International Nuclear Information System (INIS)

    Saleem, A.F.; Khawaja, R.D.A.; Shaikh, A.S.; Ali, S.A.; Zaidi, A.K.M.

    2013-01-01

    To determine the clinico-demographic features and laboratory parameters of children with severe combined immunodeficiency (SCID). Study Design: Case series. Place and Duration of Study: Department of Paediatrics and Child Health, The Aga Khan University, Karachi, from July 2006 to July 2011. Methodology: Thirteen infants who were discharged with a diagnosis of SCID were included in the study. Their clinico-demographic features and laboratory parameters were determined. Descriptive statistics were used to compute frequencies and percentages. Results: The median age at diagnosis was five months; 5 infants presented within the first 3 months of life. Three-fourths (77%) were male. Most of the infants were severely malnourished (85%) at the time of presentation. More than two-thirds (69%) were products of consanguineous marriages. All subjects had severe lymphopenia (absolute lymphocyte count (ALC) between 170 and 2280) and low T and B lymphocyte counts. Conclusion: SCID should be considered in infants presenting with severe and recurrent infections. A low ALC (<2500/mm3) is a reliable diagnostic feature of SCID. These infants should be promptly referred to a facility where stem cell transplant can be done. (author)

  14. Efficient Web Services Policy Combination

    Science.gov (United States)

    Vatan, Farrokh; Harman, Joseph G.

    2010-01-01

    Large-scale Web security systems usually involve cooperation between domains with non-identical policies. The network management and Web communication software used by the different organizations presents a stumbling block. Many of the tools used by the various divisions do not have the ability to communicate network management data with each other. At best, this means that manual human intervention into the communication protocols used at various network routers and endpoints is required. Developing practical, sound, and automated ways to compose policies to bridge these differences is a long-standing problem. One of the key subtleties is the need to deal with inconsistencies and defaults, where one organization proposes a rule on a particular feature and another has a different rule or expresses no rule. A general approach is to assign priorities to rules and observe the rules with the highest priorities when there are conflicts. The present methods have an inherent inefficiency, which heavily restricts their practical applications. A new, efficient algorithm combines policies utilized for Web services. The method is based on an algorithm that allows an automatic and scalable composition of security policies between multiple organizations. It is based on defeasible policy composition, a promising approach for finding conflicts and resolving priorities between rules. In the general case, policy negotiation is an intractable problem. A promising method, suggested in the literature, is when policies are represented in defeasible logic and composition is based on rules for non-monotonic inference. In this system, policy writers construct metapolicies describing both the policy that they wish to enforce and annotations describing their composition preferences.
These annotations can indicate whether certain policy assertions are required by the policy writer or, if not, under what circumstances the policy writer is willing to compromise and allow other assertions to take
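    The priority-based conflict resolution described above can be sketched in a few lines. This is a hypothetical illustration (the function name, features, and priority scheme are invented), not the defeasible-logic algorithm of the record itself: where two organizations propose rules on the same feature, the higher-priority rule wins, and a feature addressed by only one party is adopted as-is.

```python
# Hypothetical sketch of priority-based policy combination: each
# organization proposes {feature: (value, priority)} rules; conflicts on a
# feature are resolved in favor of the highest priority, and a feature on
# which only one party expresses a rule is adopted unchanged.
def combine_policies(*policies):
    combined = {}
    for policy in policies:
        for feature, (value, priority) in policy.items():
            current = combined.get(feature)
            if current is None or priority > current[1]:
                combined[feature] = (value, priority)
    return {feature: value for feature, (value, priority) in combined.items()}

org_a = {"encryption": ("AES-256", 10), "logging": ("required", 5)}
org_b = {"encryption": ("AES-128", 3), "auth": ("mutual-TLS", 7)}
merged = combine_policies(org_a, org_b)
```

    A full defeasible-logic composer would additionally distinguish strict from defeasible rules and record which rules were defeated, but the priority-resolution step reduces to a comparison of this kind.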

  15. Use of TCO as splitter in the optical splitting system for solar cells combination: a simulation study

    Science.gov (United States)

    Ayala-Mató, F.; Seuret-Jiménez, D.; Vigil-Galán, O.; Escobedo Alatorre, J. J.

    2017-10-01

    Transparent conducting oxides (TCOs) are evaluated as optical splitters in combined single thin film solar cells by using theoretical considerations. The optical properties of TCOs (transmittance and reflectance) are calculated using the Drude theory for free carriers. To improve the overall efficiency of the combined solar cells, the optical properties of the TCOs are studied as a function of the electron concentration and thickness, to obtain the best fit with the external quantum efficiency (EQE) of the solar cells in each case. The optimum values of the above parameters are obtained by applying a modified version of the Hooke-Jeeves method. To validate the proposal of the use of a TCO as the splitter, the short circuit current is calculated for several combined solar cell systems and the results are compared with those obtained using more sophisticated and expensive splitters, reported in the literature. The experimental results using a commercial TCO are presented, to verify the validity and feasibility of the novel concept.
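    The optimum electron concentration and thickness above are found with a modified Hooke-Jeeves method; the sketch below shows the basic (unmodified) pattern search on an invented two-variable objective, as a rough illustration only.

```python
def hooke_jeeves(f, x0, step=1.0, shrink=0.5, tol=1e-6, max_iter=1000):
    """Minimal Hooke-Jeeves pattern search (illustrative sketch)."""
    def explore(base, s):
        # exploratory moves: perturb each coordinate by +/- s, keep improvements
        best = list(base)
        fb = f(best)
        for i in range(len(best)):
            for delta in (s, -s):
                trial = list(best)
                trial[i] += delta
                ft = f(trial)
                if ft < fb:
                    best, fb = trial, ft
                    break
        return best, fb

    base = list(x0)
    fb = f(base)
    for _ in range(max_iter):
        new, fn = explore(base, step)
        if fn < fb:
            # pattern move: extrapolate along the successful direction
            pattern = [2 * n - b for n, b in zip(new, base)]
            pat, fp = explore(pattern, step)
            if fp < fn:
                base, fb = pat, fp
            else:
                base, fb = new, fn
        else:
            step *= shrink          # no improvement: refine the mesh
            if step < tol:
                break
    return base, fb

# invented objective: minimize (x-3)^2 + (y+1)^2, minimum at (3, -1)
xmin, fmin = hooke_jeeves(lambda v: (v[0] - 3) ** 2 + (v[1] + 1) ** 2, [0.0, 0.0])
```

    The authors' modified version and their actual EQE-fitting objective are not reproduced here; only the generic search mechanics are shown.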

  16. Combined modalities: chemotherapy/radiotherapy. Meeting summary

    International Nuclear Information System (INIS)

    Phillips, T.L.

    1979-01-01

    The effects of combined modalities, the standardization of terminology, the mechanisms of chemotherapeutic interactions with radiation and responses of normal and tumor systems are summarized from information presented at the Conference on Combined Modalities

  17. 27 CFR 6.93 - Combination packaging.

    Science.gov (United States)

    2010-04-01

    ..., DEPARTMENT OF THE TREASURY LIQUORS “TIED-HOUSE” Exceptions § 6.93 Combination packaging. The act by an industry member of packaging and distributing distilled spirits, wine, or malt beverages in combination...

  18. Combining multiple classifiers for age classification

    CSIR Research Space (South Africa)

    Van Heerden, C

    2009-11-01

    Full Text Available The authors compare several different classifier combination methods on a single task, namely speaker age classification. This task is well suited to combination strategies, since significantly different feature classes are employed. Support vector...
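    The simplest of the classifier combination methods compared in work of this kind is unweighted majority voting. A minimal sketch with invented age-class labels (not the authors' actual classifiers or fusion scheme):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier label predictions by unweighted majority vote.

    predictions: list of label sequences, one sequence per classifier.
    Ties break toward the label that first reaches the top count.
    """
    combined = []
    for labels in zip(*predictions):
        combined.append(Counter(labels).most_common(1)[0][0])
    return combined

# three hypothetical age-class predictors over four utterances
clf_a = ["adult", "child", "senior", "adult"]
clf_b = ["adult", "adult", "senior", "child"]
clf_c = ["child", "child", "adult", "adult"]
fused = majority_vote([clf_a, clf_b, clf_c])  # ["adult", "child", "senior", "adult"]
```

    Weighted voting or score-level fusion (averaging posterior probabilities) follows the same structure, with the vote replaced by a weighted sum.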

  19. Systemic combination treatment for psoriasis: a review

    DEFF Research Database (Denmark)

    Jensen, Peter; Skov, Lone; Zachariae, Claus

    2010-01-01

    exist for the use of systemic combination therapy. Therefore, our aim was to review the current literature on systemic anti-psoriatic combination regimens. We searched PubMed and identified 98 papers describing 116 studies (23 randomized) reporting on the effect of various systemic combination...

  20. eCOMPAGT – efficient Combination and Management of Phenotypes and Genotypes for Genetic Epidemiology

    Directory of Open Access Journals (Sweden)

    Specht Günther

    2009-05-01

    Full Text Available Abstract Background High-throughput genotyping and phenotyping projects of large epidemiological study populations require sophisticated laboratory information management systems. Most epidemiological studies include subject-related personal information, which needs to be handled with care by following data privacy protection guidelines. In addition, genotyping core facilities handling cooperative projects require a straightforward solution to monitor the status and financial resources of the different projects. Description We developed a database system for the efficient combination and management of phenotypes and genotypes (eCOMPAGT) derived from genetic epidemiological studies. eCOMPAGT securely stores and manages genotype and phenotype data and enables different user modes with different rights. Special attention was paid to the import of data derived from TaqMan and SNPlex genotyping assays. However, the database solution is adjustable to other genotyping systems by programming additional interfaces. Further important features are the scalability of the database and an export interface to statistical software. Conclusion eCOMPAGT can store, administer and connect phenotype data with all kinds of genotype data and is available as a downloadable version at http://dbis-informatik.uibk.ac.at/ecompagt.
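    A minimal sketch of the kind of phenotype-genotype linkage such a system manages, using an invented three-table schema (not eCOMPAGT's actual schema) and SQLite purely for illustration:

```python
import sqlite3

# Hypothetical minimal schema in the spirit of the record (invented, not
# eCOMPAGT's real schema): pseudonymized subjects, their phenotypes, and
# per-SNP genotypes joined into one row per subject for statistical export.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE subject   (id INTEGER PRIMARY KEY, pseudonym TEXT);
CREATE TABLE phenotype (subject_id INTEGER, trait TEXT, value REAL);
CREATE TABLE genotype  (subject_id INTEGER, snp TEXT, call TEXT);
""")
con.execute("INSERT INTO subject VALUES (1, 'S001'), (2, 'S002')")
con.executemany("INSERT INTO phenotype VALUES (?, ?, ?)",
                [(1, "BMI", 24.1), (2, "BMI", 27.9)])
con.executemany("INSERT INTO genotype VALUES (?, ?, ?)",
                [(1, "rs123", "AA"), (2, "rs123", "AG")])

rows = con.execute("""
SELECT s.pseudonym, p.value AS bmi, g.call
FROM subject s
JOIN phenotype p ON p.subject_id = s.id AND p.trait = 'BMI'
JOIN genotype  g ON g.subject_id = s.id AND g.snp = 'rs123'
ORDER BY s.pseudonym
""").fetchall()
```

    Storing only a pseudonym in the joined export, as above, is one common way to keep personal identifiers out of the data handed to statistical software.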

  1. Applicability and costs of nanofiltration in combination with photocatalysis for the treatment of dye house effluents

    Directory of Open Access Journals (Sweden)

    Wolfgang M. Samhaber

    2014-04-01

    Full Text Available Nanofiltration (NF) is a capable method for the separation of dyes, which can support and even improve the applicability of photocatalysis in effluent-treatment processes. The membrane process usually needs special pre-treatment to avoid precipitation and fouling on the membrane surface. Conceptually, NF can be applied in the pre-treatment prior to the catalytic reactor or in connection with the reactor to separate the liquid phase from the reaction system and to recycle finely suspended catalysts and/or organic compounds. When considering such reaction systems at a larger scale, cost figures prove the usefulness of these concepts. Different applications of photocatalysis on the lab scale have been published in recent years. Membrane technology is used in almost all of those processes, and an overview is given of recently published systems that have been reported to be potentially useful for further scale-up. NF membranes are mostly used for the more sophisticated separation step of these processes, and the additional costs of the NF treatment, without any associated equipment, are described and illustrated. The total specific costs of industrial NF treatment processes in suitably adjusted and designed plants range from 1 to 6 US$/m3 of treated effluent. Combination concepts will have good preconditions for further development and upscaling if the NF costs discussed here in detail, together with the costs of photocatalysis, prove economically acceptable.

  2. Acharya Nachiketa Multi-model ensemble schemes for predicting ...

    Indian Academy of Sciences (India)

    during pre-monsoon season: A case study based on satellite data and regional climate model. 269. Anand R ... Development of regional wheat VI-LAI models using Resourcesat-1 .... Impact of additional surface observation network on short range .... 795. Sahu P. Threat of land subsidence in and around Kolkata City.

  3. Multi-model ensemble schemes for predicting northeast monsoon ...

    Indian Academy of Sciences (India)

    drought occurred. Some of these are extreme northeast monsoon years with significantly less rainfall (1982, 1988, 1989 and 2005), and in some years, more than normal rainfall occurred (1987, 1993, 1996, 1997 and 1998). Some of these typical years may also be characterized as El Niño year (1987), La Niña year ...

  4. PADF RF Localization Criteria for Multi-Model Scattering Environments

    Science.gov (United States)

    2011-04-01

    Raul Ordonez b, Atindra Mitra c aDepartment of Electrical Engineering, Louisiana Tech University, Ruston, LA 71272 bDepartment of Electrical and...21] April Johnson, Cara Rupp, Brad Wolf, Lang Hong, Atindra Mitra, “Collision-Avoidance Radar for Bicyclist and Runners,” 2010 IEEE National Aerospace and Electronics Conference, 14-16 July 2010, Dayton, Ohio

  5. Process based unification for multi-model software process improvement

    NARCIS (Netherlands)

    Kelemen, Z.D.

    2013-01-01

    Many different quality approaches are available in the software industry. Some of the ap-proaches, such as ISO 9001 are not software specific, i.e. they define general requirements for an organization and they can be used at any company. Others, such as Automotive SPICE have been derived from a

  6. Modeling Uncertainty in Climate Change: A Multi-Model Comparison

    Energy Technology Data Exchange (ETDEWEB)

    Gillingham, Kenneth; Nordhaus, William; Anthoff, David; Blanford, Geoffrey J.; Bosetti, Valentina; Christensen, Peter; McJeon, Haewon C.; Reilly, J. M.; Sztorc, Paul

    2015-10-01

    The economics of climate change involves a vast array of uncertainties, complicating both the analysis and development of climate policy. This study presents the results of the first comprehensive study of uncertainty in climate change using multiple integrated assessment models. The study looks at model and parametric uncertainties for population, total factor productivity, and climate sensitivity and estimates the pdfs of key output variables, including CO2 concentrations, temperature, damages, and the social cost of carbon (SCC). One key finding is that parametric uncertainty is more important than uncertainty in model structure. Our resulting pdfs also provide insight on tail events.
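    The propagation of parametric uncertainty into output pdfs can be illustrated with a toy Monte Carlo sketch. The lognormal prior on climate sensitivity and the quadratic damage function below are invented placeholders, not the paper's calibrated integrated assessment models.

```python
import random
import statistics

random.seed(42)

# Toy sketch (not the paper's models): draw climate sensitivity S from a
# hypothetical prior, push each draw through an illustrative
# temperature -> damage chain, and summarize the damage distribution.
def simulate(n=10000):
    damages = []
    for _ in range(n):
        sensitivity = random.lognormvariate(1.1, 0.3)  # hypothetical prior, ~3 C median
        forcing_ratio = 1.0                            # one CO2 doubling
        temperature = sensitivity * forcing_ratio
        damage_frac = 0.002 * temperature ** 2         # illustrative quadratic damages
        damages.append(damage_frac)
    return damages

d = simulate()
median_damage = statistics.median(d)
p95 = sorted(d)[int(0.95 * len(d))]
```

    The heavy right tail of the resulting distribution (p95 well above the median) is the kind of "tail event" insight the pdf-based analysis provides.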

  7. DTPA: Bis benzimidazole as multi model imaging agent

    International Nuclear Information System (INIS)

    Srivastava, Vikas; Tiwari, A.K.; Sharma, H.; Sharma, R.; Mishra, A.K.

    2010-01-01

    Full text: The DTPA bis benzimidazole analogue has been tested for radiopharmaceutical efficacy. Radiolabelling was found to be more than 98% after 8 h, and blood kinetics was fast. The compound was also tested as an optical imaging agent. The Eu 3+ ion has an absorption band in the visible spectrum (578-582 nm) whose wavelength is very sensitive to even small changes in the coordination environment. Although the intensity of this 7F0 → 5D0 transition is low, the bands are relatively narrow, which allows distinguishing different coordination states of the metal. For Eu 3+ complexes which have two differently hydrated forms in aqueous solution, one observes two absorption bands belonging to the two species. High-resolution UV-visible spectra were recorded in aqueous solutions, which show two distinct, temperature-dependent absorption bands. The intensity ratio of these two bands changes with temperature: the band at shorter wavelengths decreases very slightly, while that at longer wavelengths increases with the temperature. The ratio of the integrals of the two bands is related to the equilibrium constant, and its temperature dependence yields the reaction enthalpy and entropy

  8. Binocular Combination of Second-Order Stimuli

    Science.gov (United States)

    Zhou, Jiawei; Liu, Rong; Zhou, Yifeng; Hess, Robert F.

    2014-01-01

    Phase information is a fundamental aspect of visual stimuli. However, the nature of the binocular combination of stimuli defined by modulations in contrast, so-called second-order stimuli, is presently not clear. To address this issue, we measured binocular combination for first- (luminance modulated) and second-order (contrast modulated) stimuli using a binocular phase combination paradigm in seven normal adults. We found that the binocular perceived phase of second-order gratings depends on the interocular signal ratio as has been previously shown for their first order counterparts; the interocular signal ratios when the two eyes were balanced was close to 1 in both first- and second-order phase combinations. However, second-order combination is more linear than previously found for first-order combination. Furthermore, binocular combination of second-order stimuli was similar regardless of whether the carriers in the two eyes were correlated, anti-correlated, or uncorrelated. This suggests that, in normal adults, the binocular phase combination of second-order stimuli occurs after the monocular extracting of the second-order modulations. The sensory balance associated with this second-order combination can be obtained from binocular phase combination measurements. PMID:24404180

  9. Zooniverse: Combining Human and Machine Classifiers for the Big Survey Era

    Science.gov (United States)

    Fortson, Lucy; Wright, Darryl; Beck, Melanie; Lintott, Chris; Scarlata, Claudia; Dickinson, Hugh; Trouille, Laura; Willi, Marco; Laraia, Michael; Boyer, Amy; Veldhuis, Marten; Zooniverse

    2018-01-01

    Many analyses of astronomical data sets, ranging from morphological classification of galaxies to identification of supernova candidates, have relied on humans to classify data into distinct categories. Crowdsourced galaxy classifications via the Galaxy Zoo project provided a solution that scaled visual classification for extant surveys by harnessing the combined power of thousands of volunteers. However, the much larger data sets anticipated from upcoming surveys will require a different approach. Automated classifiers using supervised machine learning have improved considerably over the past decade but their increasing sophistication comes at the expense of needing ever more training data. Crowdsourced classification by human volunteers is a critical technique for obtaining these training data. But several improvements can be made on this zeroth-order solution. Efficiency gains can be achieved by implementing a “cascade filtering” approach whereby the task structure is reduced to a set of binary questions that are more suited to simpler machines while demanding lower cognitive loads for humans. Intelligent subject retirement based on quantitative metrics of volunteer skill and subject label reliability also leads to dramatic improvements in efficiency. We note that human and machine classifiers may retire subjects differently, leading to trade-offs in performance space. Drawing on work with several Zooniverse projects including Galaxy Zoo and Supernova Hunter, we will present recent findings from experiments that combine cohorts of human and machine classifiers. We show that the most efficient system results when appropriate subsets of the data are intelligently assigned to each group according to their particular capabilities. With sufficient online training, simple machines can quickly classify “easy” subjects, leaving more difficult (and discovery-oriented) tasks for volunteers. We also find humans achieve higher classification purity while samples
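    The assignment of subjects between machine and human classifiers can be sketched as a confidence-threshold router. The threshold value and the score dictionary below are hypothetical, not Zooniverse's actual retirement logic:

```python
# Sketch of a "cascade filtering" assignment (hypothetical thresholds): the
# machine classifier retires subjects it is confident about, and ambiguous
# subjects are routed to human volunteers.
def route_subjects(machine_scores, confident=0.9):
    """machine_scores: {subject_id: probability of the positive class}."""
    machine_done, needs_humans = {}, []
    for subject, p in machine_scores.items():
        if p >= confident or p <= 1 - confident:
            machine_done[subject] = p >= confident   # confident machine label
        else:
            needs_humans.append(subject)             # ambiguous: ask the crowd
    return machine_done, needs_humans

scores = {"s1": 0.97, "s2": 0.55, "s3": 0.04, "s4": 0.80}
done, crowd = route_subjects(scores)
```

    In a real deployment the threshold would itself be tuned against the trade-off between machine throughput and the higher classification purity that humans achieve.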

  10. Combined use of infrared and hard X-ray microprobes for spectroscopy-based neuroanatomy

    Science.gov (United States)

    Surowka, A. D.; Ziomber, A.; Czyzycki, M.; Migliori, A.; Pieklo, L.; Kasper, K.; Szczerbowska-Boruchowska, M.

    2018-05-01

    Understanding the pathological triggers that affect the structural and physiological integrity of the biochemical milieu of neurons is crucial to extending our knowledge of brain disorders, which in many circumstances are hardly treatable. Recently, by using sophisticated hyperspectral micro-imaging modalities, it has become possible to gain insight into high-fidelity histological details along with the corresponding biochemical information in a label-free fashion, without using any additional chemical fixatives. However, in order to push forward extensive application of these methods in the clinical arena, further iteration on novel data analysis protocols is needed to boost their sensitivity. Therefore, in our study we proposed a new combined approach utilizing both benchtop Fourier transform infrared (FTIR) and synchrotron X-ray fluorescence (SR-XRF) micro-spectroscopies coupled with multivariate data clustering using the K-means algorithm for combined molecular and elemental micro-imaging, so that these complementary analytical tools could be used for delineating between various brain structures based on their biochemical composition. By utilizing mid-IR transmission FTIR experiments, the biochemical composition in terms of lipids, proteins and phosphodiesters became accessible. In turn, the SR-XRF experiment was carried out at the advanced IAEA X-ray spectrometry station at Elettra Sincrotrone Trieste. By measuring in vacuum and by using a primary exciting X-ray beam monochromatized to 10.5 keV, we took advantage of accessing the characteristic X-ray lines of a variety of elements ranging from carbon to zinc. Herein, we report that the developed methodology has high specificity for label-free discrimination between lipid- and protein-rich brain tissue areas.
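    The multivariate clustering step uses the standard K-means algorithm; a minimal pure-Python version (Lloyd's algorithm) on an invented toy dataset illustrates how pixels with similar spectral features are grouped into tissue classes.

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Minimal K-means (Lloyd's algorithm) for small 2-D feature vectors."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # assignment step: each point joins its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2 + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        # update step: move each center to its cluster mean
        new_centers = []
        for i, cl in enumerate(clusters):
            if cl:
                new_centers.append((sum(p[0] for p in cl) / len(cl),
                                    sum(p[1] for p in cl) / len(cl)))
            else:
                new_centers.append(centers[i])
        if new_centers == centers:
            break
        centers = new_centers
    return centers, clusters

# two well-separated toy "tissue" clusters in a 2-D feature space
pts = [(0.1, 0.2), (0.0, 0.0), (0.2, 0.1), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
centers, clusters = kmeans(pts, 2)
```

    In the combined FTIR/SR-XRF setting, each "point" would instead be a per-pixel vector of band intensities and elemental counts, but the clustering mechanics are unchanged.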

  11. Combined eye-atmosphere visibility model

    Science.gov (United States)

    Kaufman, Y. J.

    1981-01-01

    Existing models of the optical characteristics of the eye are combined with a recent model of optical characteristics of the atmosphere given by its modulation transfer function. This combination results in the combined eye-atmosphere performance given by the product of their modulation transfer functions. An application for the calculation of visibility thresholds in the case of a two-halves field is given.
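    The combination rule itself is simple: the system MTF is the product of the component MTFs. The Gaussian component forms below are invented placeholders, not the eye and atmosphere models cited in the record.

```python
import math

# Sketch: the combined eye-atmosphere MTF is the product of the two
# component MTFs (both illustrative Gaussian forms here, not the record's
# actual models; f is spatial frequency in arbitrary units).
def mtf_eye(f, f_eye=60.0):
    return math.exp(-(f / f_eye) ** 2)

def mtf_atmosphere(f, f_atm=30.0):
    return math.exp(-(f / f_atm) ** 2)

def mtf_combined(f):
    return mtf_eye(f) * mtf_atmosphere(f)

value = mtf_combined(20.0)
```

    Because each MTF lies between 0 and 1, the combined response can never exceed that of either component alone, which is what drives the visibility thresholds down.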

  12. GLONASS orbit/clock combination in VNIIFTRI

    Science.gov (United States)

    Bezmenov, I.; Pasynok, S.

    2015-08-01

    An algorithm and a program for GLONASS satellite orbit/clock combination based on daily precise orbits submitted by several Analytic Centers were developed. Theoretical estimates for the RMS of the combined orbit positions were derived. It was shown that, provided the RMS values of the satellite orbits supplied by the Analytic Centers over a long time interval are commensurable, the RMS of the combined orbit positions is no greater than the RMS of the satellite positions estimated by any single Analytic Center.
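    One standard way to realize the stated property, a combined orbit no worse than any single center's, is inverse-variance weighting. The sketch below uses invented one-dimensional positions and RMS values, not the record's actual combination algorithm:

```python
# Hypothetical sketch of combining per-center satellite positions by
# inverse-variance weighting: centers with lower reported RMS get more
# weight, and for independent errors the combined RMS is never worse than
# the best single center's.
def combine_positions(estimates):
    """estimates: list of (position, rms) pairs from different centers."""
    weights = [1.0 / rms ** 2 for _, rms in estimates]
    total = sum(weights)
    position = sum(w * p for w, (p, _) in zip(weights, estimates)) / total
    combined_rms = (1.0 / total) ** 0.5
    return position, combined_rms

# invented along-track positions (m) and RMS values (m) from three centers
centers = [(100.02, 0.05), (99.98, 0.05), (100.01, 0.10)]
pos, rms = combine_positions(centers)
```

    Real orbit combination works on full 3-D positions per epoch and typically adds outlier screening, but the weighting logic is the same.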

  13. Systemic combination treatment for psoriasis: a review

    DEFF Research Database (Denmark)

    Jensen, Peter; Skov, Lone; Zachariae, Claus

    2010-01-01

    Psoriasis is a chronic inflammatory skin disease, which affects approximately 2.6% of the population in Northern Europe and Scandinavia. In order to achieve disease control, combinations of systemic treatments are sometimes needed for variable time periods. However, no evidence-based guidelines exist for the use of systemic combination therapy. Therefore, our aim was to review the current literature on systemic anti-psoriatic combination regimens. We searched PubMed and identified 98 papers describing 116 studies (23 randomized) reporting on the effect of various systemic combination...

  14. Radiation and combined heat transfer in channels

    International Nuclear Information System (INIS)

    Tamonis, M.

    1986-01-01

    This book presents numerical methods for the calculation of radiative and combined heat transfer in channel flows of radiating as well as nonradiating media. Results obtained in calculations for flow conditions of combustion products from organic fuels are given, and methods used in determining the spectral optical properties of molecular gases are analyzed. The book presents applications of heat transfer in solving problems. Topics covered are as follows: optical properties of molecular gases; transfer equations for combined heat transfer; experimental technique; convective heat transfer in heated gas flows; radiative heat transfer in gaseous media; combined heat transfer; and radiative and combined heat transfer in applied problems

  15. 12 CFR 5.33 - Business combinations.

    Science.gov (United States)

    2010-01-01

    ... with safe and sound banking practices. (iii) Money laundering. The OCC considers the effectiveness of any insured depository institution involved in the business combination in combating money laundering...

  16. Preservation of semi-perishable food and development of convenience food using a combination of irradiation and other physicochemical treatments

    International Nuclear Information System (INIS)

    Choudhury, N.; Siddiqui, A.K.; Chowdhury, N.A.; Youssouf, Q.M.; Rashid, H.; Begum, A.A.; Alam, M.K.

    1998-01-01

    Studies were carried out on the development and irradiation preservation of semi-dried fish, e.g. Labeo rohita (Ruhi) and Cirrhinus mrigala (Mrigel), the extension of shelf-life at ambient temperature, and the improvement in the microbiological quality of sealed, ready-to-eat, commercially prepared fish kebabs by a combination of gamma irradiation with spices and an acidulant such as ascorbic acid. In the processing of semi-dried fish, the combination treatment of a salt dip and irradiation at a dose of 4 kGy extended the shelf-life by more than 3 months. Kebabs prepared in the laboratory and irradiated at a dose of 5 kGy were found to have a shelf-life of up to 6 months at room temperature. With commercially prepared fish kebabs collected from ordinary and sophisticated food shops, the maximum shelf-life extension was 14 days for the 5 kGy treated samples stored at ambient temperature. The microbiological quality of such kebabs indicated that the fish used was of poor quality, resulting in a limited shelf-life, even after chemical and irradiation treatments. Inoculated pack studies of Clostridium botulinum spores showed that when oil fried, the kebab size had a definite effect on heat penetration, and consequent spore reduction. No spores were recovered from the 5 kGy irradiated fried kebabs. (author)

  17. Combined strategies for the improvement of heterologous ...

    African Journals Online (AJOL)

    use

    2011-12-14

    Dec 14, 2011 ... Full Length Research Paper. Combined ... Three different cultivation strategies had been compared for the production of .... bioreactor using combined fermentation strategies, and the ... shaking flask was chosen as a host for the transformation of SalI-linearized ... sterile filtered after bioreactor sterilization.

  18. 7 CFR 51.304 - Combination grades.

    Science.gov (United States)

    2010-01-01

    ... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF 1946... Standards for Grades of Apples Grades § 51.304 Combination grades. (a) Combinations of the above grades may...

  19. IPTV program recommendation based on combination strategies

    Directory of Open Access Journals (Sweden)

    Li Hao

    2018-01-01

    Full Text Available As a new interactive service technology, IPTV has been extensively studied in the field of TV program recommendation, but the sparsity of the user-program rating matrix and the cold-start problem are bottlenecks for accurate program recommendation. In this paper, a flexible combination of two recommendation strategies is proposed, which addresses the sparsity and cold-start problems as well as the issue of user interest changing over time. The system implements a content-based filtering section and a collaborative filtering section according to the two combination strategies, which effectively mitigates the cold-start and sparsity problems and the change of user interest over time. The experimental results showed that the combinational recommendation system with optimal parameters outperforms either of the two combination strategies used alone, or no combination strategy at all: the reduction in MAE lies in the range [2.7%, 3%], and the increases in precision and recall lie in the ranges [13.8%, 95.5%] and [0, 97.8%], respectively.
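    A weighted blend of a content-based score and a collaborative-filtering score, with a content-only fallback for cold-start items, can be sketched as follows. The scores, the weight alpha, and the item names are invented for illustration and are not the paper's actual combination strategies:

```python
# Sketch of a weighted combination of two recommendation strategies:
# content-based scores cover cold-start items with no ratings yet, while
# collaborative-filtering scores contribute once ratings exist.
def hybrid_score(content, collaborative, alpha=0.4):
    """Blend per-item scores; alpha weights the content-based component.
    Items missing from one strategy fall back entirely on the other."""
    items = set(content) | set(collaborative)
    blended = {}
    for item in items:
        if item not in collaborative:        # cold-start item: no ratings yet
            blended[item] = content[item]
        elif item not in content:
            blended[item] = collaborative[item]
        else:
            blended[item] = alpha * content[item] + (1 - alpha) * collaborative[item]
    return blended

cb = {"news": 0.8, "movie": 0.6, "new-show": 0.9}   # content-based scores
cf = {"news": 0.4, "movie": 0.9}                    # collaborative-filtering scores
scores = hybrid_score(cb, cf)
```

    Making alpha decay as an item accumulates ratings is one simple way to model the shift from cold-start toward rating-driven recommendation over time.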

  20. A linear combination of modified Bessel functions

    Science.gov (United States)

    Shitzer, A.; Chato, J. C.

    1971-01-01

    A linear combination of modified Bessel functions is defined, discussed briefly, and tabulated. This combination was found to recur in the analysis of various heat transfer problems and in the analysis of the thermal behavior of living tissue when modeled by cylindrical shells.
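    A linear combination of modified Bessel functions of the first kind can be evaluated directly from the power series. The particular combination a*I0(x) + b*I1(x) below is an invented example for illustration, not the specific combination tabulated in the record:

```python
import math

def bessel_i(n, x, terms=30):
    """Modified Bessel function of the first kind I_n(x) via its power series:
    I_n(x) = sum_k (x/2)^(2k+n) / (k! (k+n)!)."""
    total = 0.0
    for k in range(terms):
        total += (x / 2.0) ** (2 * k + n) / (math.factorial(k) * math.factorial(k + n))
    return total

def linear_combination(a, b, x):
    """An illustrative linear combination a*I0(x) + b*I1(x) of the kind that
    recurs in cylindrical-shell heat conduction problems."""
    return a * bessel_i(0, x) + b * bessel_i(1, x)

value = linear_combination(1.0, -0.5, 1.0)
```

    The truncated series converges rapidly for moderate x; for large arguments an asymptotic expansion or a library routine would be preferable.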

  1. Using sentence combining in technical writing classes

    Science.gov (United States)

    Rosner, M.; Paul, T.

    1981-01-01

    Sentence combining exercises are advanced as a way to teach technical writing style without reliance upon abstractions, from which students do not learn. Such exercises: (1) give students regular writing practice; (2) teach the logic of sentence structure, sentence editing, and punctuation; (3) teach paragraph development and organization; and (4) teach rhetorical stance. Typical sentence, paragraph, and discourse level sentence combining exercises are described.

  2. 7 CFR 29.1008 - Combination symbols.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Combination symbols. 29.1008 Section 29.1008..., 13, 14 and Foreign Type 92) § 29.1008 Combination symbols. A color or group symbol used with another symbol to form the third factor of a grademark to denote a particular side or characteristic of the...

  3. COMBINED-SEWER OVERFLOW CONTROL AND TREATMENT

    Science.gov (United States)

    Combined-sewer overflow (CSO), along with sanitary-sewer overflow and stormwater are significant contributors of contamination to surface waters. During a rain event, the flow in a combined sewer system may exceed the capacity of the intercepting sewer leading to the wastewater t...

  4. Combine material against electromagnetic pulse disturbance

    International Nuclear Information System (INIS)

    Liu Yan

    2004-01-01

    A novel combined material is introduced that is resistant to electromagnetic pulse disturbance. The attenuation characteristics and the penetration probability of the combined material are discussed in detail. The penetration probability of electromagnetic waves is calculated approximately and the characteristic curve is measured for this material. (authors)

  5. Economic and Accounting Issues Regarding Business Combination

    OpenAIRE

    Asalos Nicoleta; Georgescu Cristina Elena

    2011-01-01

    Some business units do not survive the competition and finally close the business. Hence excessive competition became a very powerful cause of business combination. Elimination of competition means creating monopoly in the market. IFRS 3 was adopted to improve the relevance, reliability and comparability of the information that a reporting entity provides in its financial statements about a business combination and its effects.

  6. On combining algorithms for deformable image registration

    NARCIS (Netherlands)

    Muenzing, S.E.A.; Ginneken, van B.; Pluim, J.P.W.; Dawant, B.M.

    2012-01-01

    We propose a meta-algorithm for registration improvement by combining deformable image registrations (MetaReg). It is inspired by a well-established method from machine learning, the combination of classifiers. MetaReg consists of two main components: (1) A strategy for composing an improved

  7. Multimedia: How to Combine Language and Visuals

    Directory of Open Access Journals (Sweden)

    Holger Horz

    2008-08-01

    Full Text Available In the last decade, advanced computer technology has allowed for development of information systems and learning environments that combine language with other forms of human communication in innovative ways. Language in the form of written texts, for example, can be combined not only with static pictures or graphs as in printed material, but also with animation or video.

  8. Investigations on combined injuries. Pt. 24

    International Nuclear Information System (INIS)

    Sedlmeier, H.; Lehner, K.; Werdan, K.; Messerschmidt, O.

    1979-01-01

    Combined injuries were inflicted upon NMRI mice, each receiving an open skin wound following sublethal exposure to X-rays. Lethality among animals with combined injuries was between 40 and 60%, while animals that were only irradiated and those with only a skin wound had lethalities between 10 and 20% and 0%, respectively. Blood circulation and respiration of animals with combined lesions were studied in an attempt to understand the cause of the high lethality in this group. The blood volume and the oxygen transport capacity were drastically reduced in animals with combined injuries as compared to those in animals only irradiated, although plasma volume, vascular permeability and distribution of the blood volume remained similar in both groups. Analyses of gases and acid/base composition of blood revealed neither respiratory nor metabolic acidosis. These findings indicate that combined injuries hardly impair blood circulation and respiratory function. (orig.) [de

  9. Data combinations accounting for LISA spacecraft motion

    International Nuclear Information System (INIS)

    Shaddock, Daniel A.; Tinto, Massimo; Estabrook, Frank B.; Armstrong, J.W.

    2003-01-01

    The laser interferometer space antenna is an array of three spacecraft in an approximately equilateral triangle configuration which will be used as a low-frequency gravitational wave detector. We present here new generalizations of the Michelson- and Sagnac-type time-delay interferometry data combinations. These combinations cancel laser phase noise in the presence of different up and down propagation delays in each arm of the array, and slowly varying systematic motion of the spacecraft. The gravitational wave sensitivities of these generalized combinations are the same as previously computed for the stationary cases, although the combinations are now more complicated. We introduce a diagrammatic representation to illustrate that these combinations are actually synthesized equal-arm interferometers

  10. Combination of the LEP II ffbar Results

    CERN Document Server

    Geweniger, C; Elsing, M; Goy, C; Holt, J; Liebig, W; Minard, M N; Renton, P B; Riemann, S; Sachs, K; Ward, P; Wynhoff, S

    2002-01-01

    Preliminary combinations of measurements by the 4 LEP collaborations of the process e+e-->ffbar at LEP-II are presented, using data from the full LEP-II data set where available. Cross-sections and forward-backward asymmetry measurements are combined for the full LEP-II data set. Combined differential cross-sections $\frac{{\rm d}\sigma}{{\rm d}\cos\theta}$ for electron-pair, muon-pair and tau-pair final states are presented. Measurements of the production of heavy flavours are combined. The combined results are interpreted in terms of contact interactions, the exchange of Z' bosons and leptoquarks, and within models of low-scale gravity in large extra dimensions.

  11. Combining Ideas in Crowdsourced Idea Generation

    Directory of Open Access Journals (Sweden)

    Wang Kai

    2017-02-01

    Full Text Available Collecting ideas through crowdsourcing has become a common practice for companies to benefit from external ideas and innovate. It is desirable that crowd members build on each other's ideas to achieve synergy. This study proposes and verifies a new method for idea combination which can result in combined ideas that are both novel and useful. The domain-specific knowledge of crowd members does not influence the effectiveness of such idea combination. The new method can be used for collecting highly creative ideas from the crowd. The implications for future research are discussed.

  12. Combined algorithms in nonlinear problems of magnetostatics

    International Nuclear Information System (INIS)

    Gregus, M.; Khoromskij, B.N.; Mazurkevich, G.E.; Zhidkov, E.P.

    1988-01-01

    To solve boundary problems of magnetostatics in unbounded two- and three-dimensional regions, we construct combined algorithms based on a combination of the method of boundary integral equations with grid methods. We study the question of substantiation of the combined method for nonlinear magnetostatics problems without the preliminary discretization of equations and give some results on the convergence of iterative processes that arise in nonlinear cases. We also discuss economical iterative processes and algorithms that solve boundary integral equations on certain surfaces. Finally, examples of numerical solutions of magnetostatics problems that arose when modelling the fields of electrophysical installations are given too. 14 refs.; 2 figs.; 1 tab

  13. Studies of a Combined-Cycle Engine

    OpenAIRE

    苅田, 丈士; KANDA, Takeshi

    2003-01-01

    For a Single-Stage-to-Orbit (SSTO) aerospace plane (Fig.1), several engines will be necessary to reach orbit. The combined-cycle engine incorporates several operational modes in a single engine. Study of the combined-cycle engine has a long history, and several kinds of such engines have been proposed and studied. When several separate engines are mounted on a vehicle, each engine of the system will show a performance higher than that of the combined-cycle engine. However, during the operation of one ...

  14. Combination solar photovoltaic heat engine energy converter

    Science.gov (United States)

    Chubb, Donald L.

    1987-01-01

    A combination solar photovoltaic heat engine converter is proposed. Such a system is suitable for either terrestrial or space power applications. The combination system has a higher efficiency than either the photovoltaic array or the heat engine alone can attain. Advantages in concentrator and radiator area and receiver mass of the photovoltaic heat engine system over a heat-engine-only system are estimated. A mass and area comparison between the proposed space station organic Rankine power system and a combination PV-heat engine system is made. The critical problem for the proposed converter is the necessity for high temperature photovoltaic array operation. Estimates of the required photovoltaic temperature are presented.
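
The efficiency claim can be illustrated with an idealized energy balance (an assumption for illustration, not the paper's detailed model): if the heat engine runs on the heat rejected by the photovoltaic array, the combined efficiency is eta_pv + (1 - eta_pv) * eta_he, which exceeds either stage alone.

```python
# Idealized bottoming-cycle energy balance (illustrative assumption, not the
# paper's model): the heat engine converts the fraction of input power the
# PV array rejects as heat, so the stages' efficiencies compound.

def combined_efficiency(eta_pv, eta_he):
    """eta = eta_pv + (1 - eta_pv) * eta_he for a PV topping cycle."""
    return eta_pv + (1.0 - eta_pv) * eta_he

eta_pv, eta_he = 0.15, 0.25   # hypothetical stage efficiencies
print(combined_efficiency(eta_pv, eta_he))  # ~0.36, higher than either stage
```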

  15. Memristive Perceptron for Combinational Logic Classification

    Directory of Open Access Journals (Sweden)

    Lidan Wang

    2013-01-01

    Full Text Available The resistance of a memristor depends upon the past history of the input current or voltage, so it can function as a synapse in neural networks. In this paper, a novel perceptron combined with the memristor is proposed to implement combinational logic classification. The relationship between the memristive conductance change and the synapse weight update is deduced, and the memristive perceptron model and its synaptic weight update rule are explored. The feasibility of the novel memristive perceptron for implementing combinational logic classification (NAND, NOR, XOR, and NXOR) is confirmed by MATLAB simulation.
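
The synaptic weight update that the memristive conductance change implements is, at its core, the classic perceptron learning rule. A software sketch (illustrative only; the paper's hardware model is not reproduced here) learning NAND, one of the linearly separable gates (XOR and NXOR require a multi-layer arrangement):

```python
# Classic perceptron learning rule: in a memristive implementation the
# conductance change of each memristor plays the role of the weight update.

def train_perceptron(samples, lr=0.1, epochs=20):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            y = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - y
            w[0] += lr * err * x1   # in hardware: memristor conductance update
            w[1] += lr * err * x2
            b += lr * err
    return w, b

nand = [((0, 0), 1), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
w, b = train_perceptron(nand)
print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
       for (x1, x2), _ in nand])  # -> [1, 1, 1, 0]
```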

  16. Combined Sparsifying Transforms for Compressive Image Fusion

    Directory of Open Access Journals (Sweden)

    ZHAO, L.

    2013-11-01

    Full Text Available In this paper, we present a new compressive image fusion method based on combined sparsifying transforms. First, the framework of compressive image fusion is introduced briefly. Then, combined sparsifying transforms are presented to enhance the sparsity of images. Finally, a reconstruction algorithm based on the nonlinear conjugate gradient is presented to obtain the fused image. The simulations demonstrate that the combined sparsifying transforms achieve better results, in terms of both subjective visual effect and objective evaluation indexes, than a single sparsifying transform for compressive image fusion.

  17. Cost Comparison of Conventional Gray Combined Sewer Overflow Control Infrastructure versus a Green/Gray Combination

    Science.gov (United States)

    This paper outlines a life-cycle cost analysis comparing a green (rain gardens) and gray (tunnels) infrastructure combination to a gray-only option to control combined sewer overflow in the Turkey Creek Combined Sewer Overflow Basin, in Kansas City, MO. The plan area of this Bas...
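
A life-cycle cost comparison of this kind reduces to discounting each alternative's capital and operating costs to present value. A minimal sketch with hypothetical figures (not the study's actual costs):

```python
# Minimal life-cycle-cost comparison (all figures hypothetical, for
# illustration only): present value of capital plus discounted annual O&M.

def life_cycle_cost(capital, annual_om, years, rate):
    """Present value of capital plus a stream of annual O&M costs."""
    pv_om = sum(annual_om / (1.0 + rate) ** t for t in range(1, years + 1))
    return capital + pv_om

gray_only  = life_cycle_cost(capital=100e6, annual_om=0.5e6, years=30, rate=0.03)
green_gray = life_cycle_cost(capital=80e6,  annual_om=1.2e6, years=30, rate=0.03)
print(f"gray-only:  ${gray_only / 1e6:.1f}M")
print(f"green/gray: ${green_gray / 1e6:.1f}M")
```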

  18. Combining nonthermal technologies to control foodborne microorganisms.

    Science.gov (United States)

    Ross, Alexander I V; Griffiths, Mansel W; Mittal, Gauri S; Deeth, Hilton C

    2003-12-31

    Novel nonthermal processes, such as high hydrostatic pressure (HHP), pulsed electric fields (PEFs), ionizing radiation and ultrasonication, are able to inactivate microorganisms at ambient or sublethal temperatures. Many of these processes require very high treatment intensities, however, to achieve adequate microbial destruction in low-acid foods. Combining nonthermal processes with conventional preservation methods enhances their antimicrobial effect so that lower process intensities can be used. Combining two or more nonthermal processes can also enhance microbial inactivation and allow the use of lower individual treatment intensities. For conventional preservation treatments, optimal microbial control is achieved through the hurdle concept, with synergistic effects resulting from different components of the microbial cell being targeted simultaneously. The mechanisms of inactivation by nonthermal processes are still unclear; thus, the bases of synergistic combinations remain speculative. This paper reviews literature on the antimicrobial efficiencies of nonthermal processes combined with conventional and novel nonthermal technologies. Where possible, the proposed mechanisms of synergy are mentioned.

  19. Combined assessment (aspiration cytology and mammography) of ...

    African Journals Online (AJOL)

    Combined assessment (aspiration cytology and mammography) of clinically suspicious breast masses. W.F. van Wyk, D Dent, E Anne Hacking, Genevieve Learmonth, R.E. Kottler, C Anne Gudgeon, A Tiltman ...

  20. Advances in combination therapy of lung cancer

    DEFF Research Database (Denmark)

    Wu, Lan; Leng, Donglei; Cun, Dongmei

    2017-01-01

    Lung cancer is a complex disease caused by a multitude of genetic and environmental factors. The progression of lung cancer involves dynamic changes in the genome and a complex network of interactions between cancer cells with multiple, distinct cell types that form tumors. Combination therapy......, including small molecule drugs and biopharmaceuticals, which make the optimization of dosing and administration schedule challenging. This article reviews the recent advances in the design and development of combinations of pharmaceuticals for the treatment of lung cancer. Focus is primarily on rationales...... for the selection of specific combination therapies for lung cancer treatment, and state of the art of delivery technologies and dosage regimens for the combinations, tested in preclinical and clinical trials....