WorldWideScience

Sample records for exploitation model performance

  1. Exploiting partial knowledge for efficient model analysis

    OpenAIRE

    Macedo, Nuno; Cunha, Alcino; Pessoa, Eduardo José Dias

    2017-01-01

    The advancement of constraint solvers and model checkers has enabled the effective analysis of high-level formal specification languages. However, these typically handle a specification in an opaque manner, amalgamating all its constraints in a single monolithic verification task, which often proves to be a performance bottleneck. This paper addresses this issue by proposing a solving strategy that exploits user-provided partial knowledge, namely by assigning symbolic bounds to the problem’s ...
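    To make the idea concrete, here is a toy sketch (not the paper's method — real tools such as the Alloy Analyzer pass symbolic bounds to a constraint solver rather than enumerating assignments): fixing atoms the user already knows halves the brute-force search space per known atom.

```python
from itertools import product

# Brute-force "model finder" for a propositional constraint over n atoms.
# Fixing atoms the user already knows (partial knowledge) shrinks the
# search space that must be examined.
def count_models(n_atoms, constraint, known=()):
    free = n_atoms - len(known)
    models = 0
    for bits in product([False, True], repeat=free):
        assignment = list(known) + list(bits)
        models += constraint(assignment)
    return models, 2 ** free          # (models found, assignments examined)

constraint = lambda a: (a[0] or a[1]) and not (a[2] and a[3])
full = count_models(4, constraint)                    # no prior knowledge
partial = count_models(4, constraint, known=(True,))  # atom 0 known to be true
print(full, partial)   # → (9, 16) (6, 8): half the search space examined
```

    Real solvers exploit such bounds symbolically, but the effect is the same: every piece of partial knowledge removes a dimension from the monolithic verification task.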

  2. Geometric saliency to characterize radar exploitation performance

    Science.gov (United States)

    Nolan, Adam; Keserich, Brad; Lingg, Andrew; Goley, Steve

    2014-06-01

    Based on the fundamental scattering mechanisms of facetized computer-aided design (CAD) models, we are able to define expected contributions (EC) to the radar signature. The net result of this analysis is the prediction of the salient aspects and contributing vehicle morphology based on the aspect. Although this approach does not provide the fidelity of an asymptotic electromagnetic (EM) simulation, it does provide very fast estimates of the unique scattering that can be consumed by a signature exploitation algorithm. The speed of this approach is particularly relevant when considering the high dimensionality of target configuration variability due to articulating parts which are computationally burdensome to predict. The key scattering phenomena considered in this work are the specular response from a single bounce interaction with surfaces and dihedral response formed between the ground plane and vehicle. Results of this analysis are demonstrated for a set of civilian target models.

  3. Exploiting Thread Parallelism for Ocean Modeling on Cray XC Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sarje, Abhinav [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Jacobsen, Douglas W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Williams, Samuel W. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ringler, Todd [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Oliker, Leonid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-05-01

    The increasing core counts of modern processors used to build state-of-the-art supercomputers are driving application development towards the exploitation of thread parallelism, in addition to distributed-memory parallelism, with the goal of delivering efficient high-performance codes. In this work we describe our experiences exploiting threading in a real-world ocean modeling application code, MPAS-Ocean. We present a detailed performance analysis and comparisons of various approaches and configurations for threading on the Cray XC series supercomputers.
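    The threading idea can be sketched in miniature (the state array and update rule are invented stand-ins; MPAS-Ocean itself uses OpenMP threads over blocks of unstructured-mesh cells inside each MPI rank):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Split a flattened ocean-cell loop into contiguous blocks and update each
# block on its own thread. NumPy releases the GIL inside vectorized kernels,
# so the block updates genuinely overlap.
n_cells, n_threads = 1_000_000, 4
h = np.ones(n_cells)                 # toy prognostic field (layer thickness)
tend = np.full(n_cells, 0.5)         # precomputed tendency

def update_block(tid):
    lo = tid * n_cells // n_threads
    hi = (tid + 1) * n_cells // n_threads
    h[lo:hi] += 0.1 * tend[lo:hi]    # forward-Euler step on this thread's block

with ThreadPoolExecutor(max_workers=n_threads) as pool:
    list(pool.map(update_block, range(n_threads)))

print(h.min(), h.max())   # every cell updated exactly once
```

    The design choice the paper studies is exactly this decomposition question: how work is partitioned among threads (and how blocks map to the mesh) determines cache behavior and load balance.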

  4. Exploiting intrinsic fluctuations to identify model parameters.

    Science.gov (United States)

    Zimmer, Christoph; Sahle, Sven; Pahle, Jürgen

    2015-04-01

    Parameterisation of kinetic models plays a central role in computational systems biology. Besides the lack of experimental data of sufficiently high quality, some of the biggest challenges here are identifiability issues. Model parameters can be structurally non-identifiable because of functional relationships. Noise in measured data is usually considered a nuisance for parameter estimation. However, it turns out that intrinsic fluctuations in particle numbers can make parameters identifiable that were previously non-identifiable. The authors present a method to identify model parameters that are structurally non-identifiable in a deterministic framework. The method takes time course recordings of biochemical systems in steady state or transient state as input. Often a functional relationship between parameters manifests itself as a one-dimensional manifold in parameter space containing parameter sets of optimal goodness. Although the system's behaviour cannot be distinguished on this manifold in a deterministic framework, it might be distinguishable in a stochastic modelling framework. Their method exploits this by using an objective function that includes a measure for fluctuations in particle numbers. They show on three example models, immigration-death, gene expression and Epo-EpoReceptor interaction, that this resolves the non-identifiability even in the case of measurement noise with known amplitude. The method is applied to partially observed recordings of biochemical systems with measurement noise. It is simple to implement and usually very fast to compute. The optimisation can be realised in a classical or Bayesian fashion.
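    A minimal sketch of the immigration-death example (all rate values invented): two parameter sets sharing the ratio k_in/k_out have identical deterministic steady states, and even identical stationary Poisson distributions, yet the decay rate of the fluctuation autocorrelation separates them:

```python
import numpy as np

def gillespie_immigration_death(k_in, k_out, x0, t_end, rng):
    """Exact simulation of 0 -> X (rate k_in) and X -> 0 (rate k_out * x)."""
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_end:
        a_birth, a_death = k_in, k_out * x
        a_total = a_birth + a_death
        t += rng.exponential(1.0 / a_total)
        x += 1 if rng.random() < a_birth / a_total else -1
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)

def resample(times, states, dt=0.01, t_burn=50.0):
    """Sample the piecewise-constant trajectory on a regular time grid."""
    grid = np.arange(t_burn, times[-1], dt)
    return states[np.searchsorted(times, grid, side="right") - 1].astype(float)

def autocorr(x, lag_steps):
    return np.corrcoef(x[:-lag_steps], x[lag_steps:])[0, 1]

rng = np.random.default_rng(0)
# Both parameter sets share k_in / k_out = 10: same deterministic steady
# state, same stationary Poisson(10) law -- non-identifiable from
# steady-state means alone.
slow = resample(*gillespie_immigration_death(10.0, 1.0, 10, 1000.0, rng))
fast = resample(*gillespie_immigration_death(100.0, 10.0, 10, 1000.0, rng))

print(round(slow.mean()), round(fast.mean()))    # both fluctuate around 10
# The fluctuation *dynamics* differ: autocorrelation decays as exp(-k_out * lag).
print(autocorr(slow, 50) > autocorr(fast, 50))   # lag 0.5: e^-0.5 vs e^-5
```

    An objective function that scores these fluctuation statistics, as the paper proposes, can therefore pick a point on the otherwise flat deterministic manifold.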

  5. Main principles of developing exploitation models of semiconductor devices

    Science.gov (United States)

    Gradoboev, A. V.; Simonova, A. V.

    2018-05-01

    The paper presents the primary tasks whose solutions allow one to develop exploitation models of semiconductor devices that take into account the complex and combined influence of ionizing irradiation and operating factors. The structure of the exploitation model of a semiconductor device is presented, based on radiation and reliability models. It is further shown that the exploitation model should account for the complex and combined influence of various types of ionizing irradiation and operating factors. An algorithm for developing exploitation models of semiconductor devices is proposed. The possibility of creating radiation models of a Schottky barrier diode, a Schottky field-effect transistor and a Gunn diode is demonstrated on the basis of available experimental data. A basic exploitation model of IR LEDs based on double AlGaAs heterostructures is presented. Practical application of the exploitation models will make it possible to produce electronic devices with guaranteed operational properties.

  6. Enhanced surrogate models for statistical design exploiting space mapping technology

    DEFF Research Database (Denmark)

    Koziel, Slawek; Bandler, John W.; Mohamed, Achmed S.

    2005-01-01

    We present advances in microwave and RF device modeling exploiting Space Mapping (SM) technology. We propose new SM modeling formulations utilizing input mappings, output mappings, frequency scaling and quadratic approximations. Our aim is to enhance circuit models for statistical analysis...
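    The flavor of combining an input mapping with an output mapping can be sketched as follows (the "fine" and "coarse" models are invented trigonometric stand-ins for an EM simulation and a circuit-equivalent model):

```python
import numpy as np

# Hypothetical models: "fine" stands in for an expensive EM simulation,
# "coarse" for a cheap circuit-level model that is misaligned with it.
fine   = lambda x: np.sin(x + 0.3) + 0.1
coarse = lambda x: np.sin(x)

x_train = np.linspace(0.0, 2.0, 9)    # a handful of fine-model evaluations
f_train = fine(x_train)

def fit_surrogate(p_grid):
    """Input mapping x -> x + p plus output offset d, fit by least squares."""
    best = None
    for p in p_grid:
        c = coarse(x_train + p)
        d = np.mean(f_train - c)              # optimal offset for this shift
        err = np.sum((c + d - f_train) ** 2)
        if best is None or err < best[2]:
            best = (p, d, err)
    return best

p, d, _ = fit_surrogate(np.linspace(-1.0, 1.0, 2001))
surrogate = lambda x: coarse(x + p) + d       # enhanced coarse model
print(round(p, 2), round(d, 2))   # → 0.3 0.1: the misalignment is recovered
```

    The enhanced surrogate is then cheap enough for the many evaluations that statistical (e.g. Monte Carlo yield) analysis requires, which is the point of the SM formulations above.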

  7. [Ecotourism exploitation model in Bita Lake Natural Reserve of Yunnan].

    Science.gov (United States)

    Yang, G; Wang, Y; Zhong, L

    2000-12-01

    Bita Lake Provincial Nature Reserve is located in the Shangri-La region of northwestern Yunnan and was designated a demonstration area for ecotourism exploitation in 1998. After a year of development and half a year of operation receiving tourists as a branch of the '99 Kunming International Horticulture Exposition, it was shown that the ecotourism demonstration area fulfilled four integrated functions of ecotourism: tourism, protection, poverty alleviation and environmental education. Five exploitation and management models, including a function-zoned exploitation model, a featured tourism communication model, a signs system design model, a local Tibetan family reception model and an environmental monitoring model, were also successful, and were demonstrated and spread to the whole province. Bita Lake Provincial Nature Reserve can serve as a good example for ecotourism exploitation in nature reserves throughout the country.

  8. A fuzzy model for exploiting customer requirements

    Directory of Open Access Journals (Sweden)

    Zahra Javadirad

    2016-09-01

    Nowadays, quality function deployment (QFD) is one of the total quality management tools, in which customers' views and requirements are captured and, using various techniques, translated into improved production requirements and operations. After identifying and analysing the competitors, the QFD department gathers customer feedback to meet customers' demands for the products relative to the competitors. In this study, a comprehensive model for assessing the importance of customer requirements in an organization's products or services is proposed. The proposed study uses linguistic variables, as a more comprehensive approach, to increase the precision of the evaluations. The importance of these requirements specifies the strengths and weaknesses of the organization in meeting the requirements relative to competitors. The results show that the proposed method performs better than the other methods.
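    A minimal sketch of the linguistic-variable idea (the rating scale, requirements and evaluator ratings below are all invented): linguistic ratings are mapped to triangular fuzzy numbers, aggregated across evaluators, and defuzzified to rank customer requirements:

```python
# Illustrative linguistic scale mapped to triangular fuzzy numbers (l, m, u).
SCALE = {"low": (0.0, 0.0, 0.25), "medium": (0.25, 0.5, 0.75),
         "high": (0.5, 0.75, 1.0), "very high": (0.75, 1.0, 1.0)}

def aggregate(ratings):
    """Component-wise average of the evaluators' triangular fuzzy numbers."""
    ls, ms, us = zip(*(SCALE[r] for r in ratings))
    n = len(ratings)
    return (sum(ls) / n, sum(ms) / n, sum(us) / n)

def defuzzify(tfn):
    """Centroid of a triangular fuzzy number."""
    return sum(tfn) / 3.0

# Invented requirements with ratings from three evaluators.
requirements = {"easy setup": ["high", "very high", "high"],
                "low price":  ["medium", "high", "medium"]}
scores = {k: round(defuzzify(aggregate(v)), 3) for k, v in requirements.items()}
ranked = sorted(scores, key=scores.get, reverse=True)
print(scores)          # → {'easy setup': 0.806, 'low price': 0.583}
print(ranked[0])       # → easy setup
```

    Fuzzy QFD methods differ in scale, aggregation and defuzzification choices; the point of the linguistic variables is that evaluators never have to commit to a single crisp number.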

  9. Optimization Models for Petroleum Field Exploitation

    Energy Technology Data Exchange (ETDEWEB)

    Jonsbraaten, Tore Wiig

    1998-12-31

    This thesis presents and discusses various models for the optimal development of a petroleum field. The objective of these optimization models is to maximize, under many uncertain parameters, the project's expected net present value. First, an overview of petroleum field optimization is given from the point of view of operations research. Reservoir equations for a simple reservoir system are derived, discretized and included in optimization models. Linear programming models for optimizing production decisions are discussed and extended to mixed integer programming models in which decisions concerning platform, wells and production strategy are optimized. Then, optimal development decisions under uncertain oil prices are discussed. The uncertain oil price is estimated by a finite set of price scenarios with associated probabilities. The problem is one of stochastic mixed integer programming, and the solution approach is a scenario and policy aggregation technique developed by Rockafellar and Wets, although this technique was developed for continuous variables. Stochastic optimization problems with a focus on decision-dependent information discoveries are also discussed. A class of "manageable" problems is identified, and an implicit enumeration algorithm for finding an optimal decision policy is proposed. Problems involving uncertain reservoir properties, but with a known initial probability distribution over possible reservoir realizations, are discussed. Finally, a section on Nash equilibrium and bargaining in an oil reservoir management game discusses the pool problem arising when two lease owners have access to the same underlying oil reservoir. Because the oil tends to migrate, both lease owners have an incentive to drain oil from the competitor's part of the reservoir. The discussion is based on a numerical example. 107 refs., 31 figs., 14 tabs.
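    The core of such a stochastic program can be illustrated with a toy two-stage example (all prices, probabilities and costs are invented): a capacity decision is made before the oil price is revealed, and each scenario contributes its probability-weighted recourse value:

```python
# First stage: choose platform capacity before the oil price is known.
# Second stage (recourse): produce only if the price covers the lifting cost.
scenarios = [(0.3, 12.0), (0.4, 18.0), (0.3, 25.0)]        # (probability, $/bbl)
platforms = {"small": (40, 300.0), "large": (70, 600.0)}   # (Mbbl/yr, capex M$)
LIFTING_COST = 10.0                                        # $/bbl

def expected_npv(capacity, capex):
    revenue = sum(p * max(price - LIFTING_COST, 0.0) * capacity
                  for p, price in scenarios)
    return revenue - capex

for name, (capacity, capex) in platforms.items():
    print(name, round(expected_npv(capacity, capex), 1))

best = max(platforms, key=lambda k: expected_npv(*platforms[k]))
print("choose:", best)   # → choose: small
```

    Real field-development models add integer platform/well decisions and time discretization, which is why scenario aggregation techniques are needed instead of simple enumeration.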

  10. Exploring, exploiting and evolving diversity of aquatic ecosystem models

    DEFF Research Database (Denmark)

    Janssen, Annette B G; Arhonditsis, George B.; Beusen, Arthur

    2015-01-01

    Here, we present a community perspective on how to explore, exploit and evolve the diversity in aquatic ecosystem models. These models play an important role in understanding the functioning of aquatic ecosystems, filling in observation gaps and developing effective strategies for water quality...... management. In this spirit, numerous models have been developed since the 1970s. We set off to explore model diversity by making an inventory among 42 aquatic ecosystem modellers, by categorizing the resulting set of models and by analysing them for diversity. We then focus on how to exploit model diversity...... available through open-source policies, to standardize documentation and technical implementation of models, and to compare models through ensemble modelling and interdisciplinary approaches. We end with our perspective on how the field of aquatic ecosystem modelling might develop in the next 5–10 years...

  11. Exploiting communication concurrency on high performance computing systems

    Energy Technology Data Exchange (ETDEWEB)

    Chaimov, Nicholas [Univ. of Oregon, Eugene, OR (United States); Ibrahim, Khaled Z. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Williams, Samuel [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Iancu, Costin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-01-01

    Although logically available, applications may not exploit enough instantaneous communication concurrency to maximize hardware utilization on HPC systems. This is exacerbated in hybrid programming models such as SPMD+OpenMP. We present the design of a "multi-threaded" runtime able to transparently increase the instantaneous network concurrency and to provide near-saturation bandwidth, independent of the application configuration and dynamic behavior. The runtime forwards communication requests from application-level tasks to multiple communication servers. Our techniques alleviate the need for spatial and temporal application-level message concurrency optimizations. Experimental results show improved message throughput and bandwidth by as much as 150% for 4 KB messages on InfiniBand and by as much as 120% for 4 KB messages on Cray Aries. For more complex operations such as all-to-all collectives, we observe as much as 30% speedup. This translates into a 23% speedup on 12,288 cores for a NAS FT implementation using FFTW. We also observe as much as 76% speedup on 1,500 cores for an already optimized UPC+OpenMP geometric multigrid application using hybrid parallelism.
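    The forwarding idea can be sketched with a shared queue drained by several "communication server" threads (a stand-in for the runtime described above; the real implementation drives network hardware, not a Python list):

```python
import queue
import threading

# Application-level tasks enqueue communication requests; several
# "communication server" threads drain the queue concurrently, standing in
# for the runtime's forwarding of sends to multiple network endpoints.
NUM_SERVERS = 4
requests = queue.Queue()
delivered = []
delivered_lock = threading.Lock()

def comm_server():
    while True:
        msg = requests.get()
        if msg is None:              # shutdown sentinel
            break
        with delivered_lock:
            delivered.append(msg)    # stand-in for the actual network send
        requests.task_done()

servers = [threading.Thread(target=comm_server) for _ in range(NUM_SERVERS)]
for s in servers:
    s.start()

for i in range(100):                 # the application issues 100 sends
    requests.put(("rank-0", i))
requests.join()                      # block until every send is "on the wire"

for _ in servers:
    requests.put(None)
for s in servers:
    s.join()

print(len(delivered))   # → 100
```

    The application task never blocks on the network itself, which is how the runtime sustains instantaneous concurrency regardless of how the application paces its sends.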

  12. Performance testing of LiDAR exploitation software

    Science.gov (United States)

    Varela-González, M.; González-Jorge, H.; Riveiro, B.; Arias, P.

    2013-04-01

    Mobile LiDAR systems have been used widely in recent years for many applications in the field of geoscience. One of the most important limitations of this technology is the large computational requirement involved in data processing. Several software solutions for data processing are available on the market, but users are often unaware of methodologies to verify their performance accurately. In this work a methodology for LiDAR software performance testing is presented and six different suites are studied: QT Modeler, AutoCAD Civil 3D, Mars 7, Fledermaus, Carlson and TopoDOT (all of them in x64). Results show that QT Modeler, TopoDOT and AutoCAD Civil 3D allow the loading of large datasets, while Fledermaus, Mars 7 and Carlson do not achieve this level of performance. AutoCAD Civil 3D needs a long loading time in comparison with the fastest suites, QT Modeler and TopoDOT. The Carlson suite shows the poorest results among all the software under study: point clouds larger than 5 million points cannot be loaded, and loading time is very long in comparison with the other suites, even for the smaller datasets. AutoCAD Civil 3D, Carlson and TopoDOT use more threads than the other suites, QT Modeler, Mars 7 and Fledermaus.

  13. Exploiting similarity in turbulent shear flows for turbulence modeling

    Science.gov (United States)

    Robinson, David F.; Harris, Julius E.; Hassan, H. A.

    1992-01-01

    It is well known that current k-epsilon models cannot predict the flow over a flat plate and its wake. In an effort to address this issue and other issues associated with turbulence closure, a new approach for turbulence modeling is proposed which exploits similarities in the flow field. Thus, if we consider the flow over a flat plate and its wake, then in addition to taking advantage of the log-law region, we can exploit the fact that the flow becomes self-similar in the far wake. This latter behavior makes it possible to cast the governing equations as a set of total differential equations. Solutions of this set, and comparison with measured shear stress and velocity profiles, yield the desired set of model constants. Such a set is, in general, different from other sets of model constants. The rationale for such an approach is that if we can correctly model the flow over a flat plate and its far wake, then we have a better chance of predicting the behavior in between. It is to be noted that the approach does not appeal, in any way, to the decay of homogeneous turbulence. This is because the asymptotic behavior of the flow under consideration is not representative of the decay of homogeneous turbulence.

  14. Exploiting similarity in turbulent shear flows for turbulence modeling

    Science.gov (United States)

    Robinson, David F.; Harris, Julius E.; Hassan, H. A.

    1992-12-01

    It is well known that current k-epsilon models cannot predict the flow over a flat plate and its wake. In an effort to address this issue and other issues associated with turbulence closure, a new approach for turbulence modeling is proposed which exploits similarities in the flow field. Thus, if we consider the flow over a flat plate and its wake, then in addition to taking advantage of the log-law region, we can exploit the fact that the flow becomes self-similar in the far wake. This latter behavior makes it possible to cast the governing equations as a set of total differential equations. Solutions of this set, and comparison with measured shear stress and velocity profiles, yield the desired set of model constants. Such a set is, in general, different from other sets of model constants. The rationale for such an approach is that if we can correctly model the flow over a flat plate and its far wake, then we have a better chance of predicting the behavior in between. It is to be noted that the approach does not appeal, in any way, to the decay of homogeneous turbulence. This is because the asymptotic behavior of the flow under consideration is not representative of the decay of homogeneous turbulence.

  15. EXPLOITATION AND OPTIMIZATION OF RESERVOIR PERFORMANCE IN HUNTON FORMATION, OKLAHOMA

    Energy Technology Data Exchange (ETDEWEB)

    Mohan Kelkar

    2002-03-31

    The West Carney Field in Lincoln County, Oklahoma is one of the few newly discovered oil fields in Oklahoma. Although profitable, the field exhibits several unusual characteristics, including decreasing water-oil ratios, decreasing gas-oil ratios, decreasing bottomhole pressures during shut-ins in some wells, and transient behavior of water production in many wells. This report explains the unusual characteristics of West Carney Field based on detailed geological and engineering analyses. We propose a geological history that explains the presence of mobile water and oil in the reservoir. The combination of matrix and fractures in the reservoir explains its flow behavior. We confirm our hypothesis by matching observed performance with a simulated model, and we develop procedures for correlating core data to log data so that the analysis can be extended to other, similar fields where core coverage may be limited.

  16. EXPLOITATION AND OPTIMIZATION OF RESERVOIR PERFORMANCE IN HUNTON FORMATION, OKLAHOMA

    Energy Technology Data Exchange (ETDEWEB)

    Mohan Kelkar

    2005-02-01

    The Hunton formation in Oklahoma has displayed some unique production characteristics, including high initial water-oil and gas-oil ratios, a decline in those ratios over time, and a temporary increase in gas-oil ratio during pressure build-up. The formation also displays highly complex geology, but surprising hydrodynamic continuity. This report addresses three key issues related specifically to the West Carney Hunton field and, in general, to any other Hunton formation exhibiting similar behavior: (1) What is the primary mechanism by which oil and gas are produced from the field? (2) How can the knowledge gained from studying the existing fields be extended to other fields that have the potential to produce? (3) What can be done to improve the performance of this reservoir? We have developed a comprehensive model to explain the behavior of the reservoir. Using available production, geological, core and log data, we develop a reservoir model that explains the production behavior of the reservoir. Using easily available information, such as log data, we establish the parameters needed for a field to be economically successful. We provide guidelines on what to look for in a new field and how to develop it. Finally, through laboratory experiments, we show that surfactants can be used to improve hydrocarbon recovery from the field. In addition, injection of CO2 or natural gas will also help recover additional oil from the field.

  17. Exploration and Exploitation Fit and Performance in International Strategic Alliances

    DEFF Research Database (Denmark)

    Nielsen, Bo Bernhard; Gudergan, Siegfried

    2012-01-01

    antithetical) strategies with different antecedents and performance consequences. Our results show that while competency similarity is conducive to upstream innovative performance, prior experience with the partner is potentially damaging for this type of performance and trust and cultural distance do not play...... significant roles. When the motive is efficiency and downstream market performance, prior experience with the partner instead is beneficial, as are high levels of trust and low levels of cultural distance. These findings have key implications for literature on strategic fit and alliance performance....

  18. Production performance and exploitation of heterosis in Cameroon ...

    African Journals Online (AJOL)

    A crossbreeding experiment using Cameroon indigenous (CF) and German Dahlem Red (GR) chickens was undertaken to determine production performance and heterosis estimates of body weight of cockerels and laying hens at various ages and and egg production traits in layers. Four genetic groups were involved, ...

  19. Exploiting graph kernels for high performance biomedical relation extraction.

    Science.gov (United States)

    Panyam, Nagesh C; Verspoor, Karin; Cohn, Trevor; Ramamohanarao, Kotagiri

    2018-01-30

    Relation extraction from biomedical publications is an important task in the area of semantic mining of text. Kernel methods for supervised relation extraction are often preferred over manual feature engineering methods, when classifying highly ordered structures such as trees and graphs obtained from syntactic parsing of a sentence. Tree kernels such as the Subset Tree Kernel and Partial Tree Kernel have been shown to be effective for classifying constituency parse trees and basic dependency parse graphs of a sentence. Graph kernels such as the All Path Graph kernel (APG) and Approximate Subgraph Matching (ASM) kernel have been shown to be suitable for classifying general graphs with cycles, such as the enhanced dependency parse graph of a sentence. In this work, we present a high performance Chemical-Induced Disease (CID) relation extraction system. We present a comparative study of kernel methods for the CID task and also extend our study to the Protein-Protein Interaction (PPI) extraction task, an important biomedical relation extraction task. We discuss novel modifications to the ASM kernel to boost its performance and a method to apply graph kernels for extracting relations expressed in multiple sentences. Our system for CID relation extraction attains an F-score of 60%, without using external knowledge sources or task specific heuristic or rules. In comparison, the state of the art Chemical-Disease Relation Extraction system achieves an F-score of 56% using an ensemble of multiple machine learning methods, which is then boosted to 61% with a rule based system employing task specific post processing rules. For the CID task, graph kernels outperform tree kernels substantially, and the best performance is obtained with APG kernel that attains an F-score of 60%, followed by the ASM kernel at 57%. The performance difference between the ASM and APG kernels for CID sentence level relation extraction is not significant. 
In our evaluation of ASM for the PPI task, ASM

  20. ATR performance modeling concepts

    Science.gov (United States)

    Ross, Timothy D.; Baker, Hyatt B.; Nolan, Adam R.; McGinnis, Ryan E.; Paulson, Christopher R.

    2016-05-01

    Performance models are needed for automatic target recognition (ATR) development and use. ATRs consume sensor data and produce decisions about the scene observed. ATR performance models (APMs), on the other hand, consume operating conditions (OCs) and produce probabilities about what the ATR will produce. APMs are needed for many modeling roles of many kinds of ATRs (each with different combinations of sensing modality and exploitation functionality); moreover, there are different approaches to constructing the APMs. Therefore, although many APMs have been developed, there is rarely one that fits a particular need. Clarified APM concepts may allow us to recognize new uses of existing APMs and to identify new APM technologies and components that better support coverage of the needed APMs. The concepts begin with thinking of ATRs as mapping OCs of the real scene (including the sensor data) to reports. An APM is then a mapping from explicit quantized OCs (represented with less resolution than the real OCs) and latent OC distributions to report distributions. The roles of APMs can be distinguished by the explicit OCs they consume. APMs used in simulations consume the true state that the ATR is attempting to report. APMs used online with the exploitation consume the sensor signal and derivatives, such as match scores. APMs used in sensor management consume neither of those, but estimate performance from other OCs. This paper summarizes the major building blocks for APMs, including knowledge sources, OC models, look-up tables, analytical and learned mappings, and tools for signal synthesis and exploitation.
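    A look-up-table APM, the simplest of the building blocks listed above, can be sketched as follows (the OC axes, quantization thresholds and probabilities are invented for illustration):

```python
# Look-up-table APM: explicit quantized OCs -> probability of a correct report.
# The OC axes, thresholds and probabilities below are invented for illustration.
APM_TABLE = {
    ("clear",  "near"): 0.95,
    ("clear",  "far"):  0.80,
    ("cloudy", "near"): 0.85,
    ("cloudy", "far"):  0.60,
}

def quantize(visibility_km, range_km):
    """Map real-valued OCs onto the table's coarse cells."""
    weather = "clear" if visibility_km > 8.0 else "cloudy"
    distance = "near" if range_km < 5.0 else "far"
    return (weather, distance)

def predict_p_correct(visibility_km, range_km, p_latent=0.9):
    # Latent (unmodeled) OCs folded in as a single multiplicative factor.
    return APM_TABLE[quantize(visibility_km, range_km)] * p_latent

print(round(predict_p_correct(10.0, 3.0), 3))   # → 0.855
```

    The distinctions in the abstract then correspond to what feeds `quantize`: the true scene state (simulation role), sensor-derived match scores (online role), or neither (sensor-management role).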

  1. PANTHER. Pattern ANalytics To support High-performance Exploitation and Reasoning.

    Energy Technology Data Exchange (ETDEWEB)

    Czuchlewski, Kristina Rodriguez [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hart, William E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Sandia has approached the analysis of big datasets with an integrated methodology that uses computer science, image processing, and human factors to exploit critical patterns and relationships in large datasets despite the variety and rapidity of information. The work is part of a three-year LDRD Grand Challenge called PANTHER (Pattern ANalytics To support High-performance Exploitation and Reasoning). To maximize data analysis capability, Sandia pursued scientific advances across three key technical domains: (1) geospatial-temporal feature extraction via image segmentation and classification; (2) geospatial-temporal analysis capabilities tailored to identify and process new signatures more efficiently; and (3) domain-relevant models of human perception and cognition informing the design of analytic systems. Our integrated results include advances in geographical information systems (GIS) in which we discover activity patterns in noisy, spatial-temporal datasets using geospatial-temporal semantic graphs. We employed computational geometry and machine learning to allow us to extract and predict spatial-temporal patterns and outliers from large aircraft and maritime trajectory datasets. We automatically extracted static and ephemeral features from real, noisy synthetic aperture radar imagery for ingestion into a geospatial-temporal semantic graph. We worked with analysts and investigated analytic workflows to (1) determine how experiential knowledge evolves and is deployed in high-demand, high-throughput visual search workflows, and (2) better understand visual search performance and attention. Through PANTHER, Sandia's fundamental rethinking of key aspects of geospatial data analysis permits the extraction of much richer information from large amounts of data. The project results enable analysts to examine mountains of historical and current data that would otherwise go untouched, while also gaining meaningful, measurable, and defensible insights into

  2. Exploiting the Capture Effect to Enhance RACH Performance in Cellular-Based M2M Communications

    Directory of Open Access Journals (Sweden)

    Jonghun Kim

    2017-09-01

    Cellular-based machine-to-machine (M2M) communication is expected to facilitate services for the Internet of Things (IoT). However, because cellular networks are designed for human users, they have some limitations. Random access channel (RACH) congestion caused by massive access from M2M devices is one of the biggest factors hindering cellular-based M2M services, because RACH congestion causes random access (RA) throughput degradation and connection failures for the devices. In this paper, we show the possibility of exploiting the capture effect, which is known to have a positive impact on wireless network systems, in the RA procedure to improve the RA performance of M2M devices. For this purpose, we analyze an RA procedure using a capture model. Through this analysis, we examine the effects of capture on RA performance and propose an Msg3 power-ramping (Msg3 PR) scheme to increase the capture probability (thereby increasing the RA success probability) even when severe RACH congestion occurs. The proposed analysis models are validated using simulations. The results show that the proposed scheme, with proper parameters, further improves the RA throughput and reduces the connection failure probability, at the cost of a slight increase in energy consumption. Finally, we demonstrate the effects of coexistence with other RA-related schemes through simulation results.
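    A toy contention model conveys the intuition (the fading model, capture threshold and ramping value below are invented; the paper analyzes the full RA procedure): a collided preamble can still be decoded if its received power sufficiently exceeds the combined interference, and power ramping raises that probability:

```python
import random

# Toy RA contention with capture: a collided preamble is still decoded when its
# received power exceeds the summed interference by a capture threshold.
def ra_success_rate(n_devices, n_preambles, ramp_db,
                    trials=20000, capture_db=6.0, seed=1):
    rng = random.Random(seed)
    capture_ratio = 10 ** (capture_db / 10)
    successes = 0
    for _ in range(trials):
        choices = [rng.randrange(n_preambles) for _ in range(n_devices)]
        # log-normal shadowing (sigma = 3 dB); device 0 applies power ramping
        powers = [10 ** (rng.gauss(0.0, 3.0) / 10) for _ in range(n_devices)]
        powers[0] *= 10 ** (ramp_db / 10)
        interference = sum(p for c, p in zip(choices[1:], powers[1:])
                           if c == choices[0])
        if interference == 0 or powers[0] / interference >= capture_ratio:
            successes += 1
    return successes / trials

base = ra_success_rate(20, 10, ramp_db=0.0)
ramped = ra_success_rate(20, 10, ramp_db=6.0)
print(ramped > base)   # ramping raises the capture (RA success) probability
```

    The trade-off the paper quantifies is visible here as well: the ramped device spends more transmit energy in exchange for its higher success probability.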

  3. Television production, Funding Models and Exploitation of Content

    OpenAIRE

    Doyle, Gillian

    2016-01-01

    The rise of digital platforms has transformative implications for strategies of financing media production and for exploitation of the economic value in creative content. In the television industry, changes in technologies for distribution and the emergence of SVOD services such as Netflix are gradually shifting audiences and financial power away from broadcasters while at the same time creating unprecedented opportunities for programme-makers.  Drawing on findings from recent RCUK-funded res...

  4. Exploiting Textured 3D Models for Developing Serious Games

    Directory of Open Access Journals (Sweden)

    G. Kontogianni

    2015-08-01

    Digital technologies have significantly affected many fields of computer graphics, such as games and especially serious games. These games are usually used for educational purposes in fields such as health care, military applications, education and government. Digital cultural heritage in particular is a scientific area in which serious games are applied, and lately many applications have appeared in the related literature. Realistic textured 3D models, produced using different photogrammetric methods, can be a useful tool for creating serious game applications, making the final result more realistic and closer to reality. The basic goal of this paper is to show how 3D textured models produced by photogrammetric methods can be useful for developing a more realistic serious game environment. The project aims at the creation of an educational game for the Ancient Agora of Athens. The 3D models used vary not only in their production methods (i.e. time-of-flight laser scanning, structure from motion, virtual historical reconstruction, etc.) but also in their era: some illustrate the monuments in their existing state and others as they looked in the past. The Unity 3D® game development environment was used to create the application, into which all these models were imported in the same file format. For the application, two diachronic virtual tours of the Athenian Agora were produced: the first illustrates the Agora as it is today, and the second as it was in the 2nd century A.D. A future perspective for the evolution of the game is presented, which includes the addition of questions for the user to answer. Finally, an evaluation is scheduled to be performed at the end of the project.

  5. Exploiting Textured 3D Models for Developing Serious Games

    Science.gov (United States)

    Kontogianni, G.; Georgopoulos, A.

    2015-08-01

    Digital technologies have significantly affected many fields of computer graphics, such as games and especially the field of Serious Games. These games are usually used for educational purposes in many fields such as health care, military applications, education, government etc. Digital Cultural Heritage in particular is a scientific area to which Serious Games are applied, and lately many applications appear in the related literature. Realistic textured 3D models, produced using different photogrammetric methods, can be a useful tool for the creation of Serious Game applications, making the final result more realistic and closer to reality. The basic goal of this paper is to show how 3D textured models produced by photogrammetric methods can be useful for developing a more realistic environment for a Serious Game. The application of this project aims at the creation of an educational game for the Ancient Agora of Athens. The 3D models used vary not only as far as their production methods (i.e. time-of-flight laser scanning, Structure from Motion, virtual historical reconstruction etc.) are concerned, but also as far as their era, as some of them are illustrated in their existing situation and others as these monuments looked in the past. The Unity 3D® game development environment was used for creating this application, in which all these models were inserted in the same file format. For the application, two diachronic virtual tours of the Athenian Agora were produced. The first one illustrates the Agora as it is today and the second one at the 2nd century A.D. Finally, the future perspective for the evolution of this game is presented, which includes the addition of some questions that the user will be able to answer, and an evaluation is scheduled to be performed at the end of the project.

  6. Television production, Funding Models and Exploitation of Content

    Directory of Open Access Journals (Sweden)

    Gillian Doyle

    2016-07-01

    Full Text Available The rise of digital platforms has transformative implications for strategies of financing media production and for exploitation of the economic value in creative content. In the television industry, changes in technologies for distribution and the emergence of SVOD services such as Netflix are gradually shifting audiences and financial power away from broadcasters while at the same time creating unprecedented opportunities for programme-makers. Drawing on findings from recent RCUK-funded research, this article examines how these shifts are affecting production financing and the economics of supplying television content. In particular, it focuses on what changes in the dynamics of rights markets and in strategic approaches towards the financing of television production might mean for markets, industries and for policies intended to support the economic sustainability of independent television content production businesses.

  7. Availability Control for Means of Transport in Decisive Semi-Markov Models of Exploitation Process

    Science.gov (United States)

    Migawa, Klaudiusz

    2012-12-01

    The issues presented in this research paper refer to problems connected with the control of the exploitation process implemented in complex systems of exploitation for technical objects. The article describes a method for controlling the availability of technical objects (means of transport) on the basis of a mathematical model of the exploitation process, using semi-Markov decision processes. The method consists in preparing a decision model of the exploitation process for technical objects (a semi-Markov model) and then specifying the best (optimal) control strategy from among the possible decision variants, in accordance with the adopted criterion (or criteria) for evaluating the operation of the system of exploitation for technical objects. In the presented method, specifying the optimal strategy for availability control of technical objects means choosing a sequence of control decisions, made in the individual states of the modelled exploitation process, for which the criterion function reaches its extreme value. A genetic algorithm was chosen to search for the optimal control strategy. The method is illustrated with the example of the exploitation process of means of transport in a real municipal bus transport system. The model of the exploitation process for the means of transport was prepared on the basis of data collected in this real transport system, and was built under the assumption that the process constitutes a homogeneous semi-Markov process.
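The optimization step described above — choosing, in each state of a semi-Markov exploitation model, the decision that extremizes the evaluation criterion — can be illustrated with a small brute-force sketch. The paper itself uses a genetic algorithm over a bus-fleet model; the three states, two decisions, transition rows, sojourn times and rewards below are purely hypothetical, and the long-run rate follows the standard renewal-reward formula for semi-Markov processes.

```python
from itertools import product

# Hypothetical 3-state semi-Markov model of a vehicle's exploitation process.
# In each state we may pick one of two decisions; each choice fixes the
# embedded transition row P[s][a], the mean sojourn time tau[s][a] (hours)
# and the per-visit availability reward r[s][a].
P = {  # P[state][action] -> embedded-chain transition probabilities
    0: {0: [0.0, 0.7, 0.3], 1: [0.0, 0.9, 0.1]},
    1: {0: [0.6, 0.0, 0.4], 1: [0.8, 0.0, 0.2]},
    2: {0: [0.5, 0.5, 0.0], 1: [0.7, 0.3, 0.0]},
}
tau = {0: {0: 10.0, 1: 12.0}, 1: {0: 5.0, 1: 8.0}, 2: {0: 20.0, 1: 30.0}}
r   = {0: {0: 9.0, 1: 11.0}, 1: {0: 4.0, 1: 6.0}, 2: {0: 2.0, 1: 1.0}}

def stationary(rows):
    """Stationary distribution of a 3-state embedded chain (power iteration)."""
    pi = [1.0 / 3.0] * 3
    for _ in range(10000):
        pi = [sum(pi[i] * rows[i][j] for i in range(3)) for j in range(3)]
    s = sum(pi)
    return [p / s for p in pi]

def long_run_rate(strategy):
    """Reward per unit time under a stationary strategy (renewal-reward)."""
    rows = [P[s][strategy[s]] for s in range(3)]
    pi = stationary(rows)
    num = sum(pi[s] * r[s][strategy[s]] for s in range(3))
    den = sum(pi[s] * tau[s][strategy[s]] for s in range(3))
    return num / den

# With 3 states and 2 decisions there are only 2**3 stationary strategies,
# so exhaustive search suffices; a genetic algorithm pays off only when the
# state and decision spaces are large.
best = max(product([0, 1], repeat=3), key=long_run_rate)
print("best strategy:", best, "rate:", round(long_run_rate(best), 4))
```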

  8. Exploiting Modelling and Simulation in Support of Cyber Defence

    NARCIS (Netherlands)

    Klaver, M.H.A.; Boltjes, B.; Croom-Jonson, S.; Jonat, F.; Çankaya, Y.

    2014-01-01

    The rapidly evolving environment of Cyber threats against the NATO Alliance has necessitated a renewed focus on the development of Cyber Defence policy and capabilities. The NATO Modelling and Simulation Group is looking for ways to leverage Modelling and Simulation experience in research, analysis

  9. Exploiting Instability: A Model for Managing Organizational Change.

    Science.gov (United States)

    Frank, Debra; Rocks, William

    In response to decreased levels of funding and declining enrollments, increased competition, and major technological advances, Allegany Community College, in Maryland, has developed a model for managing organizational change. The model incorporates the following four components for effective transition and change: conceptualization; communication;…

  10. Exploiting linkage disequilibrium in statistical modelling in quantitative genomics

    DEFF Research Database (Denmark)

    Wang, Lei

    Alleles at two loci are said to be in linkage disequilibrium (LD) when they are correlated or statistically dependent. Genomic prediction and gene mapping rely on the existence of LD between genetic markers and causal variants of complex traits. In the first part of the thesis, a novel method...... to quantify and visualize local variation in LD along chromosomes is described, and applied to characterize LD patterns at the local and genome-wide scale in three Danish pig breeds. In the second part, different ways of taking LD into account in genomic prediction models are studied. One approach is to use...... the recently proposed antedependence models, which treat neighbouring marker effects as correlated; another approach involves use of haplotype block information derived using the program Beagle. The overall conclusion is that taking LD information into account in genomic prediction models potentially improves...
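For two biallelic loci, the pairwise LD the thesis quantifies reduces to a few textbook formulas on haplotype and allele frequencies. A minimal sketch (the input frequencies are illustrative, not the Danish pig data):

```python
# Linkage disequilibrium between two biallelic loci.  p_AB is the frequency
# of the A-B haplotype; p_A and p_B are the marginal allele frequencies.

def ld_measures(p_AB, p_A, p_B):
    D = p_AB - p_A * p_B                       # raw disequilibrium
    if D >= 0:
        D_max = min(p_A * (1 - p_B), (1 - p_A) * p_B)
    else:
        D_max = min(p_A * p_B, (1 - p_A) * (1 - p_B))
    D_prime = abs(D) / D_max                   # normalised |D'|
    r2 = D * D / (p_A * (1 - p_A) * p_B * (1 - p_B))  # squared correlation
    return D, D_prime, r2

# Complete LD: the A allele always rides on the B background.
D, Dp, r2 = ld_measures(p_AB=0.3, p_A=0.3, p_B=0.3)
print(D, Dp, r2)   # D ≈ 0.21, D' ≈ 1.0, r² ≈ 1.0
```

r² between a marker and a causal variant is exactly the quantity that bounds how much of the causal signal the marker can capture in a prediction model.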

  11. Using CASE to Exploit Process Modeling in Technology Transfer

    Science.gov (United States)

    Renz-Olar, Cheryl

    2003-01-01

    A successful business will be one that has processes in place to run that business. Creating processes, reengineering processes, and continually improving processes can be accomplished through extensive modeling. Casewise(R) Corporate Modeler(TM) is a computer-aided software engineering (CASE) tool that will enable the Technology Transfer Department (TT) at NASA Marshall Space Flight Center (MSFC) to capture these abilities. After successful implementation of CASE, it could then go on to be applied in other departments at MSFC and other centers at NASA. The success of a business process is dependent upon the players working as a team and continuously improving the process. A good process fosters customer satisfaction as well as internal satisfaction in the organizational infrastructure. CASE provides a method for business process success through functions consisting of systems and processes business models; specialized diagrams; matrix management; simulation; report generation and publishing; and linking, importing, and exporting documents and files. The software has an underlying repository or database to support these functions. The Casewise manual informs us that dynamics modeling is a technique used in business design and analysis. Feedback is used as a tool for the end users and generates different ways of dealing with the process. Feedback on this project resulted from collection of issues through a systems analyst interface approach of interviews with process coordinators and Technical Points of Contact (TPOCs).

  12. Analytically exploiting noise correlations inside the feedback loop to improve locked-oscillator performance

    CSIR Research Space (South Africa)

    Sastrawan, J

    2016-08-01

    Full Text Available (2016) Analytically exploiting noise correlations inside the feedback loop to improve locked-oscillator performance. J. Sastrawan, C. Jones, I. Akhalwaya, H. Uys, and M. J. Biercuk, ARC Centre for Engineered Quantum Systems, School of Physics. …that probes and is locked to the atomic transition. The LO frequency may evolve randomly in time due to intrinsic noise processes in the underlying hardware [10,11], leading to time-varying deviations of the LO frequency from that of the stable atomic reference…

  13. Exploitation of Semantic Building Model in Indoor Navigation Systems

    Science.gov (United States)

    Anjomshoaa, A.; Shayeganfar, F.; Tjoa, A. Min

    2009-04-01

    There are many types of indoor and outdoor navigation tools and methodologies available. A majority of these solutions are based on Global Positioning Systems (GPS) and instant video and image processing. These approaches are ideal for open-world environments where very little information about the target location is available, but for large-scale building environments such as hospitals, governmental offices, etc. the end-user will need more detailed information about the surrounding context, which is especially important in the case of people with special needs. This paper presents a smart indoor navigation solution that is based on Semantic Web technologies and the Building Information Model (BIM). The proposed solution is also aligned with Google Android's concepts to enlighten the realization of results. Keywords: IAI IFCXML, Building Information Model, Indoor Navigation, Semantic Web, Google Android, People with Special Needs.
    1 Introduction. The built environment is a central factor in our daily life, and a big portion of human life is spent inside buildings. Traditionally, buildings are documented using building maps and plans by utilization of IT tools such as computer-aided design (CAD) applications. Documenting the maps in an electronic way is already pervasive, but CAD drawings do not satisfy the requirements regarding effective building models that can be shared with other building-related applications such as indoor navigation systems. Navigation in the built environment is not a new issue; however, with the advances in emerging technologies like GPS, mobile and networked environments, and the Semantic Web, new solutions have been suggested to enrich the traditional building maps and convert them to smart information resources that can be reused in other applications and improve the interpretability with building inhabitants and building visitors.
    Other important issues that should be addressed in building navigation scenarios are location tagging and end-user communication

  14. Ambidextrous Organizations: A Multiple-Level Study of Absorptive Capacity, Exploratory and Exploitative Innovation and Performance

    NARCIS (Netherlands)

    J.J.P. Jansen (Justin)

    2005-01-01

    textabstractBalancing and synchronizing exploration and exploitation is fundamental to the competitive success of firms in dynamic environments. Despite the importance of reconciling exploration and exploitation within organizations, however, relatively little empirical research has examined this

  15. Models of Social Exploitation with Special Emphasis on Slovene Traffic Economics

    Directory of Open Access Journals (Sweden)

    Iztok Ostan

    2005-01-01

    Full Text Available In order to decipher the organisational behaviour operating in the transport sector of the economy it is necessary to discover the prevalent patterns of social exploitation at work. Preliminary results of a study of experienced irregular traffic students show that, according to them, there is no significant difference in Slovenia between exploitation in traffic and other sectors. Thus, general models of exploitation could be used to explain the behaviour in the traffic sector. Empirical research among Slovene students showed that, according to their statements, in the 90s the managerial and capitalistic types of exploitation prevailed in Slovenia over non-exploitative types of economic behaviour. It also showed that statements of students do not differ much from those of the general public regarding this question, nor from the statements of irregular students with extensive work experience. It was also found that there were no substantial differences between the statements of Italian and Slovene students regarding the type of exploitation operative in their countries. Students of traffic are basically of the same opinion regarding this topic as students in general, though slightly more critical, especially towards business managers and politicians.

  16. Exploring, exploiting and evolving diversity of aquatic ecosystem models: a community perspective

    NARCIS (Netherlands)

    Janssen, A.B.G.; Gerla, D.J.

    2015-01-01

    Here, we present a community perspective on how to explore, exploit and evolve the diversity in aquatic ecosystem models. These models play an important role in understanding the functioning of aquatic ecosystems, filling in observation gaps and developing effective strategies for water quality

  17. Explore or exploit? A generic model and an exactly solvable case.

    Science.gov (United States)

    Gueudré, Thomas; Dobrinevski, Alexander; Bouchaud, Jean-Philippe

    2014-02-07

    Finding a good compromise between the exploitation of known resources and the exploration of unknown, but potentially more profitable choices, is a general problem, which arises in many different scientific disciplines. We propose a stylized model for these exploration-exploitation situations, including population or economic growth, portfolio optimization, evolutionary dynamics, or the problem of optimal pinning of vortices or dislocations in disordered materials. We find the exact growth rate of this model for treelike geometries and prove the existence of an optimal migration rate in this case. Numerical simulations in the one-dimensional case confirm the generic existence of an optimum.
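The exploration-exploitation compromise described in the abstract is commonly illustrated, in a simpler setting than the authors' growth model, by a multi-armed bandit with an ε-greedy policy: exploit the best-known option most of the time, explore a random one with probability ε. The arm means, ε and horizon below are arbitrary illustrative values, not anything from the paper:

```python
import random

def eps_greedy_bandit(true_means, eps=0.1, steps=5000, seed=1):
    """Average reward of an epsilon-greedy agent on Bernoulli arms."""
    rng = random.Random(seed)
    n = [0] * len(true_means)        # pulls per arm
    q = [0.0] * len(true_means)      # running mean reward per arm
    total = 0.0
    for _ in range(steps):
        if rng.random() < eps:                        # explore
            a = rng.randrange(len(true_means))
        else:                                         # exploit
            a = max(range(len(true_means)), key=lambda i: q[i])
        reward = 1.0 if rng.random() < true_means[a] else 0.0
        n[a] += 1
        q[a] += (reward - q[a]) / n[a]                # incremental mean
        total += reward
    return total / steps

# Pure exploitation (eps=0) can lock onto a mediocre arm early; a little
# exploration usually earns more on average over a long horizon.
avg = eps_greedy_bandit([0.2, 0.5, 0.8], eps=0.1)
print(avg)
```

The paper's "optimal migration rate" plays a role analogous to ε here: too little exploration misses the best resource, too much wastes effort on known-poor ones.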

  18. Explore or Exploit? A Generic Model and an Exactly Solvable Case

    Science.gov (United States)

    Gueudré, Thomas; Dobrinevski, Alexander; Bouchaud, Jean-Philippe

    2014-02-01

    Finding a good compromise between the exploitation of known resources and the exploration of unknown, but potentially more profitable choices, is a general problem, which arises in many different scientific disciplines. We propose a stylized model for these exploration-exploitation situations, including population or economic growth, portfolio optimization, evolutionary dynamics, or the problem of optimal pinning of vortices or dislocations in disordered materials. We find the exact growth rate of this model for treelike geometries and prove the existence of an optimal migration rate in this case. Numerical simulations in the one-dimensional case confirm the generic existence of an optimum.

  19. Models for solid oxide fuel cell systems exploitation of models hierarchy for industrial design of control and diagnosis strategies

    CERN Document Server

    Marra, Dario; Polverino, Pierpaolo; Sorrentino, Marco

    2016-01-01

    This book presents methodologies for optimal design of control and diagnosis strategies for Solid Oxide Fuel Cell systems. A key feature of the methodologies presented is the exploitation of modelling tools that balance accuracy and computational burden.

  20. Exploitation of geographic information system at mapping and modelling of selected soil parameters

    International Nuclear Information System (INIS)

    Palka, B.; Makovnikova, J.; Siran, M.

    2005-01-01

    In this presentation the authors describe the use of computers and geographic information systems (GIS) for the effective use of the soil fund, and for the rational exploitation and organization of the agricultural soil fund on the territory of the Slovak Republic, including its monitoring and modelling. The use and creation of some geographically oriented information systems and databases about soils, as well as present trends, are discussed

  1. DEVELOPMENT OF RESERVOIR CHARACTERIZATION TECHNIQUES AND PRODUCTION MODELS FOR EXPLOITING NATURALLY FRACTURED RESERVOIRS

    Energy Technology Data Exchange (ETDEWEB)

    Michael L. Wiggins; Raymon L. Brown; Faruk Civan; Richard G. Hughes

    2002-12-31

    For many years, geoscientists and engineers have undertaken research to characterize naturally fractured reservoirs. Geoscientists have focused on understanding the process of fracturing and the subsequent measurement and description of fracture characteristics. Engineers have concentrated on the fluid flow behavior in the fracture-porous media system and the development of models to predict the hydrocarbon production from these complex systems. This research attempts to integrate these two complementary views to develop a quantitative reservoir characterization methodology and flow performance model for naturally fractured reservoirs. The research has focused on estimating naturally fractured reservoir properties from seismic data, predicting fracture characteristics from well logs, and developing a naturally fractured reservoir simulator. It is important to develop techniques that can be applied to estimate the important parameters in predicting the performance of naturally fractured reservoirs. This project proposes a method to relate seismic properties to the elastic compliance and permeability of the reservoir based upon a sugar cube model. In addition, methods are presented to use conventional well logs to estimate localized fracture information for reservoir characterization purposes. The ability to estimate fracture information from conventional well logs is very important in older wells where data are often limited. Finally, a desktop naturally fractured reservoir simulator has been developed for the purpose of predicting the performance of these complex reservoirs. The simulator incorporates vertical and horizontal wellbore models, methods to handle matrix to fracture fluid transfer, and fracture permeability tensors. This research project has developed methods to characterize and study the performance of naturally fractured reservoirs that integrate geoscience and engineering data. This is an important step in developing exploitation strategies for

  2. Exploiting conformational ensembles in modeling protein-protein interactions on the proteome scale

    Science.gov (United States)

    Kuzu, Guray; Gursoy, Attila; Nussinov, Ruth; Keskin, Ozlem

    2013-01-01

    Cellular functions are performed through protein-protein interactions; therefore, identification of these interactions is crucial for understanding biological processes. Recent studies suggest that knowledge-based approaches are more useful than ‘blind’ docking for modeling at large scales. However, a caveat of knowledge-based approaches is that they treat molecules as rigid structures. The Protein Data Bank (PDB) offers a wealth of conformations. Here, we exploited ensembles of conformations in predictions by a knowledge-based method, PRISM. We tested ‘difficult’ cases in a docking-benchmark dataset, where the unbound and bound protein forms are structurally different. Considering alternative conformations for each protein, the percentage of successfully predicted interactions increased from ~26% to 66%, and 57% of the interactions were successfully predicted in an ‘unbiased’ scenario, in which data related to the bound forms were not utilized. If the appropriate conformation, or relevant template interface, is unavailable in the PDB, PRISM could not predict the interaction successfully. The pace of the growth of the PDB promises a rapid increase of ensemble conformations, emphasizing the merit of such knowledge-based ensemble strategies for higher success rates in protein-protein interaction predictions on an interactome scale. We constructed the structural network of ERK interacting proteins as a case study. PMID:23590674

  3. Inference of gene regulatory networks with sparse structural equation models exploiting genetic perturbations.

    Directory of Open Access Journals (Sweden)

    Xiaodong Cai

    Full Text Available Integrating genetic perturbations with gene expression data not only improves accuracy of regulatory network topology inference, but also enables learning of causal regulatory relations between genes. Although a number of methods have been developed to integrate both types of data, the desiderata of efficient and powerful algorithms still remain. In this paper, sparse structural equation models (SEMs) are employed to integrate both gene expression data and cis-expression quantitative trait loci (cis-eQTL), for modeling gene regulatory networks in accordance with biological evidence about genes regulating or being regulated by a small number of genes. A systematic inference method named sparsity-aware maximum likelihood (SML) is developed for SEM estimation. Using simulated directed acyclic or cyclic networks, the SML performance is compared with that of two state-of-the-art algorithms: the adaptive Lasso (AL) based scheme, and the QTL-directed dependency graph (QDG) method. Computer simulations demonstrate that the novel SML algorithm offers significantly better performance than the AL-based and QDG algorithms across all sample sizes from 100 to 1,000, in terms of detection power and false discovery rate, in all the cases tested that include acyclic or cyclic networks of 10, 30 and 300 genes. The SML method is further applied to infer a network of 39 human genes that are related to the immune function and are chosen to have a reliable eQTL per gene. The resulting network consists of 9 genes and 13 edges. Most of the edges represent interactions reasonably expected from experimental evidence, while the remaining may just indicate the emergence of new interactions. The sparse SEM and efficient SML algorithm provide an effective means of exploiting both gene expression and perturbation data to infer gene regulatory networks. An open-source computer program implementing the SML algorithm is freely available upon request.
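The sparsity mechanism underlying estimators such as SML — an ℓ1 penalty that drives most candidate regulatory coefficients exactly to zero — can be sketched with plain coordinate-descent Lasso on a single response. This is a generic illustration, not the published SML code; the design matrix, response and penalty are toy values in which only the first predictor truly matters:

```python
def soft_threshold(z, g):
    """Soft-thresholding operator: the scalar Lasso solution."""
    if z > g:
        return z - g
    if z < -g:
        return z + g
    return 0.0

def lasso_cd(X, y, lam, sweeps=100):
    """Coordinate-descent Lasso for min (1/2n)||y - X b||^2 + lam*||b||_1."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(sweeps):
        for j in range(p):
            # partial residual excluding feature j
            r = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n
            beta[j] = soft_threshold(rho, lam) / z
    return beta

# Toy data: y depends only on the first of three candidate regulators.
X = [[1.0, 0.5, -1.0], [2.0, -1.0, 0.3], [-1.0, 0.2, 0.8],
     [0.5, 1.1, -0.4], [-2.0, -0.3, 0.2], [1.5, -0.5, -0.6]]
y = [2.0, 4.0, -2.0, 1.0, -4.0, 3.0]          # y = 2 * x0 exactly
beta = lasso_cd(X, y, lam=0.1)
print([round(b, 3) for b in beta])            # → [1.952, 0.0, 0.0]
```

The two irrelevant coefficients are shrunk exactly to zero, which is the property that lets a sparse SEM recover a small set of regulators per gene.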

  4. Performance Production Analysis of Romanian Simmental Exploited at „P.F.A Munteanu Cornel” Farm

    Directory of Open Access Journals (Sweden)

    Alin Florin Avram

    2010-10-01

    Full Text Available The research aims to assess the levels of productivity and the main milk indicators of Romanian Simmental cows and their half-blood crosses, exploited for milk production under the environmental conditions of Alba County. 45 cows from the “P.F.A. Munteanu Cornel” farm were studied. The results show that the maximum average milk production is 5774 kg, registered in the third lactation, and that the average percentages of fat and protein are 3.92 and 3.34 respectively. The conclusion drawn from the study is that the Romanian Simmental cows studied achieve quantitative and qualitative production above the breed standard, and that efforts are being made to shorten the period of exploitation and to intensify milk production in the first lactations.

  5. Modelling an exploited marine fish community with 15 parameters - results from a simple size-based model

    NARCIS (Netherlands)

    Pope, J.G.; Rice, J.C.; Daan, N.; Jennings, S.; Gislason, H.

    2006-01-01

    To measure and predict the response of fish communities to exploitation, it is necessary to understand how the direct and indirect effects of fishing interact. Because fishing and predation are size-selective processes, the potential response can be explored with size-based models. We use a

  6. An exploratory model of girls' vulnerability to commercial sexual exploitation in prostitution.

    Science.gov (United States)

    Reid, Joan A

    2011-05-01

    Due to inaccessibility of child victims of commercial sexual exploitation, the majority of emergent research on the problem lacks theoretical framing or sufficient data for quantitative analysis. Drawing from Agnew's general strain theory, this study utilized structural equation modeling to explore: whether caregiver strain is linked to child maltreatment, if experiencing maltreatment is associated with risk-inflating behaviors or sexual denigration of self/others, and if these behavioral and psychosocial dysfunctions are related to vulnerability to commercial sexual exploitation. The proposed model was tested with data from 174 predominately African American women, 12% of whom indicated involvement in prostitution while a minor. Findings revealed child maltreatment worsened with increased caregiver strain. Experiencing child maltreatment was linked to running away, initiating substance use at earlier ages, and higher levels of sexual denigration of self/others. Sexual denigration of self/others was significantly related to the likelihood of prostitution as a minor. The network of variables in the model accounted for 34% of the variance in prostitution as a minor.

  7. Exploitation and Optimization of Reservoir Performance in Hunton Formation, Oklahoma, Budget Period I, Class Revisit

    Energy Technology Data Exchange (ETDEWEB)

    Kelkar, Mohan

    2002-04-02

    This report explains the unusual characteristics of West Carney Field based on detailed geological and engineering analyses. A geological history that explains the presence of mobile water and oil in the reservoir was proposed. The combination of matrix and fractures in the reservoir explains the reservoir's flow behavior. We confirm our hypothesis by matching observed performance with a simulated model and develop procedures for correlating core data to log data so that the analysis can be extended to other, similar fields where the core coverage may be limited.

  8. Quantitative groundwater modelling for a sustainable water resource exploitation in a Mediterranean alluvial aquifer

    Science.gov (United States)

    Laïssaoui, Mounir; Mesbah, Mohamed; Madani, Khodir; Kiniouar, Hocine

    2018-05-01

    To analyze the water budget under human influences in the Isser wadi alluvial aquifer in the northeast of Algeria, we built a mathematical model which can be used to better manage groundwater exploitation. A modular three-dimensional finite-difference groundwater flow model (MODFLOW) was used. The modelling system is largely based on physical laws and employs the finite-difference numerical method to simulate water movement and fluxes in a horizontally discretized field. After calibration in steady state, the model could reproduce the initial heads with rather good precision. It enabled us to quantify the terms of the aquifer water balance and to obtain a distribution of conductivity zones. The model also highlighted the relevant role of the Isser wadi, which constitutes a drain of great importance for the aquifer, alone ensuring almost all outflows. The scenarios suggested in transient simulations showed that an increase in pumping would only increase the lowering of the groundwater levels, disrupting the natural balance of the aquifer. However, it is clear that this situation depends primarily on the position of the pumping wells in the plain as well as on the extracted volumes of water. As shown by the promising results of the model, this physically based and distributed-parameter model is a valuable contribution to the ever-advancing technology of hydrological modelling and water resources assessment.
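The finite-difference principle behind MODFLOW can be shown on a toy steady-state problem: hydraulic heads on a grid are relaxed until Laplace's equation holds between two fixed-head boundaries. This is a didactic sketch of the numerical method only, not MODFLOW and not the Isser wadi model; grid size and boundary heads are invented:

```python
def steady_state_heads(nx=21, ny=5, h_left=100.0, h_right=95.0, iters=5000):
    """Jacobi relaxation of steady-state flow in a homogeneous, isotropic
    aquifer (Laplace's equation).  Left/right columns are fixed-head
    (Dirichlet) boundaries; top/bottom rows are no-flow (mirrored)."""
    h = [[(h_left + h_right) / 2.0] * nx for _ in range(ny)]
    for row in h:
        row[0], row[-1] = h_left, h_right            # fixed-head boundaries
    for _ in range(iters):
        new = [row[:] for row in h]
        for i in range(ny):
            up, dn = max(i - 1, 0), min(i + 1, ny - 1)   # mirror at edges
            for j in range(1, nx - 1):
                # head = average of the four neighbours at steady state
                new[i][j] = 0.25 * (h[up][j] + h[dn][j]
                                    + h[i][j - 1] + h[i][j + 1])
        h = new
    return h

heads = steady_state_heads()
# With no-flow top/bottom the solution collapses to a linear head
# gradient between the two fixed-head boundaries.
print([round(v, 2) for v in heads[2][:5]])   # → [100.0, 99.75, 99.5, 99.25, 99.0]
```

Real codes solve the same balance equations, but with heterogeneous conductivities, wells, recharge and river (drain) packages added as source/sink terms.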

  9. Modeling of information on the impact of mining exploitation on bridge objects in BIM

    Science.gov (United States)

    Bętkowski, Piotr

    2018-04-01

    The article discusses the advantages of BIM (Building Information Modeling) technology in the management of bridge infrastructure in mining areas. The article shows the problems with information flow in the case of bridge objects located in mining areas and the advantages of proper information management, e.g. the possibility of automatic monitoring of structures, improvement of safety, optimization of maintenance activities, reduction of the costs of damage removal and preventive actions, improvement of the atmosphere for mining exploitation, and improvement of the relationship between the bridge manager and the mine. The traditional model of managing bridge objects in mining areas has many disadvantages, which are discussed in this article. These disadvantages include, among others: duplication of information about the object, lack of correlation between investments due to lack of information flow between the bridge manager and the mine, and limited possibilities of assessing the propagation of damage in relation to technical condition and structural resistance to mining influences.

  10. Estimation of Physical Layer Performance in WSNs Exploiting the Method of Indirect Observations

    Directory of Open Access Journals (Sweden)

    Luigi Atzori

    2012-11-01

    Full Text Available Wireless Sensor Networks (WSNs) are used in many industrial and consumer applications that are increasingly gaining impact in our day-to-day lives. Still, great efforts are needed towards the definition of methodologies for their effective management. One big issue is the monitoring of the network status, which requires the definition of performance indicators and methodologies that should be accurate and non-intrusive at the same time. In this paper, we present a new process for the monitoring of the physical layer in WSNs making use of a completely passive methodology. From data sniffed by external nodes, we first estimate the position of the nodes by applying Weighted Least Squares (WLS) to the method of indirect observations. The resulting node positions are then used to estimate the status of the communication links using the most appropriate propagation model. We performed a significant number of measurements in the field in both indoor and outdoor environments. From the experiments, we were able to achieve an accurate estimation of the channel link status with an average error lower than 1 dB, which is around 5 dB lower than the error introduced without the application of the proposed method.
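The estimation chain in the abstract — weighted least squares over indirect observations, then a propagation model for link status — can be sketched for its simplest building block: a 2-parameter WLS fit of a log-distance path-loss model to sniffed RSSI readings. The distances, readings and weights below are hypothetical, and the solver is the usual normal-equation form of WLS:

```python
import math

def wls_fit(A, b, w):
    """Weighted least squares x = (A^T W A)^-1 A^T W b for 2 parameters,
    with the 2x2 normal equations solved by Cramer's rule."""
    s00 = sum(wi * ai[0] * ai[0] for ai, wi in zip(A, w))
    s01 = sum(wi * ai[0] * ai[1] for ai, wi in zip(A, w))
    s11 = sum(wi * ai[1] * ai[1] for ai, wi in zip(A, w))
    t0 = sum(wi * ai[0] * bi for ai, bi, wi in zip(A, b, w))
    t1 = sum(wi * ai[1] * bi for ai, bi, wi in zip(A, b, w))
    det = s00 * s11 - s01 * s01
    return (s11 * t0 - s01 * t1) / det, (s00 * t1 - s01 * t0) / det

# Hypothetical sniffed RSSI readings (dBm) at known distances (m).  The
# log-distance model RSSI = P0 - 10*n*log10(d) is linear in (P0, n).
dist = [1.0, 2.0, 4.0, 8.0, 16.0]
rssi = [-40.0, -46.2, -52.1, -57.9, -64.3]
w = [1.0, 1.0, 1.0, 0.5, 0.5]               # trust far readings less
A = [(1.0, -10.0 * math.log10(d)) for d in dist]
P0, n = wls_fit(A, rssi, w)
print(round(P0, 1), round(n, 2))            # close to P0 ≈ -40 dBm, n ≈ 2
```

With P0 and n calibrated, the same model run in reverse predicts the received power, and hence the status, of any link between two localized nodes.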

  11. Inverse modeling and forecasting for the exploitation of the Pauzhetsky geothermal field, Kamchatka, Russia

    Energy Technology Data Exchange (ETDEWEB)

    Kiryukhin, Alexey V. [Institute of Volcanology and Seismology FEB RAS, Piip-9, P-Kamchatsky 683006 (Russian Federation); Asaulova, Natalia P. [Kamchatskburgeotemia Enterprise, Krasheninnikova-1, Thermalny, Kamchatka 684035 (Russian Federation); Finsterle, Stefan [Lawrence Berkeley National Laboratory, MS 90-1116, One Cyclotron Road, Berkeley, CA 94720 (United States)

    2008-10-15

    A three-dimensional numerical model of the Pauzhetsky geothermal field has been developed based on a conceptual hydrogeological model of the system. It extends over a 13.6-km² area and includes three layers: (1) a base layer with inflow; (2) a geothermal reservoir; and (3) an upper layer with discharge and recharge/infiltration areas. Using the computer program iTOUGH2 [Finsterle, S., 2004. Multiphase inverse modeling: review and iTOUGH2 applications. Vadose Zone J. 3, 747-762], the model is calibrated to a total of 13,675 calibration points, combining natural-state and 1960-2006 exploitation data. The principal model parameters identified and estimated by inverse modeling include the fracture permeability and fracture porosity of the geothermal reservoir, the initial natural upflow rate, the base-layer porosity, and the permeabilities of the infiltration zones. Heat and mass balances derived from the calibrated model helped identify the sources of the geothermal reserves in the field. With the addition of five make-up wells, simulation forecasts for the 2007-2032 period predict a sustainable average steam production of 29 kg/s, which is sufficient to maintain the generation of 6.8 MWe at the Pauzhetsky power plant. (author)

  12. The Peace and Power Conceptual Model: An Assessment Guide for School Nurses Regarding Commercial Sexual Exploitation of Children.

    Science.gov (United States)

    Fraley, Hannah E; Aronowitz, Teri

    2017-10-01

    Human trafficking is a global problem; more than half of all victims are children. In the United States (US), at-risk youth continue to attend school. School nurses are on the frontlines, presenting a window of opportunity to identify and prevent exploitation. Available papers targeting school nurses report that school nurses may lack awareness of commercial sexual exploitation and may have attitudes and misperceptions about behaviors of school children at risk. This is a theoretical paper applying the Peace and Power Conceptual Model to understand the role of school nurses in commercial sexual exploitation of children.

  13. A simple interpretation of Hubbert's model of resource exploitation

    Energy Technology Data Exchange (ETDEWEB)

    Bardi, U.; Lavacchi, A. [Dipartimento di Chimica, Universita di Firenze, Via della Lastruccia 3, Sesto Fiorentino FI (Italy); Bardi, U.; Lavacchi, A. [ASPO - Association for the Study of Peak Oil and Gas, Italian section, c/o Dipartimento di Chimica, Universita di Firenze, 50019 Sesto Fiorentino (Italy)

    2009-07-01

    The well known 'Hubbert curve' assumes that the production curve of a crude oil in a free market economy is 'bell shaped' and symmetric. The model was first applied in the 1950s as a way of forecasting the production of crude oil in the US lower 48 states. Today, variants of the model are often used for describing the worldwide production of crude oil, which is expected to reach a global production peak ('peak oil') and to decline afterwards. The model has also been shown to be generally valid for mineral resources other than crude oil, and for slowly renewable biological resources such as whales. Despite its widespread use, Hubbert's model is sometimes criticized for being arbitrary, and its underlying assumptions are rarely examined. In the present work, we use a simple model to generate the bell-shaped curve using the smallest possible number of assumptions, also taking into account the 'Energy Return on Energy Invested' (EROI or EROEI) parameter. We show that this model can reproduce several historical cases, even for resources other than crude oil, and provides a useful tool for understanding the general mechanisms of resource exploitation and the future of energy production in the world economy. (author)
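A minimal form of the symmetric curve is the derivative of a logistic: cumulative production Q(t) = URR / (1 + e^(-k(t - t0))) gives a bell-shaped production rate peaking at k·URR/4. A sketch with purely illustrative numbers:

```python
import math

def hubbert_production(t, urr=2000.0, k=0.05, t_peak=1970.0):
    """Derivative of the logistic: symmetric and bell-shaped, peaking at
    t_peak with peak rate k * urr / 4. Units here are arbitrary."""
    e = math.exp(-k * (t - t_peak))
    return urr * k * e / (1.0 + e) ** 2

peak = hubbert_production(1970.0)  # k * urr / 4 = 0.05 * 2000 / 4 = 25.0
```

The symmetry the record describes follows directly from the functional form: production at t_peak + d equals production at t_peak - d for any offset d.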

  14. Inverse modeling and forecasting for the exploitation of the Pauzhetsky geothermal field, Kamchatka, Russia

    Energy Technology Data Exchange (ETDEWEB)

    Finsterle, Stefan; Kiryukhin, A.V.; Asaulova, N.P.; Finsterle, S.

    2008-04-01

    A three-dimensional numerical model of the Pauzhetsky geothermal field has been developed based on a conceptual hydrogeological model of the system. It extends over a 13.6-km2 area and includes three layers: (1) a base layer with inflow; (2) a geothermal reservoir; and (3) an upper layer with discharge and recharge/infiltration areas. Using the computer program iTOUGH2 (Finsterle, 2004), the model is calibrated to a total of 13,675 calibration points, combining natural-state and 1960-2006 exploitation data. The principal model parameters identified and estimated by inverse modeling include the fracture permeability and fracture porosity of the geothermal reservoir, the initial natural upflow rate, the base-layer porosity, and the permeabilities of the infiltration zones. Heat and mass balances derived from the calibrated model helped identify the sources of the geothermal reserves in the field. With the addition of five makeup wells, simulation forecasts for the 2007-2032 period predict a sustainable average steam production of 29 kg/s, which is sufficient to maintain the generation of 6.8 MWe at the Pauzhetsky power plant.

  15. Exploiting CMS data popularity to model the evolution of data management for Run-2 and beyond

    CERN Document Server

    Bonacorsi, D; Giordano, D; Girone, M; Neri, M; Magini, N; Kuznetsov, V; Wildish, T

    2015-01-01

    During the LHC Run-1 data taking, all experiments collected large data volumes from proton-proton and heavy-ion collisions. The collisions data, together with massive volumes of simulated data, were replicated in multiple copies, transferred among various Tier levels, transformed/slimmed in format/content. These data were then accessed (both locally and remotely) by large groups of distributed analysis communities exploiting the WorldWide LHC Computing Grid infrastructure and services. While efficient data placement strategies - together with optimal data redistribution and deletions on demand - have become the core of static versus dynamic data management projects, little effort has so far been invested in understanding the detailed data-access patterns which surfaced in Run-1. These patterns, if understood, can be used as input to simulation of computing models at the LHC, to optimise existing systems by tuning their behaviour, and to explore next-generation CPU/storage/network co-scheduling solutions. This is of great importance, given that the scale of the computing problem will increase far faster than the resources available to the experiments, for Run-2 and beyond.

  16. Toward a synthetic economic systems modeling tool for sustainable exploitation of ecosystems.

    Science.gov (United States)

    Richardson, Colin; Courvisanos, Jerry; Crawford, John W

    2011-02-01

    Environmental resources that underpin the basic human needs of water, energy, and food are predicted to become in such short supply by 2050 that global security and the well-being of millions will be under threat. These natural commodities have been allowed to reach crisis levels of supply because of a failure of economic systems to sustain them. This is largely because there have been no means of integrating their exploitation into any economic model that effectively addresses ecological systemic failures in a way that provides an integrated ecological-economic tool that can monitor and evaluate market and policy targets. We review the reasons for this and recent attempts to address the problem while identifying outstanding issues. The key elements of a policy-oriented economic model that integrates ecosystem processes are described and form the basis of a proposed new synthesis approach. The approach is illustrated by an indicative case study that develops a simple model for rainfed and irrigated food production in the Murray-Darling basin of southeastern Australia. © 2011 New York Academy of Sciences.

  17. Exploiting multiple sources of information in learning an artificial language: human data and modeling.

    Science.gov (United States)

    Perruchet, Pierre; Tillmann, Barbara

    2010-03-01

    This study investigates the joint influences of three factors on the discovery of new word-like units in a continuous artificial speech stream: the statistical structure of the ongoing input, the initial word-likeness of parts of the speech flow, and the contextual information provided by the earlier emergence of other word-like units. Results of an experiment conducted with adult participants show that these sources of information have strong and interactive influences on word discovery. The authors then examine the ability of different models of word segmentation to account for these results. PARSER (Perruchet & Vinter, 1998) is compared with the view that word segmentation relies on the exploitation of transitional probabilities between successive syllables, and with models based on the Minimum Description Length principle, such as INCDROP. The authors submit arguments suggesting that PARSER has the advantage of accounting for the whole pattern of data without ad hoc modifications, while relying exclusively on general-purpose learning principles. This study strengthens the growing notion that nonspecific cognitive processes, mainly based on associative learning and memory principles, are able to account for a larger part of early language acquisition than previously assumed. Copyright © 2009 Cognitive Science Society, Inc.
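The transitional-probability account that PARSER is compared against can be made concrete: estimate P(next syllable | current syllable) from the stream and posit word boundaries where the probability drops. The three-word mini-language below is invented for the sketch.

```python
from collections import Counter
import random

random.seed(0)
words = ["tuba", "piro", "gola"]  # invented two-syllable "words"
syllables = []
for _ in range(500):
    w = random.choice(words)
    syllables += [w[:2], w[2:]]   # concatenate into a continuous stream

pair_counts = Counter(zip(syllables, syllables[1:]))
first_counts = Counter(syllables[:-1])

def tp(a, b):
    """Transitional probability P(b | a) estimated from the stream."""
    return pair_counts[(a, b)] / first_counts[a]

within = tp("tu", "ba")   # within-word transition
between = tp("ba", "pi")  # between-word transition
```

Within-word transitions come out deterministic while between-word transitions hover around 1/3; that dip at word edges is exactly what a TP-based segmenter exploits to place boundaries.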

  18. Optical modelling of far-infrared astronomical instrumentation exploiting multimode horn antennas

    Science.gov (United States)

    O'Sullivan, Créidhe; Murphy, J. Anthony; Mc Auley, Ian; Wilson, Daniel; Gradziel, Marcin L.; Trappe, Neil; Cahill, Fiachra; Peacocke, T.; Savini, G.; Ganga, K.

    2014-07-01

    In this paper we describe the optical modelling of astronomical telescopes that exploit bolometric detectors fed by multimoded horn antennas. In cases where the horn shape is profiled rather than being a simple cone, we determine the beam at the horn aperture using an electromagnetic mode-matching technique. Bolometers, usually placed in an integrating cavity, can excite many hybrid modes in a corrugated horn; we usually assume they excite all modes equally. If the waveguide section feeding the horn is oversized these modes can propagate independently, thereby increasing the throughput of the system. We use an SVD analysis on the matrix that describes the scattering between waveguide (TE/TM) modes to recover the independent orthogonal fields (hybrid modes) and then propagate these to the sky independently where they are added in quadrature. Beam patterns at many frequencies across the band are then added with a weighting appropriate to the source spectrum. Here we describe simulations carried out on the highest-frequency (857-GHz) channel of the Planck HFI instrument. We concentrate in particular on the use of multimode feedhorns and consider the effects of possible manufacturing tolerances on the beam on the sky. We also investigate the feasibility of modelling far-out sidelobes across a wide band for electrically large structures and bolometers fed by multi-mode feedhorns. Our optical simulations are carried out using the industry-standard GRASP software package.

  19. Exploiting Surface Albedos Products to Bridge the Gap Between Remote Sensing Information and Climate Models

    Science.gov (United States)

    Pinty, Bernard; Andredakis, Ioannis; Clerici, Marco; Kaminski, Thomas; Taberner, Malcolm; Stephen, Plummer

    2011-01-01

    We present results from the application of an inversion method conducted using MODIS derived broadband visible and near-infrared surface albedo products. This contribution is an extension of earlier efforts to optimally retrieve land surface fluxes and associated two-stream model parameters based on the Joint Research Centre Two-stream Inversion Package (JRC-TIP). The discussion focuses on products (based on the mean and one-sigma values of the Probability Distribution Functions (PDFs)) obtained during the summer and winter and highlights specific issues related to snowy conditions. This paper discusses the retrieved model parameters including the effective Leaf Area Index (LAI), the background brightness and the scattering efficiency of the vegetation elements. The spatial and seasonal changes exhibited by these parameters agree with common knowledge and underscore the richness of the high quality surface albedo data sets. At the same time, the opportunity to generate global maps of new products, such as the background albedo, underscores the advantages of using state of the art algorithmic approaches capable of fully exploiting accurate satellite remote sensing datasets. The detailed analyses of the retrieval uncertainties highlight the central role and contribution of the LAI, the main process parameter to interpret radiation transfer observations over vegetated surfaces. The posterior covariance matrix of the uncertainties is further exploited to quantify the knowledge gain from the ingestion of MODIS surface albedo products. The estimation of the radiation fluxes that are absorbed, transmitted and scattered by the vegetation layer and its background is achieved on the basis of the retrieved PDFs of the model parameters. The propagation of uncertainties from the observations to the model parameters is achieved via the Hessian of the cost function and yields a covariance matrix of posterior parameter uncertainties. This matrix is propagated to the radiation

  20. Well performance model

    International Nuclear Information System (INIS)

    Thomas, L.K.; Evans, C.E.; Pierson, R.G.; Scott, S.L.

    1992-01-01

    This paper describes the development and application of a comprehensive oil or gas well performance model. The model contains six distinct sections: stimulation design, tubing and/or casing flow, reservoir and near-wellbore calculations, production forecasting, wellbore heat transmission, and economics. These calculations may be performed separately or in an integrated fashion with data and results shared among the different sections. The model analysis allows evaluation of all aspects of well completion design, including the effects on future production and overall well economics

  1. Exploitation, Exploration or Continuous Innovation? Strategy: Focus, Fit and Performance in different business environments

    DEFF Research Database (Denmark)

    Gröessler, Andreas; Laugen, Bjørge Timenes; Lassen, Astrid Heidemann

    The purpose of this paper is to investigate the extent to which continuous innovation is pursued as a strategy for manufacturing firms in different types of competitive environments, and whether continuous innovation firms perform better than focused firms in certain environments. Statistical analyses are performed on data collected from an international sample of manufacturing firms through the International Manufacturing Strategy Survey. The main findings are that, while focused as well as continuous innovation firms exist in all three types of business environments identified in this paper...

  2. High-performance computing on the Intel Xeon Phi how to fully exploit MIC architectures

    CERN Document Server

    Wang, Endong; Shen, Bo; Zhang, Guangyong; Lu, Xiaowei; Wu, Qing; Wang, Yajuan

    2014-01-01

    The aim of this book is to explain to high-performance computing (HPC) developers how to utilize the Intel® Xeon Phi™ series products efficiently. To that end, it introduces some computing grammar, programming technology and optimization methods for using many-integrated-core (MIC) platforms and also offers tips and tricks for actual use, based on the authors' first-hand optimization experience. The material is organized in three sections. The first section, "Basics of MIC", introduces the fundamentals of MIC architecture and programming, including the specific Intel MIC programming environment

  3. Characterisation and exploitation of Atlas electromagnetic calorimeter performances: muons study and timing resolution use

    International Nuclear Information System (INIS)

    Camard, A.

    2004-10-01

    The ATLAS detector at the LHC includes electromagnetic calorimeters. The purpose of this work is to study the calorimeter response to the muons contaminating the beams used to test the different ATLAS modules. We have shown how test-beam data analysis can be used to verify the required performance, and that the study of the detector response to muons provides a diagnostic tool complementary to electrons. We took part in the design of a test bench aimed at assessing the performance of the receiver circuit for timing and trigger signals. Within a fast simulation of ATLAS, we have developed a tool for the simple and fast reconstruction of the position of the main event vertex from the measured arrival times of particles in the ATLAS calorimeters. This tool is likely to be used extensively during the start-up phase of the ATLAS experiment, because it is quick to operate and less sensitive to background noise than traditional tools based on charged-particle track recognition inside the detector

  4. Exploiting CMS data popularity to model the evolution of data management for Run-2 and beyond

    International Nuclear Information System (INIS)

    Bonacorsi, D; Neri, M; Boccali, T; Giordano, D; Girone, M; Magini, N; Kuznetsov, V; Wildish, T

    2015-01-01

    During the LHC Run-1 data taking, all experiments collected large data volumes from proton-proton and heavy-ion collisions. The collisions data, together with massive volumes of simulated data, were replicated in multiple copies, transferred among various Tier levels, transformed/slimmed in format/content. These data were then accessed (both locally and remotely) by large groups of distributed analysis communities exploiting the WorldWide LHC Computing Grid infrastructure and services. While efficient data placement strategies - together with optimal data redistribution and deletions on demand - have become the core of static versus dynamic data management projects, little effort has so far been invested in understanding the detailed data-access patterns which surfaced in Run-1. These patterns, if understood, can be used as input to simulation of computing models at the LHC, to optimise existing systems by tuning their behaviour, and to explore next-generation CPU/storage/network co-scheduling solutions. This is of great importance, given that the scale of the computing problem will increase far faster than the resources available to the experiments, for Run-2 and beyond. Studying data-access patterns involves the validation of the quality of the monitoring data collected on the “popularity” of each dataset, the analysis of the frequency and pattern of accesses to different datasets by analysis end-users, the exploration of different views of the popularity data (by physics activity, by region, by data type), the study of the evolution of Run-1 data exploitation over time, the evaluation of the impact of different data placement and distribution choices on the available network and storage resources and their impact on the computing operations. This work presents some insights from studies on the popularity data from the CMS experiment. We present the properties of a range of physics analysis activities as seen by the data popularity, and make recommendations for

  5. Exploiting CMS data popularity to model the evolution of data management for Run-2 and beyond

    Science.gov (United States)

    Bonacorsi, D.; Boccali, T.; Giordano, D.; Girone, M.; Neri, M.; Magini, N.; Kuznetsov, V.; Wildish, T.

    2015-12-01

    During the LHC Run-1 data taking, all experiments collected large data volumes from proton-proton and heavy-ion collisions. The collisions data, together with massive volumes of simulated data, were replicated in multiple copies, transferred among various Tier levels, transformed/slimmed in format/content. These data were then accessed (both locally and remotely) by large groups of distributed analysis communities exploiting the WorldWide LHC Computing Grid infrastructure and services. While efficient data placement strategies - together with optimal data redistribution and deletions on demand - have become the core of static versus dynamic data management projects, little effort has so far been invested in understanding the detailed data-access patterns which surfaced in Run-1. These patterns, if understood, can be used as input to simulation of computing models at the LHC, to optimise existing systems by tuning their behaviour, and to explore next-generation CPU/storage/network co-scheduling solutions. This is of great importance, given that the scale of the computing problem will increase far faster than the resources available to the experiments, for Run-2 and beyond. Studying data-access patterns involves the validation of the quality of the monitoring data collected on the “popularity” of each dataset, the analysis of the frequency and pattern of accesses to different datasets by analysis end-users, the exploration of different views of the popularity data (by physics activity, by region, by data type), the study of the evolution of Run-1 data exploitation over time, the evaluation of the impact of different data placement and distribution choices on the available network and storage resources and their impact on the computing operations. This work presents some insights from studies on the popularity data from the CMS experiment. We present the properties of a range of physics analysis activities as seen by the data popularity, and make recommendations for

  6. Lithium-ion battery performance improvement based on capacity recovery exploitation

    International Nuclear Information System (INIS)

    Eddahech, Akram; Briat, Olivier; Vinassa, Jean-Michel

    2013-01-01

    Highlights: •Experiments on combined power-cycling/calendar aging of a high-power lithium battery. •Recovery phenomenon in battery capacity when power-cycling is stopped. •Full discharge during rest time is a potential source of battery life prolongation. •Temperature impact on capacity recovery and battery aging. -- Abstract: In this work, the performance recovery phenomenon observed when aging high-power lithium-ion batteries used in HEV applications is highlighted. This phenomenon consists in an increase in battery capacity when power-cycling is stopped. The dependency of this phenomenon on the stop-SOC value is demonstrated. Keeping the battery in a fully discharged state preserves a large amount of charge from the SEI-electrolyte interaction in the positive electrode during rest time. Results from power cycling and combined calendar/power-cycling aging of a 12 Ah commercial lithium-ion battery at two temperatures (45 °C and 55 °C) are presented and discussed

  7. NIF capsule performance modeling

    Directory of Open Access Journals (Sweden)

    Weber S.

    2013-11-01

    Full Text Available Post-shot modeling of NIF capsule implosions was performed in order to validate our physical and numerical models. Cryogenic layered target implosions and experiments with surrogate targets produce an abundance of capsule performance data including implosion velocity, remaining ablator mass, times of peak x-ray and neutron emission, core image size, core symmetry, neutron yield, and x-ray spectra. We have attempted to match the integrated data set with capsule-only simulations by adjusting the drive and other physics parameters within expected uncertainties. The simulations include interface roughness, time-dependent symmetry, and a model of mix. We were able to match many of the measured performance parameters for a selection of shots.

  8. Exploiting the atmosphere's memory for monthly, seasonal and interannual temperature forecasting using Scaling LInear Macroweather Model (SLIMM)

    Science.gov (United States)

    Del Rio Amador, Lenin; Lovejoy, Shaun

    2016-04-01

    The corresponding space-time model (the ScaLIng Macroweather Model, SLIMM) is thus only multifractal in space, where the spatial intermittency is associated with different climate zones. SLIMM exploits the power-law (scaling) behavior in time of the temperature field and uses the long historical memory of the temperature series to improve the skill. The only model parameter is the fluctuation scaling exponent, H (usually in the range -0.5 to 0), which is directly related to the skill and can be obtained from the data. The results predicted analytically by the model have been tested by performing actual hindcasts in different 5° x 5° regions covering the planet, using ERA-Interim, 20CRv2 and NCEP/NCAR reanalyses as reference datasets. We report maps of the theoretical skill predicted by the model and compare it with actual skill based on hindcasts at monthly, seasonal and annual resolutions. We also present maps of calibrated probability hindcasts with their respective validations. Comparisons between our results using SLIMM, other stochastic autoregressive models, and hindcasts from the Canadian Seasonal to Interannual Prediction System (CanSIPS) and the National Centers for Environmental Prediction (NCEP) model CFSv2 are also shown. For seasonal temperature forecasts, SLIMM outperforms the GCM-based forecasts over more than 90% of the earth's surface. SLIMM forecasts can be accessed online through the site: http://www.to_be_announced.mcgill.ca.
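The estimation of the single parameter H rests on the scaling of fluctuations with time lag: regressing log mean fluctuation on log lag gives the exponent. The sketch below only illustrates that estimation idea on a synthetic random walk, whose fluctuation exponent is +0.5 (unlike the negative macroweather values the record discusses); it is not the SLIMM code.

```python
import math
import random

random.seed(42)
# Synthetic series: cumulative sum of white noise, fluctuation exponent 0.5.
series, total = [], 0.0
for _ in range(20000):
    total += random.gauss(0.0, 1.0)
    series.append(total)

def mean_fluctuation(x, lag):
    """Mean absolute difference of the series at a given time lag."""
    diffs = [abs(x[i + lag] - x[i]) for i in range(len(x) - lag)]
    return sum(diffs) / len(diffs)

lags = [1, 2, 4, 8, 16, 32, 64]
pts = [(math.log(l), math.log(mean_fluctuation(series, l))) for l in lags]

# Least-squares slope of log S(lag) vs log lag estimates the exponent H.
n = len(pts)
mx = sum(p[0] for p in pts) / n
my = sum(p[1] for p in pts) / n
H = sum((p[0] - mx) * (p[1] - my) for p in pts) / sum((p[0] - mx) ** 2 for p in pts)
```

For this random walk the regression returns a value close to 0.5; applied to macroweather temperature anomalies, the same procedure would yield the negative H that SLIMM then uses for its long-memory forecasts.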

  9. Ion thruster performance model

    International Nuclear Information System (INIS)

    Brophy, J.R.

    1984-01-01

    A model of ion thruster performance is developed for high flux density cusped magnetic field thruster designs. This model is formulated in terms of the average energy required to produce an ion in the discharge chamber plasma and the fraction of these ions that are extracted to form the beam. The direct loss of high energy (primary) electrons from the plasma to the anode is shown to have a major effect on thruster performance. The model provides simple algebraic equations enabling one to calculate the beam ion energy cost, the average discharge chamber plasma ion energy cost, the primary electron density, the primary-to-Maxwellian electron density ratio and the Maxwellian electron temperature. Experiments indicate that the model correctly predicts the variation in plasma ion energy cost for changes in propellant gas (Ar, Kr, and Xe), grid transparency to neutral atoms, beam extraction area, discharge voltage, and discharge chamber wall temperature
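The central bookkeeping of such a model (every ion created in the discharge costs energy, but only a fraction is extracted into the beam, so the cost per beam ion is the plasma ion cost divided by the extracted fraction) can be sketched in a few lines; the numeric values are illustrative, not taken from the paper.

```python
def beam_ion_energy_cost(plasma_ion_cost_ev, extracted_fraction):
    """Energy spent per beam ion: ions produced in the discharge plasma
    that are not extracted still cost energy, inflating the per-beam-ion
    price by 1 / extracted_fraction."""
    return plasma_ion_cost_ev / extracted_fraction

# Illustrative values: 150 eV per plasma ion, half the ions reach the beam.
cost = beam_ion_energy_cost(150.0, 0.5)  # 300 eV per beam ion
```

This simple ratio is why the direct loss of primary electrons matters so much in the record's analysis: losses raise the plasma ion energy cost, and any drop in the extracted fraction multiplies that cost further.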

  10. Base Station Performance Model

    OpenAIRE

    Walsh, Barbara; Farrell, Ronan

    2005-01-01

    At present the testing of power amplifiers within base station transmitters is limited to testing at component level as opposed to testing at the system level. While the detection of catastrophic failure is possible, that of performance degradation is not. This paper proposes a base station model with respect to transmitter output power with the aim of introducing system level monitoring of the power amplifier behaviour within the base station. Our model reflects the expe...

  11. Performance modeling of Beamlet

    International Nuclear Information System (INIS)

    Auerbach, J.M.; Lawson, J.K.; Rotter, M.D.; Sacks, R.A.; Van Wonterghem, B.W.; Williams, W.H.

    1995-01-01

    Detailed modeling of beam propagation in Beamlet has been made to predict system performance. New software allows extensive use of optical component characteristics. This inclusion of real optical component characteristics has resulted in close agreement between calculated and measured beam distributions

  12. Balancing Information Analysis and Decision Value: A Model to Exploit the Decision Process

    Science.gov (United States)

    2011-12-01

    technical intelligence, e.g. signals and sensors (SIGINT and MASINT), imagery (IMINT), as well as human and open source intelligence (HUMINT and OSINT) (Clark 2006). The ability to capture large amounts of data and the plenitude of modern intelligence information sources provides a rich cache of... many techniques for managing information collected and derived from these sources, the exploitation of intelligence assets for decision-making

  13. Redefining Exploitation

    DEFF Research Database (Denmark)

    Agarwala, Rina

    2016-01-01

    This article examines how self-employed workers are organizing in the garments and waste collection industries in India. Although the question of who is profiting from self-employed workers’ labor is complex, the cases outlined in this paper highlight telling instances of how some self-employed workers are organizing as workers. They are fighting labor exploitation by redefining the concept to include additional exploitation axes (from the state and middle class) and forms (including sexual). In doing so, they are redefining potential solutions, including identities and material benefits, to fit their unique needs. By expanding the category of “workers” beyond those defined by a narrow focus on a standard employer-employee relationship, these movements are also fighting exclusion from earlier labor protections by increasing the number of entitled beneficiaries. These struggles provide an important

  14. Exploration and Exploitation within Firms : The Impact of CEOs' Cognitive Style on Incremental and Radical Innovation Performance

    NARCIS (Netherlands)

    de Visser, Matthias; Faems, Dries

    Previous studies have provided valuable insights into how environmental and organizational factors may influence levels of explorative and exploitative innovation in firms. At the same time, scholars suggest that individual characteristics, such as cognitive and behavioural inclinations of top

  15. Exploiting multi-scale parallelism for large scale numerical modelling of laser wakefield accelerators

    International Nuclear Information System (INIS)

    Fonseca, R A; Vieira, J; Silva, L O; Fiuza, F; Davidson, A; Tsung, F S; Mori, W B

    2013-01-01

    A new generation of laser wakefield accelerators (LWFA), supported by the extreme accelerating fields generated in the interaction of PW-Class lasers and underdense targets, promises the production of high quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modelling to further understand the underlying physics and identify optimal regimes, but large scale modelling of these scenarios is computationally heavy and requires the efficient use of state-of-the-art petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed/shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large scale modelling of LWFA, demonstrating speedups of over 1 order of magnitude on the same hardware. Finally, scalability to over ∼10^6 cores and sustained performance over ∼2 PFlops is demonstrated, opening the way for large scale modelling of LWFA scenarios. (paper)

  16. Modeling a hierarchical structure of factors influencing exploitation policy for water distribution systems using ISM approach

    Science.gov (United States)

    Jasiulewicz-Kaczmarek, Małgorzata; Wyczółkowski, Ryszard; Gładysiak, Violetta

    2017-12-01

    Water distribution systems are one of the basic elements of contemporary technical infrastructure of urban and rural areas. It is a complex engineering system composed of transmission networks and auxiliary equipment (e.g. controllers, checkouts etc.), scattered territorially over a large area. From the water distribution system operation point of view, its basic features are: functional variability, resulting from the need to adjust the system to temporary fluctuations in demand for water and territorial dispersion. The main research questions are: What external factors should be taken into account when developing an effective water distribution policy? Does the size and nature of the water distribution system significantly affect the exploitation policy implemented? These questions have shaped the objectives of research and the method of research implementation.

  17. Exploiting magnetic resonance angiography imaging improves model estimation of BOLD signal.

    Directory of Open Access Journals (Sweden)

    Zhenghui Hu

    Full Text Available The change of BOLD signal relies heavily upon the resting blood volume fraction ([Formula: see text]) associated with regional vasculature. However, existing hemodynamic data assimilation studies pretermit such concern. They simply assign the value in a physiologically plausible range to get over ill-conditioning of the assimilation problem and fail to explore actual [Formula: see text]. Such performance might lead to unreliable model estimation. In this work, we present the first exploration of the influence of [Formula: see text] on fMRI data assimilation, where actual [Formula: see text] within a given cortical area was calibrated by an MR angiography experiment and then was augmented into the assimilation scheme. We have investigated the impact of [Formula: see text] on single-region data assimilation and multi-region data assimilation (dynamic causal modeling, DCM) in a classical flashing checkerboard experiment. Results show that the employment of an assumed [Formula: see text] in fMRI data assimilation is only suitable for fMRI signal reconstruction and activation detection grounded on this signal, and not suitable for estimation of unobserved states and effective connectivity study. We thereby argue that introducing physically realistic [Formula: see text] in the assimilation process may provide more reliable estimation of physiological information, which contributes to a better understanding of the underlying hemodynamic processes. Such an effort is valuable and should be well appreciated.

  18. Development of Reservoir Characterization Techniques and Production Models for Exploiting Naturally Fractured Reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Wiggins, Michael L.; Brown, Raymon L.; Civan, Faruk; Hughes, Richard G.

    2001-08-15

    Research continues on characterizing and modeling the behavior of naturally fractured reservoir systems. Work has progressed on developing techniques for estimating fracture properties from seismic and well log data, developing naturally fractured wellbore models, and developing a model to characterize the transfer of fluid from the matrix to the fracture system for use in the naturally fractured reservoir simulator.

  19. Predictive performance models and multiple task performance

    Science.gov (United States)

    Wickens, Christopher D.; Larish, Inge; Contorer, Aaron

    1989-01-01

    Five models that predict how the performance of multiple tasks will interact in complex task scenarios are discussed. The models are distinguished in terms of the assumptions they make about the human operator's divided attention. The different assumptions about attention are then empirically validated in a multitask helicopter flight simulation. It is concluded from this simulation that the most important assumption relates to the coding of the demand level of the different component tasks.

  20. Exploiting proteomic data for genome annotation and gene model validation in Aspergillus niger.

    Science.gov (United States)

    Wright, James C; Sugden, Deana; Francis-McIntyre, Sue; Riba-Garcia, Isabel; Gaskell, Simon J; Grigoriev, Igor V; Baker, Scott E; Beynon, Robert J; Hubbard, Simon J

    2009-02-04

    Proteomic data is a potentially rich, but arguably unexploited, data source for genome annotation. Peptide identifications from tandem mass spectrometry provide prima facie evidence for gene predictions and can discriminate over a set of candidate gene models. Here we apply this to the recently sequenced Aspergillus niger fungal genome from the Joint Genome Institute (JGI) and to another predicted protein set from another A. niger sequence. Tandem mass spectra (MS/MS) were acquired from 1D gel electrophoresis bands and searched against all available gene models using Average Peptide Scoring (APS) and reverse database searching to produce confident identifications at an acceptable false discovery rate (FDR). 405 identified peptide sequences were mapped to 214 different A. niger genomic loci, to which 4093 predicted gene models clustered, 2872 of which contained the mapped peptides. Interestingly, for 13 (6%) of these loci, either there was no preferred predicted gene model or the genome annotators' chosen "best" model for that genomic locus was not the most parsimonious match to the identified peptides. The identified peptides also boosted confidence in predicted gene structures spanning 54 introns from different gene models. This work highlights the potential of integrating experimental proteomics data into genomic annotation pipelines, much as expressed sequence tag (EST) data has been. A comparison with the published genome of another strain of A. niger, sequenced by DSM, showed that a number of the gene models or proteins with proteomics evidence did not occur in both genomes, further highlighting the utility of the method.

  1. Exploiting the Expressiveness of Cyclo-Static Dataflow to Model Multimedia Implementations

    Directory of Open Access Journals (Sweden)

    Henk Corporaal

    2007-01-01

    The design of increasingly complex and concurrent multimedia systems requires a description at a higher abstraction level. Using an appropriate model of computation helps to reason about the system and enables design-time analysis methods. The nature of multimedia processing matches in many cases well with cyclo-static dataflow (CSDF), making it a suitable model. However, channels in an implementation often use, for cost reasons, a kind of shared buffer that cannot be directly described in CSDF. This paper shows how such implementation-specific aspects can be expressed in CSDF without the need for extensions. Consequently, the CSDF graph remains completely analyzable and allows reasoning about its temporal behavior. The obtained relation between model and implementation enables a buffer capacity analysis on the model while assuring the throughput of the final implementation. The capabilities of the approach are demonstrated by analyzing the temporal behavior of an MPEG-4 video encoder with a CSDF graph.
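
    To illustrate the kind of buffer capacity analysis that a CSDF description enables, the sketch below searches for the smallest shared-buffer capacity under which a producer and a consumer with cyclic token rates complete one graph iteration without deadlock. The rates, iteration length, and greedy self-timed schedule are illustrative assumptions, not taken from the paper.

```python
# CSDF sketch: a producer with cyclic production rates and a consumer with
# cyclic consumption rates share one bounded buffer. We search for the smallest
# buffer capacity that lets one full graph iteration complete without deadlock.
# Rates and iteration length are illustrative assumptions.

PROD_RATES = [1, 2]   # tokens written per producer firing, applied cyclically
CONS_RATES = [2, 1]   # tokens read per consumer firing, applied cyclically
ITERATION = 4         # firings of each actor per graph iteration

def completes(capacity):
    """Greedy self-timed schedule; True if both actors finish ITERATION firings."""
    tokens, p, c = 0, 0, 0
    progress = True
    while progress and (p < ITERATION or c < ITERATION):
        progress = False
        if p < ITERATION and tokens + PROD_RATES[p % 2] <= capacity:
            tokens += PROD_RATES[p % 2]; p += 1; progress = True
        if c < ITERATION and tokens >= CONS_RATES[c % 2]:
            tokens -= CONS_RATES[c % 2]; c += 1; progress = True
    return p == ITERATION and c == ITERATION

min_capacity = next(cap for cap in range(1, 10) if completes(cap))
print(min_capacity)
```

    With these rates, capacities of 1 and 2 deadlock before the producer's second firing, so the analysis reports the minimal feasible capacity.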

  2. A Multiple-Iterated Dual Control Model for Groundwater Exploitation and Water Level Based on the Optimal Allocation Model of Water Resources

    Directory of Open Access Journals (Sweden)

    Junqiu Liu

    2018-04-01

    In order to mitigate the environmental and ecological impacts resulting from groundwater overexploitation, we developed a multiple-iterated dual control model, consisting of four modules, for groundwater exploitation and water level. First, a water resources allocation model integrating a calculation module for allowable groundwater withdrawal was built to predict future groundwater recharge and discharge. Then, the results were input into a groundwater numerical model to simulate water levels. Groundwater exploitation was continuously optimized using the critical groundwater level as the feedback, and a multiple-iterated technique was applied to the feedback process. The proposed model was successfully applied to a typical region in Shenyang in northeast China. Results showed that the groundwater numerical model was verified in simulating water levels, with a mean absolute error of 0.44 m, an average relative error of 1.33%, and a root-mean-square error of 0.46 m. Groundwater exploitation was reduced from 290.33 million m3 to 116.76 million m3, and the average water level recovered from 34.27 m to 34.72 m in the planning year. Finally, we propose strategies for water resources management in which water levels should be controlled within the critical groundwater level. The developed model provides a promising approach for water resources allocation and sustainable groundwater management, especially for regions with overexploited groundwater.
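
    The core feedback of the dual control scheme can be sketched as a loop that trims exploitation until the simulated level recovers above the critical groundwater level. The linear aquifer response, critical level, and step size below are illustrative stand-ins for the groundwater numerical model, not values or behaviour from the study (only the initial 290.33 million m3 comes from the abstract).

```python
# Sketch of the multiple-iterated feedback between exploitation and water level.
# The linear "aquifer response" is a hypothetical stand-in for the groundwater
# numerical model; the coefficients are illustrative assumptions.

def simulated_water_level(exploitation_mm3):
    """Toy aquifer response: less pumping -> higher water level (m)."""
    base_level_m = 36.0       # assumed level with zero exploitation
    sensitivity = 0.006       # assumed m of drawdown per million m3 pumped
    return base_level_m - sensitivity * exploitation_mm3

def iterate_exploitation(initial_mm3, critical_level_m, reduction_step=10.0):
    """Reduce exploitation until the simulated level recovers above the
    critical groundwater level (the feedback used as the control target)."""
    exploitation = initial_mm3
    while simulated_water_level(exploitation) < critical_level_m and exploitation > 0:
        exploitation = max(0.0, exploitation - reduction_step)
    return exploitation, simulated_water_level(exploitation)

final_q, final_level = iterate_exploitation(290.33, critical_level_m=34.7)
print(final_q, final_level)
```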

  3. Exploiting proteomic data for genome annotation and gene model validation in Aspergillus niger

    Directory of Open Access Journals (Sweden)

    Grigoriev Igor V

    2009-02-01

    Full Text Available Abstract Background Proteomic data is a potentially rich, but arguably unexploited, data source for genome annotation. Peptide identifications from tandem mass spectrometry provide prima facie evidence for gene predictions and can discriminate over a set of candidate gene models. Here we apply this to the recently sequenced Aspergillus niger fungal genome from the Joint Genome Institutes (JGI and another predicted protein set from another A.niger sequence. Tandem mass spectra (MS/MS were acquired from 1d gel electrophoresis bands and searched against all available gene models using Average Peptide Scoring (APS and reverse database searching to produce confident identifications at an acceptable false discovery rate (FDR. Results 405 identified peptide sequences were mapped to 214 different A.niger genomic loci to which 4093 predicted gene models clustered, 2872 of which contained the mapped peptides. Interestingly, 13 (6% of these loci either had no preferred predicted gene model or the genome annotators' chosen "best" model for that genomic locus was not found to be the most parsimonious match to the identified peptides. The peptides identified also boosted confidence in predicted gene structures spanning 54 introns from different gene models. Conclusion This work highlights the potential of integrating experimental proteomics data into genomic annotation pipelines much as expressed sequence tag (EST data has been. A comparison of the published genome from another strain of A.niger sequenced by DSM showed that a number of the gene models or proteins with proteomics evidence did not occur in both genomes, further highlighting the utility of the method.

  4. Exploiting the functional and taxonomic structure of genomic data by probabilistic topic modeling.

    Science.gov (United States)

    Chen, Xin; Hu, Xiaohua; Lim, Tze Y; Shen, Xiajiong; Park, E K; Rosen, Gail L

    2012-01-01

    In this paper, we present a method that enables both the homology-based approach and the composition-based approach to further study the functional core (i.e., the microbial core and the gene core, respectively). In the proposed method, the identification of major functionality groups is achieved by generative topic modeling, which is able to extract useful information from unlabeled data. We first show that a generative topic model can be used to model the taxon abundance information obtained by the homology-based approach and to study the microbial core. The model considers each sample as a "document", which has a mixture of functional groups, while each functional group (also known as a "latent topic") is a weighted mixture of species. Therefore, estimating the generative topic model for taxon abundance data uncovers the distribution over latent functions (latent topics) in each sample. Second, we show that a generative topic model can also be used to study the genome-level composition of "N-mer" features (DNA subreads obtained by composition-based approaches). The model considers each genome as a mixture of latent genetic patterns (latent topics), while each genetic pattern is a weighted mixture of the "N-mer" features; thus the existence of core genomes can be indicated by a set of common N-mer features. After studying the mutual information between latent topics and gene regions, we provide an explanation of the functional roles of the uncovered latent genetic patterns. The experimental results demonstrate the effectiveness of the proposed method.
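
    The "samples as documents of species tokens" idea can be sketched with a minimal collapsed-Gibbs LDA sampler. The toy corpus, number of topics, and hyperparameters below are illustrative assumptions, not the paper's data or exact inference procedure.

```python
# Minimal collapsed-Gibbs LDA: samples are "documents" whose tokens are species
# observations; topics play the role of latent functional groups.
import random
random.seed(0)

docs = [["sp_a", "sp_a", "sp_b", "sp_b"],   # sample 1
        ["sp_c", "sp_c", "sp_d", "sp_d"],   # sample 2
        ["sp_a", "sp_b", "sp_c", "sp_d"]]   # mixed sample
vocab = sorted({w for d in docs for w in d})
V, K, alpha, beta = len(vocab), 2, 0.5, 0.1

# Count tables and random initial topic assignments
ndk = [[0] * K for _ in docs]               # doc-topic counts
nkw = [[0] * V for _ in range(K)]           # topic-word counts
nk = [0] * K
z = []
for d, doc in enumerate(docs):
    zs = []
    for w in doc:
        t = random.randrange(K)
        zs.append(t)
        ndk[d][t] += 1; nkw[t][vocab.index(w)] += 1; nk[t] += 1
    z.append(zs)

for _ in range(200):                        # Gibbs sweeps
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            t, v = z[d][i], vocab.index(w)
            ndk[d][t] -= 1; nkw[t][v] -= 1; nk[t] -= 1
            weights = [(ndk[d][k] + alpha) * (nkw[k][v] + beta) / (nk[k] + V * beta)
                       for k in range(K)]
            t = random.choices(range(K), weights)[0]
            z[d][i] = t
            ndk[d][t] += 1; nkw[t][v] += 1; nk[t] += 1

# Per-sample distribution over latent functional groups (topics)
theta = [[(ndk[d][k] + alpha) / (len(docs[d]) + K * alpha) for k in range(K)]
         for d in range(len(docs))]
print(theta)
```

    Each row of `theta` is one sample's estimated mixture over latent functional groups, directly mirroring the "document as a mixture of topics" reading in the abstract.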

  5. Exploiting the flexibility of a family of models for taxation and redistribution

    Science.gov (United States)

    Bertotti, M. L.; Modanese, G.

    2012-08-01

    We discuss a family of models expressed by nonlinear differential equation systems describing closed market societies in the presence of taxation and redistribution. We focus in particular on three example models obtained in correspondence to different parameter choices. We analyse the influence of the various choices on the long time shape of the income distribution. Several simulations suggest that behavioral heterogeneity among the individuals plays a definite role in the formation of fat tails of the asymptotic stationary distributions. This is in agreement with results found with different approaches and techniques. We also show that an excellent fit for the computational outputs of our models is provided by the κ-generalized distribution introduced by Kaniadakis in [Physica A 296, 405 (2001)].

  6. Bacteria, Yeast, Worms, and Flies: Exploiting Simple Model Organisms to Investigate Human Mitochondrial Diseases

    Science.gov (United States)

    Rea, Shane L.; Graham, Brett H.; Nakamaru-Ogiso, Eiko; Kar, Adwitiya; Falk, Marni J.

    2010-01-01

    The extensive conservation of mitochondrial structure, composition, and function across evolution offers a unique opportunity to expand our understanding of human mitochondrial biology and disease. By investigating the biology of much simpler model organisms, it is often possible to answer questions that are unreachable at the clinical level.…

  7. Technology learning in a small open economy-The systems, modelling and exploiting the learning effect

    International Nuclear Information System (INIS)

    Martinsen, Thomas

    2011-01-01

    This paper reviews the characteristics of technology learning and discusses its application in energy system modelling in a global-local perspective. Its influence on the national energy system, exemplified by Norway, is investigated using a global and a national Markal model. The dynamic nature of the learning system boundary and the coupling between the national energy system and the global development and manufacturing system are elaborated. Some criteria important for the modelling of spillover are suggested. In particular, to ensure balance in global energy demand and supply and to accurately reflect alternative global pathways, spillover for all technologies as well as energy carrier costs/prices should be estimated under the same global scenario. The technology composition, CO2 emissions and system cost in Norway up to 2050 exhibit sensitivity to spillover. Moreover, spillover may reduce both CO2 emissions and total system cost. National energy system analysis of a low carbon society should therefore consider technology development paths in global policy scenarios. Without the spillover from international deployment, a domestic technology relies only on endogenous national learning. However, with high but realistic learning rates, offshore floating wind may become cost-efficient even if initially deployed only in Norwegian niche markets. - Research highlights: → Spillover for all technologies should emanate from the same global scenario. → A global model is called for to estimate spillover. → Spillover may reduce CO2 emissions and the total system cost in a small open economy. → Offshore floating wind may become cost-efficient in a national niche market.
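
    Technology learning of the kind discussed here is usually expressed as an experience curve: unit cost falls by a fixed "learning rate" for each doubling of cumulative capacity, so spillover matters because it is global, not national, cumulative deployment that drives the doubling. The wind-power numbers below are illustrative assumptions, not values from the paper.

```python
# Experience-curve sketch of technology learning.
import math

def unit_cost(cumulative, c0, q0, learning_rate):
    """Unit cost after cumulative deployment grows from q0 to `cumulative`."""
    b = -math.log2(1.0 - learning_rate)   # progress exponent
    return c0 * (cumulative / q0) ** (-b)

c0, q0 = 4000.0, 1.0   # assumed cost per kW at 1 GW of installed capacity
# Each doubling of global (not just national) capacity cuts cost by 10%,
# so three doublings (1 GW -> 8 GW) give a factor of 0.9**3:
cost_at_8 = unit_cost(8.0, c0, q0, learning_rate=0.10)
print(cost_at_8)
```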

  8. CSDFa: a model for exploiting the trade-off between data and pipeline parallelism

    NARCIS (Netherlands)

    Koek, Peter; Geuns, S.J.; Hausmans, J.P.H.M.; Corporaal, Henk; Bekooij, Marco Jan Gerrit

    2016-01-01

    Real-time stream processing applications, such as SDR applications, are often executed concurrently on multiprocessor systems. A unified data flow model and analysis method have been proposed that can be used to simultaneously determine the amount of pipeline and coarse-grained data parallelism

  9. Principles of Sonar Performance Modeling

    NARCIS (Netherlands)

    Ainslie, M.A.

    2010-01-01

    Sonar performance modelling (SPM) is concerned with the prediction of quantitative measures of sonar performance, such as probability of detection. It is a multidisciplinary subject, requiring knowledge and expertise in the disparate fields of underwater acoustics, acoustical oceanography, sonar

  10. Exploitation of geoinformatics at modelling of functional effects of forest functions

    International Nuclear Information System (INIS)

    Sitko, R.

    2005-01-01

    From the point of view of spatial modelling, geoinformatics has wide application in the group of ecological functions of the forest, because these directly depend on the natural conditions of the site. As a case study, the modelling application was realised on the territory of TANAP (Tatras National Park), in the West Tatras, in the part Liptovske Kopy. The size of this territory is about 4,900 hectares, and the forests there serve first of all significant ecological functions, namely soil protection from erosion, water management, and anti-avalanche protection. Among environmental functions, they have a recreational role and a nature protection function. The anti-avalanche and anti-erosion functions of the forest are evaluated in this presentation.

  11. Subsidence Modeling of the Over-exploited Granular Aquifer System in Aguascalientes, Mexico

    Science.gov (United States)

    Solano Rojas, D. E.; Pacheco, J.; Wdowinski, S.; Minderhoud, P. S. J.; Cabral-Cano, E.; Albino, F.

    2017-12-01

    The valley of Aguascalientes in central Mexico experiences subsidence rates of up to 100 mm/yr due to overexploitation of its aquifer system, as revealed by satellite-based geodetic observations. The spatial pattern of the subsidence over the valley is inhomogeneous and affected by shallow faulting, and the understanding of the subsoil mechanics is still limited. A better understanding of the subsidence process in Aguascalientes is needed to provide insights into future subsidence in the valley. We present here a displacement-constrained finite-element subsidence model based on the USGS MODFLOW software. The construction of our model relies on three main inputs: (1) groundwater level time series obtained from extraction wells' hydrographs, (2) subsurface lithostratigraphy interpreted from well drilling logs, and (3) hydrogeological parameters obtained from field pumping tests. The groundwater level measurements were converted to pore pressure in our model's layers and used in Terzaghi's equation for calculating effective stress. We then used the effective stress, along with the displacement obtained from geodetic observations, to constrain and optimize five geomechanical parameters: compression ratio, reloading ratio, secondary compression index, overconsolidation ratio, and consolidation coefficient. Finally, we used the NEN-Bjerrum linear stress model formulation for settlements to determine elastic and visco-plastic strain, accounting for the aging effect of the aquifer system units. Preliminary results show a higher compaction response in the clay-saturated intervals (i.e. aquitards) of the aquifer system, as reflected in the spatial pattern of the surface deformation. The forecasted subsidence for our proposed scenarios shows much more pronounced deformation under higher groundwater extraction regimes.
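
    The step from head decline to skeleton loading follows Terzaghi's principle, effective stress = total stress minus pore pressure, so lowering the piezometric head transfers load onto the compressible aquitard skeleton. The numbers below are illustrative assumptions, not values from the Aguascalientes model.

```python
# Terzaghi's principle: sigma' = sigma - u. A head decline lowers pore
# pressure u and increases the effective stress carried by the sediment
# skeleton, driving compaction. All values are illustrative assumptions.

def effective_stress(total_stress_kpa, pore_pressure_kpa):
    """Terzaghi: effective stress (kPa) = total stress - pore pressure."""
    return total_stress_kpa - pore_pressure_kpa

gamma_w = 9.81                 # kN/m3, unit weight of water
total = 2000.0                 # assumed total overburden stress at depth, kPa
u_before = gamma_w * 50.0      # 50 m of water column above the point
u_after = gamma_w * 45.0       # head lowered by 5 m of drawdown
delta_sigma_eff = effective_stress(total, u_after) - effective_stress(total, u_before)
print(delta_sigma_eff)         # kPa of extra effective stress on the skeleton
```

    A 5 m drawdown thus adds roughly 49 kPa of effective stress, which the model converts to strain via the optimized compression parameters.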

  12. Exploiting proteomic data for genome annotation and gene model validation in Aspergillus niger

    OpenAIRE

    Wright, James C.; Sugden, Deana; Francis-McIntyre, Sue; Riba Garcia, Isabel; Gaskell, Simon J.; Grigoriev, Igor V.; Baker, Scott E.; Beynon, Robert J.; Hubbard, Simon J.

    2009-01-01

    Abstract Background Proteomic data is a potentially rich, but arguably unexploited, data source for genome annotation. Peptide identifications from tandem mass spectrometry provide prima facie evidence for gene predictions and can discriminate over a set of candidate gene models. Here we apply this to the recently sequenced Aspergillus niger fungal genome from the Joint Genome Institutes (JGI) and another predicted protein set from another A.niger sequence. Tandem mass spectra (MS/MS) were ac...

  13. Sustaining Economic Exploitation of Complex Ecosystems in Computational Models of Coupled Human-Natural Networks

    OpenAIRE

    Martinez, Neo D.; Tonin, Perrine; Bauer, Barbara; Rael, Rosalyn C.; Singh, Rahul; Yoon, Sangyuk; Yoon, Ilmi; Dunne, Jennifer A.

    2012-01-01

    Understanding ecological complexity has stymied scientists for decades. Recent elucidation of the famously coined "devious strategies for stability in enduring natural systems" has opened up a new field of computational analyses of complex ecological networks where the nonlinear dynamics of many interacting species can be more realistically modeled and understood. Here, we describe the first extension of this field to include coupled human-natural systems. This extension elucidates new strat...

  14. Animal Models in Forensic Science Research: Justified Use or Ethical Exploitation?

    Science.gov (United States)

    Mole, Calvin Gerald; Heyns, Marise

    2018-05-01

    A moral dilemma exists in biomedical research relating to the use of animal or human tissue when conducting scientific research. In human ethics, researchers need to justify why the use of humans is necessary should suitable models exist. Conversely, in animal ethics, a researcher must justify why research cannot be carried out on suitable alternatives. In the case of medical procedures or therapeutics testing, the use of animal models is often justified. However, in forensic research, the justification may be less evident, particularly when research involves the infliction of trauma on living animals. To determine how the forensic science community is dealing with this dilemma, a review of literature within major forensic science journals was conducted. The frequency and trends of the use of animals in forensic science research was investigated for the period 1 January 2012-31 December 2016. The review revealed 204 original articles utilizing 5050 animals in various forms as analogues for human tissue. The most common specimens utilized were various species of rats (35.3%), pigs (29.3%), mice (17.7%), and rabbits (8.2%) although different specimens were favored in different study themes. The majority of studies (58%) were conducted on post-mortem specimens. It is, however, evident that more needs to be done to uphold the basic ethical principles of reduction, refinement and replacement in the use of animals for research purposes.

  15. PBDE exposure from food in Ireland: optimising data exploitation in probabilistic exposure modelling.

    Science.gov (United States)

    Trudel, David; Tlustos, Christina; Von Goetz, Natalie; Scheringer, Martin; Hungerbühler, Konrad

    2011-01-01

    Polybrominated diphenyl ethers (PBDEs) are a class of brominated flame retardants added to plastics, polyurethane foam, electronics, textiles, and other products. These products release PBDEs into the indoor and outdoor environment, thus causing human exposure through food and dust. This study models PBDE dose distributions from ingestion of food for Irish adults on a congener basis, using two probabilistic methods and one semi-deterministic method. One of the probabilistic methods was newly developed and is based on summary statistics of food consumption combined with a model generating a realistic daily energy supply from food. Median (intermediate) doses of total PBDEs are in the range of 0.4-0.6 ng/kg(bw)/day for Irish adults. The 97.5th percentiles of total PBDE doses lie in the range of 1.7-2.2 ng/kg(bw)/day, which is comparable to doses derived for Belgian and Dutch adults. BDE-47 and BDE-99 were identified as the congeners contributing most to estimated intakes, accounting for more than half of the total doses. The most influential food groups contributing to this intake are lean fish and salmon, which together account for about 22-25% of the total doses.
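
    The probabilistic part of such an assessment is typically a Monte Carlo simulation: sample a daily intake per food group, multiply by a concentration, sum, and divide by body weight, then read percentiles off the simulated dose distribution. The food groups, lognormal intake distributions, and concentrations below are illustrative assumptions, not the Irish consumption or PBDE data.

```python
# Monte Carlo sketch of probabilistic dietary exposure modelling.
import math
import random

random.seed(1)

FOODS = {  # name: (typical daily intake in g/day, PBDE concentration in ng/g)
    "lean fish": (20.0, 0.50),
    "salmon":    (10.0, 0.80),
    "dairy":     (300.0, 0.02),
}

def simulate_dose(bw_kg=70.0):
    """One simulated person-day total dose in ng/kg(bw)/day."""
    total_ng = 0.0
    for intake_typical, conc_ng_per_g in FOODS.values():
        # Lognormal intake whose median is the typical value
        intake = random.lognormvariate(math.log(intake_typical), 0.5)
        total_ng += intake * conc_ng_per_g
    return total_ng / bw_kg

doses = sorted(simulate_dose() for _ in range(10_000))
median_dose = doses[len(doses) // 2]
p975_dose = doses[int(0.975 * len(doses))]
print(median_dose, p975_dose)
```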

  16. Profits and Exploitation: A Reappraisal

    OpenAIRE

    Yoshihara, Naoki; Veneziani, Roberto

    2011-01-01

    This paper provides a mathematical analysis of the Marxian theory of the exploitation of labour in general equilibrium models. The two main definitions of Marxian exploitation in the literature, proposed by Morishima (1974) and Roemer (1982), respectively, are analysed in the context of general convex economies. It is shown that, contrary to the received view, in general these definitions do not preserve the so-called Fundamental Marxian Theorem (FMT), which states that the exploitation of la...

  17. Exploiting Orbital Data and Observation Campaigns to Improve Space Debris Models

    Science.gov (United States)

    Braun, V.; Horstmann, A.; Reihs, B.; Lemmens, S.; Merz, K.; Krag, H.

    The European Space Agency (ESA) has been developing the Meteoroid and Space Debris Terrestrial Environment Reference (MASTER) software as the European reference model for space debris for more than 25 years. It is an event-based simulation of all known individual debris-generating events since 1957, including breakups, solid rocket motor firings and nuclear reactor core ejections. In 2014, the upgraded Debris Risk Assessment and Mitigation Analysis (DRAMA) tool suite was released. In the same year an ESA instruction made the standard ISO 24113:2011 on space debris mitigation requirements, adopted via the European Cooperation for Space Standardization (ECSS), applicable to all ESA missions. In order to verify the compliance of a space mission with those requirements, the DRAMA software is used to assess collision avoidance statistics, estimate the remaining orbital lifetime and evaluate the on-ground risk for controlled and uncontrolled reentries. In this paper, the approach to validate the MASTER and DRAMA tools is outlined. For objects larger than 1 cm, thus potentially being observable from ground, the MASTER model has been validated through dedicated observation campaigns. Recent campaign results shall be discussed. Moreover, catalogue data from the Space Surveillance Network (SSN) has been used to correlate the larger objects. In DRAMA, the assessment of collision avoidance statistics is based on orbit uncertainty information derived from Conjunction Data Messages (CDM) provided by the Joint Space Operations Center (JSpOC). They were collected for more than 20 ESA spacecraft in the recent years. The way this information is going to be used in a future DRAMA version is outlined and the comparison of estimated manoeuvre rates with real manoeuvres from the operations of ESA spacecraft is shown.

  18. Characterising performance of environmental models

    NARCIS (Netherlands)

    Bennett, N.D.; Croke, B.F.W.; Guariso, G.; Guillaume, J.H.A.; Hamilton, S.H.; Jakeman, A.J.; Marsili-Libelli, S.; Newham, L.T.H.; Norton, J.; Perrin, C.; Pierce, S.; Robson, B.; Seppelt, R.; Voinov, A.; Fath, B.D.; Andreassian, V.

    2013-01-01

    In order to use environmental models effectively for management and decision-making, it is vital to establish an appropriate level of confidence in their performance. This paper reviews techniques available across various fields for characterising the performance of environmental models with focus

  19. A mixed integer linear programming model for integrating thermodynamic cycles for waste heat exploitation in process sites

    International Nuclear Information System (INIS)

    Oluleye, Gbemi; Smith, Robin

    2016-01-01

    Highlights: • MILP model developed for the integration of waste heat recovery technologies in process sites. • Five thermodynamic cycles considered for the exploitation of industrial waste heat. • Temperature and quantity of multiple waste heat sources considered. • Interactions with the site utility system considered. • Industrial case study presented to illustrate the application of the proposed methodology. - Abstract: Thermodynamic cycles such as organic Rankine cycles, absorption chillers, absorption heat pumps, absorption heat transformers, and mechanical heat pumps are able to utilize wasted thermal energy in process sites for the generation of electrical power, chilling, and heat at a higher temperature. In this work, a novel systematic framework is presented for the optimal integration of these technologies in process sites. The framework is also used to assess the best design approach for integrating waste heat recovery technologies in process sites, i.e. stand-alone integration or systems-oriented integration. The developed framework allows for: (1) selection of one or more waste heat sources (taking into account their temperatures and thermal energy content), (2) selection of one or more technology options and working fluids, (3) selection of end-uses of the recovered energy, (4) exploitation of interactions with the existing site utility system, and (5) exploration of the potential for heat recovery via heat exchange. The methodology is applied to an industrial case study. Results indicate that a systems-oriented design approach reduces waste heat by 24%, fuel consumption by 54% and CO2 emissions by 53% with a 2-year payback, whereas a stand-alone design approach reduces waste heat by 12%, fuel consumption by 29% and CO2 emissions by 20.5% with a 4-year payback. Therefore, benefits from waste heat utilization increase when interactions between the existing site utility system and the waste heat recovery technologies are explored simultaneously. The case study also shows
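
    At its core, the selection part of such a MILP is a constrained subset choice: pick technologies that maximize benefit subject to resource limits. The toy below solves a tiny instance by brute-force enumeration rather than a MILP solver; the technology names match the cycles listed in the abstract, but all costs, savings, and the budget constraint are illustrative assumptions.

```python
# Toy version of the technology-selection decision inside the MILP: choose a
# subset of waste-heat recovery technologies maximizing annual fuel savings
# under a capital budget. Solved by exhaustive enumeration (fine at this size).
from itertools import chain, combinations

TECH = {  # name: (capital cost, annual fuel saving) -- hypothetical units
    "organic Rankine cycle":       (5.0, 2.0),
    "absorption chiller":          (3.0, 1.5),
    "mechanical heat pump":        (4.0, 1.8),
    "absorption heat transformer": (6.0, 2.1),
}
BUDGET = 9.0

def best_selection():
    names = list(TECH)
    candidates = chain.from_iterable(
        combinations(names, r) for r in range(len(names) + 1))
    feasible = [s for s in candidates
                if sum(TECH[t][0] for t in s) <= BUDGET]
    return max(feasible, key=lambda s: sum(TECH[t][1] for t in s))

print(sorted(best_selection()))
```

    A real MILP formulation replaces the enumeration with binary decision variables and linear constraints, which is what makes the approach scale to many heat sources, working fluids, and utility-system interactions at once.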

  20. Multiprocessor performance modeling with ADAS

    Science.gov (United States)

    Hayes, Paul J.; Andrews, Asa M.

    1989-01-01

    A graph managing strategy referred to as the Algorithm to Architecture Mapping Model (ATAMM) appears useful for the time-optimized execution of application algorithm graphs in embedded multiprocessors and for the performance prediction of graph designs. This paper reports the modeling of ATAMM in the Architecture Design and Assessment System (ADAS), to make an independent verification of ATAMM's performance prediction capability and to provide a user framework for the evaluation of arbitrary algorithm graphs. Following an overview of ATAMM and its major functional rules, descriptions are given of the ADAS model of ATAMM, of methods to enter an arbitrary graph into the model, and of techniques to analyze the simulation results. The performance of a 7-node graph example is evaluated using the ADAS model and verifies the ATAMM concept by substantiating previously published performance results.

  1. A Novel Approach to Model Earth Fissure Caused by Extensive Aquifer Exploitation and its Application to the Wuxi Case, China

    Science.gov (United States)

    Ye, Shujun; Franceschini, Andrea; Zhang, Yan; Janna, Carlo; Gong, Xulong; Yu, Jun; Teatini, Pietro

    2018-03-01

    Initially observed in the semiarid basins of the southwestern USA, earth fissures due to aquifer over-exploitation are presently threatening a large number of subsiding basins in various countries worldwide. Different mechanisms have been proposed to explain this process, such as differential compaction, horizontal movements, and fault reactivation. Numerical modeling and prediction of this major geohazard caused by the overuse of groundwater resources are challenging because of two main requirements: shifting from classical continuous to discontinuous geomechanics, and incorporating two-dimensional features (the earth fissures) into a large three-dimensional (3-D) modeling domain (the subsiding basin). In this work, we propose a novel modeling approach to simulate earth fissure generation and propagation in 3-D complex geological settings. A nested two-scale approach associated with an original nonlinear elastoplastic finite element/interface element simulator allows modeling the mechanics of earth discontinuities, in terms of both sliding and opening. The model is applied to a case study in Wuxi, China, where groundwater pumping between 1985 and 2004 caused land subsidence larger than 2 m. The model outcomes highlight that the presence of a shallow (˜80 m deep) bedrock ridge crossing the Yangtze River delta is the key factor triggering earth fissure development in this area. Bending of the alluvial deposits around the ridge tip and shear stress due to the uneven piezometric change and asymmetrical shape of the bedrock have caused the earth fissure to form at the land surface and propagate downward to a maximum depth of about 20-30 m. Maximum sliding and opening are computed in the range of 10-40 cm, in agreement with the order of magnitude estimated in the field.

  2. AAV exploits subcellular stress associated with inflammation, endoplasmic reticulum expansion, and misfolded proteins in models of cystic fibrosis.

    Directory of Open Access Journals (Sweden)

    Jarrod S Johnson

    2011-05-01

    Barriers to infection act at multiple levels to prevent viruses, bacteria, and parasites from commandeering host cells for their own purposes. An intriguing hypothesis is that if a cell experiences stress, such as that elicited by inflammation, endoplasmic reticulum (ER) expansion, or misfolded proteins, then subcellular barriers will be less effective at preventing viral infection. Here we have used models of cystic fibrosis (CF) to test whether subcellular stress increases susceptibility to adeno-associated virus (AAV) infection. In human airway epithelium cultured at an air/liquid interface, physiological conditions of subcellular stress and ER expansion were mimicked using supernatant from mucopurulent material derived from CF lungs. Using this inflammatory stimulus to recapitulate stress found in diseased airways, we demonstrated that AAV infection was significantly enhanced. Since over 90% of CF cases are associated with a misfolded variant of the Cystic Fibrosis Transmembrane Conductance Regulator (ΔF508-CFTR), we then explored whether the presence of misfolded proteins could independently increase susceptibility to AAV infection. In these models, AAV was an order of magnitude more efficient at transducing cells expressing ΔF508-CFTR than cells expressing wild-type CFTR. Rescue of misfolded ΔF508-CFTR under low-temperature conditions restored viral transduction efficiency to that demonstrated in controls, suggesting that effects related to protein misfolding were responsible for the increased susceptibility to infection. By testing other CFTR mutants, G551D, D572N, and 1410X, we have shown that this phenomenon is common to other misfolded proteins and not related to loss of CFTR activity. The presence of misfolded proteins did not affect cell surface attachment of virus or influence expression levels from promoter transgene cassettes in plasmid transfection studies, indicating that exploitation occurs at the level of virion trafficking or processing. Thus

  3. Surrogate Model Application to the Identification of Optimal Groundwater Exploitation Scheme Based on Regression Kriging Method—A Case Study of Western Jilin Province

    Directory of Open Access Journals (Sweden)

    Yongkai An

    2015-07-01

    Full Text Available This paper introduces a surrogate model to identify an optimal exploitation scheme; the western Jilin province was selected as the study area. A numerical simulation model of groundwater flow was established first, and four exploitation wells were set in Tongyu county and Qian Gorlos county so as to supply water to Daan county. Second, the Latin Hypercube Sampling (LHS) method was used to collect data in the feasible region for the input variables. A surrogate model of the numerical simulation model of groundwater flow was developed using the regression kriging method. An optimization model was established to search for an optimal groundwater exploitation scheme, using the minimum average drawdown of the groundwater table and the minimum cost of groundwater exploitation as multi-objective functions. Finally, the surrogate model was invoked by the optimization model in the process of solving the optimization problem. Results show that the relative error and root mean square error of the groundwater table drawdown between the simulation model and the surrogate model for 10 validation samples are both lower than 5%, a high approximation accuracy. A comparison between the surrogate-based simulation optimization model and the conventional simulation optimization model for solving the same optimization problem shows that the former needs only 5.5 hours while the latter needs 25 days. The above results indicate that the surrogate model developed in this study can not only considerably reduce the computational burden of the simulation optimization process but also maintain high computational accuracy. It thus provides an effective method for identifying an optimal groundwater exploitation scheme quickly and accurately.
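
    The Latin Hypercube Sampling step described above can be sketched in plain Python; the defining property is that each variable's range is cut into as many equal strata as there are samples, with exactly one sample per stratum, which keeps a small design space-filling. The two variables and bounds below are hypothetical stand-ins for the study's pumping-rate decision variables, not values from the paper.

    ```python
    import random

    def latin_hypercube(n_samples, bounds, seed=0):
        """Latin Hypercube Sampling: each variable's range is split into
        n_samples equal strata and exactly one sample falls in each stratum."""
        rng = random.Random(seed)
        dims = len(bounds)
        samples = [[0.0] * dims for _ in range(n_samples)]
        for d, (lo, hi) in enumerate(bounds):
            strata = list(range(n_samples))
            rng.shuffle(strata)  # decouple the strata of different variables
            for i, s in enumerate(strata):
                u = (s + rng.random()) / n_samples  # uniform point inside stratum s
                samples[i][d] = lo + u * (hi - lo)
        return samples

    # two hypothetical pumping rates (m^3/day), bounded by assumed well capacities
    design = latin_hypercube(10, [(0.0, 5000.0), (0.0, 3000.0)])
    ```

    Each design point would then be run through the groundwater simulation model, and the input/output pairs used to train the kriging surrogate.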

  4. Using pMOS Pass-Gates to Boost SRAM Performance by Exploiting Strain Effects in Sub-20-nm FinFET Technologies

    OpenAIRE

    Royer del Barrio, Pablo; López Vallejo, Marisa

    2014-01-01

    Strained fin is one of the techniques used to improve the devices as their size keeps reducing in new nanoscale nodes. In this paper, we use a predictive technology of 14 nm where pMOS mobility is significantly improved when those devices are built on top of long, uncut fins, while nMOS devices present the opposite behavior due to the combination of strains. We explore the possibility of boosting circuit performance in repetitive structures where long uncut fins can be exploited to increase f...

  5. 2D Modelling of the Gorkha earthquake through the joint exploitation of Sentinel 1-A DInSAR measurements and geological, structural and seismological information

    Science.gov (United States)

    De Novellis, Vincenzo; Castaldo, Raffaele; Solaro, Giuseppe; De Luca, Claudio; Pepe, Susi; Bonano, Manuela; Casu, Francesco; Zinno, Ivana; Manunta, Michele; Lanari, Riccardo; Tizzani, Pietro

    2016-04-01

    A Mw 7.8 earthquake struck Nepal on 25 April 2015 at 06:11:26 UTC, killing more than 9,000 people, injuring more than 23,000 and producing extensive damage. The main seismic event, known as the Gorkha earthquake, had its epicenter ~82 km NW of the city of Kathmandu and its hypocenter at a depth of approximately 15 km. After the main shock, about 100 aftershocks occurred during the following months, propagating toward the south-east; in particular, the most energetic shocks were the Mw 6.7 and Mw 7.3 events of 26 April and 12 May, respectively. In this study, we model the causative fault of the earthquake by jointly exploiting the surface deformation retrieved from DInSAR measurements collected by the Sentinel 1-A (S1A) space-borne sensor and the available geological, structural and seismological information. We first exploit an analytical solution, performing a back-analysis of the ground deformation detected by the first co-seismic S1A interferogram (computed from the 17/04/2015 and 29/04/2015 SAR acquisitions and encompassing the main earthquake and some aftershocks), to search for the location and geometry of the fault plane. Starting from these findings, and benefiting from the available geological, structural and seismological data, we carry out a Finite Element (FE)-based 2D modelling of the causative fault, in order to evaluate the impact of the geological structures activated during the seismic event on the distribution of the ground deformation field. The obtained results show that the causative fault has a rather complex compressive structure, dipping northward, formed by segments with different dip angles: 6° for the deep segment and 60° for the shallower one. Therefore, although the hypocenters of the main shock and most of the more energetic aftershocks are located along the deeper plane, corresponding to a segment of the Main Himalayan Thrust (MHT), the FE solution also indicates the contribution of the shallower

  6. Characterisation and exploitation of the ATLAS electromagnetic calorimeter performance: muon studies and use of the timing resolution

    Energy Technology Data Exchange (ETDEWEB)

    Camard, A

    2004-10-01

    The ATLAS detector at the LHC includes electromagnetic calorimeters. The purpose of this work is to study the calorimeter response to the muons contaminating the beam used to test the different modules of ATLAS. We have shown how data analysis from the test beam can be used to verify the required performance, and that the study of the detector response to muons provides a complementary diagnostic tool to electrons. We took part in the design of a test bench aimed at assessing the performance of the receiver circuit for timing and triggering signals. Within a fast simulation of ATLAS, we developed a tool that reconstructs, in a simple and fast manner, the location of the main event vertex from the measured arrival times of particles in ATLAS's calorimeters. This tool is likely to be particularly useful during the starting phase of the ATLAS experiment, because it can be brought into operation quickly and is less sensitive to background noise than traditional tools based on charged-particle track recognition inside the detector.

  7. Performance Evaluation of a SOA-based Rack-To-Rack Switch for Optical Interconnects Exploiting NRZ-DPSK

    DEFF Research Database (Denmark)

    Karinou, Fotini; Borkowski, Robert; Prince, Kamau

    2012-01-01

    We experimentally study the transmission performance of 10-Gb/s NRZ-DPSK through concatenated AWG MUX/DMUXs and SOAs employed in an optimized 64×64 optical supercomputer interconnect architecture. NRZ-DPSK offers 9-dB higher dynamic range compared to conventional IM/DD....

  8. Advanced Three-Dimensional Finite Element Modeling of a Slow Landslide through the Exploitation of DInSAR Measurements and in Situ Surveys

    Directory of Open Access Journals (Sweden)

    Vincenzo De Novellis

    2016-08-01

    Full Text Available In this paper, we propose an advanced methodology to perform three-dimensional (3D) Finite Element (FE) modeling to investigate the kinematical evolution of a slow landslide phenomenon. Our approach benefits from the effective integration of the available geological, geotechnical and satellite datasets to perform an accurate simulation of the landslide process. More specifically, we fully exploit the capability of the advanced Differential Synthetic Aperture Radar Interferometry (DInSAR) technique referred to as the Small BAseline Subset (SBAS) approach to provide spatially dense surface displacement information. Subsequently, we analyze the physical behavior characterizing the observed landslide phenomenon by means of an inverse analysis based on an optimization procedure. We focus on the Ivancich landslide phenomenon, which affects a residential area outside the historical center of the town of Assisi (Central Italy). Thanks to the large amount of available information, we have selected this area as a representative case study highlighting the capability of advanced 3D FE modeling to perform effective risk analyses of slow landslide processes and accurate urban development planning. In particular, the FE modeling is constrained by using the data from 7 litho-stratigraphic cross-sections and 62 stratigraphic boreholes, and the optimization procedure is carried out using the SBAS-DInSAR results retrieved by processing 39 SAR images collected by the Cosmo-SkyMed (CSK) constellation in the 2009–2012 time span. The achieved results allow us to explore the spatial and temporal evolution of the slow-moving phenomenon and, via comparison with the geomorphological data, to derive a synoptic view of the kinematical activity of the urban area affected by the Ivancich landslide.

  9. Optimizing cooling tower performance refrigeration systems, chemical plants, and power plants all have a resource quietly awaiting exploitation - cold water

    International Nuclear Information System (INIS)

    Burger, R.

    1993-01-01

    Cooling towers are hidden bonanzas for energy conservation and dollar savings when properly engineered and maintained. In many cases, the limiting factor of production is the quality and quantity of cold water coming off the cooling tower. The savings accrued in energy conservation and additional product manufactured can be an important factor on the company's profit and loss sheet (7). Energy management analysis is a very important consideration in today's climate of escalating energy costs. It is advisable to consider a thorough engineering inspection and evaluation of the entire plant, leaving no stone unturned in the search to reduce energy consumption (8). The cooling tower plays the major role in waste heat removal and should be given a thorough engineering inspection and evaluation by a specialist in this field. This can be performed at nominal cost, with a formal report submitted containing recommendations, budget costs, and an evaluation of the thermal, structural, and mechanical condition of the equipment. This feasibility study will assist in determining the extent of efficiency improvement available, with costs and projected savings. It can be stated that practically all cooling towers can be upgraded to perform at higher levels of efficiency, which can provide a rapid, cost-effective payback. While not all cooling tower systems will provide as dramatic a payback as these case histories, the return on the investment in upgrading a cooling tower can be a surprising factor of operation and should not be neglected.

  10. Performability Modelling Tools, Evaluation Techniques and Applications

    NARCIS (Netherlands)

    Haverkort, Boudewijn R.H.M.

    1990-01-01

    This thesis deals with three aspects of quantitative evaluation of fault-tolerant and distributed computer and communication systems: performability evaluation techniques, performability modelling tools, and performability modelling applications. Performability modelling is a relatively new

  11. Generation of Digital Surface Models from satellite photogrammetry: the DSM-OPT service of the ESA Geohazards Exploitation Platform (GEP)

    Science.gov (United States)

    Stumpf, André; Michéa, David; Malet, Jean-Philippe

    2017-04-01

    The continuously increasing fleet of agile stereo-capable very-high-resolution (VHR) optical satellites has facilitated the acquisition of multi-view images of the earth surface. Theoretical revisit times have been reduced to less than one day, and the highest commercially available spatial resolution now amounts to 30 cm/pixel. Digital Surface Models (DSM) and point clouds computed from such satellite stereo-acquisitions can provide valuable input for studies in geomorphology, tectonics, glaciology, hydrology and urban remote sensing. The photogrammetric processing, however, still requires significant expertise, computational resources and costly commercial software. To enable a large Earth Science community (researchers and end-users) to process VHR multi-view images easily and rapidly, this work targets the implementation of a fully automatic satellite-photogrammetry pipeline (i.e. DSM-OPT) on the ESA Geohazards Exploitation Platform (GEP). The implemented pipeline is based on the open-source photogrammetry library MicMac [1] and is designed for distributed processing on a cloud-based infrastructure. The service can be employed in pre-defined processing modes (i.e. urban, plain, hilly, and mountainous environments) or in an advanced processing mode (in which expert users can adapt the processing parameters to their specific applications). Four representative use cases are presented to illustrate the accuracy of the resulting surface models and ortho-images as well as the overall processing time. These use cases consist of the construction of surface models from series of Pléiades images for four applications: urban analysis (Strasbourg, France), landslide detection in mountainous environments (South French Alps), co-seismic deformation in mountain environments (Central Italy earthquake sequence of 2016) and fault recognition for paleo-tectonic analysis (North-East India). Comparisons of the satellite-derived topography to airborne

  12. Data management system performance modeling

    Science.gov (United States)

    Kiser, Larry M.

    1993-01-01

    This paper discusses analytical techniques that have been used to gain a better understanding of the Space Station Freedom's (SSF's) Data Management System (DMS). The DMS is a complex, distributed, real-time computer system that has been redesigned numerous times; the implications of these redesigns have not been fully analyzed. This paper discusses the advantages and disadvantages of static analytical techniques such as Rate Monotonic Analysis (RMA) and also provides a rationale for dynamic modeling. Factors such as system architecture, processor utilization, bus architecture, queuing, etc., are well suited for analysis with a dynamic model. The significance of performance measures for a real-time system is discussed.
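
    The Rate Monotonic Analysis mentioned above rests on the classic Liu & Layland utilization bound, a sufficient (but not necessary) schedulability test. A minimal sketch, with hypothetical task parameters rather than any from the DMS:

    ```python
    def rm_utilization_bound(n):
        """Liu & Layland bound for Rate Monotonic scheduling of n periodic
        tasks: the set is schedulable if total utilization <= n*(2^(1/n)-1)."""
        return n * (2 ** (1.0 / n) - 1)

    def rm_schedulable(tasks):
        """tasks: list of (execution_time, period) pairs in the same time unit.
        Sufficient (not necessary) static schedulability test."""
        utilization = sum(c / t for c, t in tasks)
        return utilization <= rm_utilization_bound(len(tasks))

    # three hypothetical periodic tasks (C, T) in milliseconds
    ok = rm_schedulable([(10, 100), (20, 150), (30, 350)])  # U ≈ 0.32 < 0.78
    ```

    A task set that fails this static test is not necessarily unschedulable, which is one reason the paper argues for complementing RMA with dynamic modeling.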

  13. The Ethics of Exploitation

    Directory of Open Access Journals (Sweden)

    Paul McLaughlin

    2008-11-01

    Full Text Available Philosophical inquiry into exploitation has two major deficiencies to date: it assumes that exploitation is wrong by definition; and it pays too much attention to the Marxian account of exploitation. Two senses of exploitation should be distinguished: the ‘moral’ or pejorative sense and the ‘non-moral’ or ‘non-prejudicial’ sense. By demonstrating the conceptual inadequacy of exploitation as defined in the first sense, and by defining exploitation adequately in the latter sense, we seek to demonstrate the moral complexity of exploitation. We contend, moreover, that moral evaluation of exploitation is only possible once we abandon a strictly Marxian framework and attempt, in the long run, to develop an integral ethic along Godwinian lines.

  14. Impact of uranium exploitation by Cogema-Areva subsidiaries in Niger. Assessment of analyses performed by the CRIIRAD laboratory in 2004 and at the beginning of 2005

    International Nuclear Information System (INIS)

    2005-01-01

    After a description of the context of uranium exploitation (involved companies, production) in Arlit, Niger, by Cogema-Areva subsidiary companies, this report describes the context of the controls performed by the CRIIRAD laboratory. It then reports and comments on the contamination of underground so-called drinkable waters (contamination risks, Cogema statements, detection of a rather high concentration of alpha emitters, measurements performed in 2004 and 2005). The authors note that the water contamination is known to Cogema. The report then analyzes the issue of contaminated scrap metal dispersal, and comments on and criticizes the attitude of Cogema with respect to the associated risks. It comments on the uranate transport accident which occurred in February 2004, the subsequent contamination, the actions performed by Cogema, the associated health risks, and the statements made by Areva and Cogema. It also comments on and analyzes the risks related to the inhalation of radon and radioactive dusts around different sites and arising from some technical practices, and Cogema's statements on this issue. In conclusion, the authors outline the need for reinforced controls and for an epidemiological study, and describe how Areva propagates wrong ideas

  15. Modeling the differential incidence of "child abuse, neglect and exploitation" in poor households in South Africa: Focus on child trafficking

    CSIR Research Space (South Africa)

    Mbecke, P

    2010-06-01

    Full Text Available above hinders their care, protection and well-being. As a consequence of remarkable abuse, neglect and exploitation of children in South Africa, in 2005 Child Welfare South Africa (CWSA), an umbrella body representing 169 children's organizations... (affiliates, branches and developing organizations) provided services to 108 379 children considered and defined by the Child Care Act as 'children in need of care'. Out of this number there were 5 000 physically abused children, 6 637 sexually abused...

  16. On a model of mixtures with internal variables: Extended Liu procedure for the exploitation of the entropy principle

    Directory of Open Access Journals (Sweden)

    Francesco Oliveri

    2016-01-01

    Full Text Available The exploitation of the second law of thermodynamics for a mixture of two fluids with a scalar internal variable and a first-order nonlocal state space is achieved by using the extended Liu approach. This method requires inserting either the field equations or their gradient extensions as constraints in the entropy inequality. Consequently, the thermodynamic restrictions imposed by the entropy principle are derived without introducing extra terms in either the energy balance equation or the entropy inequality.

  17. Exploitation and exploration dynamics in recessionary times

    OpenAIRE

    Walrave, B.

    2012-01-01

    Firm performance largely depends on the ability to adapt to, and exploit, changes in the business environment. That is, firms should maintain ecological fitness by reconfiguring their resource base to cope with emerging threats and explore new opportunities, while at the same time exploiting existing resources. As such, firms possessing the ability to simultaneously perform exploitative and explorative initiatives are more resilient. In this respect, the performance implications of balancing ...

  18. Off gas condenser performance modelling

    International Nuclear Information System (INIS)

    Cains, P.W.; Hills, K.M.; Waring, S.; Pratchett, A.G.

    1989-12-01

    A suite of three programmes has been developed to model the ruthenium decontamination performance of a vitrification plant off-gas condenser. The stages of the model are: condensation of water vapour, NOx absorption in the condensate, and RuO4 absorption in the condensate. Juxtaposition of these stages gives a package that may be run on an IBM-compatible desktop PC. Experimental work indicates that the criterion [HNO2] > 10 [RuO4] used to determine RuO4 destruction in solution is probably realistic under condenser conditions. Vapour pressures of RuO4 over aqueous solutions at 70-90 °C are slightly lower than the values given by extrapolating the ln Kp vs. T^-1 relation derived from lower-temperature data. (author)

  19. Data harmonization and model performance

    Science.gov (United States)

    The Joint Committee on Urban Storm Drainage of the International Association for Hydraulic Research (IAHR) and International Association on Water Pollution Research and Control (IAWPRC) was formed in 1982. The current committee members are (no more than two from a country): B. C. Yen, Chairman (USA); P. Harremoes, Vice Chairman (Denmark); R. K. Price, Secretary (UK); P. J. Colyer (UK), M. Desbordes (France), W. C. Huber (USA), K. Krauth (FRG), A. Sjoberg (Sweden), and T. Sueishi (Japan).The IAHR/IAWPRC Joint Committee is forming a Task Group on Data Harmonization and Model Performance. One objective is to promote international urban drainage data harmonization for easy data and information exchange. Another objective is to publicize available models and data internationally. Comments and suggestions concerning the formation and charge of the Task Group are welcome and should be sent to: B. C. Yen, Dept. of Civil Engineering, Univ. of Illinois, 208 N. Romine St., Urbana, IL 61801.

  20. Behavior model for performance assessment

    International Nuclear Information System (INIS)

    Brown-VanHoozer, S. A.

    1999-01-01

    Every individual channels information differently, based on the sensory modality or representational system (visual, auditory or kinesthetic) we tend to favor most (our primary representational system (PRS)). Some of us access and store our information primarily visually first, some auditorily, and others kinesthetically (through feel and touch); this in turn establishes our information processing patterns and strategies and our external-to-internal (and subsequently vice versa) experiential language representation. Because of the different ways we channel our information, each of us will respond differently to a task: the way we gather and process the external information (input), our response time (process), and the outcome (behavior). Traditional human models of decision making and response time focus on perception, cognitive and motor systems stimulated and influenced by the three sensory modalities: visual, auditory and kinesthetic. For us, these are the building blocks to knowing how someone is thinking. Being aware of what is taking place, and of how to ask questions, is essential in assessing performance toward reducing human errors. Existing models give predictions based on time values or response times for a particular event, and these may be summed and averaged for a generalization of behavior(s). However, without establishing a basic understanding of how the behavior was predicated through a decision-making strategy process, predictive models are overall inefficient in their analysis of the means by which behavior was generated. What is seen is the end result

  1. Behavior model for performance assessment.

    Energy Technology Data Exchange (ETDEWEB)

    Brown-VanHoozer, S. A.

    1999-07-23

    Every individual channels information differently, based on the sensory modality or representational system (visual, auditory or kinesthetic) we tend to favor most (our primary representational system (PRS)). Some of us access and store our information primarily visually first, some auditorily, and others kinesthetically (through feel and touch); this in turn establishes our information processing patterns and strategies and our external-to-internal (and subsequently vice versa) experiential language representation. Because of the different ways we channel our information, each of us will respond differently to a task: the way we gather and process the external information (input), our response time (process), and the outcome (behavior). Traditional human models of decision making and response time focus on perception, cognitive and motor systems stimulated and influenced by the three sensory modalities: visual, auditory and kinesthetic. For us, these are the building blocks to knowing how someone is thinking. Being aware of what is taking place, and of how to ask questions, is essential in assessing performance toward reducing human errors. Existing models give predictions based on time values or response times for a particular event, and these may be summed and averaged for a generalization of behavior(s). However, without establishing a basic understanding of how the behavior was predicated through a decision-making strategy process, predictive models are overall inefficient in their analysis of the means by which behavior was generated. What is seen is the end result.

  2. Exploitation by Economic Necessity

    Directory of Open Access Journals (Sweden)

    Kristian F. Braekkan

    2015-10-01

    Full Text Available This study develops and tests a model that proposes that economic necessity moderates the relationship between psychological contract violations (PCVs) and organizational commitment and trust in the employing organization among non-unionized manufacturing workers (N = 226). Moderated regression analyses revealed a significant interaction between PCV and economic necessity in predicting both outcomes. Specifically, the findings indicated that individuals experiencing high PCV and high economic necessity did not decrease their organizational commitment like their counterparts who endorsed lower economic necessity. They did, however, experience significantly decreased trust in their employer. The findings suggest that individuals who are forced to sell their labor power and obtain what they need through the market are more likely to continue to be exploited by their employer, as they have no option other than to continue the relationship. The importance of the findings is discussed, and recommendations for future research are provided.
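
    Moderated regression of this kind fits an interaction term alongside the main effects, so the moderator shows up as a nonzero interaction coefficient. A minimal self-contained sketch with synthetic data; the coefficients, variable ranges, and noise level are illustrative, not the study's:

    ```python
    import random

    def ols(X, y):
        """Ordinary least squares via the normal equations (X'X)b = X'y,
        solved by Gaussian elimination with partial pivoting."""
        n, k = len(X), len(X[0])
        A = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)] for p in range(k)]
        b = [sum(X[i][p] * y[i] for i in range(n)) for p in range(k)]
        for p in range(k):
            piv = max(range(p, k), key=lambda r: abs(A[r][p]))
            A[p], A[piv] = A[piv], A[p]
            b[p], b[piv] = b[piv], b[p]
            for r in range(p + 1, k):
                f = A[r][p] / A[p][p]
                A[r] = [A[r][c] - f * A[p][c] for c in range(k)]
                b[r] -= f * b[p]
        coef = [0.0] * k
        for p in range(k - 1, -1, -1):
            coef[p] = (b[p] - sum(A[p][c] * coef[c] for c in range(p + 1, k))) / A[p][p]
        return coef

    # synthetic data: commitment = 5 - 0.8*PCV + 0.1*EN + 0.7*(PCV*EN) + noise;
    # a positive interaction term means high economic necessity dampens the
    # negative effect of contract violations, mirroring the study's argument
    rng = random.Random(1)
    rows, y = [], []
    for _ in range(226):
        pcv, en = rng.random(), rng.random()
        rows.append([1.0, pcv, en, pcv * en])  # intercept, PCV, EN, interaction
        y.append(5 - 0.8 * pcv + 0.1 * en + 0.7 * pcv * en + rng.gauss(0, 0.01))
    coef = ols(rows, y)
    ```

    In practice one would center PCV and EN before forming the product term to reduce collinearity with the main effects; the sketch omits that step for brevity.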

  3. Surface damage mitigation of TC4 alloy via micro arc oxidation for oil and gas exploitation application: Characterizations of microstructure and evaluations on surface performance

    Science.gov (United States)

    Xie, Ruizhen; Lin, Naiming; Zhou, Peng; Zou, Jiaojuan; Han, Pengju; Wang, Zhihua; Tang, Bin

    2018-04-01

    Because of their excellent corrosion resistance, high specific strength and high tensile strength, TC4 titanium alloys used as petroleum tubes have received wide interest from materials engineers after many technical investigations and estimations. However, because of its low surface hardness, high coefficient of friction and poor wear resistance, the TC4 alloy is seldom adopted in tribology-related engineering components. In this work, micro-arc oxidation (MAO) coatings were fabricated on TC4 alloys in NaAlO2 and (NaPO3)6 electrolytes with and without ultrasonic assistance. The microstructural characteristics of the produced MAO coatings were investigated. Comparative estimations of the electrochemical corrosion in CO2-saturated simulated oilfield brine and of the tribological behaviour of the MAO coatings and TC4 alloys were conducted. The results showed that the introduction of ultrasound increased the thickness of the MAO coatings: the thickness increased by 34% and 15% in the NaAlO2 and (NaPO3)6 electrolytes, respectively. There was no significant discrepancy in phase constitution when the MAO processes were conducted with and without ultrasonic assistance. Both MAO coatings, obtained with and without ultrasonic assistance, were found to improve the corrosion and wear resistance of the TC4 alloy. MAO treatment thus makes it possible to provide the working surface of a TC4 alloy with the enhanced surface performance needed for oil and gas exploitation applications.

  4. Study on the stress changes due to the regional groundwater exploitation based on a 3-D fully coupled poroelastic model: An example of the North China Plain

    Science.gov (United States)

    Cheng, H.; Zhang, H.; Pang, Y. J.; Shi, Y.

    2017-12-01

    With rapid urban development, over-exploitation of groundwater resources has become more and more intense, leading not only to widespread groundwater depression cones but also to a series of harsh environmental and geological hazards. The most intuitive of these is ground subsidence in loose sediments. However, another direct consequence triggered by groundwater depletion is substantial crustal deformation and potential modulation of the crustal stress underneath the groundwater over-pumping zones. In our previous 3-D viscoelastic finite element model, we found that continuous over-exploitation of groundwater resources in the North China Plain during the past 60 years gives rise to crustal-scale uplift reaching 4.9 cm, with the Coulomb failure stress decreasing by up to 12 kPa, which may inhibit the nucleation of possible big earthquake events. Furthermore, according to the effective pressure principle and lab experiments, the pore pressure may also have changed due to the reduced water level. In order to quantitatively analyze the stress changes due to regional groundwater exploitation in the North China Plain, a three-dimensional fully coupled poroelastic finite element model is developed in this study. High-resolution topography, groundwater level fluctuations, fault parameters, etc., are taken into consideration. Further, the changes of Coulomb failure stress corresponding to the elastic stress and pore pressure changes induced by fluid diffusion are calculated. Meanwhile, the elastic strain energy accumulation in the region due to regional groundwater exploitation is obtained. Finally, we analyze the seismic risk of major faults within the North China Plain to further discuss regional seismic activity.
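
    The Coulomb failure stress change referred to above combines the shear and effective-normal stress changes on a receiver fault, commonly written ΔCFS = Δτ + μ(Δσn + Δp). A minimal sketch; the stress values are hypothetical numbers chosen only to illustrate the sign conventions, not results from the study:

    ```python
    def delta_cfs(d_tau, d_sigma_n, d_pore, mu=0.4):
        """Coulomb failure stress change on a receiver fault (all values in Pa).
        d_tau:     shear stress change in the slip direction (positive promotes slip)
        d_sigma_n: normal stress change (positive = unclamping in this convention)
        d_pore:    pore pressure change (a pressure drop strengthens the fault)
        mu:        friction coefficient (0.4 is a commonly assumed value)
        """
        return d_tau + mu * (d_sigma_n + d_pore)

    # hypothetical values of the order discussed above: shear unloading plus
    # slight clamping and a pore-pressure drop from depletion together lower
    # CFS, i.e. they inhibit failure on the receiver fault
    dcfs = delta_cfs(d_tau=-10e3, d_sigma_n=-2e3, d_pore=-3e3)  # -12 kPa
    ```

    A negative ΔCFS moves the fault away from failure, which is the sense in which groundwater depletion may inhibit earthquake nucleation here.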

  5. Model Performance Evaluation and Scenario Analysis (MPESA)

    Science.gov (United States)

    Model Performance Evaluation and Scenario Analysis (MPESA) assesses the performance with which models predict time series data. The tool was developed for the Hydrological Simulation Program-Fortran (HSPF) and the Stormwater Management Model (SWMM)

  6. Exploit Kit traffic analysis

    OpenAIRE

    Καπίρης, Σταμάτης; Kapiris, Stamatis

    2017-01-01

    Exploit kits have become one of the most widespread and destructive threats that Internet users face on a daily basis. Since MPack, the first actor to be categorized as an exploit kit, appeared in 2006, we have seen a new era of exploit kit variants compromising popular websites, infecting hosts and delivering destructive malware, evolving exponentially to date. With the growing threat landscape, from large enterprises to domestic networks, have starte...

  7. Exploitability Assessment with TEASER

    Science.gov (United States)

    2017-05-01

    for architecturally neutral taint analysis on top of LLVM and QEMU. POC (Proof of Concept): demonstration of an exploit on a program. RCE (Remote Code...) bug with a Proof of Concept (POC), or input to a program demonstrating the ability to use a bug to exploit the application, to demonstrate the... often leads to either computationally difficult constraint-solving problems or taint explosion. Given the computational difficulty of exploit

  8. Anthropology of sexual exploitation

    Directory of Open Access Journals (Sweden)

    Lalić Velibor

    2009-01-01

    Full Text Available In this paper, the authors observe sexual exploitation from an anthropological perspective. They analyze the rational, ethical, emotional and mythological dimensions of human sexuality. After setting the phenomenon in its social and historical context, sexual exploitation is then examined closely in the contemporary age. Drawing on the thoughts of relevant thinkers, they conclude that the elimination of sexual exploitation is not merely a legal issue, but a political and economic one as well. Namely, legal norms alone are not sufficient to overcome sexual exploitation; political and economic relationships in contemporary societies must be established on the basis of sincerely equal opportunities.

  9. Exploiting the Vulnerability of Flow Table Overflow in Software-Defined Network: Attack Model, Evaluation, and Defense

    Directory of Open Access Journals (Sweden)

    Yadong Zhou

    2018-01-01

Full Text Available As the most competitive solution for next-generation networks, SDN and its dominant implementation OpenFlow are attracting more and more interest. Besides convenience and flexibility, however, SDN/OpenFlow also introduces new kinds of limitations and security issues. Of these limitations, the most obvious and perhaps the most neglected one is the flow table capacity of SDN/OpenFlow switches. In this paper, we propose a novel inference attack targeting SDN/OpenFlow networks, motivated by the limited flow table capacities of SDN/OpenFlow switches and the measurable decrease in network performance that results from frequent interactions between the data and control planes when the flow table is full. To the best of our knowledge, this is the first inference attack model of this kind proposed for SDN/OpenFlow. We implemented an inference attack framework according to our model and examined its efficiency and accuracy. The evaluation results demonstrate that our framework can infer the network parameters (flow table capacity and usage) with an accuracy of 80% or higher. We also propose two possible defense strategies for the discovered vulnerability, including a routing aggregation algorithm and a multilevel flow table architecture. These findings give us a deeper understanding of SDN/OpenFlow limitations and serve as guidelines for future improvements of SDN/OpenFlow.
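The timing side channel described in this record can be illustrated with a toy simulation: once the flow table is full, every new flow needs a controller round-trip, so per-flow setup latency jumps, and counting flows up to the jump reveals the capacity. The latency values, threshold and capacities below are invented for illustration and are not taken from the paper.

```python
def probe_latency(n_installed, capacity, fast_ms=0.5, slow_ms=10.0):
    """Simulated setup latency for one new flow: once the table is full,
    the switch must consult the controller on every new flow (slow path)."""
    return fast_ms if n_installed < capacity else slow_ms

def infer_capacity(capacity, threshold_ms=5.0, max_flows=5000):
    """Send distinct probe flows until the latency jump reveals how many
    rules the table can hold."""
    for n in range(max_flows):
        if probe_latency(n, capacity) > threshold_ms:
            return n
    return None  # no jump observed within the probing budget

estimated = infer_capacity(1500)
```

In the real attack the latencies are noisy measurements rather than two clean constants, which is why the paper reports about 80% inference accuracy rather than the exact recovery this toy version achieves.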

  10. Exploitation and disadvantage

    NARCIS (Netherlands)

    Ferguson, B.

    2016-01-01

    According to some accounts of exploitation, most notably Ruth Sample's (2003) degradation-based account and Robert Goodin's (1987) vulnerability-based account, exploitation occurs when an advantaged party fails to constrain their advantage in light of another's disadvantage, regardless of the cause

  11. EXPLOITATION OF GRANITE BOULDER

    Directory of Open Access Journals (Sweden)

    Ivan Cotman

    1994-12-01

Full Text Available The processes of formation, petrography, features, properties and exploitation of granite boulders are described. Directional drilling and black-powder blasting is the successful method for exploiting granite boulders (boulder technology). The paper is published in Croatian.

  12. Calibration of PMIS pavement performance prediction models.

    Science.gov (United States)

    2012-02-01

Improve the accuracy of TxDOT's existing pavement performance prediction models through calibrating these models using actual field data obtained from the Pavement Management Information System (PMIS). Ensure logical performance superiority patte...

  13. 3D Massive MIMO Systems: Channel Modeling and Performance Analysis

    KAUST Repository

    Nadeem, Qurrat-Ul-Ain

    2015-01-01

    Multiple-input-multiple-output (MIMO) systems of current LTE releases are capable of adaptation in the azimuth only. More recently, the trend is to enhance the system performance by exploiting the channel's degrees of freedom in the elevation

  14. Performance Modelling of Steam Turbine Performance using Fuzzy ...

    African Journals Online (AJOL)

Performance Modelling of Steam Turbine Performance using Fuzzy Logic ... Journal of Applied Sciences and Environmental Management ... A Fuzzy Inference System for predicting the performance of a steam turbine

  15. Poverty-Exploitation-Alienation.

    Science.gov (United States)

    Bronfenbrenner, Martin

    1980-01-01

    Illustrates how knowledge derived from the discipline of economics can be used to help shed light on social problems such as poverty, exploitation, and alienation, and can help decision makers form policy to minimize these and similar problems. (DB)

  16. Exploiting VM/XA

    International Nuclear Information System (INIS)

    Boeheim, C.

    1990-03-01

    The Stanford Linear Accelerator Center has recently completed a conversion to IBM's VM/XA SP Release 2 operating system. The primary physics application had been constrained by the previous 16 megabyte memory limit. Work is underway to enable this application to exploit the new features of VM/XA. This paper presents a brief tutorial on how to convert an application to exploit VM/XA and discusses some of the SLAC experiences in doing so. 13 figs

  17. Teotihuacan, tepeapulco, and obsidian exploitation.

    Science.gov (United States)

    Charlton, T H

    1978-06-16

    Current cultural ecological models of the development of civilization in central Mexico emphasize the role of subsistence production techniques and organization. The recent use of established and productive archeological surface survey techniques along natural corridors of communication between favorable niches for cultural development within the Central Mexican symbiotic region resulted in the location of sites that indicate an early development of a decentralized resource exploitation, manufacturing, and exchange network. The association of the development of this system with Teotihuacán indicates the importance such nonsubsistence production and exchange had in the evolution of this first central Mexican civilization. The later expansion of Teotihuacán into more distant areas of Mesoamerica was based on this resource exploitation model. Later civilizations centered at Tula and Tenochtitlán also used such a model in their expansion.

  18. Exploitation and exploration dynamics in recessionary times

    NARCIS (Netherlands)

    Walrave, B.

    2012-01-01

    Firm performance largely depends on the ability to adapt to, and exploit, changes in the business environment. That is, firms should maintain ecological fitness by reconfiguring their resource base to cope with emerging threats and explore new opportunities, while at the same time exploiting

  19. Student Modeling in Orthopedic Surgery Training: Exploiting Symbiosis between Temporal Bayesian Networks and Fine-Grained Didactic Analysis

    Science.gov (United States)

    Chieu, Vu Minh; Luengo, Vanda; Vadcard, Lucile; Tonetti, Jerome

    2010-01-01

    Cognitive approaches have been used for student modeling in intelligent tutoring systems (ITSs). Many of those systems have tackled fundamental subjects such as mathematics, physics, and computer programming. The change of the student's cognitive behavior over time, however, has not been considered and modeled systematically. Furthermore, the…

  20. Modeling and simulation of stamp deflections in nanoimprint lithography: Exploiting backside grooves to enhance residual layer thickness uniformity

    DEFF Research Database (Denmark)

    Taylor, Hayden; Smistrup, Kristian; Boning, Duane

    2011-01-01

    We describe a model for the compliance of a nanoimprint stamp etched with a grid of backside grooves. We integrate the model with a fast simulation technique that we have previously demonstrated, to show how etched grooves help reduce the systematic residual layer thickness (RLT) variations...

  1. The trends of modeling the ways of formation, distribution and exploitation of megapolis lands using geo-information systems

    Directory of Open Access Journals (Sweden)

    Kostyantyn Mamonov

    2017-10-01

Full Text Available The article defines directions for modeling the formation, distribution and use of megapolis lands using geographic information systems (GIS). In the study, the following objectives are set: to develop an algorithm for building a database (Data System) for the monetary valuation of settlement lands using GIS; to propose a process model that accounts for the influence of single-factor modules using GIS; to identify the components of geo-information support for the expert monetary evaluation of megapolis lands; to describe the general procedure for the expert monetary assessment of land and property using GIS software; and to develop an algorithm of methods for expert land evaluation. The identified tools and the algorithms built are used for modeling the ways of formation, distribution and use of megapolis lands using GIS.

  2. Photovoltaic array performance simulation models

    Energy Technology Data Exchange (ETDEWEB)

    Menicucci, D. F.

    1986-09-15

    The experience of the solar industry confirms that, despite recent cost reductions, the profitability of photovoltaic (PV) systems is often marginal and the configuration and sizing of a system is a critical problem for the design engineer. Construction and evaluation of experimental systems are expensive and seldom justifiable. A mathematical model or computer-simulation program is a desirable alternative, provided reliable results can be obtained. Sandia National Laboratories, Albuquerque (SNLA), has been studying PV-system modeling techniques in an effort to develop an effective tool to be used by engineers and architects in the design of cost-effective PV systems. This paper reviews two of the sources of error found in previous PV modeling programs, presents the remedies developed to correct these errors, and describes a new program that incorporates these improvements.

  3. Maintenance Personnel Performance Simulation (MAPPS) model

    International Nuclear Information System (INIS)

    Siegel, A.I.; Bartter, W.D.; Wolf, J.J.; Knee, H.E.; Haas, P.M.

    1984-01-01

A stochastic computer model for simulating the actions and behavior of nuclear power plant maintenance personnel is described. The model considers personnel, environmental, and motivational variables to yield predictions of maintenance performance quality and time to perform. The model has been fully developed and sensitivity tested. Additional evaluation of the model is now taking place

  4. Cognitive performance modeling based on general systems performance theory.

    Science.gov (United States)

    Kondraske, George V

    2010-01-01

    General Systems Performance Theory (GSPT) was initially motivated by problems associated with quantifying different aspects of human performance. It has proved to be invaluable for measurement development and understanding quantitative relationships between human subsystem capacities and performance in complex tasks. It is now desired to bring focus to the application of GSPT to modeling of cognitive system performance. Previous studies involving two complex tasks (i.e., driving and performing laparoscopic surgery) and incorporating measures that are clearly related to cognitive performance (information processing speed and short-term memory capacity) were revisited. A GSPT-derived method of task analysis and performance prediction termed Nonlinear Causal Resource Analysis (NCRA) was employed to determine the demand on basic cognitive performance resources required to support different levels of complex task performance. This approach is presented as a means to determine a cognitive workload profile and the subsequent computation of a single number measure of cognitive workload (CW). Computation of CW may be a viable alternative to measuring it. Various possible "more basic" performance resources that contribute to cognitive system performance are discussed. It is concluded from this preliminary exploration that a GSPT-based approach can contribute to defining cognitive performance models that are useful for both individual subjects and specific groups (e.g., military pilots).

  5. Exploiting maximum energy from variable speed wind power generation systems by using an adaptive Takagi-Sugeno-Kang fuzzy model

    International Nuclear Information System (INIS)

    Galdi, V.; Piccolo, A.; Siano, P.

    2009-01-01

Nowadays, incentives and financing options for developing renewable energy facilities and new developments in variable speed wind technology make wind energy a competitive source compared with conventional generation sources. In order to improve the effectiveness of variable speed wind systems, adaptive control systems able to cope with time variances of the system under control are necessary. On this basis, a data-driven design methodology for TSK fuzzy models is presented in this paper. The methodology, on the basis of given input-output numerical data, generates the 'best' TSK fuzzy model able to estimate with high accuracy the maximum extractable power from a variable speed wind turbine. The design methodology is based on fuzzy clustering methods for partitioning the input-output space, combined with genetic algorithms (GA) and recursive least-squares (LS) optimization methods for model parameter adaptation
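The paper's model is learned from data via fuzzy clustering, GA and recursive LS; the sketch below only illustrates the inference step of a first-order TSK model, with a hypothetical two-rule base mapping wind speed to extractable power. All membership centres, widths and consequent coefficients are made up for illustration.

```python
import math

def gauss_mf(x, centre, sigma):
    """Gaussian membership function."""
    return math.exp(-((x - centre) ** 2) / (2.0 * sigma ** 2))

def tsk_predict(x, rules):
    """First-order TSK inference: firing-strength-weighted average of
    linear consequents y = a*x + b, one per rule."""
    weights = [gauss_mf(x, c, s) for (c, s, _a, _b) in rules]
    outputs = [a * x + b for (_c, _s, a, b) in rules]
    return sum(w * y for w, y in zip(weights, outputs)) / sum(weights)

# Hypothetical two-rule base (centre, sigma, a, b): a "low wind" regime with
# a gentle power slope and a "high wind" regime with a steeper one.
rules = [
    (4.0, 2.0, 0.5, 0.0),
    (12.0, 3.0, 2.0, -6.0),
]
estimate = tsk_predict(8.0, rules)
```

In the paper, fuzzy clustering would place the rule centres, and GA plus recursive LS would fit the consequent coefficients from measured wind/power data; here they are fixed by hand.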

  6. Key Aspects of the Proper Formulation of the Model in Numerical Analysis of the Influence of Mining Exploitation on Buildings

    Directory of Open Access Journals (Sweden)

    Florkowska Lucyna

    2015-02-01

Full Text Available Numerical modelling is an important tool used to analyse various aspects of the impact of underground mining on existing and planned buildings. The interaction between the building and the soil is a complex matter, and in many cases a numerical simulation is the only way of making calculations which take into consideration the co-existence of a number of factors which have a significant influence on the solution. The complexity of the matter also makes it a difficult task to elaborate a proper mathematical model: it requires both a thorough knowledge of the geologic conditions of the subsoil and the structural characteristics of the building.

  7. Method of approximate electric modeling of oil reservoir operation with formation of a gas cap during mixed exploitation regime

    Energy Technology Data Exchange (ETDEWEB)

    Bragin, V A; Lyadkin, V Ya

    1969-01-01

    A potentiometric model is used to simulate the behavior of a reservoir in which pressure was dropped rapidly and solution gas migrated to the top of the structure forming a gas cap. Behavior of the system was represented by a differential equation, which was solved by an electrointegrator. The potentiometric model was found to closely represent past history of the reservoir, and to predict its future behavior. When this method is used in reservoirs where large pressure drops occur, repeated determination should be made at various time intervals, so that changes in relative permeability are taken into account.

  8. Assembly line performance and modeling

    Science.gov (United States)

    Rane, Arun B.; Sunnapwar, Vivek K.

    2017-09-01

The automobile sector forms the backbone of the manufacturing sector. The vehicle assembly line is an important section of an automobile plant, where repetitive tasks are performed one after another at different workstations. In this thesis, a methodology is proposed to reduce cycle time and time loss due to important factors like equipment failure, shortage of inventory, absenteeism, set-up, material handling, rejection and fatigue, in order to improve output within given cost constraints. Various relationships between these factors, the corresponding cost and output are established by a scientific approach. The methodology is validated in three different vehicle assembly plants. The proposed methodology may help practitioners to optimize the assembly line using lean techniques.

  9. Tumor hypoxia - A confounding or exploitable factor in interstitial brachytherapy? Effects of tissue trauma in an experimental rat tumor model

    NARCIS (Netherlands)

    van den Berg, AP; van Geel, CAJF; van Hooije, CMC; van der Kleij, AJ; Visser, AG

    2000-01-01

Purpose: To evaluate the potential effects of tumor hypoxia induced by afterloading catheter implantation on the effectiveness of brachytherapy in a rat tumor model. Methods and Materials: Four afterloading catheters were implanted in subcutaneously growing R1M rhabdomyosarcoma in female Wag/Rij

  10. Period doubling cascades of prey-predator model with nonlinear harvesting and control of over exploitation through taxation

    Science.gov (United States)

    Gupta, R. P.; Banerjee, Malay; Chandra, Peeyush

    2014-07-01

The present study investigates a prey-predator type model for the conservation of ecological resources through taxation with nonlinear harvesting. The model uses the harvesting function proposed by Agnew (1979) [1], which accounts for the handling time of the catch and also the competition between standard vessels being utilized for harvesting of resources. In this paper we consider a three-dimensional dynamic effort prey-predator model with Holling type-II functional response. The conditions for uniform persistence of the model have been derived. The existence and stability of a bifurcating periodic solution through Hopf bifurcation have been examined for a particular set of parameter values. Using numerical examples it is shown that the system admits periodic, quasi-periodic and chaotic solutions. It is observed that the system exhibits a period-doubling route to chaos with respect to tax. Many forms of complexity, such as chaotic bands (including periodic windows, period-doubling bifurcations, period-halving bifurcations and attractor crises) and chaotic attractors, have been observed. Sensitivity analysis is carried out and it is observed that the solutions are highly dependent on the initial conditions. Pontryagin's Maximum Principle has been used to obtain an optimal tax policy that maximizes the monetary social benefit as well as conservation of the ecosystem.
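As a rough illustration of the kind of dynamics this record refers to, the sketch below integrates a basic prey-predator system with a Holling type-II functional response by forward Euler. It deliberately omits the effort dynamics, the Agnew harvesting term and the taxation of the actual study, and all parameter values are illustrative, not taken from the paper.

```python
def simulate(prey0, pred0, t_end=20.0, dt=0.001,
             r=1.0, K=1.0, a=1.0, h=0.5, d=0.4, e=0.8):
    """Forward-Euler integration of a prey-predator model with a Holling
    type-II functional response a*x / (1 + a*h*x).  Parameters: prey growth
    rate r, carrying capacity K, attack rate a, handling time h, predator
    death rate d, conversion efficiency e -- all illustrative values."""
    x, y = prey0, pred0
    steps = int(t_end / dt)
    for _ in range(steps):
        response = a * x / (1.0 + a * h * x)
        dx = r * x * (1.0 - x / K) - response * y   # logistic growth - predation
        dy = e * response * y - d * y               # conversion - predator death
        x += dt * dx
        y += dt * dy
    return x, y

prey, pred = simulate(0.5, 0.2)
```

With these parameter values the coexistence equilibrium is stable, so both populations persist; the chaotic regimes reported in the paper arise only in the full model with effort dynamics and harvesting.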

  11. A Practical Model to Perform Comprehensive Cybersecurity Audits

    Directory of Open Access Journals (Sweden)

    Regner Sabillon

    2018-03-01

Full Text Available These days organizations continually face being targets of cyberattacks and cyberthreats; the sophistication and complexity of modern cyberattacks and the modus operandi of cybercriminals, including Techniques, Tactics and Procedures (TTP), keep growing at unprecedented rates. Cybercriminals are always adopting new strategies to plan and launch cyberattacks based on existing cybersecurity vulnerabilities and exploiting end users by using social engineering techniques. Cybersecurity audits are extremely important to verify that information security controls are in place and to detect weaknesses such as nonexistent or obsolete cybersecurity controls. This article presents an innovative and comprehensive cybersecurity audit model. The CyberSecurity Audit Model (CSAM) can be implemented to perform internal or external cybersecurity audits. This model can be used to perform single cybersecurity audits or can be part of any corporate audit program to improve cybersecurity controls. Any information security or cybersecurity audit team has the option either to perform a full audit of all cybersecurity domains or to select specific domains in order to audit certain areas that need control verification and hardening. The CSAM has 18 domains; Domain 1 is specific to Nation States and Domains 2-18 can be implemented at any organization. The organization can be any small, medium or large enterprise; the model is also applicable to any Non-Profit Organization (NPO).

  12. Groundwater – Geothermal preliminary model of the Acque Albule Basin (Rome: future perspectives of geothermal resources exploitation

    Directory of Open Access Journals (Sweden)

    Francesco La Vigna

    2013-12-01

Full Text Available This work presents the preliminary results of a groundwater and geothermal model applied to the hydrothermal system of the Tivoli-Guidonia plain, located east of Rome. This area, which is characterized by a thick outcropping travertine deposit, has been an important quarrying area since Roman times. Today extraction is deepening, aided by large-scale dewatering. From a hydrogeological point of view, the travertine aquifer of the Tivoli-Guidonia plain is recharged by lateral discharge from the Lucretili and Cornicolani Mts., and by piping through important regional faults, located in the basal aquiclude, in the central area of the basin. Hydrothermal groundwater rising through these faults is the main contribution to flow in the basin. Preliminary simulations of the groundwater-geothermal model reproduce quite well the heat and mineralization plumes observed in the travertine aquifer.

  13. Exploiting the time-dynamics of news diffusion on the Internet through a generalized Susceptible-Infected model

    Science.gov (United States)

    De Martino, Giuseppe; Spina, Serena

    2015-11-01

We construct a news spreading model with a time-dependent contact rate which generalizes the classical Susceptible-Infected model of epidemiology. In particular, we are interested in the time-dynamics of the sharing and diffusion process of news on the Internet. We focus on the counting process describing the number of connections to a given website, characterizing the cumulative distribution function of its inter-arrival times. Moreover, starting from the general form of the finite-dimensional distribution of the process, we determine a formula for the time-variable rate of the connections and establish its relationship with the probability density function of the inter-arrival times. We finally show the effectiveness of our theoretical framework by analyzing a real-world dataset, the Memetracker dataset, for which we determine the parameters characterizing the diffusion process.
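A minimal numerical sketch of an SI model with a time-dependent contact rate: here the rate is assumed to decay exponentially, beta(t) = beta0 * exp(-lam * t), which is an illustrative choice rather than the form derived in the paper, and all numbers are invented. With lam = 0 the model reduces to the classical SI (logistic) case.

```python
import math

def si_infected(t_end, n=10000, i0=10.0, beta0=0.8, lam=0.1, dt=0.01):
    """Forward-Euler integration of dI/dt = beta(t) * I * (n - I) / n with a
    decaying contact rate beta(t) = beta0 * exp(-lam * t).  The exponential
    decay is an illustrative assumption, not the paper's fitted rate."""
    i, t = float(i0), 0.0
    steps = int(t_end / dt)
    for _ in range(steps):
        beta = beta0 * math.exp(-lam * t)
        i += dt * beta * i * (n - i) / n
        t += dt
    return i

# A constant rate (lam = 0) eventually saturates the whole audience, while a
# decaying rate caps the number of users the news item ever reaches.
saturated = si_infected(200.0, lam=0.0)
capped = si_infected(200.0, lam=0.1)
```

The cap arises because the cumulative infectivity integral of beta(t) is finite (beta0/lam), so the diffusion stalls before the susceptible pool is exhausted, mirroring how interest in a news story fades.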

  14. Exploiting Soil Moisture, Precipitation, and Streamflow Observations to Evaluate Soil Moisture/Runoff Coupling in Land Surface Models

    Science.gov (United States)

    Crow, W. T.; Chen, F.; Reichle, R. H.; Xia, Y.; Liu, Q.

    2018-05-01

    Accurate partitioning of precipitation into infiltration and runoff is a fundamental objective of land surface models tasked with characterizing the surface water and energy balance. Temporal variability in this partitioning is due, in part, to changes in prestorm soil moisture, which determine soil infiltration capacity and unsaturated storage. Utilizing the National Aeronautics and Space Administration Soil Moisture Active Passive Level-4 soil moisture product in combination with streamflow and precipitation observations, we demonstrate that land surface models (LSMs) generally underestimate the strength of the positive rank correlation between prestorm soil moisture and event runoff coefficients (i.e., the fraction of rainfall accumulation volume converted into stormflow runoff during a storm event). Underestimation is largest for LSMs employing an infiltration-excess approach for stormflow runoff generation. More accurate coupling strength is found in LSMs that explicitly represent subsurface stormflow or saturation-excess runoff generation processes.
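The coupling-strength diagnostic described above reduces to a rank correlation between prestorm soil moisture and event runoff coefficients (stormflow volume divided by rainfall volume). A self-contained sketch with hypothetical event data follows; the numbers are invented, not from SMAP or any gauge record.

```python
def rankdata(values):
    """1-based ranks; ties receive the average rank."""
    order = sorted(range(len(values)), key=lambda k: values[k])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2.0 + 1.0
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rho = Pearson correlation of the ranks."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical events: (prestorm soil moisture, rainfall mm, stormflow mm).
events = [(0.12, 40.0, 2.0), (0.20, 50.0, 11.0), (0.28, 35.0, 4.0),
          (0.35, 30.0, 9.0), (0.41, 45.0, 20.0)]
moisture = [m for (m, _p, _q) in events]
runoff_coeff = [q / p for (_m, p, q) in events]
rho = spearman(moisture, runoff_coeff)  # wetter prestorm soils, higher coefficients
```

The paper's finding is that LSMs produce a weaker version of this positive rank correlation than the observation-based estimate, especially under infiltration-excess runoff schemes.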

  15. Exploiting amoeboid and non-vertebrate animal model systems to study the virulence of human pathogenic fungi.

    Science.gov (United States)

    Mylonakis, Eleftherios; Casadevall, Arturo; Ausubel, Frederick M

    2007-07-27

    Experiments with insects, protozoa, nematodes, and slime molds have recently come to the forefront in the study of host-fungal interactions. Many of the virulence factors required for pathogenicity in mammals are also important for fungal survival during interactions with non-vertebrate hosts, suggesting that fungal virulence may have evolved, and been maintained, as a countermeasure to environmental predation by amoebae and nematodes and other small non-vertebrates that feed on microorganisms. Host innate immune responses are also broadly conserved across many phyla. The study of the interaction between invertebrate model hosts and pathogenic fungi therefore provides insights into the mechanisms underlying pathogen virulence and host immunity, and complements the use of mammalian models by enabling whole-animal high throughput infection assays. This review aims to assist researchers in identifying appropriate invertebrate systems for the study of particular aspects of fungal pathogenesis.

  16. Exploiting amoeboid and non-vertebrate animal model systems to study the virulence of human pathogenic fungi.

    Directory of Open Access Journals (Sweden)

    Eleftherios Mylonakis

    2007-07-01

    Full Text Available Experiments with insects, protozoa, nematodes, and slime molds have recently come to the forefront in the study of host-fungal interactions. Many of the virulence factors required for pathogenicity in mammals are also important for fungal survival during interactions with non-vertebrate hosts, suggesting that fungal virulence may have evolved, and been maintained, as a countermeasure to environmental predation by amoebae and nematodes and other small non-vertebrates that feed on microorganisms. Host innate immune responses are also broadly conserved across many phyla. The study of the interaction between invertebrate model hosts and pathogenic fungi therefore provides insights into the mechanisms underlying pathogen virulence and host immunity, and complements the use of mammalian models by enabling whole-animal high throughput infection assays. This review aims to assist researchers in identifying appropriate invertebrate systems for the study of particular aspects of fungal pathogenesis.

  17. Dissemination and Exploitation Strategy

    DEFF Research Database (Denmark)

    Badger, Merete; Monaco, Lucio; Fransson, Torsten

of Technology in Sweden, Politecnico di Torino in Italy, and Eindhoven University of Technology in the Netherlands. The project is partially funded by the European Commission under the 7th Framework Programme (project no. RI-283746). This report describes the final dissemination and exploitation strategy...... for project Virtual Campus Hub. A preliminary dissemination and exploitation plan was set up early in the project, as described in the deliverable D6.1 Dissemination strategy paper - preliminary version. The plan has been revised on a monthly basis during the project's lifecycle in connection with the virtual

  18. Linking Genomo- and Pathotype: Exploiting the Zebrafish Embryo Model to Investigate the Divergent Virulence Potential among Cronobacter spp.

    Directory of Open Access Journals (Sweden)

    Athmanya K Eshwar

Full Text Available Bacteria belonging to the genus Cronobacter have been recognized as causative agents of life-threatening systemic infections, primarily in premature, low-birth-weight and immune-compromised neonates. Apparently not all Cronobacter species are linked to infantile infections, and it has been proposed that virulence varies among strains. Whole-genome comparisons and in silico analysis have proven to be powerful tools in elucidating potential virulence determinants, the presence/absence of which may explain the differential virulence behaviour of strains. However, validation of these factors has in the past been hampered by the lack of a suitable neonatal animal model. In the present study we have used zebrafish embryos to model Cronobacter infections in vivo using wild-type and genetically engineered strains. Our experiments confirmed the role of the RepF1B-like plasmids as "virulence plasmids" in Cronobacter and underpinned the importance of two putative virulence factors, cpa and zpx, in in vivo pathogenesis. We propose that by using this model, in vivo infection studies are now possible on a large scale, which will boost the understanding of the virulence strategies employed by these pathogens.

  19. Using a spatially structured life cycle model to assess the influence of multiple stressors on an exploited coastal-nursery-dependent population

    Science.gov (United States)

    Archambault, B.; Rivot, E.; Savina, M.; Le Pape, O.

    2018-02-01

Exploited coastal-nursery-dependent fish species are subject to various stressors occurring at specific stages of the life cycle: climate-driven variability in hydrography determines the success of the first egg/larval stages; coastal nursery habitat suitability controls juvenile growth and survival; and fisheries target mostly adults. A life cycle approach was used to quantify the relative influence of these stressors on the Eastern English Channel (EEC) population of the common sole (Solea solea), a coastal-nursery-dependent flatfish population which sustains important fisheries. The common sole has a complex life cycle: after eggs hatch, larvae spend several weeks drifting in open water. Survivors metamorphose into benthic fish. Juveniles spend the first two years of their life in coastal and estuarine nurseries. Close to maturation, they migrate to deeper areas, where different subpopulations supplied by different nurseries reproduce and are exploited by fisheries. A spatially structured age- and stage-based hierarchical Bayesian model integrating various aspects of ecological knowledge, data sources and expert knowledge was built to describe this complex life cycle quantitatively. The model included the low connectivity among three subpopulations in the EEC, the influence of hydrographic variability, the availability of suitable juvenile habitat, and fisheries. Scenarios were designed to quantify the effects of interacting stressors on population renewal. Results emphasized the importance of coastal nursery habitat availability and quality for population renewal. Realistic restoration scenarios for the highly degraded Seine estuary produced a two-thirds increase in catch potential for the adjacent subpopulation. Fisheries, however, remained the main source of population depletion. Setting fishing mortality to the maximum sustainable yield led to substantial increases in biomass (+100%) and catch (+33%) at the EEC scale. The approach also showed how

  20. Modelling the sequential geographical exploitation and potential collapse of marine fisheries through economic globalization, climate change and management alternatives

    Directory of Open Access Journals (Sweden)

    Gorka Merino

    2011-07-01

    Full Text Available Global marine fisheries production has reached a maximum and may even be declining. Underlying this trend is a well-understood sequence of development, overexploitation, depletion and in some instances collapse of individual fish stocks, a pattern that can sequentially link geographically distant populations. Ineffective governance, economic considerations and climate impacts are often responsible for this sequence, although the relative contribution of each factor is contentious. In this paper we use a global bioeconomic model to explore the synergistic effects of climate variability, economic pressures and management measures in causing or avoiding this sequence. The model shows how a combination of climate-induced variability in the underlying fish population production, particular patterns of demand for fish products and inadequate management is capable of driving the world’s fisheries into development, overexploitation, collapse and recovery phases consistent with observations. Furthermore, it demonstrates how a sequential pattern of overexploitation can emerge as an endogenous property of the interaction between regional environmental fluctuations and a globalized trade system. This situation is avoidable through adaptive management measures that ensure the sustainability of regional production systems in the face of increasing global environmental change and markets. It is concluded that global management measures are needed to ensure that global food supply from marine products is optimized while protecting long-term ecosystem services across the world’s oceans.

  1. The economics of exploiting gas hydrates

    International Nuclear Information System (INIS)

    Döpke, Lena-Katharina; Requate, Till

    2014-01-01

    We investigate the optimal exploitation of methane hydrates, a recent discovery of methane resources under the sea floor, mainly located along the continental margins. Combustion of methane (releasing CO2) and leakage through blow-outs (releasing CH4) contribute to the accumulation of greenhouse gases. A second externality arises since removing solid gas hydrates from the sea bottom destabilizes continental margins and thus increases the risk of marine earthquakes. We show that in such a model three regimes can occur: i) resource exploitation will be stopped in finite time, and some of the resource will stay in situ, ii) the resource will be used up completely in finite time, and iii) the resource will be exhausted in infinite time. We also show how to internalize the externalities by policy instruments. - Highlights: • We set up a model of optimal gas hydrate exploitation. • We incorporate two types of damages: contribution to global warming and geo-hazards. • We characterize optimal exploitation paths and study decentralization with an exploitation tax. • Three regimes can occur: i) exploitation stops in finite time with some of the stock remaining in situ; ii) the resource is exhausted in finite time; iii) exploitation and exhaustion continue in infinite time.

  2. Exploitation of linkage learning in evolutionary algorithms

    CERN Document Server

    Chen, Ying-ping

    2010-01-01

    The exploitation of linkage learning is enhancing the performance of evolutionary algorithms. This monograph examines recent progress in linkage learning, with a series of focused technical chapters that cover developments and trends in the field.

  3. 3D Massive MIMO Systems: Modeling and Performance Analysis

    KAUST Repository

    Nadeem, Qurrat-Ul-Ain

    2015-07-30

    Multiple-input-multiple-output (MIMO) systems of current LTE releases are capable of adaptation in the azimuth only. Recently, the trend is to enhance system performance by exploiting the channel’s degrees of freedom in the elevation, which necessitates the characterization of 3D channels. We present an information-theoretic channel model for MIMO systems that supports the elevation dimension. The model is based on the principle of maximum entropy, which enables us to determine the distribution of the channel matrix consistent with the prior information on the angles. Based on this model, we provide an analytical expression for the cumulative distribution function (CDF) of the mutual information (MI) for systems with a single receive and a finite number of transmit antennas in the general signal-to-interference-plus-noise-ratio (SINR) regime. The result is extended to systems with finite receive antennas in the low SINR regime. A Gaussian approximation to the asymptotic behavior of the MI distribution is derived for the regime of a large number of transmit antennas and paths. We corroborate our analysis with simulations that study the performance gains realizable through meticulous selection of the transmit antenna downtilt angles, confirming the potential of elevation beamforming to enhance system performance. The results are directly applicable to the analysis of 5G 3D massive MIMO systems.
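As background for the mutual-information quantity this abstract analyzes, the standard MIMO MI under Gaussian signalling is the log-determinant functional of the channel matrix (a textbook expression, not the paper's single-receive-antenna CDF result):

```latex
I(\mathbf{H}) = \log_2 \det\!\left(\mathbf{I}_{N_r} + \frac{\rho}{N_t}\,\mathbf{H}\mathbf{H}^{\mathsf{H}}\right)
```

where \(N_t\) and \(N_r\) are the transmit and receive antenna counts, \(\rho\) the SINR, and \(\mathbf{H}\) the 3D channel matrix whose distribution the maximum-entropy model supplies.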

  4. Network exploitation using WAMI tracks

    Science.gov (United States)

    Rimey, Ray; Record, Jim; Keefe, Dan; Kennedy, Levi; Cramer, Chris

    2011-06-01

    Creating and exploiting network models from wide area motion imagery (WAMI) is an important task for intelligence analysis. Tracks of entities observed moving in the WAMI sensor data are extracted, then large numbers of tracks are studied over long time intervals to determine specific locations that are visited (e.g., buildings in an urban environment), what locations are related to other locations, and the function of each location. This paper describes several parts of the network detection/exploitation problem, and summarizes a solution technique for each: (a) Detecting nodes; (b) Detecting links between known nodes; (c) Node attributes to characterize a node; (d) Link attributes to characterize each link; (e) Link structure inferred from node attributes and vice versa; and (f) Decomposing a detected network into smaller networks. Experimental results are presented for each solution technique, and those are used to discuss issues for each problem part and its solution technique.

  5. Long Term Association of Tropospheric Trace gases over Pakistan by exploiting satellite observations and development of Econometric Regression based Model

    Science.gov (United States)

    Zeb, Naila; Fahim Khokhar, Muhammad; Khan, Saud Ahmed; Noreen, Asma; Murtaza, Rabbia

    2017-04-01

    … Furthermore, to explore the causal relation, regression analysis is employed to estimate a model for CO and TOC. This model numerically estimates the long-term association of trace gases over the region.

  6. Work domain constraints for modelling surgical performance.

    Science.gov (United States)

    Morineau, Thierry; Riffaud, Laurent; Morandi, Xavier; Villain, Jonathan; Jannin, Pierre

    2015-10-01

    Three main approaches can be identified for modelling surgical performance: a competency-based approach, a task-based approach, both largely explored in the literature, and a less known work domain-based approach. The work domain-based approach first describes the work domain properties that constrain the agent's actions and shape the performance. This paper presents a work domain-based approach for modelling performance during cervical spine surgery, based on the idea that anatomical structures delineate the surgical performance. This model was evaluated through an analysis of junior and senior surgeons' actions. Twenty-four cervical spine surgeries performed by two junior and two senior surgeons were recorded in real time by an expert surgeon. According to a work domain-based model describing an optimal progression through anatomical structures, the degree of adjustment of each surgical procedure to a statistical polynomial function was assessed. Each surgical procedure showed a significant suitability with the model and regression coefficient values around 0.9. However, the surgeries performed by senior surgeons fitted this model significantly better than those performed by junior surgeons. Analysis of the relative frequencies of actions on anatomical structures showed that some specific anatomical structures discriminate senior from junior performances. The work domain-based modelling approach can provide an overall statistical indicator of surgical performance, but in particular, it can highlight specific points of interest among anatomical structures that the surgeons dwelled on according to their level of expertise.
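The suitability check described above amounts to fitting a curve to the observed progression through anatomical structures and reading off the regression coefficient. A minimal first-degree analogue can be sketched as follows (the paper fits a polynomial, and the data here are hypothetical):

```python
def fit_line_r2(x, y):
    """Least-squares line y = a*x + b and its coefficient of determination R^2."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx                     # slope
    b = my - a * mx                   # intercept
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot
```

A perfectly regular progression yields R² = 1; values near 0.9, as reported for the recorded surgeries, indicate close adherence to the model's optimal progression.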

  7. Constrained bayesian inference of project performance models

    OpenAIRE

    Sunmola, Funlade

    2013-01-01

    Project performance models play an important role in the management of project success. When used for monitoring projects, they can offer predictive ability such as indications of possible delivery problems. Approaches for monitoring project performance relies on available project information including restrictions imposed on the project, particularly the constraints of cost, quality, scope and time. We study in this paper a Bayesian inference methodology for project performance modelling in ...

  8. Transmutation Fuel Performance Code Thermal Model Verification

    Energy Technology Data Exchange (ETDEWEB)

    Gregory K. Miller; Pavel G. Medvedev

    2007-09-01

    The FRAPCON fuel performance code is being modified to be able to model the performance of the nuclear fuels of interest to the Global Nuclear Energy Partnership (GNEP). The present report documents the effort to verify the FRAPCON thermal model. It was found that, with minor modifications, the FRAPCON thermal model's temperature calculation agrees with that of the commercial software ABAQUS (Version 6.4-4). This report outlines the methodology of the verification, code input, and calculation results.

  9. Exploration, Exploitation, and Organizational Coordination Mechanisms

    Directory of Open Access Journals (Sweden)

    Silvio Popadiuk

    2016-03-01

    Full Text Available This paper presents an empirical relationship among exploration, exploitation, and organizational coordination mechanisms, classified as the centralization of decision-making, formalization, and connectedness. In order to analyze the findings of this survey, we used two techniques: Principal Component Analysis (PCA) and Partial Least Squares Path Modeling (PLS-PM). Our analysis was supported by 249 answers from managers of companies located in Brazil (convenience sampling). Contrary to expectations, centralization and exploitation were negatively associated. Our data support the research hypothesis that formalization is positively associated with exploitation. Although the relationship between formalization and exploration was significant, the result runs contrary to our research hypothesis. The relationships between connectedness and exploitation, and between connectedness and exploration, were both positive and significant. This means that as connectedness increases, so does the likelihood of both exploitation and exploration.

  10. Model performance analysis and model validation in logistic regression

    Directory of Open Access Journals (Sweden)

    Rosa Arboretti Giancristofaro

    2007-10-01

    Full Text Available In this paper a new model validation procedure for a logistic regression model is presented. First, we give a brief review of different techniques of model validation. Next, we define a number of properties required for a model to be considered "good", and a number of quantitative performance measures. Lastly, we describe a methodology for assessing the performance of a given model, using an example taken from a management study.
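The "quantitative performance measures" mentioned above can be illustrated with two common choices for a fitted logistic model, classification accuracy and the Brier score (the abstract does not name its specific measures; these are standard ones, applied here to hypothetical predicted probabilities):

```python
def brier_score(probs, labels):
    """Mean squared difference between predicted probability and 0/1 outcome."""
    return sum((p - y) ** 2 for p, y in zip(probs, labels)) / len(probs)

def accuracy(probs, labels, threshold=0.5):
    """Fraction of cases classified correctly at the given probability cutoff."""
    return sum((p >= threshold) == bool(y) for p, y in zip(probs, labels)) / len(labels)
```

Lower Brier scores indicate better-calibrated probabilities, while accuracy only scores the hard 0/1 decisions.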

  11. Socio-economic research on fusion: SERF 2 (1999-2000). Task 1: Externalities of fusion. Exploitation and improvement of work performed under SERF 1

    Energy Technology Data Exchange (ETDEWEB)

    Schneider, T.; Lepicard, S. [Centre d' Etude sur l' Evaluation de la Protection dans le Domaine Nucleaire, 92 - Fontenay-aux-Roses (France); Hamacher, T. [Euratom/IPP Fusion Association (Germany); Hallberg, B.; Aquilonius, K. [Studsvik Eco and Safety AB, Association Euratom-NFR (Sweden); Ward, D. [Euratom/UKAEA Fusion Association (United Kingdom); Korhonen, R. [VTT, Association Euratom-Tekes (Finland); Lechon, Y.; Cabal, H.; Saez, R. [Euratom/CIEMAT Fusion Association (Spain)

    2001-01-01

    In the previous phase of the SERF project an assessment of the external costs of two conceptual models of a fusion power plant was performed, as well as a comparison with other energy options. Results obtained ranged from 1.29 mEUR/kWh to 2.71 mEUR/kWh for the two models analysed, well below those obtained for fossil-fuelled and nuclear fission power plants, confirming the role of fusion as a sustainable energy source in the long term. Some elements were identified as the predominant causes of external costs; the most important of them was the collective dose produced by the global dispersion of C-14. Additional work has been carried out in the framework of the SEAFP (Safety and Environmental Assessment of Fusion Power) and SEAL within the SEAFP-2 programme. In the present phase of the SERF project, the effects of all of these technological advances on the external costs of fusion power have been evaluated. An analysis of the key variables influencing the external cost, aiming to set some recommendations for the design of fusion power plants with minimum external costs, has also been carried out. Furthermore, the effects of a scenario of intensive use of fusion power to meet future energy requirements have been analysed in terms of their incidence on global radiation levels and global warming. (author)

  12. Photovoltaic performance models - A report card

    Science.gov (United States)

    Smith, J. H.; Reiter, L. R.

    1985-01-01

    Models for the analysis of photovoltaic (PV) systems' designs, implementation policies, and economic performance, have proliferated while keeping pace with rapid changes in basic PV technology and extensive empirical data compiled for such systems' performance. Attention is presently given to the results of a comparative assessment of ten well documented and widely used models, which range in complexity from first-order approximations of PV system performance to in-depth, circuit-level characterizations. The comparisons were made on the basis of the performance of their subsystem, as well as system, elements. The models fall into three categories in light of their degree of aggregation into subsystems: (1) simplified models for first-order calculation of system performance, with easily met input requirements but limited capability to address more than a small variety of design considerations; (2) models simulating PV systems in greater detail, encompassing types primarily intended for either concentrator-incorporating or flat plate collector PV systems; and (3) models not specifically designed for PV system performance modeling, but applicable to aspects of electrical system design. Models ignoring subsystem failure or degradation are noted to exclude operating and maintenance characteristics as well.

  13. Performance of different radiotherapy workload models

    International Nuclear Information System (INIS)

    Barbera, Lisa; Jackson, Lynda D.; Schulze, Karleen; Groome, Patti A.; Foroudi, Farshad; Delaney, Geoff P.; Mackillop, William J.

    2003-01-01

    Purpose: The purpose of this study was to evaluate the performance of different radiotherapy workload models using a prospectively collected dataset of patient and treatment information from a single center. Methods and Materials: Information about all individual radiotherapy treatments was collected for 2 weeks from the three linear accelerators (linacs) in our department. This information included diagnosis code, treatment site, treatment unit, treatment time, fields per fraction, technique, beam type, blocks, wedges, junctions, port films, and Eastern Cooperative Oncology Group (ECOG) performance status. We evaluated the accuracy and precision of the original and revised basic treatment equivalent (BTE) model, the simple and complex Addenbrooke models, the equivalent simple treatment visit (ESTV) model, fields per hour, and two local standards of workload measurement. Results: Data were collected for 2 weeks in June 2001. During this time, 151 patients were treated with 857 fractions. The revised BTE model performed better than the other models, with a mean |observed − predicted| of 2.62 (2.44-2.80). It estimated 88.0% of treatment times within 5 min, which is similar to the previously reported accuracy of the model. Conclusion: The revised BTE model had similar accuracy and precision for data collected in our center as it did for the original dataset and performed the best of the models assessed. This model would have uses for patient scheduling and for describing workloads and case complexity.
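The two accuracy statistics used in this study, the mean absolute deviation between observed and predicted treatment times and the fraction of predictions falling within 5 minutes, can be computed as follows (illustrative data only):

```python
def workload_accuracy(observed, predicted, tol=5.0):
    """Mean |observed - predicted| and the fraction of errors within `tol` minutes."""
    errs = [abs(o - p) for o, p in zip(observed, predicted)]
    mae = sum(errs) / len(errs)
    within = sum(e <= tol for e in errs) / len(errs)
    return mae, within
```

Applied to a model's predicted treatment times, `mae` corresponds to the paper's |observed − predicted| statistic and `within` to its "% of treatment times within 5 min".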

  14. Electric properties of organic and mineral electronic components, design and modelling of a photovoltaic chain for a better exploitation of the solar energy

    International Nuclear Information System (INIS)

    Aziz, A.

    2006-11-01

    The research carried out in this thesis relates to mineral and organic electronic components and to photovoltaic systems. Concerning mineral semiconductors, we modelled the conduction properties of highly integrated metal/oxide/semiconductor (MOS) structures in the absence and in the presence of charges. We proposed a methodology for characterizing the ageing of MOS structures under injection of a Fowler-Nordheim (FN) type current. We then studied Schottky diodes in polymers, of the metal/polymer/metal type, and concluded that the mechanism of charge transfer through the metal/polymer interface is attributed to the thermionic effect and can be affected by the lowering of the potential barrier at the metal/polymer interface. In the area of photovoltaic energy, we designed and modelled a medium-power (100 W) photovoltaic system. We showed that adapting the generator to the load allows better exploitation of the solar energy. This is carried out by means of converters driven by an MPPT-type control equipped with a circuit for detecting malfunction and restarting the system. (author)

  15. Iowa calibration of MEPDG performance prediction models.

    Science.gov (United States)

    2013-06-01

    This study aims to improve the accuracy of AASHTO Mechanistic-Empirical Pavement Design Guide (MEPDG) pavement performance predictions for Iowa pavement systems through local calibration of MEPDG prediction models. A total of 130 representative p...

  16. Cost and Performance Model for Photovoltaic Systems

    Science.gov (United States)

    Borden, C. S.; Smith, J. H.; Davisson, M. C.; Reiter, L. J.

    1986-01-01

    Lifetime cost and performance (LCP) model assists in assessment of design options for photovoltaic systems. LCP is simulation of performance, cost, and revenue streams associated with photovoltaic power systems connected to electric-utility grid. LCP provides user with substantial flexibility in specifying technical and economic environment of application.

  17. Assessing Ecosystem Model Performance in Semiarid Systems

    Science.gov (United States)

    Thomas, A.; Dietze, M.; Scott, R. L.; Biederman, J. A.

    2017-12-01

    In ecosystem process modelling, comparing outputs to benchmark datasets observed in the field is an important way to validate models, allowing the modelling community to track model performance over time and compare models at specific sites. Multi-model comparison projects as well as models themselves have largely been focused on temperate forests and similar biomes. Semiarid regions, on the other hand, are underrepresented in land surface and ecosystem modelling efforts, and yet will be disproportionately impacted by disturbances such as climate change due to their sensitivity to changes in the water balance. Benchmarking models at semiarid sites is an important step in assessing and improving models' suitability for predicting the impact of disturbance on semiarid ecosystems. In this study, several ecosystem models were compared at a semiarid grassland in southwestern Arizona using PEcAn, or the Predictive Ecosystem Analyzer, an open-source eco-informatics toolbox ideal for creating the repeatable model workflows necessary for benchmarking. Models included SIPNET, DALEC, JULES, ED2, GDAY, LPJ-GUESS, MAESPA, CLM, CABLE, and FATES. Comparison between model output and benchmarks such as net ecosystem exchange (NEE) tended to produce high root mean square error and low correlation coefficients, reflecting poor simulation of seasonality and the tendency for models to create much higher carbon sources than observed. These results indicate that ecosystem models do not currently adequately represent semiarid ecosystem processes.
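The benchmark comparison described above reduces to computing error and agreement statistics between simulated and observed flux series. A minimal sketch of the RMSE and Pearson correlation coefficient it reports, using hypothetical NEE values, is:

```python
import math

def rmse(obs, sim):
    """Root mean square error between observed and simulated series."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def pearson_r(obs, sim):
    """Pearson correlation coefficient between observed and simulated series."""
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
    vo = sum((o - mo) ** 2 for o in obs)
    vs = sum((s - ms) ** 2 for s in sim)
    return cov / math.sqrt(vo * vs)
```

A constant bias inflates RMSE while leaving the correlation untouched, which is why benchmarking efforts such as the one above report both.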

  18. Driver Performance Model: 1. Conceptual Framework

    National Research Council Canada - National Science Library

    Heimerl, Joseph

    2001-01-01

    ...'. At the present time, no such comprehensive model exists. This report discusses a conceptual framework designed to encompass the relationships, conditions, and constraints related to direct, indirect, and remote modes of driving and thus provides a guide or 'road map' for the construction and creation of a comprehensive driver performance model.

  19. Performance engineering in the community atmosphere model

    International Nuclear Information System (INIS)

    Worley, P; Mirin, A; Drake, J; Sawyer, W

    2006-01-01

    The Community Atmosphere Model (CAM) is the atmospheric component of the Community Climate System Model (CCSM) and is the primary consumer of computer resources in typical CCSM simulations. Performance engineering has been an important aspect of CAM development throughout its existence. This paper briefly summarizes these efforts and their impacts over the past five years

  20. Performance of hedging strategies in interval models

    NARCIS (Netherlands)

    Roorda, Berend; Engwerda, Jacob; Schumacher, J.M.

    2005-01-01

    For a proper assessment of risks associated with the trading of derivatives, the performance of hedging strategies should be evaluated not only in the context of the idealized model that has served as the basis of strategy development, but also in the context of other models. In this paper we

  1. Hacking the art of exploitation

    CERN Document Server

    Erickson, Jon

    2003-01-01

    A comprehensive introduction to the techniques of exploitation and creative problem-solving methods commonly referred to as "hacking," Hacking: The Art of Exploitation is for both technical and non-technical people who are interested in computer security. It shows how hackers exploit programs and write exploits, instead of just how to run other people's exploits. Unlike many so-called hacking books, this book explains the technical aspects of hacking, including stack-based overflows, heap-based overflows, string exploits, return-into-libc, shellcode, and cryptographic attacks on 802.11b.

  2. Performability assessment by model checking of Markov reward models

    NARCIS (Netherlands)

    Baier, Christel; Cloth, L.; Haverkort, Boudewijn R.H.M.; Hermanns, H.; Katoen, Joost P.

    2010-01-01

    This paper describes efficient procedures for model checking Markov reward models, that allow us to evaluate, among others, the performability of computer-communication systems. We present the logic CSRL (Continuous Stochastic Reward Logic) to specify performability measures. It provides flexibility

  3. Analysing the temporal dynamics of model performance for hydrological models

    NARCIS (Netherlands)

    Reusser, D.E.; Blume, T.; Schaefli, B.; Zehe, E.

    2009-01-01

    The temporal dynamics of hydrological model performance gives insights into errors that cannot be obtained from global performance measures assigning a single number to the fit of a simulated time series to an observed reference series. These errors can include errors in data, model parameters, or

  4. Performance and robustness of hybrid model predictive control for controllable dampers in building models

    Science.gov (United States)

    Johnson, Erik A.; Elhaddad, Wael M.; Wojtkiewicz, Steven F.

    2016-04-01

    A variety of strategies have been developed over the past few decades to determine controllable damping device forces to mitigate the response of structures and mechanical systems to natural hazards and other excitations. These "smart" damping devices produce forces through passive means but have properties that can be controlled in real time, based on sensor measurements of response across the structure, to dramatically reduce structural motion by exploiting more than the local "information" that is available to purely passive devices. A common strategy is to design optimal damping forces using active control approaches and then try to reproduce those forces with the smart damper. However, these design forces, for some structures and performance objectives, may achieve high performance by selectively adding energy, which cannot be replicated by a controllable damping device, causing the smart damper performance to fall far short of what an active system would provide. The authors have recently demonstrated that a model predictive control strategy using hybrid system models, which utilize both continuous and binary states (the latter to capture the switching behavior between dissipative and non-dissipative forces), can provide reductions in structural response on the order of 50% relative to the conventional clipped-optimal design strategy. This paper explores the robustness of this newly proposed control strategy through evaluating controllable damper performance when the structure model differs from the nominal one used to design the damping strategy. Results from the application to a two-degree-of-freedom structure model confirms the robustness of the proposed strategy.
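The clipped-optimal baseline the paper improves upon commands the damper to reproduce an actively designed force only when that force is dissipative, i.e. opposes the relative velocity across the device. A schematic of that feasibility check (the sign convention and saturation limit are assumptions for illustration, not the paper's exact controller):

```python
def clipped_force(f_desired, velocity, f_max):
    """Force a controllable damper can actually deliver: dissipative only
    (force times relative velocity negative) and bounded by device capacity."""
    if f_desired * velocity < 0:              # dissipative: force opposes motion
        return max(-f_max, min(f_max, f_desired))
    return 0.0                                # non-dissipative demand cannot be met
```

The hybrid model predictive strategy avoids this clipping loss by encoding the dissipative/non-dissipative switching directly as a binary state in the optimization.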

  5. Analytical performance modeling for computer systems

    CERN Document Server

    Tay, Y C

    2013-01-01

    This book is an introduction to analytical performance modeling for computer systems, i.e., writing equations to describe their performance behavior. It is accessible to readers who have taken college-level courses in calculus and probability, networking, and operating systems. This is not a training manual for becoming an expert performance analyst. Rather, the objective is to help the reader construct simple models for analyzing and understanding the systems that they are interested in. Describing a complicated system abstractly with mathematical equations requires a careful choice of assumptions.

  6. Operational monitoring and forecasting of bathing water quality through exploiting satellite Earth observation and models: The AlgaRisk demonstration service

    Science.gov (United States)

    Shutler, J. D.; Warren, M. A.; Miller, P. I.; Barciela, R.; Mahdon, R.; Land, P. E.; Edwards, K.; Wither, A.; Jonas, P.; Murdoch, N.; Roast, S. D.; Clements, O.; Kurekin, A.

    2015-04-01

    Coastal zones and shelf-seas are important for tourism, commercial fishing and aquaculture. As a result the importance of good water quality within these regions to support life is recognised worldwide and a number of international directives for monitoring them now exist. This paper describes the AlgaRisk water quality monitoring demonstration service that was developed and operated for the UK Environment Agency in response to the microbiological monitoring needs within the revised European Union Bathing Waters Directive. The AlgaRisk approach used satellite Earth observation to provide a near-real time monitoring of microbiological water quality and a series of nested operational models (atmospheric and hydrodynamic-ecosystem) provided a forecast capability. For the period of the demonstration service (2008-2013) all monitoring and forecast datasets were processed in near-real time on a daily basis and disseminated through a dedicated web portal, with extracted data automatically emailed to agency staff. Near-real time data processing was achieved using a series of supercomputers and an Open Grid approach. The novel web portal and java-based viewer enabled users to visualise and interrogate current and historical data. The system description, the algorithms employed and example results focussing on a case study of an incidence of the harmful algal bloom Karenia mikimotoi are presented. Recommendations and the potential exploitation of web services for future water quality monitoring services are discussed.

  7. Performance modeling, loss networks, and statistical multiplexing

    CERN Document Server

    Mazumdar, Ravi

    2009-01-01

    This monograph presents a concise mathematical approach for modeling and analyzing the performance of communication networks with the aim of understanding the phenomenon of statistical multiplexing. The novelty of the monograph is the fresh approach and insights provided by a sample-path methodology for queueing models that highlights the important ideas of Palm distributions associated with traffic models and their role in performance measures. Also presented are recent ideas of large buffer, and many sources asymptotics that play an important role in understanding statistical multiplexing. I
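Loss-network analysis of the kind this monograph covers builds on Erlang's loss formula for an M/M/m/m system. The standard numerically stable recursion for the blocking probability can be sketched as:

```python
def erlang_b(traffic, servers):
    """Erlang B blocking probability for `traffic` erlangs offered to
    `servers` circuits, via the stable recursion B(E, m) from B(E, m-1)."""
    b = 1.0                                   # B(E, 0) = 1
    for m in range(1, servers + 1):
        b = traffic * b / (m + traffic * b)
    return b
```

For example, 2 erlangs offered to 2 circuits is blocked 40% of the time; statistical multiplexing shows up in how quickly blocking falls as circuits are pooled.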

  8. Shock circle model for ejector performance evaluation

    International Nuclear Information System (INIS)

    Zhu, Yinhai; Cai, Wenjian; Wen, Changyun; Li, Yanzhong

    2007-01-01

    In this paper, a novel shock circle model for the prediction of ejector performance at the critical mode operation is proposed. By introducing the 'shock circle' at the entrance of the constant area chamber, a 2D exponential expression for velocity distribution is adopted to approximate the viscosity flow near the ejector inner wall. The advantage of the 'shock circle' analysis is that the calculation of ejector performance is independent of the flows in the constant area chamber and diffuser. Consequently, the calculation is even simpler than many 1D modeling methods and can predict the performance of critical mode operation ejectors much more accurately. The effectiveness of the method is validated by two experimental results reported earlier. The proposed modeling method using two coefficients is shown to produce entrainment ratio, efficiency and coefficient of performance (COP) accurately and much closer to experimental results than those of 1D analysis methods

  9. Advances in HTGR fuel performance models

    International Nuclear Information System (INIS)

    Stansfield, O.M.; Goodin, D.T.; Hanson, D.L.; Turner, R.F.

    1985-01-01

    Advances in HTGR fuel performance models have improved the agreement between observed and predicted performance and contributed to an enhanced position of the HTGR with regard to investment risk and passive safety. Heavy metal contamination is the source of about 55% of the circulating activity in the HTGR during normal operation, and the remainder comes primarily from particles which failed because of defective or missing buffer coatings. These failed particles make up about a 5 × 10⁻⁴ fraction of the total core inventory. In addition to prediction of fuel performance during normal operation, the models are used to determine fuel failure and fission product release during core heat-up accident conditions. The mechanistic nature of the models, which incorporate all important failure modes, permits the prediction of performance from the relatively modest accident temperatures of a passively safe HTGR to the much more severe accident conditions of the larger 2240-MW(t) HTGR. (author)

  10. Performance Evaluation Model for Application Layer Firewalls.

    Science.gov (United States)

    Xuan, Shichang; Yang, Wu; Dong, Hui; Zhang, Jiangchuan

    2016-01-01

    Application layer firewalls protect the trusted area network against information security risks. However, firewall performance may affect user experience. Therefore, performance analysis plays a significant role in the evaluation of application layer firewalls. This paper presents an analytic model of the application layer firewall, based on a system analysis to evaluate the capability of the firewall. In order to enable users to improve the performance of the application layer firewall with limited resources, resource allocation was evaluated to obtain the optimal resource allocation scheme in terms of throughput, delay, and packet loss rate. The proposed model employs the Erlangian queuing model to analyze the performance parameters of the system with regard to the three layers (network, transport, and application layers). Then, the analysis results of all the layers are combined to obtain the overall system performance indicators. A discrete event simulation method was used to evaluate the proposed model. Finally, limited service desk resources were allocated to obtain the values of the performance indicators under different resource allocation scenarios in order to determine the optimal allocation scheme. Under limited resource allocation, this scheme enables users to maximize the performance of the application layer firewall.
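The paper's per-layer analysis uses Erlangian queuing models. As a simpler stand-in, the closed-form M/M/1 relations below illustrate how utilization, delay, and backlog interact at a single layer (an illustrative simplification, not the paper's three-layer model):

```python
def mm1_metrics(arrival_rate, service_rate):
    """Steady-state metrics of an M/M/1 queue (requires arrival < service):
    utilization rho, mean sojourn time W, and mean jobs in system L."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable")
    rho = arrival_rate / service_rate
    mean_delay = 1.0 / (service_rate - arrival_rate)   # sojourn time W
    mean_jobs = rho / (1.0 - rho)                      # jobs in system L = lambda * W
    return rho, mean_delay, mean_jobs
```

Note that `mean_jobs` equals `arrival_rate * mean_delay`, consistent with Little's law; allocating more service capacity to a layer lowers both delay and backlog at the cost of resources elsewhere, which is the trade-off the resource-allocation evaluation explores.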

  11. Performance Evaluation Model for Application Layer Firewalls.

    Directory of Open Access Journals (Sweden)

    Shichang Xuan

    Full Text Available Application layer firewalls protect the trusted area network against information security risks. However, firewall performance may affect user experience. Therefore, performance analysis plays a significant role in the evaluation of application layer firewalls. This paper presents an analytic model of the application layer firewall, based on a system analysis to evaluate the capability of the firewall. In order to enable users to improve the performance of the application layer firewall with limited resources, resource allocation was evaluated to obtain the optimal resource allocation scheme in terms of throughput, delay, and packet loss rate. The proposed model employs the Erlangian queuing model to analyze the performance parameters of the system with regard to the three layers (network, transport, and application layers. Then, the analysis results of all the layers are combined to obtain the overall system performance indicators. A discrete event simulation method was used to evaluate the proposed model. Finally, limited service desk resources were allocated to obtain the values of the performance indicators under different resource allocation scenarios in order to determine the optimal allocation scheme. Under limited resource allocation, this scheme enables users to maximize the performance of the application layer firewall.

  12. Tailored model abstraction in performance assessments

    International Nuclear Information System (INIS)

    Kessler, J.H.

    1995-01-01

    Total System Performance Assessments (TSPAs) are likely to be one of the most significant parts of making safety cases for the continued development and licensing of geologic repositories for the disposal of spent fuel and HLW. Thus, it is critical that the TSPA model capture the 'essence' of the physical processes relevant to demonstrating that the appropriate regulation is met. But how much detail about the physical processes must be modeled and understood before there is enough confidence that the appropriate essence has been captured? In this summary the level of model abstraction that is required is discussed. Approaches for subsystem and total system performance analyses are outlined, and the role of best estimate models is examined. It is concluded that a conservative approach for repository performance, based on a limited amount of field and laboratory data, can provide sufficient confidence for a regulatory decision

  13. The catastrophic decline of the Sumatran rhino (Dicerorhinus sumatrensis harrissoni) in Sabah: Historic exploitation, reduced female reproductive performance and population viability

    Directory of Open Access Journals (Sweden)

    P. Kretzschmar

    2016-04-01

    Full Text Available The reasons for catastrophic declines of Sumatran rhinos are far from clear, and data necessary to improve decisions for conservation management are often lacking. We reviewed literature and assembled a comprehensive data set on surveys of the Sumatran rhino subspecies (Dicerorhinus sumatrensis harrissoni) in the Malaysian state of Sabah on Borneo to chart the historical development of the population in Sabah and its exploitation until the present day. We fitted resource selection functions to identify habitat features preferred by a remnant population of rhinos living in the Tabin Wildlife Reserve in Sabah, and ran a series of population viability analyses (PVAs) to extract the key demographic parameters most likely to affect population dynamics. We show that, in their preferred habitat, the individuals in the reserve were most likely encountered in elevated areas away from roads, in close distance to mud-volcanoes, with a low presence of human trespassers and a wallow on site, and within a neighbourhood of dense forest and grassland patches, preferably on Fluvisols and Acrisols. Our population viability analyses identified the percentage of breeding females and female lifetime reproductive period as the crucial parameters driving population dynamics; in combination with total protection, even moderate improvements could elevate population viability substantially. The analysis also indicates that unrestrained hunting between 1930 and 1950 drastically reduced the historical rhino population in Sabah and that the remnant population could be rescued by combining the effort of total protection and stimulation of breeding activity. Based on our results, we recommend translocating isolated reproductively healthy individuals to protected locations and undertaking measures to maximise conceptions, or running state-of-the-art reproductive management with assisted reproduction techniques. Our study demonstrates that a judicious combination of techniques can do
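As an illustration of why the percentage of breeding females dominates the dynamics, a deliberately simplified female-only projection can be sketched. All parameter values below are invented for illustration and are not the paper's PVA inputs.

```python
def growth_multiplier(p_breed, birth_rate, survival):
    """Expected per-year multiplier for a female-only population:
    survivors plus female calves born to breeding females."""
    return survival * (1.0 + p_breed * birth_rate)

def project(n0, years, p_breed, birth_rate, survival):
    """Deterministic expected population trajectory."""
    n = float(n0)
    for _ in range(years):
        n *= growth_multiplier(p_breed, birth_rate, survival)
    return n

# Invented values: 20 females, 50 years, annual survival 0.95,
# 0.3 female calves per breeding female per year
low = project(20, 50, p_breed=0.10, birth_rate=0.3, survival=0.95)
high = project(20, 50, p_breed=0.50, birth_rate=0.3, survival=0.95)
```

With these numbers the low breeding fraction yields a per-year multiplier below 1 (decline), while the higher fraction pushes it above 1 (growth); a full PVA adds demographic and environmental stochasticity on top of this skeleton.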

  14. Nanostructured Basaltfiberconcrete Exploitational Characteristics

    Science.gov (United States)

    Saraykina, K. A.; Shamanov, V. A.

    2017-11-01

    The article demonstrates that the mass use of basalt fiber concrete (BFC) is constrained by insufficient study of its durability and serviceability in a variety of environments. This research is aimed at the study of the basalt fiber corrosion processes in the cement stone of BFC and at the control of new-product structure formation in order to protect the reinforcing fiber from alkaline destruction and thereby improve the exploitational characteristics of the composite. The research revealed that the modification of BFC by the dispersion of MWNTs contributes to the directional formation of new products in the cement matrix. The HAM additive in BFC provides for the binding of portlandite into low-basic calcium hydroaluminosilicates, thus reducing the aggressive effect of the cement environment on the properties of the reinforcing fibers. The complex modification of BFC with nanostructured additives provides for an increase in its durability and exploitational properties (strength, frost resistance and water resistance) due to basalt fiber protection from alkali corrosion, on account of the compacting of the contact zone "basalt fiber - cement stone" and the designed structure and morphology of the new cement-matrix products over the fiber surface.

  15. Critical review of glass performance modeling

    International Nuclear Information System (INIS)

    Bourcier, W.L.

    1994-07-01

    Borosilicate glass is to be used for permanent disposal of high-level nuclear waste in a geologic repository. Mechanistic chemical models are used to predict the rate at which radionuclides will be released from the glass under repository conditions. The most successful and useful of these models link reaction path geochemical modeling programs with a glass dissolution rate law that is consistent with transition state theory. These models have been used to simulate several types of short-term laboratory tests of glass dissolution and to predict the long-term performance of the glass in a repository. Although mechanistically based, the current models are limited by a lack of unambiguous experimental support for some of their assumptions. The most severe problem of this type is the lack of an existing validated mechanism that controls long-term glass dissolution rates. Current models can be improved by performing carefully designed experiments and using the experimental results to validate the rate-controlling mechanisms implicit in the models. These models should be supported with long-term experiments to be used for model validation. The mechanistic basis of the models should be explored by using modern molecular simulations such as molecular orbital and molecular dynamics to investigate both the glass structure and its dissolution process

  16. Critical review of glass performance modeling

    Energy Technology Data Exchange (ETDEWEB)

    Bourcier, W.L. [Lawrence Livermore National Lab., CA (United States)

    1994-07-01

    Borosilicate glass is to be used for permanent disposal of high-level nuclear waste in a geologic repository. Mechanistic chemical models are used to predict the rate at which radionuclides will be released from the glass under repository conditions. The most successful and useful of these models link reaction path geochemical modeling programs with a glass dissolution rate law that is consistent with transition state theory. These models have been used to simulate several types of short-term laboratory tests of glass dissolution and to predict the long-term performance of the glass in a repository. Although mechanistically based, the current models are limited by a lack of unambiguous experimental support for some of their assumptions. The most severe problem of this type is the lack of an existing validated mechanism that controls long-term glass dissolution rates. Current models can be improved by performing carefully designed experiments and using the experimental results to validate the rate-controlling mechanisms implicit in the models. These models should be supported with long-term experiments to be used for model validation. The mechanistic basis of the models should be explored by using modern molecular simulations such as molecular orbital and molecular dynamics to investigate both the glass structure and its dissolution process.

  17. Performance modeling, stochastic networks, and statistical multiplexing

    CERN Document Server

    Mazumdar, Ravi R

    2013-01-01

    This monograph presents a concise mathematical approach for modeling and analyzing the performance of communication networks with the aim of introducing an appropriate mathematical framework for modeling and analysis as well as understanding the phenomenon of statistical multiplexing. The models, techniques, and results presented form the core of traffic engineering methods used to design, control and allocate resources in communication networks.The novelty of the monograph is the fresh approach and insights provided by a sample-path methodology for queueing models that highlights the importan

  18. A statistical model for predicting muscle performance

    Science.gov (United States)

    Byerly, Diane Leslie De Caix

    The objective of these studies was to develop a capability for predicting muscle performance and fatigue to be utilized for both space- and ground-based applications. To develop this predictive model, healthy test subjects performed a defined, repetitive dynamic exercise to failure using a Lordex spinal machine. Throughout the exercise, surface electromyography (SEMG) data were collected from the erector spinae using a Mega Electronics ME3000 muscle tester and surface electrodes placed on both sides of the back muscle. These data were analyzed using a 5th order Autoregressive (AR) model and statistical regression analysis. It was determined that an AR derived parameter, the mean average magnitude of AR poles, significantly correlated with the maximum number of repetitions (designated Rmax) that a test subject was able to perform. Using the mean average magnitude of AR poles, a test subject's performance to failure could be predicted as early as the sixth repetition of the exercise. This predictive model has the potential to provide a basis for improving post-space flight recovery, monitoring muscle atrophy in astronauts and assessing the effectiveness of countermeasures, monitoring astronaut performance and fatigue during Extravehicular Activity (EVA) operations, providing pre-flight assessment of the ability of an EVA crewmember to perform a given task, improving the design of training protocols and simulations for strenuous International Space Station assembly EVA, and enabling EVA work task sequences to be planned enhancing astronaut performance and safety. Potential ground-based, medical applications of the predictive model include monitoring muscle deterioration and performance resulting from illness, establishing safety guidelines in the industry for repetitive tasks, monitoring the stages of rehabilitation for muscle-related injuries sustained in sports and accidents, and enhancing athletic performance through improved training protocols while reducing
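The AR-pole statistic described above can be sketched as follows: fit an AR(5) model via the Yule-Walker equations and average the pole magnitudes. This is a generic reconstruction run on synthetic data, not the study's actual SEMG pipeline.

```python
import numpy as np

def ar_pole_mean_magnitude(x, order=5):
    """Fit an AR(order) model by solving the Yule-Walker equations,
    then return the mean magnitude of the AR poles."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    # biased autocovariance estimates r[0..order]
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:])                   # AR coefficients
    # poles are the roots of z^p - a1 z^(p-1) - ... - ap
    poles = np.roots(np.concatenate(([1.0], -a)))
    return np.abs(poles).mean()

# synthetic stationary signal standing in for an SEMG window
rng = np.random.default_rng(0)
sig = np.zeros(2000)
e = rng.standard_normal(2000)
for t in range(1, 2000):
    sig[t] = 0.8 * sig[t - 1] + e[t]
m = ar_pole_mean_magnitude(sig, order=5)
```

In the study's setting, this scalar would be computed per repetition window and regressed against the repetition count to failure (Rmax).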

  19. Exploitative Learning by Exporting

    DEFF Research Database (Denmark)

    Golovko, Elena; Lopes Bento, Cindy; Sofka, Wolfgang

    Decisions on entering foreign markets are among the most challenging but also potentially rewarding strategy choices managers can make. In this study, we examine the effect of export entry on the firm investment decisions in two activities associated with learning about new technologies and learning about new markets, R&D investments and marketing investments, in search of novel insights into the content and process underlying learning by exporting. We draw from organizational learning theory for predicting changes in both R&D and marketing investment patterns that accompany firm entry. … It is predominantly the marketing-related investment decisions associated with starting to export that lead to increases in firm productivity. We conclude that learning-by-exporting might be more properly characterized as 'learning about and exploiting new markets' rather than 'learning about new technologies' …

  20. Wave and Wind Model Performance Metrics Tools

    Science.gov (United States)

    Choi, J. K.; Wang, D. W.

    2016-02-01

    Continual improvements and upgrades of Navy ocean wave and wind models are essential to the assurance of battlespace environment predictability of ocean surface wave and surf conditions in support of Naval global operations. Thus, constant verification and validation of model performance is equally essential to assure the progress of model developments and maintain confidence in the predictions. Global and regional scale model evaluations may require large areas and long periods of time. For observational data to compare against, altimeter winds and waves along the tracks of past and current operational satellites, as well as moored/drifting buoys, can be used for global and regional coverage. Using data and model runs from previous trials such as the planned experiment, the Dynamics of the Adriatic in Real Time (DART), we demonstrated the use of accumulated altimeter wind and wave data over several years to obtain an objective evaluation of the performance of the SWAN (Simulating Waves Nearshore) model running in the Adriatic Sea. The assessment provided detailed performance of wind and wave models by using maps of cell-averaged statistical variables, with spatial statistics including slope, correlation, and scatter index to summarize model performance. Such a methodology is easily generalized to other regions and to global scales. Operational technology currently used by subject matter experts evaluating the Navy Coastal Ocean Model and the Hybrid Coordinate Ocean Model can be expanded to evaluate wave and wind models using tools developed for ArcMAP, a GIS application developed by ESRI. Recent inclusion of altimeter and buoy data into a common format through the Naval Oceanographic Office's (NAVOCEANO) quality control system, together with the netCDF standards applicable to all model output, makes possible the fusion of these data and direct model verification. Also, procedures were developed for the accumulation of match-ups of modelled and observed parameters to form a database.
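The statistics named above (slope, correlation, scatter index) are standard verification metrics. A minimal sketch for one set of collocated model-observation pairs, with invented values:

```python
import numpy as np

def verification_stats(model, obs):
    """Common wave/wind verification statistics for collocated
    model-observation pairs (one variant of each definition)."""
    m, o = np.asarray(model, float), np.asarray(obs, float)
    bias = np.mean(m - o)
    rmse = np.sqrt(np.mean((m - o) ** 2))
    corr = np.corrcoef(m, o)[0, 1]
    # scatter index: centered RMS difference normalized by the observed mean
    si = np.sqrt(np.mean(((m - m.mean()) - (o - o.mean())) ** 2)) / o.mean()
    slope = np.sum(m * o) / np.sum(o ** 2)          # regression through origin
    return {"bias": bias, "rmse": rmse, "corr": corr, "si": si, "slope": slope}

obs = np.array([1.0, 1.5, 2.0, 2.5, 3.0])   # e.g. buoy wave heights (m)
mod = np.array([1.1, 1.4, 2.2, 2.4, 3.3])   # collocated model values
stats = verification_stats(mod, obs)
```

Applying this per grid cell over accumulated altimeter match-ups yields the cell-averaged maps the abstract describes.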

  1. Generating Performance Models for Irregular Applications

    Energy Technology Data Exchange (ETDEWEB)

    Friese, Ryan D.; Tallent, Nathan R.; Vishnu, Abhinav; Kerbyson, Darren J.; Hoisie, Adolfy

    2017-05-30

    Many applications have irregular behavior --- non-uniform input data, input-dependent solvers, irregular memory accesses, unbiased branches --- that cannot be captured using today's automated performance modeling techniques. We describe new hierarchical critical path analyses for the Palm model generation tool. To create a model's structure, we capture tasks along representative MPI critical paths. We create a histogram of critical tasks with parameterized task arguments and instance counts. To model each task, we identify hot instruction-level sub-paths and model each sub-path based on data flow, instruction scheduling, and data locality. We describe application models that generate accurate predictions for strong scaling when varying CPU speed, cache speed, memory speed, and architecture. We present results for the Sweep3D neutron transport benchmark; Page Rank on multiple graphs; Support Vector Machine with pruning; and PFLOTRAN's reactive flow/transport solver with domain-induced load imbalance.

  2. Measurement-based reliability/performability models

    Science.gov (United States)

    Hsueh, Mei-Chen

    1987-01-01

    Measurement-based models based on real error-data collected on a multiprocessor system are described. Model development from the raw error-data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.
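The paper's point that non-exponential holding times require a semi-Markov process can be illustrated with a small simulation sketch. The two-state transition matrix and holding-time distributions below are invented, not the measured IBM 3081 parameters.

```python
import random

def simulate_semi_markov(P, sample_holding, state0, steps, seed=7):
    """Simulate a semi-Markov chain: transitions follow matrix P, but the
    holding time in each state comes from an arbitrary distribution.
    Returns the long-run fraction of time spent in each state."""
    random.seed(seed)
    state, t = state0, 0.0
    occupancy = {s: 0.0 for s in range(len(P))}
    for _ in range(steps):
        hold = sample_holding(state)
        occupancy[state] += hold
        t += hold
        r, acc = random.random(), 0.0
        for nxt, p in enumerate(P[state]):
            acc += p
            if r < acc:
                state = nxt
                break
    return {s: occupancy[s] / t for s in occupancy}

# Two states (0 = normal operation, 1 = error/recovery); exponential holding
# in the normal state, heavy-tailed Weibull holding in the error state
P = [[0.9, 0.1],
     [0.8, 0.2]]
frac = simulate_semi_markov(
    P,
    lambda s: random.expovariate(1.0) if s == 0
    else random.weibullvariate(0.5, 0.7),
    state0=0, steps=10000)
```

A plain Markov model would force both holding times to be exponential; swapping in the Weibull distribution is exactly the generalization the sensitivity analysis in the paper investigates.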

  3. Mental model differences between external designers and their clients : The influence on project exploration, project exploitation and project performance

    NARCIS (Netherlands)

    Tabeau, K.E.; Gemser, G.; Wijnberg, N.; Hultink, H.J.

    2013-01-01

    Prior research indicated that external designers are often hired by their clients to bring new knowledge into their organization. To assure that an external designer’s knowledge is implemented in the organization of the client, managing the relationship between the two is essential. Although it

  4. Performance Measurement Model A TarBase model with ...

    Indian Academy of Sciences (India)

    rohit

    Model A 8.0 2.0 94.52% 88.46% 76 108 12 12 0.86 0.91 0.78 0.94. Model B 2.0 2.0 93.18% 89.33% 64 95 10 9 0.88 0.90 0.75 0.98. The above results for TEST-1 show details for our two models (Model A and Model B). Performance of Model A after adding 32 negative datasets from MiRTif to our testing set (MiRecords) ...

  5. A Procurement Performance Model for Construction Frameworks

    Directory of Open Access Journals (Sweden)

    Terence Y M Lam

    2015-07-01

    Full Text Available Collaborative construction frameworks have been developed in the United Kingdom (UK) to create longer term relationships between clients and suppliers in order to improve project outcomes. Research undertaken into highways maintenance set within a major county council has confirmed that such collaborative procurement methods can improve time, cost and quality of construction projects. Building upon this and examining the same single case, this research aims to develop a performance model through identification of performance drivers in the whole project delivery process, including pre- and post-contract phases. An a priori performance model based on operational and sociological constructs was proposed and then checked by a pilot study. Factor analysis and central tendency statistics from the questionnaires, as well as content analysis from the interview transcripts, were conducted. It was confirmed that long term relationships, financial and non-financial incentives and stronger communication are the sociological behaviour factors driving performance. The interviews also established that key performance indicators (KPIs) can be used as an operational measure to improve performance. With the a posteriori performance model, client project managers can effectively and collaboratively manage contractor performance through procurement measures, including the use of a longer term and KPIs for the contract, so that the expected project outcomes can be achieved. The findings also make a significant contribution to construction framework procurement theory by identifying the interrelated sociological and operational performance drivers. This study is set predominantly in the field of highways civil engineering. It is suggested that building-based projects, or other projects that share similar characteristics, be grouped together and used for further research of the phenomena discovered.

  6. Models for Automated Tube Performance Calculations

    International Nuclear Information System (INIS)

    Brunkhorst, C.

    2002-01-01

    High power radio-frequency systems, as typically used in fusion research devices, utilize vacuum tubes. Evaluation of vacuum tube performance involves data taken from tube operating curves. The acquisition of data from such graphical sources is a tedious process. A simple modeling method is presented that will provide values of tube currents for a given set of element voltages. These models may be used as subroutines in iterative solutions of amplifier operating conditions for a specific loading impedance

  7. Learning Metasploit exploitation and development

    CERN Document Server

    Balapure, Aditya

    2013-01-01

    A practical, hands-on tutorial with step-by-step instructions. The book follows a smooth and easy-to-follow tutorial approach, covering the essentials and then showing the readers how to write more sophisticated exploits. This book targets exploit developers, vulnerability analysts and researchers, network administrators, and ethical hackers looking to gain advanced knowledge in exploit development and vulnerability identification. The primary goal is to take readers wishing to get into more advanced exploit discovery to the next level. Prior experience exploiting basic st

  8. Performance Evaluation and Modelling of Container Terminals

    Science.gov (United States)

    Venkatasubbaiah, K.; Rao, K. Narayana; Rao, M. Malleswara; Challa, Suresh

    2018-02-01

    The present paper evaluates and analyzes the performance of 28 container terminals of South East Asia through data envelopment analysis (DEA), principal component analysis (PCA) and a hybrid DEA-PCA method. The DEA technique is utilized to identify efficient decision-making units (DMUs) and to rank DMUs in a peer appraisal mode. PCA is a multivariate statistical method to evaluate the performance of container terminals. In the hybrid method, DEA is integrated with PCA to arrive at the ranking of container terminals. Based on the composite ranking, performance modelling and optimization of container terminals are carried out through response surface methodology (RSM).
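A PCA-based composite score of the kind used for ranking can be sketched as follows. The indicator data are invented, and this is only the PCA leg, not the paper's DEA-PCA hybrid.

```python
import numpy as np

def pca_composite_scores(X):
    """Composite performance score from the first principal component
    of standardized indicator data (rows = terminals, cols = indicators)."""
    X = np.asarray(X, float)
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    cov = np.cov(Z, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    pc1 = vecs[:, np.argmax(vals)]        # leading eigenvector
    if pc1.sum() < 0:                     # fix sign: higher raw values score higher
        pc1 = -pc1
    return Z @ pc1

# Invented throughput / berth / crane indicators for 4 terminals
X = np.array([[50, 4, 10],
              [80, 6, 14],
              [30, 3,  8],
              [70, 5, 12]])
scores = pca_composite_scores(X)
ranking = np.argsort(-scores)             # best terminal first
```

With positively correlated indicators, the first component acts as an overall "size/performance" axis, so sorting by the score gives a composite ranking.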

  9. Utilities for high performance dispersion model PHYSIC

    International Nuclear Information System (INIS)

    Yamazawa, Hiromi

    1992-09-01

    The description and usage of the utilities for the dispersion calculation model PHYSIC were summarized. The model was developed in the study of developing high performance SPEEDI with the purpose of introducing meteorological forecast function into the environmental emergency response system. The procedure of PHYSIC calculation consists of three steps; preparation of relevant files, creation and submission of JCL, and graphic output of results. A user can carry out the above procedure with the help of the Geographical Data Processing Utility, the Model Control Utility, and the Graphic Output Utility. (author)

  10. A practical model for sustainable operational performance

    International Nuclear Information System (INIS)

    Vlek, C.A.J.; Steg, E.M.; Feenstra, D.; Gerbens-Leenis, W.; Lindenberg, S.; Moll, H.; Schoot Uiterkamp, A.; Sijtsma, F.; Van Witteloostuijn, A.

    2002-01-01

    By means of a concrete model for sustainable operational performance, enterprises can report uniformly on the sustainability of their contributions to the economy, welfare and the environment. The development and design of a three-dimensional monitoring system is presented and discussed

  11. Performance comparison of 850-nm and 1550-nm VCSELs exploiting OOK, OFDM, and 4-PAM over SMF/MMF links for low-cost optical interconnects

    DEFF Research Database (Denmark)

    Karinou, Fotini; Deng, Lei; Rodes Lopez, Roberto

    2013-01-01

    -shift keying (QPSK)/16-ary quadrature amplitude modulation (16QAM) with direct detection, over SMF (100m and 5km) and MMF (100m and 1km) short-range links, for their potential application in low-cost rack-to-rack optical interconnects. Moreover, we assess the performance of quaternary-pulse amplitude...

  12. Data Model Performance in Data Warehousing

    Science.gov (United States)

    Rorimpandey, G. C.; Sangkop, F. I.; Rantung, V. P.; Zwart, J. P.; Liando, O. E. S.; Mewengkang, A.

    2018-02-01

    Data warehouses have increasingly become important in organizations that have large amounts of data. A data warehouse is not a product but part of a solution for the decision support system in those organizations. The data model is the starting point for designing and developing data warehouse architectures; thus, the data model needs stable interfaces and needs to remain consistent for a longer period of time. The aim of this research is to determine which data model in data warehousing has the best performance. The research method is descriptive analysis, which has three main tasks: collection and organization of data, analysis of data, and interpretation of data. The results, examined with a statistical analysis method, indicate that there is no statistically significant difference among the data models used in data warehousing. An organization can utilize any of the four proposed data models when designing and developing a data warehouse.

  13. Performance model for a CCTV-MTI

    International Nuclear Information System (INIS)

    Dunn, D.R.; Dunbar, D.L.

    1978-01-01

    CCTV-MTI (closed circuit television--moving target indicator) monitors represent typical components of access control systems, as for example in a material control and accounting (MC and A) safeguards system. This report describes a performance model for a CCTV-MTI monitor. The performance of a human in an MTI role is a separate problem and is not addressed here. This work was done in conjunction with the NRC sponsored LLL assessment procedure for MC and A systems which is presently under development. We develop a noise model for a generic camera system and a model for the detection mechanism for a postulated MTI design. These models are then translated into an overall performance model. Measures of performance are probabilities of detection and false alarm as a function of intruder-induced grey level changes in the protected area. Sensor responsivity, lens F-number, source illumination and spectral response were treated as design parameters. Some specific results are illustrated for a postulated design employing a camera with a Si-target vidicon. Reflectance or light level changes in excess of 10% due to an intruder will be detected with a very high probability for the portion of the visible spectrum with wavelengths above 500 nm. The resulting false alarm rate was less than one per year. We did not address sources of nuisance alarms due to adverse environments, reliability, resistance to tampering, nor did we examine the effects of the spatial frequency response of the optics. All of these are important and will influence overall system detection performance
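The detection/false-alarm trade-off described above can be illustrated with a textbook threshold model: pixel noise is Gaussian, an intruder shifts the mean grey level by delta, and the monitor alarms when the observed change exceeds a threshold. This is a generic sketch with invented numbers, not the report's actual sensor model.

```python
import math

def q(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def detection_probs(delta, sigma, threshold):
    """Probability of detection (signal shifts the mean by delta) and
    probability of false alarm (noise only) for a simple threshold test."""
    pfa = q(threshold / sigma)
    pd = q((threshold - delta) / sigma)
    return pd, pfa

# 10% grey-level change, 2% noise std, alarm threshold at 6% (illustrative)
pd, pfa = detection_probs(delta=0.10, sigma=0.02, threshold=0.06)
```

Raising the threshold lowers the false alarm rate at the cost of detection probability, which is exactly the curve a CCTV-MTI design would be evaluated against.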

  14. 3D Massive MIMO Systems: Channel Modeling and Performance Analysis

    KAUST Repository

    Nadeem, Qurrat-Ul-Ain

    2015-03-01

    Multiple-input-multiple-output (MIMO) systems of current LTE releases are capable of adaptation in the azimuth only. More recently, the trend is to enhance the system performance by exploiting the channel's degrees of freedom in the elevation through the dynamic adaptation of the vertical antenna beam pattern. This necessitates the derivation and characterization of three-dimensional (3D) channels. Over the years, channel models have evolved to address the challenges of wireless communication technologies. In parallel to theoretical studies on channel modeling, many standardized channel models like COST-based models, 3GPP SCM, WINNER, ITU have emerged that act as references for industries and telecommunication companies to assess system-level and link-level performances of advanced signal processing techniques over real-like channels. Given that existing channel models are only two-dimensional (2D) in nature, a large effort in channel modeling is needed to study the impact of the channel component in the elevation direction. The first part of this work sheds light on the current 3GPP activity around 3D channel modeling and beamforming, an aspect that to our knowledge has not been extensively covered by a research publication. The standardized MIMO channel model is presented, which incorporates both the propagation effects of the environment and the radio effects of the antennas. In order to facilitate future studies on the use of 3D beamforming, the main features of the proposed 3D channel model are discussed. A brief overview of the future 3GPP 3D channel model being outlined for the next generation of wireless networks is also provided. In the subsequent part of this work, we present an information-theoretic channel model for MIMO systems that supports the elevation dimension.
The model is based on the principle of maximum entropy, which enables us to determine the distribution of the channel matrix consistent with the prior information on the angles of departure and

  15. Assessing The Performance of Hydrological Models

    Science.gov (United States)

    van der Knijff, Johan

    The performance of hydrological models is often characterized using the coefficient of efficiency, E. The sensitivity of E to extreme streamflow values, and the difficulty of deciding what value of E should be used as a threshold to identify 'good' models or model parameterizations, have proven to be serious shortcomings of this index. This paper reviews some alternative performance indices that have appeared in the literature. Legates and McCabe (1999) suggested a more generalized form of E, E'(j,B). Here, j is a parameter that controls how much emphasis is put on extreme streamflow values, and B defines a benchmark or 'null hypothesis' against which the results of the model are tested. E'(j,B) was used to evaluate a large number of parameterizations of a conceptual rainfall-runoff model, using 6 different combinations of j and B. First, the effect of j and B is explained. Second, it is demonstrated how the index can be used to explicitly test hypotheses about the model and the data. This approach appears to be particularly attractive if the index is used as a likelihood measure within a GLUE-type analysis.
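The generalized index can be written as E'(j,B) = 1 - sum(|O - P|^j) / sum(|O - B|^j), where O are observations, P predictions, and B the benchmark series; j = 2 with B equal to the observed mean recovers the classic coefficient of efficiency. A minimal implementation following that definition, with invented streamflow values:

```python
import numpy as np

def efficiency(obs, pred, j=2.0, benchmark=None):
    """Generalized coefficient of efficiency E'(j, B).
    j controls the emphasis on extremes (j=2 gives the classic E);
    benchmark B is the null-model prediction (default: mean of obs)."""
    o, p = np.asarray(obs, float), np.asarray(pred, float)
    b = (np.full_like(o, o.mean()) if benchmark is None
         else np.asarray(benchmark, float))
    return 1.0 - np.sum(np.abs(o - p) ** j) / np.sum(np.abs(o - b) ** j)

obs = np.array([2.0, 3.0, 10.0, 4.0, 1.0])    # invented streamflows
pred = np.array([2.5, 2.8, 8.0, 4.5, 1.2])
e2 = efficiency(obs, pred)            # classic E, sensitive to the peak
e1 = efficiency(obs, pred, j=1.0)     # absolute-error variant
```

Lowering j de-emphasizes the peak-flow error, and passing a seasonal climatology as `benchmark` turns the index into an explicit test against that null hypothesis.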

  16. SpF: Enabling Petascale Performance for Pseudospectral Dynamo Models

    Science.gov (United States)

    Jiang, W.; Clune, T.; Vriesema, J.; Gutmann, G.

    2013-12-01

    Pseudospectral (PS) methods possess a number of characteristics (e.g., efficiency, accuracy, natural boundary conditions) that are extremely desirable for dynamo models. Unfortunately, dynamo models based upon PS methods face a number of daunting challenges, which include exposing additional parallelism, leveraging hardware accelerators, exploiting hybrid parallelism, and improving the scalability of global memory transposes. Although these issues are a concern for most models, solutions for PS methods tend to require far more pervasive changes to underlying data and control structures. Further, improvements in performance in one model are difficult to transfer to other models, resulting in significant duplication of effort across the research community. We have developed an extensible software framework for pseudospectral methods called SpF that is intended to enable extreme scalability and optimal performance. High-level abstractions provided by SpF unburden applications of the responsibility of managing domain decomposition and load balance while reducing the changes in code required to adapt to new computing architectures. The key design concept in SpF is that each phase of the numerical calculation is partitioned into disjoint numerical 'kernels' that can be performed entirely in-processor. The granularity of domain decomposition provided by SpF is only constrained by the data-locality requirements of these kernels. SpF builds on top of optimized vendor libraries for common numerical operations such as transforms, matrix solvers, etc., but can also be configured to use open source alternatives for portability. SpF includes several alternative schemes for global data redistribution and is expected to serve as an ideal testbed for further research into optimal approaches for different network architectures. In this presentation, we will describe the basic architecture of SpF as well as preliminary performance data and experience with adapting legacy dynamo codes.

  17. System Advisor Model: Flat Plate Photovoltaic Performance Modeling Validation Report

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, Janine [National Renewable Energy Lab. (NREL), Golden, CO (United States); Whitmore, Jonathan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Kaffine, Leah [National Renewable Energy Lab. (NREL), Golden, CO (United States); Blair, Nate [National Renewable Energy Lab. (NREL), Golden, CO (United States); Dobos, Aron P. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2013-12-01

    The System Advisor Model (SAM) is a free software tool that performs detailed analysis of both system performance and system financing for a variety of renewable energy technologies. This report provides detailed validation of the SAM flat plate photovoltaic performance model by comparing SAM-modeled PV system generation data to actual measured production data for nine PV systems ranging from 75 kW to greater than 25 MW in size. The results show strong agreement between SAM predictions and field data, with annualized prediction error below 3% for all fixed tilt cases and below 8% for all one axis tracked cases. The analysis concludes that snow cover and system outages are the primary sources of disagreement, and other deviations resulting from seasonal biases in the irradiation models and one axis tracking issues are discussed in detail.
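The annualized prediction error quoted above is simply the difference between modeled and measured annual energy totals, relative to the measured total. A minimal sketch of that metric, using made-up hourly data rather than SAM output:

```python
import numpy as np

# Hypothetical hourly energy series (kWh) for one year: modeled vs. measured.
rng = np.random.default_rng(0)
measured = np.clip(rng.normal(50, 20, 8760), 0, None)
modeled = measured * 1.02  # assume the model over-predicts by 2%

# Annualized prediction error: difference of annual totals, relative to measured.
error_pct = 100.0 * (modeled.sum() - measured.sum()) / measured.sum()
print(f"annualized prediction error: {error_pct:.1f}%")
```

By this definition, a system reported at "below 3%" means the modeled and measured annual totals agree to within 3% of the measured total.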

  18. Performance modeling of network data services

    Energy Technology Data Exchange (ETDEWEB)

    Haynes, R.A.; Pierson, L.G.

    1997-01-01

Networks at major computational organizations are becoming increasingly complex. The introduction of large massively parallel computers and supercomputers with gigabyte memories is requiring greater and greater bandwidth for network data transfers to widely dispersed clients. For networks to provide adequate data transfer services to high performance computers and to the remote users connected to them, the networking components must be optimized against a combination of internal and external performance criteria. This paper describes research done at Sandia National Laboratories to model network data services and to visualize the flow of data from source to sink when using those services.

  19. Probabilistic Radiological Performance Assessment Modeling and Uncertainty

    Science.gov (United States)

    Tauxe, J.

    2004-12-01

A generic probabilistic radiological Performance Assessment (PA) model is presented. The model, built using the GoldSim systems simulation software platform, concerns contaminant transport and dose estimation in support of decision making with uncertainty. Both the U.S. Nuclear Regulatory Commission (NRC) and the U.S. Department of Energy (DOE) require assessments of the potential future risk to human receptors of disposal of LLW. Commercially operated LLW disposal facilities are licensed by the NRC (or agreement states), and the DOE operates such facilities for disposal of DOE-generated LLW. The type of PA model presented is probabilistic in nature, and hence reflects the current state of knowledge about the site by using probability distributions to capture what is expected (central tendency or average) and the uncertainty (e.g., standard deviation) associated with input parameters, propagating these through the model to arrive at output distributions that reflect both expected performance and the overall uncertainty in the system. Estimates of contaminant release rates, concentrations in environmental media, and resulting doses to human receptors well into the future are made by running the model in Monte Carlo fashion, with each realization representing a possible combination of input parameter values. Statistical summaries of the results can be compared to regulatory performance objectives, and decision makers are better informed of the inherently uncertain aspects of the model that support their decision-making. While this information may make some regulators uncomfortable, they must realize that uncertainties which were hidden in a deterministic analysis are revealed in a probabilistic analysis, and the chance of making a correct decision is now known rather than hoped for. The model includes many typical features and processes that would be part of a PA, but it is entirely fictitious: it does not represent any particular site and is meant as a generic example.
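The Monte Carlo propagation described above can be sketched in a few lines: draw each uncertain input from its distribution, evaluate the (here heavily simplified) transport-and-dose chain for every realization, and summarize the output distribution against a performance objective. All distributions, rates and the dose factor below are invented for illustration and bear no relation to any real site:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000  # number of Monte Carlo realizations

# Invented uncertain inputs, each a distribution rather than a point value.
release = rng.lognormal(mean=np.log(1e6), sigma=0.5, size=N)   # Bq/yr to aquifer
dilution = rng.triangular(1e5, 5e5, 2e6, size=N)               # m3/yr of groundwater
ingestion = 0.73                                               # m3/yr drinking water
dose_factor = 2.8e-8                                           # Sv/Bq (illustrative)

# One realization of the model per input draw: concentration -> dose.
concentration = release / dilution                 # Bq/m3
dose = concentration * ingestion * dose_factor     # Sv/yr

# Statistical summary compared against a performance objective.
objective = 2.5e-4  # Sv/yr (0.25 mSv/yr, a common benchmark)
print(f"median dose: {np.median(dose):.2e} Sv/yr")
print(f"95th percentile: {np.percentile(dose, 95):.2e} Sv/yr")
print(f"fraction above objective: {(dose > objective).mean():.3f}")
```

The percentile summary is exactly the kind of output a decision maker compares to a regulatory objective; a deterministic run would collapse each input to a single value and hide the spread.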

  20. Determination of performance characteristics of robotic manipulator's permanent magnet synchronous motor by learning its FEM model

    International Nuclear Information System (INIS)

    Bharadvaj, Bimmi; Saini, Surendra Singh; Swaroop, Teja Tumapala; Sarkar, Ushnish; Ray, Debashish Datta

    2016-01-01

Permanent Magnet Synchronous Motors (PMSMs) are widely used as actuators because of their high torque density, high efficiency and reliability. A robotic manipulator designed for a specific task generally requires actuators with very high intermittent torque and speed for operation in a limited space. Accurate performance characteristics of the PMSM must therefore be known beforehand under these conditions, as exceeding them may damage the motor. An advanced mathematical model of the PMSM is thus required for its control synthesis and performance analysis over a wide operating range. Existing mathematical models are developed for an ideal motor, without the geometrical deviations that occur during manufacture of the motor or its components. These manufacturing tolerances affect torque ripple, the operating current range and other quantities, thereby affecting motor performance. In this work, the magnetically non-linear dynamic model is further exploited to refine the FE model using a proposed algorithm that iteratively compensates for the experimentally observed deviations due to manufacturing. (author)

  1. Performance assessment modeling of pyrometallurgical process wasteforms

    International Nuclear Information System (INIS)

    Nutt, W.M.; Hill, R.N.; Bullen, D.B.

    1995-01-01

Performance assessment analyses have been completed to estimate the behavior of high-level nuclear wasteforms generated from the pyrometallurgical processing of liquid metal reactor (LMR) and light water reactor (LWR) spent nuclear fuel. Waste emplaced in the proposed repository at Yucca Mountain is investigated as the basis for the study. The resulting cumulative actinide and fission product releases to the accessible environment within a 100,000-year period from the various pyrometallurgical process wasteforms are compared to those of directly disposed LWR spent fuel using the same total repository system model. The impact of differing radionuclide transport models on the overall release characteristics is also investigated.

  2. Implementation of a model-independent search for new physics with the CMS detector exploiting the world-wide LHC Computing Grid

    CERN Document Server

    Hof, Carsten

With this year's start of CERN's Large Hadron Collider (LHC) it will be possible for the first time to directly probe physics at the TeV scale at a collider experiment. At this scale the Standard Model of particle physics reaches its limits and new physical phenomena are expected to appear. This study, performed with one of the LHC's experiments, namely the Compact Muon Solenoid (CMS), seeks to quantify the understanding of the Standard Model and to hunt for deviations from expectation by investigating a large fraction of the CMS data. While the classical approach to searches for physics beyond the Standard Model assumes a specific theoretical model and tries to isolate events with a signature characteristic of the new theory, this thesis follows a model-independent approach. The method relies only on knowledge of the Standard Model and is suitable for spotting deviations from this model induced by particular theoretical models, but also by theories not yet thought of. Future data are to ...
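In essence, a model-independent search compares observed event counts against Standard Model expectations in many final-state classes at once and flags the most significant deviation. A toy sketch of that scan (region names and counts are invented; the real analysis also treats systematic uncertainties and the look-elsewhere effect):

```python
import math

def poisson_tail(n_obs, mu):
    """P(N >= n_obs) for N ~ Poisson(mu): excess-only p-value for one region."""
    return 1.0 - sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(n_obs))

# Hypothetical (region, expected SM count, observed count) triples.
regions = [("ee+MET", 12.0, 14), ("mumu+jets", 40.0, 38), ("e+gamma", 3.0, 9)]

# Scan every class and rank by p-value; the smallest flags the candidate deviation.
scan = {name: poisson_tail(obs, mu) for name, mu, obs in regions}
most_deviant = min(scan, key=scan.get)
print(most_deviant, f"p = {scan[most_deviant]:.2e}")
```

Here observing 9 events where 3 are expected stands out, while 14 on an expectation of 12 is unremarkable; a real scan over hundreds of classes must additionally correct the smallest p-value for the number of classes searched.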

  3. Model description and evaluation of model performance: DOSDIM model

    International Nuclear Information System (INIS)

    Lewyckyj, N.; Zeevaert, T.

    1996-01-01

DOSDIM was developed to assess the impact on man of routine and accidental atmospheric releases. It is a compartmental, deterministic, radiological model. For an accidental release, dynamic transfer factors are used, in contrast to a routine release, for which equilibrium transfer factors are used. Parameter values were chosen to be conservative. Transfers between compartments are described by first-order differential equations. 2 figs
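The first-order compartment transfers can be illustrated with a two-compartment toy system (not the actual DOSDIM compartments; the rate constants are invented), integrated by forward Euler:

```python
import numpy as np

# Minimal sketch: compartment 1 transfers to compartment 2 at rate k12,
# compartment 2 loses activity at rate k2 (all rates assumed, in 1/day).
k12, k2 = 0.3, 0.1
dt, days = 0.01, 30.0
steps = int(days / dt)

c1, c2 = 1.0, 0.0  # initial inventories (arbitrary units)
for _ in range(steps):
    flux = k12 * c1            # first-order transfer out of compartment 1
    c1 += dt * (-flux)
    c2 += dt * (flux - k2 * c2)

print(round(c1, 6), round(c2, 6))
```

For the first compartment the exact solution is c1(t) = exp(-k12 t), so the Euler result can be checked directly; a production code would use an implicit or adaptive ODE solver rather than fixed-step Euler.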

  4. Modelling and evaluation of surgical performance using hidden Markov models.

    Science.gov (United States)

    Megali, Giuseppe; Sinigaglia, Stefano; Tonet, Oliver; Dario, Paolo

    2006-10-01

    Minimally invasive surgery has become very widespread in the last ten years. Since surgeons experience difficulties in learning and mastering minimally invasive techniques, the development of training methods is of great importance. While the introduction of virtual reality-based simulators has introduced a new paradigm in surgical training, skill evaluation methods are far from being objective. This paper proposes a method for defining a model of surgical expertise and an objective metric to evaluate performance in laparoscopic surgery. Our approach is based on the processing of kinematic data describing movements of surgical instruments. We use hidden Markov model theory to define an expert model that describes expert surgical gesture. The model is trained on kinematic data related to exercises performed on a surgical simulator by experienced surgeons. Subsequently, we use this expert model as a reference model in the definition of an objective metric to evaluate performance of surgeons with different abilities. Preliminary results show that, using different topologies for the expert model, the method can be efficiently used both for the discrimination between experienced and novice surgeons, and for the quantitative assessment of surgical ability.
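The core of such an approach is scoring an observed gesture sequence against a trained expert HMM via the forward algorithm: a higher log-likelihood means the movements resemble the expert model more closely. A self-contained sketch with an invented 2-state discrete model over 3 symbols (the paper's models are trained on real kinematic data; the topology and parameters here are purely illustrative):

```python
import numpy as np

def log_likelihood(obs, pi, A, B):
    """Forward algorithm: log P(obs | HMM), computed in log space for stability."""
    logalpha = np.log(pi) + np.log(B[:, obs[0]])
    for o in obs[1:]:
        logalpha = np.logaddexp.reduce(
            logalpha[:, None] + np.log(A), axis=0) + np.log(B[:, o])
    return np.logaddexp.reduce(logalpha)

# Hypothetical "expert gesture" model: initial probs, transitions, emissions.
pi = np.array([0.8, 0.2])
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
B = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.3, 0.6]])

smooth = np.array([0, 0, 0, 1, 2, 2])  # sequence resembling the expert pattern
jerky = np.array([2, 0, 2, 0, 2, 0])   # sequence unlike the expert pattern
print(log_likelihood(smooth, pi, A, B), log_likelihood(jerky, pi, A, B))
```

In practice the per-frame kinematic vectors are quantized (or Gaussian emissions are used), the expert model is trained with Baum-Welch on expert trials, and the normalized log-likelihood of a trainee's sequence serves as the skill metric.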

  5. Exploiting Performance of Different Low-Cost Sensors for Small Amplitude Oscillatory Motion Monitoring: Preliminary Comparisons in View of Possible Integration

    Directory of Open Access Journals (Sweden)

    Elisa Benedetti

    2016-01-01

Full Text Available We address the problem of low-amplitude oscillatory motion detection using different low-cost sensors: a LIS3LV02DQ MEMS accelerometer, a Microsoft Kinect v2 range camera, and a uBlox 6 GPS receiver. Several tests were performed using a one-direction vibrating table with different oscillation frequencies (in the range 1.5–3 Hz) and small, challenging amplitudes (0.02 m and 0.03 m). A Mikrotron EoSens high-resolution camera was used to provide reference data. A dedicated software tool was developed to retrieve Kinect v2 results. The capabilities of the VADASE algorithm were employed to process uBlox 6 GPS receiver observations. In the investigated time interval (on the order of tens of seconds), the results obtained indicate that displacements were detected with a resolution of fractions of millimeters with the MEMS accelerometer and Kinect v2, and of a few millimeters with the uBlox 6. The MEMS accelerometer displays the lowest noise but a significant bias, whereas the Kinect v2 and uBlox 6 appear more stable. The results suggest the possibility of sensor integration both for indoor (MEMS accelerometer + Kinect v2) and outdoor (MEMS accelerometer + uBlox 6) applications, and seem promising for structural monitoring.
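Recovering the oscillation frequency and amplitude from any of these sensors comes down to spectral analysis of the recorded motion. A sketch on a synthetic 30 s displacement record (sampling rate, amplitude and noise level are invented, chosen only to be of the same order as the tests described):

```python
import numpy as np

# Synthetic displacement record: 2 Hz oscillation, 0.02 m amplitude, sensor noise.
fs = 100.0                      # assumed sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)    # 30 s window (tens of seconds, as in the tests)
rng = np.random.default_rng(1)
x = 0.02 * np.sin(2 * np.pi * 2.0 * t) + rng.normal(0, 0.001, t.size)

# Dominant frequency and amplitude from the one-sided FFT.
spec = np.fft.rfft(x)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
k = np.argmax(np.abs(spec[1:])) + 1          # skip the DC bin
amp = 2 * np.abs(spec[k]) / t.size           # one-sided amplitude scaling
print(f"{freqs[k]:.2f} Hz, {amp * 1000:.2f} mm amplitude")
```

An accelerometer record would be processed the same way after dividing the spectral amplitude by (2*pi*f)^2 to convert acceleration back to displacement.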

  6. Hybrid Modeling Improves Health and Performance Monitoring

    Science.gov (United States)

    2007-01-01

    Scientific Monitoring Inc. was awarded a Phase I Small Business Innovation Research (SBIR) project by NASA's Dryden Flight Research Center to create a new, simplified health-monitoring approach for flight vehicles and flight equipment. The project developed a hybrid physical model concept that provided a structured approach to simplifying complex design models for use in health monitoring, allowing the output or performance of the equipment to be compared to what the design models predicted, so that deterioration or impending failure could be detected before there would be an impact on the equipment's operational capability. Based on the original modeling technology, Scientific Monitoring released I-Trend, a commercial health- and performance-monitoring software product named for its intelligent trending, diagnostics, and prognostics capabilities, as part of the company's complete ICEMS (Intelligent Condition-based Equipment Management System) suite of monitoring and advanced alerting software. I-Trend uses the hybrid physical model to better characterize the nature of health or performance alarms that result in "no fault found" false alarms. Additionally, the use of physical principles helps I-Trend identify problems sooner. I-Trend technology is currently in use in several commercial aviation programs, and the U.S. Air Force recently tapped Scientific Monitoring to develop next-generation engine health-management software for monitoring its fleet of jet engines. Scientific Monitoring has continued the original NASA work, this time under a Phase III SBIR contract with a joint NASA-Pratt & Whitney aviation security program on propulsion-controlled aircraft under missile-damaged aircraft conditions.

  7. High-performance flexible inverted organic light-emitting diodes by exploiting MoS2 nanopillar arrays as electron-injecting and light-coupling layers.

    Science.gov (United States)

    Guo, Kunping; Si, Changfeng; Han, Ceng; Pan, Saihu; Chen, Guo; Zheng, Yanqiong; Zhu, Wenqing; Zhang, Jianhua; Sun, Chang; Wei, Bin

    2017-10-05

Inverted organic light-emitting diodes (IOLEDs) on plastic substrates have great potential for application in flexible active-matrix displays. High energy consumption, instability and poor electron injection are key issues limiting the commercialization of flexible IOLEDs. Here, we have systematically investigated the electro-optical properties of molybdenum disulfide (MoS2) and applied it in developing highly efficient and stable blue fluorescent IOLEDs. We have demonstrated that MoS2-based IOLEDs can significantly improve electron-injecting capacity. For the MoS2-based device on plastic substrates, we have achieved a very high external quantum efficiency of 7.3% at a luminance of 9141 cd m^-2, which is the highest among the flexible blue fluorescent IOLEDs reported. Also, an approximately 1.8-fold improvement in power efficiency was obtained compared to glass-based IOLEDs. We attribute the enhanced performance of the flexible IOLEDs to the light extraction effect of the MoS2 nanopillar arrays. The van der Waals force played an important role in the formation of MoS2 nanopillar arrays by thermal evaporation. Notably, MoS2-based flexible IOLEDs exhibit an intriguing efficiency roll-up, that is, the current efficiency increases slightly from 14.0 to 14.6 cd A^-1 as the luminance increases from 100 to 5000 cd m^-2. In addition, we observed that an initial brightness of 500 cd m^-2 was maintained at 97% after 500 bending cycles, demonstrating the excellent mechanical stability of the flexible IOLEDs. Furthermore, we have successfully fabricated a transparent, flexible IOLED with low efficiency roll-off at high current density.

  8. Multilevel Modeling of the Performance Variance

    Directory of Open Access Journals (Sweden)

    Alexandre Teixeira Dias

    2012-12-01

Full Text Available Focusing on identifying the role played by industry in the relations between corporate strategic factors and performance, the hierarchical multilevel modeling method was adopted to measure and analyze the relations between the variables that comprise each level of analysis. The adequacy of the multilevel perspective for the study of the proposed relations was confirmed, and the relative importance analysis points to the lower relevance of industry as a moderator of the effects of corporate strategic factors on performance when the latter is measured by return on assets, and shows that industry does not moderate the relations between corporate strategic factors and Tobin's Q. The main conclusions of the research are that the organization's choices in terms of corporate strategy have considerable influence and play a key role in determining the performance level, but that industry should be considered when analyzing performance variation, whether or not it moderates the relations between corporate strategic factors and performance.
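A first step in such a multilevel analysis is asking how much of the performance variance sits at the industry (group) level, for example via the intraclass correlation derived from one-way ANOVA variance components. A sketch on simulated firm-level data (group counts, effect sizes and the ROA-like variable are all invented):

```python
import numpy as np

rng = np.random.default_rng(7)
n_industries, firms_per = 20, 30

# Hypothetical ROA data: modest industry effect, larger firm-level residual.
industry_effect = rng.normal(0, 0.5, n_industries)
roa = industry_effect[:, None] + rng.normal(0, 2.0, (n_industries, firms_per))

# One-way ANOVA variance components -> intraclass correlation (ICC).
group_means = roa.mean(axis=1)
msb = firms_per * group_means.var(ddof=1)        # between-industry mean square
msw = roa.var(axis=1, ddof=1).mean()             # within-industry mean square
var_between = max((msb - msw) / firms_per, 0.0)
icc = var_between / (var_between + msw)
print(f"ICC (share of performance variance at the industry level): {icc:.3f}")
```

A small ICC, as in this simulation, is exactly the situation the paper reports for return on assets: most of the variance is attributable to firm-level (corporate strategy) differences rather than industry membership.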

  9. Modelling fuel cell performance using artificial intelligence

    Science.gov (United States)

    Ogaji, S. O. T.; Singh, R.; Pilidis, P.; Diacakis, M.

Over the last few years, fuel cell technology has been promisingly increasing its share in the generation of stationary power. Numerous pilot projects are operating worldwide, continuously increasing the number of operating hours, either as stand-alone devices or as part of gas turbine combined cycles. An essential tool for the adequate and dynamic analysis of such systems is a software model that enables the user to assess a large number of alternative options in the least possible time. Meanwhile, the sphere of application of artificial neural networks has widened to cover such fields as medicine, finance and, unsurprisingly, engineering (diagnostics of faults in machines). An artificial neural network has been described as a diagrammatic representation of a mathematical equation that receives values (inputs) and gives out results (outputs). Artificial neural network systems have the capacity to recognise and associate patterns and, because of their inherent design features, can be applied to linear and non-linear problem domains. In this paper, the performance of the fuel cell is modelled using artificial neural networks. The inputs to the network are variables that are critical to the performance of the fuel cell, while the outputs are the effects of changes in any one or all of the fuel cell design variables on its performance. Critical parameters for the cell include the geometrical configuration as well as the operating conditions. For the neural network, various design parameters, such as the network size, training algorithm and activation functions, and their effects on the performance modelling are discussed. Results from the analysis, as well as the limitations of the approach, are presented and discussed.
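A minimal version of the idea, a small feed-forward network trained by gradient descent to map an operating variable to a performance output, can be written in plain NumPy. The "polarization curve" below is an invented stand-in for real fuel cell data, and the network size and learning rate are arbitrary choices:

```python
import numpy as np

# Toy data: "voltage" as a nonlinear function of "current density" (invented shape).
rng = np.random.default_rng(3)
x = np.linspace(0.05, 1.0, 200)[:, None]
y = 1.0 - 0.06 * np.log(x / 0.05) - 0.4 * x

# One hidden tanh layer, trained by full-batch gradient descent on squared error.
W1 = rng.normal(0, 1.0, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 1.0, (16, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(5000):
    h = np.tanh(x @ W1 + b1)                 # forward pass
    err = (h @ W2 + b2) - y
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)           # backprop through tanh
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

mse = float((((np.tanh(x @ W1 + b1) @ W2 + b2) - y) ** 2).mean())
print(f"training MSE: {mse:.5f}")
```

The paper's design discussion (network size, training algorithm, activation functions) corresponds to varying exactly these choices and observing the effect on fit quality and generalisation.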

  10. Modelling fuel cell performance using artificial intelligence

    Energy Technology Data Exchange (ETDEWEB)

    Ogaji, S.O.T.; Singh, R.; Pilidis, P.; Diacakis, M. [Power Propulsion and Aerospace Engineering Department, Centre for Diagnostics and Life Cycle Costs, Cranfield University (United Kingdom)

    2006-03-09

Over the last few years, fuel cell technology has been promisingly increasing its share in the generation of stationary power. Numerous pilot projects are operating worldwide, continuously increasing the number of operating hours, either as stand-alone devices or as part of gas turbine combined cycles. An essential tool for the adequate and dynamic analysis of such systems is a software model that enables the user to assess a large number of alternative options in the least possible time. Meanwhile, the sphere of application of artificial neural networks has widened to cover such fields as medicine, finance and, unsurprisingly, engineering (diagnostics of faults in machines). An artificial neural network has been described as a diagrammatic representation of a mathematical equation that receives values (inputs) and gives out results (outputs). Artificial neural network systems have the capacity to recognise and associate patterns and, because of their inherent design features, can be applied to linear and non-linear problem domains. In this paper, the performance of the fuel cell is modelled using artificial neural networks. The inputs to the network are variables that are critical to the performance of the fuel cell, while the outputs are the effects of changes in any one or all of the fuel cell design variables on its performance. Critical parameters for the cell include the geometrical configuration as well as the operating conditions. For the neural network, various design parameters, such as the network size, training algorithm and activation functions, and their effects on the performance modelling are discussed. Results from the analysis, as well as the limitations of the approach, are presented and discussed. (author)

  11. Consensual exploitation : the moral wrong in exploitation and legal restrictions on consensual exploitative transactions

    OpenAIRE

    van der Neut, Wendy

    2014-01-01

This thesis is about so-called consensual exploitative transactions: transactions to which all parties agree voluntarily, and which are beneficial for all parties, but which are still widely considered exploitative, and for that reason legally restricted in many countries. The thesis asks two main questions: 1. What is wrong with consensual exploitation? 2. What implications does the answer to this question have for the legal restriction of consensual transactions ...

  12. M-Commerce Exploitation

    DEFF Research Database (Denmark)

    Ulhøi, John Parm; Jørgensen, Frances

    2008-01-01

    into this emerging market may well depend on development of new business models that emphasize the socio-technical intricacies of these networks. The objective of this paper is to examine the development of these networks as a central part of new M-commerce business models in SME's and report on initial findings...

  13. Modelling the predictive performance of credit scoring

    Directory of Open Access Journals (Sweden)

    Shi-Wei Shen

    2013-07-01

Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan. Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities. Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems. Main findings: A goodness-of-fit test demonstrated that credit scoring models that incorporated the Taiwan Corporate Credit Risk Index (TCRI) together with micro- and macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk. Practical/managerial implications: The originality of the study lies in the three models developed to predict corporate defaults based on different microeconomic and macroeconomic factors, such as the TCRI, asset growth rates, the stock index and gross domestic product. Contribution/value-add: The study utilises different goodness-of-fit measures and receiver operating characteristic curves in examining the robustness of the predictive power of these factors.
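The underlying machinery, a logistic regression whose discriminatory power is summarized by a rank statistic such as the area under the ROC curve, is easy to sketch. The predictors and coefficients below are synthetic stand-ins, not the paper's TCRI data:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2000
# Hypothetical standardized predictors (e.g., a risk index, asset growth, a macro index).
X = rng.normal(0, 1, (n, 3))
true_w = np.array([1.2, -0.8, 0.5])
p = 1 / (1 + np.exp(-(X @ true_w - 1.0)))
y = (rng.random(n) < p).astype(float)      # 1 = default

# Logistic regression fitted by plain gradient descent (no external libraries).
w = np.zeros(3); b = 0.0
for _ in range(2000):
    q = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (q - y) / n
    b -= 0.1 * (q - y).mean()

# AUC: probability that a defaulter outscores a non-defaulter.
scores = X @ w + b
pos, neg = scores[y == 1], scores[y == 0]
auc = (pos[:, None] > neg[None, :]).mean()
print(f"AUC: {auc:.3f}")
```

Comparing AUC (and goodness-of-fit statistics) across nested models, with and without macroeconomic predictors, is exactly how the paper assesses whether those variables add predictive power.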

  14. Oil exploitation and the environmental Kuznets curve

    International Nuclear Information System (INIS)

    Esmaeili, Abdoulkarim; Abdollahzadeh, Negar

    2009-01-01

This study refers to a panel estimation of an environmental Kuznets curve (EKC) for oil to determine the factors most affecting oil exploitation in 38 oil-producing countries during 1990-2000. Control variables such as oil reserves, oil price, population, political rights, and the Gini index were used to determine their contribution to the main EKC model. The empirical results fully support the existence of an EKC for oil exploitation. Furthermore, the results indicate that proved oil reserves have a significant and positive role in oil production, but oil price and population do not significantly affect crude oil production. Also, increased freedoms and a better income distribution will reduce the rate of oil exploitation. Thus, policies aimed at enhancing democratic society and better income distribution would be more compatible with sustainability. (author)

  15. Oil exploitation and the environmental Kuznets curve

    Energy Technology Data Exchange (ETDEWEB)

    Esmaeili, Abdoulkarim; Abdollahzadeh, Negar [Department of Agricultural Economics, College of Agriculture, Shiraz University, Shiraz, Fars (Iran)

    2009-01-15

This study refers to a panel estimation of an environmental Kuznets curve (EKC) for oil to determine the factors most affecting oil exploitation in 38 oil-producing countries during 1990-2000. Control variables such as oil reserves, oil price, population, political rights, and the Gini index were used to determine their contribution to the main EKC model. The empirical results fully support the existence of an EKC for oil exploitation. Furthermore, the results indicate that proved oil reserves have a significant and positive role in oil production, but oil price and population do not significantly affect crude oil production. Also, increased freedoms and a better income distribution will reduce the rate of oil exploitation. Thus, policies aimed at enhancing democratic society and better income distribution would be more compatible with sustainability. (author)

  16. CASTOR detector. Model, objectives and simulated performance

    International Nuclear Information System (INIS)

    Angelis, A. L. S.; Mavromanolakis, G.; Panagiotou, A. D.; Aslanoglou, X.; Nicolis, N.; Lobanov, M.; Erine, S.; Kharlov, Y. V.; Bogolyubsky, M. Y.; Kurepin, A. B.; Chileev, K.; Wlodarczyk, Z.

    2001-01-01

A phenomenological model is presented describing the formation and evolution of a Centauro fireball in the baryon-rich region in nucleus-nucleus interactions in the upper atmosphere and at the LHC. The small particle multiplicity and the imbalance of electromagnetic and hadronic content characterizing a Centauro event, as well as the strongly penetrating particles (assumed to be strangelets) frequently accompanying them, can be naturally explained. The CASTOR calorimeter is described, a subdetector of the ALICE experiment dedicated to the search for Centauros in the very forward, baryon-rich region of central Pb+Pb collisions at the LHC. The basic characteristics and simulated performance of the calorimeter are presented.

  17. Fishery of the Uçá Crab Ucides cordatus (Linnaeus, 1763) in a Mangrove Area in Cananéia, State of São Paulo, Brazil: Fishery Performance, Exploitation Patterns and Factors Affecting the Catches

    Directory of Open Access Journals (Sweden)

    Luis Felipe de Almeida Duarte

    2014-09-01

Full Text Available The fishery of the mangrove crab (Ucides cordatus) is one of the oldest sources of food, income and extractive activity in the estuarine systems of Brazil. The state of São Paulo has the largest population of any Brazilian state, and the city of Cananéia, in the Brazilian southeast, has the highest recorded level of exploitation of the uçá-crab. Since 1990, this species has been under intense exploitation pressure due to the unauthorized use of a type of trap called 'redinha'. This type of fishing gear is considered harmful and is prohibited by Brazilian law, although its use is very common throughout the country. This study aims to evaluate the exploitation patterns of U. cordatus based on landing data and monitoring of the crab fishermen, to verify the population structure of the crab stock and to identify the factors that influence the catches. A general view of the sustainability of the fishery for this resource is also provided for five defined mangrove sectors (areas A to E) at Cananéia. For this purpose, fishery data were recorded during 2009-2010 by the Instituto de Pesca (APTA/SAA-SP), and monitoring of the capture procedures used by two fishermen was conducted to obtain biometry data (CW, carapace width) and gender data for the captured crabs. The redinha trap was very efficient (86.4%) and produced sustainable catches because the trapped crabs were legal-sized males (CW > 60 mm), although some traps are lost or remain in the mangrove swamps and can cause pollution by introducing plastic debris. The fishery data were evaluated with a General Linear Model (GLM) based on six factors: the characteristics of the crab fishermen, the time of capture (by month and year), the lunar phase, the productive sector and the reproductive period. The individual crab fishermen's empirical knowledge, the year of capture and the productive sector were the strongest influences on the crab catch per unit effort (CPUE). Differing extraction patterns were found in

  18. Exploiting Virtualization and Cloud Computing in ATLAS

    International Nuclear Information System (INIS)

    Harald Barreiro Megino, Fernando; Van der Ster, Daniel; Benjamin, Doug; De, Kaushik; Gable, Ian; Paterson, Michael; Taylor, Ryan; Hendrix, Val; Vitillo, Roberto A; Panitkin, Sergey; De Silva, Asoka; Walker, Rod

    2012-01-01

    The ATLAS Computing Model was designed around the concept of grid computing; since the start of data-taking, this model has proven very successful in the federated operation of more than one hundred Worldwide LHC Computing Grid (WLCG) sites for offline data distribution, storage, processing and analysis. However, new paradigms in computing, namely virtualization and cloud computing, present improved strategies for managing and provisioning IT resources that could allow ATLAS to more flexibly adapt and scale its storage and processing workloads on varied underlying resources. In particular, ATLAS is developing a “grid-of-clouds” infrastructure in order to utilize WLCG sites that make resources available via a cloud API. This work will present the current status of the Virtualization and Cloud Computing R and D project in ATLAS Distributed Computing. First, strategies for deploying PanDA queues on cloud sites will be discussed, including the introduction of a “cloud factory” for managing cloud VM instances. Next, performance results when running on virtualized/cloud resources at CERN LxCloud, StratusLab, and elsewhere will be presented. Finally, we will present the ATLAS strategies for exploiting cloud-based storage, including remote XROOTD access to input data, management of EC2-based files, and the deployment of cloud-resident LCG storage elements.

  19. The COD Model: Simulating Workgroup Performance

    Science.gov (United States)

    Biggiero, Lucio; Sevi, Enrico

Though the question of the determinants of workgroup performance is one of the most central in organization science, precise theoretical frameworks and formal demonstrations are still missing. To fill this gap, the COD agent-based simulation model is presented here and used to study the effects of task interdependence and bounded rationality on workgroup performance. The first relevant finding is an algorithmic demonstration of the ordering of interdependencies in terms of complexity, showing that the parallel mode is the simplest, followed by the sequential and then by the reciprocal. This result is far from new in organization science, but what is remarkable is that it now has the strength of an algorithmic demonstration instead of resting on the authoritativeness of some scholar or on episodic empirical findings. The second important result is that the progressive introduction of realistic limits to agents' rationality dramatically reduces workgroup performance and leads to a rather interesting result: when agents' rationality is severely bounded, simple norms work better than complex norms. The third main finding is that when the complexity of interdependence is high, the appropriate coordination mechanism is agents' direct and active collaboration, which means teamwork.

  20. Model for measuring complex performance in an aviation environment

    International Nuclear Information System (INIS)

    Hahn, H.A.

    1988-01-01

    An experiment was conducted to identify models of pilot performance through the attainment and analysis of concurrent verbal protocols. Sixteen models were identified. Novice and expert pilots differed with respect to the models they used. Models were correlated to performance, particularly in the case of expert subjects. Models were not correlated to performance shaping factors (i.e. workload). 3 refs., 1 tab

  1. Numerical modeling capabilities to predict repository performance

    International Nuclear Information System (INIS)

    1979-09-01

    This report presents a summary of current numerical modeling capabilities that are applicable to the design and performance evaluation of underground repositories for the storage of nuclear waste. The report includes codes that are available in-house, within Golder Associates and Lawrence Livermore Laboratories, as well as those that are generally available within the industry and universities. The first listing of programs covers in-house codes in the subject areas of hydrology, solute transport, thermal and mechanical stress analysis, and structural geology. The second listing of programs is divided by subject into the following categories: site selection, structural geology, mine structural design, mine ventilation, hydrology, and mine design/construction/operation. These programs are not specifically designed for use in the design and evaluation of an underground repository for nuclear waste, but several or most of them may be so used

  2. Modelling saline intrusion for repository performance assessment

    International Nuclear Information System (INIS)

    Jackson, C.P.

    1989-04-01

    UK Nirex Ltd are currently considering the possibility of disposal of radioactive waste by burial in deep underground repositories. The natural pathway for radionuclides from such a repository to return to Man's immediate environment (the biosphere) is via groundwater. Thus analyses of the groundwater flow in the neighbourhood of a possible repository, and consequent radionuclide transport form an important part of a performance assessment for a repository. Some of the areas in the UK that might be considered as possible locations for a repository are near the coast. If a repository is located in a coastal region seawater may intrude into the groundwater flow system. As seawater is denser than fresh water buoyancy forces acting on the intruding saline water may have significant effects on the groundwater flow system, and consequently on the time for radionuclides to return to the biosphere. Further, the chemistry of the repository near-field may be strongly influenced by the salinity of the groundwater. It is therefore important for Nirex to have a capability for reliably modelling saline intrusion to an appropriate degree of accuracy in order to make performance assessments for a repository in a coastal region. This report describes work undertaken in the Nirex Research programme to provide such a capability. (author)

  3. High-performance phase-field modeling

    KAUST Repository

    Vignal, Philippe

    2015-04-27

    Many processes in engineering and the sciences involve the evolution of interfaces. Among the mathematical frameworks developed to model these types of problems, the phase-field method has emerged as a possible solution. Phase-fields nonetheless lead to complex nonlinear, high-order partial differential equations, whose solution poses mathematical and computational challenges. Guaranteeing some of the physical properties of the equations has led to the development of efficient algorithms and discretizations capable of recovering said properties by construction [2, 5]. This work builds on these ideas, and proposes novel discretization strategies that guarantee numerical energy dissipation for both conserved and non-conserved phase-field models. The temporal discretization is based on a novel method which relies on Taylor series and ensures strong energy stability. It is second-order accurate, and can also be rendered linear to speed up the solution process [4]. The spatial discretization relies on Isogeometric Analysis, a finite element method that possesses the k-refinement technology and enables the generation of high-order, high-continuity basis functions. These basis functions are well suited to handle the high-order operators present in phase-field models. Two-dimensional and three-dimensional results of the Allen-Cahn, Cahn-Hilliard, Swift-Hohenberg and phase-field crystal equations will be presented, which corroborate the theoretical findings, and illustrate the robustness of the method. Results related to more challenging examples, namely the Navier-Stokes Cahn-Hilliard and a diffusion-reaction Cahn-Hilliard system, will also be presented. The implementation was done in PetIGA and PetIGA-MF, high-performance Isogeometric Analysis frameworks [1, 3], designed to handle non-linear, time-dependent problems.
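As a concrete illustration of the kind of equations involved, the sketch below integrates the one-dimensional Allen-Cahn equation with a deliberately simple explicit finite-difference scheme. It is not the energy-stable isogeometric discretization the abstract describes; the grid size, time step and interface width are arbitrary choices for this illustration.

```python
import numpy as np

def allen_cahn_1d(n=128, steps=2000, eps=0.05, dt=1e-4):
    """Explicit finite-difference sketch of the 1D Allen-Cahn equation

        u_t = eps^2 * u_xx + u - u^3,  periodic boundary conditions.

    Deliberately simple, for illustration only: not the second-order,
    strongly energy-stable scheme described in the abstract.
    """
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    # two phases (+1 / -1) separated by two interfaces
    u = np.where(np.abs(x - 0.5) < 0.25, 1.0, -1.0).astype(float)
    dx = 1.0 / n
    for _ in range(steps):
        # periodic second difference for the Laplacian
        lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2
        u = u + dt * (eps**2 * lap + u - u**3)
    return u
```

The explicit scheme is only conditionally stable (roughly dt < dx^2 / (2 eps^2) for the diffusive part), which is one motivation for the unconditionally energy-stable schemes the abstract proposes.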

  4. Toward a dynamic perspective on explorative and exploitative innovation activities: a longitudinal study of innovation in the wind blade industry

    NARCIS (Netherlands)

    de Visser, Matthias

    2010-01-01

    Innovation requires a combination of explorative and exploitative innovation activities. Previous studies have provided valuable insights in the antecedents of investing in explorative and exploitative activities, the structural governance of exploration and exploitation and the performance

  5. Toward a dynamic perspective on explorative and exploitative innovation activities: A longitudinal study of innovation in the wind blade industry

    NARCIS (Netherlands)

    de Visser, Matthias; Faems, D.L.M.

    2010-01-01

    Innovation requires a combination of explorative and exploitative innovation activities. Previous studies have provided valuable insights in the antecedents of investing in explorative and exploitative activities, the structural governance of exploration and exploitation and the performance

  6. Transnational gestational surrogacy: does it have to be exploitative?

    Science.gov (United States)

    Kirby, Jeffrey

    2014-01-01

    This article explores the controversial practice of transnational gestational surrogacy and poses a provocative question: Does it have to be exploitative? Various existing models of exploitation are considered and a novel exploitation-evaluation heuristic is introduced to assist in the analysis of the potentially exploitative dimensions/elements of complex health-related practices. On the basis of application of the heuristic, I conclude that transnational gestational surrogacy, as currently practiced in low-income country settings (such as rural, western India), is exploitative of surrogate women. Arising out of consideration of the heuristic's exploitation conditions, a set of public education and enabled choice, enhanced protections, and empowerment reforms to transnational gestational surrogacy practice is proposed that, if incorporated into a national regulatory framework and actualized within a low income country, could possibly render such practice nonexploitative.

  7. DETRA: Model description and evaluation of model performance

    International Nuclear Information System (INIS)

    Suolanen, V.

    1996-01-01

    The computer code DETRA is a generic tool for environmental transfer analyses of radioactive or stable substances. The code has been applied for various purposes, mainly problems related to the biospheric transfer of radionuclides both in safety analyses of disposal of nuclear wastes and in consideration of foodchain exposure pathways in the analyses of off-site consequences of reactor accidents. For each specific application an individually tailored conceptual model can be developed. The biospheric transfer analyses performed by the code are typically carried out for terrestrial, aquatic and food chain applications. 21 refs, 35 figs, 15 tabs

  8. Exploiting opportunities at all cost? Entrepreneurial intent and externalities

    NARCIS (Netherlands)

    Urbig, D.; Weitzel, G.U.; Rosenkranz, S.; van Witteloostuijn, A.

    2011-01-01

    they exploit welfare-enhancing opportunities as is assumed in several normative models? Do we need to prevent potential entrepreneurs from being destructive, or are there intrinsic limits to harming others? We experimentally investigate how people with different entrepreneurial intent exploit risky

  9. Algorithms and Methods for High-Performance Model Predictive Control

    DEFF Research Database (Denmark)

    Frison, Gianluca

    routines employed in the numerical tests. The main focus of this thesis is on linear MPC problems. In this thesis, both the algorithms and their implementation are equally important. About the implementation, a novel implementation strategy for the dense linear algebra routines in embedded optimization...... is proposed, aiming at improving the computational performance in case of small matrices. About the algorithms, they are built on top of the proposed linear algebra, and they are tailored to exploit the high-level structure of the MPC problems, with special care on reducing the computational complexity....

  10. Baking oven improvement by performance modelling

    Energy Technology Data Exchange (ETDEWEB)

    1994-04-01

    The first phase of the project included both the derivation of an oven model and the development of a portable, rapid-response heat-flux sensor. Heat flux (defined as the instantaneous rate of heat flow per unit area at the surface of the baking biscuit and expressed in W/cm²) has been shown to be a more useful measure of oven performance than temperature alone. Fixed-point heat-flux sensors have already been developed and marketed, but a need was expressed at the start of this project for a travelling sensor which could be used to construct a more detailed picture of heat-flux variation in an oven. The travelling monitor developed can be used to measure variations in the heat flux experienced at the surface of products being baked in a travelling oven, both when oven conditions are fixed and when they are varied. It can also be used to identify the optimum locations within an oven for fixed heat-flux probes. It has been used effectively throughout the project for both purposes. Fuel savings of 18% and 21%, respectively, were achieved with two ovens. (author)

  11. A novel spatial performance metric for robust pattern optimization of distributed hydrological models

    Science.gov (United States)

    Stisen, S.; Demirel, C.; Koch, J.

    2017-12-01

    Evaluation of performance is an integral part of model development and calibration, and it is of paramount importance when communicating modelling results to stakeholders and the scientific community. The hydrological modelling community has a comprehensive and well-tested toolbox of metrics to assess temporal model performance. By contrast, experience in evaluating spatial performance has not kept pace with the wide availability of spatial observations or with the sophisticated model codes simulating the spatial variability of complex hydrological processes. This study aims to make a contribution towards advancing spatial-pattern-oriented model evaluation for distributed hydrological models. This is achieved by introducing a novel spatial performance metric which provides robust pattern performance during model calibration. The promoted SPAtial EFficiency (SPAEF) metric reflects three equally weighted components: correlation, coefficient of variation and histogram overlap. This multi-component approach is necessary in order to adequately compare spatial patterns. SPAEF, its three components individually, and two alternative spatial performance metrics, i.e. connectivity analysis and fractions skill score, are tested in a spatial-pattern-oriented model calibration of a catchment model in Denmark. The calibration is constrained by a remote-sensing-based spatial pattern of evapotranspiration and discharge time series at two stations. Our results stress that stand-alone metrics tend to fail to provide holistic pattern information to the optimizer, which underlines the importance of multi-component metrics. The three SPAEF components are independent, which allows them to complement each other in a meaningful way. This study promotes the use of bias-insensitive metrics which allow comparing variables that are related but may differ in unit, in order to optimally exploit spatial observations made available by remote sensing.
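The three components listed in the abstract can be sketched as follows. The combination into a single score assumes the published Euclidean form SPAEF = 1 − sqrt((α−1)² + (β−1)² + (γ−1)²); the z-scoring of the histograms and the bin count are choices of this illustration.

```python
import numpy as np

def spaef(sim, obs, bins=100):
    """Sketch of the SPAtial EFficiency metric from three components:

      alpha: Pearson correlation between the two patterns
      beta:  ratio of coefficients of variation (bias-insensitive spread)
      gamma: overlap of the histograms of z-scored values

    Perfect agreement gives SPAEF = 1; each component degrades it.
    """
    sim, obs = np.ravel(sim), np.ravel(obs)
    alpha = np.corrcoef(sim, obs)[0, 1]
    beta = (np.std(sim) / np.mean(sim)) / (np.std(obs) / np.mean(obs))
    # histogram intersection on z-scored values, shared bin edges
    z_sim = (sim - sim.mean()) / sim.std()
    z_obs = (obs - obs.mean()) / obs.std()
    lo = min(z_sim.min(), z_obs.min())
    hi = max(z_sim.max(), z_obs.max())
    h_sim, _ = np.histogram(z_sim, bins=bins, range=(lo, hi))
    h_obs, _ = np.histogram(z_obs, bins=bins, range=(lo, hi))
    gamma = np.minimum(h_sim, h_obs).sum() / h_obs.sum()
    return 1.0 - np.sqrt((alpha - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)
```

Because beta compares coefficients of variation and gamma compares z-scored histograms, the score does not reward a good mean fit that misses the spatial pattern, which is the bias-insensitivity the abstract promotes.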

  12. Exploiting Human Resource Requirements to Infer Human Movement Patterns for Use in Modelling Disease Transmission Systems: An Example from Eastern Province, Zambia.

    Directory of Open Access Journals (Sweden)

    Simon Alderton

    In this research, an agent-based model (ABM) was developed to generate human movement routes between homes and water resources in a rural setting, given commonly available geospatial datasets on population distribution, land cover and landscape resources. ABMs are an object-oriented computational approach to modelling a system, focusing on the interactions of autonomous agents, and aiming to assess the impact of these agents and their interactions on the system as a whole. An A* pathfinding algorithm was implemented to produce walking routes, given data on the terrain in the area. A* is an extension of Dijkstra's algorithm with an enhanced time performance through the use of heuristics. In this example, it was possible to impute daily activity movement patterns to the water resource for all villages in a 75 km long study transect across the Luangwa Valley, Zambia, and the simulated human movements were statistically similar to empirical observations on travel times to the water resource (Chi-squared, 95% confidence interval). This indicates that it is possible to produce realistic data regarding human movements without costly measurement as is commonly achieved, for example, through GPS, or retrospective or real-time diaries. The approach is transferable between different geographical locations, and the product can be useful in providing an insight into human movement patterns, and therefore has use in many human exposure-related applications, specifically epidemiological research in rural areas, where spatial heterogeneity in the disease landscape, and space-time proximity of individuals, can play a crucial role in disease spread.

  13. Exploiting Human Resource Requirements to Infer Human Movement Patterns for Use in Modelling Disease Transmission Systems: An Example from Eastern Province, Zambia.

    Science.gov (United States)

    Alderton, Simon; Noble, Jason; Schaten, Kathrin; Welburn, Susan C; Atkinson, Peter M

    2015-01-01

    In this research, an agent-based model (ABM) was developed to generate human movement routes between homes and water resources in a rural setting, given commonly available geospatial datasets on population distribution, land cover and landscape resources. ABMs are an object-oriented computational approach to modelling a system, focusing on the interactions of autonomous agents, and aiming to assess the impact of these agents and their interactions on the system as a whole. An A* pathfinding algorithm was implemented to produce walking routes, given data on the terrain in the area. A* is an extension of Dijkstra's algorithm with an enhanced time performance through the use of heuristics. In this example, it was possible to impute daily activity movement patterns to the water resource for all villages in a 75 km long study transect across the Luangwa Valley, Zambia, and the simulated human movements were statistically similar to empirical observations on travel times to the water resource (Chi-squared, 95% confidence interval). This indicates that it is possible to produce realistic data regarding human movements without costly measurement as is commonly achieved, for example, through GPS, or retrospective or real-time diaries. The approach is transferable between different geographical locations, and the product can be useful in providing an insight into human movement patterns, and therefore has use in many human exposure-related applications, specifically epidemiological research in rural areas, where spatial heterogeneity in the disease landscape, and space-time proximity of individuals, can play a crucial role in disease spread.
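The A* routing described above can be sketched on a small grid. The 4-connected neighbourhood, unit step costs and Manhattan-distance heuristic below are assumptions of this illustration, not details of the authors' terrain-weighted implementation:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid; grid[r][c] == 1 marks an obstacle.

    A* extends Dijkstra's algorithm with a heuristic h (here Manhattan
    distance). Because h never overestimates the remaining cost, the
    first time the goal is popped from the queue the path is optimal.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]  # (f = g + h, g, node, path)
    best_g = {start: 0}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                ng = g + 1
                if ng < best_g.get((r, c), float("inf")):
                    best_g[(r, c)] = ng
                    heapq.heappush(
                        open_set,
                        (ng + h((r, c)), ng, (r, c), path + [(r, c)]),
                    )
    return None  # goal unreachable
```

In the study, step costs would instead be derived from terrain data, which is what steers the simulated walking routes.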

  14. High Performance Programming Using Explicit Shared Memory Model on Cray T3D

    Science.gov (United States)

    Simon, Horst D.; Saini, Subhash; Grassi, Charles

    1994-01-01

    The Cray T3D system is the first-phase system in Cray Research, Inc.'s (CRI) three-phase massively parallel processing (MPP) program. This system features a heterogeneous architecture that closely couples DEC's Alpha microprocessors and CRI's parallel-vector technology, i.e., the Cray Y-MP and Cray C90. An overview of the Cray T3D hardware and available programming models is presented. Under the Cray Research adaptive Fortran (CRAFT) model, four programming methods (data parallel, work sharing, message-passing using PVM, and the explicit shared memory model) are available to users. However, at this time the data parallel and work sharing programming models are not available to the user community. The differences between standard PVM and CRI's PVM are highlighted with performance measurements such as latencies and communication bandwidths. We have found that neither standard PVM nor CRI's PVM exploits the hardware capabilities of the T3D. The reasons for the poor performance of PVM as a native message-passing library are presented. This is illustrated by the performance of the NAS Parallel Benchmarks (NPB) programmed in the explicit shared memory model on the Cray T3D. In general, the performance of standard PVM is about 4 to 5 times lower than that obtained using the explicit shared memory model. This degradation in performance is also seen on the CM-5, where the performance of applications using the native message-passing library CMMD is likewise about 4 to 5 times lower than using data parallel methods. The issues involved in programming in the explicit shared memory model (such as barriers, synchronization, and invalidating and aligning the data cache) are discussed. Comparative performance of the NPB using the explicit shared memory programming model on the Cray T3D and other highly parallel systems such as the TMC CM-5, Intel Paragon, Cray C90, IBM-SP1, etc. is presented.

  15. The Five Key Questions of Human Performance Modeling.

    Science.gov (United States)

    Wu, Changxu

    2018-01-01

    Via building computational (typically mathematical and computer simulation) models, human performance modeling (HPM) quantifies, predicts, and maximizes human performance, human-machine system productivity and safety. This paper describes and summarizes the five key questions of human performance modeling: 1) Why we build models of human performance; 2) What the expectations of a good human performance model are; 3) What the procedures and requirements in building and verifying a human performance model are; 4) How we integrate a human performance model with system design; and 5) What the possible future directions of human performance modeling research are. Recent and classic HPM findings are addressed in the five questions to provide new thinking in HPM's motivations, expectations, procedures, system integration and future directions.

  16. PV Performance Modeling Methods and Practices: Results from the 4th PV Performance Modeling Collaborative Workshop.

    Energy Technology Data Exchange (ETDEWEB)

    Stein, Joshua [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-03-01

    In 2014, the IEA PVPS Task 13 added the PVPMC as a formal activity to its technical work plan for 2014-2017. The goal of this activity is to expand the reach of the PVPMC to a broader international audience and help to reduce PV performance modeling uncertainties worldwide. One of the main deliverables of this activity is to host one or more PVPMC workshops outside the US to foster more international participation within this collaborative group. This report reviews the results of the first in a series of these joint IEA PVPS Task 13/PVPMC workshops. The 4th PV Performance Modeling Collaborative Workshop was held in Cologne, Germany at the headquarters of TÜV Rheinland on October 22-23, 2015.

  17. Modelling and simulating fire tube boiler performance

    DEFF Research Database (Denmark)

    Sørensen, K.; Condra, T.; Houbak, Niels

    2003-01-01

    A model for a flue gas boiler covering the flue gas and the water-/steam side has been formulated. The model has been formulated as a number of sub models that are merged into an overall model for the complete boiler. Sub models have been defined for the furnace, the convection zone (split in 2: a zone submerged in water and a zone covered by steam), a model for the material in the boiler (the steel) and 2 models for the water/steam zone (the boiling) and the steam, respectively. The dynamic model has been developed as a number of Differential-Algebraic-Equation systems (DAE). Subsequently MatLab/Simulink has been applied for carrying out the simulations. To be able to verify the simulated results, experiments have been carried out on a full scale boiler plant.

  18. The exploration-exploitation dilemma: a multidisciplinary framework.

    Directory of Open Access Journals (Sweden)

    Oded Berger-Tal

    The trade-off between the need to obtain new knowledge and the need to use that knowledge to improve performance is one of the most basic trade-offs in nature, and optimal performance usually requires some balance between exploratory and exploitative behaviors. Researchers in many disciplines have been searching for the optimal solution to this dilemma. Here we present a novel model in which the exploration strategy itself is dynamic and varies with time in order to optimize a definite goal, such as the acquisition of energy, money, or prestige. Our model produced four very distinct phases: Knowledge establishment, Knowledge accumulation, Knowledge maintenance, and Knowledge exploitation, giving rise to a multidisciplinary framework that applies equally to humans, animals, and organizations. The framework can be used to explain a multitude of phenomena in various disciplines, such as the movement of animals in novel landscapes, the most efficient resource allocation for a start-up company, or the effects of old age on knowledge acquisition in humans.
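The idea of an exploration strategy that itself varies with time can be illustrated with a two-armed bandit whose exploration rate decays, shifting behaviour from knowledge accumulation (mostly random sampling) toward knowledge exploitation (mostly the best-known option). This is a minimal sketch under assumed reward distributions and an assumed decay schedule, not the authors' four-phase model:

```python
import random

def decaying_epsilon_bandit(true_means, steps=5000, seed=1):
    """Two-armed bandit with a time-varying exploration rate.

    epsilon decays with time, so early steps mostly explore and
    late steps mostly exploit the arm with the best estimated mean.
    """
    rng = random.Random(seed)
    estimates = [0.0] * len(true_means)
    counts = [0] * len(true_means)
    total = 0.0
    for t in range(1, steps + 1):
        epsilon = 1.0 / (1.0 + 0.01 * t)  # assumed decay schedule
        if rng.random() < epsilon:
            arm = rng.randrange(len(true_means))          # explore
        else:
            arm = max(range(len(true_means)),
                      key=lambda a: estimates[a])         # exploit
        reward = rng.gauss(true_means[arm], 1.0)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return estimates, total / steps

est, avg = decaying_epsilon_bandit([0.2, 1.0])
```

With a fixed (non-decaying) epsilon, the agent would keep paying an exploration cost forever; the decay is what lets average reward approach the best arm's mean.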

  19. Models and criteria for waste repository performance

    International Nuclear Information System (INIS)

    Smith, C.F.; Cohen, J.J.

    1981-03-01

    A primary objective of the Waste Management Program is to assure that public health is protected. Predictive modeling, to some extent, will play a role in assuring that this objective is met. This paper considers the requirements and limitations of predictive modeling in providing useful inputs to waste management decision making. Criteria development needs and the relation between criteria and models are also discussed

  20. Sustaining Team Performance: A Systems Model

    Science.gov (United States)

    1979-07-31

    member performance of specific behaviors" (Nieva et al., 1978, p. 59). They have identified four major performance categories, and several performance...within the fire direction center several artillerymen work additively. The number of men in the fire direction center does not add steps to the sequence...Instructional strategies for training men of high and low aptitude. HumRRO-TR-73-10. Alexandria, VA: Human Resources Research Organization, April 1973. Blum, M.L. and

  1. Regional research exploitation of the LHC a case-study of the required computing resources

    CERN Document Server

    Almehed, S; Eerola, Paule Anna Mari; Mjörnmark, U; Smirnova, O G; Zacharatou-Jarlskog, C; Åkesson, T

    2002-01-01

    A simulation study to evaluate the required computing resources for a research exploitation of the Large Hadron Collider (LHC) has been performed. The evaluation was done as a case study, assuming the existence of a Nordic regional centre and using the requirements for performing a specific physics analysis as a yardstick. Other input parameters were: assumptions for the distribution of researchers at the institutions involved, an analysis model, and two different functional structures of the computing resources.

  2. Uncertainty avoidance and the exploration-exploitation trade-off

    NARCIS (Netherlands)

    Broekhuizen, Thijs; Giarratana, Marco S.; Torres, Anna

    2017-01-01

    Purpose - This study aims to investigate how a firm's uncertainty avoidance - as indicated by the headquarters' national culture - impacts firm performance by affecting exploratory (product innovation) and exploitative (brand trademark protection) activities. It aims to show that firms characterized

  3. Electric properties of organic and mineral electronic components, design and modelling of a photovoltaic chain for a better exploitation of the solar energy; Proprietes electriques des composants electroniques mineraux et organiques, conception et modelisation d'une chaine photovoltaique pour une meilleure exploitation de l'energie solaire

    Energy Technology Data Exchange (ETDEWEB)

    Aziz, A

    2006-11-15

    The research carried out in this thesis relates to mineral and organic electronic components and photovoltaic systems. Concerning the mineral semiconductors, we modelled the conduction properties of highly integrated metal/oxide/semiconductor (MOS) structures in the absence and in the presence of charges. We proposed a methodology for characterizing the ageing of MOS structures under injection of a Fowler-Nordheim (FN) type current. Then, we studied Schottky diodes in polymers, of the metal/polymer/metal type. We concluded that the mechanism of charge transfer through the metal/polymer interface is attributed to the thermo-ionic effect and could be affected by the lowering of the potential barrier at the metal/polymer interface. In the area of photovoltaic energy, we designed and modelled a medium-power (100 W) photovoltaic system. We showed that adapting the generator to the load allows a better exploitation of solar energy. This is carried out by means of converters driven by an MPPT-type control equipped with a circuit for detecting malfunction and restarting the system. (author)
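The abstract states only that an MPPT control adapts the generator to the load. One standard realization of maximum power point tracking (an assumption here, not necessarily the thesis' implementation) is the perturb-and-observe algorithm: perturb the operating voltage, keep moving in the direction that increased the measured power, and reverse otherwise.

```python
def perturb_and_observe(power_at, v0=15.0, dv=0.2, iters=200):
    """Sketch of perturb-and-observe MPPT.

    power_at(v) plays the role of measuring the PV array's output
    power at operating voltage v; the controller climbs the power
    curve and ends up oscillating around the maximum power point.
    """
    v, step = v0, dv
    p = power_at(v)
    for _ in range(iters):
        v_new = v + step
        p_new = power_at(v_new)
        if p_new < p:        # power dropped: reverse the perturbation
            step = -step
        v, p = v_new, p_new
    return v, p

def pv_power(v):
    # toy PV power curve with a single maximum at 17 V (hypothetical numbers)
    return max(0.0, 60.0 - 0.5 * (v - 17.0) ** 2)

v_mpp, p_mpp = perturb_and_observe(pv_power)
```

The residual oscillation of ±dv around the maximum is the classic drawback of perturb-and-observe, usually traded off against tracking speed when choosing the step size.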

  5. Sustainable exploitation and management of aquatic resources

    DEFF Research Database (Denmark)

    Neuenfeldt, Stefan; Köster, Fritz

    2014-01-01

    DTU Aqua conducts research, provides advice, educates at university level and contributes to innovation in sustainable exploitation and management of aquatic resources. The vision of DTU Aqua is to enable ecologically and economically sustainable exploitation of aquatic resources applying an integrated...... management. Marine ecosystems aims at understanding the mechanisms that govern the interaction between individuals, species and populations in an ecosystem, enabling us to determine the stability and flexibility of the ecosystem. Marine living resources looks at the sustainable utilization of fish and shellfish...... stocks. Ecosystem effects expands from the ecosystem approach to fisheries management to an integrated approach where other human activities are taken into consideration. Fisheries management develops methods, models and tools for predicting and evaluating the effects of management measures and regulations...

  6. Exploiting HRM in support of lean manufacturing

    DEFF Research Database (Denmark)

    Jørgensen, Frances; Matthiesen, Rikke

    The purpose of this paper is to investigate the ways in which HRM practices are, and could potentially be, exploited to support lean manufacturing in practice. First, a review of the pertinent literature regarding HRM, SHRM, and lean manufacturing is presented to provide an understanding of the mechanisms...... by which HRM practices could, theoretically, be used to support a lean implementation. Data presented in the paper are derived from 1) a longitudinal case study on lean implementation and 2) managers currently involved with lean manufacturing in a second company. The relevant literature and the data...... depicting the potential role in supporting HRM/lean integrated practices. The analysis of the model with respect to the theoretical background emphasizes a number of areas in which HRM could be more fully exploited in order to more successfully support lean implementation, for example, by stressing HRM

  7. Exploiting first-class arrays in Fortran for accelerator programming

    International Nuclear Information System (INIS)

    Rasmussen, Craig E.; Weseloh, Wayne N.; Robey, Robert W.; Sottile, Matthew J.; Quinlan, Daniel; Overbey, Jeffrey

    2010-01-01

    Emerging architectures for high performance computing are often well suited to a data parallel programming model. This paper presents a simple programming methodology based on existing languages and compiler tools that allows programmers to take advantage of these systems. We will work with the array features of Fortran 90 to show how this infrequently exploited, standardized language feature is easily transformed to lower-level accelerator code. Our transformations are based on a mapping from Fortran 90 to C++ code with OpenCL extensions. The sheer complexity of programming clusters of many-core or multi-core processors with tens of millions of threads of execution makes the simplicity of the data parallel model attractive. Furthermore, the increasing complexity of today's applications (especially when convolved with the increasing complexity of the hardware) and the need for portability across hardware architectures make a higher-level and simpler programming model like data parallel attractive. The goal of this work has been to exploit source-to-source transformations that allow programmers to develop and maintain programs at a high level of abstraction, without coding to a specific hardware architecture. Furthermore, these transformations allow multiple hardware architectures to be targeted without changing the high-level source. It also removes the necessity for application programmers to understand details of the accelerator architecture or to know OpenCL.
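The data-parallel whole-array style that the paper transforms can be illustrated outside Fortran. The NumPy sketch below contrasts an explicit element loop with a whole-array expression; it is an analogy only, unrelated to the authors' Fortran-to-OpenCL toolchain:

```python
import numpy as np

# Fortran 90 can express c = a + alpha * b as a single whole-array
# statement, which is what the paper maps to accelerator kernels.
# NumPy offers the same data-parallel abstraction in Python.

def saxpy_loop(alpha, a, b):
    # explicit element-by-element loop: a fixed iteration order is implied
    c = np.empty_like(a)
    for i in range(len(a)):
        c[i] = a[i] + alpha * b[i]
    return c

def saxpy_array(alpha, a, b):
    # whole-array form: no iteration order is implied, so a compiler or
    # runtime is free to execute the elements in parallel on an accelerator
    return a + alpha * b
```

The two functions compute the same result; the point is that the whole-array form carries no loop-order dependence, which is what makes the source-to-source mapping to data-parallel accelerator code straightforward.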

  8. Models and criteria for LLW disposal performance

    International Nuclear Information System (INIS)

    Smith, C.F.; Cohen, J.J.

    1980-12-01

    A primary objective of the Low Level Waste (LLW) Management Program is to assure that public health is protected. Predictive modeling, to some extent, will play a role in meeting this objective. This paper considers the requirements and limitations of predictive modeling in providing useful inputs to waste management decision making. In addition, criteria development needs and the relation between criteria and models are discussed

  9. Models and criteria for LLW disposal performance

    International Nuclear Information System (INIS)

    Smith, C.F.; Cohen, J.J.

    1980-01-01

    A primary objective of the Low Level Waste (LLW) Management Program is to assure that public health is protected. Predictive modeling, to some extent, will play a role in meeting this objective. This paper considers the requirements and limitations of predictive modeling in providing useful inputs to waste management decision making. In addition, criteria development needs and the relation between criteria and models are discussed

  10. Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial

    Science.gov (United States)

    The model performance evaluation consists of metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures that capture magnitude-only, sequence-only, and combined magnitude and sequence errors.

  11. Extending the Global Sensitivity Analysis of the SimSphere model in the Context of its Future Exploitation by the Scientific Community

    Directory of Open Access Journals (Sweden)

    George P. Petropoulos

    2015-05-01

    Full Text Available In today’s changing climate, the development of robust, accurate and globally applicable models is imperative for a wider understanding of Earth’s terrestrial biosphere. Moreover, an understanding of the representation, sensitivity and coherence of such models is vital for the operationalisation of any physically based model. A Global Sensitivity Analysis (GSA) was conducted on the SimSphere land biosphere model, in which a meta-modelling method adopting Bayesian theory was implemented. Initially, the effects of assuming uniform probability distribution functions (PDFs) for the model inputs, when examining the sensitivity of key quantities simulated by SimSphere at different output times, were examined. Topographic model input parameters (e.g., slope, aspect, and elevation) were derived within a Geographic Information System (GIS) before implementation within the model. The effect of the time of the simulation on the sensitivity of previously examined outputs was also analysed. Results showed that simulated outputs were significantly influenced by changes in topographic input parameters, fractional vegetation cover, vegetation height and surface moisture availability, in agreement with previous studies. The time of model output simulation had a significant influence on the absolute values of the output variance decomposition, but it did not seem to change the relative importance of each input parameter. Sensitivity Analysis (SA) results for the newly modelled outputs allowed identification of the most responsive model inputs and interactions. Our study presents an important step forward in SimSphere verification, given the increasing interest in its use both as an independent modelling and educational tool. Furthermore, this study is very timely given on-going efforts towards the development of operational products based on the synergy of SimSphere with Earth Observation (EO) data. In this context, results also provide additional support for the

  12. Modelling Flexible Pavement Response and Performance

    DEFF Research Database (Denmark)

    Ullidtz, Per

    This textbook is primarily concerned with models for predicting the future condition of flexible pavements, as a function of traffic loading, climate, materials, etc., using analytical-empirical methods.

  13. Detailed Performance Model for Photovoltaic Systems: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Tian, H.; Mancilla-David, F.; Ellis, K.; Muljadi, E.; Jenkins, P.

    2012-07-01

    This paper presents a modified current-voltage relationship for the single diode model. The single-diode model has been derived from the well-known equivalent circuit for a single photovoltaic cell. The modification presented in this paper accounts for both parallel and series connections in an array.
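
    The single-diode equation is implicit in the current, so it is typically solved numerically. The sketch below uses generic, illustrative cell parameters (all values are assumptions, not the paper's) together with the common scaling of voltage by the series count and current by the parallel count:

```python
import math

def array_current(v, n_series=36, n_parallel=2, i_ph=8.0, i_0=1e-9,
                  n_ideal=1.3, v_t=0.025, r_s=0.005, r_p=10.0):
    """Solve the implicit single-diode equation for one cell, then scale to
    an n_series x n_parallel array of identical cells (illustrative values)."""
    v_cell = v / n_series                  # series-connected cells share the array voltage
    def f(i):
        v_j = v_cell + i * r_s             # junction voltage including the series-resistance drop
        return (i_ph - i_0 * (math.exp(v_j / (n_ideal * v_t)) - 1.0)
                - v_j / r_p - i)           # photocurrent - diode - shunt - terminal current
    # f(i) is monotone decreasing, so bisection brackets the unique root
    lo, hi = -i_ph, 2 * i_ph
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return n_parallel * 0.5 * (lo + hi)    # parallel strings add their currents
```

    With these assumed parameters the short-circuit current is close to n_parallel times the photocurrent, and the array open-circuit voltage falls near 26-27 V.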

  14. Modelling Flat Spring performance using FEA

    International Nuclear Information System (INIS)

    Fatola, B O; Keogh, P; Hicks, B

    2009-01-01

    This paper reports how the stiffness of a Flat Spring can be predicted using nonlinear Finite Element Analysis (FEA). The analysis of a Flat Spring is a nonlinear problem involving contact mechanics, geometric nonlinearity and material property nonlinearity. Research has been focused on improving the accuracy of the model by identifying and exploring the significant assumptions contributing to errors. This paper presents results from some of the models developed using FEA software. The validation process is shown to identify where improvements can be made to the model assumptions to increase the accuracy of prediction. The goal is to achieve an accuracy level of ±10 % as the intention is to replace practical testing with FEA modelling, thereby reducing the product development time and cost. Results from the FEA models are compared with experimental results to validate the accuracy.

  15. Simulated population responses of common carp to commercial exploitation

    Energy Technology Data Exchange (ETDEWEB)

    Weber, Michael J.; Hennen, Matthew J.; Brown, Michael L.

    2011-12-01

    Common carp Cyprinus carpio is a widespread invasive species that can become highly abundant and impose deleterious ecosystem effects. Thus, aquatic resource managers are interested in controlling common carp populations. Control of invasive common carp populations is difficult, due in part to the inherent uncertainty of how populations respond to exploitation. To understand how common carp populations respond to exploitation, we evaluated common carp population dynamics (recruitment, growth, and mortality) in three natural lakes in eastern South Dakota. Common carp exhibited similar population dynamics across these three systems that were characterized by consistent recruitment (ages 3 to 15 years present), fast growth (K = 0.37 to 0.59), and low mortality (A = 1 to 7%). We then modeled the effects of commercial exploitation on size structure, abundance, and egg production to determine its utility as a management tool to control populations. All three populations responded similarly to exploitation simulations with a 575-mm length restriction, representing commercial gear selectivity. Simulated common carp size structure modestly declined (9 to 37%) in all simulations. Abundance of common carp declined dramatically (28 to 56%) at low levels of exploitation (0 to 20%), but exploitation >40% had little additive effect and populations were only reduced by 49 to 79% despite high exploitation (>90%). Maximum lifetime egg production was reduced from 77 to 89% at a moderate level of exploitation (40%), indicating the potential for recruitment overfishing. Exploitation further reduced common carp size structure, abundance, and egg production when simulations were not size selective. Our results provide insights into how common carp populations may respond to exploitation. Although commercial exploitation may be able to partially control populations, an integrated removal approach that removes all sizes of common carp has a greater chance of controlling population abundance.
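
    Population responses of this kind can be sketched with a simple equilibrium age-structure model. The growth, mortality, and recruitment values below are illustrative placeholders drawn loosely from the ranges reported, not the study's model; note how raising the exploitation rate on fish above the length limit yields diminishing returns:

```python
import math

def equilibrium_abundance(u, ages=range(3, 16), k=0.45, l_inf=700.0,
                          a_nat=0.05, min_len=575.0, recruits=1000.0):
    """Equilibrium total numbers for a population with constant recruitment,
    natural mortality a_nat, and exploitation rate u applied only to fish
    longer than min_len (hypothetical von Bertalanffy growth parameters)."""
    n, total = recruits, 0.0
    for age in ages:
        length = l_inf * (1.0 - math.exp(-k * age))   # von Bertalanffy length-at-age
        total += n
        exploited = u if length >= min_len else 0.0   # gear selects only large fish
        n *= (1.0 - a_nat) * (1.0 - exploited)        # survive natural + fishing mortality
    return total
```

    Comparing exploitation rates of 0, 0.4, and 0.8 shows the pattern the abstract describes: abundance drops sharply at first, but the marginal effect of additional exploitation shrinks.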

  16. Modeling and optimization of LCD optical performance

    CERN Document Server

    Yakovlev, Dmitry A; Kwok, Hoi-Sing

    2015-01-01

    The aim of this book is to present the theoretical foundations of modeling the optical characteristics of liquid crystal displays, critically reviewing modern modeling methods and examining areas of applicability. The modern matrix formalisms of optics of anisotropic stratified media, most convenient for solving problems of numerical modeling and optimization of LCD, will be considered in detail. The benefits of combined use of the matrix methods will be shown, which generally provides the best compromise between physical adequacy and accuracy with computational efficiency and optimization fac

  17. Integrated thermodynamic model for ignition target performance

    Directory of Open Access Journals (Sweden)

    Springer P.T.

    2013-11-01

    Full Text Available We have derived a 3-dimensional synthetic model for NIF implosion conditions, by predicting and optimizing fits to a broad set of x-ray and nuclear diagnostics obtained on each shot. By matching x-ray images, burn width, neutron time-of-flight ion temperature, yield, and fuel ρr, we obtain nearly unique constraints on conditions in the hotspot and fuel in a model that is entirely consistent with the observables. This model allows us to determine hotspot density, pressure, areal density (ρr), total energy, and other ignition-relevant parameters not available from any single diagnostic. This article describes the model and its application to National Ignition Facility (NIF) tritium–hydrogen–deuterium (THD) and DT implosion data, and provides an explanation for the large yield and ρr degradation compared to numerical code predictions.

  18. The CREATIVE Decontamination Performance Evaluation Model

    National Research Council Canada - National Science Library

    Shelly, Erin E

    2008-01-01

    The project objective is to develop a semi-empirical, deterministic model to characterize and predict laboratory-scale decontaminant efficacy and hazards for a range of: chemical agents (current focus on HD...

  19. Mathematical Modeling of Circadian/Performance Countermeasures

    Data.gov (United States)

    National Aeronautics and Space Administration — We developed and refined our current mathematical model of circadian rhythms to incorporate melatonin as a marker rhythm. We used an existing physiologically based...

  20. Security option file - Exploitation (DOS-Expl)

    International Nuclear Information System (INIS)

    2016-01-01

    This document aims at presenting the functions performed by Cigeo during its exploitation phase, its main technical and security options envisaged with respect to different types of internal or external risks, and a first assessment of its impact on mankind and on the environment during its exploitation in normal operation as well as in incidental or accidental situations. A first volume addresses security principles, approach and management in relationship with the legal and regulatory framework. The second volume presents input data related to waste parcels and used for the installation sizing and operation, the main site characteristics, the main technical options regarding structures and equipment, and the main options regarding exploitation (parcel management, organisational and human aspects, and effluent management). The third volume describes how parcels are processed from their arrival to their emplacement in storage compartments, an inventory of internal and external risks, and a first assessment of the consequences of scenarios on mankind and on the environment. The fourth volume presents the options and operations envisaged regarding Cigeo closure, and an inventory of associated risks

  1. Modelling of Supercapacitors: Factors Influencing Performance

    OpenAIRE

    Kroupa, M; Offer, GJ; Kosek, J

    2016-01-01

    The utilizable capacitance of Electrochemical Double Layer Capacitors (EDLCs) is a function of the frequency at which they are operated and this is strongly dependent on the construction and physical parameters of the device. We simulate the dynamic behavior of an EDLC using a spatially resolved model based on the porous electrode theory. The model of Verbrugge and Liu (J. Electrochem. Soc. 152, D79 (2005)) was extended with a dimension describing the transport into the carbon particle pores....

  2. A unified tool for performance modelling and prediction

    International Nuclear Information System (INIS)

    Gilmore, Stephen; Kloul, Leila

    2005-01-01

    We describe a novel performability modelling approach, which facilitates the efficient solution of performance models extracted from high-level descriptions of systems. The notation which we use for our high-level designs is the Unified Modelling Language (UML) graphical modelling language. The technology which provides the efficient representation capability for the underlying performance model is the multi-terminal binary decision diagram (MTBDD)-based PRISM probabilistic model checker. The UML models are compiled through an intermediate language, the stochastic process algebra PEPA, before translation into MTBDDs for solution. We illustrate our approach on a real-world analysis problem from the domain of mobile telephony
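
    The pipeline sketched above ultimately reduces a UML/PEPA design to a continuous-time Markov chain whose steady-state distribution the model checker computes. A minimal sketch of that underlying computation, using a hypothetical 3-state generator matrix and a dense NumPy solve in place of MTBDDs:

```python
import numpy as np

# Generator matrix Q of a small CTMC (rows sum to zero): a hypothetical
# handset cycling idle -> calling -> handover -> idle at different rates.
Q = np.array([[-0.2,  0.2,  0.0],
              [ 0.0, -1.0,  1.0],
              [ 0.5,  0.0, -0.5]])

# The stationary distribution pi solves pi Q = 0 subject to sum(pi) = 1.
# Replace one (redundant) balance equation with the normalization row.
A = np.vstack([Q.T[:-1], np.ones(3)])
b = np.zeros(3)
b[-1] = 1.0
pi = np.linalg.solve(A, b)   # stationary probabilities of the three states
```

    For this cycle the probability flow around the loop is equal on every edge, so the slowest state (idle, rate 0.2) accumulates the most probability.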

  3. Exploiting core knowledge for visual object recognition.

    Science.gov (United States)

    Schurgin, Mark W; Flombaum, Jonathan I

    2017-03-01

    Humans recognize thousands of objects, and with relative tolerance to variable retinal inputs. The acquisition of this ability is not fully understood, and it remains an area in which artificial systems have yet to surpass people. We sought to investigate the memory process that supports object recognition. Specifically, we investigated the association of inputs that co-occur over short periods of time. We tested the hypothesis that human perception exploits expectations about object kinematics to limit the scope of association to inputs that are likely to share a single token as their source. In several experiments we exposed participants to images of objects, and we then tested recognition sensitivity. Using motion, we manipulated whether successive encounters with an image took place through kinematics that implied the same or a different token as the source of those encounters. Images were injected with noise, or shown at varying orientations, and we included 2 manipulations of motion kinematics. Across all experiments, memory performance was better for images that had been previously encountered with kinematics that implied a single token. A model-based analysis similarly showed greater memory strength when images were shown via kinematics that implied a single token. These results suggest that constraints from physics are built into the mechanisms that support memory about objects. Such constraints, often characterized as 'Core Knowledge', are known to support perception and cognition broadly, even in young infants. But they have never been considered as a mechanism for memory with respect to recognition. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  4. Developing an Energy Performance Modeling Startup Kit

    Energy Technology Data Exchange (ETDEWEB)

    none,

    2012-10-01

    In 2011, the NAHB Research Center began assessing the needs and motivations of residential remodelers regarding energy performance remodeling. This report outlines: the current remodeling industry and the role of energy efficiency; gaps and barriers to adding energy efficiency into remodeling; and support needs of professional remodelers to increase sales and projects involving improving home energy efficiency.

  5. Modeling vibrato and portamento in music performance

    NARCIS (Netherlands)

    Desain, P.W.M.; Honing, H.J.

    1999-01-01

    Research in the psychology of music dealing with expression is often concerned with the discrete aspects of music performance, and mainly concentrates on the study of piano music (partly because of the ease with which piano music can be reduced to discrete note events). However, on other

  6. Evaluating Models of Human Performance: Safety-Critical Systems Applications

    Science.gov (United States)

    Feary, Michael S.

    2012-01-01

    This presentation is part of a panel discussion on Evaluating Models of Human Performance. The purpose of this panel is to discuss the increasing use of models in the world today and specifically to focus on how to describe and evaluate models of human performance. My presentation will focus on the generation of distributions of performance and on the evaluation of different strategies for humans performing tasks with mixed-initiative (human-automation) systems. I will also discuss how to provide human performance modeling data to support decisions on acceptability and tradeoffs in the design of safety-critical systems. I will conclude with challenges for the future.

  7. Comparison of performance of simulation models for floor heating

    DEFF Research Database (Denmark)

    Weitzmann, Peter; Svendsen, Svend

    2005-01-01

    This paper describes the comparison of performance of simulation models for floor heating with different level of detail in the modelling process. The models are compared in an otherwise identical simulation model containing room model, walls, windows, ceiling and ventilation system. By exchanging...

  8. Evaluating Performances of Traffic Noise Models | Oyedepo ...

    African Journals Online (AJOL)

    Traffic noise levels in decibels dB(A) were measured at six locations using a 407780A Integrating Sound Level Meter, while spot speeds and traffic volumes were collected with a cine-camera. The predicted sound exposure level (SEL) was evaluated using the Burgess, British and FHWA models. The average noise levels obtained are 77.64 ...

  9. HANDOVER MANAGEABILITY AND PERFORMANCE MODELING IN

    African Journals Online (AJOL)

    SOFTLINKS DIGITAL

    and Corporate Information Systems: A Proactive Mitigation Response Model ... known malware variants, and more than ... has defined authentication as the process of identifying ... providing protection via Access Controls, Encryption and ... to use their technical prowess to teach ... Developing and distributing approved ...

  10. Persistence Modeling for Assessing Marketing Strategy Performance

    NARCIS (Netherlands)

    M.G. Dekimpe (Marnik); D.M. Hanssens (Dominique)

    2003-01-01

    textabstractThe question of long-run market response lies at the heart of any marketing strategy that tries to create a sustainable competitive advantage for the firm or brand. A key challenge, however, is that only short-run results of marketing actions are readily observable. Persistence modeling

  11. HANDOVER MANAGEABILITY AND PERFORMANCE MODELING IN

    African Journals Online (AJOL)

    SOFTLINKS DIGITAL

    Software is better modelled with the use of UML diagrams like use cases, which ... analysis and design according to [1] is to transform the 'black ... The system requests the user to enter his/her user_id ... The corresponding changes are saved.

  12. Mathematical modeling of optical glazing performance

    NARCIS (Netherlands)

    Nijnatten, van P.A.; Wittwer, V.; Granqvist, C.G.; Lampert, C.M.

    1994-01-01

    Mathematical modelling can be a powerful tool in the design and optimalization of glazing. By calculation, the specifications of a glazing design and the optimal design parameters can be predicted without building costly prototypes first. Furthermore, properties which are difficult to measure, like

  13. Some useful characteristics of performance models

    International Nuclear Information System (INIS)

    Worledge, D.H.

    1985-01-01

    This paper examines the demands placed upon models of human cognitive decision processes in application to Probabilistic Risk Assessment. Successful models, for this purpose, should: 1) be based on proven or plausible psychological knowledge, e.g., Rasmussen's mental schematic, 2) incorporate opportunities for slips, 3) take account of the recursive nature, in time, of corrections to mistaken actions, and 4) depend on the crew's predominant mental states that accompany such recursions. The latter is equivalent to an explicit coupling between input and output of Rasmussen's mental schematic. A family of such models is proposed with observable rate processes mediating the (conscious) mental states involved. It is expected that the cumulative probability distributions corresponding to the individual rate processes can be identified with probability-time correlations of the HCR (Human Cognitive Reliability) type discussed elsewhere in this session. The functional forms of the conditional rates are intuitively shown to have simple characteristics that lead to a strongly recursive stochastic process with significant predictive capability. Models of the type proposed have few parts and form a representation that is intentionally far short of a fully transparent exposition of the mental process in order to avoid making impossible demands on data.
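
    A probability-time correlation of the kind referred to is often written as a Weibull-type crew non-response curve. The sketch below is a generic illustration; the coefficients are placeholders chosen so that non-response probability is about 0.5 at the median response time, not the published HCR values:

```python
import math

def p_nonresponse(t, t_median, alpha=0.407, beta=1.2, gamma=0.7):
    """Probability that the crew has NOT completed the required action by
    time t, as a Weibull time-reliability curve normalized by the median
    response time t_median (illustrative coefficients, an assumption)."""
    x = (t / t_median - gamma) / alpha
    if x <= 0:
        return 1.0            # before the delay threshold, no response yet
    return math.exp(-x ** beta)
```

    The curve is 1 for very short times, passes near 0.5 at the median, and decays toward 0 as more time is allowed, which is the qualitative shape such correlations need.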

  14. High-performance phase-field modeling

    KAUST Repository

    Vignal, Philippe; Sarmiento, Adel; Cortes, Adriano Mauricio; Dalcin, L.; Collier, N.; Calo, Victor M.

    2015-01-01

    and phase-field crystal equation will be presented, which corroborate the theoretical findings, and illustrate the robustness of the method. Results related to more challenging examples, namely the Navier-Stokes Cahn-Hilliard and a diffusion-reaction Cahn-Hilliard system, will also be presented. The implementation was done in PetIGA and PetIGA-MF, high-performance Isogeometric Analysis frameworks [1, 3], designed to handle non-linear, time-dependent problems.

  15. Meaning, function and methods of the recultivation in mining exploitation

    OpenAIRE

    Dambov, Risto; Ljatifi, Ejup

    2015-01-01

    The exploitation of mineral resources degrades and deforms the relief and, more generally, the surface of the Earth's crust. Depending on the type of open pit mine, this degradation can be expressed to a lesser or greater extent, sometimes over several square kilometers. The exploitation of mineral resources is inextricably linked with the environment. It is often said that mining is „enemy No. 1“ of the environment. Exploitation leads to degradation of h...

  16. Atmospheric statistical dynamic models. Model performance: the Lawrence Livermore Laboratoy Zonal Atmospheric Model

    International Nuclear Information System (INIS)

    Potter, G.L.; Ellsaesser, H.W.; MacCracken, M.C.; Luther, F.M.

    1978-06-01

    Results from the zonal model indicate quite reasonable agreement with observation in terms of the parameters and processes that influence the radiation and energy balance calculations. The model produces zonal statistics similar to those from general circulation models, and has also been shown to produce similar responses in sensitivity studies. Further studies of model performance are planned, including: comparison with July data; comparison of temperature and moisture transport and wind fields for winter and summer months; and a tabulation of atmospheric energetics. Based on these preliminary performance studies, however, it appears that the zonal model can be used in conjunction with more complex models to help unravel the problems of understanding the processes governing present climate and climate change. As can be seen in the subsequent paper on model sensitivity studies, in addition to reduced cost of computation, the zonal model facilitates analysis of feedback mechanisms and simplifies analysis of the interactions between processes

  17. Accurate Modeling and Analysis of Isolation Performance in Multiport Amplifiers

    Directory of Open Access Journals (Sweden)

    Marinella Aloisio

    2012-01-01

    Full Text Available A Multiport Amplifier (MPA) is an implementation of the satellite power amplification section that allows sharing the payload RF power among several beams/ports and guarantees a highly efficient exploitation of the available DC satellite power. This feature is of paramount importance in multiple-beam satellite systems, where the use of MPAs allows reconfiguring the RF output power among the different service beams in order to handle unexpected traffic unbalances and traffic variations over time. This paper presents Monte Carlo simulations carried out by means of an ESA in-house simulator developed in the Matlab environment. The objective of the simulations is to analyse how the MPA performance, in particular in terms of isolation at the MPA output ports, is affected by the amplitude and phase tracking errors of the high power amplifiers within the MPA.
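
    The isolation mechanism can be illustrated with a toy Monte Carlo: an ideal 4-port MPA built from unitary input/output networks routes each input to exactly one output, and gain/phase tracking errors among the four amplifiers cause leakage into the other ports. The network matrices and error magnitudes below are assumptions for illustration, not the ESA simulator's values:

```python
import numpy as np

rng = np.random.default_rng(0)
# Unitary 4x4 DFT matrix as a Butler-matrix-like input network; the output
# network is its inverse (conjugate transpose), so the ideal MPA is identity.
F = np.exp(-2j * np.pi * np.outer(range(4), range(4)) / 4) / 2.0
INET, ONET = F, F.conj().T

def worst_isolation_db(gain_err_db=0.5, phase_err_deg=5.0, trials=2000):
    """Monte Carlo estimate of the worst output-port isolation (dB) when the
    four amplifiers have Gaussian gain/phase tracking errors (toy model)."""
    worst = np.inf
    for _ in range(trials):
        g_db = rng.normal(0.0, gain_err_db, 4)
        ph = np.deg2rad(rng.normal(0.0, phase_err_deg, 4))
        g = 10 ** (g_db / 20) * np.exp(1j * ph)       # complex amplifier gains
        T = ONET @ np.diag(g) @ INET                  # overall port-to-port transfer
        p = np.abs(T) ** 2
        wanted = np.diag(p).copy()                    # power at the intended port
        np.fill_diagonal(p, 0.0)                      # remaining entries are leakage
        iso = 10 * np.log10(wanted / p.sum(axis=1).clip(1e-30))
        worst = min(worst, iso.min())
    return worst
```

    With zero tracking error the off-diagonal leakage vanishes to machine precision; with 0.5 dB / 5 degree errors the worst-case isolation collapses to a few tens of dB, which is the sensitivity the abstract describes.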

  18. Individualized Biomathematical Modeling of Fatigue and Performance

    Science.gov (United States)

    2008-05-29

    Front-matter excerpt from the report (grant FA9550-06-1-0055, "Individualized Biomathematical Modeling of Fatigue and Performance"): contents list "Interactions and Transitions" (p. 53) and "New Discoveries, Inventions, or Patent Disclosures" (p. 56); personnel include ... (Old Dominion University, not supported on grant), Daniel J. Mollicone, Ph.D. (Pulsar Informatics, Inc., not supported on grant), Christopher G. Mott, M.S. (Pulsar Informatics, Inc., not supported on grant), and Erik Olofsen, M.S. (Leiden University, the Netherlands, not supported on grant)

  19. Exploiting Redundancy in an OFDM SDR Receiver

    Directory of Open Access Journals (Sweden)

    Tomas Palenik

    2009-01-01

    Full Text Available A common OFDM system contains redundancy necessary to mitigate interblock interference and allows computationally efficient single-tap frequency-domain equalization in the receiver. Assuming the system implements an outer error-correcting code and channel state information is available in the receiver, we show that it is possible to understand the cyclic prefix insertion as a weak inner ECC encoding and to exploit the introduced redundancy to slightly improve the error performance of such a system. In this paper, an easy-to-implement modification to an existing SDR OFDM receiver is presented. This modification enables the utilization of prefix redundancy, while preserving full compatibility with existing OFDM-based communication standards.
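
    The redundancy being exploited is simply that the cyclic prefix duplicates the tail of each OFDM block, so in an ideal flat channel the receiver holds two independently noisy copies of those samples and can combine them to roughly halve the noise power. A toy sketch of that effect (hypothetical block sizes and noise level, no channel or soft-decoding model):

```python
import numpy as np

rng = np.random.default_rng(1)
n_fft, n_cp, n_sym = 64, 16, 4000   # assumed FFT size, prefix length, symbol count
noise_var = 0.1

err_plain, err_avg = 0.0, 0.0
for _ in range(n_sym):
    x = rng.choice([-1.0, 1.0], n_fft)           # one block of real +/-1 toy symbols
    tx = np.concatenate([x[-n_cp:], x])          # cyclic prefix = copy of the block tail
    rx = tx + rng.normal(0.0, np.sqrt(noise_var), n_fft + n_cp)
    tail = rx[n_fft:]                            # noisy tail samples of the body
    avg = 0.5 * (rx[:n_cp] + tail)               # combine prefix with its duplicate
    err_plain += np.mean((tail - x[-n_cp:]) ** 2)
    err_avg += np.mean((avg - x[-n_cp:]) ** 2)
```

    Averaging the two noisy copies cuts the mean squared error on the duplicated samples to about half, which is the extra information a soft decoder could feed to the outer code.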

  20. An Empirical Study of a Solo Performance Assessment Model

    Science.gov (United States)

    Russell, Brian E.

    2015-01-01

    The purpose of this study was to test a hypothesized model of solo music performance assessment. Specifically, this study investigates the influence of technique and musical expression on perceptions of overall performance quality. The Aural Musical Performance Quality (AMPQ) measure was created to measure overall performance quality, technique,…

  1. Developing an Energy Performance Modeling Startup Kit

    Energy Technology Data Exchange (ETDEWEB)

    Wood, A.

    2012-10-01

    In 2011, the NAHB Research Center began the first part of the multi-year effort by assessing the needs and motivations of residential remodelers regarding energy performance remodeling. The scope is multifaceted: perspectives will be sought from remodeling firms ranging in size from small-scale sole proprietorships to national firms. This will allow the Research Center to gain a deeper understanding of the remodeling and energy retrofit business and the needs of contractors when offering energy upgrade services. To determine the gaps and the motivation for energy performance remodeling, the NAHB Research Center conducted (1) an initial series of focus groups with remodelers at the 2011 International Builders' Show, (2) a second series of focus groups with remodelers at the NAHB Research Center in conjunction with the NAHB Spring Board meeting in DC, and (3) quantitative market research with remodelers based on the findings from the focus groups. The goal was threefold, to: Understand the current remodeling industry and the role of energy efficiency; Identify the gaps and barriers to adding energy efficiency into remodeling; and Quantify and prioritize the support needs of professional remodelers to increase sales and projects involving improving home energy efficiency. This report outlines all three of these tasks with remodelers.

  2. Modeling the Mechanical Performance of Die Casting Dies

    Energy Technology Data Exchange (ETDEWEB)

    R. Allen Miller

    2004-02-27

    The following report covers work performed at Ohio State on modeling the mechanical performance of dies. The focus of the project was the development, and particularly the verification, of finite element techniques used to model and predict displacements and stresses in die casting dies. The work entails a major case study performed with an industrial partner on a production die, and laboratory experiments performed at Ohio State.

  3. Modeling the marketing strategy-performance relationship : towards an hierarchical marketing performance framework

    NARCIS (Netherlands)

    Huizingh, Eelko K.R.E.; Zengerink, Evelien

    2001-01-01

    Accurate measurement of marketing performance is an important topic for both marketing academics and marketing managers. Many researchers have recognized that marketing performance measurement should go beyond financial measurement. In this paper we propose a conceptual framework that models

  4. Modeling and analysis to quantify MSE wall behavior and performance.

    Science.gov (United States)

    2009-08-01

    To better understand potential sources of adverse performance of mechanically stabilized earth (MSE) walls, a suite of analytical models was studied using the computer program FLAC, a numerical modeling computer program widely used in geotechnical en...

  5. The ESA Geohazard Exploitation Platform

    Science.gov (United States)

    Bally, Philippe; Laur, Henri; Mathieu, Pierre-Philippe; Pinto, Salvatore

    2015-04-01

    Earthquakes represent one of the world's most significant hazards in terms both of loss of life and damages. In the first decade of the 21st century, earthquakes accounted for 60 percent of fatalities from natural disasters, according to the United Nations International Strategy for Disaster Reduction (UNISDR). To support mitigation activities designed to assess and reduce risks and improve response in emergency situations, satellite EO can be used to provide a broad range of geo-information services. This includes for instance crustal block boundary mapping to better characterize active faults, strain rate mapping to assess how rapidly faults are deforming, soil vulnerability mapping to help estimate how the soil is behaving in reaction to seismic phenomena, and geo-information to assess the extent and intensity of the earthquake impact on man-made structures and formulate assumptions on the evolution of the seismic sequence, i.e. where local aftershocks or future main shocks (on nearby faults) are most likely to occur. In May 2012, the European Space Agency and the GEO Secretariat convened the International Forum on Satellite EO for Geohazards, now known as the Santorini Conference. The event was the continuation of a series of international workshops such as those organized by the Geohazards Theme of the Integrated Global Observing Strategy Partnership. In Santorini the seismic community set out a vision of the EO contribution to an operational global seismic risk program, which led to the Geohazard Supersites and Natural Laboratories (GSNL) initiative. The initial contribution of ESA to support the GSNL was the first Supersites Exploitation Platform (SSEP) system in the framework of Grid Processing On Demand (GPOD), now followed by the Geohazard Exploitation Platform (GEP). In this presentation, we will describe the contribution of the GEP to exploiting satellite EO for geohazard risk assessment. It is supporting the GEO Supersites and has been further

  6. Characterization uncertainty and its effects on models and performance

    International Nuclear Information System (INIS)

    Rautman, C.A.; Treadway, A.H.

    1991-01-01

    Geostatistical simulation is being used to develop multiple geologic models of rock properties at the proposed Yucca Mountain repository site. Because each replicate model contains the same known information, and is thus essentially indistinguishable statistically from others, the differences between models may be thought of as representing the uncertainty in the site description. The variability among performance measures, such as ground water travel time, calculated using these replicate models therefore quantifies the uncertainty in performance that arises from uncertainty in site characterization
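
The replicate-model idea above lends itself to a small numerical sketch: draw several statistically equivalent random conductivity fields, compute a performance measure (here a toy 1-D ground water travel time) for each, and read the spread among replicates as the uncertainty contributed by site characterization. This is an illustrative toy with invented parameters, not the Yucca Mountain model itself.

```python
import numpy as np

def travel_time(K, gradient=0.01, porosity=0.3, dx=10.0):
    # Travel time across a 1-D column of cells: t = sum(dx / v), where the
    # average linear velocity in each cell is v = K * gradient / porosity.
    v = K * gradient / porosity
    return float(np.sum(dx / v))

rng = np.random.default_rng(42)
# 50 statistically indistinguishable replicates of a heterogeneous field
times = [travel_time(rng.lognormal(mean=0.0, sigma=1.0, size=100))
         for _ in range(50)]
# The spread among replicates quantifies the performance uncertainty
# that arises from uncertainty in site characterization.
cv = float(np.std(times) / np.mean(times))
```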

  7. Identifying the connective strength between model parameters and performance criteria

    Directory of Open Access Journals (Sweden)

    B. Guse

    2017-11-01

    Full Text Available In hydrological models, parameters are used to represent the time-invariant characteristics of catchments and to capture different aspects of hydrological response. Hence, model parameters need to be identified based on their role in controlling the hydrological behaviour. For the identification of meaningful parameter values, multiple and complementary performance criteria are used that compare modelled and measured discharge time series. The reliability of the identification of hydrologically meaningful model parameter values depends on how distinctly a model parameter can be assigned to one of the performance criteria. To investigate this, we introduce the new concept of connective strength between model parameters and performance criteria. The connective strength assesses the intensity of the interrelationship between model parameters and performance criteria in a bijective way. In our analysis of connective strength, model simulations are carried out based on a Latin hypercube sampling. Ten performance criteria including Nash–Sutcliffe efficiency (NSE), Kling–Gupta efficiency (KGE) and its three components (alpha, beta and r), as well as RSR (the ratio of the root mean square error to the standard deviation) for different segments of the flow duration curve (FDC), are calculated. With a joint analysis of two regression tree (RT) approaches, we derive how a model parameter is connected to different performance criteria. First, RTs are constructed using each performance criterion as the target variable to detect the most relevant model parameters for each performance criterion. Second, RTs are constructed using each parameter as the target variable to detect which performance criteria are impacted by changes in the values of one distinct model parameter. Based on this, appropriate performance criteria are identified for each model parameter. In this study, a high bijective connective strength between model parameters and performance criteria
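
Two of the ingredients named in this abstract are easy to sketch: a Latin hypercube sample of the parameter space, and the NSE and KGE performance criteria computed from modelled and measured discharge. The hydrological model and the regression-tree step are omitted; the formulas below follow the standard NSE and KGE definitions.

```python
import numpy as np

def latin_hypercube(n, bounds, rng):
    # One stratified draw per interval and parameter; columns are shuffled
    # independently so each parameter range is evenly covered.
    d = len(bounds)
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        rng.shuffle(u[:, j])
    lo, hi = np.array(bounds, dtype=float).T
    return lo + u * (hi - lo)

def nse(obs, sim):
    # Nash-Sutcliffe efficiency: 1 is a perfect fit.
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    # Kling-Gupta efficiency and its three components r, alpha, beta.
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = sim.std() / obs.std()   # variability ratio
    beta = sim.mean() / obs.mean()  # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2), r, alpha, beta
```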

  8. New model performance index for engineering design of control systems

    Science.gov (United States)

    1970-01-01

    Performance index includes a model representing linear control-system design specifications. Based on a geometric criterion for approximation of the model by the actual system, the index can be interpreted directly in terms of the desired system response model without actually having the model's time response.

  9. Top-Down Delivery of IoT-based Applications for Seniors Behavior Change Capturing Exploiting a Model-Driven Approach

    OpenAIRE

    Fiore, Alessandro; Caione, Adriana; Mainetti, Luca; Manco, Luigi; Vergallo, Roberto

    2018-01-01

    Developing Internet of Things (IoT) applications requires expertise and considerable skills in different fields in order to cover all the involved heterogeneous technologies, communication formats and protocols. Developers and experts ask for new solutions that speed up the prototyping of IoT applications. One of these solutions is the Web of Topics (WoX) middleware, a model-driven Cloud platform that aims to ease the development of IoT applications by introducing a strong semantic abstraction of IoT concepts. In W...

  10. Advanced Performance Modeling with Combined Passive and Active Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Dovrolis, Constantine [Georgia Inst. of Technology, Atlanta, GA (United States); Sim, Alex [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-04-15

    To improve the efficiency of resource utilization and scheduling of scientific data transfers on high-speed networks, the "Advanced Performance Modeling with combined passive and active monitoring" (APM) project investigates and models a general-purpose, reusable and expandable network performance estimation framework. The predictive estimation model and the framework will be helpful in optimizing the performance and utilization of networks as well as sharing resources with predictable performance for scientific collaborations, especially in data intensive applications. Our prediction model utilizes historical network performance information from various network activity logs as well as live streaming measurements from network peering devices. Historical network performance information is used without putting extra load on the resources by active measurement collection. Performance measurements collected by active probing are used judiciously for improving the accuracy of predictions.
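
The abstract does not spell out the project's prediction model, so as an illustrative stand-in, here is the simplest history-based estimator one could build on such logs: an exponentially weighted moving average forecast of transfer throughput, where recent samples weigh more and older ones decay geometrically.

```python
def ewma_forecast(history, alpha=0.3):
    # One-step-ahead throughput forecast from historical samples.
    # alpha controls how quickly old measurements are forgotten.
    est = history[0]
    for x in history[1:]:
        est = alpha * x + (1 - alpha) * est
    return est

# e.g. throughput samples (Gb/s) from past transfers on one path
samples = [8.1, 7.9, 8.4, 9.0, 8.6]
prediction = ewma_forecast(samples)
```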

  11. Analytic Ballistic Performance Model of Whipple Shields

    Science.gov (United States)

    Miller, J. E.; Bjorkman, M. D.; Christiansen, E. L.; Ryan, S. J.

    2015-01-01

    The dual-wall, Whipple shield is the shield of choice for lightweight, long-duration flight. The shield uses an initial sacrificial wall to initiate fragmentation and melt an impacting threat that expands over a void before hitting a subsequent shield wall of a critical component. The key parameters to this type of shield are the rear wall and its mass, which stops the debris, as well as the minimum shock wave strength generated by the threat particle impact of the sacrificial wall and the amount of room that is available for expansion. Ensuring the shock wave strength is sufficiently high to achieve large-scale fragmentation/melt of the threat particle enables the expansion of the threat and reduces the momentum flux of the debris on the rear wall. Three key factors in the shock wave strength achieved are the thickness of the sacrificial wall relative to the characteristic dimension of the impacting particle, the density and material cohesion contrast of the sacrificial wall relative to the threat particle, and the impact speed. Since it is desirable to minimize the mass of the rear wall and the sacrificial wall for launch costs, it is important to understand the effects of density contrast and impact speed. An analytic model is developed here to describe the influence of these three key factors. In addition, this paper develops a description of a fourth key parameter related to fragmentation and its role in establishing the onset of projectile expansion.

  12. Systemic modelling to support operation of the Siloe research reactor; Modelisation systeme pour l'aide a l'exploitation du reacteur de recherche Siloe

    Energy Technology Data Exchange (ETDEWEB)

    Royer, J.C.; Moulin, V.; Monge, F. [CEA Centre d'Etudes de Grenoble, 38 (France). Direction des Reacteurs Nucleaires; Baradel, C. [ITMI APTOR, 38 - Meylan (France)

    1995-12-31

    The Siloe Reactor Service (CEA/DRN/DRE/SRS), fully aware of the abilities and knowledge of its teams in the field of research reactor operation, has undertaken a knowledge-engineering project in this domain. The following aims have been defined: capitalization of knowledge about the installation in order to ensure its continuity and valorization, and elaboration of a project to support reactor operators. This article describes the actions taken by the SRS to reach these aims: realization of a technical model of the operation of the Siloe reactor, and development of a knowledge-based system to support operation. These actions, based on a knowledge-engineering methodology, SAGACE, and using industrial tools, will lead to an improvement of the safety and operation of the Siloe reactor. (authors). 13 refs., 7 figs.

  13. Prospects of geothermal resource exploitation

    International Nuclear Information System (INIS)

    Bourrelier, P.H.; Cornet, F.; Fouillac, C.

    1994-01-01

    The use of geothermal energy to generate electricity has only developed during the past 50 years, by drilling wells in aquifers close to magmas and producing either dry steam or hot water. The world's production of electricity from geothermal energy is over 6000 MWe and is still growing. The direct use of geothermal energy for major urban communities has been developed recently by exploitation of aquifers in sedimentary basins under large towns. Scaling up the extraction of heat implies the exploitation of larger and better located fields, requiring an appropriate method of extraction; the objective of present attempts in the USA, Japan and Europe is to create heat exchangers by the circulation of water between several deep wells. Two field categories are considered: the extension of classical geothermal fields beyond the aquifer areas, and areas favoured by a high geothermal gradient, fractures inducing natural permeability at large scale, and good commercial prospects (such as the Rhenan Graben). The hot dry rock concept has gained considerable interest. 1 fig., 5 tabs., 11 refs

  14. Performance of Air Pollution Models on Massively Parallel Computers

    DEFF Research Database (Denmark)

    Brown, John; Hansen, Per Christian; Wasniewski, Jerzy

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on the computers. Using a realistic large-scale model, we gain detailed insight about the performance of the three computers when used to solve large-scale scientific problems...

  15. A new measure of interpersonal exploitativeness

    Directory of Open Access Journals (Sweden)

    Amy B. Brunell

    2013-05-01

    Full Text Available Measures of exploitativeness evidence problems with validity and reliability. The present set of studies assessed a new measure (the Interpersonal Exploitativeness Scale) that defines exploitativeness in terms of reciprocity. In Studies 1 and 2, 33 items were administered to participants. Exploratory and Confirmatory Factor Analysis demonstrated that a single factor consisting of six items adequately assesses interpersonal exploitativeness. Study 3 results revealed that the Interpersonal Exploitativeness Scale was positively associated with normal narcissism, pathological narcissism, psychological entitlement, and negative reciprocity and negatively correlated with positive reciprocity. In Study 4, participants competed in a commons dilemma. Those who scored higher on the Interpersonal Exploitativeness Scale were more likely to harvest a greater share of resources over time, even while controlling for other relevant variables, such as entitlement. Together, these studies show the Interpersonal Exploitativeness Scale to be a valid and reliable measure of interpersonal exploitativeness. The authors discuss the implications of these studies.

  16. Performance Modeling of Communication Networks with Markov Chains

    CERN Document Server

    Mo, Jeonghoon

    2010-01-01

    This book is an introduction to Markov chain modeling with applications to communication networks. It begins with a general introduction to performance modeling in Chapter 1, where we introduce different performance models. We then introduce basic ideas of Markov chain modeling: the Markov property, discrete time Markov chains (DTMCs) and continuous time Markov chains (CTMCs). We also discuss how to find the steady state distributions from these Markov chains and how they can be used to compute the system performance metric. The solution methodologies include a balance equation technique, limiting probab
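
The balance-equation technique mentioned above reduces, for a discrete time Markov chain, to solving pi P = pi together with the normalization sum(pi) = 1. A minimal sketch (the two-state chain is an invented example, not from the book):

```python
import numpy as np

def steady_state(P):
    # Solve the balance equations pi P = pi together with sum(pi) = 1
    # as one overdetermined (but consistent) linear system.
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Two-state DTMC, e.g. a link alternating between idle (0) and busy (1)
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
pi = steady_state(P)   # stationary distribution
```

From pi, system performance metrics such as utilization follow directly (here the long-run fraction of busy slots is pi[1]).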

  17. Multitasking TORT Under UNICOS: Parallel Performance Models and Measurements

    International Nuclear Information System (INIS)

    Azmy, Y.Y.; Barnett, D.A.

    1999-01-01

    The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The parallel performance models were compared to measurements from applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead

  18. Multitasking TORT under UNICOS: Parallel performance models and measurements

    International Nuclear Information System (INIS)

    Barnett, A.; Azmy, Y.Y.

    1999-01-01

    The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The parallel performance models were compared to measurements from applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead

  19. Rationalising predictors of child sexual exploitation and sex-trading.

    Science.gov (United States)

    Klatt, Thimna; Cavner, Della; Egan, Vincent

    2014-02-01

    Although there is evidence for specific risk factors leading to child sexual exploitation and prostitution, these influences overlap and have rarely been examined concurrently. The present study examined case files for 175 young persons who attended a voluntary organization in Leicester, United Kingdom, which supports people who are sexually exploited or at risk of sexual exploitation. Based on the case files, the presence or absence of known risk factors for becoming a sex worker was coded. Data were analyzed using t-test, logistic regression, and smallest space analysis. Users of the voluntary organization's services who had been sexually exploited exhibited a significantly greater number of risk factors than service users who had not been victims of sexual exploitation. The logistic regression produced a significant model fit. However, of the 14 potential predictors--many of which were associated with each other--only four variables significantly predicted actual sexual exploitation: running away, poverty, drug and/or alcohol use, and having friends or family members in prostitution. Surprisingly, running away was found to significantly decrease the odds of becoming involved in sexual exploitation. Smallest space analysis of the data revealed 5 clusters of risk factors. Two of the clusters, which reflected a desperation and need construct and immature or out-of-control lifestyles, were significantly associated with sexual exploitation. Our research suggests that some risk factors (e.g. physical and emotional abuse, early delinquency, and homelessness) for becoming involved in sexual exploitation are common but are part of the problematic milieu of the individuals affected and not directly associated with sex trading itself. Our results also indicate that it is important to engage with the families and associates of young persons at risk of becoming (or remaining) a sex worker if one wants to reduce the numbers of persons who engage in this activity.

  20. SMARTS: Exploiting Temporal Locality and Parallelism through Vertical Execution

    International Nuclear Information System (INIS)

    Beckman, P.; Crotinger, J.; Karmesin, S.; Malony, A.; Oldehoeft, R.; Shende, S.; Smith, S.; Vajracharya, S.

    1999-01-01

    In the solution of large-scale numerical problems, parallel computing is becoming simultaneously more important and more difficult. The complex organization of today's multiprocessors with several memory hierarchies has forced the scientific programmer to make a choice between simple but unscalable code and scalable but extremely complex code that does not port to other architectures. This paper describes how the SMARTS runtime system and the POOMA C++ class library for high-performance scientific computing work together to exploit data parallelism in scientific applications while hiding the details of managing parallelism and data locality from the user. We present innovative algorithms, based on the macro-dataflow model, for detecting data parallelism and efficiently executing data-parallel statements on shared-memory multiprocessors. We also describe how these algorithms can be implemented on clusters of SMPs

  1. SMARTS: Exploiting Temporal Locality and Parallelism through Vertical Execution

    Energy Technology Data Exchange (ETDEWEB)

    Beckman, P.; Crotinger, J.; Karmesin, S.; Malony, A.; Oldehoeft, R.; Shende, S.; Smith, S.; Vajracharya, S.

    1999-01-04

    In the solution of large-scale numerical problems, parallel computing is becoming simultaneously more important and more difficult. The complex organization of today's multiprocessors with several memory hierarchies has forced the scientific programmer to make a choice between simple but unscalable code and scalable but extremely complex code that does not port to other architectures. This paper describes how the SMARTS runtime system and the POOMA C++ class library for high-performance scientific computing work together to exploit data parallelism in scientific applications while hiding the details of managing parallelism and data locality from the user. We present innovative algorithms, based on the macro-dataflow model, for detecting data parallelism and efficiently executing data-parallel statements on shared-memory multiprocessors. We also describe how these algorithms can be implemented on clusters of SMPs.

  2. Exploitation in International Paid Surrogacy Arrangements

    OpenAIRE

    Wilkinson, Stephen

    2015-01-01

    Abstract Many critics have suggested that international paid surrogacy is exploitative. Taking such concerns as its starting point, this article asks: (1) how defensible is the claim that international paid surrogacy is exploitative and what could be done to make it less exploitative? (2) In the light of the answer to (1), how strong is the case for prohibiting it? Exploitation could in principle be dealt with by improving surrogates' pay and conditions. However, doing so may exacerbate probl...

  3. Asymptotic performance modelling of DCF protocol with prioritized channel access

    Science.gov (United States)

    Choi, Woo-Yong

    2017-11-01

    Recently, the modification of the DCF (Distributed Coordination Function) protocol by the prioritized channel access was proposed to resolve the problem that the DCF performance worsens exponentially as more nodes exist in IEEE 802.11 wireless LANs. In this paper, an asymptotic analytical performance model is presented to analyze the MAC performance of the DCF protocol with the prioritized channel access.

  4. Switching performance of OBS network model under prefetched real traffic

    Science.gov (United States)

    Huang, Zhenhua; Xu, Du; Lei, Wen

    2005-11-01

    Optical Burst Switching (OBS) [1] is now widely considered an efficient switching technique for building the next generation optical Internet, so it is important to precisely evaluate the performance of the OBS network model. The performance of the OBS network model varies under different conditions, but the most important question is how it performs under real traffic load. In traditional simulation models, uniform traffic is usually generated by simulation software to imitate the data source of the edge node in the OBS network model, through which the performance of the OBS network is evaluated. Unfortunately, without being driven by real traffic, the traditional simulation models have several problems and their results are questionable. To deal with this problem, we present a new simulation model for analysis and performance evaluation of the OBS network, which uses prefetched IP traffic as the data source of the OBS network model. The prefetched IP traffic can be considered a real IP source for the OBS edge node, and the OBS network model has the same clock rate as a real OBS system. It is therefore easy to conclude that this model is closer to the real OBS system than the traditional ones. The simulation results also indicate that this model is more accurate for evaluating the performance of the OBS network system and that its results are closer to the actual situation.

  5. Maintenance personnel performance simulation (MAPPS): a model for predicting maintenance performance reliability in nuclear power plants

    International Nuclear Information System (INIS)

    Knee, H.E.; Krois, P.A.; Haas, P.M.; Siegel, A.I.; Ryan, T.G.

    1983-01-01

    The NRC has developed a structured, quantitative, predictive methodology in the form of a computerized simulation model for assessing maintainer task performance. The objective of the overall program is to develop, validate, and disseminate a practical, useful, and acceptable methodology for the quantitative assessment of NPP maintenance personnel reliability. The program was organized into four phases: (1) scoping study, (2) model development, (3) model evaluation, and (4) model dissemination. The program is currently nearing completion of Phase 2 - Model Development

  6. ECOPATH: Model description and evaluation of model performance

    International Nuclear Information System (INIS)

    Bergstroem, U.; Nordlinder, S.

    1996-01-01

    The model is based upon compartment theory and is run in combination with a statistical error propagation method (PRISM, Gardner et al. 1983). It is intended to be generic for application to other sites by simply changing parameter values. It was constructed especially for this scenario, but is based upon an earlier model designed for calculating relations between released amounts of radioactivity and doses to critical groups (used for Swedish regulations concerning annual reports of radioactivity released from routine operation of Swedish nuclear power plants (Bergstroem and Nordlinder, 1991)). The model handles exposure from deposition on terrestrial areas as well as deposition on lakes, starting with deposition values. 14 refs, 16 figs, 7 tabs

  7. Statistical and Machine Learning Models to Predict Programming Performance

    OpenAIRE

    Bergin, Susan

    2006-01-01

    This thesis details a longitudinal study on factors that influence introductory programming success and on the development of machine learning models to predict incoming student performance. Although numerous studies have developed models to predict programming success, the models struggled to achieve high accuracy in predicting the likely performance of incoming students. Our approach overcomes this by providing a machine learning technique, using a set of three significant...

  8. The exploitation argument against commercial surrogacy.

    Science.gov (United States)

    Wilkinson, Stephen

    2003-04-01

    This paper discusses the exploitation argument against commercial surrogacy: the claim that commercial surrogacy is morally objectionable because it is exploitative. The following questions are addressed. First, what exactly does the exploitation argument amount to? Second, is commercial surrogacy in fact exploitative? Third, if it were exploitative, would this provide a sufficient reason to prohibit (or otherwise legislatively discourage) it? The focus throughout is on the exploitation of paid surrogates, although it is noted that other parties (e.g. 'commissioning parents') may also be the victims of exploitation. It is argued that there are good reasons for believing that commercial surrogacy is often exploitative. However, even if we accept this, the exploitation argument for prohibiting (or otherwise legislatively discouraging) commercial surrogacy remains quite weak. One reason for this is that prohibition may well 'backfire' and lead to potential surrogates having to do other things that are more exploitative and/or more harmful than paid surrogacy. It is concluded therefore that those who oppose exploitation should (rather than attempting to stop particular practices like commercial surrogacy) concentrate on: (a) improving the conditions under which paid surrogates 'work'; and (b) changing the background conditions (in particular, the unequal distribution of power and wealth) which generate exploitative relationships.

  9. Indonesian Private University Lecturer Performance Improvement Model to Improve a Sustainable Organization Performance

    Science.gov (United States)

    Suryaman

    2018-01-01

    Lecturer performance affects the quality and carrying capacity of the sustainability of an organization, in this case the university. Many models have been developed to measure the performance of teachers, but few discuss the influence of lecturer performance itself on the sustainability of an organization. This study was conducted in…

  10. Exploiting for medical and biological applications

    Science.gov (United States)

    Giano, Michael C.

    Biotherapeutics are an emerging class of drug composed of molecules ranging in size from peptides to large proteins. Due to their poor stability and mucosal membrane permeability, biotherapeutics are administered by a parenteral method (i.e., syringe, intravenous or intramuscular). Therapeutics delivered systemically often experience short half-lives, while local administration may involve invasive surgical procedures and suffer from poor retention at the site of application. To compensate, the patient receives frequent doses of highly concentrated therapeutic. Unfortunately, the off-target side effects and discomfort associated with multiple injections result in poor patient compliance. Therefore, new delivery methods which can improve therapeutic retention, reduce the frequency of administration and may aid in decreasing the off-target side effects are a necessity. Hydrogels are a class of biomaterials that are gaining interest for tissue engineering and drug delivery applications. Hydrogel materials are defined as porous, 3-dimensional networks that are primarily composed of water. Generally, they are mechanically rigid, cytocompatible and easily chemically functionalized. Collectively, these properties make hydrogels fantastic candidates to perform as drug delivery depots. Current hydrogel delivery systems physically entrap the target therapeutic, which is then subsequently released over time at the site of administration. The swelling and degradation of the material affect the diffusion of the therapy from the hydrogel, and therefore should be controlled. Although these strategies provide some regulation over therapeutic release, full control of the delivery is not achieved. Newer approaches are focused on designing hydrogels that exploit known interactions, covalently attach the therapy or respond to an external stimulus in an effort to gain improved control over the therapy's release.
Unfortunately, the biotherapeutic is typically required to be chemically

  11. Protocol to Exploit Waiting Resources for UASNs

    Directory of Open Access Journals (Sweden)

    Li-Ling Hung

    2016-03-01

    Full Text Available The transmission speed of acoustic waves in water is much slower than that of radio waves in terrestrial wireless sensor networks. Thus, the propagation delay in underwater acoustic sensor networks (UASNs) is much greater. Longer propagation delay leads to complicated communication and collision problems. To solve collision problems, some studies have proposed waiting mechanisms; however, long waiting mechanisms result in low bandwidth utilization. To improve throughput, this study proposes a slotted medium access control protocol to enhance bandwidth utilization in UASNs. The proposed mechanism increases communication by exploiting temporal and spatial resources that are typically idle, in order to protect communication against interference. By reducing wait time, network performance and energy consumption can be improved. A performance evaluation demonstrates that when data packets are large or sensor deployment is dense, the proposed protocol consumes less energy and achieves higher throughput than existing protocols.
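
The scale of the delay gap described above is easy to quantify with a back-of-envelope sketch (not part of the cited protocol): sound travels at a nominal 1500 m/s in sea water, versus roughly the speed of light for radio, so a 1 km acoustic hop carries about five orders of magnitude more propagation delay than the same terrestrial radio hop.

```python
SPEED_SOUND_WATER = 1500.0   # m/s, nominal value for sea water
SPEED_RADIO = 3.0e8          # m/s, radio waves, approximately light speed

def propagation_delay(distance_m, speed_m_s):
    # One-way propagation delay over a link of the given length.
    return distance_m / speed_m_s

# A 1 km acoustic hop takes about two-thirds of a second;
# the equivalent radio hop takes a few microseconds.
acoustic = propagation_delay(1000.0, SPEED_SOUND_WATER)
radio = propagation_delay(1000.0, SPEED_RADIO)
```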

  12. A Spectral Evaluation of Models Performances in Mediterranean Oak Woodlands

    Science.gov (United States)

    Vargas, R.; Baldocchi, D. D.; Abramowitz, G.; Carrara, A.; Correia, A.; Kobayashi, H.; Papale, D.; Pearson, D.; Pereira, J.; Piao, S.; Rambal, S.; Sonnentag, O.

    2009-12-01

    Ecosystem processes are influenced by climatic trends at multiple temporal scales, including diel patterns and other mid-term climatic modes such as interannual and seasonal variability. Because interactions between biophysical components of ecosystem processes are complex, it is important to test how models perform in the frequency domain (e.g. hours, days, weeks, months, years) and the time domain (i.e. day of the year), in addition to traditional tests of annual or monthly sums. Here we present a spectral evaluation, using wavelet time series analysis, of model performance in seven Mediterranean Oak Woodlands that encompass three deciduous and four evergreen sites. We tested the performance of five models (CABLE, ORCHIDEE, BEPS, Biome-BGC, and JULES) on measured variables of gross primary production (GPP) and evapotranspiration (ET). In general, model performance fails at intermediate periods (e.g. weeks to months), likely because these models do not represent the water pulse dynamics that influence GPP and ET in these Mediterranean systems. To improve the performance of a model it is critical to first identify where and when the model fails. Only by identifying where a model fails can we improve its performance, use it as a prognostic tool, and generate further hypotheses that can be tested by new experiments and measurements.
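
The idea of evaluating a model in the frequency domain can be sketched with a plain FFT instead of wavelets (a simplification: wavelets also localize failures in time, which a global FFT cannot). The helper below sums spectral power over period bands; comparing band powers of modelled versus measured GPP series would reveal at which scales a model fails. The series and bands here are synthetic.

```python
import numpy as np

def band_power(x, dt, bands):
    # Spectral power of series x summed over period bands,
    # where each band is (min_period, max_period) in units of dt.
    freqs = np.fft.rfftfreq(len(x), d=dt)
    power = np.abs(np.fft.rfft(x - x.mean())) ** 2
    periods = np.full_like(freqs, np.inf)   # DC bin maps to infinite period
    periods[1:] = 1.0 / freqs[1:]
    return np.array([power[(periods >= lo) & (periods <= hi)].sum()
                     for lo, hi in bands])

# Daily GPP-like series dominated by a 64-day oscillation
t = np.arange(1024)
x = np.sin(2 * np.pi * t / 64.0)
weekly, seasonal = band_power(x, dt=1.0, bands=[(5, 10), (50, 80)])
```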

  13. Automated UAV-based video exploitation using service oriented architecture framework

    Science.gov (United States)

    Se, Stephen; Nadeau, Christian; Wood, Scott

    2011-05-01

    Airborne surveillance and reconnaissance are essential for successful military missions. Such capabilities are critical for troop protection, situational awareness, mission planning, damage assessment, and others. Unmanned Aerial Vehicles (UAVs) gather huge amounts of video data but it is extremely labour-intensive for operators to analyze hours and hours of received data. At MDA, we have developed a suite of tools that can process the UAV video data automatically, including mosaicking, change detection and 3D reconstruction, which have been integrated within a standard GIS framework. In addition, the mosaicking and 3D reconstruction tools have also been integrated in a Service Oriented Architecture (SOA) framework. The Visualization and Exploitation Workstation (VIEW) integrates 2D and 3D visualization, processing, and analysis capabilities developed for UAV video exploitation. Visualization capabilities are supported through a thick-client Graphical User Interface (GUI), which allows visualization of 2D imagery, video, and 3D models. The GUI interacts with the VIEW server, which provides video mosaicking and 3D reconstruction exploitation services through the SOA framework. The SOA framework allows multiple users to perform video exploitation by running a GUI client on the operator's computer and invoking the video exploitation functionalities residing on the server. This allows the exploitation services to be upgraded easily and allows the intensive video processing to run on powerful workstations. MDA provides UAV services to the Canadian and Australian forces in Afghanistan with the Heron, a Medium Altitude Long Endurance (MALE) UAV system. On-going flight operations service provides important intelligence, surveillance, and reconnaissance information to commanders and front-line soldiers.

  14. FARMLAND: Model description and evaluation of model performance

    International Nuclear Information System (INIS)

    Attwood, C.; Fayers, C.; Mayall, A.; Brown, J.; Simmonds, J.R.

    1996-01-01

    The FARMLAND model was originally developed for use in connection with continuous, routine releases of radionuclides, but because it has many time-dependent features it has been developed further for a single accidental release. The most recent version of FARMLAND is flexible and can be used to predict activity concentrations in food as a function of time after both accidental and routine releases of radionuclides. The effect of deposition at different times of the year can be taken into account. FARMLAND contains a suite of models which simulate radionuclide transfer through different parts of the foodchain. The models can be used in different combinations and offer the flexibility to assess a variety of radiological situations. The main foods considered are green vegetables, grain products, root vegetables, milk, meat and offal from cattle, and meat and offal from sheep. A large variety of elements can be considered although the degree of complexity with which some are modelled is greater than others; isotopes of caesium, strontium and iodine are treated in greatest detail. 22 refs, 12 figs, 10 tabs

  15. FARMLAND: Model description and evaluation of model performance

    Energy Technology Data Exchange (ETDEWEB)

    Attwood, C; Fayers, C; Mayall, A; Brown, J; Simmonds, J R [National Radiological Protection Board, Chilton (United Kingdom)

    1996-09-01

    The FARMLAND model was originally developed for use in connection with continuous, routine releases of radionuclides, but because it has many time-dependent features it has been developed further for a single accidental release. The most recent version of FARMLAND is flexible and can be used to predict activity concentrations in food as a function of time after both accidental and routine releases of radionuclides. The effect of deposition at different times of the year can be taken into account. FARMLAND contains a suite of models which simulate radionuclide transfer through different parts of the foodchain. The models can be used in different combinations and offer the flexibility to assess a variety of radiological situations. The main foods considered are green vegetables, grain products, root vegetables, milk, meat and offal from cattle, and meat and offal from sheep. A large variety of elements can be considered although the degree of complexity with which some are modelled is greater than others; isotopes of caesium, strontium and iodine are treated in greatest detail. 22 refs, 12 figs, 10 tabs.

  16. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are always faced with the problem of low prediction accuracy, which leads to costly maintenance. Although many researchers have developed performance prediction models, the accuracy of prediction has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Three models, a multivariate nonlinear regression (MNLR) model, an artificial neural network (ANN) model, and a Markov chain (MC) model, are then tested and compared using a set of actual pavement survey data taken on interstate highways with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems a good tool for pavement performance prediction when data is limited, but it is based on visual inspections and not explicitly related to quantitative physical parameters. This paper then suggests that the further direction for developing performance prediction models is to combine the advantages and disadvantages of different models to obtain better accuracy.
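    The Markov-chain approach mentioned above can be sketched in a few lines: a transition matrix propagates a pavement-condition distribution forward in time, one inspection interval per step. The four condition states and the transition probabilities below are illustrative placeholders, not values calibrated from the paper's survey data.

```python
import numpy as np

# Hypothetical 4-state condition scale: 0 = good ... 3 = failed.
# Transition probabilities are illustrative, not calibrated values.
P = np.array([
    [0.85, 0.15, 0.00, 0.00],
    [0.00, 0.80, 0.20, 0.00],
    [0.00, 0.00, 0.75, 0.25],
    [0.00, 0.00, 0.00, 1.00],  # the failed state is absorbing
])

def predict(state0, years):
    """Propagate the condition distribution `years` steps forward."""
    s = np.asarray(state0, dtype=float)
    for _ in range(years):
        s = s @ P
    return s

# Start from a newly built (all-good) pavement and look 10 years ahead.
s10 = predict([1.0, 0.0, 0.0, 0.0], 10)
print(s10)
```

    The appeal noted in the abstract is visible here: once the transition matrix is estimated from inspection records, prediction is a single matrix power, with no physical parameters required.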

  17. Atomic scale simulations for improved CRUD and fuel performance modeling

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Anders David Ragnar [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Cooper, Michael William Donald [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-01-06

    A more mechanistic description of fuel performance codes can be achieved by deriving models and parameters from atomistic scale simulations rather than fitting models empirically to experimental data. The same argument applies to modeling deposition of corrosion products on fuel rods (CRUD). Here are some results from publications in 2016 carried out using the CASL allocation at LANL.

  18. Modelling of Box Type Solar Cooker Performance in a Tropical ...

    African Journals Online (AJOL)

    Thermal performance model of box type solar cooker with loaded water is presented. The model was developed using the method of Funk to estimate cooking power in terms of climatic and design parameters for box type solar cooker in a tropical environment. Coefficients for each term used in the model were determined ...

  19. The Exploitation of Evolving Resources

    CERN Document Server

    McGlade, Jacqueline; Law, Richard

    1993-01-01

    The impact of man on the biosphere is profound. Quite apart from our capacity to destroy natural ecosystems and to drive species to extinction, we mould the evolution of the survivors by the selection pressures we apply to them. This has implications for the continued health of our natural biological resources and for the way in which we seek to optimise yield from those resources. Of these biological resources, fish stocks are particularly important to mankind as a source of protein. On a global basis, fish stocks provide the major source of protein for human consumption from natural ecosystems, amounting to some seventy million tonnes in 1970. Although fisheries management has been extensively developed over the last century, it has not hitherto considered the evolutionary consequences of fishing activity. While this omission may not have been serious in the past, the ever increasing intensity of exploitation and the deteriorating health of fish stocks has generated an urgent need for a better understanding...

  20. Swedish mines. Underground exploitation methods

    International Nuclear Information System (INIS)

    Paucard, A.

    1960-01-01

    Between 1949 and 1957, 10 engineers of the Mining research and exploitation department of the CEA visited 17 Swedish mines during 5 field trips. This paper presents a compilation of the information gathered during these field trips concerning the different underground mining techniques used in Swedish iron mines: mining with backfilling (Central Sweden and Boliden mines); mining without backfilling (mines of the polar circle area). The following techniques are described successively: pillar drawing and backfilled slices (Ammeberg, Falun, Garpenberg, Boliden group), sub-level pillar drawing (Grangesberg, Bloettberget, Haeksberg), empty room and sub-level pillar drawing (Bodas, Haksberg, Stripa, Bastkarn), storage chamber pillar drawing (Bodas, Haeksberg, Bastkarn), and pillar drawing by block caving (Idkerberget). Reprint of a paper published in Revue de l'Industrie Minerale, vol. 41, no. 12, 1959 [fr]

  1. Exploiting social evolution in biofilms

    DEFF Research Database (Denmark)

    Boyle, Kerry E; Heilmann, Silja; van Ditmarsch, Dave

    2013-01-01

    Bacteria are highly social organisms that communicate via signaling molecules, move collectively over surfaces and make biofilm communities. Nonetheless, our main line of defense against pathogenic bacteria consists of antibiotics, drugs that target individual-level traits of bacterial cells and thus, regrettably, select for resistance against their own action. A possible solution lies in targeting the mechanisms by which bacteria interact with each other within biofilms. The emerging field of microbial social evolution combines molecular microbiology with evolutionary theory to dissect the molecular mechanisms and the evolutionary pressures underpinning bacterial sociality. This exciting new research can ultimately lead to new therapies against biofilm infections that exploit evolutionary cheating or the trade-off between biofilm formation and dispersal.

  2. Energy for lunar resource exploitation

    Science.gov (United States)

    Glaser, Peter E.

    1992-02-01

    Humanity stands at the threshold of exploiting the known lunar resources that have opened up with the access to space. America's role in the future exploitation of space, and specifically of lunar resources, may well determine the level of achievement in technology development and global economic competition. Space activities during the coming decades will significantly influence the events on Earth. The 'shifting of history's tectonic plates' is a process that will be hastened by the increasingly insistent demands for higher living standards of the exponentially growing global population. Key to the achievement of a peaceful world in the 21st century will be the development of a mix of energy resources at a societally acceptable and affordable cost within a realistic planning horizon. This must be the theme for the globally applicable energy sources that are compatible with the Earth's ecology. It is in this context that lunar resources development should be a primary goal for science missions to the Moon, and for establishing an expanding human presence. The economic viability and commercial business potential of mining, extracting, manufacturing, and transporting lunar resource based materials to Earth, Earth orbits, and to undertake macroengineering projects on the Moon remain to be demonstrated. These extensive activities will be supportive of the realization of the potential of space energy sources for use on Earth. These may include generating electricity for use on Earth based on beaming power from Earth orbits and from the Moon to the Earth, and for the production of helium-3 as a fuel for advanced fusion reactors.

  3. Modelling Client Satisfaction Levels: The Impact of Contractor Performance

    Directory of Open Access Journals (Sweden)

    Robby Soetanto

    2012-11-01

    The performance of contractors is known to be a key determinant of client satisfaction. Here, using factor analysis, clients' satisfaction is defined in several dimensions. Based on clients' assessment of contractor performance, a number of satisfaction models developed using the multiple regression (MR) technique are presented. The models identify a range of variables encompassing contractor performance, project performance and respondent (i.e. client) attributes as useful predictors of satisfaction levels. Contractor performance attributes were found to be of utmost importance, indicating that client satisfaction levels are mainly dependent on the performance of the contractor. Furthermore, findings suggest that subjectivity is to some extent prevalent in clients' performance assessment. The models demonstrate accurate and reliable predictive power as confirmed by validation tests. Contractors could use the models to help improve their performance, leading to more satisfied clients. This would also promote the development of harmonious working relationships within the construction project coalition.

  4. Uncovering Indicators of Commercial Sexual Exploitation.

    Science.gov (United States)

    Bounds, Dawn; Delaney, Kathleen R; Julion, Wrenetha; Breitenstein, Susan

    2017-07-01

    It is estimated that annually 100,000 to 300,000 youth are at risk for sex trafficking: a commercial sex act induced by force, fraud, or coercion, or any such act where the person induced to perform it is younger than 18 years of age. Increasingly, such transactions are occurring online via Internet-based sites that serve the commercial sex industry. Commercial sex transactions involving trafficking are illegal; thus, Internet discussions between those involved must be veiled. Even so, transactions around sex trafficking do occur. Within these transactions are innuendos that provide one avenue for detecting potential activity. The purpose of this study is to identify linguistic indicators of potential commercial sexual exploitation within the online comments of men posted on an Internet site. Six hundred sixty-six posts from five Midwest cities and 363 unique members were analyzed via content analysis. Three main indicators were found: the presence of youth or desire for youthfulness, presence of pimps, and awareness of vulnerability. These findings begin a much-needed dialogue on uncovering online risks of commercial sexual exploitation and support the need for further research on Internet indicators of sex trafficking.

  5. Performance modeling of neighbor discovery in proactive routing protocols

    Directory of Open Access Journals (Sweden)

    Andres Medina

    2011-07-01

    It is well known that neighbor discovery is a critical component of proactive routing protocols in wireless ad hoc networks. However there is no formal study on the performance of proposed neighbor discovery mechanisms. This paper provides a detailed model of key performance metrics of neighbor discovery algorithms, such as node degree and the distribution of the distance to symmetric neighbors. The model accounts for the dynamics of neighbor discovery as well as node density, mobility, radio and interference. The paper demonstrates a method for applying these models to the evaluation of global network metrics. In particular, it describes a model of network connectivity. Validation of the models shows that the degree estimate agrees, within 5% error, with simulations for the considered scenarios. The work presented in this paper serves as a basis for the performance evaluation of remaining performance metrics of routing protocols, vital for large scale deployment of ad hoc networks.
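    For uniformly placed static nodes, the node-degree metric discussed above has a simple closed form under a unit-disk radio model: the expected degree is the node density times the coverage area, ρπr². A minimal sketch follows (wrap-around distances are used to sidestep boundary effects; this is an assumption for the check, not the paper's full model with mobility and interference):

```python
import numpy as np

rng = np.random.default_rng(0)

L, N, r = 1000.0, 1000, 100.0          # area side, node count, radio range (assumed units)
pts = rng.uniform(0, L, size=(N, 2))

# Toroidal (wrap-around) coordinate differences avoid edge truncation of disks.
d = np.abs(pts[:, None, :] - pts[None, :, :])
d = np.minimum(d, L - d)
dist = np.hypot(d[..., 0], d[..., 1])

deg = (dist < r).sum(axis=1) - 1       # neighbors within range, excluding self
mean_deg = deg.mean()
analytic = (N / L**2) * np.pi * r**2   # expected degree: rho * pi * r^2

print(mean_deg, analytic)
```

    The simulated mean degree should sit close to the ρπr² prediction, mirroring the within-5% agreement the abstract reports for its (richer) model.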

  6. Conceptual adsorption models and open issues pertaining to performance assessment

    International Nuclear Information System (INIS)

    Serne, R.J.

    1992-01-01

    Recently several articles have been published that question the appropriateness of the distribution coefficient, Rd, concept to quantify radionuclide migration. Several distinct issues are raised surrounding the modeling of nuclide retardation. The first section defines adsorption terminology and discusses various adsorption processes. The next section describes five commonly used adsorption conceptual models, specifically emphasizing which attributes affecting adsorption are explicitly accommodated in each model. I also review efforts to incorporate each adsorption model into performance assessment transport computer codes. The five adsorption conceptual models are (1) the constant Rd model, (2) the parametric Rd model, (3) isotherm adsorption models, (4) mass action adsorption models, and (5) surface-complexation with electrostatics models. The final section discusses the adequacy of the distribution ratio concept, the adequacy of transport calculations that rely on constant retardation factors and the status of incorporating sophisticated adsorption models into transport codes. 86 refs., 1 fig., 1 tab

  7. Conceptual adsorption models and open issues pertaining to performance assessment

    International Nuclear Information System (INIS)

    Serne, R.J.

    1991-10-01

    Recently several articles have been published that question the appropriateness of the distribution coefficient, Rd, concept to quantify radionuclide migration. Several distinct issues are raised by various critics. In this paper I provide some perspective on issues surrounding the modeling of nuclide retardation. The first section defines adsorption terminology and discusses various adsorption processes. The next section describes five commonly used adsorption conceptual models, specifically emphasizing which attributes affecting adsorption are explicitly accommodated in each model. I also review efforts to incorporate each adsorption model into performance assessment transport computer codes. The five adsorption conceptual models are (1) the constant Rd model, (2) the parametric Rd model, (3) isotherm adsorption models, (4) mass-action adsorption models, and (5) surface-complexation with electrostatics models. The final section discusses the adequacy of the distribution ratio concept, the adequacy of transport calculations that rely on constant retardation factors and the status of incorporating sophisticated adsorption models into transport codes

  8. On species preservation and Non-Cooperative Exploiters

    DEFF Research Database (Denmark)

    Kronbak, Lone Grønbæk; Lindroos, Marko

    cases where several non-cooperative exploiters are involved in mixed fisheries. This paper targets biodiversity preservation by setting up a two-species model with the aim of ensuring that both species survive harvesting by exploiters adopting non-cooperative behaviour. The model starts out as a multi-species model without biological dependency and is then modified to also include biological dependency. We contribute to the literature by analytically finding the limits on the number of players preserving both species, including the conditions to be satisfied. For visual purposes we simulate a two species

  9. Impact of reactive settler models on simulated WWTP performance

    DEFF Research Database (Denmark)

    Gernaey, Krist; Jeppsson, Ulf; Batstone, Damien J.

    2006-01-01

    for an ASM1 case study. Simulations with a whole plant model including the non-reactive Takacs settler model are used as a reference, and are compared to simulation results considering two reactive settler models. The first is a return sludge model block removing oxygen and a user-defined fraction of nitrate, combined with a non-reactive Takacs settler. The second is a fully reactive ASM1 Takacs settler model. Simulations with the ASM1 reactive settler model predicted a 15.3% and 7.4% improvement of the simulated N removal performance, for constant (steady-state) and dynamic influent conditions respectively. The oxygen/nitrate return sludge model block predicts a 10% improvement of N removal performance under dynamic conditions, and might be the better modelling option for ASM1 plants: it is computationally more efficient and it will not overrate the importance of decay processes in the settler.

  10. Dissipative environment may improve the quantum annealing performances of the ferromagnetic p -spin model

    Science.gov (United States)

    Passarelli, G.; De Filippis, G.; Cataudella, V.; Lucignano, P.

    2018-02-01

    We investigate the quantum annealing of the ferromagnetic p -spin model in a dissipative environment (p =5 and p =7 ). This model, in the large-p limit, codifies Grover's algorithm for searching in an unsorted database [L. K. Grover, Proceedings of the 28th Annual ACM Symposium on Theory of Computing (ACM, New York, 1996), pp. 212-219]. The dissipative environment is described by a phonon bath in thermal equilibrium at finite temperature. The dynamics is studied in the framework of a Lindblad master equation for the reduced density matrix describing only the spins. Exploiting the symmetries of our model Hamiltonian, we can describe many spins and extrapolate expected trends for large N and p . While at weak system-bath coupling the dissipative environment has detrimental effects on the annealing results, we show that in the intermediate-coupling regime, the phonon bath seems to speed up the annealing at low temperatures. This improvement in the performance is likely not due to thermal fluctuation but rather arises from a correlated spin-bath state and persists even at zero temperature. This result may pave the way to a new scenario in which, by appropriately engineering the system-bath coupling, one may optimize quantum annealing performances below either the purely quantum or the classical limit.

  11. Channel modeling and performance evaluation of FSO communication systems in fog

    KAUST Repository

    Esmail, Maged Abdullah

    2016-07-01

    Free space optical (FSO) communication has become more exciting during the last decade. It has unregulated spectrum with a huge capacity compared to its radio frequency (RF) counterpart. Although FSO has many applications that cover indoor and outdoor environments, its widespread adoption is hampered by weather effects. Fog is classified as an extreme weather impairment that may cause link drop. Foggy channel modeling and characterization are necessary to analyze the system performance. In this paper, we first address the statistical behavior of the foggy channel based on a set of literature experimental data and develop a probability distribution function (PDF) model for fog attenuation. We then exploit our PDF model to derive closed-form expressions and evaluate the system performance theoretically and numerically, in terms of average signal-to-noise ratio (SNR) and outage probability. The results show that for a 10^-3 outage probability and 22 dBm transmitted power, the FSO system can work over 80 m, 160 m, 310 m, and 460 m link lengths under dense, thick, moderate, and light fog, respectively. Increasing the transmitted power will have a high impact when the fog density is low. However, under very dense fog, it has almost no effect. © 2016 IEEE.
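    A back-of-the-envelope version of the link-length analysis above is a Beer-Lambert link budget: fog attenuates the signal by a fixed number of dB per km, and the link closes as long as margin remains. The attenuation coefficients and receiver sensitivity below are illustrative assumptions, not the paper's PDF-based channel model or its measured values.

```python
# Illustrative fog attenuation coefficients (dB/km); real values depend on
# visibility and wavelength, so treat these as placeholder assumptions.
ALPHA_DB_PER_KM = {"light": 20.0, "moderate": 60.0, "thick": 150.0, "dense": 340.0}

def max_link_length_m(pt_dbm, sensitivity_dbm, alpha_db_per_km, fixed_losses_db=0.0):
    """Longest link that still closes the budget under Beer-Lambert fog loss."""
    margin_db = pt_dbm - sensitivity_dbm - fixed_losses_db
    return 1000.0 * margin_db / alpha_db_per_km

# 22 dBm transmitter, hypothetical -30 dBm receiver sensitivity.
for fog, alpha in ALPHA_DB_PER_KM.items():
    print(fog, round(max_link_length_m(22.0, -30.0, alpha), 1), "m")
```

    The qualitative trend in the abstract falls out of the formula: each extra dB of transmit power buys 1/α km of range, which is substantial in light fog and negligible when α is in the hundreds of dB/km.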

  12. Channel modeling and performance evaluation of FSO communication systems in fog

    KAUST Repository

    Esmail, Maged Abdullah; Fathallah, Habib; Alouini, Mohamed-Slim

    2016-01-01

    Free space optical (FSO) communication has become more exciting during the last decade. It has unregulated spectrum with a huge capacity compared to its radio frequency (RF) counterpart. Although FSO has many applications that cover indoor and outdoor environments, its widespread adoption is hampered by weather effects. Fog is classified as an extreme weather impairment that may cause link drop. Foggy channel modeling and characterization are necessary to analyze the system performance. In this paper, we first address the statistical behavior of the foggy channel based on a set of literature experimental data and develop a probability distribution function (PDF) model for fog attenuation. We then exploit our PDF model to derive closed-form expressions and evaluate the system performance theoretically and numerically, in terms of average signal-to-noise ratio (SNR) and outage probability. The results show that for a 10^-3 outage probability and 22 dBm transmitted power, the FSO system can work over 80 m, 160 m, 310 m, and 460 m link lengths under dense, thick, moderate, and light fog, respectively. Increasing the transmitted power will have a high impact when the fog density is low. However, under very dense fog, it has almost no effect. © 2016 IEEE.

  13. Planetary Suit Hip Bearing Model for Predicting Design vs. Performance

    Science.gov (United States)

    Cowley, Matthew S.; Margerum, Sarah; Harvil, Lauren; Rajulu, Sudhakar

    2011-01-01

    Designing a planetary suit is very complex and often requires difficult trade-offs between performance, cost, mass, and system complexity. In order to verifying that new suit designs meet requirements, full prototypes must eventually be built and tested with human subjects. Using computer models early in the design phase of new hardware development can be advantageous, allowing virtual prototyping to take place. Having easily modifiable models of the suit hard sections may reduce the time it takes to make changes to the hardware designs and then to understand their impact on suit and human performance. A virtual design environment gives designers the ability to think outside the box and exhaust design possibilities before building and testing physical prototypes with human subjects. Reductions in prototyping and testing may eventually reduce development costs. This study is an attempt to develop computer models of the hard components of the suit with known physical characteristics, supplemented with human subject performance data. Objectives: The primary objective was to develop an articulating solid model of the Mark III hip bearings to be used for evaluating suit design performance of the hip joint. Methods: Solid models of a planetary prototype (Mark III) suit s hip bearings and brief section were reverse-engineered from the prototype. The performance of the models was then compared by evaluating the mobility performance differences between the nominal hardware configuration and hardware modifications. This was accomplished by gathering data from specific suited tasks. Subjects performed maximum flexion and abduction tasks while in a nominal suit bearing configuration and in three off-nominal configurations. Performance data for the hip were recorded using state-of-the-art motion capture technology. Results: The results demonstrate that solid models of planetary suit hard segments for use as a performance design tool is feasible. From a general trend perspective ...

  14. Photovoltaic Reliability Performance Model v 2.0

    Energy Technology Data Exchange (ETDEWEB)

    2016-12-16

    PV-RPM is intended to address more “real world” situations by coupling a photovoltaic system performance model with a reliability model so that inverters, modules, combiner boxes, etc. can experience failures and be repaired (or left unrepaired). The model can also include other effects, such as module output degradation over time or disruptions such as electrical grid outages. In addition, PV-RPM is a dynamic probabilistic model that can be used to run many realizations (i.e., possible future outcomes) of a system’s performance using probability distributions to represent uncertain parameter inputs.
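    One realization loop of the kind PV-RPM runs can be sketched as follows: sample exponential times to failure for a component, accumulate repair downtime over the analysis horizon, and average availability over many realizations. The MTBF, repair time, and horizon below are hypothetical placeholders, not PV-RPM parameters or distributions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical single-inverter system: exponential time-to-failure,
# fixed repair time. All parameters are illustrative assumptions.
MTBF_H, REPAIR_H, HORIZON_H = 40_000.0, 336.0, 20 * 8760.0

def simulate_availability(n_runs=200):
    avail = []
    for _ in range(n_runs):
        t, down = 0.0, 0.0
        while t < HORIZON_H:
            t += rng.exponential(MTBF_H)        # run until the next failure
            if t >= HORIZON_H:
                break
            down += min(REPAIR_H, HORIZON_H - t)
            t += REPAIR_H                       # offline while being repaired
        avail.append(1.0 - down / HORIZON_H)
    return float(np.mean(avail))

print(simulate_availability())
```

    Because each realization is one "possible future outcome", the spread across runs, not just the mean, is what a probabilistic model like PV-RPM reports.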

  15. Data modelling and performance of data base systems

    International Nuclear Information System (INIS)

    Rossiter, B.N.

    1984-01-01

    The three main methods of data modelling, hierarchical, network, and relational are described together with their advantages and disadvantages. The hierarchical model has strictly limited applicability, but the other two are of general use, although the network model in many respects defines a storage structure whilst the relational model defines a logical structure. Because of this, network systems are more difficult to use than relational systems but are easier to tune to obtain efficient performance. More advanced models have been developed to capture more semantic detail, and two of these RM/T and the role model are discussed. (orig.)

  16. Models used to assess the performance of photovoltaic systems.

    Energy Technology Data Exchange (ETDEWEB)

    Stein, Joshua S.; Klise, Geoffrey T.

    2009-12-01

    This report documents the various photovoltaic (PV) performance models and software developed and utilized by researchers at Sandia National Laboratories (SNL) in support of the Photovoltaics and Grid Integration Department. In addition to PV performance models, hybrid system and battery storage models are discussed. A hybrid system using other distributed sources and energy storage can help reduce the variability inherent in PV generation, and due to the complexity of combining multiple generation sources and system loads, these models are invaluable for system design and optimization. Energy storage plays an important role in reducing PV intermittency and battery storage models are used to understand the best configurations and technologies to store PV generated electricity. Other researchers' models used by SNL are discussed, including some widely known models that incorporate algorithms developed at SNL. There are other models included in the discussion that are not used by or were not adopted from SNL research but may provide some benefit to researchers working on PV array performance, hybrid system models and energy storage. The paper is organized into three sections to describe the different software models as applied to photovoltaic performance, hybrid systems, and battery storage. For each model, there is a description which includes where to find the model, whether it is currently maintained and any references that may be available. Modeling improvements underway at SNL include quantifying the uncertainty of individual system components, the overall uncertainty in modeled vs. measured results, and modeling large PV systems. SNL is also conducting research into the overall reliability of PV systems.

  17. Theoretical performance model for single image depth from defocus.

    Science.gov (United States)

    Trouvé-Peloux, Pauline; Champagnat, Frédéric; Le Besnerais, Guy; Idier, Jérôme

    2014-12-01

    In this paper we present a performance model for depth estimation using single image depth from defocus (SIDFD). Our model is based on an original expression of the Cramér-Rao bound (CRB) in this context. We show that this model is consistent with the expected behavior of SIDFD. We then study the influence on the performance of the optical parameters of a conventional camera such as the focal length, the aperture, and the position of the in-focus plane (IFP). We derive an approximate analytical expression of the CRB away from the IFP, and we propose an interpretation of the SIDFD performance in this domain. Finally, we illustrate the predictive capacity of our performance model on experimental data comparing several settings of a consumer camera.
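    For a scalar parameter observed through a known response with additive white Gaussian noise, the CRB machinery used above reduces to CRB = σ_n² / Σ_i (∂h(x_i; θ)/∂θ)². A numerical sketch follows, with a toy Gaussian blur profile standing in for the defocus model; the profile and all parameter values are illustrative, not the paper's camera model.

```python
import numpy as np

def crb_scalar(model, theta, xs, sigma_n, eps=1e-6):
    """Numerical CRB for a scalar parameter under additive Gaussian noise:
    CRB = sigma_n^2 / sum_i (d model(x_i; theta) / d theta)^2."""
    d = (model(xs, theta + eps) - model(xs, theta - eps)) / (2 * eps)
    fisher = np.sum(d**2) / sigma_n**2
    return 1.0 / fisher

# Toy blur profile whose width plays the role of the defocus parameter.
blur = lambda x, s: np.exp(-x**2 / (2 * s**2))

xs = np.linspace(-5, 5, 101)
print(crb_scalar(blur, 1.0, xs, sigma_n=0.05))
```

    The bound's dependence on optical settings enters through the sensitivity term ∂h/∂θ: settings that make the image change faster with defocus lower the CRB, which is the behavior the paper studies for focal length, aperture, and in-focus-plane position.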

  18. Modeling the Performance of Fast Mulipole Method on HPC platforms

    KAUST Repository

    Ibeid, Huda

    2012-01-01

    In this thesis, we discuss the challenges for FMM on current parallel computers and future exascale architectures. Furthermore, we develop a novel performance model for FMM. Our ultimate aim of this thesis ...

  19. Integrated Main Propulsion System Performance Reconstruction Process/Models

    Science.gov (United States)

    Lopez, Eduardo; Elliott, Katie; Snell, Steven; Evans, Michael

    2013-01-01

    The Integrated Main Propulsion System (MPS) Performance Reconstruction process provides the MPS post-flight data files needed for postflight reporting to the project integration management and key customers to verify flight performance. This process/model was used as the baseline for the currently ongoing Space Launch System (SLS) work. The process utilizes several methodologies, including multiple software programs, to model integrated propulsion system performance through space shuttle ascent. It is used to evaluate integrated propulsion systems, including propellant tanks, feed systems, rocket engine, and pressurization systems performance throughout ascent based on flight pressure and temperature data. The latest revision incorporates new methods based on main engine power balance model updates to model higher mixture ratio operation at lower engine power levels.

  20. Individualized Next-Generation Biomathematical Modeling of Fatigue and Performance

    National Research Council Canada - National Science Library

    Van Dongen, Hans P

    2006-01-01

    .... This project employed a cutting-edge technique called Bayesian forecasting to develop a novel biomathematical performance model to predict responses to sleep loss and circadian displacement for individual subjects...

  1. Computational Modeling of Human Multiple-Task Performance

    National Research Council Canada - National Science Library

    Kieras, David E; Meyer, David

    2005-01-01

    This is the final report for a project that was a continuation of an earlier, long-term project on the development and validation of the EPIC cognitive architecture for modeling human cognition and performance...

  2. Thermal Model Predictions of Advanced Stirling Radioisotope Generator Performance

    Science.gov (United States)

    Wang, Xiao-Yen J.; Fabanich, William Anthony; Schmitz, Paul C.

    2014-01-01

    This presentation describes the capabilities of a three-dimensional thermal power model of the Advanced Stirling Radioisotope Generator (ASRG). The performance of the ASRG is presented for different scenarios, such as a Venus flyby with or without the auxiliary cooling system.

  3. Six scenarios of exploiting an ontology based, mobilized learning environment

    NARCIS (Netherlands)

    Kismihók, G.; Szabó, I.; Vas, R.

    2012-01-01

    In this article, six different exploitation possibilities of an educational ontology based, mobilized learning management system are presented. The focal point of this system is the educational ontology model. The first version of this educational ontology model serves as a foundation for curriculum

  4. Rational Exploitation and Utilizing of Groundwater in Jiangsu Coastal Area

    Science.gov (United States)

    Kang, B.; Lin, X.

    2017-12-01

    Jiangsu coastal area is located on the southeast coast of China; it is a new industrial base and an important coastal and land resources development zone of China. In areas with intensive human exploitation activities, regional groundwater evolution is strongly affected by human activities. In order to fundamentally solve the environmental geological problems caused by groundwater exploitation, we must determine the forming conditions of the regional groundwater hydrodynamic field, and the impact of human activities on the evolution of the groundwater hydrodynamic field and on hydrogeochemical evolution. Based on these results, scientific management and reasonable exploitation of the regional groundwater resources can be supported. Taking the coastal area of Jiangsu as the research area, we investigate and analyze the regional hydrogeological conditions. A numerical simulation model of groundwater flow was established using hydrodynamic, chemical and isotopic methods, the conditions of water flow, and the influence of the hydrodynamic field on the hydrochemical field. We predict the evolution of regional groundwater dynamics under the influence of human activities and climate change, and evaluate the influence of groundwater dynamic field evolution on the environmental geological problems caused by groundwater exploitation under various conditions. We reach the following conclusions: three optimal groundwater exploitation schemes were established, with groundwater salinization taken as the primary control condition. A substitution model built with the BP network method was proposed to model groundwater exploitation and water-level changes; a genetic algorithm was then used to solve for the optimal exploitation scheme. The three optimal schemes were submitted to the local water resource management authority. The first scheme addresses the groundwater salinization problem; the second focuses on dual water supply; the third concerns emergency water
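
    The scheme-optimization step described in this record — a trained substitution (surrogate) model searched by a genetic algorithm — can be sketched as follows. This is a minimal illustration, not the authors' code: `surrogate_drawdown` is a hypothetical quadratic stand-in for the BP-network substitution model, and the pumping bounds, drawdown limit and GA settings are all invented.

```python
import random

random.seed(42)

# Hypothetical surrogate for the BP-network substitution model:
# maps two well pumping rates (m^3/day) to a predicted drawdown (m).
def surrogate_drawdown(q1, q2):
    return 0.002 * q1 + 0.003 * q2 + 1e-6 * q1 * q2

MAX_DRAWDOWN = 5.0          # salinization-control constraint (illustrative)
BOUNDS = (0.0, 1500.0)      # allowed pumping rate per well

def fitness(ind):
    q1, q2 = ind
    supply = q1 + q2                       # maximize total water supply
    penalty = max(0.0, surrogate_drawdown(q1, q2) - MAX_DRAWDOWN)
    return supply - 1e4 * penalty          # penalize constraint violation

def evolve(pop_size=40, generations=60):
    pop = [[random.uniform(*BOUNDS) for _ in range(2)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]     # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]      # crossover
            if random.random() < 0.3:                        # mutation
                i = random.randrange(2)
                child[i] = min(BOUNDS[1], max(BOUNDS[0],
                               child[i] + random.gauss(0, 50)))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
best_drawdown = surrogate_drawdown(*best)
```

    The penalty weight makes constraint-violating schemes uncompetitive, so the GA converges to a high-supply scheme that respects the salinization limit.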

  5. ASSESSING INDIVIDUAL PERFORMANCE ON INFORMATION TECHNOLOGY ADOPTION: A NEW MODEL

    OpenAIRE

    Diah Hari Suryaningrum

    2012-01-01

    This paper aims to propose a new model for assessing individual performance on information technology adoption. The new model was derived from two different theories: the decomposed theory of planned behavior and task-technology fit theory. Although many researchers have tried to expand these theories, some of their efforts might lack theoretical grounding. To overcome this problem and enhance the coherence of the integration, I used a theory from social scien...

  6. Model of service-oriented catering supply chain performance evaluation

    OpenAIRE

    Gou, Juanqiong; Shen, Guguan; Chai, Rui

    2013-01-01

    Purpose: The aim of this paper is to construct a performance evaluation model for the service-oriented catering supply chain. Design/methodology/approach: Drawing on research into the current situation of the catering industry, this paper summarizes the characteristics of the catering supply chain and then presents a service-oriented catering supply chain model based on a logistics and information platform. Finally, the fuzzy AHP method is used to evaluate the performance of the service-oriented catering ...
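
    The fuzzy AHP evaluation mentioned in this record builds on the classical AHP weighting step. Below is a minimal sketch of that crisp core, using the geometric-mean (row) method on a pairwise comparison matrix; the three criteria and the judgments in the matrix are invented for illustration, not taken from the paper.

```python
import math

# Pairwise comparison matrix for three illustrative criteria
# (logistics, information, service quality) on Saaty's 1-9 scale.
# The judgment values are invented for illustration.
A = [
    [1.0,   3.0, 5.0],
    [1/3.0, 1.0, 2.0],
    [1/5.0, 1/2.0, 1.0],
]

def ahp_weights(matrix):
    """Priority weights by the geometric-mean (row) method."""
    gms = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(gms)
    return [g / total for g in gms]

weights = ahp_weights(A)   # normalized priority weights, sum to 1
```

    A fuzzy AHP replaces the crisp judgments with triangular fuzzy numbers and defuzzifies at the end, but the normalization step is the same.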

  7. Rising to the challenge : A model of contest performance

    OpenAIRE

    DesAutels, Philip; Berthon, Pierre; Salehi-Sangari, Esmail

    2011-01-01

    Contests are a ubiquitous form of promotion widely adopted by financial services advertisers, yet, paradoxically, academic research on them is conspicuous in its absence. This work addresses this gap by developing a model of contest engagement and performance. Using motivation theory, the factors that drive participant engagement are modeled, and engagement's effect on the experience and marketing success of the contest is specified. Measures of contest performance, in-contest engagement and post-contes...

  8. A performance model for the communication in fast multipole methods on high-performance computing platforms

    KAUST Repository

    Ibeid, Huda; Yokota, Rio; Keyes, David E.

    2016-01-01

    model and the actual communication time on four high-performance computing (HPC) systems, when latency, bandwidth, network topology, and multicore penalties are all taken into account. To our knowledge, this is the first formal characterization

  9. A Mathematical Model to Improve the Performance of Logistics Network

    Directory of Open Access Journals (Sweden)

    Muhammad Izman Herdiansyah

    2012-01-01

    Full Text Available The role of logistics nowadays is expanding from just providing transportation and warehousing to offering total integrated logistics. To remain competitive in the global market environment, business enterprises need to improve the performance of their logistics operations. The improvement will be achieved when we can provide a comprehensive analysis and optimize network performance. In this paper, a mixed integer linear model for optimizing logistics network performance is developed. It provides a single-product multi-period multi-facility model, as well as a multi-product extension. The problem is modeled as a network flow problem with the main objective of minimizing total logistics cost. The problem can be solved using commercial linear programming packages such as CPLEX or LINDO. Even for small cases, the solver in Excel may also be used to solve such a model. Keywords: logistics network, integrated model, mathematical programming, network optimization
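
    The network-flow formulation described in this record can be illustrated on a toy instance small enough to solve by brute-force enumeration rather than CPLEX or LINDO. All plants, customers, costs and capacities below are invented:

```python
from itertools import product

# Toy single-product network: two plants ship to two customers.
capacity = {"A": 40, "B": 30}          # plant capacities
demand = {"X": 30, "Y": 30}            # customer demands
cost = {("A", "X"): 2, ("A", "Y"): 4,  # unit shipping costs
        ("B", "X"): 5, ("B", "Y"): 1}

plants, customers = sorted(capacity), sorted(demand)

def total_cost(flow):
    return sum(cost[p, c] * flow[p, c] for p in plants for c in customers)

best_flow, best_cost = None, float("inf")
# Enumerate how much of each customer's demand plant A supplies,
# in steps of 10 units; plant B ships the remainder.
for ax, ay in product(range(0, demand["X"] + 1, 10),
                      range(0, demand["Y"] + 1, 10)):
    flow = {("A", "X"): ax, ("A", "Y"): ay,
            ("B", "X"): demand["X"] - ax, ("B", "Y"): demand["Y"] - ay}
    shipped = {"A": ax + ay, "B": flow["B", "X"] + flow["B", "Y"]}
    if all(shipped[p] <= capacity[p] for p in plants):
        c = total_cost(flow)
        if c < best_cost:
            best_flow, best_cost = flow, c
```

    A real multi-period, multi-facility instance would be handed to an MILP solver, but the objective (minimize total shipping cost subject to capacity and demand) is the same.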

  10. Exploiting Symmetry on Parallel Architectures.

    Science.gov (United States)

    Stiller, Lewis Benjamin

    1995-01-01

    This thesis describes techniques for the design of parallel programs that solve well-structured problems with inherent symmetry. Part I demonstrates the reduction of such problems to generalized matrix multiplication by a group-equivariant matrix. Fast techniques for this multiplication are described, including factorization, orbit decomposition, and Fourier transforms over finite groups. Our algorithms entail interaction between two symmetry groups: one arising at the software level from the problem's symmetry and the other arising at the hardware level from the processors' communication network. Part II illustrates the applicability of our symmetry-exploitation techniques by presenting a series of case studies of the design and implementation of parallel programs. First, a parallel program that solves chess endgames by factorization of an associated dihedral group-equivariant matrix is described. This code runs faster than previous serial programs, and discovered a number of results. Second, parallel algorithms for Fourier transforms for finite groups are developed, and preliminary parallel implementations for group transforms of dihedral and of symmetric groups are described. Applications in learning, vision, pattern recognition, and statistics are proposed. Third, parallel implementations solving several computational science problems are described, including the direct n-body problem, convolutions arising from molecular biology, and some communication primitives such as broadcast and reduce. Some of our implementations ran orders of magnitude faster than previous techniques, and were used in the investigation of various physical phenomena.

  11. Construction Of A Performance Assessment Model For Zakat Management Institutions

    Directory of Open Access Journals (Sweden)

    Sri Fadilah

    2016-12-01

    Full Text Available The objective of the research is to examine performance evaluation using the Balanced Scorecard model. The research is motivated by the big gap between the potential of zakat (alms and religious tax in Islam), estimated at as much as 217 trillion rupiahs, and the realization of the collected zakat fund, which reaches only three trillion. This indicates that the performance of zakat management organizations in collecting zakat is still very low. On the other hand, the quantity and the quality of zakat management organizations have to be improved, which means a performance evaluation model is needed as an evaluation tool. The construct is a performance evaluation model that can be implemented by zakat management organizations. Organizational performance evaluation with the Balanced Scorecard model will be effective if it is supported by three aspects, namely PI, BO and TQM. This research uses an explanatory method and SEM/PLS as the data analysis tool. Data collection techniques are questionnaires, interviews and documentation. The result of this research shows that PI, BO and TQM, simultaneously and partially, have a significant effect on organizational performance.

  12. Port performance evaluation tool based on microsimulation model

    Directory of Open Access Journals (Sweden)

    Tsavalista Burhani Jzolanda

    2017-01-01

    Full Text Available As port performance is becoming correlated with national competitiveness, the issue of port performance evaluation has gained significance. Port performance can be indicated by port service levels to the ship (e.g., throughput, waiting time for berthing, etc.), as well as by the utilization level of equipment and facilities within a certain period. The performance evaluation can then be used as a tool to develop policies for improving the port's performance to be more effective and efficient. However, the evaluation is frequently conducted based on a deterministic approach, which hardly captures the natural variations of port parameters. Therefore, this paper presents a stochastic microsimulation model for investigating the impacts of port parameter variations on port performance. The variations are derived from actual data in order to provide more realistic results. The model is developed using MATLAB and Simulink based on queuing theory.
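
    A queuing-theory microsimulation of the kind described can be sketched with a single-berth M/M/1 model: ships arrive by a Poisson process and berth service times are exponential, so the simulated mean waiting time can be checked against the analytic value ρ/(μ − λ). The arrival and service rates below are invented, and this Python sketch merely stands in for the paper's MATLAB/Simulink model.

```python
import random

random.seed(7)

lam, mu = 0.8, 1.0     # ship arrival rate, berth service rate (ships/hour)
n_ships = 50_000

# Lindley recursion: the wait of the next ship given the current ship's wait.
wait, total_wait = 0.0, 0.0
for _ in range(n_ships):
    total_wait += wait
    service = random.expovariate(mu)
    interarrival = random.expovariate(lam)
    wait = max(0.0, wait + service - interarrival)

sim_wq = total_wait / n_ships
analytic_wq = (lam / mu) / (mu - lam)   # rho / (mu - lambda) = 4 hours
```

    At 80% berth utilization the average wait is already four service times, which is why stochastic variation, not just deterministic throughput, drives port service levels.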

  13. Two questions about surrogacy and exploitation.

    Science.gov (United States)

    Wertheimer, Alan

    1992-01-01

    In this article I will consider two related questions about surrogacy and exploitation: (1) Is surrogacy exploitative? (2) If surrogacy is exploitative, what is the moral force of this exploitation? Briefly stated, I shall argue that whether surrogacy is exploitative depends on whether exploitation must be harmful to the exploited party or whether (as I think) there can be mutually advantageous exploitation. It also depends on some facts about surrogacy about which we have little reliable evidence and on our philosophical view on what counts as a harm to the surrogate. Our answer to the second question will turn in part on the account of exploitation we invoke in answering the first question and in part on the way in which we resolve some other questions about the justification of state interference. I shall suggest, however, that if surrogacy is a form of voluntary and mutually advantageous exploitation, then there is a strong presumption that surrogacy contracts should be permitted and even enforceable, although that presumption may be overridden on other grounds.

  14. Team performance modeling for HRA in dynamic situations

    International Nuclear Information System (INIS)

    Shu Yufei; Furuta, Kazuo; Kondo, Shunsuke

    2002-01-01

    This paper proposes a team behavior network model that can simulate and analyze the response of an operator team to an incident in a dynamic and context-sensitive situation. The model is composed of four sub-models, which describe the context of team performance: a task model, an event model, a team model and a human-machine interface model. Each operator demonstrates aspects of his/her specific cognitive behavior and interacts with other operators and the environment in order to deal with an incident. Individual human factors, which determine the basis of communication and interaction between individuals, and the cognitive processes of an operator, such as information acquisition, state recognition, decision-making and action execution during the development of an event scenario, are modeled. A case of feed-and-bleed operation in a pressurized water reactor under an emergency situation was studied, and the result was compared with an experiment to check the validity of the proposed model.

  15. A Model of Statistics Performance Based on Achievement Goal Theory.

    Science.gov (United States)

    Bandalos, Deborah L.; Finney, Sara J.; Geske, Jenenne A.

    2003-01-01

    Tests a model of statistics performance based on achievement goal theory. Both learning and performance goals affected achievement indirectly through study strategies, self-efficacy, and test anxiety. Implications of these findings for teaching and learning statistics are discussed. (Contains 47 references, 3 tables, 3 figures, and 1 appendix.)…

  16. A Model for Effective Performance in the Indonesian Navy.

    Science.gov (United States)

    1987-06-01

    Contents include a Navy Leadership and Management Competency Model and the McBer Competent Managers Model. The study concerns leadership and managerial skills which emphasize the effective performance of officers in managing the human resources under their command and supervision. By effective performance we mean officers who not only know about management theories, but who possess the characteristics, knowledge, skill, and

  17. A Composite Model for Employees' Performance Appraisal and Improvement

    Science.gov (United States)

    Manoharan, T. R.; Muralidharan, C.; Deshmukh, S. G.

    2012-01-01

    Purpose: The purpose of this paper is to develop an innovative method of performance appraisal that will be useful for designing a structured training programme. Design/methodology/approach: Employees' performance appraisals are conducted using new approaches, namely data envelopment analysis and an integrated fuzzy model. Interpretive structural…

  18. Performance Implications of Business Model Change: A Case Study

    Directory of Open Access Journals (Sweden)

    Jana Poláková

    2015-01-01

    Full Text Available The paper deals with changes in performance introduced by a change of business model. The selected case is a small family business undergoing substantial changes in reflection of structural changes in its markets. The authors use the concept of the business model to describe value creation processes within the selected family business, and by contrasting the differences between value creation processes before and after the change they demonstrate the role of the business model as a performance differentiator. This is illustrated with business model canvases constructed on the basis of interviews, observations and document analysis. The two business model canvases allow for an explanation of cause-and-effect relationships within the business leading to the change in performance. The change in performance is assessed by financial analysis of the business conducted over the period 2006–2012, which demonstrates changes in performance: ROA, ROE and ROS reached their lowest levels before the change of business model was introduced and grew after its introduction, with similar developments in the activity indicators of the family business. The described case study contributes to the concept of business modeling with arguments supporting its value as a strategic tool facilitating decisions related to value creation within the business.

  19. Enhancing pavement performance prediction models for the Illinois Tollway System

    Directory of Open Access Journals (Sweden)

    Laxmikanth Premkumar

    2016-01-01

    Full Text Available Accurate pavement performance prediction plays an important role in prioritizing future maintenance and rehabilitation needs and in predicting future pavement condition in a pavement management system. The Illinois State Toll Highway Authority (Tollway), with over 2000 lane miles of pavement, utilizes the condition rating survey (CRS) methodology to rate pavement performance. Pavement performance models developed in the past for the Illinois Department of Transportation (IDOT) are used by the Tollway to predict the future condition of its network. The model projects future CRS ratings based on pavement type, thickness, traffic, pavement age and current CRS rating. However, with time and the inclusion of newer pavement types, there was a need to calibrate the existing pavement performance models, as well as to develop models for the newer pavement types. This study presents the results of calibrating the existing models and developing new models for the various pavement types in the Illinois Tollway network. The predicted future condition of the pavements is used in estimating their remaining service life to failure, which is of immediate use in recommending future maintenance and rehabilitation requirements for the network. Keywords: Pavement performance models, Remaining life, Pavement management

  20. Confirming the Value of Swimming-Performance Models for Adolescents.

    Science.gov (United States)

    Dormehl, Shilo J; Robertson, Samuel J; Barker, Alan R; Williams, Craig A

    2017-10-01

    To evaluate the efficacy of existing performance models in assessing the progression of male and female adolescent swimmers through a quantitative and qualitative mixed-methods approach. Fourteen published models were tested using retrospective data from an independent sample of Dutch junior national-level swimmers from when they were 12-18 y of age (n = 13). The degree of association by Pearson correlations was compared between the calculated differences from the models and quadratic functions derived from the Dutch junior national qualifying times. Swimmers were grouped based on their differences from the models and compared with their swimming histories, extracted from questionnaires and follow-up interviews. Correlations of the deviations from both the models and the quadratic functions derived from the Dutch qualifying times were all significant except for the 100-m breaststroke and butterfly and the 200-m freestyle for females (P ... Motivation appeared to be synonymous with higher-level career performance. This mixed-methods approach helped confirm the validity of the models, which were found to be applicable to adolescent swimmers at all levels, allowing coaches to track performance and set goals. The value of the models in being able to account for the expected performance gains during adolescence enables quantification of peripheral factors that could affect performance.
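
    The comparison this study describes — quadratic age-performance functions checked by Pearson correlation — can be sketched in a few lines of pure Python. The age-time data below are invented, not the Dutch qualifying times:

```python
import math

# Hypothetical 100-m times (s) by age, improving with age.
ages  = [12, 13, 14, 15, 16, 17, 18]
times = [68.0, 64.5, 61.8, 59.9, 58.6, 57.9, 57.6]

def quadratic_fit(xs, ys):
    """Least-squares fit of y = a*x^2 + b*x + c via the normal equations."""
    cols = [[x * x for x in xs], list(xs), [1.0] * len(xs)]
    A = [[sum(u * v for u, v in zip(ci, cj)) for cj in cols] for ci in cols]
    b = [sum(u * y for u, y in zip(ci, ys)) for ci in cols]
    for i in range(3):                      # forward elimination w/ pivoting
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            A[r] = [arv - f * aiv for arv, aiv in zip(A[r], A[i])]
            b[r] -= f * b[i]
    coeffs = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                     # back substitution
        coeffs[i] = (b[i] - sum(A[i][j] * coeffs[j]
                                for j in range(i + 1, 3))) / A[i][i]
    return coeffs

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

a, b_, c = quadratic_fit(ages, times)
predicted = [a * x * x + b_ * x + c for x in ages]
r = pearson(times, predicted)   # agreement between model and observed times
```

    A positive leading coefficient captures the flattening of improvement toward the end of adolescence, which is what lets such curves quantify expected maturation gains.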

  1. How motivation affects academic performance: a structural equation modelling analysis.

    Science.gov (United States)

    Kusurkar, R A; Ten Cate, Th J; Vos, C M P; Westers, P; Croiset, G

    2013-03-01

    Few studies in medical education have studied the effect of quality of motivation on performance. Self-Determination Theory, based on quality of motivation, differentiates between Autonomous Motivation (AM) that originates within an individual and Controlled Motivation (CM) that originates from external sources. The aims were to determine whether Relative Autonomous Motivation (RAM, a measure of the balance between AM and CM) affects academic performance through good study strategy and higher study effort, and to compare this model between subgroups: males and females, and students selected via two different systems, namely qualitative and weighted lottery selection. Data on motivation, study strategy and effort were collected from 383 medical students of VU University Medical Center Amsterdam, and their academic performance results were obtained from the student administration. The Structural Equation Modelling analysis technique was used to test a hypothesized model in which high RAM would positively affect Good Study Strategy (GSS) and study effort, which in turn would positively affect academic performance in the form of grade point averages. This model fit the data well, Chi square = 1.095, df = 3, p = 0.778, RMSEA = 0.000. The model also fitted well for all tested subgroups of students. Differences were found in the strength of relationships between the variables for the different subgroups, as expected. In conclusion, RAM correlated positively with academic performance through a deep study strategy and higher study effort. This model appears valid in medical education for subgroups such as males, females, and students selected by qualitative and weighted lottery selection.

  2. Performance Analysis of GFDL's GCM Line-By-Line Radiative Transfer Model on GPU and MIC Architectures

    Science.gov (United States)

    Menzel, R.; Paynter, D.; Jones, A. L.

    2017-12-01

    Due to their relatively low computational cost, radiative transfer models in global climate models (GCMs) run on traditional CPU architectures generally consist of shortwave and longwave parameterizations over a small number of wavelength bands. With the rise of newer GPU and MIC architectures, however, the performance of high resolution line-by-line radiative transfer models may soon approach those of the physical parameterizations currently employed in GCMs. Here we present an analysis of the current performance of a new line-by-line radiative transfer model currently under development at GFDL. Although originally designed to specifically exploit GPU architectures through the use of CUDA, the radiative transfer model has recently been extended to include OpenMP in an effort to also effectively target MIC architectures such as Intel's Xeon Phi. Using input data provided by the upcoming Radiative Forcing Model Intercomparison Project (RFMIP, as part of CMIP 6), we compare model results and performance data for various model configurations and spectral resolutions run on both GPU and Intel Knights Landing architectures to analogous runs of the standard Oxford Reference Forward Model on traditional CPUs.

  3. Enhancing pavement performance prediction models for the Illinois Tollway System

    OpenAIRE

    Laxmikanth Premkumar; William R. Vavrik

    2016-01-01

    Accurate pavement performance prediction represents an important role in prioritizing future maintenance and rehabilitation needs, and predicting future pavement condition in a pavement management system. The Illinois State Toll Highway Authority (Tollway) with over 2000 lane miles of pavement utilizes the condition rating survey (CRS) methodology to rate pavement performance. Pavement performance models developed in the past for the Illinois Department of Transportation (IDOT) are used by th...

  4. Comparative assessment of PV plant performance models considering climate effects

    DEFF Research Database (Denmark)

    Tina, Giuseppe; Ventura, Cristina; Sera, Dezso

    2017-01-01

    . The methodological approach is based on comparative tests of the analyzed models applied to two PV plants installed in the north of Denmark (Aalborg) and in the south of Italy (Agrigento), respectively. The different ambient, operating and installation conditions allow one to understand how these factors impact the precision... To compare the performance of the studied PV plants with others, the efficiency of the systems has been estimated by both the conventional Performance Ratio and the Corrected Performance Ratio

  5. Performing arts medicine: A research model for South Africa

    Directory of Open Access Journals (Sweden)

    Karendra Devroop

    2014-11-01

    Full Text Available Performing Arts Medicine has developed into a highly specialised field over the past three decades. The Performing Arts Medical Association (PAMA has been the leading proponent of this unique and innovative field with ground-breaking research studies, symposia, conferences and journals dedicated specifically to the medical problems of performing artists. Similar to sports medicine, performing arts medicine caters specifically for the medical problems of performing artists including musicians and dancers. In South Africa there is a tremendous lack of knowledge of the field and unlike our international counterparts, we do not have specialised clinical settings that cater for the medical problems of performing artists. There is also a tremendous lack of research on performance-related medical problems of performing artists in South Africa. Accordingly the purpose of this paper is to present an overview of the field of performing arts medicine, highlight some of the significant findings from recent research studies and present a model for conducting research into the field of performing arts medicine. It is hoped that this research model will lead to increased research on the medical problems of performing artists in South Africa.

  6. The Social Responsibility Performance Outcomes Model: Building Socially Responsible Companies through Performance Improvement Outcomes.

    Science.gov (United States)

    Hatcher, Tim

    2000-01-01

    Considers the role of performance improvement professionals and human resources development professionals in helping organizations realize the ethical and financial power of corporate social responsibility. Explains the social responsibility performance outcomes model, which incorporates the concepts of societal needs and outcomes. (LRW)

  7. Technical performance of percutaneous and laminectomy leads analyzed by modeling

    NARCIS (Netherlands)

    Manola, L.; Holsheimer, J.

    2004-01-01

    The objective of this study was to compare the technical performance of laminectomy and percutaneous spinal cord stimulation leads with similar contact spacing by computer modeling. Monopolar and tripolar (guarded cathode) stimulation with both lead types in a low-thoracic spine model was simulated.

  8. Modelling the Performance of Product Integrated Photovoltaic (PIPV) Cells Indoors

    NARCIS (Netherlands)

    Apostolou, G.; Verwaal, M.; Reinders, Angelina H.M.E.

    2014-01-01

    In this paper we present a model which has been developed for the estimation of PV product cells' performance in an indoor environment. The model computes the efficiency and power production of PV technologies as a function of distance from natural and artificial light sources. It intends

  9. Activity-Based Costing Model for Assessing Economic Performance.

    Science.gov (United States)

    DeHayes, Daniel W.; Lovrinic, Joseph G.

    1994-01-01

    An economic model for evaluating the cost performance of academic and administrative programs in higher education is described. Examples from its application at Indiana University-Purdue University Indianapolis are used to illustrate how the model has been used to control costs and reengineer processes. (Author/MSE)

  10. Identification of human operator performance models utilizing time series analysis

    Science.gov (United States)

    Holden, F. M.; Shinners, S. M.

    1973-01-01

    The results of an effort performed by Sperry Systems Management Division for AMRL in applying time series analysis as a tool for modeling the human operator are presented. This technique is utilized for determining the variation of the human transfer function under various levels of stress. The human operator's model is determined based on actual input and output data from a tracking experiment.
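
    Identifying a transfer-function model from input-output records, as described in this record, reduces to a least-squares problem. In the sketch below, a hypothetical first-order discrete operator model y[n] = a·y[n−1] + b·u[n−1] (coefficients invented) is recovered from noise-free tracking data via the 2×2 normal equations:

```python
import random

random.seed(1)

a_true, b_true = 0.5, 0.3
u = [random.uniform(-1, 1) for _ in range(200)]   # tracking-task input
y = [0.0]
for n in range(1, len(u)):                        # simulate the "operator"
    y.append(a_true * y[n - 1] + b_true * u[n - 1])

# Least-squares fit of y[n] = a*y[n-1] + b*u[n-1] via the normal equations.
r1 = y[:-1]          # regressor 1: y[n-1]
r2 = u[:-1]          # regressor 2: u[n-1]
t = y[1:]            # target: y[n]
s11 = sum(x * x for x in r1)
s12 = sum(x * z for x, z in zip(r1, r2))
s22 = sum(z * z for z in r2)
b1 = sum(x * w for x, w in zip(r1, t))
b2 = sum(z * w for z, w in zip(r2, t))
det = s11 * s22 - s12 * s12
a_hat = (b1 * s22 - b2 * s12) / det
b_hat = (b2 * s11 - b1 * s12) / det
```

    With real tracking data the fit would be repeated at each stress level, so the drift of the estimated coefficients traces how the operator's transfer function varies with stress.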

  11. Mathematical Models of Elementary Mathematics Learning and Performance. Final Report.

    Science.gov (United States)

    Suppes, Patrick

    This project was concerned with the development of mathematical models of elementary mathematics learning and performance. Probabilistic finite automata and register machines with a finite number of registers were developed as models and extensively tested with data arising from the elementary-mathematics strand curriculum developed by the…

  12. UNCONSTRAINED HANDWRITING RECOGNITION : LANGUAGE MODELS, PERPLEXITY, AND SYSTEM PERFORMANCE

    NARCIS (Netherlands)

    Marti, U-V.; Bunke, H.

    2004-01-01

    In this paper we present a number of language models and their behavior in the recognition of unconstrained handwritten English sentences. We use the perplexity to compare the different models and their prediction power, and relate it to the performance of a recognition system under different

  13. Modeling and Performance Analysis of Manufacturing Systems in ...

    African Journals Online (AJOL)

    Modeling and Performance Analysis of Manufacturing Systems in Footwear Industry. ... researcher to experiment with different variables and control the manufacturing process ... In this study Arena simulation software is employed to model and measure ...

  14. Modeling the performance of low concentration photovoltaic systems

    Energy Technology Data Exchange (ETDEWEB)

    Reis, F. [SESUL, Faculdade de Ciencias da Universidade de Lisboa, 1749-016 Lisboa (Portugal); WS Energia, Ed. Tecnologia II 47, Taguspark, Oeiras (Portugal); Brito, M.C. [SESUL, Faculdade de Ciencias da Universidade de Lisboa, 1749-016 Lisboa (Portugal); Corregidor, V.; Wemans, J. [WS Energia, Ed. Tecnologia II 47, Taguspark, Oeiras (Portugal); Sorasio, G. [WS Energia, Ed. Tecnologia II 47, Taguspark, Oeiras (Portugal); Centro Richerche ISCAT, VS Pellico, 12037, Saluzzo (Italy)

    2010-07-15

    A theoretical model has been developed to describe the response of V-trough systems in terms of module temperature, power output and energy yield, using the atmospheric conditions as inputs. The model was adjusted to the DoubleSun® concentration technology, which integrates a dual-axis tracker and conventional mono-crystalline Si modules. The good agreement between model predictions and the results obtained at the WS Energia laboratory, Portugal, validated the model. It is shown that DoubleSun® technology increases the yearly energy yield of conventional modules by up to 86% relative to a fixed flat-plate system. The model was also used to perform a sensitivity analysis, in order to highlight the relevance of the leading working parameters (such as irradiance) for system performance (energy yield and module temperature). Model results show that the operating module temperature is always below the maximum working temperature defined by the module manufacturers. (author)
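
    The module-temperature and power-output relations such a model rests on can be sketched with the standard NOCT approximation. The NOCT, temperature coefficient and STC power below are generic datasheet-style values chosen for illustration, not DoubleSun parameters:

```python
NOCT = 45.0     # nominal operating cell temperature, degC (datasheet-style)
P_STC = 250.0   # module power at standard test conditions, W
GAMMA = -0.004  # power temperature coefficient, 1/degC

def cell_temperature(t_amb, irradiance):
    """NOCT approximation: linear rise of cell over ambient temperature."""
    return t_amb + (NOCT - 20.0) / 800.0 * irradiance

def module_power(t_amb, irradiance):
    """Irradiance-scaled STC power with a linear temperature derating."""
    t_cell = cell_temperature(t_amb, irradiance)
    return P_STC * (irradiance / 1000.0) * (1.0 + GAMMA * (t_cell - 25.0))

# A V-trough concentrator raises the effective plane-of-array irradiance,
# boosting output but also raising the operating temperature.
p_flat = module_power(20.0, 1000.0)
p_vtrough = module_power(20.0, 1600.0)   # illustrative concentration gain
```

    The example shows the trade-off the abstract alludes to: concentration raises output less than proportionally, because the hotter cell is derated by the temperature coefficient.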

  15. Sensitivity and uncertainty analyses for performance assessment modeling

    International Nuclear Information System (INIS)

    Doctor, P.G.

    1988-08-01

    Sensitivity and uncertainty analyses methods for computer models are being applied in performance assessment modeling in the geologic high level radioactive waste repository program. The models used in performance assessment tend to be complex physical/chemical models with large numbers of input variables. There are two basic approaches to sensitivity and uncertainty analyses: deterministic and statistical. The deterministic approach to sensitivity analysis involves numerical calculation or employs the adjoint form of a partial differential equation to compute partial derivatives; the uncertainty analysis is based on Taylor series expansions of the input variables propagated through the model to compute means and variances of the output variable. The statistical approach to sensitivity analysis involves a response surface approximation to the model with the sensitivity coefficients calculated from the response surface parameters; the uncertainty analysis is based on simulation. The methods each have strengths and weaknesses. 44 refs
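
    The deterministic (Taylor-series) branch described above can be sketched in a few lines: numerical partial derivatives propagate input variances to an output variance. The two-variable model f(x, y) = x·y and its input uncertainties below are invented for illustration:

```python
import math

def f(x, y):
    # Stand-in for a performance-assessment model output.
    return x * y

def propagate(model, means, sigmas, h=1e-6):
    """First-order Taylor propagation: var = sum (df/dx_i)^2 * sigma_i^2."""
    var = 0.0
    for i in range(len(means)):
        up = list(means); up[i] += h
        dn = list(means); dn[i] -= h
        dfdx = (model(*up) - model(*dn)) / (2 * h)   # central difference
        var += dfdx ** 2 * sigmas[i] ** 2
    return var

means, sigmas = [2.0, 3.0], [0.1, 0.2]
sigma_out = math.sqrt(propagate(f, means, sigmas))
```

    The statistical alternative the abstract mentions would instead sample the inputs (e.g., Monte Carlo) and estimate the output variance directly, which is preferable when the model is strongly nonlinear over the input ranges.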

  16. Introducing Model Predictive Control for Improving Power Plant Portfolio Performance

    DEFF Research Database (Denmark)

    Edlund, Kristian Skjoldborg; Bendtsen, Jan Dimon; Børresen, Simon

    2008-01-01

    This paper introduces a model predictive control (MPC) approach for construction of a controller for balancing the power generation against consumption in a power system. The objective of the controller is to coordinate a portfolio consisting of multiple power plant units in the effort to perform reference tracking and disturbance rejection in an economically optimal way. The performance function is chosen as a mixture of the ℓ1-norm and a linear weighting to model the economics of the system. Simulations show a significant improvement of the performance of the MPC compared to the current...
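
    A minimal receding-horizon sketch of the idea, not the paper's controller: a scalar plant tracked under a cost mixing an ℓ1 tracking term with a linear weight on input magnitude, minimized by brute force over a small discrete input set. Plant coefficients, horizon and weights are all hypothetical.

```python
from itertools import product

# Hypothetical scalar plant: x[k+1] = a*x[k] + b*u[k]
a, b = 0.9, 0.5
horizon = 3                            # prediction horizon
candidate_inputs = (-1.0, 0.0, 1.0)    # coarse discrete production changes
price = 0.05                           # linear economic weight on |u|

def mpc_step(x, reference):
    """Return the first input of the sequence minimizing an l1 tracking
    cost plus a linear cost on input magnitude over the horizon."""
    best_cost, best_u0 = float("inf"), 0.0
    for seq in product(candidate_inputs, repeat=horizon):
        xk, cost = x, 0.0
        for u in seq:
            xk = a * xk + b * u
            cost += abs(xk - reference) + price * abs(u)
        if cost < best_cost:
            best_cost, best_u0 = cost, seq[0]
    return best_u0

# Receding horizon: apply only the first input, then re-optimize
x, reference = 0.0, 2.0
for _ in range(10):
    x = a * x + b * mpc_step(x, reference)
print(round(x, 2))
```

    A production controller would solve this as a linear program rather than by enumeration; the enumeration only keeps the sketch dependency-free.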

  17. An analytical model of the HINT performance metric

    Energy Technology Data Exchange (ETDEWEB)

    Snell, Q.O.; Gustafson, J.L. [Scalable Computing Lab., Ames, IA (United States)

    1996-10-01

    The HINT benchmark was developed to provide a broad-spectrum metric for computers and to measure performance over the full range of memory sizes and time scales. We have extended our understanding of why HINT performance curves look the way they do and can now predict the curves using an analytical model based on simple hardware specifications as input parameters. Conversely, by fitting the experimental curves with the analytical model, hardware specifications such as memory performance can be inferred to provide insight into the nature of a given computer system.

  18. MODELING, SIMULATION AND PERFORMANCE STUDY OF GRID-CONNECTED PHOTOVOLTAIC ENERGY SYSTEM

    OpenAIRE

    Nagendra K; Karthik J; Keerthi Rao C; Kumar Raja Pemmadi

    2017-01-01

    This paper presents modeling and simulation of a grid-connected photovoltaic energy system and a performance study using MATLAB/Simulink. The photovoltaic energy system is considered in three main parts: PV model, power conditioning system and grid interface. The photovoltaic model is interconnected with the grid through full-scale power electronic devices. The simulation is conducted on the PV energy system at normal temperature and at constant load using MATLAB.

  19. Disaggregation of Rainy Hours: Compared Performance of Various Models.

    Science.gov (United States)

    Ben Haha, M.; Hingray, B.; Musy, A.

    In the urban environment, the response times of catchments are usually short. To design or to diagnose waterworks in that context, it is necessary to describe rainfall events with a good time resolution: a 10 mn time step is often necessary. Such information is not always available. Rainfall disaggregation models have thus to be applied to produce that short-time-resolution information from rough rainfall data. The communication will present the performance obtained with several rainfall disaggregation models that allow for the disaggregation of rainy hours into six 10 mn rainfall amounts. The ability of the models to reproduce some statistical characteristics of rainfall (mean, variance, overall distribution of 10 mn rainfall amounts; extreme values of maximal rainfall amounts over different durations) is evaluated using different graphical and numerical criteria. The performance of simple models presented in some scientific papers or developed in the Hydram laboratory, as well as the performance of more sophisticated ones, is compared with the performance of the basic constant disaggregation model. The compared models are either deterministic or stochastic; for some of them the disaggregation is based on scaling properties of rainfall. The compared models are, in increasing complexity order: constant model, linear model (Ben Haha, 2001), Ormsbee deterministic model (Ormsbee, 1989), artificial neural network based model (Burian et al., 2000), Hydram Stochastic 1 and Hydram Stochastic 2 (Ben Haha, 2001), multiplicative cascade based model (Olsson and Berndtsson, 1998), Ormsbee stochastic model (Ormsbee, 1989). The 625 rainy hours used for that evaluation (with an hourly rainfall amount greater than 5 mm) were extracted from the 21-year chronological rainfall series (10 mn time step) observed at the Pully meteorological station, Switzerland.
    The models were also evaluated when applied to different rainfall classes depending on the season first and on the

  20. A network application for modeling a centrifugal compressor performance map

    Science.gov (United States)

    Nikiforov, A.; Popova, D.; Soldatova, K.

    2017-08-01

    The approximation of aerodynamic performance of a centrifugal compressor stage and vaneless diffuser by neural networks is presented. Advantages, difficulties and specific features of the method are described. An example of a neural network and its structure is shown. The performances in terms of efficiency, pressure ratio and work coefficient of 39 model stages within the range of flow coefficient from 0.01 to 0.08 were modeled with mean squared error 1.5 %. In addition, the loss and friction coefficients of vaneless diffusers of relative widths 0.014-0.10 are modeled with mean squared error 2.45 %.

  1. A Perspective on Computational Human Performance Models as Design Tools

    Science.gov (United States)

    Jones, Patricia M.

    2010-01-01

    The design of interactive systems, including levels of automation, displays, and controls, is usually based on design guidelines and iterative empirical prototyping. A complementary approach is to use computational human performance models to evaluate designs. An integrated strategy of model-based and empirical test and evaluation activities is particularly attractive as a methodology for verification and validation of human-rated systems for commercial space. This talk will review several computational human performance modeling approaches and their applicability to design of display and control requirements.

  2. Two analytical models for evaluating performance of Gigabit Ethernet Hosts

    International Nuclear Information System (INIS)

    Salah, K.

    2006-01-01

    Two analytical models are developed to study the impact of interrupt overhead on operating system performance of network hosts when subjected to Gigabit network traffic. Under heavy network traffic, system performance will be negatively affected due to the interrupt overhead caused by incoming traffic. In particular, excessive latency and significant degradation in system throughput can be experienced. User applications may also livelock, as the CPU power is mostly consumed by interrupt handling and protocol processing. In this paper we present and compare two analytical models that capture host behavior and evaluate its performance. The first model is based on Markov processes and queuing theory, while the second, which is more accurate but more complex, is a pure Markov process. For the most part, both models give mathematically equivalent closed-form solutions for a number of important system performance metrics. These metrics include throughput, latency, the stability condition, CPU utilization of interrupt handling and protocol processing, and CPU availability for user applications. The analysis yields insight into understanding and predicting the impact of system and network choices on the performance of interrupt-driven systems when subjected to light and heavy network loads. More importantly, our analytical work can also be valuable in improving host performance. The paper gives guidelines and recommendations to address design and implementation issues. Simulation and reported experimental results show that our analytical models are valid and give a good approximation. (author)
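
    The paper's closed-form solutions are not reproduced here, but a much-simplified budget model conveys the qualitative behaviour: if each packet costs a fixed ISR time plus protocol-processing time, throughput saturates and user CPU availability collapses once the per-packet cost consumes the whole CPU. All timing numbers below are hypothetical.

```python
def host_metrics(arrival_rate, isr_time, proto_time):
    """Illustrative saturation model (not the paper's exact closed forms):
    every received packet costs ISR handling plus protocol processing;
    the host saturates when that cost fills the CPU."""
    per_packet = isr_time + proto_time           # CPU seconds per packet
    saturation = 1.0 / per_packet                # max sustainable pkts/s
    throughput = min(arrival_rate, saturation)
    cpu_busy = min(1.0, arrival_rate * per_packet)
    return throughput, cpu_busy, 1.0 - cpu_busy  # last item: user CPU share

# Light load: 10k pkts/s at 5 us ISR + 15 us protocol work per packet
print(host_metrics(10_000, 5e-6, 15e-6))
# Heavy load: 100k pkts/s -> throughput pinned at saturation, user CPU ~0
print(host_metrics(100_000, 5e-6, 15e-6))
```

    The paper's Markov models additionally capture latency and the livelock regime, where delivered throughput falls below this saturation ceiling.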

  3. Hybrid Corporate Performance Prediction Model Considering Technical Capability

    Directory of Open Access Journals (Sweden)

    Joonhyuck Lee

    2016-07-01

    Full Text Available Many studies have tried to predict corporate performance and stock prices to enhance investment profitability using qualitative approaches such as the Delphi method. However, developments in data processing technology and machine-learning algorithms have resulted in efforts to develop quantitative prediction models in various managerial subject areas. We propose a quantitative corporate performance prediction model that applies the support vector regression (SVR algorithm to solve the problem of the overfitting of training data and can be applied to regression problems. The proposed model optimizes the SVR training parameters based on the training data, using the genetic algorithm to achieve sustainable predictability in changeable markets and managerial environments. Technology-intensive companies represent an increasing share of the total economy. The performance and stock prices of these companies are affected by their financial standing and their technological capabilities. Therefore, we apply both financial indicators and technical indicators to establish the proposed prediction model. Here, we use time series data, including financial, patent, and corporate performance information of 44 electronic and IT companies. Then, we predict the performance of these companies as an empirical verification of the prediction performance of the proposed model.
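
    As a toy stand-in for the paper's GA-optimized SVR (which is not reproduced here), the sketch below uses a tiny genetic algorithm, truncation selection plus Gaussian mutation, to tune the regularization strength of a one-feature ridge regressor against a validation set. The data and all GA settings are invented.

```python
import random

random.seed(42)

# Synthetic 1-feature data: y = 2x + noise, split into train/validation
train = [(x / 10.0, 2.0 * x / 10.0 + random.gauss(0, 0.1)) for x in range(30)]
valid = [(x / 10.0 + 0.05, 2.0 * (x / 10.0 + 0.05) + random.gauss(0, 0.1))
         for x in range(30)]

def fit_ridge(data, lam):
    # Closed-form ridge weight for a single feature, no intercept
    sxy = sum(x * y for x, y in data)
    sxx = sum(x * x for x, _ in data)
    return sxy / (sxx + lam)

def val_error(lam):
    w = fit_ridge(train, lam)
    return sum((w * x - y) ** 2 for x, y in valid) / len(valid)

# Tiny genetic algorithm over log10(lambda): selection + mutation only
pop = [random.uniform(-4, 2) for _ in range(12)]
for _ in range(25):
    pop.sort(key=lambda g: val_error(10 ** g))
    parents = pop[:4]                              # truncation selection
    pop = parents + [p + random.gauss(0, 0.3) for p in parents for _ in range(2)]
best_lam = 10 ** min(pop, key=lambda g: val_error(10 ** g))
print(best_lam, val_error(best_lam))
```

    The paper optimizes several SVR parameters jointly in the same spirit: the fitness of a candidate parameter vector is its out-of-sample prediction error.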

  4. Facial Performance Transfer via Deformable Models and Parametric Correspondence.

    Science.gov (United States)

    Asthana, Akshay; de la Hunty, Miles; Dhall, Abhinav; Goecke, Roland

    2012-09-01

    The issue of transferring facial performance from one person's face to another's has been an area of interest for the movie industry and the computer graphics community for quite some time. In recent years, deformable face models, such as the Active Appearance Model (AAM), have made it possible to track and synthesize faces in real time. Not surprisingly, deformable face model-based approaches for facial performance transfer have gained tremendous interest in the computer vision and graphics community. In this paper, we focus on the problem of real-time facial performance transfer using the AAM framework. We propose a novel approach of learning the mapping between the parameters of two completely independent AAMs, using them to facilitate the facial performance transfer in a more realistic manner than previous approaches. The main advantage of modeling this parametric correspondence is that it allows a "meaningful" transfer of both the nonrigid shape and texture across faces irrespective of the speakers' gender, shape, and size of the faces, and illumination conditions. We explore linear and nonlinear methods for modeling the parametric correspondence between the AAMs and show that the sparse linear regression method performs the best. Moreover, we show the utility of the proposed framework for a cross-language facial performance transfer that is an area of interest for the movie dubbing industry.
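
    The linear variant of the parametric correspondence can be sketched as a multivariate least-squares problem: given frame-by-frame pairs of source and target model parameters, solve for the matrix mapping one parameter space to the other. The dimensions and data below are hypothetical, and the sketch stands in for the AAM machinery itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paired training data: parameter vectors of speaker A (8-D)
# and the corresponding vectors of speaker B (6-D), frame by frame.
n_frames, dim_a, dim_b = 200, 8, 6
P_a = rng.normal(size=(n_frames, dim_a))
W_true = rng.normal(size=(dim_a, dim_b))     # unknown "ground-truth" map
P_b = P_a @ W_true + 0.01 * rng.normal(size=(n_frames, dim_b))

# Linear parametric correspondence: solve P_a @ W ~= P_b in the
# least-squares sense, one column of W per target parameter.
W, *_ = np.linalg.lstsq(P_a, P_b, rcond=None)

# Transfer: map a new source performance frame onto the target model,
# which the target AAM would then render as shape and texture.
new_frame = rng.normal(size=(1, dim_a))
transferred = new_frame @ W
print(np.abs(W - W_true).max())
```

    The paper finds that a sparse version of exactly this linear regression outperforms the nonlinear alternatives it tests.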

  5. Real-time individualization of the unified model of performance.

    Science.gov (United States)

    Liu, Jianbo; Ramakrishnan, Sridhar; Laxminarayan, Srinivas; Balkin, Thomas J; Reifman, Jaques

    2017-12-01

    Existing mathematical models for predicting neurobehavioural performance are not suited for mobile computing platforms because they cannot adapt model parameters automatically in real time to reflect individual differences in the effects of sleep loss. We used an extended Kalman filter to develop a computationally efficient algorithm that continually adapts the parameters of the recently developed Unified Model of Performance (UMP) to an individual. The algorithm accomplishes this in real time as new performance data for the individual become available. We assessed the algorithm's performance by simulating real-time model individualization for 18 subjects subjected to 64 h of total sleep deprivation (TSD) and 7 days of chronic sleep restriction (CSR) with 3 h of time in bed per night, using psychomotor vigilance task (PVT) data collected every 2 h during wakefulness. This UMP individualization process produced parameter estimates that progressively approached the solution produced by a post-hoc fitting of model parameters using all data. The minimum number of PVT measurements needed to individualize the model parameters depended upon the type of sleep-loss challenge, with ~30 required for TSD and ~70 for CSR. However, model individualization depended upon the overall duration of data collection, yielding increasingly accurate model parameters with greater number of days. Interestingly, reducing the PVT sampling frequency by a factor of two did not notably hamper model individualization. The proposed algorithm facilitates real-time learning of an individual's trait-like responses to sleep loss and enables the development of individualized performance prediction models for use in a mobile computing platform. © 2017 European Sleep Research Society.
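
    The UMP and its extended Kalman filter are not reproduced here, but the recursive-individualization idea can be sketched with a scalar Kalman filter that adapts one hypothetical trait parameter (a linear sensitivity of PVT lapses to time awake) as each new measurement arrives.

```python
import random

random.seed(1)

# Hypothetical observation model: measured lapses = theta * hours_awake + noise.
true_theta = 0.4
theta_hat, p = 0.0, 1.0        # initial estimate and its variance
q, r = 1e-4, 4.0               # process and measurement noise variances

for k in range(1, 49):          # one PVT every 2 h of simulated wakefulness
    hours_awake = 2.0 * k
    lapses = true_theta * hours_awake + random.gauss(0, 2.0)
    # Predict (random-walk parameter), then update with the new PVT point
    p += q
    h = hours_awake                        # observation Jacobian
    gain = p * h / (h * h * p + r)
    theta_hat += gain * (lapses - h * theta_hat)
    p *= (1.0 - gain * h)
print(round(theta_hat, 3))
```

    The real algorithm adapts several UMP parameters jointly through a nonlinear observation model, which is why an extended (linearized) Kalman filter is needed; the recursive predict/update structure is the same.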

  6. Modelling and measurement of a moving magnet linear compressor performance

    International Nuclear Information System (INIS)

    Liang, Kun; Stone, Richard; Davies, Gareth; Dadd, Mike; Bailey, Paul

    2014-01-01

    A novel moving magnet linear compressor with clearance seals and flexure bearings has been designed and constructed. It is suitable for a refrigeration system with a compact heat exchanger, such as would be needed for CPU cooling. The performance of the compressor has been experimentally evaluated with nitrogen and a mathematical model has been developed to evaluate the performance of the linear compressor. The results from the compressor model and the measurements have been compared in terms of cylinder pressure, the ‘P–V’ loop, stroke, mass flow rate and shaft power. The cylinder pressure was not measured directly but was derived from the compressor dynamics and the motor magnetic force characteristics. The comparisons indicate that the compressor model is well validated and can be used to study the performance of this type of compressor, to help with design optimization and the identification of key parameters affecting the system transients. The electrical and thermodynamic losses were also investigated, particularly for the design point (stroke of 13 mm and pressure ratio of 3.0), since a full understanding of these can lead to an increase in compressor efficiency. - Highlights: • Model predictions of the performance of a novel moving magnet linear compressor. • Prototype linear compressor performance measurements using nitrogen. • Reconstruction of P–V loops using a model of the dynamics and electromagnetics. • Close agreement between the model and measurements for the P–V loops. • The design point motor efficiency was 74%, with potential improvements identified

  7. Charge-coupled-device X-ray detector performance model

    Science.gov (United States)

    Bautz, M. W.; Berman, G. E.; Doty, J. P.; Ricker, G. R.

    1987-01-01

    A model that predicts the performance characteristics of CCD detectors being developed for use in X-ray imaging is presented. The model accounts for the interactions of both X-rays and charged particles with the CCD and simulates the transport and loss of charge in the detector. Predicted performance parameters include detective and net quantum efficiencies, split-event probability, and a parameter characterizing the effective thickness presented by the detector to cosmic-ray protons. The predicted performance of two CCDs of different epitaxial layer thicknesses is compared. The model predicts that in each device incomplete recovery of the charge liberated by a photon of energy between 0.1 and 10 keV is very likely to be accompanied by charge splitting between adjacent pixels. The implications of the model predictions for CCD data processing algorithms are briefly discussed.

  8. Causal Analysis for Performance Modeling of Computer Programs

    Directory of Open Access Journals (Sweden)

    Jan Lemeire

    2007-01-01

    Full Text Available Causal modeling and the accompanying learning algorithms provide useful extensions for in-depth statistical investigation and automation of performance modeling. We enlarged the scope of existing causal structure learning algorithms by using the form-free information-theoretic concept of mutual information and by introducing the complexity criterion for selecting direct relations among equivalent relations. The underlying probability distribution of experimental data is estimated by kernel density estimation. We then reported on the benefits of a dependency analysis and the decompositional capacities of causal models. Useful qualitative models, providing insight into the role of every performance factor, were inferred from experimental data. This paper reports on the results for a LU decomposition algorithm and on the study of the parameter sensitivity of the Kakadu implementation of the JPEG-2000 standard. Next, the analysis was used to search for generic performance characteristics of the applications.
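
    The form-free dependency measure at the core of such structure learning is mutual information. A plug-in (histogram) estimator for discrete data can be sketched as follows, with two invented performance factors, one informative and one irrelevant:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in bits from paired samples,
    using empirical (histogram) probabilities."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * math.log2(p_joint * n * n / (px[x] * py[y]))
    return mi

# A performance factor that fully determines the runtime class -> 1 bit
factor = [0, 0, 1, 1] * 25
runtime = ["fast" if f == 0 else "slow" for f in factor]
print(mutual_information(factor, runtime))    # prints 1.0

# An independent factor -> zero mutual information
irrelevant = [0, 1] * 50
print(mutual_information(irrelevant, runtime))    # prints 0.0
```

    The paper uses kernel density estimation to extend this idea to continuous performance factors, where histograms would be too coarse.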

  9. Radiation environmental impact assessment of copper exploitation

    International Nuclear Information System (INIS)

    Fan Guang; Wen Zhijian

    2010-01-01

    The radiation environmental impact of mineral exploitation on the surrounding environment has become a public concern. This paper presents the radiation environmental impact assessment of copper exploitation. Based on the project description and detailed investigations of surrounding environment, systematic radiation environmental impacts have been identified. The environmental impacts are assessed during both construction and operation phase. The environmental protection measures have also been proposed. The related conclusion and measures can play an active role in copper exploitation and environmental protection. (authors)

  10. Some considerations for validation of repository performance assessment models

    International Nuclear Information System (INIS)

    Eisenberg, N.

    1991-01-01

    Validation is an important aspect of the regulatory uses of performance assessment. A substantial body of literature exists indicating the manner in which validation of models is usually pursued. Because performance models for a nuclear waste repository cannot be tested over the long time periods for which the model must make predictions, the usual avenue for model validation is precluded. Further impediments to model validation include a lack of fundamental scientific theory to describe important aspects of repository performance and an inability to easily deduce the complex, intricate structures characteristic of a natural system. A successful strategy for validation must attempt to resolve these difficulties in a direct fashion. Although some procedural aspects will be important, the main reliance of validation should be on scientific substance and logical rigor. The level of validation needed will be mandated, in part, by the uses to which these models are put, rather than by the ideal of validation of a scientific theory. Because of the importance of the validation of performance assessment models, the NRC staff has engaged in a program of research and international cooperation to seek progress in this important area. 2 figs., 16 refs

  11. Evaluating Flight Crew Performance by a Bayesian Network Model

    Directory of Open Access Journals (Sweden)

    Wei Chen

    2018-03-01

    Full Text Available Flight crew performance is of great significance in keeping flights safe and sound. When evaluating crew performance, quantitative detailed behavior information may not be available. The present paper introduces the Bayesian Network to perform flight crew performance evaluation, which permits the utilization of multidisciplinary sources of objective and subjective information, despite sparse behavioral data. In this paper, the causal factors are selected based on the analysis of 484 aviation accidents caused by human factors. Then, a network termed the Flight Crew Performance Model is constructed. The Delphi technique helps to gather subjective data as a supplement to objective data from accident reports. The conditional probabilities are elicited by the leaky noisy MAX model. Two ways of inference for the BN, probability prediction and probabilistic diagnosis, are used, and some interesting conclusions are drawn, which could provide data support for interventions in human error management in aviation safety.
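
    The leaky noisy-MAX reduces elicitation to one link probability per cause plus a leak term, instead of a full conditional probability table. Its binary special case, the leaky noisy-OR, can be sketched as follows; the causes and all probabilities are invented, not those elicited in the paper.

```python
def leaky_noisy_or(leak, link_probs, active):
    """P(effect | active causes) under the leaky noisy-OR model, the
    binary special case of the leaky noisy-MAX: each active cause
    independently fails to produce the effect with prob. 1 - link."""
    p_no_effect = 1.0 - leak
    for p, on in zip(link_probs, active):
        if on:
            p_no_effect *= (1.0 - p)
    return 1.0 - p_no_effect

# Hypothetical causes of degraded crew performance: fatigue, poor CRM,
# adverse weather, with elicited link probabilities and a 5% leak.
links = [0.6, 0.4, 0.3]
print(leaky_noisy_or(0.05, links, [0, 0, 0]))   # leak only, ~0.05
print(leaky_noisy_or(0.05, links, [1, 0, 1]))   # fatigue + weather
```

    With n binary causes this needs n + 1 elicited numbers per node instead of the 2^n entries of a full table, which is what makes Delphi-style expert elicitation practical.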

  12. Performance analysis of NOAA tropospheric signal delay model

    International Nuclear Information System (INIS)

    Ibrahim, Hassan E; El-Rabbany, Ahmed

    2011-01-01

    Tropospheric delay is one of the dominant global positioning system (GPS) errors, which degrades the positioning accuracy. Recent development in tropospheric modeling relies on implementation of more accurate numerical weather prediction (NWP) models. In North America one of the NWP-based tropospheric correction models is the NOAA Tropospheric Signal Delay Model (NOAATrop), which was developed by the US National Oceanic and Atmospheric Administration (NOAA). Because of its potential to improve the GPS positioning accuracy, the NOAATrop model became the focus of many researchers. In this paper, we analyzed the performance of the NOAATrop model and examined its effect on ionosphere-free-based precise point positioning (PPP) solution. We generated 3 year long tropospheric zenith total delay (ZTD) data series for the NOAATrop model, Hopfield model, and the International GNSS Services (IGS) final tropospheric correction product, respectively. These data sets were generated at ten IGS reference stations spanning Canada and the United States. We analyzed the NOAATrop ZTD data series and compared them with those of the Hopfield model. The IGS final tropospheric product was used as a reference. The analysis shows that the performance of the NOAATrop model is a function of both season (time of the year) and geographical location. However, its performance was superior to the Hopfield model in all cases. We further investigated the effect of implementing the NOAATrop model on the ionosphere-free-based PPP solution convergence and accuracy. It is shown that the use of the NOAATrop model improved the PPP solution convergence by 1%, 10% and 15% for the latitude, longitude and height components, respectively

  13. Human performance modeling for system of systems analytics: combat performance-shaping factors.

    Energy Technology Data Exchange (ETDEWEB)

    Lawton, Craig R.; Miller, Dwight Peter

    2006-01-01

    The US military has identified Human Performance Modeling (HPM) as a significant requirement and challenge of future systems modeling and analysis initiatives. To support this goal, Sandia National Laboratories (SNL) has undertaken a program of HPM as an integral augmentation to its system-of-systems (SoS) analytics capabilities. The previous effort, reported in SAND2005-6569, evaluated the effects of soldier cognitive fatigue on SoS performance. The current effort began with a very broad survey of performance-shaping factors (PSFs) that might affect soldiers' performance in combat situations. The work included consideration of three different approaches to cognition modeling and how appropriate they would be for application to SoS analytics. The bulk of this report categorizes 47 PSFs into three groups (internal, external, and task-related) and provides brief descriptions of how each affects combat performance, according to the literature. The PSFs were then assembled into a matrix with 22 representative military tasks and assigned one of four levels of estimated negative impact on task performance, based on the literature. Blank versions of the matrix were then sent to two ex-military subject-matter experts to be filled out based on their personal experiences. Data analysis was performed to identify the consensus most influential PSFs. Results indicate that combat-related injury, cognitive fatigue, inadequate training, physical fatigue, thirst, stress, poor perceptual processing, and presence of chemical agents are among the PSFs with the most negative impact on combat performance.

  14. Modeling and prediction of flotation performance using support vector regression

    Directory of Open Access Journals (Sweden)

    Despotović Vladimir

    2017-01-01

    Full Text Available Continuous efforts have been made in recent years to improve the process of paper recycling, as it is of critical importance for saving wood, water and energy resources. Flotation deinking is considered to be one of the key methods for separating ink particles from cellulose fibres. Attempts to model the flotation deinking process have often resulted in complex models that are difficult to implement and use. In this paper a model for prediction of flotation performance based on Support Vector Regression (SVR) is presented. Representative data samples were created in the laboratory under a variety of practical control variables for the flotation deinking process, including different reagents, pH values and flotation residence times. A predictive model trained on these data samples was created, and the flotation performance was assessed, showing that Support Vector Regression is a promising method even when the dataset used for training the model is limited.

  15. Global climate model performance over Alaska and Greenland

    DEFF Research Database (Denmark)

    Walsh, John E.; Chapman, William L.; Romanovsky, Vladimir

    2008-01-01

    The performance of a set of 15 global climate models used in the Coupled Model Intercomparison Project is evaluated for Alaska and Greenland, and compared with the performance over broader pan-Arctic and Northern Hemisphere extratropical domains. Root-mean-square errors relative to the 1958...... to narrowing the uncertainty and obtaining more robust estimates of future climate change in regions such as Alaska, Greenland, and the broader Arctic....... of the models are generally much larger than the biases of the composite output, indicating that the systematic errors differ considerably among the models. There is a tendency for the models with smaller errors to simulate a larger greenhouse warming over the Arctic, as well as larger increases of Arctic...

  16. Comparison of Simple Versus Performance-Based Fall Prediction Models

    Directory of Open Access Journals (Sweden)

    Shekhar K. Gadkaree BS

    2015-05-01

    Full Text Available Objective: To compare the predictive ability of standard falls prediction models based on physical performance assessments with more parsimonious prediction models based on self-reported data. Design: We developed a series of fall prediction models progressing in complexity and compared the area under the receiver operating characteristic curve (AUC) across models. Setting: National Health and Aging Trends Study (NHATS), which surveyed a nationally representative sample of Medicare enrollees (age ≥65) at baseline (Round 1: 2011-2012) and 1-year follow-up (Round 2: 2012-2013). Participants: In all, 6,056 community-dwelling individuals participated in Rounds 1 and 2 of NHATS. Measurements: Primary outcomes were 1-year incidence of “any fall” and “recurrent falls.” Prediction models were compared and validated in development and validation sets, respectively. Results: A prediction model that included demographic information, self-reported problems with balance and coordination, and previous fall history was the most parsimonious model that optimized AUC for both any fall (AUC = 0.69, 95% confidence interval [CI] = [0.67, 0.71]) and recurrent falls (AUC = 0.77, 95% CI = [0.74, 0.79]) in the development set. Physical performance testing provided marginal additional predictive value. Conclusion: A simple clinical prediction model that does not include physical performance testing could facilitate routine, widespread falls risk screening in the ambulatory care setting.
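
    The AUC used to compare these models equals the probability that a randomly chosen faller receives a higher risk score than a randomly chosen non-faller. A minimal computation in the Mann-Whitney form (not the study's software), on invented scores:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a random positive case is scored above a random
    negative case, with ties counting half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical risk scores from a parsimonious self-report model
# (e.g. age decile + balance problems + prior-fall history)
scores = [3, 5, 2, 6, 4, 1, 5, 2]
fell   = [0, 1, 0, 1, 0, 0, 1, 1]
print(auc(scores, fell))   # 13.5 / 16 = 0.84375
```

    An AUC of 0.5 is chance-level discrimination; the study's values of 0.69 and 0.77 sit between chance and perfect separation.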

  17. Human performance modeling for system of systems analytics.

    Energy Technology Data Exchange (ETDEWEB)

    Dixon, Kevin R.; Lawton, Craig R.; Basilico, Justin Derrick; Longsine, Dennis E. (INTERA, Inc., Austin, TX); Forsythe, James Chris; Gauthier, John Henry; Le, Hai D.

    2008-10-01

    A Laboratory-Directed Research and Development project was initiated in 2005 to investigate Human Performance Modeling in a System of Systems analytic environment. SAND2006-6569 and SAND2006-7911 document interim results from this effort; this report documents the final results. The problem is difficult because of the number of humans involved in a System of Systems environment and the generally poorly defined nature of the tasks that each human must perform. A two-pronged strategy was followed: one prong was to develop human models using a probability-based method similar to that first developed for relatively well-understood probability-based performance modeling; the other prong was to investigate more state-of-the-art human cognition models. The probability-based modeling resulted in a comprehensive addition of human-modeling capability to the existing SoSAT computer program. The cognitive modeling resulted in an increased understanding of what is necessary to incorporate cognition-based models into a System of Systems analytic environment.

  18. Investigation into the performance of different models for predicting stutter.

    Science.gov (United States)

    Bright, Jo-Anne; Curran, James M; Buckleton, John S

    2013-07-01

    In this paper we have examined five possible models for the behaviour of the stutter ratio, SR: two log-normal models, two gamma models, and a two-component normal mixture model. The two-component normal mixture model was chosen with different behaviours of variance; at each locus SR was described with two distributions, both with the same mean but different variances: one for the majority of the observations and a second for the less well-behaved ones. We apply each model to a set of known single-source Identifiler™, NGM SElect™ and PowerPlex(®) 21 DNA profiles to show the applicability of our findings to different data sets. SR values determined from the single-source profiles were compared to the calculated SR after application of the models. Model performance was tested by calculating the log-likelihoods and comparing the differences in Akaike information criterion (AIC). The two-component normal mixture model systematically outperformed all others, despite the increase in the number of parameters. This model, as well as performing well statistically, has intuitive appeal for forensic biologists and could be implemented in an expert system with a continuous method for DNA interpretation. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
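
    The model comparison can be sketched on synthetic data: evaluate the log-likelihood of a single normal against a two-component mixture sharing one mean but carrying two variances, then compare AIC = 2k - 2 log L. All distributional values below are invented, and for brevity the mixture is evaluated at its generating parameters rather than fitted.

```python
import math
import random

random.seed(7)

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def aic(log_lik, k):
    return 2 * k - 2 * log_lik

# Synthetic stutter ratios: 90% well-behaved, 10% with inflated variance,
# both centred on the same mean (mirroring the selected model's structure)
data = [random.gauss(0.08, 0.01) if random.random() < 0.9
        else random.gauss(0.08, 0.04) for _ in range(500)]

# Model 1: single normal, k = 2 parameters (mean, sd), fitted by MLE
mu = sum(data) / len(data)
sd = math.sqrt(sum((x - mu) ** 2 for x in data) / len(data))
ll1 = sum(math.log(normal_pdf(x, mu, sd)) for x in data)

# Model 2: two-component mixture with a shared mean and two sds,
# k = 4 parameters (mean, sd1, sd2, weight)
ll2 = sum(math.log(0.9 * normal_pdf(x, 0.08, 0.01)
                   + 0.1 * normal_pdf(x, 0.08, 0.04)) for x in data)

print(aic(ll1, 2), aic(ll2, 4))   # lower AIC wins
```

    As in the paper, the mixture's better fit outweighs its AIC penalty for the two extra parameters; fitting it properly would require EM rather than the shortcut taken here.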

  19. Bayesian calibration of power plant models for accurate performance prediction

    International Nuclear Information System (INIS)

    Boksteen, Sowande Z.; Buijtenen, Jos P. van; Pecnik, Rene; Vecht, Dick van der

    2014-01-01

    Highlights: • Bayesian calibration is applied to power plant performance prediction. • Measurements from a plant in operation are used for model calibration. • A gas turbine performance model and steam cycle model are calibrated. • An integrated plant model is derived. • Part load efficiency is accurately predicted as a function of ambient conditions. - Abstract: Gas turbine combined cycles are expected to play an increasingly important role in the balancing of supply and demand in future energy markets. Thermodynamic modeling of these energy systems is frequently applied to assist in decision making processes related to the management of plant operation and maintenance. In most cases, model inputs, parameters and outputs are treated as deterministic quantities and plant operators make decisions with limited or no regard of uncertainties. As the steady integration of wind and solar energy into the energy market induces extra uncertainties, part load operation and reliability are becoming increasingly important. In the current study, methods are proposed to not only quantify various types of uncertainties in measurements and plant model parameters using measured data, but to also assess their effect on various aspects of performance prediction. The authors aim to account for model parameter and measurement uncertainty, and for systematic discrepancy of models with respect to reality. For this purpose, the Bayesian calibration framework of Kennedy and O’Hagan is used, which is especially suitable for high-dimensional industrial problems. The article derives a calibrated model of the plant efficiency as a function of ambient conditions and operational parameters, which is also accurate in part load. The article shows that complete statistical modeling of power plants not only enhances process models, but can also increases confidence in operational decisions

  20. THE USE OF NEURAL NETWORK TECHNOLOGY TO MODEL SWIMMING PERFORMANCE

    Directory of Open Access Journals (Sweden)

    António José Silva

    2007-03-01

    Full Text Available The aims of the present study were: to identify the factors which are able to explain the performance in the 200 meters individual medley and 400 meters front crawl events in young swimmers, to model the performance in those events using non-linear mathematical methods through artificial neural networks (multi-layer perceptrons) and to assess the neural network models' precision to predict the performance. A sample of 138 young swimmers (65 males and 73 females) of national level was submitted to a test battery comprising four different domains: kinanthropometric evaluation, dry land functional evaluation (strength and flexibility), swimming functional evaluation (hydrodynamic, hydrostatic and bioenergetic characteristics) and swimming technique evaluation. To establish a profile of the young swimmer, non-linear combinations between preponderant variables for each gender and swim performance in the 200 meters medley and 400 meters front crawl events were developed. For this purpose a feed-forward neural network (multilayer perceptron) with three neurons in a single hidden layer was used. The prognosis precision of the model (error lower than 0.8% between true and estimated performances) is supported by recent evidence. Therefore, we consider that the neural network tool can be a good approach in the resolution of complex problems such as performance modeling and talent identification in swimming and, possibly, in a wide variety of sports.
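
The network topology described above (one hidden layer of three neurons) can be sketched as a plain numpy forward pass; the weights below are random placeholders rather than the study's trained values, and the feature count is invented:

```python
import numpy as np

# Sketch of the topology described in the record: a feed-forward multilayer
# perceptron with a single hidden layer of three neurons mapping test-battery
# variables to a predicted event time. Weights are random placeholders; the
# study trained them on data from 138 swimmers.
rng = np.random.default_rng(42)

n_features = 6                           # number of input measures (illustrative)
W1 = rng.normal(size=(n_features, 3))    # input -> hidden (3 neurons)
b1 = np.zeros(3)
W2 = rng.normal(size=(3, 1))             # hidden -> output (predicted time)
b2 = np.zeros(1)

def predict(x: np.ndarray) -> float:
    """Forward pass: one hidden layer with tanh activation, linear output."""
    h = np.tanh(x @ W1 + b1)
    return float(h @ W2 + b2)

x = rng.normal(size=n_features)          # one (standardized) swimmer profile
print(predict(x))
```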

  1. Thermal exploitation of shallow aquifers. Guide for the preparation of preliminary studies of technical feasibility

    International Nuclear Information System (INIS)

    Ausseur, J.Y.; Sauty, J.P.

    1982-08-01

    This report presents the main devices aimed at exploiting surface aquifers. After an introduction to the different systems of thermal exploitation of aquifers (generalities, very low energy geothermal, sensible heat storage, interest of thermal exploitation of aquifers, indication of possible systems), this report presents the different possible systems and analyses their characteristics and performance. These systems are: direct exploitation of groundwater bodies at their natural temperature by heat sink and with release in surface networks or by geothermal dipole, or exploitation with artificial thermal refill. Thus the report addresses the single sink device with or without storage, heat pumps on dipole in surface groundwater bodies or very low temperature geothermal, the scanning dipole system, and the dipole system with hot sink and cold sink. It discusses the choice and sizing of the exploitation system. An appendix reports a preliminary feasibility study of nine cases of thermal exploitation of surface aquifers using borehole doublets

  2. Human performance modeling for system of systems analytics :soldier fatigue.

    Energy Technology Data Exchange (ETDEWEB)

    Lawton, Craig R.; Campbell, James E.; Miller, Dwight Peter

    2005-10-01

    The military has identified Human Performance Modeling (HPM) as a significant requirement and challenge of future systems modeling and analysis initiatives as can be seen in the Department of Defense's (DoD) Defense Modeling and Simulation Office's (DMSO) Master Plan (DoD 5000.59-P 1995). To this goal, the military is currently spending millions of dollars on programs devoted to HPM in various military contexts. Examples include the Human Performance Modeling Integration (HPMI) program within the Air Force Research Laboratory, which focuses on integrating HPMs with constructive models of systems (e.g. cockpit simulations) and the Navy's Human Performance Center (HPC) established in September 2003. Nearly all of these initiatives focus on the interface between humans and a single system. This is insufficient in the era of highly complex network centric SoS. This report presents research and development in the area of HPM in a system-of-systems (SoS). Specifically, this report addresses modeling soldier fatigue and the potential impacts soldier fatigue can have on SoS performance.

  3. Performance modeling of parallel algorithms for solving neutron diffusion problems

    International Nuclear Information System (INIS)

    Azmy, Y.Y.; Kirk, B.L.

    1995-01-01

    Neutron diffusion calculations are the most common computational methods used in the design, analysis, and operation of nuclear reactors and related activities. Here, mathematical performance models are developed for the parallel algorithm used to solve the neutron diffusion equation on message passing and shared memory multiprocessors represented by the Intel iPSC/860 and the Sequent Balance 8000, respectively. The performance models are validated through several test problems, and these models are used to estimate the performance of each of the two considered architectures in situations typical of practical applications, such as fine meshes and a large number of participating processors. While message passing computers are capable of producing speedup, the parallel efficiency deteriorates rapidly as the number of processors increases. Furthermore, the speedup fails to improve appreciably for massively parallel computers so that only small- to medium-sized message passing multiprocessors offer a reasonable platform for this algorithm. In contrast, the performance model for the shared memory architecture predicts very high efficiency over a wide range of number of processors reasonable for this architecture. Furthermore, the model efficiency of the Sequent remains superior to that of the hypercube if its model parameters are adjusted to make its processors as fast as those of the iPSC/860. It is concluded that shared memory computers are better suited for this parallel algorithm than message passing computers
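
A generic fixed-problem-size performance model (hypothetical constants, not the paper's fitted iPSC/860 parameters) reproduces the qualitative conclusion that message-passing efficiency deteriorates as processors are added:

```python
# Generic fixed-size performance model illustrating the conclusion above:
# compute time shrinks as 1/p while per-processor communication cost grows,
# so parallel efficiency decays with processor count. Constants are
# hypothetical, chosen only to show the shape of the curves.
def run_time(p: int, t_comp: float = 100.0, t_msg: float = 0.5) -> float:
    return t_comp / p + t_msg * p        # compute term + communication term

def speedup(p: int) -> float:
    return run_time(1) / run_time(p)

def efficiency(p: int) -> float:
    return speedup(p) / p

for p in (1, 4, 16, 64):
    print(p, round(speedup(p), 2), round(efficiency(p), 3))
```

With these constants, speedup peaks at a moderate processor count and then falls, matching the report's observation that only small- to medium-sized message-passing machines suit the algorithm.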

  4. Performance Analysis, Modeling and Scaling of HPC Applications and Tools

    Energy Technology Data Exchange (ETDEWEB)

    Bhatele, Abhinav [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-13

    Efficient use of supercomputers at DOE centers is vital for maximizing system throughput, minimizing energy costs and enabling science breakthroughs faster. This requires complementary efforts along several directions to optimize the performance of scientific simulation codes and the underlying runtimes and software stacks. This in turn requires providing scalable performance analysis tools and modeling techniques that can provide feedback to physicists and computer scientists developing the simulation codes and runtimes respectively. The PAMS project is using time allocations on supercomputers at ALCF, NERSC and OLCF to further the goals described above by performing research along the following fronts: 1. Scaling Study of HPC applications; 2. Evaluation of Programming Models; 3. Hardening of Performance Tools; 4. Performance Modeling of Irregular Codes; and 5. Statistical Analysis of Historical Performance Data. We are a team of computer and computational scientists funded by both DOE/NNSA and DOE/ASCR programs such as ECRP, XStack (Traleika Glacier, PIPER), ExaOSR (ARGO), SDMAV II (MONA) and PSAAP II (XPACC). This allocation will enable us to study big data issues when analyzing performance on leadership computing class systems and to assist the HPC community in making the most effective use of these resources.

  5. Efficient Depth Map Compression Exploiting Segmented Color Data

    DEFF Research Database (Denmark)

    Milani, Simone; Zanuttigh, Pietro; Zamarin, Marco

    2011-01-01

    performances is still an open research issue. This paper presents a novel compression scheme that exploits a segmentation of the color data to predict the shape of the different surfaces in the depth map. Then each segment is approximated with a parameterized plane. In case the approximation is sufficiently...

  6. Proficient brain for optimal performance: the MAP model perspective.

    Science.gov (United States)

    Bertollo, Maurizio; di Fronso, Selenia; Filho, Edson; Conforto, Silvia; Schmid, Maurizio; Bortoli, Laura; Comani, Silvia; Robazza, Claudio

    2016-01-01

    Background. The main goal of the present study was to explore theta and alpha event-related desynchronization/synchronization (ERD/ERS) activity during shooting performance. We adopted the idiosyncratic framework of the multi-action plan (MAP) model to investigate different processing modes underpinning four types of performance. In particular, we were interested in examining the neural activity associated with optimal-automated (Type 1) and optimal-controlled (Type 2) performances. Methods. Ten elite shooters (6 male and 4 female) with extensive international experience participated in the study. ERD/ERS analysis was used to investigate cortical dynamics during performance. A 4 × 3 (performance types × time) repeated measures analysis of variance was performed to test the differences among the four types of performance during the three seconds preceding the shots for theta, low alpha, and high alpha frequency bands. The dependent variables were the ERD/ERS percentages in each frequency band (i.e., theta, low alpha, high alpha) for each electrode site across the scalp. This analysis was conducted on 120 shots for each participant in three different frequency bands and the individual data were then averaged. Results. We found ERS to be mainly associated with optimal-automatic performance, in agreement with the "neural efficiency hypothesis." We also observed more ERD as related to optimal-controlled performance in conditions of "neural adaptability" and proficient use of cortical resources. Discussion. These findings are congruent with the MAP conceptualization of four performance states, in which unique psychophysiological states underlie distinct performance-related experiences. From an applied point of view, our findings suggest that the MAP model can be used as a framework to develop performance enhancement strategies based on cognitive and neurofeedback techniques.
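
ERD/ERS percentages of the kind analyzed above are conventionally computed as the relative band-power change from a pre-event reference interval (conventions vary; this is one common form):

```python
# ERD/ERS quantification in the classic sense (conventions vary; this is one
# common form): percentage band-power change of the activity interval relative
# to a pre-event reference interval. Negative values indicate event-related
# desynchronization (ERD), positive values synchronization (ERS).
def erd_ers_percent(ref_power: float, act_power: float) -> float:
    return (act_power - ref_power) / ref_power * 100.0

print(round(erd_ers_percent(10.0, 7.0), 1))    # -30.0 -> ERD
print(round(erd_ers_percent(10.0, 13.0), 1))   #  30.0 -> ERS
```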

  7. Proficient brain for optimal performance: the MAP model perspective

    Directory of Open Access Journals (Sweden)

    Maurizio Bertollo

    2016-05-01

    Full Text Available Background. The main goal of the present study was to explore theta and alpha event-related desynchronization/synchronization (ERD/ERS) activity during shooting performance. We adopted the idiosyncratic framework of the multi-action plan (MAP) model to investigate different processing modes underpinning four types of performance. In particular, we were interested in examining the neural activity associated with optimal-automated (Type 1) and optimal-controlled (Type 2) performances. Methods. Ten elite shooters (6 male and 4 female) with extensive international experience participated in the study. ERD/ERS analysis was used to investigate cortical dynamics during performance. A 4 × 3 (performance types × time) repeated measures analysis of variance was performed to test the differences among the four types of performance during the three seconds preceding the shots for theta, low alpha, and high alpha frequency bands. The dependent variables were the ERD/ERS percentages in each frequency band (i.e., theta, low alpha, high alpha) for each electrode site across the scalp. This analysis was conducted on 120 shots for each participant in three different frequency bands and the individual data were then averaged. Results. We found ERS to be mainly associated with optimal-automatic performance, in agreement with the “neural efficiency hypothesis.” We also observed more ERD as related to optimal-controlled performance in conditions of “neural adaptability” and proficient use of cortical resources. Discussion. These findings are congruent with the MAP conceptualization of four performance states, in which unique psychophysiological states underlie distinct performance-related experiences. From an applied point of view, our findings suggest that the MAP model can be used as a framework to develop performance enhancement strategies based on cognitive and neurofeedback techniques.

  8. The better model to predict and improve pediatric health care quality: performance or importance-performance?

    Science.gov (United States)

    Olsen, Rebecca M; Bryant, Carol A; McDermott, Robert J; Ortinau, David

    2013-01-01

    The perpetual search for ways to improve pediatric health care quality has resulted in a multitude of assessments and strategies; however, there is little research evidence as to their conditions for maximum effectiveness. A major reason for the lack of evaluation research and successful quality improvement initiatives is the methodological challenge of measuring quality from the parent perspective. Comparison of performance-only and importance-performance models was done to determine the better predictor of pediatric health care quality and more successful method for improving the quality of care provided to children. Fourteen pediatric health care centers serving approximately 250,000 patients in 70,000 households in three West Central Florida counties were studied. A cross-sectional design was used to determine the importance and performance of 50 pediatric health care attributes and four global assessments of pediatric health care quality. Exploratory factor analysis revealed five dimensions of care (physician care, access, customer service, timeliness of services, and health care facility). Hierarchical multiple regression compared the performance-only and the importance-performance models. In-depth interviews, participant observations, and a direct cognitive structural analysis identified 50 health care attributes included in a mailed survey to parents (n = 1,030). The tailored design method guided survey development and data collection. The importance-performance multiplicative additive model was a better predictor of pediatric health care quality. Attribute importance moderates performance and quality, making the importance-performance model superior for measuring and providing a deeper understanding of pediatric health care quality and a better method for improving the quality of care provided to children.
Regardless of attribute performance, if the level of attribute importance is not taken into consideration, health care organizations may spend valuable
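
The contrast between the two specifications can be sketched on synthetic data: quality regressed on attribute performance alone versus on importance-weighted performance (the data-generating process below is invented for illustration, not the study's survey data):

```python
import numpy as np

# Sketch of the two competing specifications: quality regressed on attribute
# performance alone vs. on importance x performance scores. Data are synthetic
# and deliberately generated from the multiplicative model.
rng = np.random.default_rng(1)
n = 200
importance = rng.uniform(1, 5, n)
performance = rng.uniform(1, 5, n)
quality = 0.8 * importance * performance + rng.normal(0, 1.0, n)

def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    """OLS R^2 with an intercept column prepended."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

r2_perf = r_squared(performance[:, None], quality)
r2_ip = r_squared((importance * performance)[:, None], quality)
print(f"performance-only R^2: {r2_perf:.3f}")
print(f"importance-performance R^2: {r2_ip:.3f}")
```

On this synthetic data the importance-performance predictor fits far better, mirroring the study's conclusion, though of course the synthetic setup builds that result in.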

  9. Four-Stroke, Internal Combustion Engine Performance Modeling

    Science.gov (United States)

    Wagner, Richard C.

    In this thesis, two models of four-stroke, internal combustion engines are created and compared. The first model predicts the intake and exhaust processes using isentropic flow equations augmented by discharge coefficients. The second model predicts the intake and exhaust processes using a compressible, time-accurate, Quasi-One-Dimensional (Q1D) approach. Both models employ the same heat release and reduced-order modeling of the cylinder charge. Both include friction and cylinder loss models so that the predicted performance values can be compared to measurements. The results indicate that the isentropic-based model neglects important fluid mechanics and returns inaccurate results. The Q1D flow model, combined with the reduced-order model of the cylinder charge, is able to capture the dominant intake and exhaust fluid mechanics and produces results that compare well with measurement. Fluid friction, convective heat transfer, piston ring and skirt friction and temperature-varying specific heats in the working fluids are all shown to be significant factors in engine performance predictions. Charge blowby is shown to play a lesser role.
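
The first model's quasi-steady treatment of intake and exhaust flow, isentropic orifice relations corrected by a discharge coefficient with choking at the critical pressure ratio, can be sketched as follows; the Cd value is a placeholder, not the thesis's calibrated coefficient:

```python
import math

# Quasi-steady isentropic orifice flow with a discharge coefficient Cd and
# choking at the critical pressure ratio. These are the standard
# compressible-flow relations; Cd here is a placeholder value.
def mass_flow(p0, T0, p_down, area, cd=0.7, gamma=1.4, R=287.0):
    """Mass flow [kg/s] from stagnation (p0 [Pa], T0 [K]) through area [m^2]."""
    pr_crit = (2.0 / (gamma + 1.0)) ** (gamma / (gamma - 1.0))
    pr = max(p_down / p0, pr_crit)      # clamp: flow chokes below pr_crit
    term = (2.0 * gamma / (gamma - 1.0)) * (
        pr ** (2.0 / gamma) - pr ** ((gamma + 1.0) / gamma)
    )
    return cd * area * p0 / math.sqrt(R * T0) * math.sqrt(term)

# Example: 1 bar stagnation into a 0.6 bar cylinder through a 5 cm^2 opening
print(mass_flow(1.0e5, 300.0, 0.6e5, 5.0e-4))
```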

  10. Performance verification tests of JT-60SA CS model coil

    Energy Technology Data Exchange (ETDEWEB)

    Obana, Tetsuhiro, E-mail: obana.tetsuhiro@LHD.nifs.ac.jp [National Institute for Fusion Science, 322-6 Oroshi, Toki, Gifu 509-5292 (Japan); Murakami, Haruyuki [Japan Atomic Energy Agency, 801-1 Mukoyama, Naka, Ibaraki 311-0193 (Japan); Takahata, Kazuya; Hamaguchi, Shinji; Chikaraishi, Hirotaka; Mito, Toshiyuki; Imagawa, Shinsaku [National Institute for Fusion Science, 322-6 Oroshi, Toki, Gifu 509-5292 (Japan); Kizu, Kaname; Natsume, Kyohei; Yoshida, Kiyoshi [Japan Atomic Energy Agency, 801-1 Mukoyama, Naka, Ibaraki 311-0193 (Japan)

    2015-11-15

    Highlights: • The performance of the JT-60SA CS model coil was verified. • The CS model coil comprised a quad-pancake wound with a Nb{sub 3}Sn CIC conductor. • The CS model coil met the design requirements. - Abstract: As a final check of the coil manufacturing method of the JT-60 Super Advanced (JT-60SA) central solenoid (CS), we verified the performance of a CS model coil. The model coil comprised a quad-pancake wound with a Nb{sub 3}Sn cable-in-conduit conductor. Measurements of the critical current, joint resistance, pressure drop, and magnetic field were conducted in the verification tests. In the critical-current measurement, the critical current of the model coil coincided with the estimation derived from a strain of −0.62% for the Nb{sub 3}Sn strands. As a result, critical-current degradation caused by the coil manufacturing process was not observed. The results of the performance verification tests indicate that the model coil met the design requirements. Consequently, the manufacturing process of the JT-60SA CS was established.

  11. CORPORATE FORESIGHT AND PERFORMANCE: A CHAIN-OF-EFFECTS MODEL

    DEFF Research Database (Denmark)

    Jissink, Tymen; Huizingh, Eelko K.R.E.; Rohrbeck, René

    2015-01-01

    In this paper we develop and validate a measurement scale for corporate foresight and examine its impact on performance in a chain-of-effects model. We conceptualize corporate foresight as an organizational ability consisting of five distinct dimensions: information scope, method usage, people......, formal organization, and culture. We investigate the relation of corporate foresight with three innovation performance dimensions – new product success, new product innovativeness, and financial performance. We use partial-least-squares structural equations modelling to assess our measurement models...... and test our research hypotheses. Using a cross-industry sample of 153 innovative firms, we find that corporate foresight can be validly and reliably measured by our measurement instrument. The results of the structural model support the hypothesized positive effects of corporate foresight on all...

  12. A Bibliometric Analysis and Review on Performance Modeling Literature

    Directory of Open Access Journals (Sweden)

    Barbara Livieri

    2015-04-01

    Full Text Available In management practice, performance indicators are considered a prerequisite to making informed decisions in line with the organization’s goals. On the other hand, indicators summarize compound phenomena in a few digits, which can lead to inadequate decisions, biased by information loss and conflicting values. Model-driven approaches in enterprise engineering can be very effective in avoiding these pitfalls, or in keeping them under control. For that reason, “performance modeling” has the potential to play a primary role in the “model-driven enterprise” scenario, together with process, information and other enterprise-related aspects. From this perspective, we propose a systematic review of the literature on performance modeling in order to retrieve, classify, and summarize existing research, identify the core authors, and define areas and opportunities for future research.

  13. Aircraft Anomaly Detection Using Performance Models Trained on Fleet Data

    Science.gov (United States)

    Gorinevsky, Dimitry; Matthews, Bryan L.; Martin, Rodney

    2012-01-01

    This paper describes an application of data mining technology called Distributed Fleet Monitoring (DFM) to Flight Operational Quality Assurance (FOQA) data collected from a fleet of commercial aircraft. DFM transforms the data into aircraft performance models, flight-to-flight trends, and individual flight anomalies by fitting a multi-level regression model to the data. The model represents aircraft flight performance and takes into account fixed effects: flight-to-flight and vehicle-to-vehicle variability. The regression parameters include aerodynamic coefficients and other aircraft performance parameters that are usually identified by aircraft manufacturers in flight tests. Using DFM, the multi-terabyte FOQA data set with half a million flights was processed in a few hours. The anomalies found include wrong values of computed variables (e.g., aircraft weight), sensor failures and biases, and trends in flight actuators. These anomalies were missed by the existing airline monitoring of FOQA data exceedances.

  14. Cassini Radar EQM Model: Instrument Description and Performance Status

    Science.gov (United States)

    Borgarelli, L.; Faustini, E. Zampolini; Im, E.; Johnson, W. T. K.

    1996-01-01

    The spacecraft of the Cassini Mission is planned to be launched towards Saturn in October 1997. The mission is designed to study the physical structure and chemical composition of Titan. The results of the tests performed on the Cassini radar engineering qualification model (EQM) are summarized. The approach followed in the verification and evaluation of the performance of the radio frequency subsystem EQM is presented. The results show that the instrument satisfies the relevant mission requirements.

  15. Modeling Operator Performance in Low Task Load Supervisory Domains

    Science.gov (United States)

    2011-06-01

    important to model the best and worst performers separately. It is easy to see that the best performers were better multitaskers and more directed... The population this research will influence is expected to contain men and women between the ages of 18 and 50 with an interest in using...

  16. Computational modelling of expressive music performance in hexaphonic guitar

    OpenAIRE

    Siquier, Marc

    2017-01-01

    Computational modelling of expressive music performance has been widely studied in the past. While previous work in this area has been mainly focused on classical piano music, there has been very little work on guitar music, and such work has focused on monophonic guitar playing. In this work, we present a machine learning approach to automatically generate expressive performances from non expressive music scores for polyphonic guitar. We treated guitar as an hexaphonic instrument, obtaining ...

  17. Delay model and performance testing for FPGA carry chain TDC

    International Nuclear Information System (INIS)

    Kang Xiaowen; Liu Yaqiang; Cui Junjian; Yang Zhangcan; Jin Yongjie

    2011-01-01

    Time-of-flight (TOF) information would improve the performance of PET (positron emission tomography), and TDC design is a key technique. This paper proposes a delay model for carry-chain TDCs. By varying the significant delay parameters of the model, the paper compares the resulting TDC performance, and finally realizes a time-to-digital converter (TDC) based on the carry-chain method using an FPGA EP2C20Q240C8N, achieving a 69 ps LSB with maximum error below 2 LSB. This result meets the TOF requirement. The paper also proposes a coaxial-cable measurement method for TDC testing that requires no high-precision test equipment. (authors)
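
The carry-chain principle reduces to counting how many delay cells a hit signal traverses before the clock edge and multiplying by the per-cell delay (the LSB); the sketch below idealizes the cell delays as uniform, whereas real chains need per-cell calibration:

```python
# Carry-chain TDC principle: a hit signal propagates along the FPGA carry
# chain and the registers capture a thermometer code; the number of cells
# passed, times the per-cell delay (the LSB, ~69 ps in this record), gives
# the fine time. Cell delays are idealized as uniform here.
LSB_PS = 69.0   # per-cell delay reported in the record

def fine_time_ps(thermometer: list[int]) -> float:
    """Convert a captured thermometer code (1s then 0s) to picoseconds."""
    taps = sum(thermometer)        # number of carry cells the edge traversed
    return taps * LSB_PS

code = [1] * 13 + [0] * 19         # edge reached 13 cells before the clock
print(fine_time_ps(code))           # 897.0 ps, quantization error < 1 LSB
```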

  18. A model to describe the performance of the UASB reactor.

    Science.gov (United States)

    Rodríguez-Gómez, Raúl; Renman, Gunno; Moreno, Luis; Liu, Longcheng

    2014-04-01

    A dynamic model to describe the performance of the Upflow Anaerobic Sludge Blanket (UASB) reactor was developed. It includes dispersion, advection, and reaction terms, as well as the resistances through which the substrate passes before its biotransformation. The UASB reactor is viewed as several continuous stirred tank reactors connected in series. The good agreement between experimental and simulated results shows that the model is able to predict the performance of the UASB reactor (i.e. substrate concentration, biomass concentration, granule size, and height of the sludge bed).
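
The tanks-in-series view of the UASB reactor can be sketched at steady state with first-order substrate removal in each CSTR (rate constant and flow values are illustrative, not the paper's calibrated parameters):

```python
# Tanks-in-series sketch of the UASB model: the reactor is treated as N CSTRs
# in series with first-order substrate removal in each. All parameter values
# are illustrative.
def uasb_outlet(s_in: float, n_tanks: int = 5, k: float = 0.8,
                volume: float = 2.0, flow: float = 1.0) -> float:
    """Steady-state outlet substrate concentration after N equal CSTRs."""
    tau = volume / flow / n_tanks          # residence time per tank
    s = s_in
    for _ in range(n_tanks):
        s = s / (1.0 + k * tau)            # CSTR balance: s_out = s_in/(1+k*tau)
    return s

print(round(uasb_outlet(500.0), 1))        # outlet concentration, e.g. mg/L
```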

  19. Packaging of Sin Goods - Commitment or Exploitation?

    DEFF Research Database (Denmark)

    Nafziger, Julia

    to such self-control problems, and possibly exploit them, by offering different package sizes. In a competitive market, either one or three (small, medium and large) packages are offered. In contrast to common intuition, the large, and not the small package is a commitment device. The latter serves to exploit...

  20. SEXUAL EXPLOITATION AND ABUSE BY UN PEACEKEEPERS ...

    African Journals Online (AJOL)

    Allaiac

    sexual exploitation of children by peacekeepers is particularly insidious. ... sexual exploitation and abuse should involve an understanding of the social .... The charges of sexual misconduct, and the consequent media exposure, have ..... awareness programmes such as video tapes, lectures and training manuals, designed.

  1. PORFLOW Modeling Supporting The H-Tank Farm Performance Assessment

    International Nuclear Information System (INIS)

    Jordan, J. M.; Flach, G. P.; Westbrook, M. L.

    2012-01-01

    Numerical simulations of groundwater flow and contaminant transport in the vadose and saturated zones have been conducted using the PORFLOW code in support of an overall Performance Assessment (PA) of the H-Tank Farm. This report provides technical detail on selected aspects of PORFLOW model development and describes the structure of the associated electronic files. The PORFLOW models for the H-Tank Farm PA, Rev. 1 were updated with grout, solubility, and inventory changes. The aquifer model was refined. In addition, a set of flow sensitivity runs were performed to allow flow to be varied in the related probabilistic GoldSim models. The final PORFLOW concentration values are used as input into a GoldSim dose calculator

  2. PORFLOW Modeling Supporting The H-Tank Farm Performance Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Jordan, J. M.; Flach, G. P.; Westbrook, M. L.

    2012-08-31

    Numerical simulations of groundwater flow and contaminant transport in the vadose and saturated zones have been conducted using the PORFLOW code in support of an overall Performance Assessment (PA) of the H-Tank Farm. This report provides technical detail on selected aspects of PORFLOW model development and describes the structure of the associated electronic files. The PORFLOW models for the H-Tank Farm PA, Rev. 1 were updated with grout, solubility, and inventory changes. The aquifer model was refined. In addition, a set of flow sensitivity runs were performed to allow flow to be varied in the related probabilistic GoldSim models. The final PORFLOW concentration values are used as input into a GoldSim dose calculator.

  3. Titan I propulsion system modeling and possible performance improvements

    Science.gov (United States)

    Giusti, Oreste

    This thesis features the Titan I propulsion systems and offers data-supported suggestions for improvements to increase performance. The original propulsion systems were modeled both graphically in CAD and via equations. Due to the limited availability of published information, it was necessary to create a more detailed, secondary set of models. Various engineering equations---pertinent to rocket engine design---were implemented in order to generate the desired extra detail. This study describes how these new models were then imported into the ESI CFD Suite. Various parameters are applied to these imported models as inputs that include, for example, bi-propellant combinations, pressure, temperatures, and mass flow rates. The results were then processed with ESI VIEW, which is visualization software. The output files were analyzed for forces in the nozzle, and various results were generated, including sea level thrust and ISP. Experimental data are provided to compare the original engine configuration models to the derivative suggested improvement models.

  4. Software life cycle dynamic simulation model: The organizational performance submodel

    Science.gov (United States)

    Tausworthe, Robert C.

    1985-01-01

    The submodel structure of a software life cycle dynamic simulation model is described. The software process is divided into seven phases, each with product, staff, and funding flows. The model is subdivided into an organizational response submodel, a management submodel, a management influence interface, and a model analyst interface. The concentration here is on the organizational response model, which simulates the performance characteristics of a software development subject to external and internal influences. These influences emanate from two sources: the model analyst interface, which configures the model to simulate the response of an implementing organization subject to its own internal influences, and the management submodel that exerts external dynamic control over the production process. A complete characterization is given of the organizational response submodel in the form of parameterized differential equations governing product, staffing, and funding levels. The parameter values and functions are allocated to the two interfaces.
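
The style of parameterized differential equations described, governing staffing and product levels, might be sketched with a simple Euler integration; the equations and all parameter values below are hypothetical stand-ins, not Tausworthe's actual submodel:

```python
# Hypothetical sketch in the spirit of the organizational response submodel:
# staffing s(t) relaxes toward a management-set target with time constant tau,
# and product p(t) grows with productivity r per staff-week. Integrated with
# forward Euler; all parameters are invented.
def simulate(weeks: int = 52, dt: float = 1.0, tau: float = 8.0,
             target_staff: float = 20.0, rate: float = 0.5):
    staff, product = 2.0, 0.0
    for _ in range(int(weeks / dt)):
        d_staff = (target_staff - staff) / tau      # ds/dt
        product += rate * staff * dt                # dp/dt = r * s
        staff += d_staff * dt
    return staff, product

staff, product = simulate()
print(round(staff, 1), round(product, 1))
```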

  5. Model of service-oriented catering supply chain performance evaluation

    Directory of Open Access Journals (Sweden)

    Juanqiong Gou

    2013-03-01

    Full Text Available Purpose: The aim of this paper is to construct a performance evaluation model for service-oriented catering supply chains. Design/methodology/approach: Based on research into the current situation of the catering industry, this paper summarizes the characteristics of the catering supply chain, and then presents a service-oriented catering supply chain model based on a logistics and information platform. Finally, the fuzzy AHP method is used to evaluate the performance of the service-oriented catering supply chain. Findings: From the analysis of the characteristics of the catering supply chain, we construct the performance evaluation model in order to guarantee food safety, logistics efficiency, price stability and so on. Practical implications: An efficient and effective service supply chain can be used not only for an enterprise's own improvement, but also for selecting different customers and choosing a different model of development. Originality/value: This paper gives a new definition of the service-oriented catering supply chain and offers a model to evaluate its performance.
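
The crisp core of the AHP step (the paper uses a fuzzy variant) derives criterion weights as the principal eigenvector of a pairwise-comparison matrix; the matrix below compares three illustrative criteria and is not taken from the paper:

```python
import numpy as np

# Classical AHP weight derivation (the fuzzy variant adds fuzzified
# judgments on top of this): weights are the principal eigenvector of the
# pairwise-comparison matrix, found here by power iteration. The judgments
# compare three illustrative criteria, e.g. food safety, logistics
# efficiency, and price stability.
A = np.array([
    [1.0, 3.0, 5.0],    # food safety strongly preferred over the others
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

w = np.ones(len(A)) / len(A)
for _ in range(100):                 # power iteration on A
    w = A @ w
    w /= w.sum()

print(np.round(w, 3))                # normalized weights, largest first
```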

  6. NATO Operational Record: Collective Analytical Exploitation to Inform Operational Analysis Models and Common Operational Planning Factors (Archives operationnelles de l’OTAN: Exploitation analytique collective visant a alimenter les modeles d’analyse operationnelle et les facteurs de planification operationnelle commune)

    Science.gov (United States)

    2014-05-01

    ...future NATO operations is positively influenced by operational analysis that relies on the quantitative and qualitative data of the records of... operations is positively influenced by operational analysis that relies on quantitative and qualitative data of operational records from past and... and future NATO operations is positively influenced by operational analysis methods, models, and tools that rely on quantitative and qualitative data

  7. Exploiting wild relatives of S. lycopersicum for quality traits

    NARCIS (Netherlands)

    Víquez Zamora, A.M.

    2015-01-01

    Tomatoes are consumed worldwide and became a model for crop plant research. A part of the research aims at expanding genetic diversity in tomato; this can be done by incorporating

  8. Risk assessment by dynamic representation of vulnerability, exploitation, and impact

    Science.gov (United States)

    Cam, Hasan

    2015-05-01

    Assessing and quantifying cyber risk accurately in real-time is essential to providing security and mission assurance in any system and network. This paper presents a modeling and dynamic analysis approach to assessing cyber risk of a network in real-time by representing dynamically its vulnerabilities, exploitations, and impact using integrated Bayesian network and Markov models. Given the set of vulnerabilities detected by a vulnerability scanner in a network, this paper addresses how its risk can be assessed by estimating in real-time the exploit likelihood and impact of vulnerability exploitation on the network, based on real-time observations and measurements over the network. The dynamic representation of the network in terms of its vulnerabilities, sensor measurements, and observations is constructed dynamically using the integrated Bayesian network and Markov models. The transition rates of outgoing and incoming links of states in hidden Markov models are used in determining exploit likelihood and impact of attacks, whereas emission rates help quantify the attack states of vulnerabilities. Simulation results show the quantification and evolving risk scores over time for individual and aggregated vulnerabilities of a network.
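A minimal discrete-time sketch of the state-evolution idea: a Markov chain over attack states of a single vulnerability, with a weighted risk score computed from the evolving state distribution. The states, transition probabilities, and weights are invented for illustration; the paper's actual approach integrates Bayesian networks with hidden Markov models and real-time sensor observations.

```python
# Illustrative discrete-time Markov chain over three attack states of one
# vulnerability: secure, exploited, impacted. All probabilities and risk
# weights are invented for this sketch, not taken from the paper.
P = [
    [0.90, 0.08, 0.02],  # from secure
    [0.10, 0.70, 0.20],  # from exploited
    [0.00, 0.05, 0.95],  # from impacted (slow to recover)
]

def step(dist, P):
    """Propagate a state distribution one time step: dist' = dist @ P."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

def risk_scores(dist, P, steps, w_exploited=0.5, w_impacted=1.0):
    """Evolving risk score: weighted probability of being compromised."""
    scores = []
    for _ in range(steps):
        dist = step(dist, P)
        scores.append(w_exploited * dist[1] + w_impacted * dist[2])
    return scores

# Start fully secure and watch the risk score evolve over 50 steps.
scores = risk_scores([1.0, 0.0, 0.0], P, steps=50)
```

The score rises from near zero toward the stationary level of the chain, mimicking the "evolving risk scores over time" the abstract describes.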

  9. Building Information Modeling (BIM) for Indoor Environmental Performance Analysis

    DEFF Research Database (Denmark)

    The report is part of a research assignment carried out by students in the 5 ECTS course “Project Byggeri – [entitled as: Building Information Modeling (BIM) – Modeling & Analysis]”, during the 3rd semester of the master degree in Civil and Architectural Engineering, Department of Engineering, Aarhus University. This includes seven papers describing BIM for Sustainability, concentrating specifically on individual topics regarding Indoor Environment Performance Analysis.

  10. Aqua/Aura Updated Inclination Adjust Maneuver Performance Prediction Model

    Science.gov (United States)

    Boone, Spencer

    2017-01-01

    This presentation will discuss the updated Inclination Adjust Maneuver (IAM) performance prediction model that was developed for Aqua and Aura following the 2017 IAM series. This updated model uses statistical regression methods to identify potential long-term trends in maneuver parameters, yielding improved predictions when re-planning past maneuvers. The presentation has been reviewed and approved by Eric Moyer, ESMO Deputy Project Manager.

  11. Decline Curve Based Models for Predicting Natural Gas Well Performance

    OpenAIRE

    Kamari, Arash; Mohammadi, Amir H.; Lee, Moonyong; Mahmood, Tariq; Bahadori, Alireza

    2016-01-01

    The productivity of a gas well declines over its production life, and revenues eventually fail to meet economic targets. To overcome such problems, the production performance of gas wells should be predicted by applying reliable methods to analyse the decline trend. Therefore, reliable models are developed in this study on the basis of powerful artificial intelligence techniques, viz. the artificial neural network (ANN) modelling strategy, the least squares support vector machine (LSSVM) approach, and the adaptive neuro-fuzzy ...
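The decline trends such AI models are benchmarked against are classically described by the Arps decline-curve family (exponential, hyperbolic, harmonic), a standard baseline in decline-curve analysis. The sketch below uses invented rates and is not the paper's ANN/LSSVM/neuro-fuzzy modelling.

```python
import math

def arps_rate(qi, di, b, t):
    """Arps decline-curve production rate at time t.
    qi: initial rate, di: initial decline rate (1/time), b: decline exponent
    (b=0 exponential, 0<b<1 hyperbolic, b=1 harmonic)."""
    if b == 0:
        return qi * math.exp(-di * t)
    return qi / (1.0 + b * di * t) ** (1.0 / b)

def fit_exponential_di(q0, q1, dt):
    """Back out the exponential decline rate from two observed rates."""
    return math.log(q0 / q1) / dt

# Illustrative well: 1000 Mscf/d initial rate, 30%/yr initial decline.
di = fit_exponential_di(1000.0, arps_rate(1000.0, 0.3, 0.0, 1.0), 1.0)
forecast = [arps_rate(1000.0, 0.3, 0.5, t) for t in range(6)]  # hyperbolic, b=0.5
```

Fitting `di` and `b` to observed rate history (e.g. by least squares) and extrapolating the curve is the "reliable decline-trend analysis" that the study's machine-learning models aim to improve upon.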

  12. Final Report, “Exploiting Global View for Resilience”

    Energy Technology Data Exchange (ETDEWEB)

    Chien, Andrew [Univ. of Chicago, IL (United States)

    2017-03-29

    Final technical report for the "Exploiting Global View for Resilience" project. The GVR project aims to create a new approach to portable, resilient applications. The GVR approach builds on a global view data model, adding versioning (multi-version), user control of timing and rate (multi-stream), and flexible cross-layer error signalling and recovery. With a versioned array as a portable abstraction, GVR enables application programmers to exploit deep scientific and application code insights to manage resilience (and its overhead) in a flexible, portable fashion.

  13. Metabolic robustness in young roots underpins a predictive model of maize hybrid performance in the field.

    Science.gov (United States)

    de Abreu E Lima, Francisco; Westhues, Matthias; Cuadros-Inostroza, Álvaro; Willmitzer, Lothar; Melchinger, Albrecht E; Nikoloski, Zoran

    2017-04-01

    Heterosis has been extensively exploited for yield gain in maize (Zea mays L.). Here we conducted a comparative metabolomics-based analysis of young roots from in vitro germinating seedlings and from leaves of field-grown plants in a panel of inbred lines from the Dent and Flint heterotic patterns as well as selected F1 hybrids. We found that metabolite levels in hybrids were more robust than in inbred lines. Using state-of-the-art modeling techniques, the most robust metabolites from roots and leaves explained up to 37 and 44% of the variance in the biomass from plants grown in two distinct field trials. In addition, a correlation-based analysis highlighted the trade-off between defense-related metabolites and hybrid performance. Therefore, our findings demonstrated the potential of metabolic profiles from young maize roots grown under tightly controlled conditions to predict hybrid performance in multiple field trials, thus bridging the greenhouse-field gap. © 2017 The Authors The Plant Journal © 2017 John Wiley & Sons Ltd.

  14. The integration of intrapreneurship into a performance management model

    Directory of Open Access Journals (Sweden)

    Thabo WL Foba

    2007-02-01

    Full Text Available This study aimed to investigate the feasibility of using the dynamics of intrapreneurship to develop a new generation performance management model based on the structural dynamics of the Balanced Score Card approach. The literature survey covered entrepreneurship, from which the construct, intrapreneurship, was synthesized. Reconstructive logic and Hermeneutic methodology were used in studying the performance management systems and the Balanced Score Card approach. The dynamics were then integrated into a new approach for the management of performance of intrapreneurial employees in the corporate environment. An unstructured opinion survey followed: a sample of intrapreneurship students evaluated and validated the model’s conceptual feasibility and probable practical value.

  15. Rethinking board role performance: Towards an integrative model

    Directory of Open Access Journals (Sweden)

    Babić Verica M.

    2011-01-01

    Full Text Available This research focuses on the board role evolution analysis which took place simultaneously with the development of different corporate governance theories and perspectives. The purpose of this paper is to provide understanding of key factors that make a board effective in the performance of its role. We argue that analysis of board role performance should incorporate both structural and process variables. This paper’s contribution is the development of an integrative model that aims to establish the relationship between the board structure and processes on the one hand, and board role performance on the other.

  16. Human performance models for computer-aided engineering

    Science.gov (United States)

    Elkind, Jerome I. (Editor); Card, Stuart K. (Editor); Hochberg, Julian (Editor); Huey, Beverly Messick (Editor)

    1989-01-01

    This report discusses a topic important to the field of computational human factors: models of human performance and their use in computer-based engineering facilities for the design of complex systems. It focuses on a particular human factors design problem -- the design of cockpit systems for advanced helicopters -- and on a particular aspect of human performance -- vision and related cognitive functions. By focusing in this way, the authors were able to address the selected topics in some depth and develop findings and recommendations that they believe have application to many other aspects of human performance and to other design domains.

  17. Information source exploitation/exploration and NPD decision-making

    DEFF Research Database (Denmark)

    Jespersen, Kristina Risom

    The purpose of this study is to examine how the exploration/exploitation continuum is applied by decision-makers in new product gate decision-making. Specifically, we analyze at gate decision-points how the evaluation of a new product project is affected by the exploitation of information sources. Data from different Scandinavian companies was analyzed using hierarchical regression models across decision criteria dimensions and NPD stages, as well as by analyzing the combination of selected information sources. Rather than forwarding one optimal search behavior for the entire NPD process, we find optimal information search behavior at either end of the exploitation/exploration continuum. Additionally, we find that overexploitation and overexploration are caused by managerial bias, which creates managerial misbehavior at gate decision-points of the NPD process.

  18. Model for determining and optimizing delivery performance in industrial systems

    Directory of Open Access Journals (Sweden)

    Fechete Flavia

    2017-01-01

    Full Text Available Performance means achieving organizational objectives, regardless of their nature and variety, and even exceeding them. Improving performance is one of the major goals of any company, and achieving global performance means obtaining not only economic performance but also performance on functions such as quality, delivery, costs, and employee satisfaction. This paper aims to improve the delivery performance of an industrial system, motivated by its very poor results. The delivery performance took into account all categories of performance indicators, such as on-time delivery, backlog efficiency and transport efficiency. The research focused on optimizing the delivery performance of the industrial system using linear programming. Modeling the delivery function with linear programming yielded the precise quantities to be produced and delivered each month by the industrial system in order to minimize transport cost, satisfy customer orders and control stock. The optimization led to a substantial improvement in all four performance indicators that concern deliveries.
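A toy version of the monthly production/delivery optimization can be sketched as follows. All demand, capacity and cost figures are invented, and for compactness the tiny search space is enumerated by brute force rather than solved with a proper linear-programming solver as in the paper.

```python
import itertools

# Toy monthly plan: choose production quantities that satisfy demand, respect
# capacity, and minimize transport plus stock-holding cost. All numbers are
# invented; a real application would use an LP solver, not brute force.
DEMAND = [30, 50, 40]      # units per month
CAPACITY = 60              # max production per month
TRANSPORT_COST = 2.0       # per unit delivered
HOLDING_COST = 1.0         # per unit in stock per month

def plan_cost(plan):
    """Total cost of a plan, or None if some month's demand is not met."""
    stock, cost = 0, 0.0
    for produced, demanded in zip(plan, DEMAND):
        stock += produced - demanded
        if stock < 0:
            return None
        cost += produced * TRANSPORT_COST + stock * HOLDING_COST
    return cost

feasible = [p for p in itertools.product(range(0, CAPACITY + 1, 10),
                                         repeat=len(DEMAND))
            if plan_cost(p) is not None]
best = min(feasible, key=plan_cost)  # (30, 50, 40): produce to demand, no stock
```

Here the cheapest plan produces exactly the demand each month, eliminating holding costs while minimizing the units transported, which mirrors the paper's stock-control result.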

  19. Wavefront control performance modeling with WFIRST shaped pupil coronagraph testbed

    Science.gov (United States)

    Zhou, Hanying; Nemati, Bijian; Krist, John; Cady, Eric; Kern, Brian; Poberezhskiy, Ilya

    2017-09-01

    NASA's WFIRST mission includes a coronagraph instrument (CGI) for direct imaging of exoplanets. Significant improvement in CGI model fidelity has been made recently, alongside a testbed high contrast demonstration in a simulated dynamic environment at JPL. We present our modeling method and results of comparisons to testbed's high order wavefront correction performance for the shaped pupil coronagraph. Agreement between model prediction and testbed result at better than a factor of 2 has been consistently achieved in raw contrast (contrast floor, chromaticity, and convergence), and with that comes good agreement in contrast sensitivity to wavefront perturbations and mask lateral shear.

  20. Temporal diagnostic analysis of the SWAT model to detect dominant periods of poor model performance

    Science.gov (United States)

    Guse, Björn; Reusser, Dominik E.; Fohrer, Nicola

    2013-04-01

    Hydrological models generally include thresholds and non-linearities, such as snow-rain-temperature thresholds, non-linear reservoirs, infiltration thresholds and the like. When relating observed variables to modelling results, formal methods often calculate performance metrics over long periods, reporting model performance with only few numbers. Such approaches are not well suited to compare dominating processes between reality and model and to better understand when thresholds and non-linearities are driving model results. We present a combination of two temporally resolved model diagnostic tools to answer when a model is performing (not so) well and what the dominant processes are during these periods. We look at the temporal dynamics of parameter sensitivities and model performance to answer this question. For this, the eco-hydrological SWAT model is applied in the Treene lowland catchment in Northern Germany. As a first step, temporal dynamics of parameter sensitivities are analyzed using the Fourier Amplitude Sensitivity test (FAST). The sensitivities of the eight model parameters investigated show strong temporal variations. High sensitivities were detected for two groundwater (GW_DELAY, ALPHA_BF) and one evaporation parameters (ESCO) most of the time. The periods of high parameter sensitivity can be related to different phases of the hydrograph with dominances of the groundwater parameters in the recession phases and of ESCO in baseflow and resaturation periods. Surface runoff parameters show high parameter sensitivities in phases of a precipitation event in combination with high soil water contents. The dominant parameters give indication for the controlling processes during a given period for the hydrological catchment. The second step included the temporal analysis of model performance. For each time step, model performance was characterized with a "finger print" consisting of a large set of performance measures. These finger prints were clustered into
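The idea of time-varying parameter sensitivity can be illustrated with finite differences on a synthetic two-parameter hydrograph. The parameter names are borrowed loosely from the abstract (an ALPHA_BF-like recession constant, an ESCO-like event-runoff scaling), but the toy model and the one-at-a-time sensitivity below are stand-ins, not the SWAT model or the variance-based FAST method.

```python
import math

def toy_hydrograph(t, alpha_bf, esco):
    """Synthetic streamflow: rain-event runoff scaled by esco plus a
    groundwater recession term controlled by alpha_bf. A stand-in model,
    used only to show sensitivities changing between hydrograph phases."""
    rain = 1.0 if t % 10 < 3 else 0.0            # 3-step event every 10 steps
    recession = math.exp(-alpha_bf * (t % 10))   # decays between events
    return esco * rain + recession

def local_sensitivity(t, params, name, eps=1e-6):
    """Finite-difference sensitivity of flow to one parameter at time t."""
    base = toy_hydrograph(t, **params)
    bumped = dict(params, **{name: params[name] + eps})
    return abs(toy_hydrograph(t, **bumped) - base) / eps

params = {"alpha_bf": 0.5, "esco": 0.8}
sens = [(t, local_sensitivity(t, params, "alpha_bf"),
            local_sensitivity(t, params, "esco")) for t in range(20)]
```

During event time steps the runoff parameter dominates, while in recession phases its sensitivity drops to zero and the groundwater parameter takes over, which is exactly the kind of phase-dependent pattern the abstract reports.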

  1. Spatial variability and parametric uncertainty in performance assessment models

    International Nuclear Information System (INIS)

    Pensado, Osvaldo; Mancillas, James; Painter, Scott; Tomishima, Yasuo

    2011-01-01

    The problem of defining an appropriate treatment of distribution functions (which could represent spatial variability or parametric uncertainty) is examined based on a generic performance assessment model for a high-level waste repository. The generic model incorporated source term models available in GoldSim®, the TDRW code for contaminant transport in sparse fracture networks with a complex fracture-matrix interaction process, and a biosphere dose model known as BDOSE™. Using the GoldSim framework, several Monte Carlo sampling approaches and transport conceptualizations were evaluated to explore the effect of various treatments of spatial variability and parametric uncertainty on dose estimates. Results from a model employing a representative source and ensemble-averaged pathway properties were compared to results from a model allowing for stochastic variation of transport properties along streamline segments (i.e., explicit representation of spatial variability within a Monte Carlo realization). We concluded that the sampling approach and the definition of an ensemble representative do influence consequence estimates. In the examples analyzed in this paper, approaches considering limited variability of a transport resistance parameter along a streamline increased the frequency of fast pathways resulting in relatively high dose estimates, while those allowing for broad variability along streamlines increased the frequency of 'bottlenecks' reducing dose estimates. On this basis, simplified approaches with limited consideration of variability may suffice for intended uses of the performance assessment model, such as evaluation of site safety. (author)
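The paper's key observation can be reproduced with a toy Monte Carlo: limited variability along a streamline (one resistance value shared by all segments in a realization) produces more fast pathways and a higher mean dose proxy than broad within-streamline variability (an independent resistance per segment). The lognormal resistance model and the exponential dose surrogate are invented for this sketch.

```python
import math
import random

random.seed(42)
N_SEGMENTS, N_REALIZATIONS = 10, 20000

def dose_proxy(resistances):
    """Invented dose surrogate: decays with total transport resistance."""
    return math.exp(-sum(resistances))

# Case A: limited variability along a streamline -- one lognormal resistance
# per realization shared by all segments (fast pathway when it happens to be low).
corr = [dose_proxy([random.lognormvariate(0.0, 1.0)] * N_SEGMENTS)
        for _ in range(N_REALIZATIONS)]

# Case B: broad variability -- independent resistance per segment, so a single
# high-resistance segment acts as a bottleneck for the whole pathway.
indep = [dose_proxy([random.lognormvariate(0.0, 1.0) for _ in range(N_SEGMENTS)])
         for _ in range(N_REALIZATIONS)]

mean_corr = sum(corr) / N_REALIZATIONS
mean_indep = sum(indep) / N_REALIZATIONS
```

The fully correlated case yields a mean dose proxy orders of magnitude higher, matching the abstract's contrast between fast pathways and bottlenecks (formally, E[X^N] ≥ (E[X])^N for X = e^(-R)).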

  2. Reference Manual for the System Advisor Model's Wind Power Performance Model

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, J.; Jorgenson, J.; Gilman, P.; Ferguson, T.

    2014-08-01

    This manual describes the National Renewable Energy Laboratory's System Advisor Model (SAM) wind power performance model. The model calculates the hourly electrical output of a single wind turbine or of a wind farm. The wind power performance model requires information about the wind resource, wind turbine specifications, wind farm layout (if applicable), and costs. In SAM, the performance model can be coupled to one of the financial models to calculate economic metrics for residential, commercial, or utility-scale wind projects. This manual describes the algorithms used by the wind power performance model, which is available in the SAM user interface and as part of the SAM Simulation Core (SSC) library, and is intended to supplement the user documentation that comes with the software.
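The core of an hourly wind performance calculation of the kind the manual describes is interpolation of a turbine power curve over a wind-speed series. The sketch below uses an invented power curve and wind data, and is a simplification, not NREL's actual SAM/SSC implementation, which also handles farm layout, losses, and air-density corrections.

```python
# Minimal hourly turbine output from a piecewise-linear power curve.
# Curve points and wind series are invented; real curves come from the
# turbine datasheet.
POWER_CURVE = [(3.0, 0.0), (5.0, 150.0), (8.0, 900.0),
               (12.0, 2000.0), (25.0, 2000.0)]  # (m/s, kW)
CUT_IN, CUT_OUT = 3.0, 25.0  # m/s

def turbine_power_kw(v):
    """Interpolate turbine output (kW) at hub-height wind speed v (m/s)."""
    if v < CUT_IN or v > CUT_OUT:
        return 0.0
    for (v0, p0), (v1, p1) in zip(POWER_CURVE, POWER_CURVE[1:]):
        if v <= v1:
            return p0 + (p1 - p0) * (v - v0) / (v1 - v0)
    return POWER_CURVE[-1][1]

hourly_wind = [2.0, 4.0, 7.5, 13.0, 26.0]                 # m/s, one per hour
hourly_kwh = [turbine_power_kw(v) for v in hourly_wind]   # kW over 1 h = kWh
```

Summing `hourly_kwh` over a year and multiplying by turbine count (minus wake and electrical losses) gives the annual energy figure that the financial models then consume.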

  3. Stutter-Step Models of Performance in School

    Science.gov (United States)

    Morgan, Stephen L.; Leenman, Theodore S.; Todd, Jennifer J.; Kentucky; Weeden, Kim A.

    2013-01-01

    To evaluate a stutter-step model of academic performance in high school, this article adopts a unique measure of the beliefs of 12,591 high school sophomores from the Education Longitudinal Study, 2002-2006. Verbatim responses to questions on occupational plans are coded to capture specific job titles, the listing of multiple jobs, and the listing…

  4. Influence of horizontal resolution and ensemble size on model performance

    CSIR Research Space (South Africa)

    Dalton, A

    2014-10-01

    Full Text Available Conference of South African Society for Atmospheric Sciences (SASAS), Potchefstroom, 1-2 October 2014. Influence of horizontal resolution and ensemble size on model performance. Amaris Dalton*¹, Willem A. Landman¹,² ¹Department of Geography, Geo...

  5. Performance and Cognitive Assessment in 3-D Modeling

    Science.gov (United States)

    Fahrer, Nolan E.; Ernst, Jeremy V.; Branoff, Theodore J.; Clark, Aaron C.

    2011-01-01

    The purpose of this study was to investigate identifiable differences between performance and cognitive assessment scores in a 3-D modeling unit of an engineering drafting course curriculum. The study aimed to provide further investigation of the need of skill-based assessments in engineering/technical graphics courses to potentially increase…

  6. Neuro-fuzzy model for evaluating the performance of processes ...

    Indian Academy of Sciences (India)

    CHIDOZIE CHUKWUEMEKA NWOBI-OKOYE

    2017-11-16

    Nov 16, 2017 ... In this work an Adaptive Neuro-Fuzzy Inference System (ANFIS) was used to model the periodic performance of ... Since the .... The investigation hubs are a local brewing company ..... Industrial Engineers, Systems Engineers, Operations ... responsibility the overall management of the new system lies.

  7. INFORMATION SYSTEM FOR MODELING ECONOMIC AND FINANCIAL PERFORMANCES

    Directory of Open Access Journals (Sweden)

    Boldeanu Dana Maria

    2009-05-01

    Full Text Available The analysis of the most important financial and economic indicators at the level of some organizations from the same sector of activity, the selection of performance ratios and generating a particular analysis model help companies to move from the desire

  8. A multilateral modelling of Youth Soccer Performance Index (YSPI)

    Science.gov (United States)

    Bisyri Husin Musawi Maliki, Ahmad; Razali Abdullah, Mohamad; Juahir, Hafizan; Abdullah, Farhana; Ain Shahirah Abdullah, Nurul; Muazu Musa, Rabiu; Musliha Mat-Rasid, Siti; Adnan, Aleesha; Azura Kosni, Norlaila; Muhamad, Wan Siti Amalina Wan; Afiqah Mohamad Nasir, Nur

    2018-04-01

    This study aims to identify the most dominant factors influencing the performance of soccer players and to predict group performance for soccer players. A total of 184 youth soccer players from a Malaysian sports school and six soccer academies served as respondents of the study. Exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) were computed to identify the most dominant factors, reducing the initial 26 parameters using the recommended factor loading of >0.5. Prediction of soccer performance was then carried out with a regression model. CFA revealed that sit and reach, vertical jump, VO2max, age, weight, height, sitting height, calf circumference (cc), medial upper arm circumference (muac), maturation, bicep, triceps, subscapular, suprailiac, and 5M, 10M, and 20M speed were the most dominant factors. Further index analysis formed the Youth Soccer Performance Index (YSPI) by categorizing three groups, namely high, moderate, and low. The regression model was significant at p < 0.001 with R2 = 0.8222, meaning the model explains about 82% of the variability in the whole set of variables. The significant parameters contributing to the prediction of YSPI are discussed. In conclusion, prediction models integrating multilateral factors can help identify potential soccer players and hopefully contribute to more competitive soccer games.

  9. Towards a Social Networks Model for Online Learning & Performance

    Science.gov (United States)

    Chung, Kon Shing Kenneth; Paredes, Walter Christian

    2015-01-01

    In this study, we develop a theoretical model to investigate the association between social network properties, "content richness" (CR) in academic learning discourse, and performance. CR is the extent to which one contributes content that is meaningful, insightful and constructive to aid learning and by social network properties we…

  10. Models for the financial-performance effects of Marketing

    NARCIS (Netherlands)

    Hanssens, D.M.; Dekimpe, Marnik; Wierenga, B.; van der Lans, R.

    We consider marketing-mix models that explicitly include financial performance criteria. These financial metrics are not only comparable across the marketing mix, they also relate well to investors’ evaluation of the firm. To that extent, we treat marketing as an investment in customer value

  11. Item Response Theory Models for Performance Decline during Testing

    Science.gov (United States)

    Jin, Kuan-Yu; Wang, Wen-Chung

    2014-01-01

    Sometimes, test-takers may not be able to attempt all items to the best of their ability (with full effort) due to personal factors (e.g., low motivation) or testing conditions (e.g., time limit), resulting in poor performances on certain items, especially those located toward the end of a test. Standard item response theory (IRT) models fail to…
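The standard 2PL item response function, plus an illustrative position-based effort decline, shows the phenomenon the abstract describes: identical items become harder to answer correctly late in a test. The linear decline term is a simplification of the kind of model discussed, not the authors' exact specification.

```python
import math

def p_correct(theta, a, b, position=0, decline=0.0):
    """2PL probability of a correct response, with an illustrative linear
    performance decline in effective ability as item position grows.
    theta: ability, a: discrimination, b: difficulty."""
    effective_theta = theta - decline * position
    return 1.0 / (1.0 + math.exp(-a * (effective_theta - b)))

# Same item administered early vs. late in the test for the same examinee.
early = p_correct(theta=0.0, a=1.2, b=0.0, position=1, decline=0.05)
late = p_correct(theta=0.0, a=1.2, b=0.0, position=40, decline=0.05)
```

A standard IRT calibration that ignores the `decline` term would misattribute the drop in `late` to item difficulty rather than to waning effort, which is the misfit such models are designed to capture.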

  12. Performances Of Estimators Of Linear Models With Autocorrelated ...

    African Journals Online (AJOL)

    The performances of five estimators of linear models with Autocorrelated error terms are compared when the independent variable is autoregressive. The results reveal that the properties of the estimators when the sample size is finite is quite similar to the properties of the estimators when the sample size is infinite although ...

  13. Performances of estimators of linear auto-correlated error model ...

    African Journals Online (AJOL)

    The performances of five estimators of linear models with autocorrelated disturbance terms are compared when the independent variable is exponential. The results reveal that for both small and large samples, the Ordinary Least Squares (OLS) compares favourably with the Generalized least Squares (GLS) estimators in ...
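A toy simulation of the setting in these two abstracts, with AR(1) disturbances and an exponential regressor, can check the OLS point estimate against the true slope. All parameter values are invented; OLS remains unbiased here but is inefficient relative to GLS, and this sketch only examines the estimate, not its standard errors.

```python
import math
import random

random.seed(7)
N, RHO, BETA0, BETA1 = 500, 0.6, 2.0, 0.5

# Exponential independent variable, as in the abstract's setting.
x = [math.exp(0.01 * t) for t in range(N)]

# AR(1) disturbances: u_t = RHO * u_{t-1} + e_t, e_t ~ N(0, 1).
u, errors = 0.0, []
for _ in range(N):
    u = RHO * u + random.gauss(0.0, 1.0)
    errors.append(u)

y = [BETA0 + BETA1 * xi + ui for xi, ui in zip(x, errors)]

def ols_slope(x, y):
    """Closed-form OLS slope estimate."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

beta1_hat = ols_slope(x, y)  # close to BETA1 despite autocorrelated errors
```

Repeating this across many seeds, and contrasting OLS with a feasible GLS (Prais-Winsten-style) transform, is the kind of finite-sample comparison the two studies report.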

  14. Introducing Model Predictive Control for Improving Power Plant Portfolio Performance

    DEFF Research Database (Denmark)

    Edlund, Kristian Skjoldborg; Bendtsen, Jan Dimon; Børresen, Simon

    2008-01-01

    This paper introduces a model predictive control (MPC) approach for construction of a controller for balancing the power generation against consumption in a power system. The objective of the controller is to coordinate a portfolio consisting of multiple power plant units in the effort to perform...
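A minimal receding-horizon sketch of the balancing idea: at each step, enumerate ramp-limited generation moves over a short demand forecast, score the squared generation/consumption imbalance, apply only the first move, and re-plan. The numbers are invented, and a real portfolio MPC such as the paper's would solve a constrained optimization over multiple plant units rather than brute-force a single unit.

```python
import itertools

def mpc_move(current_gen, demand_forecast,
             moves=(-20.0, -10.0, 0.0, 10.0, 20.0)):
    """Receding-horizon choice: enumerate ramp-limited move sequences over
    the forecast horizon, score squared imbalance, return the first move."""
    best_cost, best_first = float("inf"), 0.0
    for seq in itertools.product(moves, repeat=len(demand_forecast)):
        gen, cost = current_gen, 0.0
        for delta, demand in zip(seq, demand_forecast):
            gen += delta
            cost += (gen - demand) ** 2
        if cost < best_cost:
            best_cost, best_first = cost, seq[0]
    return best_first

# Closed loop: apply the first move, shift the horizon, repeat.
gen, trajectory = 100.0, []
forecast = [120.0, 140.0, 160.0, 160.0, 150.0]
for k in range(len(forecast) - 2):
    gen += mpc_move(gen, forecast[k:k + 3])
    trajectory.append(gen)
```

Re-planning every step with a fresh forecast is what lets MPC coordinate the portfolio against changing consumption, rather than committing to one open-loop schedule.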

  15. Modeling performance measurement applications and implementation issues in DEA

    CERN Document Server

    Cook, Wade D

    2005-01-01

    Addresses advanced/new DEA methodology and techniques that are developed for modeling unique and new performance evaluation issues. Presents new DEA methodology and techniques via discussions on how to solve managerial problems. Provides an easy-to-use DEA software package, DEAFrontier (www.deafrontier.com), which is an excellent tool for both DEA researchers and practitioners.

  16. Evaluating hydrological model performance using information theory-based metrics

    Science.gov (United States)

    Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to use information theory-based metrics to see whether they can be used as a complementary tool for hydrologic m...
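One common information-theoretic comparison is the Kullback-Leibler divergence between the empirical flow distributions of observed and simulated series. The bin edges and series below are invented, and this is just one candidate metric of the family, not necessarily the ones used in the abstract.

```python
import math

def histogram_probs(series, edges):
    """Empirical bin probabilities with add-one smoothing, so the
    divergence stays finite when a bin is empty."""
    counts = [1] * (len(edges) + 1)  # one pseudo-count per bin
    for value in series:
        bin_idx = sum(value > e for e in edges)
        counts[bin_idx] += 1
    total = sum(counts)
    return [c / total for c in counts]

def kl_divergence(p, q):
    """D_KL(p || q) in nats; zero iff the distributions match exactly."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

edges = [1.0, 2.0, 4.0, 8.0]                    # invented flow-class edges
observed = [0.5, 1.5, 1.7, 3.0, 5.0, 9.0, 2.5]  # invented flow series
simulated = [0.6, 1.4, 2.0, 2.8, 4.5, 7.5, 3.1]
p = histogram_probs(observed, edges)
q = histogram_probs(simulated, edges)
score = kl_divergence(p, q)
```

Unlike a squared-error score, this metric penalizes a model for producing the wrong distribution of flow regimes even when pointwise errors average out, which is the qualitative correspondence the abstract is after.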

  17. Modeling the Effect of Bandwidth Allocation on Network Performance

    African Journals Online (AJOL)

    ... The proposed model showed improved performance for CDMA networks, but further increase in the bandwidth did not benefit the network; (iii) A reliability measure such as the spectral efficiency is therefore useful to redeem the limitation in (ii). Keywords: Coverage Capacity, CDMA, Mobile Network, Network Throughput ...

  18. Multiscale Modeling and Uncertainty Quantification for Nuclear Fuel Performance

    Energy Technology Data Exchange (ETDEWEB)

    Estep, Donald [Colorado State Univ., Fort Collins, CO (United States); El-Azab, Anter [Florida State Univ., Tallahassee, FL (United States); Pernice, Michael [Idaho National Lab. (INL), Idaho Falls, ID (United States); Peterson, John W. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Polyakov, Peter [Univ. of Wyoming, Laramie, WY (United States); Tavener, Simon [Colorado State Univ., Fort Collins, CO (United States); Xiu, Dongbin [Purdue Univ., West Lafayette, IN (United States); Univ. of Utah, Salt Lake City, UT (United States)

    2017-03-23

    In this project, we will address the challenges associated with constructing high-fidelity multiscale models of nuclear fuel performance. We (i) propose a novel approach for coupling mesoscale and macroscale models, (ii) devise efficient numerical methods for simulating the coupled system, and (iii) devise and analyze effective numerical approaches for error and uncertainty quantification for the coupled multiscale system. As an integral part of the project, we will carry out analysis of the effects of upscaling and downscaling, investigate efficient methods for stochastic sensitivity analysis of the individual macroscale and mesoscale models, and carry out a posteriori error analysis for computed results. We will pursue development and implementation of solutions in software used at Idaho National Laboratory on models of interest to the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program.

  19. Acoustic performance of industrial mufflers with CAE modeling and simulation

    Directory of Open Access Journals (Sweden)

    Jeon Soohong

    2014-12-01

    Full Text Available This paper investigates the noise transmission performance of industrial mufflers widely used in ships, based on CAE modeling and simulation. Since industrial mufflers have very complicated internal structures, the conventional Transfer Matrix Method (TMM) is of limited use. CAE modeling and simulation is therefore required, incorporating commercial software packages: CATIA for geometry modeling, MSC/PATRAN for FE meshing and LMS/SYSNOISE for analysis. The main difficulties in this study arise from the complicated arrangement of reactive elements, perforated walls and absorption materials. The reactive elements and absorbent materials are modeled by applying boundary conditions given by impedance. The perforated walls are modeled by applying the transfer impedance on the duplicated node mesh. The CAE approach presented in this paper is verified by comparison with the theoretical solution of a concentric-tube resonator and is applied to industrial mufflers.
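The kind of theoretical benchmark mentioned for verification has a classic closed form for the simplest case: the plane-wave transmission loss of a single expansion chamber. The sketch below uses that textbook formula with invented dimensions; the paper's concentric-tube resonator with perforates is more involved, so this is only the flavor of the analytical check, not its exact solution.

```python
import math

def expansion_chamber_tl(freq_hz, area_ratio, length_m, c=343.0):
    """Plane-wave transmission loss (dB) of a simple expansion chamber:
    TL = 10 log10(1 + 0.25 (m - 1/m)^2 sin^2(kL)), m = S_chamber / S_pipe."""
    k = 2.0 * math.pi * freq_hz / c
    m = area_ratio
    return 10.0 * math.log10(1.0 + 0.25 * (m - 1.0 / m) ** 2
                             * math.sin(k * length_m) ** 2)

# Invented geometry: area ratio 9, chamber length 0.5 m.
f_peak = 343.0 / (4 * 0.5)   # kL = pi/2: maximum attenuation
f_null = 343.0 / (2 * 0.5)   # kL = pi: chamber is acoustically transparent
peak = expansion_chamber_tl(f_peak, 9.0, 0.5)
null = expansion_chamber_tl(f_null, 9.0, 0.5)
```

The periodic attenuation peaks and pass-through nulls of this curve are what an FE/BEM prediction from the CAE chain must reproduce before the model is trusted on the full muffler geometry.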

  1. Maintenance personnel performance simulation (MAPPS) model: overview and evaluation efforts

    International Nuclear Information System (INIS)

    Knee, H.E.; Haas, P.M.; Siegel, A.I.; Bartter, W.D.; Wolf, J.J.; Ryan, T.G.

    1984-01-01

    The development of the MAPPS model has been completed and the model is currently undergoing evaluation. These efforts are addressing a number of identified issues concerning practicality, acceptability, usefulness, and validity. Preliminary analysis of the evaluation data that has been collected indicates that MAPPS will provide comprehensive and reliable data for PRA purposes and for a number of other applications. The MAPPS computer simulation model provides the user with a sophisticated tool for gaining insights into tasks performed by NPP maintenance personnel. Its wide variety of input parameters and output data makes it extremely flexible for application to a number of diverse applications. With the demonstration of favorable model evaluation results, the MAPPS model will represent a valuable source of NPP maintainer reliability data and provide PRA studies with a source of data on maintainers that did not previously exist.

  2. Proba-V Mission Exploitation Platform

    Science.gov (United States)

    Goor, E.

    2017-12-01

    VITO and partners developed the Proba-V Mission Exploitation Platform (MEP) as an end-to-end solution to drastically improve the exploitation by researchers, service providers (e.g. the EC Copernicus Global Land Service) and end-users of the EO-data archive of Proba-V (an EC Copernicus contributing mission), of the past mission SPOT-VEGETATION, and of derived vegetation parameters. The analysis of time series of data (PB range) is addressed, as well as large-scale on-demand processing of near-real-time data on a powerful and scalable processing environment. New features are still being developed, but the platform has been fully operational since November 2016 and offers: a time series viewer (browser web client and API) showing the evolution of Proba-V bands and derived vegetation parameters for any country, region, pixel or polygon defined by the user; full-resolution viewing services for the complete data archive; on-demand processing chains on a powerful Hadoop/Spark backend; and virtual machines that users can request with access to the complete data archive and pre-configured tools to work with the data, e.g. various toolboxes and support for R and Python. This allows users to work with the data immediately without having to install tools or download data, as well as to design, debug and test applications on the platform. Jupyter Notebooks are available, with some example Python and R projects worked out to show the potential of the data. Today the platform is already used by several international third-party projects to perform R&D activities on the data and to develop/host data analysis toolboxes. From the Proba-V MEP, access to other data sources such as Sentinel-2 and Landsat data is also addressed. Selected components of the MEP are also deployed on public cloud infrastructures in various R&D projects. Users can make use of powerful Web-based tools and can self-manage virtual machines to perform their work on the infrastructure at VITO with access to

  3. Relationship between exploitation, oscillation, MSY and extinction.

    Science.gov (United States)

    Ghosh, Bapan; Kar, T K; Legovic, T

    2014-10-01

    We give answers to two important problems arising in current fisheries: (i) how the maximum sustainable yield (MSY) policy is influenced by the initial population level, and (ii) how harvesting, oscillation and MSY are related to each other in prey-predator systems. To examine the impact of the initial population on exploitation, we analyze a single-species model with a strong Allee effect. It is found that even when the MSY exists, the dynamic solution may not converge to the equilibrium stock if the initial population level is higher than but near the critical threshold level. In a prey-predator system with an Allee effect in the prey species, the initial population does not have such an important impact on either the MSY or the maximum sustainable total yield (MSTY). However, harvesting the top predator may cause extinction of all species if an odd number of trophic levels exist in the ecosystem. With regard to the second problem, we study two prey-predator models and establish that increasing the harvesting effort on the prey, the predator, or both destroys a previously existing oscillation. Moreover, the equilibrium stock at both the MSY and MSTY levels is stable. We also discuss the validity of these results for other prey-predator systems. Copyright © 2014 Elsevier Inc. All rights reserved.
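
    The qualitative behavior described above can be reproduced with a toy model. The sketch below simulates a single-species stock with a strong Allee effect under constant-effort harvesting; the equation form and all parameter values are illustrative, not taken from the paper.

```python
# Toy single-species model with a strong Allee effect and constant-effort
# harvesting (all parameter values are illustrative, not the paper's):
#   dN/dt = r*N*(1 - N/K)*(N/A - 1) - q*E*N
r, K, A, q, E = 1.0, 100.0, 20.0, 1.0, 0.05

def growth(N):
    return r * N * (1.0 - N / K) * (N / A - 1.0) - q * E * N

def simulate(N0, t_end=200.0, dt=0.01):
    """Forward-Euler integration of the stock from initial level N0."""
    N = N0
    for _ in range(int(t_end / dt)):
        N = max(N + dt * growth(N), 0.0)
    return N

# A stock starting below the Allee threshold collapses even though a
# positive harvested equilibrium exists; above the threshold it converges
# to that equilibrium.
print(round(simulate(10.0), 2))
print(round(simulate(60.0), 2))
```

    This illustrates the paper's first point: with a strong Allee effect, the outcome of the same harvesting policy depends on the initial population level.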

  4. Fostering the Exploitation of Open Educational Resources

    Directory of Open Access Journals (Sweden)

    Thomas Richter

    2014-07-01

    Full Text Available The central concept behind Open Educational Resources (OER) is opening up access to educational resources for stakeholders who are not the usual target user group. This concept must be perceived as innovative because it describes a general economic and social paradigm shift: education, which formerly was limited to a specific group of learners, is now promoted as a public good. However, despite very good intentions, internationally agreed quality standards, and the availability of the required technological infrastructure, the critical threshold has not yet been met. For several reasons, the usefulness of OER is often limited to the originally targeted context. Questions arise as to whether the existing quality standards for Technology Enhanced Learning (TEL) actually meet the specific requirements within the OER value chain, whether the existing quality standards are applicable to OER in a meaningful way, and under which conditions related standards could generally support the exploitation of OER. We analyze quality standards for TEL and contrast the life cycle model of commercial learning resources with the life cycle model of OER. We investigate special demands on quality from the context of OER and, taking the former results into account, derive emergent quality criteria for OER. The paper concludes with recommendations for the design of OER and a future standard development.

  5. Accelerating Large Data Analysis By Exploiting Regularities

    Science.gov (United States)

    Moran, Patrick J.; Ellsworth, David

    2003-01-01

    We present techniques for discovering and exploiting regularity in large curvilinear data sets. The data can be based on a single mesh or a mesh composed of multiple submeshes (also known as zones). Multi-zone data are typical of Computational Fluid Dynamics (CFD) simulations. Regularities include axis-aligned rectilinear and cylindrical meshes as well as cases where one zone is equivalent to a rigid-body transformation of another. Our algorithms can also discover rigid-body motion of meshes in time-series data. Next, we describe a data model where we can utilize the results from the discovery process in order to accelerate large data visualizations. Where possible, we replace general curvilinear zones with rectilinear or cylindrical zones. In rigid-body motion cases we replace a time-series of meshes with a transformed mesh object where a reference mesh is dynamically transformed based on a given time value in order to satisfy geometry requests, on demand. The data model enables us to make these substitutions and dynamic transformations transparently with respect to the visualization algorithms. We present results with large data sets where we combine our mesh replacement and transformation techniques with out-of-core paging in order to achieve significant speed-ups in analysis.
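
    The discovery algorithms themselves are not spelled out in the abstract, but a standard way to test whether one zone is a rigid-body transformation of another is to fit the best rotation and translation between corresponding points (the Kabsch algorithm) and check the residual. A minimal NumPy sketch, with function names of our own invention:

```python
import numpy as np

def rigid_transform(P, Q):
    """Best-fit rotation R and translation t with Q ≈ P @ R.T + t
    (Kabsch algorithm); P and Q are (n, 3) arrays of corresponding points."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - cP @ R.T

def is_rigid_copy(P, Q, tol=1e-8):
    """True if Q is (numerically) a rigid-body transform of P."""
    R, t = rigid_transform(P, Q)
    return np.allclose(P @ R.T + t, Q, atol=tol)

rng = np.random.default_rng(1)
P = rng.normal(size=(40, 3))
c, s = np.cos(0.4), np.sin(0.4)
Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
Q = P @ Rz.T + np.array([1.0, -2.0, 0.5])
print(is_rigid_copy(P, Q))        # rotated + translated copy -> True
print(is_rigid_copy(P, Q * 1.1))  # scaled copy is not rigid -> False
```

    In a time-series setting, the same fit applied between consecutive time steps yields the dynamic transformation that replaces storing a mesh per step.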

  6. Performance of GeantV EM Physics Models

    Energy Technology Data Exchange (ETDEWEB)

    Amadio, G.; et al.

    2016-10-14

    The recent progress in parallel hardware architectures with deeper vector pipelines or many-cores technologies brings opportunities for HEP experiments to take advantage of SIMD and SIMT computing models. Launched in 2013, the GeantV project studies performance gains in propagating multiple particles in parallel, improving instruction throughput and data locality in HEP event simulation on modern parallel hardware architecture. Due to the complexity of geometry description and physics algorithms of a typical HEP application, performance analysis is indispensable in identifying factors limiting parallel execution. In this report, we will present design considerations and preliminary computing performance of GeantV physics models on coprocessors (Intel Xeon Phi and NVidia GPUs) as well as on mainstream CPUs.

  7. Performance of GeantV EM Physics Models

    Science.gov (United States)

    Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Cosmo, G.; Duhem, L.; Elvira, D.; Folger, G.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2017-10-01

    The recent progress in parallel hardware architectures with deeper vector pipelines or many-cores technologies brings opportunities for HEP experiments to take advantage of SIMD and SIMT computing models. Launched in 2013, the GeantV project studies performance gains in propagating multiple particles in parallel, improving instruction throughput and data locality in HEP event simulation on modern parallel hardware architecture. Due to the complexity of geometry description and physics algorithms of a typical HEP application, performance analysis is indispensable in identifying factors limiting parallel execution. In this report, we will present design considerations and preliminary computing performance of GeantV physics models on coprocessors (Intel Xeon Phi and NVidia GPUs) as well as on mainstream CPUs.

  8. Performance of GeantV EM Physics Models

    CERN Document Server

    Amadio, G; Apostolakis, J; Aurora, A; Bandieramonte, M; Bhattacharyya, A; Bianchini, C; Brun, R; Canal P; Carminati, F; Cosmo, G; Duhem, L; Elvira, D; Folger, G; Gheata, A; Gheata, M; Goulas, I; Iope, R; Jun, S Y; Lima, G; Mohanty, A; Nikitina, T; Novak, M; Pokorski, W; Ribon, A; Seghal, R; Shadura, O; Vallecorsa, S; Wenzel, S; Zhang, Y

    2017-01-01

    The recent progress in parallel hardware architectures with deeper vector pipelines or many-cores technologies brings opportunities for HEP experiments to take advantage of SIMD and SIMT computing models. Launched in 2013, the GeantV project studies performance gains in propagating multiple particles in parallel, improving instruction throughput and data locality in HEP event simulation on modern parallel hardware architecture. Due to the complexity of geometry description and physics algorithms of a typical HEP application, performance analysis is indispensable in identifying factors limiting parallel execution. In this report, we will present design considerations and preliminary computing performance of GeantV physics models on coprocessors (Intel Xeon Phi and NVidia GPUs) as well as on mainstream CPUs.

  9. Performance Models and Risk Management in Communications Systems

    CERN Document Server

    Harrison, Peter; Rüstem, Berç

    2011-01-01

    This volume covers recent developments in the design, operation, and management of telecommunication and computer network systems in performance engineering and addresses issues of uncertainty, robustness, and risk. Uncertainty regarding loading and system parameters leads to challenging optimization and robustness issues. Stochastic modeling combined with optimization theory ensures the optimum end-to-end performance of telecommunication or computer network systems. In view of the diverse design options possible, supporting models have many adjustable parameters, and choosing the best set for a particular performance objective is delicate and time-consuming. An optimization-based approach determines the optimal possible allocation for these parameters. Researchers and graduate students working at the interface of telecommunications and operations research will benefit from this book. Due to the practical approach, this book will also serve as a reference tool for scientists and engineers in telecommunication ...

  10. Thermal Model Predictions of Advanced Stirling Radioisotope Generator Performance

    Science.gov (United States)

    Wang, Xiao-Yen J.; Fabanich, William Anthony; Schmitz, Paul C.

    2014-01-01

    This paper presents recent thermal model results of the Advanced Stirling Radioisotope Generator (ASRG). The three-dimensional (3D) ASRG thermal power model was built using the Thermal Desktop(trademark) thermal analyzer. The model was correlated with ASRG engineering unit test data and ASRG flight unit predictions from Lockheed Martin's (LM's) I-deas(trademark) TMG thermal model. The auxiliary cooling system (ACS) of the ASRG is also included in the ASRG thermal model. The ACS is designed to remove waste heat from the ASRG so that it can be used to heat spacecraft components. The performance of the ACS is reported under nominal conditions and during a Venus flyby scenario. The results for the nominal case are validated with data from Lockheed Martin. Transient thermal analysis results of ASRG for a Venus flyby with a representative trajectory are also presented. In addition, model results of an ASRG mounted on a Cassini-like spacecraft with a sunshade are presented to show a way to mitigate the high temperatures of a Venus flyby. It was predicted that the sunshade can lower the temperature of the ASRG alternator by 20 C for the representative Venus flyby trajectory. The 3D model also was modified to predict generator performance after a single Advanced Stirling Convertor failure. The geometry of the Microtherm HT insulation block on the outboard side was modified to match deformation and shrinkage observed during testing of a prototypic ASRG test fixture by LM. Test conditions and test data were used to correlate the model by adjusting the thermal conductivity of the deformed insulation to match the post-heat-dump steady state temperatures. Results for these conditions showed that the performance of the still-functioning inboard ACS was unaffected.

  11. Cognition and procedure representational requirements for predictive human performance models

    Science.gov (United States)

    Corker, K.

    1992-01-01

    Models and modeling environments for human performance are becoming significant contributors to early system design and analysis procedures. Issues of levels of automation, physical environment, informational environment, and manning requirements are being addressed by such man/machine analysis systems. The research reported here investigates the close interaction between models of human cognition and models that describe procedural performance. We describe a methodology for the decomposition of aircrew procedures that supports interaction with models of cognition on the basis of procedures observed; that serves to identify cockpit/avionics information sources and crew information requirements; and that provides the structure to support methods for function allocation among crew and aiding systems. Our approach is to develop an object-oriented, modular, executable software representation of the aircrew, the aircraft, and the procedures necessary to satisfy flight-phase goals. We then encode, in a time-based language, taxonomies of the conceptual, relational, and procedural constraints among the cockpit avionics and control system and the aircrew. We have designed and implemented a goals/procedures hierarchic representation sufficient to describe procedural flow in the cockpit. We then execute the procedural representation in simulation software and calculate the values of the flight instruments, aircraft state variables and crew resources using the constraints available from the relationship taxonomies. The system provides a flexible, extensible, manipulable and executable representation of aircrew and procedures that is generally applicable to crew/procedure task analysis. The representation supports developed methods of intent inference, and is extensible to include issues of information requirements and functional allocation. We are attempting to link the procedural representation to models of cognitive functions to establish several intent inference methods.
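
    As a rough illustration of a goals/procedures hierarchy of the kind described above (the class names, procedures and durations below are invented for illustration, not the authors' representation), a goal can be modeled as an object that completes when all of its subgoals and procedures have completed:

```python
# Minimal sketch of an executable goals/procedures hierarchy.
class Procedure:
    def __init__(self, name, duration):
        self.name, self.duration = name, duration

    def execute(self, t):
        # A full model would also update aircraft state and crew resources.
        return t + self.duration

class Goal:
    def __init__(self, name, children=()):
        self.name, self.children = name, list(children)

    def execute(self, t=0.0):
        """Depth-first execution: a goal completes when all of its
        subgoals and procedures have completed, in order."""
        for child in self.children:
            t = child.execute(t)
        return t

approach = Goal("approach", [
    Procedure("set flaps", 5.0),
    Goal("configure avionics", [
        Procedure("tune ILS", 8.0),
        Procedure("set decision height", 4.0),
    ]),
    Procedure("gear down", 3.0),
])
print(approach.execute())  # total elapsed time: 20.0
```

    The hierarchy makes the procedural flow both inspectable (for task analysis) and executable (for simulation), which is the property the abstract emphasizes.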

  12. A PERFORMANCE MANAGEMENT MODEL FOR PHYSICAL ASSET MANAGEMENT

    Directory of Open Access Journals (Sweden)

    J.L. Jooste

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: There has been an emphasis shift from maintenance management towards asset management, where the focus is on reliable and operational equipment and on effective assets at optimum life-cycle costs. A challenge in the manufacturing industry is to develop an asset performance management model that is integrated with business processes and strategies. The authors developed the APM2 model to satisfy that requirement. The model has a generic reference structure and is supported by operational protocols to assist in operations management. It facilitates performance measurement, business integration and continuous improvement, whilst exposing industry to the latest developments in asset performance management.

    AFRIKAANSE OPSOMMING (translated): There has been a shift in emphasis from maintenance management to asset management, where the focus is on reliable and operational equipment, as well as effective assets at optimum life-cycle cost. A challenge in the manufacturing industry is the development of a performance model for assets that is integrated with business processes and strategies. The authors developed the APM2 model to meet this need. The model has a generic reference structure, supported by operational instructions that promote operations management. It facilitates performance management, business integration and continuous improvement, while also exposing industry to the latest developments in asset performance management.

  13. A measurement-based performability model for a multiprocessor system

    Science.gov (United States)

    Ilsueh, M. C.; Iyer, Ravi K.; Trivedi, K. S.

    1987-01-01

    A measurement-based performability model based on real error data collected on a multiprocessor system is described. Model development, from the raw error data to the estimation of cumulative reward, is described. Both the normal and failure behavior of the system are characterized. The measured data show that the holding times in key operational and failure states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different failure types and recovery procedures.
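
    A small Monte Carlo sketch of this kind of semi-Markov reward model is given below. The states, Weibull holding-time parameters, transition probabilities and reward rates are all hypothetical, not the measured values from the study; they only illustrate accumulating reward at a state-dependent rate over non-exponential holding times.

```python
import random

# Hypothetical 3-state semi-Markov reward model.
REWARD = {"normal": 1.0, "degraded": 0.5, "failed": 0.0}  # reward per unit time
TRANS = {
    "normal":   [("degraded", 0.7), ("failed", 0.3)],
    "degraded": [("normal", 0.8), ("failed", 0.2)],
    "failed":   [("normal", 1.0)],
}
SCALE = {"normal": 10.0, "degraded": 2.0, "failed": 1.0}  # Weibull scale
SHAPE = {"normal": 1.5, "degraded": 0.8, "failed": 1.0}   # Weibull shape

def accumulated_reward(t_end, rng):
    """Reward accumulated along one trajectory up to time t_end."""
    state, t, reward = "normal", 0.0, 0.0
    while t < t_end:
        # Non-exponential (Weibull) holding time in the current state.
        hold = rng.weibullvariate(SCALE[state], SHAPE[state])
        dt = min(hold, t_end - t)
        reward += REWARD[state] * dt
        t += dt
        # Sample the next state from the embedded chain.
        u, acc = rng.random(), 0.0
        for nxt, p in TRANS[state]:
            acc += p
            if u <= acc:
                state = nxt
                break
    return reward

rng = random.Random(42)
est = sum(accumulated_reward(1000.0, rng) for _ in range(200)) / 200
print(est)  # mean accumulated reward over 200 trajectories
```

    Averaging accumulated reward over many trajectories gives a Monte Carlo estimate of expected performability; an analytical treatment would instead solve the semi-Markov reward equations directly.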

  14. A personality trait-based interactionist model of job performance.

    Science.gov (United States)

    Tett, Robert P; Burnett, Dawn D

    2003-06-01

    Evidence for situational specificity of personality-job performance relations calls for better understanding of how personality is expressed as valued work behavior. On the basis of an interactionist principle of trait activation (R. P. Tett & H. A. Guterman, 2000), a model is proposed that distinguishes among 5 situational features relevant to trait expression (job demands, distracters, constraints, releasers, and facilitators), operating at task, social, and organizational levels. Trait-expressive work behavior is distinguished from (valued) job performance in clarifying the conditions favoring personality use in selection efforts. The model frames linkages between situational taxonomies (e.g., J. L. Holland's [1985] RIASEC model) and the Big Five and promotes useful discussion of critical issues, including situational specificity, personality-oriented job analysis, team building, and work motivation.

  15. Ergonomic evaluation model of operational room based on team performance

    Directory of Open Access Journals (Sweden)

    YANG Zhiyi

    2017-05-01

    Full Text Available A theoretical calculation model based on the ergonomic evaluation of team performance was proposed in order to carry out the ergonomic evaluation of layout design schemes for the action stations in a multitasking operational room. This model was constructed to calculate and compare the theoretical value of team performance across multiple layout schemes by considering such substantial influencing factors as frequency of communication, distance, angle, importance, human cognitive characteristics and so on. An experiment was finally conducted to verify the proposed model under the criteria of completion time and accuracy rating. As illustrated by the experimental results, the proposed approach is conducive to the prediction and ergonomic evaluation of layout design schemes for the action stations during early design stages, and provides a new theoretical method for the ergonomic evaluation, selection and optimization of layout design schemes.
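
    The paper's actual formula is not reproduced in the abstract. The following weighted-sum sketch is entirely hypothetical and only illustrates how communication frequency, distance, angle and importance could be combined into a single comparable score per layout:

```python
import math

def layout_score(stations, links):
    """Score a layout: stations maps name -> (x, y) position;
    links is a list of (a, b, frequency, importance) communication pairs.
    Pairs that communicate often and matter more should be close together
    and roughly in line of sight, so distance and bearing are penalized."""
    score = 0.0
    for a, b, freq, importance in links:
        ax, ay = stations[a]
        bx, by = stations[b]
        dist = math.hypot(bx - ax, by - ay)
        # Bearing of b from a relative to the +x axis, used here as a
        # crude stand-in for viewing-angle deviation.
        angle = abs(math.atan2(by - ay, bx - ax))
        score += freq * importance / (1.0 + dist + angle)
    return score

layout_a = {"cmd": (0, 0), "radar": (1, 0), "comms": (4, 3)}
layout_b = {"cmd": (0, 0), "radar": (3, 2), "comms": (1, 0)}
links = [("cmd", "radar", 10, 1.0), ("cmd", "comms", 2, 0.5)]
# Higher score = better layout for the given communication pattern;
# layout_a keeps the high-traffic cmd-radar pair close.
print(layout_score(layout_a, links) > layout_score(layout_b, links))
```

    Computing such a score for each candidate scheme makes early-stage comparison of layouts straightforward, before any experiment with completion time and accuracy is run.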

  16. PHARAO laser source flight model: Design and performances

    Energy Technology Data Exchange (ETDEWEB)

    Lévèque, T., E-mail: thomas.leveque@cnes.fr; Faure, B.; Esnault, F. X.; Delaroche, C.; Massonnet, D.; Grosjean, O.; Buffe, F.; Torresi, P. [Centre National d’Etudes Spatiales, 18 avenue Edouard Belin, 31400 Toulouse (France); Bomer, T.; Pichon, A.; Béraud, P.; Lelay, J. P.; Thomin, S. [Sodern, 20 Avenue Descartes, 94451 Limeil-Brévannes (France); Laurent, Ph. [LNE-SYRTE, CNRS, UPMC, Observatoire de Paris, 61 avenue de l’Observatoire, 75014 Paris (France)

    2015-03-15

    In this paper, we describe the design and the main performances of the PHARAO laser source flight model. PHARAO is a laser cooled cesium clock specially designed for operation in space and the laser source is one of the main sub-systems. The flight model presented in this work is the first remote-controlled laser system designed for spaceborne cold atom manipulation. The main challenges arise from mechanical compatibility with space constraints, which impose a high level of compactness, a low electric power consumption, a wide range of operating temperature, and a vacuum environment. We describe the main functions of the laser source and give an overview of the main technologies developed for this instrument. We present some results of the qualification process. The characteristics of the laser source flight model, and their impact on the clock performances, have been verified in operational conditions.

  17. A review of performance measurement’s maturity models

    Directory of Open Access Journals (Sweden)

    María Paula Bertolli

    2017-01-01

    Full Text Available Introduction: In a context as dynamic as today's, SMEs need performance measurement systems (PMS) that are able to generate useful, relevant and reliable information for management. Measuring the maturity of a PMS is an essential step in its evolution toward an ideal state that allows better control of results and the ability to act on them, improving management and decision making. Objective: To develop a bibliographic review to identify and characterize PMS maturity models, recognizing among them the models most feasible to apply in SMEs, in order to contribute to the strengthening of such systems and facilitate effective and timely decision making in organizations. Methodology: The research question defined is: which existing PMS maturity model can be used by industrial SMEs? The Google Scholar database was consulted, using defined search parameters. Based on previously defined criteria, the selected models are compared. Finally, conclusions about these models are drawn. Results: From the results obtained through the bibliographic search in Google Scholar, different criteria were used to select the models to be characterized and compared. The four models selected were those proposed by Wettstein and Kueng, Van Aken, Tangen and Aho. Conclusions: The models considered most adequate are those proposed by Wettstein and Kueng (2002) and Aho (2012), due to their ease of application and low resource requirements. However, as these models do not include an evaluation tool, one has to be defined by the company.

  18. Exploiting GPUs in Virtual Machine for BioCloud

    OpenAIRE

    Jo, Heeseung; Jeong, Jinkyu; Lee, Myoungho; Choi, Dong Hoon

    2013-01-01

    Recently, biological applications have started to be reimplemented to exploit the many cores of GPUs for better computation performance. Therefore, by providing virtualized GPUs to VMs in a cloud computing environment, many biological applications will willingly move into the cloud to enhance their computation performance and utilize virtually unlimited cloud computing resources while reducing computation expenses. In this paper, we propose a BioCloud system architecture that ena...

  19. Herbivory eliminates fitness costs of mutualism exploiters.

    Science.gov (United States)

    Simonsen, Anna K; Stinchcombe, John R

    2014-04-01

    A common empirical observation in mutualistic interactions is the persistence of variation in partner quality and, in particular, the persistence of exploitative phenotypes. For mutualisms between hosts and symbionts, most mutualism theory assumes that exploiters always impose fitness costs on their host. We exposed legume hosts to mutualistic (nitrogen-fixing) and exploitative (non-nitrogen-fixing) symbiotic rhizobia in field conditions, and manipulated the presence or absence of insect herbivory to determine if the costly fitness effects of exploitative rhizobia are context-dependent. Exploitative rhizobia predictably reduced host fitness when herbivores were excluded. However, insects caused greater damage on hosts associating with mutualistic rhizobia, as a consequence of feeding preferences related to leaf nitrogen content, resulting in the elimination of fitness costs imposed on hosts by exploitative rhizobia. Our experiment shows that herbivory is potentially an important factor in influencing the evolutionary dynamic between legumes and rhizobia. Partner choice and host sanctioning are theoretically predicted to stabilize mutualisms by reducing the frequency of exploitative symbionts. We argue that herbivore pressure may actually weaken selection on choice and sanction mechanisms, thus providing one explanation of why host-based discrimination mechanisms may not be completely effective in eliminating nonbeneficial partners. © 2014 The Authors. New Phytologist © 2014 New Phytologist Trust.

  20. A performance model for the communication in fast multipole methods on high-performance computing platforms

    KAUST Repository

    Ibeid, Huda

    2016-03-04

    Exascale systems are predicted to have approximately 1 billion cores, assuming gigahertz cores. Limitations on affordable network topologies for distributed memory systems of such massive scale bring new challenges to the currently dominant parallel programming model. Currently, there are many efforts to evaluate the hardware and software bottlenecks of exascale designs. It is therefore of interest to model application performance and to understand what changes need to be made to ensure extrapolated scalability. The fast multipole method (FMM) was originally developed for accelerating N-body problems in astrophysics and molecular dynamics but has recently been extended to a wider range of problems. Its high arithmetic intensity combined with its linear complexity and asynchronous communication patterns make it a promising algorithm for exascale systems. In this paper, we discuss the challenges for FMM on current parallel computers and future exascale architectures, with a focus on internode communication. We focus on the communication part only; the efficiency of the computational kernels is beyond the scope of the present study. We develop a performance model that considers the communication patterns of the FMM and observe a good match between our model and the actual communication time on four high-performance computing (HPC) systems, when latency, bandwidth, network topology, and multicore penalties are all taken into account. To our knowledge, this is the first formal characterization of internode communication in FMM that validates the model against actual measurements of communication time. The ultimate communication model is predictive in an absolute sense; however, on complex systems, this objective is often out of reach or of a difficulty out of proportion to its benefit when there exists a simpler model that is inexpensive and sufficient to guide coding decisions leading to improved scaling. The current model provides such guidance.
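
    As a heavily simplified illustration of this style of model (all constants and cost terms below are assumptions, not the paper's validated model), internode communication time can be estimated with a latency/bandwidth ("alpha-beta") decomposition of the FMM's global-tree and local-halo exchange phases:

```python
import math

# Illustrative machine parameters (assumed, not measured).
ALPHA = 2e-6   # per-message latency, seconds
BETA = 1e-10   # per-byte transfer time, seconds (~10 GB/s links)

def fmm_comm_time(p, n_cells_local, bytes_per_cell):
    """Estimated communication time for one FMM evaluation on p ranks.

    - Global tree exchange (top levels of the octree): O(log2 p) messages.
    - Local essential tree / halo exchange: messages to the 26 neighboring
      partitions, with volume scaling like the surface of the local domain.
    """
    t_global = math.log2(p) * (ALPHA + bytes_per_cell * BETA)
    halo_cells = n_cells_local ** (2.0 / 3.0)  # surface-to-volume estimate
    t_local = 26.0 * (ALPHA + halo_cells * bytes_per_cell * BETA)
    return t_global + t_local

# Communication grows only logarithmically with the rank count in this
# sketch, which reflects why FMM is considered promising at exascale.
print(fmm_comm_time(64, 10**6, 1000))
print(fmm_comm_time(4096, 10**6, 1000))
```

    Even this crude two-term model captures the guidance role the abstract describes: it shows which phase dominates at a given scale, without modeling topology or multicore penalties.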