WorldWideScience

Sample records for hybrid-neural model application

  1. Adaptive control using a hybrid-neural model: application to a polymerisation reactor

    Directory of Open Access Journals (Sweden)

    Cubillos F.

    2001-01-01

    This work presents the use of a hybrid-neural model for predictive control of a plug flow polymerisation reactor. The hybrid-neural model (HNM) is based on fundamental conservation laws associated with a neural network (NN) used to model the uncertain parameters. By simulation, the performance of this approach was studied for a peroxide-initiated styrene tubular reactor. The HNM was synthesised for a CSTR reactor with a radial basis function neural net (RBFN) used to estimate the reaction rates recursively. The adaptive HNM was incorporated in two model predictive control strategies, a direct synthesis scheme and an optimum steady-state scheme. Tests for servo and regulator control showed excellent behaviour, following different setpoint variations and rejecting perturbations. The good generalisation and training capacities of hybrid models, combined with the simplicity and robustness of the MPC formulations, make an attractive combination for the control of a polymerisation reactor.
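
    The grey-box structure described above can be sketched as a first-principles balance whose uncertain rate term comes from a small RBF network. This is a minimal illustration, not the authors' model: the balance form, parameter values, and RBF weights are all assumed placeholders.

```python
import numpy as np

# Hybrid-neural (grey-box) sketch: a CSTR mass balance supplies the known
# physics, while an RBF network stands in for the uncertain reaction rate.

def rbf_rate(c, centers, widths, weights):
    """RBF network estimate of the reaction rate r(c)."""
    phi = np.exp(-((c - centers) ** 2) / (2.0 * widths ** 2))
    return float(phi @ weights)

def hybrid_step(c, dt, q_over_v, c_in, rate_fn):
    """One explicit Euler step of dc/dt = (q/V)*(c_in - c) - r(c)."""
    return c + dt * (q_over_v * (c_in - c) - rate_fn(c))

# Hypothetical trained parameters (in the paper these would be estimated
# recursively from plant data).
centers = np.array([0.0, 0.5, 1.0])
widths = np.array([0.3, 0.3, 0.3])
weights = np.array([0.0, 0.1, 0.2])

c = 1.0  # initial concentration
for _ in range(10):
    c = hybrid_step(c, dt=0.1, q_over_v=0.5, c_in=2.0,
                    rate_fn=lambda x: rbf_rate(x, centers, widths, weights))
```

    In an adaptive MPC setting, the network weights would be re-estimated online while the balance equation stays fixed.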

  2. Weather forecasting based on hybrid neural model

    Science.gov (United States)

    Saba, Tanzila; Rehman, Amjad; AlGhamdi, Jarallah S.

    2017-02-01

    Making deductions and predictions about the weather has been a challenge throughout mankind's history. Accurate meteorological forecasts help to foresee and handle problems well in time. Different strategies using various machine learning techniques have been investigated in reported forecasting systems. The current research treats weather forecasting as a major challenge for machine learning and inference. Accordingly, this paper presents a hybrid neural model (MLP and RBF) to enhance the accuracy of weather forecasting. The proposed hybrid model ensures precise forecasting tailored to the characteristics of weather-prediction frameworks. The study concentrates on data representing Saudi Arabian weather. The main input features employed to train the individual and hybrid neural networks include average dew point, minimum temperature, maximum temperature, mean temperature, average relative humidity, precipitation, normal wind speed, high wind speed and average cloudiness. The output layer is composed of two neurons representing rainy and dry weather. Moreover, a trial-and-error approach is adopted to select an appropriate number of inputs to the hybrid neural network. The correlation coefficient, RMSE and scatter index are the standard yardsticks adopted to measure forecast accuracy. Individually, MLP forecasting results are better than those of RBF; however, the proposed simplified hybrid neural model achieves better forecasting accuracy than both individual networks. Additionally, the results are better than those reported in the state of the art, using a simple neural structure that reduces training time and complexity.
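
    One simple way to realise a two-expert hybrid with a two-neuron (rainy/dry) output, sketched below, is to average the class probabilities of the MLP and RBF experts. This is an illustration of the combination idea only; the paper's actual fusion scheme, features, and weights are not reproduced here.

```python
import numpy as np

# Hybrid two-class decision from two experts (MLP and RBF), by averaging
# their softmax class probabilities. Logit values are illustrative.

def softmax(z):
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def hybrid_predict(mlp_logits, rbf_logits):
    p = 0.5 * (softmax(mlp_logits) + softmax(rbf_logits))
    return ("rainy", "dry")[int(np.argmax(p))], p

label, p = hybrid_predict(np.array([0.2, 1.1]), np.array([-0.3, 0.9]))
```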

  4. Hybrid neural network model for the design of beam subjected to ...

    Indian Academy of Sciences (India)

    This paper demonstrates the applicability of Artificial Neural Networks (ANN) and Genetic Algorithms (GA) for the design of beams subjected to moment and shear. A hybrid neural network model which combines the features of feed forward neural networks and genetic algorithms has been developed for the design of beam ...

  5. A hybrid neural network model for noisy data regression.

    Science.gov (United States)

    Lee, Eric W M; Lim, Chee Peng; Yuen, Richard K K; Lo, S M

    2004-04-01

    A hybrid neural network model, based on the fusion of fuzzy adaptive resonance theory (fuzzy ART, or FA) and the general regression neural network (GRNN), is proposed in this paper. Both FA and the GRNN are incremental learning systems and are very fast in network training. The proposed hybrid model, denoted GRNNFA, retains these advantages while reducing the computational requirements of calculating and storing the kernel information. A clustering version of the GRNN is designed, with data compression by FA for noise removal. An adaptive gradient-based kernel width optimization algorithm has also been devised. Convergence of the gradient descent algorithm can be accelerated by geometric incremental growth of the updating factor. A series of experiments with four benchmark datasets was conducted to assess and compare the effectiveness of GRNNFA with that of other approaches. The GRNNFA model is also employed in a novel application: predicting the evacuation time of patrons at typical karaoke centers in Hong Kong in the event of fire. The results positively demonstrate the applicability of GRNNFA to noisy data regression problems.
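
    The GRNN half of this model is essentially Nadaraya-Watson kernel regression over stored prototypes; in GRNNFA, fuzzy ART would first compress the noisy samples into a few prototypes. The sketch below supplies the prototypes directly for illustration; the data and kernel width are assumed values, not the paper's.

```python
import numpy as np

# GRNN prediction: a kernel-weighted average of prototype targets.

def grnn_predict(x, prototypes, targets, sigma=0.5):
    d2 = (prototypes - x) ** 2
    k = np.exp(-d2 / (2.0 * sigma ** 2))     # Gaussian kernel activations
    return float((k * targets).sum() / k.sum())

protos = np.array([0.0, 1.0, 2.0])           # e.g. FA cluster centres
targs = np.array([0.0, 1.0, 4.0])            # noisy samples of y = x^2

y_hat = grnn_predict(1.0, protos, targs)
```

    Compressing many noisy samples into a handful of prototypes is what reduces the kernel storage and evaluation cost that the abstract mentions.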

  6. Hybrid neural network bushing model for vehicle dynamics simulation

    Energy Technology Data Exchange (ETDEWEB)

    Sohn, Jeong Hyun [Pukyong National University, Busan (Korea, Republic of); Lee, Seung Kyu [Hyosung Corporation, Changwon (Korea, Republic of); Yoo, Wan Suk [Pusan National University, Busan (Korea, Republic of)

    2008-12-15

    Although the linear model has been widely used to model bushings in vehicle suspension systems, it cannot express the nonlinear characteristics of a bushing with respect to amplitude and frequency. An artificial neural network model was suggested to capture the hysteretic responses of bushings. This model, however, often diverges, due to the uncertainties of the neural network, under unexpected excitation inputs. In this paper, a hybrid neural network bushing model combining a linear model and a neural network is suggested. The linear model represents the linear stiffness and damping effects, and the artificial neural network takes into account the hysteretic responses. A rubber test was performed to capture the bushing characteristics, with sine excitation applied at different frequencies and amplitudes. Random test results were used to update the weighting factors of the neural network model. It is shown that the proposed model is more robust than a simple neural network model under step excitation input. A full car simulation was carried out to verify the proposed bushing models, and the hybrid model results were almost identical to those of the linear model under several maneuvers.
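
    The decomposition described above, a linear spring-damper term plus a learned hysteretic residual, can be sketched in a few lines. The stiffness, damping, and residual function below are illustrative placeholders, not the measured values from the rubber tests.

```python
# Hybrid bushing force: total force = linear part + neural residual.
# The residual would be a trained network; here a stub lambda stands in.

def bushing_force(x, v, k=1000.0, c=50.0, nn_residual=lambda x, v: 0.0):
    linear = k * x + c * v                 # linear stiffness and damping
    return linear + nn_residual(x, v)      # NN adds the hysteretic part

f_linear = bushing_force(0.01, 0.1)        # pure linear model
f_hybrid = bushing_force(0.01, 0.1,
                         nn_residual=lambda x, v: 2.0 * x * abs(v))
```

    Because the linear term carries most of the force, the network only has to learn a bounded correction, which is what makes the hybrid more robust than a pure NN model under unexpected inputs.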

  7. Hybrid Neural Network Model of an Industrial Ethanol Fermentation Process Considering the Effect of Temperature

    Science.gov (United States)

    Mantovanelli, Ivana C. C.; Rivera, Elmer Ccopa; da Costa, Aline C.; Filho, Rubens Maciel

    In this work a procedure for the development of a robust mathematical model for an industrial alcoholic fermentation process was evaluated. The proposed model is a hybrid neural model, which combines mass and energy balance equations with functional link networks to describe the kinetics. These networks have been shown to have good nonlinear approximation capability, while the estimation of their weights remains linear. The proposed model considers the effect of temperature on the kinetics and has its neural network weights re-estimated whenever a change in operating conditions occurs. This allows the model to track the system behavior as operating conditions change.
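
    The key property of a functional link network, linear weight estimation despite nonlinear approximation, can be sketched as follows: the input is expanded through fixed nonlinear basis functions, so re-estimating the weights is an ordinary least-squares solve. The basis choice and target function are illustrative assumptions.

```python
import numpy as np

# Functional link sketch: fixed nonlinear expansion + linear least squares,
# cheap enough to redo whenever operating conditions change.

def expand(x):
    return np.column_stack([np.ones_like(x), x, x ** 2, np.sin(x)])

x = np.linspace(0.0, 3.0, 40)
y = 1.0 + 2.0 * x + 0.5 * np.sin(x)   # stand-in "kinetics" to approximate

w, *_ = np.linalg.lstsq(expand(x), y, rcond=None)   # linear weight fit
y_hat = expand(x) @ w
```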

  8. A hybrid neural network structure for application to nondestructive TRU waste assay

    Energy Technology Data Exchange (ETDEWEB)

    Becker, G. [Idaho National Engineering Lab., Idaho Falls, ID (United States)

    1995-12-31

    The determination of transuranic (TRU) and associated radioactive material quantities entrained in waste forms is a necessary component of waste characterization. Measurement performance requirements are specified in the National TRU Waste Characterization Program quality assurance plan, for which compliance must be demonstrated prior to the transportation and disposition of wastes. With respect to this criterion, the existing TRU nondestructive waste assay (NDA) capability is inadequate for a significant fraction of the US Department of Energy (DOE) complex waste inventory. This is a result of the general application of safeguard-type measurement and calibration schemes to waste form configurations. Incompatibilities between such measurement methods and actual waste form configurations complicate regulatory compliance demonstration and illustrate the need for an alternative measurement interpretation paradigm. Hence, it appears necessary to supplement, or perhaps restructure, the perceived solution and approach to the waste NDA problem. The first step is to understand the magnitude of the waste matrix/source attribute space associated with the waste form configurations in inventory, and how this creates complexities and unknowns for existing NDA methods. Once defined and/or bounded, a conceptual method must be developed that specifies the necessary tools and the framework in which the tools are used. A promising framework is a hybridized neural network structure. Discussed are some typical complications associated with conventional waste NDA techniques and how improvements can be obtained through the application of neural networks.

  9. The nickel ion removal prediction model from aqueous solutions using a hybrid neural genetic algorithm.

    Science.gov (United States)

    Hoseinian, Fatemeh Sadat; Rezai, Bahram; Kowsari, Elaheh

    2017-12-15

    Prediction of Ni(II) removal during ion flotation is necessary for increasing process efficiency through suitable modeling and simulation. In this regard, a new predictive model based on a hybrid neural genetic algorithm (GANN) was developed to predict Ni(II) ion removal and water removal from aqueous solutions during ion flotation. A multi-layer GANN model was trained to develop a predictive model based on the important variables affecting Ni(II) ion flotation. The input variables of the model were pH, collector concentration, frother concentration, impeller speed and flotation time, while the removal percentages of Ni(II) ions and water during ion flotation were the outputs. The most effective input variables on Ni(II) removal and water removal were evaluated using sensitivity analysis, which shows that all input variables have a significant impact on the outputs. The results show that the proposed GANN models can be used to predict Ni(II) removal and water removal during ion flotation.
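
    The GANN idea, using a genetic algorithm to search network weights rather than backpropagation, is sketched below on a deliberately tiny one-neuron model. The target mapping, population sizes, and GA operators are all illustrative assumptions, not the paper's configuration.

```python
import random
random.seed(0)

# GA searching the weight vector of a tiny linear "network".
# Fitness is negative squared error (higher is better).

def model(w, x):
    return w[0] * x + w[1]

def fitness(w, data):
    return -sum((model(w, x) - y) ** 2 for x, y in data)

data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]   # samples of y = 2x + 1

pop = [[random.uniform(-4, 4), random.uniform(-4, 4)] for _ in range(30)]
for _ in range(60):
    pop.sort(key=lambda w: fitness(w, data), reverse=True)
    parents = pop[:10]                          # elitist selection
    children = []
    for _ in range(20):
        a, b = random.sample(parents, 2)        # averaging crossover
        children.append([(a[i] + b[i]) / 2 + random.gauss(0, 0.1)
                         for i in range(2)])    # Gaussian mutation
    pop = parents + children

best = max(pop, key=lambda w: fitness(w, data))
```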

  10. Hybrid Neural Network Approach Based Tool for the Modelling of Photovoltaic Panels

    Directory of Open Access Journals (Sweden)

    Antonino Laudani

    2015-01-01

    A hybrid neural network approach based tool for identifying the photovoltaic one-diode model is presented. The generalization capabilities of neural networks are used together with the robustness of the reduced form of the one-diode model. Indeed, from the studies performed by the authors and the works present in the literature, it was found that a direct computation of the five parameters via a multiple-input, multiple-output neural network is a very difficult task. The reduced form consists of a series of explicit formulae supporting the neural network which, in our case, is aimed at predicting just two of the five parameters identifying the model; the other three parameters are computed by the reduced form. The present hybrid approach is efficient in terms of computational cost and accurate in the estimation of the five parameters. It constitutes a complete and extremely easy-to-use tool suitable for implementation in a microcontroller-based architecture. Validation is performed on about 10000 PV panels belonging to the California Energy Commission database.

  11. A Hybrid Neural Network Model for Sales Forecasting Based on ARIMA and Search Popularity of Article Titles

    Directory of Open Access Journals (Sweden)

    Hani Omar

    2016-01-01

    Enhancing sales and operations planning through forecasting analysis and business intelligence is demanded in many industries and enterprises. Publishing industries usually pick attractive titles and headlines for their stories to increase sales, since popular article titles and headlines can attract readers to buy magazines. In this paper, information retrieval techniques are adopted to extract words from article titles. The popularity measures of article titles are then analyzed by using the search indexes obtained from the Google search engine. Backpropagation Neural Networks (BPNNs) have successfully been used to develop prediction models for sales forecasting. In this study, we propose a novel hybrid neural network model for sales forecasting based on the prediction result of time series forecasting and the popularity of article titles. The proposed model uses the historical sales data, the popularity of article titles, and the prediction result of a time series Autoregressive Integrated Moving Average (ARIMA) forecasting method to learn a BPNN-based forecasting model. Our proposed forecasting model is experimentally evaluated by comparison with conventional sales prediction techniques. The experimental results show that our proposed forecasting method outperforms conventional techniques that do not consider the popularity of title words.
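
    The feature layout described above, historical sales, a title-popularity score, and a time-series forecast fed into a BPNN, is sketched below. The ARIMA step is replaced with a simple drift forecast, and the network weights are random placeholders rather than trained values.

```python
import numpy as np

def drift_forecast(series):
    """Naive stand-in for the ARIMA component: last value + mean step."""
    return series[-1] + np.diff(series).mean()

def bpnn_forward(features, w_hidden, w_out):
    h = np.tanh(features @ w_hidden)       # one hidden layer
    return float(h @ w_out)

sales = np.array([100.0, 110.0, 125.0, 130.0])
popularity = 0.8                            # e.g. normalized search index

forecast_feature = drift_forecast(sales)
x = np.array([sales[-1], popularity, forecast_feature])

rng = np.random.default_rng(0)              # untrained placeholder weights
w_hidden = rng.normal(size=(3, 4)) * 0.01
w_out = rng.normal(size=4)

y_hat = bpnn_forward(x, w_hidden, w_out)
```

    In the paper, backpropagation would fit the weights so that `y_hat` tracks actual sales.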

  14. A Hybrid Neural Network and H-P Filter Model for Short-Term Vegetable Price Forecasting

    Directory of Open Access Journals (Sweden)

    Youzhu Li

    2014-01-01

    This paper is concerned with time series data for vegetable prices, which have a great impact on daily life. An accurate price forecasting method and an early-warning system for the vegetable market are urgently needed. The time series price data contain both linear and nonlinear patterns, so neither a conventional linear forecasting model nor a neural network alone is adequate for modeling and predicting them: the linear forecasting model cannot handle nonlinear relationships, while the neural network model alone cannot handle both linear and nonlinear patterns at the same time. The linear Hodrick-Prescott (H-P) filter can extract the trend and cyclical components from time series data. We predict the linear and nonlinear patterns separately and then combine the two parts linearly to produce a forecast of the original data. This study proposes a hybrid neural network structure based on an H-P filter that learns the trend and seasonal patterns separately. The experiments use vegetable price data to evaluate the model. Comparisons with the autoregressive integrated moving average method and back propagation artificial neural network methods show that our method has higher accuracy than the others.
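
    The H-P decomposition step can be sketched directly: the trend t minimizes the squared fit error plus a smoothness penalty, which reduces to solving (I + λ·DᵀD)·t = y with D the second-difference operator; the cyclical part is the residual. The series and smoothing parameter below are illustrative.

```python
import numpy as np

# Hodrick-Prescott filter via its closed-form linear system.

def hp_filter(y, lam=10.0):
    n = len(y)
    D = np.zeros((n - 2, n))             # second-difference matrix
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    trend = np.linalg.solve(np.eye(n) + lam * (D.T @ D), y)
    return trend, y - trend              # trend and cyclical components

t = np.arange(12, dtype=float)
y = 0.5 * t + np.sin(t)                  # trend plus a cyclical component
trend, cycle = hp_filter(y)
```

    In the hybrid scheme, the trend and cyclical series would then be modeled by separate networks and their forecasts recombined.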

  15. A multi-scale hybrid neural network retrieval model for dust storm detection, a study in Asia

    Science.gov (United States)

    Wong, Man Sing; Xiao, Fei; Nichol, Janet; Fung, Jimmy; Kim, Jhoon; Campbell, James; Chan, P. W.

    2015-05-01

    Dust storms are known to have adverse effects on human health and a significant impact on weather, air quality, the hydrological cycle, and ecosystems. Atmospheric dust loading is also one of the large uncertainties in global climate modeling, due to its significant impact on the radiation budget and atmospheric stability. Observation of dust storms in humid tropical south China (e.g. Hong Kong) is challenging due to high industrial pollution from the nearby Pearl River Delta region. This study develops a method for dust storm detection by combining ground station observations (PM10 concentration, AERONET data), geostationary satellite images (MTSAT), and numerical weather and climate forecasting products (WRF/Chem). The method is based on a hybrid neural network (NN) retrieval model operating at two scales: (i) a NN model for near real-time detection of dust storms at the broader regional scale; (ii) a NN model for detailed dust storm mapping over Hong Kong and Taiwan. A feed-forward multilayer perceptron (MLP) NN, trained using the back propagation (BP) algorithm, was developed and validated by the k-fold cross validation approach. The accuracy of the near real-time detection MLP-BP network is 96.6%, and the accuracy of the detailed MLP-BP neural network for Hong Kong and Taiwan is 74.8%. This newly automated multi-scale hybrid method can be used to give advance near real-time mapping of dust storms for environmental authorities and the public. It is also beneficial for identifying spatial locations of adverse air quality conditions, and for estimating the low visibility associated with dust events for port and airport authorities.
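
    The k-fold validation used to assess such detection networks can be sketched as an index-splitting routine: the data indices are partitioned into k folds, and each fold is held out once. Fold counts and sizes below are illustrative.

```python
# Minimal k-fold cross-validation index generator.

def kfold_indices(n, k):
    folds = [list(range(i, n, k)) for i in range(k)]   # interleaved folds
    splits = []
    for held_out in range(k):
        test = folds[held_out]
        train = [i for f in range(k) if f != held_out for i in folds[f]]
        splits.append((sorted(train), test))
    return splits

splits = kfold_indices(10, 5)   # 5 (train, test) partitions of 10 samples
```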

  16. Determination of inhibition in the enzymatic hydrolysis of cellobiose using hybrid neural modeling

    Directory of Open Access Journals (Sweden)

    F. C. Corazza

    2005-03-01

    Neural networks and hybrid models were used to study the substrate and product inhibition observed in the enzymatic hydrolysis of cellobiose at 40ºC, 50ºC and 55ºC, pH 4.8, using cellobiose solutions with or without the addition of exogenous glucose. Firstly, the initial velocity method and nonlinear fitting with Statistica® were used to determine the kinetic parameters for either the uncompetitive or the competitive substrate inhibition model at negligible product concentration and cellobiose from 0.4 to 2.0 g/L. Secondly, for six different models of substrate and product inhibition and data from low to high cellobiose conversions in a batch reactor, neural networks were used to fit the product inhibition parameter to the mass balance equations derived for each model. The two models found to be best were (1) noncompetitive inhibition by substrate with competitive inhibition by product, and (2) uncompetitive inhibition by substrate with competitive inhibition by product; however, these models' correlation coefficients were quite close. To distinguish between them, hybrid models consisting of neural networks and first-principles equations were used to select the best inhibition model based on the smallest norm observed, and the model with noncompetitive inhibition by substrate and competitive inhibition by product was shown to be the best predictor of cellobiose hydrolysis reactor behavior.
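
    As a concrete example of one candidate model class, the sketch below encodes a standard rate law combining uncompetitive substrate inhibition with competitive product inhibition. The functional form is the textbook one, and the parameter values are placeholders, not the paper's fitted constants.

```python
# v = Vmax*S / (Km*(1 + P/Kp) + S*(1 + S/Ki))
# uncompetitive substrate inhibition + competitive product inhibition.

def hydrolysis_rate(s, p, vmax=1.0, km=0.5, ki=5.0, kp=2.0):
    return vmax * s / (km * (1.0 + p / kp) + s * (1.0 + s / ki))

v_no_product = hydrolysis_rate(1.0, 0.0)
v_with_product = hydrolysis_rate(1.0, 1.0)   # added glucose slows hydrolysis
```

    In the hybrid scheme, rate laws like this provide the first-principles part while a network fits the inhibition parameter from batch reactor data.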

  17. Hybrid neural modelling of an anaerobic digester with respect to biological constraints.

    Science.gov (United States)

    Karama, A; Bernard, O; Gouzé, J L; Benhammou, A; Dochain, D

    2001-01-01

    A hybrid model for an anaerobic digestion process is proposed. The fermentation is assumed to be performed in two steps, acidogenesis and methanogenesis, by two bacterial populations. The model is based on mass balance equations, and the bacterial growth rates are represented by neural networks. In order to guarantee the biological meaning of the hybrid model (positivity of the concentrations, boundedness, saturation or inhibition of the growth rates) outside the training data set, a method that imposes constraints in the neural network is proposed. The method is applied to experimental data from a fixed bed reactor.

  18. Hybrid neural network-phenomenological model to calculate the separation parameters of a gas centrifuge; Calculo dos parametros separativos de centrifuga a gas atraves de modelos de redes neurais hibridas

    Energy Technology Data Exchange (ETDEWEB)

    Crus, Maria Ursulina de L. [Centro Tecnologico da Marinha em Sao Paulo (CTMSP), Sao Paulo, SP (Brazil)]. E-mail: lirusbr@yahoo.com.br; Nascimento, Claudio A.O. [Sao Paulo Univ., SP (Brazil). Escola Politecnica. Dept. de Engenharia Quimica]. E-mail: oller@usp.br; Migliavacca, Sylvana C.P. [Instituto de Pesquisas Energeticas e Nucleares (IPEN), Sao Paulo, SP (Brazil)]. E-mail: scavedon@net.ipen.br

    2005-07-01

    The neural network model for the prediction and optimization of the separation parameters of a gas centrifuge for uranium enrichment has proved to be an efficient tool, but it requires a reliable database and extensive knowledge on the part of the experimenter to define the input and output variables. In the present work, the authors use a hybrid neural network-phenomenological approach to model the separative parameters of a gas centrifuge. The neural network part of the model is a feed-forward neural network, trained with the back-propagation algorithm, that receives the input data and calculates the internal parameters that cannot be measured experimentally, for the phenomenological part of the model. The latter receives the input data plus the internal parameters calculated by the neural network and solves a system of equations defined by the diffusion equation (a differential equation), mass balances and the value balance, finally giving the separative power of the centrifuge. This kind of model has the advantage that the neural network can learn and generalize a process from a data set while the basic balances and physical conditions are still respected. This approach can treat the neural network as a parameter estimator for the phenomenological model, or the phenomenological set of equations as a constraint on the neural network. (author)

  19. Clinical application of modified bag-of-features coupled with hybrid neural-based classifier in dengue fever classification using gene expression data.

    Science.gov (United States)

    Chatterjee, Sankhadeep; Dey, Nilanjan; Shi, Fuqian; Ashour, Amira S; Fong, Simon James; Sen, Soumya

    2017-09-11

    Dengue fever detection and classification have a vital role due to recent outbreaks of different kinds of dengue fever. Recent advancements in microarray technology can be employed for such classification. Several studies have established that the gene selection phase plays a significant role in classifier performance. Accordingly, the current study focuses on detecting two different variations, namely dengue fever (DF) and dengue hemorrhagic fever (DHF). A modified bag-of-features method is proposed to select the most promising genes for the classification process. Afterward, a modified cuckoo search optimization algorithm is engaged to support the artificial neural network (ANN-MCS) in classifying the unknown subjects into three different classes, namely DF, DHF, and a third class containing convalescent and normal cases. The proposed method has been compared with three other well-known classifiers, namely the multilayer perceptron feed-forward network (MLP-FFN), an artificial neural network (ANN) trained with cuckoo search (ANN-CS), and an ANN trained with PSO (ANN-PSO). Experiments were carried out with different numbers of clusters for the initial bag-of-features-based feature selection phase. After obtaining the reduced dataset, the hybrid ANN-MCS model was employed for the classification process. The results have been compared in terms of confusion matrix-based performance metrics. The experimental results indicate a highly statistically significant improvement of the proposed classifier over the traditional ANN-CS model.

  20. Simulation of Missile Autopilot with Two-Rate Hybrid Neural Network System

    Directory of Open Access Journals (Sweden)

    ASTROV, I.

    2007-04-01

    This paper proposes a two-rate hybrid neural network system, which consists of two artificial neural network subsystems used as controllers for the dynamic subsystems, since such neuromorphic controllers are especially suitable for controlling complex systems. An illustrative example, two-rate neural network hybrid control of a decomposed stochastic model of a rigid guided missile over different operating conditions, was carried out using the proposed two-rate state-space decomposition technique. This example demonstrates that the technique results in simplified low-order autonomous control subsystems with various speeds of actuation, and shows the quality of the proposed technique. The obtained results show that the control tasks for the autonomous subsystems can be solved more effectively than for the original system. The simulation and animation results, obtained using the Simulink software package, demonstrate that this technique would work for real-time stochastic systems.

  1. A hybrid neural network system for prediction and recognition of promoter regions in human genome.

    Science.gov (United States)

    Chen, Chuan-Bo; Li, Tao

    2005-05-01

    This paper proposes a high-specificity and high-sensitivity algorithm called PromPredictor for recognizing promoter regions in the human genome. PromPredictor extracts compositional features and CpG island information from genomic sequence, feeds these features as input to a hybrid neural network system (HNN), and then applies the HNN for prediction. It combines a novel promoter recognition model, coding theory, feature selection and dimensionality reduction with a machine learning algorithm. Evaluation on human chromosome 22 yielded approximately 66% sensitivity and approximately 48% specificity. Comparison with two other systems revealed that our method had superior sensitivity and specificity in predicting promoter regions. PromPredictor is written in MATLAB and requires MATLAB to run. PromPredictor is freely available at http://www.whtelecom.com/Prompredictor.htm.
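
    Two of the compositional features such predictors typically compute, GC fraction and the CpG observed/expected ratio over a sequence window, can be sketched as below. This illustrates the standard definitions, not PromPredictor's exact feature set; the example sequence is made up.

```python
# GC fraction and CpG observed/expected ratio for a sequence window.

def cpg_features(seq):
    seq = seq.upper()
    n = len(seq)
    g, c = seq.count("G"), seq.count("C")
    cg_dinuc = sum(1 for i in range(n - 1) if seq[i:i + 2] == "CG")
    gc_frac = (g + c) / n
    # observed CpG count over expected count (C*G/n), guarding zero
    obs_exp = cg_dinuc * n / (c * g) if c and g else 0.0
    return gc_frac, obs_exp

gc, ratio = cpg_features("CGCGTACGCG")
```

    CpG islands are conventionally flagged where the GC fraction and observed/expected ratio exceed fixed thresholds; features like these then feed the network input layer.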

  2. Hybrid neural network model for the design of beam subjected to ...

    Indian Academy of Sciences (India)

    MS received 25 September 2006; revised 8 March 2007. Abstract. There is no direct method for design of beams. In general the dimensions of the beam and reinforcement are initially assumed and then the interaction formula is used to verify the suitability of chosen dimensions. This approach necessitates few trials for ...

  3. Hybrid neural network and fuzzy logic approaches for rendezvous and capture in space

    Science.gov (United States)

    Berenji, Hamid R.; Castellano, Timothy

    1991-01-01

    The nonlinear behavior of many practical systems and the unavailability of quantitative data regarding their input-output relations make the analytical modeling of these systems very difficult. On the other hand, approximate-reasoning-based controllers, which do not require analytical models, have demonstrated a number of successful applications, such as the subway system in the city of Sendai. These applications have mainly concentrated on emulating the performance of a skilled human operator in the form of linguistic rules. However, the process of learning and tuning the control rules to achieve the desired performance remains a difficult task. Fuzzy logic control is based on fuzzy set theory. A fuzzy set is an extension of a crisp set: crisp sets allow only full membership or no membership at all, whereas fuzzy sets allow partial membership; in other words, an element may partially belong to a set.

  4. Distributed Parameter Modelling Applications

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Cameron, Ian; Gani, Rafiqul

    2011-01-01

    Here the issue of distributed parameter models is addressed. Spatial variations as well as time are considered important. Several applications, both steady state and dynamic, are given. These relate to the processing of oil shale, the granulation of industrial fertilizers... and the development of a short-path evaporator. The oil shale processing problem illustrates the interplay amongst particle flows in rotating drums, heat and mass transfer between solid and gas phases. The industrial application considers the dynamics of an Alberta-Taciuk processor, commonly used in shale oil and oil...

  5. Model theory and applications

    CERN Document Server

    Belegradek, OV

    1999-01-01

    This volume is a collection of papers on model theory and its applications. The longest paper, "Model Theory of Unitriangular Groups" by O. V. Belegradek, forms a subtle general theory behind Mal'tsev's famous correspondence between rings and groups. This is the first published paper on the topic. Given the present model-theoretic interest in algebraic groups, Belegradek's work is of particular interest to logicians and algebraists. The rest of the collection consists of papers on various questions of model theory, mainly on stability theory. Contributors are leading Russian researchers in the

  6. A Hybrid Neural Network-Genetic Algorithm Technique for Aircraft Engine Performance Diagnostics

    Science.gov (United States)

    Kobayashi, Takahisa; Simon, Donald L.

    2001-01-01

    In this paper, a model-based diagnostic method, which utilizes Neural Networks and Genetic Algorithms, is investigated. Neural networks are applied to estimate the engine internal health, and Genetic Algorithms are applied for sensor bias detection and estimation. This hybrid approach takes advantage of the nonlinear estimation capability provided by neural networks while improving the robustness to measurement uncertainty through the application of Genetic Algorithms. The hybrid diagnostic technique also has the ability to rank multiple potential solutions for a given set of anomalous sensor measurements in order to reduce false alarms and missed detections. The performance of the hybrid diagnostic technique is evaluated through some case studies derived from a turbofan engine simulation. The results show this approach is promising for reliable diagnostics of aircraft engines.
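    The genetic-algorithm half of the approach can be sketched in miniature: evolve a candidate sensor-bias vector until the biased measurements match the model prediction. The sensor values, population size, and mutation schedule below are all hypothetical, and the real method couples this search with a neural-network health estimate:

```python
import random

random.seed(0)

# model-predicted sensor values for a healthy engine (hypothetical numbers)
predicted = [520.0, 14.7, 3200.0]
# actual measurements: the first sensor carries a +12.0 bias
measured = [532.0, 14.7, 3200.0]

def fitness(bias):
    # negative squared residual after removing the candidate bias vector
    return -sum((m - p - b) ** 2 for m, p, b in zip(measured, predicted, bias))

def mutate(bias, sigma):
    return [b + random.gauss(0.0, sigma) for b in bias]

pop = [[random.uniform(-20.0, 20.0) for _ in range(3)] for _ in range(40)]
for gen in range(80):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]            # elitist selection keeps the best candidates
    sigma = 2.0 * 0.9 ** gen      # shrinking mutation step
    pop = parents + [mutate(random.choice(parents), sigma) for _ in range(30)]

best = max(pop, key=fitness)      # should settle near [12, 0, 0]
```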

  7. Mathematical modeling with multidisciplinary applications

    CERN Document Server

    Yang, Xin-She

    2013-01-01

    Features mathematical modeling techniques and real-world processes with applications in diverse fields Mathematical Modeling with Multidisciplinary Applications details the interdisciplinary nature of mathematical modeling and numerical algorithms. The book combines a variety of applications from diverse fields to illustrate how the methods can be used to model physical processes, design new products, find solutions to challenging problems, and increase competitiveness in international markets. Written by leading scholars and international experts in the field, the

  8. Finite mathematics models and applications

    CERN Document Server

    Morris, Carla C

    2015-01-01

    Features step-by-step examples based on actual data and connects fundamental mathematical modeling skills and decision making concepts to everyday applicability Featuring key linear programming, matrix, and probability concepts, Finite Mathematics: Models and Applications emphasizes cross-disciplinary applications that relate mathematics to everyday life. The book provides a unique combination of practical mathematical applications to illustrate the wide use of mathematics in fields ranging from business, economics, finance, management, operations research, and the life and social sciences.

  9. Hybrid Neural-Network: Genetic Algorithm Technique for Aircraft Engine Performance Diagnostics Developed and Demonstrated

    Science.gov (United States)

    Kobayashi, Takahisa; Simon, Donald L.

    2002-01-01

    As part of the NASA Aviation Safety Program, a unique model-based diagnostics method that employs neural networks and genetic algorithms for aircraft engine performance diagnostics has been developed and demonstrated at the NASA Glenn Research Center against a nonlinear gas turbine engine model. Neural networks are applied to estimate the internal health condition of the engine, and genetic algorithms are used for sensor fault detection, isolation, and quantification. This hybrid architecture combines the excellent nonlinear estimation capabilities of neural networks with the capability to rank the likelihood of various faults given a specific sensor suite signature. The method requires a significantly smaller data training set than a neural network approach alone does, and it performs the combined engine health monitoring objectives of performance diagnostics and sensor fault detection and isolation in the presence of nominal and degraded engine health conditions.

  10. A Hybrid Neural Network and Virtual Reality System for Spatial Language Processing

    OpenAIRE

    Martinez, Guillermina; Cangelosi, Angelo; Coventry, Kenny

    2001-01-01

    This paper describes a neural network model for the study of spatial language. It deals with both geometric and functional variables, which have been shown to play an important role in the comprehension of spatial prepositions. The network is integrated with a virtual reality interface for the direct manipulation of geometric and functional factors. The training uses experimental stimuli and data. Results show that the networks reach low training and generalization errors. Cluster analyses of...

  11. Modelling Foundations and Applications

    DEFF Research Database (Denmark)

    and selected from 81 submissions. Papers on all aspects of MDE were received, including topics such as architectural modelling and product lines, code generation, domain-specific modeling, metamodeling, model analysis and verification, model management, model transformation and simulation. The breadth of topics...

  12. Hybrid neural intelligent system to predict business failure in small-to-medium-size enterprises.

    Science.gov (United States)

    Borrajo, M Lourdes; Baruque, Bruno; Corchado, Emilio; Bajo, Javier; Corchado, Juan M

    2011-08-01

    In recent years there has been a growing need for innovative tools that can help small to medium-sized enterprises predict business failure as well as financial crises. In this study we present a novel hybrid intelligent system aimed at monitoring the modus operandi of companies and predicting possible failures. This system is implemented by means of a neural-based multi-agent system that models the different actors of the companies as agents. The core of the multi-agent system is a type of agent that incorporates a case-based reasoning system and automates the business control process and failure prediction. The stages of the case-based reasoning system are implemented by means of web services: the retrieval stage uses an innovative weighted voting summarization of self-organizing map ensembles, and the reuse stage is implemented by means of a radial basis function neural network. An initial prototype was developed, and the results obtained for small and medium enterprises in a real scenario are presented.

  13. Modeling Philosophies and Applications

    Science.gov (United States)

    All models begin with a framework and a set of assumptions and limitations that go along with that framework. In terms of fracing and RA, there are several places where models and parameters must be chosen to complete hazard identification.

  14. Behavior Modeling -- Foundations and Applications

    DEFF Research Database (Denmark)

    This book constitutes revised selected papers from the six International Workshops on Behavior Modelling - Foundations and Applications, BM-FA, which took place annually between 2009 and 2014. The 9 papers presented in this volume were carefully reviewed and selected from a total of 58 papers presented at these 6 workshops. The contributions were organized in topical sections named: modelling practices; new ways of behaviour modelling: events in modelling; and new ways of behaviour modelling: protocol modelling.

  15. Engine Modelling for Control Applications

    DEFF Research Database (Denmark)

    Hendricks, Elbert

    1997-01-01

    In earlier work published by the author and co-authors, a dynamic engine model called a Mean Value Engine Model (MVEM) was developed. This model is physically based and is intended mainly for control applications. In its newer form, it is easy to fit to many different engines and requires little engine data for this purpose. It is especially well suited to embedded model applications in engine controllers, such as nonlinear observer based air/fuel ratio and advanced idle speed control. After a brief review of this model, it will be compared with other similar models which can be found

  16. Multilevel models applications using SAS

    CERN Document Server

    Wang, Jichuan; Fisher, James F

    2011-01-01

    This book covers a broad range of topics about multilevel modeling. The goal is to help readers to understand the basic concepts, theoretical frameworks, and application methods of multilevel modeling. It is at a level also accessible to non-mathematicians, focusing on the methods and applications of various multilevel models and using the widely used statistical software SAS®. Examples are drawn from analysis of real-world research data.

  17. Models for Dynamic Applications

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Morales Rodriguez, Ricardo; Heitzig, Martina

    2011-01-01

    This chapter covers aspects of the dynamic modelling and simulation of several complex operations that include a controlled blending tank, a direct methanol fuel cell that incorporates a multiscale model, a fluidised bed reactor, a standard chemical reactor and finally a polymerisation reactor. T...

  18. A Robust Design Applicability Model

    DEFF Research Database (Denmark)

    Ebro, Martin; Krogstie, Lars; Howard, Thomas J.

    2015-01-01

    This paper introduces a model for assessing the applicability of Robust Design (RD) in a project or organisation. The intention of the Robust Design Applicability Model (RDAM) is to provide support for decisions by engineering management considering the relevant level of RD activities... to be applicable in organisations assigning a high importance to one or more factors that are known to be impacted by RD, while also experiencing a high level of occurrence of this factor. The RDAM supplements existing maturity models and metrics to provide a comprehensive set of data to support management decisions. The factors in the RDAM were derived by analysing a combination of RD literature and industrial cases involving RD. The RDAM is used on a case company to illustrate its use.

  19. A Hybrid Neural Network Approach for Kinematic Modeling of a Novel 6-UPS Parallel Human-Like Mastication Robot

    Directory of Open Access Journals (Sweden)

    Hadi Kalani

    2016-04-01

    Full Text Available Introduction: We aimed to introduce a 6-universal-prismatic-spherical (UPS) parallel mechanism for human jaw motion and to theoretically evaluate its kinematic problem. We proposed a strategy to provide a fast and accurate solution to the kinematic problem, accelerating the solution of the direct kinematic problem by reducing the number of iterations required to reach the desired accuracy level. Materials and Methods: To solve the direct kinematic problem, an artificial neural network and the third-order Newton-Raphson algorithm were combined into an improved hybrid method. In this method, an approximate solution to the direct kinematic problem is produced by the neural network and serves as the initial guess for the third-order Newton-Raphson algorithm, which then delivers an answer with the desired level of accuracy. Results: The results showed that the proposed combination could find an approximate solution quickly and reduce the execution time for the direct kinematic problem. Muscular actuations showed periodic behaviors, and the maximum length variation of the temporalis muscle was larger than that of the masseter and pterygoid muscles. By reducing the processing time for solving the direct kinematic problem, more time could be devoted to control calculations. In this method, for relatively high levels of accuracy, the number of iterations and the computational time decreased by 90% and 34%, respectively, compared to the conventional Newton method. Conclusion: The present analysis could allow researchers to characterize and study the mastication process by specifying different chewing patterns (e.g., muscle displacements).
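    The benefit of seeding Newton-Raphson with a learned approximation can be illustrated on a scalar root-finding problem. The function below is a stand-in for the mechanism's kinematic equations, and the "surrogate" guess simply plays the role of the neural network's output:

```python
def newton(f, df, x, tol=1e-10, max_iter=100):
    """Newton-Raphson iteration; returns the root and the iteration count."""
    for i in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x, i
        x -= fx / df(x)
    return x, max_iter

f = lambda x: x**3 - 2.0 * x - 5.0   # toy stand-in for the kinematic residual
df = lambda x: 3.0 * x**2 - 2.0

root_cold, iters_cold = newton(f, df, 10.0)  # naive initial guess
root_warm, iters_warm = newton(f, df, 2.1)   # surrogate-provided guess near the root
# iters_warm < iters_cold: a good seed cuts the iteration count
```

    The same effect, applied to the 6-UPS direct kinematics, is what reduces the iteration count in the paper's hybrid method.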

  20. Survival analysis models and applications

    CERN Document Server

    Liu, Xian

    2012-01-01

    Survival analysis concerns sequential occurrences of events governed by probabilistic laws.  Recent decades have witnessed many applications of survival analysis in various disciplines. This book introduces both classic survival models and theories along with newly developed techniques. Readers will learn how to perform analysis of survival data by following numerous empirical illustrations in SAS. Survival Analysis: Models and Applications: Presents basic techniques before leading onto some of the most advanced topics in survival analysis.Assumes only a minimal knowledge of SAS whilst enablin

  1. Behavior Modeling -- Foundations and Applications

    DEFF Research Database (Denmark)

    This book constitutes revised selected papers from the six International Workshops on Behavior Modelling - Foundations and Applications, BM-FA, which took place annually between 2009 and 2014. The 9 papers presented in this volume were carefully reviewed and selected from a total of 58 papers...

  2. Melvin Defleur's Information Communication Model: Its Application ...

    African Journals Online (AJOL)

    The paper discusses Melvin Defleur's information communication model and its application to archives administration. It provides relevant examples in which archives administration functions involve the communication process. Specific model elements and their application in archives administration are highlighted.

  3. Conceptual Model of User Adaptive Enterprise Application

    Directory of Open Access Journals (Sweden)

    Inese Šūpulniece

    2015-07-01

    Full Text Available The user adaptive enterprise application is a software system, which adapts its behavior to an individual user on the basis of nontrivial inferences from information about the user. The objective of this paper is to elaborate a conceptual model of the user adaptive enterprise applications. In order to conceptualize the user adaptive enterprise applications, their main characteristics are analyzed, the meta-model defining the key concepts relevant to these applications is developed, and the user adaptive enterprise application and its components are defined in terms of the meta-model. Modeling of the user adaptive enterprise application incorporates aspects of enterprise modeling, application modeling, and design of adaptive characteristics of the application. The end-user and her expectations are identified as two concepts of major importance not sufficiently explored in the existing research. Understanding these roles improves the adaptation result in the user adaptive applications.

  4. Using Technology in Applications and Modelling

    Science.gov (United States)

    Kadijevich, Djordje; Haapasalo, Lenni; Hvorecky, Jozef

    2005-01-01

    To help mathematics educators realise the power of computer-based applications and modelling, this paper deals with four questions: (a) What implications does technology have for the range of applications and modelling problems that can be introduced? (b) In what cases does technology facilitate the learning of applications and modelling? (c) When…

  5. Covariance Models for Hydrological Applications

    Science.gov (United States)

    Hristopulos, Dionissios

    2014-05-01

    This methodological contribution aims to present some new covariance models with applications in the stochastic analysis of hydrological processes. More specifically, we present explicit expressions for radially symmetric, non-differentiable, Spartan covariance functions in one, two, and three dimensions. The Spartan covariance parameters include a characteristic length, an amplitude coefficient, and a rigidity coefficient which determines the shape of the covariance function. Different expressions are obtained depending on the value of the rigidity coefficient and the dimensionality. If the value of the rigidity coefficient is much larger than one, the Spartan covariance function exhibits multiscaling. Spartan covariance models are more flexible than the classical geostatistical models (e.g., spherical, exponential). Their non-differentiability makes them suitable for modelling the properties of geological media. We also present a family of radially symmetric, infinitely differentiable Bessel-Lommel covariance functions which are valid in any dimension. These models involve combinations of Bessel and Lommel functions. They provide a generalization of the J-Bessel covariance function, and they can be used to model smooth processes with an oscillatory decay of correlations. We discuss the dependence of the integral range of the Spartan and Bessel-Lommel covariance functions on the parameters. We point out that the dependence is not uniquely specified by the characteristic length, unlike the classical geostatistical models. Finally, we define and discuss the use of the generalized spectrum for characterizing different correlation length scales; the spectrum is defined in terms of an exponent α. We show that the spectrum values obtained for exponent values less than one can be used to discriminate between mean-square continuous but non-differentiable random fields.
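    The classical geostatistical models that the abstract uses as a baseline have simple closed forms; a sketch of the exponential and spherical covariance functions (the Spartan and Bessel-Lommel expressions are given in the cited work and are not reproduced here):

```python
import math

def exponential_cov(r: float, sill: float = 1.0, length: float = 1.0) -> float:
    """Exponential covariance: C(r) = sill * exp(-r / length)."""
    return sill * math.exp(-r / length)

def spherical_cov(r: float, sill: float = 1.0, rng: float = 1.0) -> float:
    """Spherical covariance: polynomial decay, exactly zero beyond the range."""
    if r >= rng:
        return 0.0
    h = r / rng
    return sill * (1.0 - 1.5 * h + 0.5 * h**3)

# both start at the sill; only the spherical model vanishes at finite distance
print(spherical_cov(0.0), spherical_cov(2.0))  # 1.0 0.0
```

    Both models are fully determined by a single length parameter, which is the rigidity the abstract contrasts with the extra shape parameter of the Spartan family.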

  6. Performability Modelling Tools, Evaluation Techniques and Applications

    NARCIS (Netherlands)

    Haverkort, Boudewijn R.H.M.

    1990-01-01

    This thesis deals with three aspects of quantitative evaluation of fault-tolerant and distributed computer and communication systems: performability evaluation techniques, performability modelling tools, and performability modelling applications. Performability modelling is a relatively new

  7. Chemistry Teachers' Knowledge and Application of Models

    Science.gov (United States)

    Wang, Zuhao; Chi, Shaohui; Hu, Kaiyan; Chen, Wenting

    2014-01-01

    Teachers' knowledge and application of models play an important role in students' development of modeling ability and scientific literacy. In this study, we investigated Chinese chemistry teachers' knowledge and application of models. Data were collected through a test questionnaire and analyzed quantitatively and qualitatively. The result indicated…

  8. A Classification of PLC Models and Applications

    NARCIS (Netherlands)

    Mader, Angelika H.; Boel, R.; Stremersch, G.

    In recent years there has been increasing interest in analysing PLC applications with formal methods. The first step to this end is to obtain formal models of PLC applications. Meanwhile, various models for PLCs have already been introduced in the literature. In our paper we discuss several

  9. The functionality-based application confinement model

    OpenAIRE

    Schreuders, ZC; Payne, C.; Mcgill, T.

    2013-01-01

    This paper presents the functionality-based application confinement (FBAC) access control model. FBAC is an application-oriented access control model, intended to restrict processes to the behaviour that is authorised by end users, administrators, and processes, in order to limit the damage that can be caused by malicious code, due to software vulnerabilities or malware. FBAC is unique in its ability to limit applications to finely grained access control rules based on high-level easy-to-unde...

  10. Structural equation modeling methods and applications

    CERN Document Server

    Wang, Jichuan

    2012-01-01

    A reference guide for applications of SEM using Mplus Structural Equation Modeling: Applications Using Mplus is intended as both a teaching resource and a reference guide. Written in non-mathematical terms, this book focuses on the conceptual and practical aspects of Structural Equation Modeling (SEM). Basic concepts and examples of various SEM models are demonstrated along with recently developed advanced methods, such as mixture modeling and model-based power analysis and sample size estimate for SEM. The statistical modeling program, Mplus, is also featured and provides researchers with a

  11. Association models for petroleum applications

    DEFF Research Database (Denmark)

    Kontogeorgis, Georgios

    2013-01-01

    Thermodynamics plays an important role in many applications in the petroleum industry, both upstream and downstream, ranging from flow assurance, (enhanced) oil recovery and control of chemicals to meet production and environmental regulations. There are many different applications in the oil & gas...... industry, thus thermodynamic data (phase behaviour, densities, speed of sound, etc) are needed to study a very diverse range of compounds in addition to the petroleum ones (CO2, H2S, water, alcohols, glycols, mercaptans, mercury, asphaltenes, waxes, polymers, electrolytes, biofuels, etc) within a very...

  12. PEM Fuel Cells - Fundamentals, Modeling and Applications

    OpenAIRE

    Maher A.R. Sadiq Al-Baghdadi

    2013-01-01

    Part I: Fundamentals Chapter 1: Introduction. Chapter 2: PEM fuel cell thermodynamics, electrochemistry, and performance. Chapter 3: PEM fuel cell components. Chapter 4: PEM fuel cell failure modes. Part II: Modeling and Simulation Chapter 5: PEM fuel cell models based on semi-empirical simulation. Chapter 6: PEM fuel cell models based on computational fluid dynamics. Part III: Applications Chapter 7: PEM fuel cell system design and applications.

  13. Practical application of inventory models

    OpenAIRE

    RŮŽIČKOVÁ, Lucie

    2016-01-01

    This bachelor thesis focuses on finding a suitable method of supplying a company. The emphasis lies on gathering data about the inventory, supply, and demand of a company. The theoretical part deals with the allocation of inventory, with the determination of inventory management models that can be applied in practice, and with demand and costs. The practical part deals with the analysis of stocks in the company, with the inventory management model used, and with the determination of demand. For finding the model...

  14. Source characterization refinements for routine modeling applications

    Science.gov (United States)

    Paine, Robert; Warren, Laura L.; Moore, Gary E.

    2016-03-01

    Steady-state dispersion models recommended by various environmental agencies worldwide have generally been evaluated with traditional stack release databases, including tracer studies. The sources associated with these field data are generally those with isolated stacks or release points under relatively ideal conditions. Many modeling applications, however, involve sources that act to modify the local dispersion environment as well as the conditions associated with plume buoyancy and final plume rise. The source characterizations affecting plume rise that are introduced and discussed in this paper include: 1) sources with large fugitive heat releases that result in a local urbanized effect, 2) stacks on or near individual buildings with large fugitive heat releases that tend to result in buoyant "liftoff" effects counteracting aerodynamic downwash effects, 3) stacks with considerable moisture content, which leads to additional heat of condensation during plume rise - an effect that is not considered by most dispersion models, and 4) stacks in a line that result in at least partial plume merging and buoyancy enhancement under certain conditions. One or more of these effects may be appropriate for a given modeling application, and we present examples of specific applications of these procedures in the paper. This paper describes methods for introducing the four source characterization approaches into a variety of dispersion models to simulate plume rise more accurately. The authors have focused upon applying these methods to the AERMOD modeling system, which is the United States Environmental Protection Agency's preferred model in addition to being used internationally, but the techniques are applicable to dispersion models worldwide. While the methods could be installed directly into specific models such as AERMOD, the advantage of implementing them outside the model is to allow them to be applicable to numerous models immediately and also to allow them to

  15. Photocell modelling for thermophotovoltaic applications

    Energy Technology Data Exchange (ETDEWEB)

    Mayor, J.-C.; Durisch, W.; Grob, B.; Panitz, J.-C. [Paul Scherrer Inst. (PSI), Villigen (Switzerland)

    1999-08-01

    The goal of the modelling described here is the extrapolation of the performance characteristics of solar photocells to TPV working conditions. The model accounts for the higher flux of radiation and for the higher temperatures reached in TPV converters. (author) 4 figs., 1 tab., 2 refs.

  16. Markov chains models, algorithms and applications

    CERN Document Server

    Ching, Wai-Ki; Ng, Michael K; Siu, Tak-Kuen

    2013-01-01

    This new edition of Markov Chains: Models, Algorithms and Applications has been completely reformatted as a text, complete with end-of-chapter exercises, a new focus on management science, new applications of the models, and new examples with applications in financial risk management and modeling of financial data.This book consists of eight chapters.  Chapter 1 gives a brief introduction to the classical theory on both discrete and continuous time Markov chains. The relationship between Markov chains of finite states and matrix theory will also be highlighted. Some classical iterative methods
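    The discrete-time theory introduced in Chapter 1 centres on the transition matrix; a minimal sketch of computing a stationary distribution by power iteration (the two-state matrix is a toy example, not taken from the book):

```python
def stationary(P, iters=200):
    """Approximate the stationary distribution pi = pi @ P by power iteration."""
    n = len(P)
    pi = [1.0 / n] * n  # start from the uniform distribution
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

P = [[0.9, 0.1],   # rows sum to 1: P[i][j] = Pr(next = j | current = i)
     [0.5, 0.5]]
pi = stationary(P)  # converges to [5/6, 1/6]
```

    The connection between finite-state chains and matrix theory highlighted in the book is exactly this: the stationary distribution is the left eigenvector of P for eigenvalue 1.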

  17. System identification application using Hammerstein model

    Indian Academy of Sciences (India)

    In this paper, system identification applications of the Hammerstein model, which is a cascade of a nonlinear second-order Volterra model and a linear FIR model, are studied. The recursive least squares algorithm is used to identify the proposed Hammerstein model parameters. Furthermore, the results are compared to identify the success of ...
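    Because a polynomial input nonlinearity followed by an FIR filter is linear in its expanded parameters, recursive least squares applies directly; a noise-free sketch (the second-order nonlinearity, tap values, and data are invented for illustration, not the paper's system):

```python
import random

random.seed(1)

def regressor(u_now, u_prev):
    # expanding x = u + a*u^2 through a 2-tap FIR makes the output
    # linear in theta = [b0, b0*a, b1, b1*a]
    return [u_now, u_now**2, u_prev, u_prev**2]

def true_output(u_now, u_prev):
    x = lambda u: u + 0.5 * u**2             # "unknown" static nonlinearity
    return 0.8 * x(u_now) - 0.3 * x(u_prev)  # "unknown" FIR taps

def rls_update(theta, P, phi, y, lam=1.0):
    n = len(theta)
    Pphi = [sum(P[i][j] * phi[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(phi[i] * Pphi[i] for i in range(n))
    K = [v / denom for v in Pphi]            # gain vector
    err = y - sum(theta[i] * phi[i] for i in range(n))
    theta = [theta[i] + K[i] * err for i in range(n)]
    P = [[(P[i][j] - K[i] * Pphi[j]) / lam for j in range(n)] for i in range(n)]
    return theta, P

theta = [0.0] * 4
P = [[1000.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
u_prev = 0.0
for _ in range(200):
    u = random.uniform(-1.0, 1.0)
    theta, P = rls_update(theta, P, regressor(u, u_prev),
                          true_output(u, u_prev))
    u_prev = u
# theta approaches [0.8, 0.4, -0.3, -0.15]
```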

  18. Dynamic programming models and applications

    CERN Document Server

    Denardo, Eric V

    2003-01-01

    Introduction to sequential decision processes covers use of dynamic programming in studying models of resource allocation, methods for approximating solutions of control problems in continuous time, production control, more. 1982 edition.

  19. A model for assessment of telemedicine applications

    DEFF Research Database (Denmark)

    Kidholm, Kristian; Ekeland, Anne Granstrøm; Jensen, Lise Kvistgaard

    2012-01-01

    Telemedicine applications could potentially solve many of the challenges faced by the healthcare sectors in Europe. However, a framework for assessment of these technologies is needed by decision makers to assist them in choosing the most efficient and cost-effective technologies. Therefore, in 2009 the European Commission initiated the development of a framework for assessing telemedicine applications, based on the users' need for information for decision making. This article presents the Model for ASsessment of Telemedicine applications (MAST) developed in this study.

  20. Computational nanophotonics modeling and applications

    CERN Document Server

    Musa, Sarhan M

    2013-01-01

    This reference offers tools for engineers, scientists, biologists, and others working with the computational techniques of nanophotonics. It introduces the key concepts of computational methods in a manner that is easily digestible for newcomers to the field. The book also examines future applications of nanophotonics in the technical industry and covers new developments and interdisciplinary research in engineering, science, and medicine. It provides an overview of the key computational nanophotonics and describes the technologies with an emphasis on how they work and their key benefits.

  1. Contact modeling for robotics applications

    Energy Technology Data Exchange (ETDEWEB)

    Lafarge, R.A.; Lewis, C.

    1998-08-01

    At Sandia National Laboratories (SNL), the authors are developing the ability to accurately predict motions for arbitrary numbers of bodies of arbitrary shapes experiencing multiple applied forces and intermittent contacts. In particular, the authors are concerned with the simulation of systems such as part feeders or mobile robots operating in realistic environments. Preliminary investigation of commercial dynamics software packages led them to the conclusion that they could use commercial software to provide everything they needed except for the contact model. They found that ADAMS best fit their needs for a simulation package. To simulate intermittent contacts, they need collision detection software that can efficiently compute the distances between non-convex objects and return the associated witness features. They also require a computationally efficient contact model for rapid simulation of impact, sustained contact under load, and transition to and from contact conditions. This paper provides a technical review of a custom hierarchical distance computation engine developed at Sandia, called the C-Space Toolkit (CSTk). In addition, they describe an efficient contact model using a non-linear damping term developed by SNL and Ohio State. Both the CSTk and the non-linear damper have been incorporated in a simplified two-body testbed code, which is used to investigate how to correctly model the contact using these two utilities. They have incorporated this model into the ADAMS software using the callable function interface. An example that illustrates the capabilities of the 9.02 release of ADAMS with their extensions is provided.
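    A widely used contact law with a non-linear damping term of the kind described is the Hunt-Crossley form, in which damping scales with penetration so the force is continuous at the contact instant. The sketch below uses that standard form; the stiffness, exponent, and damping values are illustrative, not the SNL/Ohio State parameters:

```python
def contact_force(penetration: float, velocity: float,
                  k: float = 1.0e5, damping: float = 0.3,
                  n: float = 1.5) -> float:
    """Hunt-Crossley-style force: F = k * d**n * (1 + damping * v).

    Out of contact the force is zero, and because the damping term is
    multiplied by d**n it also vanishes as d -> 0, unlike a linear
    spring-damper, whose force jumps discontinuously at first touch."""
    if penetration <= 0.0:
        return 0.0
    return k * penetration**n * (1.0 + damping * velocity)

print(contact_force(0.0, 1.0))               # 0.0 (no penetration, no force)
print(round(contact_force(0.01, 0.0), 6))    # 100.0 (pure elastic term)
```

    This smooth transition into and out of contact is what makes such models well suited to the rapid simulation of intermittent impacts described above.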

  2. Registry of EPA Applications, Models, and Databases

    Data.gov (United States)

    U.S. Environmental Protection Agency — READ is EPA's authoritative source for information about Agency information resources, including applications/systems, datasets and models. READ is one component of...

  3. Moving objects management models, techniques and applications

    CERN Document Server

    Meng, Xiaofeng; Xu, Jiajie

    2014-01-01

    This book describes the topics of moving objects modeling and location tracking, indexing and querying, clustering, location uncertainty, traffic aware navigation and privacy issues as well as the application to intelligent transportation systems.

  4. GEOCITY model: description and application

    Energy Technology Data Exchange (ETDEWEB)

    McDonald, C.L.; Bloomster, C.H.

    1977-06-01

    GEOCITY is a computer simulation model developed to study the economics of district heating using geothermal energy. GEOCITY calculates the cost of district heating based on climate, population, resource characteristics, and financing conditions. The principal input variables are minimum temperature, heating degree-days, population size and density, resource temperature and distance from load center, and the interest rate. From this input data the model designs the transmission and district heating systems. From this design, GEOCITY calculates the capital and operating costs for the entire system, including the production and disposal of the geothermal water.
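    The abstract does not give GEOCITY's internal equations, but the kind of climate-driven load estimate such a model starts from can be sketched with heating degree-days (the UA value and degree-day total below are invented for illustration):

```python
def annual_heat_demand_gj(dwellings: int,
                          ua_kw_per_k: float = 0.25,
                          degree_days: float = 3500.0) -> float:
    """Rough annual space-heating demand from heating degree-days.

    Per dwelling: UA [kW/K] * degree-days [K*day] * 24 [h/day] gives kWh,
    which is converted to GJ (1 kWh = 3.6 MJ)."""
    kwh = ua_kw_per_k * degree_days * 24.0 * dwellings
    return kwh * 3.6e-3

print(annual_heat_demand_gj(1))     # ~75.6 GJ per dwelling
print(annual_heat_demand_gj(1000))  # ~75,600 GJ for a small district
```

    An economic model like GEOCITY would then size the transmission and distribution network against a load of this kind and amortize its capital cost at the chosen interest rate.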

  5. Formal models, languages and applications

    CERN Document Server

    Rangarajan, K; Mukund, M

    2006-01-01

    A collection of articles by leading experts in theoretical computer science, this volume commemorates the 75th birthday of Professor Rani Siromoney, one of the pioneers in the field in India. The articles span the vast range of areas that Professor Siromoney has worked in or influenced, including grammar systems, picture languages and new models of computation. Sample Chapter(s). Chapter 1: Finite Array Automata and Regular Array Grammars (150 KB). Contents: Finite Array Automata and Regular Array Grammars (A Atanasiu et al.); Hexagonal Contextual Array P Systems (K S Dersanambika et al.); Con

  6. Applications of computer modeling to fusion research

    Energy Technology Data Exchange (ETDEWEB)

    Dawson, J.M.

    1989-01-01

    Progress achieved during this report period is presented on the following topics: Development and application of gyrokinetic particle codes to tokamak transport, development of techniques to take advantage of parallel computers; model dynamo and bootstrap current drive; and in general maintain our broad-based program in basic plasma physics and computer modeling.

  7. Storm Water Management Model Applications Manual

    Science.gov (United States)

    The EPA Storm Water Management Model (SWMM) is a dynamic rainfall-runoff simulation model that computes runoff quantity and quality from primarily urban areas. This manual is a practical application guide for new SWMM users who have already had some previous training in hydrolog...

  8. Vacation queueing models theory and applications

    CERN Document Server

    Tian, Naishuo

    2006-01-01

    A classical queueing model consists of three parts - arrival process, service process, and queue discipline. However, a vacation queueing model has an additional part - the vacation process, governed by a vacation policy - that can be characterized by three aspects: 1) the vacation start-up rule; 2) the vacation termination rule; and 3) the vacation duration distribution. Hence, vacation queueing models are an extension of classical queueing theory. Vacation Queueing Models: Theory and Applications discusses systematically and in detail the many variations of vacation policy. Allowing servers to take vacations makes queueing models more realistic and flexible for studying real-world waiting line systems. Integrated in the book's discussion are a variety of typical vacation model applications, including call centers with multi-task employees, customized manufacturing, telecommunication networks, and maintenance activities. Finally, contents are presented in a "theorem and proof" format and it is invaluabl...

  9. Generating Performance Models for Irregular Applications

    Energy Technology Data Exchange (ETDEWEB)

    Friese, Ryan D.; Tallent, Nathan R.; Vishnu, Abhinav; Kerbyson, Darren J.; Hoisie, Adolfy

    2017-05-30

    Many applications have irregular behavior (non-uniform input data, input-dependent solvers, irregular memory accesses, unbiased branches) that cannot be captured using today's automated performance modeling techniques. We describe new hierarchical critical path analyses for the Palm model generation tool. To create a model's structure, we capture tasks along representative MPI critical paths. We create a histogram of critical tasks with parameterized task arguments and instance counts. To model each task, we identify hot instruction-level sub-paths and model each sub-path based on data flow, instruction scheduling, and data locality. We describe application models that generate accurate predictions for strong scaling when varying CPU speed, cache speed, memory speed, and architecture. We present results for the Sweep3D neutron transport benchmark; PageRank on multiple graphs; Support Vector Machine with pruning; and PFLOTRAN's reactive flow/transport solver with domain-induced load imbalance.

  10. Markov and mixed models with applications

    DEFF Research Database (Denmark)

    Mortensen, Stig Bousgaard

    This thesis deals with mathematical and statistical models with focus on applications in pharmacokinetic and pharmacodynamic (PK/PD) modelling. These models are today an important aspect of the drug development in the pharmaceutical industry and continued research in statistical methodology within...... or uncontrollable factors in an individual. Modelling using SDEs also provides new tools for estimation of unknown inputs to a system and is illustrated with an application to estimation of insulin secretion rates in diabetic patients. Models for the effect of a drug is a broader area since drugs may affect...... the individual in almost any thinkable way. This project focuses on measuring the effects on sleep in both humans and animals. The sleep process is usually analyzed by categorizing small time segments into a number of sleep states and this can be modelled using a Markov process. For this purpose new methods...

  11. Polymer networks: Modeling and applications

    Science.gov (United States)

    Masoud, Hassan

    Polymer networks are an important class of materials that are ubiquitously found in natural, biological, and man-made systems. The complex mesoscale structure of these soft materials has made it difficult for researchers to fully explore their properties. In this dissertation, we introduce a coarse-grained computational model for permanently cross-linked polymer networks than can properly capture common properties of these materials. We use this model to study several practical problems involving dry and solvated networks. Specifically, we analyze the permeability and diffusivity of polymer networks under mechanical deformations, we examine the release of encapsulated solutes from microgel capsules during volume transitions, and we explore the complex tribological behavior of elastomers. Our simulations reveal that the network transport properties are defined by the network porosity and by the degree of network anisotropy due to mechanical deformations. In particular, the permeability of mechanically deformed networks can be predicted based on the alignment of network filaments that is characterized by a second order orientation tensor. Moreover, our numerical calculations demonstrate that responsive microcapsules can be effectively utilized for steady and pulsatile release of encapsulated solutes. We show that swollen gel capsules allow steady, diffusive release of nanoparticles and polymer chains, whereas gel deswelling causes burst-like discharge of solutes driven by an outward flow of the solvent initially enclosed within a shrinking capsule. We further demonstrate that this hydrodynamic release can be regulated by introducing rigid microscopic rods in the capsule interior. We also probe the effects of velocity, temperature, and normal load on the sliding of elastomers on smooth and corrugated substrates. Our friction simulations predict a bell-shaped curve for the dependence of the friction coefficient on the sliding velocity. Our simulations also illustrate

  12. Performance Evaluation Model for Application Layer Firewalls.

    Directory of Open Access Journals (Sweden)

    Shichang Xuan

    Full Text Available Application layer firewalls protect the trusted area network against information security risks. However, firewall performance may affect user experience. Therefore, performance analysis plays a significant role in the evaluation of application layer firewalls. This paper presents an analytic model of the application layer firewall, based on a system analysis to evaluate the capability of the firewall. In order to enable users to improve the performance of the application layer firewall with limited resources, resource allocation was evaluated to obtain the optimal resource allocation scheme in terms of throughput, delay, and packet loss rate. The proposed model employs the Erlangian queuing model to analyze the performance parameters of the system with regard to the three layers (network, transport, and application layers). Then, the analysis results of all the layers are combined to obtain the overall system performance indicators. A discrete event simulation method was used to evaluate the proposed model. Finally, limited service desk resources were allocated to obtain the values of the performance indicators under different resource allocation scenarios in order to determine the optimal allocation scheme. Under limited resource allocation, this scheme enables users to maximize the performance of the application layer firewall.

  13. Performance Evaluation Model for Application Layer Firewalls.

    Science.gov (United States)

    Xuan, Shichang; Yang, Wu; Dong, Hui; Zhang, Jiangchuan

    2016-01-01

    Application layer firewalls protect the trusted area network against information security risks. However, firewall performance may affect user experience. Therefore, performance analysis plays a significant role in the evaluation of application layer firewalls. This paper presents an analytic model of the application layer firewall, based on a system analysis to evaluate the capability of the firewall. In order to enable users to improve the performance of the application layer firewall with limited resources, resource allocation was evaluated to obtain the optimal resource allocation scheme in terms of throughput, delay, and packet loss rate. The proposed model employs the Erlangian queuing model to analyze the performance parameters of the system with regard to the three layers (network, transport, and application layers). Then, the analysis results of all the layers are combined to obtain the overall system performance indicators. A discrete event simulation method was used to evaluate the proposed model. Finally, limited service desk resources were allocated to obtain the values of the performance indicators under different resource allocation scenarios in order to determine the optimal allocation scheme. Under limited resource allocation, this scheme enables users to maximize the performance of the application layer firewall.
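    The Erlangian queueing analysis the abstract refers to can be sketched, for a single layer, with the textbook Erlang-C formula for an M/M/c service desk. This is standard queueing theory, not the paper's full three-layer model; the parameter values in the usage note are illustrative.

```python
import math

def erlang_c(c, a):
    """Erlang-C: probability that an arrival must wait in an M/M/c
    queue. a = lambda/mu is the offered load in Erlangs; stability
    requires a < c."""
    idle_terms = sum(a ** k / math.factorial(k) for k in range(c))
    wait_term = (a ** c / math.factorial(c)) * (c / (c - a))
    return wait_term / (idle_terms + wait_term)

def mean_sojourn_time(lam, mu, c):
    """Mean time a packet spends in the system, W = Wq + 1/mu,
    for arrival rate lam, per-server service rate mu, c servers."""
    a = lam / mu
    wq = erlang_c(c, a) / (c * mu - lam)
    return wq + 1.0 / mu
```

    Sweeping c and picking the smallest value that meets a delay target mirrors the limited-service-desk resource allocation question studied in the paper; for example, mean_sojourn_time(0.5, 1.0, 2) is well below mean_sojourn_time(0.5, 1.0, 1).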

  14. Degenerate RFID Channel Modeling for Positioning Applications

    Directory of Open Access Journals (Sweden)

    A. Povalac

    2012-12-01

    Full Text Available This paper introduces the theory of channel modeling for positioning applications in UHF RFID. It explains basic parameters for channel characterization from both the narrowband and wideband point of view. More details are given about ranging and direction finding. Finally, several positioning scenarios are analyzed with developed channel models. All the described models use a degenerate channel, i.e. combined signal propagation from the transmitter to the tag and from the tag to the receiver.

  15. Application Note: Power Grid Modeling With Xyce.

    Energy Technology Data Exchange (ETDEWEB)

    Sholander, Peter E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-06-01

    This application note describes how to model steady-state power flows and transient events in electric power grids with the SPICE-compatible Xyce™ Parallel Electronic Simulator developed at Sandia National Labs. The note provides a brief tutorial on the basic devices (branches, bus shunts, transformers and generators) found in power grids. The focus is on the features supported and assumptions made by the Xyce models for power grid elements. It then provides a detailed explanation, including working Xyce netlists, for simulating some simple power grid examples such as the IEEE 14-bus test case.

  16. Application of SIR epidemiological model: new trends

    CERN Document Server

    Rodrigues, Helena Sofia

    2016-01-01

    The simplest epidemiological model, composed of the mutually exclusive compartments SIR (susceptible-infected-recovered), is presented to describe a real situation. From health concerns to situations related to marketing, informatics or even sociology, many fields use this epidemiological model as a first approach to better understand a situation. In this paper, the basic transmission model is analyzed, as well as simple tools that allow us to extract a great deal of information about possible solutions. A set of applications - traditional and new ones - is described to show the importance of this model.
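    The basic transmission model referred to above is the ODE system dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I, dR/dt = gamma*I. A minimal explicit-Euler sketch follows; the beta and gamma values are illustrative, not from the paper.

```python
def sir_step(s, i, r, beta, gamma, dt):
    """One explicit-Euler step of the SIR equations, with S, I, R
    expressed as fractions of a closed population."""
    new_infections = beta * s * i * dt
    recoveries = gamma * i * dt
    return s - new_infections, i + new_infections - recoveries, r + recoveries

def simulate_sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, days=200, dt=0.1):
    """Integrate the SIR system forward and return the final (S, I, R)."""
    s, i, r = s0, i0, 0.0
    for _ in range(int(days / dt)):
        s, i, r = sir_step(s, i, r, beta, gamma, dt)
    return s, i, r
```

    With beta/gamma = R0 = 3 > 1 the epidemic takes off and then burns out as susceptibles are depleted, one of the threshold results this kind of model yields almost immediately.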

  17. Advances and applications of occupancy models

    Science.gov (United States)

    Bailey, Larissa; MacKenzie, Darry I.; Nichols, James D.

    2013-01-01

    Summary: The past decade has seen an explosion in the development and application of models aimed at estimating species occurrence and occupancy dynamics while accounting for possible non-detection or species misidentification. We discuss some recent occupancy estimation methods and the biological systems that motivated their development. Collectively, these models offer tremendous flexibility, but simultaneously place added demands on the investigator. Unlike many mark–recapture scenarios, investigators utilizing occupancy models have the ability, and responsibility, to define their sample units (i.e. sites), replicate sampling occasions, time period over which species occurrence is assumed to be static and even the criteria that constitute ‘detection’ of a target species. Subsequent biological inference and interpretation of model parameters depend on these definitions and the ability to meet model assumptions. We demonstrate the relevance of these definitions by highlighting applications from a single biological system (an amphibian–pathogen system) and discuss situations where the use of occupancy models has been criticized. Finally, we use these applications to suggest future research and model development.

  18. Ionospheric Modeling for Precise GNSS Applications

    OpenAIRE

    Memarzadeh, Y.

    2009-01-01

    The main objective of this thesis is to develop a procedure for modeling and predicting ionospheric Total Electron Content (TEC) for high precision differential GNSS applications. As the ionosphere is a highly dynamic medium, we believe that to have a reliable procedure it is necessary to transfer the high temporal resolution GNSS network data into the spatial domain. This objective led to the development of a recursive physics-based model for the regular TEC variations and an algorithm for r...

  19. Deformation Models Tracking, Animation and Applications

    CERN Document Server

    Torres, Arnau; Gómez, Javier

    2013-01-01

    The computational modelling of deformations has been actively studied for the last thirty years. This is mainly due to its large range of applications that include computer animation, medical imaging, shape estimation, face deformation as well as other parts of the human body, and object tracking. In addition, these advances have been supported by the evolution of computer processing capabilities, enabling realism in a more sophisticated way. This book encompasses relevant works of expert researchers in the field of deformation models and their applications.  The book is divided into two main parts. The first part presents recent object deformation techniques from the point of view of computer graphics and computer animation. The second part of this book presents six works that study deformations from a computer vision point of view with a common characteristic: deformations are applied in real world applications. The primary audience for this work are researchers from different multidisciplinary fields, s...

  20. Ionospheric Modeling for Precise GNSS Applications

    NARCIS (Netherlands)

    Memarzadeh, Y.

    2009-01-01

    The main objective of this thesis is to develop a procedure for modeling and predicting ionospheric Total Electron Content (TEC) for high precision differential GNSS applications. As the ionosphere is a highly dynamic medium, we believe that to have a reliable procedure it is necessary to transfer

  1. GSTARS computer models and their applications, Part II: Applications

    Science.gov (United States)

    Simoes, F.J.M.; Yang, C.T.

    2008-01-01

    In part 1 of this two-paper series, a brief summary of the basic concepts and theories used in developing the Generalized Stream Tube model for Alluvial River Simulation (GSTARS) computer models was presented. Part 2 provides examples that illustrate some of the capabilities of the GSTARS models and how they can be applied to solve a wide range of river and reservoir sedimentation problems. Laboratory and field case studies are used and the examples show representative applications of the earlier and of the more recent versions of GSTARS. Some of the more recent capabilities implemented in GSTARS3, one of the latest versions of the series, are also discussed here with more detail. © 2008 International Research and Training Centre on Erosion and Sedimentation and the World Association for Sedimentation and Erosion Research.

  2. Simulation and Modeling Methodologies, Technologies and Applications

    CERN Document Server

    Filipe, Joaquim; Kacprzyk, Janusz; Pina, Nuno

    2014-01-01

    This book includes extended and revised versions of a set of selected papers from the 2012 International Conference on Simulation and Modeling Methodologies, Technologies and Applications (SIMULTECH 2012) which was sponsored by the Institute for Systems and Technologies of Information, Control and Communication (INSTICC) and held in Rome, Italy. SIMULTECH 2012 was technically co-sponsored by the Society for Modeling & Simulation International (SCS), GDR I3, Lionphant Simulation, Simulation Team and IFIP and held in cooperation with AIS Special Interest Group of Modeling and Simulation (AIS SIGMAS) and the Movimento Italiano Modellazione e Simulazione (MIMOS).

  3. Mixed models theory and applications with R

    CERN Document Server

    Demidenko, Eugene

    2013-01-01

    Mixed modeling is one of the most promising and exciting areas of statistical analysis, enabling the analysis of nontraditional, clustered data that may come in the form of shapes or images. This book provides in-depth mathematical coverage of mixed models' statistical properties and numerical algorithms, as well as applications such as the analysis of tumor regrowth, shape, and image. The new edition includes significant updating, over 300 exercises, stimulating chapter projects and model simulations, inclusion of R subroutines, and a revised text format. The target audience continues to be g

  4. Link mining models, algorithms, and applications

    CERN Document Server

    Yu, Philip S; Faloutsos, Christos

    2010-01-01

    This book presents in-depth surveys and systematic discussions on models, algorithms and applications for link mining. Link mining is an important field of data mining. Traditional data mining focuses on 'flat' data in which each data object is represented as a fixed-length attribute vector. However, many real-world data sets are much richer in structure, involving objects of multiple types that are related to each other. Hence, recently link mining has become an emerging field of data mining, which has a high impact in various important applications such as text mining, social network analysi

  5. Applicability of DFT model in reactive distillation

    Science.gov (United States)

    Staszak, Maciej

    2017-11-01

    The applicability of density functional theory (DFT) to reactive distillation is discussed. A brief description of modeling techniques for distillation and rectification with chemical reaction is provided as background for the use of quantum methods. The equilibrium and nonequilibrium distillation models are described for that purpose. The DFT quantum theory is concisely described. The use of DFT in the modeling of reactive distillation is described in two parts. One of the fundamental and very important components of distillation modeling is the vapor-liquid equilibrium description, for which the DFT quantum approach can be used. The representative DFT models, namely the COSMO-RS (Conductor-like Screening Model for Real Solvents), COSMOSPACE (COSMO Surface Pair Activity Coefficient) and COSMO-SAC (SAC - segment activity coefficient) approaches, are described. The second part treats the way in which a chemical reaction is described by means of the quantum DFT method. The intrinsic reaction coordinate (IRC) method, used to find the minimum energy path from substrates to products, is described. DFT is one of the methods that can be used for that purpose. Literature examples are provided which show that the IRC method is applicable for describing chemical reaction kinetics.

  6. Modeling and Optimization : Theory and Applications Conference

    CERN Document Server

    Terlaky, Tamás

    2017-01-01

    This volume contains a selection of contributions that were presented at the Modeling and Optimization: Theory and Applications Conference (MOPTA) held at Lehigh University in Bethlehem, Pennsylvania, USA on August 17-19, 2016. The conference brought together a diverse group of researchers and practitioners, working on both theoretical and practical aspects of continuous or discrete optimization. Topics presented included algorithms for solving convex, network, mixed-integer, nonlinear, and global optimization problems, and addressed the application of deterministic and stochastic optimization techniques in energy, finance, logistics, analytics, health, and other important fields. The contributions contained in this volume represent a sample of these topics and applications and illustrate the broad diversity of ideas discussed at the meeting.

  7. Application of Improved Radiation Modeling to General Circulation Models

    Energy Technology Data Exchange (ETDEWEB)

    Michael J Iacono

    2011-04-07

    This research has accomplished its primary objectives of developing accurate and efficient radiation codes, validating them with measurements and higher resolution models, and providing these advancements to the global modeling community to enhance the treatment of cloud and radiative processes in weather and climate prediction models. A critical component of this research has been the development of the longwave and shortwave broadband radiative transfer code for general circulation model (GCM) applications, RRTMG, which is based on the single-column reference code, RRTM, also developed at AER. RRTMG is a rigorously tested radiation model that retains a considerable level of accuracy relative to higher resolution models and measurements despite the performance enhancements that have made it possible to apply this radiation code successfully to global dynamical models. This model includes the radiative effects of all significant atmospheric gases, and it treats the absorption and scattering from liquid and ice clouds and aerosols. RRTMG also includes a statistical technique for representing small-scale cloud variability, such as cloud fraction and the vertical overlap of clouds, which has been shown to improve cloud radiative forcing in global models. This development approach has provided a direct link from observations to the enhanced radiative transfer provided by RRTMG for application to GCMs. Recent comparison of existing climate model radiation codes with high resolution models has documented the improved radiative forcing capability provided by RRTMG, especially at the surface, relative to other GCM radiation models. Due to its high accuracy, its connection to observations, and its computational efficiency, RRTMG has been implemented operationally in many national and international dynamical models to provide validated radiative transfer for improving weather forecasts and enhancing the prediction of global climate change.

  8. Stochastic biomathematical models with applications to neuronal modeling

    CERN Document Server

    Batzel, Jerry; Ditlevsen, Susanne

    2013-01-01

    Stochastic biomathematical models are becoming increasingly important as new light is shed on the role of noise in living systems. In certain biological systems, stochastic effects may even enhance a signal, thus providing a biological motivation for the noise observed in living systems. Recent advances in stochastic analysis and increasing computing power facilitate the analysis of more biophysically realistic models, and this book provides researchers in computational neuroscience and stochastic systems with an overview of recent developments. Key concepts are developed in chapters written by experts in their respective fields. Topics include: one-dimensional homogeneous diffusions and their boundary behavior, large deviation theory and its application in stochastic neurobiological models, a review of mathematical methods for stochastic neuronal integrate-and-fire models, stochastic partial differential equation models in neurobiology, and stochastic modeling of spreading cortical depression.

  9. Applications of model theory to functional analysis

    CERN Document Server

    Iovino, Jose

    2014-01-01

    During the last two decades, methods that originated within mathematical logic, particularly set theory and model theory, have exhibited powerful applications to Banach space theory. This volume constitutes the first self-contained introduction to techniques of model theory in Banach space theory. The area of research has grown rapidly since this monograph's first appearance, but much of this material is still not readily available elsewhere. For instance, this volume offers a unified presentation of Krivine's theorem and the Krivine-Maurey theorem on stable Banach spaces, with emphasis on the

  10. The concept exploration model and an application

    Science.gov (United States)

    Zhang, Yin; Gao, Kening; Zhang, Bin

    2015-03-01

    For a user who is unfamiliar with a target domain, the first step in an exploratory search task is to go through a learning phase, which means learning from the search results to acquire basic domain knowledge. Since many search results may be returned by a search engine, and usually only a small portion of the results contain knowledge valuable to the current search task, the user typically needs to read many documents while learning only limited knowledge. This makes the learning phase a low-efficiency, time-consuming and failure-prone process. In order to support the learning phase of the exploratory search process, this paper proposes the concept exploration model, which describes how a user reads search results and identifies interesting concepts. The model focuses on how a user explores related concepts during the learning phase, and factorizes the concept exploration process as the product of the probability that concepts form a specific relation structure and the probability that a user is attracted by a concept. In an application example, the concept exploration model is used in a query recommendation task to support exploratory search. We demonstrate how to determine the two probabilistic factors and evaluate the model with a set of metrics. The experiment results show that the application example can help users explore domain concepts more effectively.
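    The factorization described in the abstract (the probability that concepts form a specific relation structure multiplied by the probability that a user is attracted by a concept) can be sketched as a simple ranking function. The function names, the independence assumption, and the example numbers are illustrative, not from the paper, which determines both factors empirically.

```python
def concept_score(p_structure, p_attract):
    """Score a candidate concept as the product of the two factors
    in the model: P(relation structure) * P(user attraction)."""
    return p_structure * p_attract

def rank_concepts(candidates):
    """candidates: iterable of (concept, p_structure, p_attract) tuples.
    Returns concept names ordered by descending exploration score."""
    return [c[0] for c in sorted(candidates,
                                 key=lambda c: concept_score(c[1], c[2]),
                                 reverse=True)]
```

    Feeding the top-ranked concepts back to the user corresponds to the query recommendation application example in the paper.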

  11. Recent developments in volatility modeling and applications

    Directory of Open Access Journals (Sweden)

    A. Thavaneswaran

    2006-01-01

    Full Text Available In financial modeling, it has been repeatedly pointed out that volatility clustering and conditional non-normality induce the leptokurtosis observed in high-frequency data. Financial time series data are not adequately modeled by the normal distribution, and empirical evidence against the normality assumption is well documented in the financial literature (see Engle (1982) and Bollerslev (1986) for details). Thavaneswaran et al. (2005) used an ARMA representation to derive the kurtosis of various classes of GARCH models, such as power GARCH, non-Gaussian GARCH, non-stationary and random coefficient GARCH. Several empirical studies have shown that mixture distributions are more likely than the normal distribution to capture the heteroskedasticity observed in high-frequency data. In this paper, some results on moment properties are generalized to stationary ARMA processes with GARCH errors. Applications to volatility forecasts and option pricing are also discussed in some detail.
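    The volatility clustering and excess kurtosis discussed above are easy to reproduce with a simulated GARCH(1,1) process. This sketch is illustrative (parameter values are assumed); the paper derives such kurtosis results analytically via an ARMA representation.

```python
import random
import statistics

def simulate_garch11(n, omega=0.05, alpha=0.10, beta=0.85, seed=1):
    """Simulate n returns r_t = sigma_t * z_t with z_t ~ N(0,1) and
    sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2,
    starting from the unconditional variance omega / (1 - alpha - beta)."""
    rng = random.Random(seed)
    var = omega / (1.0 - alpha - beta)
    returns = []
    for _ in range(n):
        r = rng.gauss(0.0, var ** 0.5)
        returns.append(r)
        var = omega + alpha * r * r + beta * var
    return returns

def excess_kurtosis(xs):
    """Sample excess kurtosis; positive values indicate fatter tails
    than the normal distribution (leptokurtosis)."""
    m = statistics.fmean(xs)
    v = statistics.fmean([(x - m) ** 2 for x in xs])
    return statistics.fmean([(x - m) ** 4 for x in xs]) / (v * v) - 3.0
```

    For normal innovations the closed-form excess kurtosis of GARCH(1,1) is positive whenever 2*alpha**2 + (alpha + beta)**2 < 1, which these parameters satisfy, so the simulated returns are leptokurtic even though each shock is Gaussian.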

  12. Determining Application Runtimes Using Queueing Network Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, Michael L. [Univ. of San Francisco, CA (United States)

    2006-12-14

    Determination of application times-to-solution for large-scale clustered computers continues to be a difficult problem in high-end computing, which will only become more challenging as multi-core consumer machines become more prevalent in the market. Both researchers and consumers of these multi-core systems desire reasonable estimates of how long their programs will take to run (time-to-solution, or TTS), and how many resources will be consumed in the execution. Currently there are few methods of determining these values, and those that do exist are either overly simplistic in their assumptions or require great amounts of effort to parameterize and understand. One previously untried method is queuing network modeling (QNM), which is easy to parameterize and solve, and produces results that typically fall within 10 to 30% of the actual TTS for our test cases. Using characteristics of the computer network (bandwidth, latency) and communication patterns (number of messages, message length, time spent in communication), the QNM model of the NAS-PB CG application was applied to MCR and ALC, supercomputers at LLNL, and the Keck Cluster at USF, with average errors of 2.41%, 3.61%, and -10.73%, respectively, compared to the actual TTS observed. While additional work is necessary to improve the predictive capabilities of QNM, current results show that QNM has a great deal of promise for determining application TTS for multi-processor computer systems.
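    A minimal instance of QNM for this purpose is exact Mean Value Analysis (MVA) of a closed product-form network, where each station's service demand bundles the characteristics the authors parameterize (compute time at one station; latency, bandwidth, and message counts folded into a network station's demand). This is textbook MVA, not the specific model in the report, and the demand numbers in the test are illustrative.

```python
def mva(service_demands, n_jobs):
    """Exact Mean Value Analysis for a closed product-form queueing
    network of FCFS single-server stations.
    service_demands[k]: total demand (visit count * service time) at
    station k. Returns (throughput, response_time) with n_jobs jobs."""
    queue_len = [0.0] * len(service_demands)
    throughput = response_time = 0.0
    for n in range(1, n_jobs + 1):
        # Residence time at each station grows with the queue a new
        # job finds there (arrival theorem).
        residence = [d * (1.0 + queue_len[k])
                     for k, d in enumerate(service_demands)]
        response_time = sum(residence)
        throughput = n / response_time
        queue_len = [throughput * r for r in residence]
    return throughput, response_time
```

    With a per-iteration "CPU" demand and "network" demand, a first-order TTS estimate is simply the iteration count times the response_time MVA returns.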

  13. Modelling of Tape Casting for Ceramic Applications

    DEFF Research Database (Denmark)

    Jabbari, Masoud

    of functional ceramics research. Advances in ceramic forming have enabled low cost shaping techniques such as tape casting and extrusion to be used in some of the most challenging technologies. These advances allow the design of complex components adapted to desired specific properties and applications. However...... of ceramic processing are generally focused on the control of the microstructure while the importance of shaping is often underestimated. Improved performance requires the design and shaping of both controlled architectures and microstructures. Novel functionally graded ceramic materials may be formed...... process of functionally graded ceramic materials for fuel cell applications as well as magnetic refrigeration. Models to simulate the shaping of monolayer/multilayer and graded materials by tape casting are developed. The emphasis is on analyzing the entry flow of multiple slurries from the reservoir...

  14. Modelling and application of stochastic processes

    CERN Document Server

    1986-01-01

    The subject of modelling and application of stochastic processes is too vast to be exhausted in a single volume. In this book, attention is focused on a small subset of this vast subject. The primary emphasis is on realization and approximation of stochastic systems. Recently there has been considerable interest in the stochastic realization problem, and hence, an attempt has been made here to collect in one place some of the more recent approaches and algorithms for solving the stochastic realization problem. Various different approaches for realizing linear minimum-phase systems, linear nonminimum-phase systems, and bilinear systems are presented. These approaches range from time-domain methods to spectral-domain methods. An overview of the chapter contents briefly describes these approaches. Also, in most of these chapters special attention is given to the problem of developing numerically efficient algorithms for obtaining reduced-order (approximate) stochastic realizations. On the application side,...

  15. Modeling molecular recognition: theory and application.

    Science.gov (United States)

    Mardis, K; Luo, R; David, L; Potter, M; Glemza, A; Payne, G; Gilson, M K

    2000-01-01

    Abstract Efficient, reliable methods for calculating the binding affinities of noncovalent complexes would allow advances in a variety of areas such as drug discovery and separation science. We have recently described a method that accommodates significant physical detail while remaining fast enough for use in molecular design. This approach uses the predominant states method to compute free energies, an empirical force field, and an implicit solvation model based upon continuum electrostatics. We review applications of this method to systems ranging from small molecules to protein-ligand complexes.

  16. Hydrodynamic Modeling and Its Application in AUC.

    Science.gov (United States)

    Rocco, Mattia; Byron, Olwyn

    2015-01-01

    The hydrodynamic parameters measured in an AUC experiment, s(20,w) and D(t)(20,w)(0), can be used to gain information on the solution structure of (bio)macromolecules and their assemblies. This entails comparing the measured parameters with those that can be computed from usually "dry" structures by "hydrodynamic modeling." In this chapter, we will first briefly put hydrodynamic modeling in perspective and present the basic physics behind it as implemented in the most commonly used methods. The important "hydration" issue is also touched upon, and the distinction between rigid bodies versus those for which flexibility must be considered in the modeling process is then made. The available hydrodynamic modeling/computation programs, HYDROPRO, BEST, SoMo, AtoB, and Zeno, the latter four all implemented within the US-SOMO suite, are described and their performance evaluated. Finally, some literature examples are presented to illustrate the potential applications of hydrodynamics in the expanding field of multiresolution modeling. © 2015 Elsevier Inc. All rights reserved.

  17. Agent Based Modeling Applications for Geosciences

    Science.gov (United States)

    Stein, J. S.

    2004-12-01

Agent-based modeling techniques have successfully been applied to systems in which complex behaviors or outcomes arise from varied interactions between individuals in the system. Each individual interacts with its environment, as well as with other individuals, by following a set of relatively simple rules. Traditionally this "bottom-up" modeling approach has been applied to problems in the fields of economics and sociology, but more recently it has been introduced to various disciplines in the geosciences. This technique can help explain the origin of complex processes from a relatively simple set of rules, incorporate large and detailed datasets when they exist, and simulate the effects of extreme events on system-wide behavior. Some of the challenges associated with this modeling method include significant computational requirements in order to keep track of thousands to millions of agents, and a lack of established methods for model validation and for evaluating model uncertainty. Challenges specific to the geosciences include how to define agents that control water, contaminant fluxes, climate forcing, and other physical processes, and how to link these "geo-agents" into larger agent-based simulations that include social systems such as demographics, economics, and regulations. Effective management of limited natural resources (such as water, hydrocarbons, or land) requires an understanding of what factors influence the demand for these resources on a regional and temporal scale. Agent-based models can be used to simulate this demand across a variety of sectors under a range of conditions and determine effective and robust management policies and monitoring strategies. The recent focus on the role of biological processes in the geosciences is another example of an area that could benefit from agent-based applications.
A typical approach to modeling the effect of biological processes in geologic media has been to represent these processes in

  18. Parsimonious PARMA Models and Their Application to Modeling of Riverflows

    Science.gov (United States)

    Tesfaye, Y. G.; Meerschaert, M. M.; Anderson, P. L.

    2004-12-01

For analysis and design of water resources systems, it is sometimes required to synthetically generate riverflow data with high resolution (that is, weekly or daily values). Periodic AutoRegressive Moving Average (PARMA) models provide a powerful tool for modeling such riverflow time series, which are often periodically stationary. The innovations algorithm can be used to obtain parameter estimates for PARMA models with finite fourth moment as well as infinite fourth moment but finite variance. Fitting the PARMA model to historical weekly or daily data, however, requires estimation of too many parameters, which violates the principle of parsimony. In an effort to obtain a parsimonious model representing periodically stationary series, we develop the asymptotic distribution of the discrete Fourier transform of the innovation estimates and then determine those statistically significant Fourier coefficients. We also extend these results to other periodic model parameters. We demonstrate the effectiveness of the technique using simulated data from different PARMA models. An application of the technique is demonstrated through the analysis of a daily riverflow series for the Fraser River in British Columbia.
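The parsimony step described in this abstract (take the discrete Fourier transform of the seasonal parameter estimates and keep only the significant coefficients) can be sketched roughly as follows. The synthetic weekly data, the period of 52, and the keep-the-largest-harmonics selection rule are illustrative stand-ins for the paper's formal asymptotic significance test:

```python
import numpy as np

# Hypothetical weekly period: 52 seasonal AR coefficients, one per week.
nu = 52
seasons = np.arange(nu)
true_phi = 0.5 + 0.2 * np.cos(2 * np.pi * seasons / nu)  # smooth seasonal pattern
phi_hat = true_phi + 0.05 * np.random.default_rng(0).normal(size=nu)  # noisy estimates

# Discrete Fourier transform of the estimated seasonal parameters.
coeffs = np.fft.rfft(phi_hat)

# Parsimony: keep only the few dominant harmonics (a crude stand-in for a
# formal significance test on the Fourier coefficients).
k_keep = 2
order = np.argsort(np.abs(coeffs))[::-1]
mask = np.zeros_like(coeffs)
mask[order[:k_keep]] = coeffs[order[:k_keep]]

# Reconstruct the seasonal parameters from the retained coefficients.
phi_smooth = np.fft.irfft(mask, n=nu)

# The parsimonious representation should track the underlying seasonal
# pattern more closely than the raw noisy estimates.
err_raw = np.mean((phi_hat - true_phi) ** 2)
err_smooth = np.mean((phi_smooth - true_phi) ** 2)
print(err_smooth < err_raw)
```

Dropping the insignificant harmonics reduces 52 free parameters to a handful of Fourier coefficients while filtering out estimation noise.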

  19. Generalized data stacking programming model with applications

    Directory of Open Access Journals (Sweden)

    Hala Samir Elhadidy

    2016-09-01

Full Text Available Recent research has shown that systems across various sciences follow stack-based stored-change behavior when subjected to events or varying environments "on and above" their normal situations. This paper presents a generalized data stack programming (GDSP) model developed to describe system changes under a varying environment. These changes, captured in different ways such as sensor readings, are stored in matrices. An extraction algorithm and an identification technique are proposed to extract the different layers between images and to identify the stack class the object follows, respectively. The general multi-stacking network is presented, including the interaction between the various stack-based layerings of some applications. The experiments show that the stack matrix concept gives an average accuracy of 99.45%.

  20. Genetic model compensation: Theory and applications

    Science.gov (United States)

    Cruickshank, David Raymond

    1998-12-01

The adaptive filtering algorithm known as Genetic Model Compensation (GMC) was originally presented in the author's Master's Thesis. The current work extends this earlier work. GMC uses a genetic algorithm to optimize filter process noise parameters in parallel with the estimation of the state and based only on the observational information available to the filter. The original stochastic state model underlying GMC was inherited from the antecedent, non-adaptive Dynamic Model Compensation (DMC) algorithm. The current work develops the stochastic state model from a linear system viewpoint, avoiding the simplifications and approximations of the earlier development, and establishes Riemann sums as unbiased estimators of the stochastic integrals which describe the evolution of the random state components. These are significant developments which provide GMC with a solid theoretical foundation. Orbit determination is the area of application in this work, and two types of problems are studied: real-time autonomous filtering using absolute GPS measurements and precise post-processed filtering using differential GPS measurements. The first type is studied in a satellite navigation simulation in which pseudorange and pseudorange rate measurements are processed by an Extended Kalman Filter which incorporates both DMC and GMC. Both estimators are initialized by a geometric point solution algorithm. Using measurements corrupted by simulated Selective Availability errors, GMC reduces mean RSS position error by 6.4 percent, reduces mean clock bias error by 46 percent, and displays a marked improvement in covariance consistency relative to DMC. To study the second type of problem, GMC is integrated with NASA Jet Propulsion Laboratory's Gipsy/Oasis-II (GOA-II) precision orbit determination program, creating an adaptive version of GOA-II's Reduced Dynamic Tracking (RDT) process noise formulation.
When run as a sequential estimator with GPS measurements from the TOPEX satellite and
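A minimal sketch of the GMC idea, with a scalar random-walk tracking problem standing in for the orbit-determination case: a small genetic algorithm searches for the process-noise level that makes the Kalman filter's innovations most likely, using only the observations. All numbers, the fitness function, and the GA details are illustrative assumptions, not the dissertation's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated 1-D problem: the true state is a random walk with process
# noise q_true; the filter must discover q from noisy measurements alone.
q_true, r = 0.5, 1.0
n = 300
x = np.zeros(n)
for k in range(1, n):
    x[k] = x[k - 1] + rng.normal(scale=np.sqrt(q_true))
z = x + rng.normal(scale=np.sqrt(r), size=n)

def innovation_cost(q):
    """Run a scalar Kalman filter with process noise q; return the
    negative log-likelihood of the innovations (lower is better)."""
    xhat, p, cost = 0.0, 1.0, 0.0
    for zk in z:
        p = p + q                      # predict
        s = p + r                      # innovation variance
        nu_k = zk - xhat               # innovation
        cost += 0.5 * (np.log(s) + nu_k ** 2 / s)
        kgain = p / s                  # update
        xhat += kgain * nu_k
        p *= (1.0 - kgain)
    return cost

# Minimal genetic algorithm over q: truncation selection plus
# multiplicative Gaussian mutation on a log scale.
pop = 10 ** rng.uniform(-3, 1, size=20)
for gen in range(30):
    fit = np.array([innovation_cost(q) for q in pop])
    parents = pop[np.argsort(fit)[:5]]                  # keep the best 5
    children = parents[rng.integers(0, 5, 15)] * 10 ** rng.normal(0, 0.1, 15)
    pop = np.concatenate([parents, children])

q_best = pop[np.argmin([innovation_cost(q) for q in pop])]
print(round(float(q_best), 2))
```

The population converges toward the process-noise value that best explains the observed innovation sequence, which is the essence of tuning filter process noise "based only on the observational information available to the filter".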

  1. Do Network Models Just Model Networks? On The Applicability of Network-Oriented Modeling

    NARCIS (Netherlands)

    Treur, J.; Shmueli, Erez

    2017-01-01

    In this paper for a Network-Oriented Modelling perspective based on temporal-causal networks it is analysed how generic and applicable it is as a general modelling approach and as a computational paradigm. This results in an answer to the question in the title different from: network models just

  2. Business model framework applications in health care: A systematic review.

    Science.gov (United States)

    Fredriksson, Jens Jacob; Mazzocato, Pamela; Muhammed, Rafiq; Savage, Carl

    2017-11-01

    It has proven to be a challenge for health care organizations to achieve the Triple Aim. In the business literature, business model frameworks have been used to understand how organizations are aligned to achieve their goals. We conducted a systematic literature review with an explanatory synthesis approach to understand how business model frameworks have been applied in health care. We found a large increase in applications of business model frameworks during the last decade. E-health was the most common context of application. We identified six applications of business model frameworks: business model description, financial assessment, classification based on pre-defined typologies, business model analysis, development, and evaluation. Our synthesis suggests that the choice of business model framework and constituent elements should be informed by the intent and context of application. We see a need for harmonization in the choice of elements in order to increase generalizability, simplify application, and help organizations realize the Triple Aim.

  3. Computational nanotechnology modeling and applications with MATLAB

    National Research Council Canada - National Science Library

    Musa, Sarhan M

    2012-01-01

    .... Offering thought-provoking perspective on the developments that are poised to revolutionize the field, the author explores both existing and future nanotechnology applications, which hold great...

  4. Model-driven semantic integration of service-oriented applications

    NARCIS (Netherlands)

    Pokraev, Stanislav Vassilev

    2009-01-01

The integration of enterprise applications is an extremely complex problem since most applications have not been designed to work with other applications. That is, they have different information models, do not share common state, and do not consult each other when updating their states.

  5. An Open Simulation System Model for Scientific Applications

    Science.gov (United States)

    Williams, Anthony D.

    1995-01-01

A model for a generic and open environment for running multi-code or multi-application simulations - called the Open Simulation System Model (OSSM) - is proposed and defined. This model attempts to meet the requirements of complex systems like the Numerical Propulsion Simulator System (NPSS). OSSM places no restrictions on the types of applications that can be integrated at any stage of its evolution. This includes applications of different disciplines, fidelities, etc. An implementation strategy is proposed that starts with a basic prototype and evolves over time to accommodate an increasing number of applications. Potential (standard) software is also identified which may aid in the design and implementation of the system.

  6. Ground water modeling applications using the analytic element method.

    Science.gov (United States)

    Hunt, Randall J

    2006-01-01

    Though powerful and easy to use, applications of the analytic element method are not as widespread as finite-difference or finite-element models due in part to their relative youth. Although reviews that focus primarily on the mathematical development of the method have appeared in the literature, a systematic review of applications of the method is not available. An overview of the general types of applications of analytic elements in ground water modeling is provided in this paper. While not fully encompassing, the applications described here cover areas where the method has been historically applied (regional, two-dimensional steady-state models, analyses of ground water-surface water interaction, quick analyses and screening models, wellhead protection studies) as well as more recent applications (grid sensitivity analyses, estimating effective conductivity and dispersion in highly heterogeneous systems). The review of applications also illustrates areas where more method development is needed (three-dimensional and transient simulations).

  7. Applications of maintenance optimisation models: a review and analysis

    NARCIS (Netherlands)

    R. Dekker (Rommert)

    1996-01-01

    textabstractIn this paper we give an overview of applications of maintenance optimization models published so far. We analyze the role of these models in maintenance and discuss the factors which may have hampered applications. Finally, we discuss future prospects.

  8. Photonic crystal fiber modelling and applications

    DEFF Research Database (Denmark)

    Bjarklev, Anders Overgaard; Broeng, Jes; Libori, Stig E. Barkou

    2001-01-01

Photonic crystal fibers having a microstructured air-silica cross section offer new optical properties compared to conventional fibers for telecommunication, sensor, and other applications. Recent advances within research and development of these fibers are presented.

  9. Interconnected hydro-thermal systems - Models, methods, and applications

    DEFF Research Database (Denmark)

    Hindsberger, Magnus

    2003-01-01

    , it has been analysed how the Balmorel model can be used to create inputs related to transmissions and/or prices to a more detailed production scheduling model covering a subsystem of the one represented in the Balmorel model. As an example of application of the Balmorel model, the dissertation presents...

  10. Surface Flux Modeling for Air Quality Applications

    Directory of Open Access Journals (Sweden)

    Limei Ran

    2011-08-01

    Full Text Available For many gasses and aerosols, dry deposition is an important sink of atmospheric mass. Dry deposition fluxes are also important sources of pollutants to terrestrial and aquatic ecosystems. The surface fluxes of some gases, such as ammonia, mercury, and certain volatile organic compounds, can be upward into the air as well as downward to the surface and therefore should be modeled as bi-directional fluxes. Model parameterizations of dry deposition in air quality models have been represented by simple electrical resistance analogs for almost 30 years. Uncertainties in surface flux modeling in global to mesoscale models are being slowly reduced as more field measurements provide constraints on parameterizations. However, at the same time, more chemical species are being added to surface flux models as air quality models are expanded to include more complex chemistry and are being applied to a wider array of environmental issues. Since surface flux measurements of many of these chemicals are still lacking, resistances are usually parameterized using simple scaling by water or lipid solubility and reactivity. Advances in recent years have included bi-directional flux algorithms that require a shift from pre-computation of deposition velocities to fully integrated surface flux calculations within air quality models. Improved modeling of the stomatal component of chemical surface fluxes has resulted from improved evapotranspiration modeling in land surface models and closer integration between meteorology and air quality models. Satellite-derived land use characterization and vegetation products and indices are improving model representation of spatial and temporal variations in surface flux processes. This review describes the current state of chemical dry deposition modeling, recent progress in bi-directional flux modeling, synergistic model development research with field measurements, and coupling with meteorological land surface models.
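The electrical resistance analog mentioned in this abstract treats dry deposition like current through resistors in series: the deposition velocity is the reciprocal of the summed aerodynamic, quasi-laminar boundary layer, and canopy resistances. A minimal sketch, with resistance values that are illustrative rather than taken from any particular model:

```python
# Classic resistance-analog dry deposition velocity:
#   v_d = 1 / (Ra + Rb + Rc)
# Ra: aerodynamic resistance, Rb: quasi-laminar boundary layer resistance,
# Rc: bulk surface (canopy) resistance, all in s/m.

def deposition_velocity(ra, rb, rc):
    """Deposition velocity (m/s) from series resistances (s/m)."""
    return 1.0 / (ra + rb + rc)

# Illustrative mid-range values for a moderately rough vegetated surface.
vd = deposition_velocity(ra=50.0, rb=30.0, rc=120.0)
print(vd)  # 1/200 = 0.005 m/s
```

Bi-directional flux schemes extend this picture by adding a surface compensation point, so the net flux can point either toward or away from the surface depending on the ambient concentration.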

  11. Structural Equation Modeling with Mplus Basic Concepts, Applications, and Programming

    CERN Document Server

    Byrne, Barbara M

    2011-01-01

    Modeled after Barbara Byrne's other best-selling structural equation modeling (SEM) books, this practical guide reviews the basic concepts and applications of SEM using Mplus Versions 5 & 6. The author reviews SEM applications based on actual data taken from her own research. Using non-mathematical language, it is written for the novice SEM user. With each application chapter, the author "walks" the reader through all steps involved in testing the SEM model including: an explanation of the issues addressed illustrated and annotated testing of the hypothesized and post hoc models expl

  12. Part 7: Application of the IAWQ model

    African Journals Online (AJOL)

    drinie

    2 is a kinetic-based model and incorporates two simple processes for chemical precipitation ... Kinetic (rate) constant for precipitation in IAWQ model. kDIS. Kinetic (rate) constant for redissolution in IAWQ model. Me. General symbol for metal trivalent ions ... apart from influent ammonia (which must be specified), the influent.

  13. Pinna Model for Hearing Instrument Applications

    DEFF Research Database (Denmark)

    Kammersgaard, Nikolaj Peter Iversen; Kvist, Søren Helstrup; Thaysen, Jesper

    2014-01-01

    A novel model of the pinna (outer ear) is presented. This is to increase the understanding of the effect of the pinna on the on-body radiation pattern of an antenna placed inside the ear. Simulations of the model and of a realistically shaped ear are compared to validate the model. The radiation ...

  14. Mixture Modeling: Applications in Educational Psychology

    Science.gov (United States)

    Harring, Jeffrey R.; Hodis, Flaviu A.

    2016-01-01

    Model-based clustering methods, commonly referred to as finite mixture modeling, have been applied to a wide variety of cross-sectional and longitudinal data to account for heterogeneity in population characteristics. In this article, we elucidate 2 such approaches: growth mixture modeling and latent profile analysis. Both techniques are…

  15. Remote sensing applications in hydrological modeling

    Science.gov (United States)

    Whitelaw, Alan S.; Howes, Sally; Fletcher, Peter; Rast, Michael

    1995-01-01

    Hydrological modeling is important for a wide range of operational forecasting activities in water resource management. The aim of this paper is to assess the capabilities of Earth observation sensors in relation to modeling data requirements in order to identify future areas of development in both model and sensor specifications. Models range from simple black boxes to distributed physically based models. There is significant variation in the data required and the ways in which these data are used. This range of requirements is compared with the capabilities of existing Earth observation sensors in order to define the current situation. Progress requires the coordinated development of both the sensors and the models, together with a greater understanding of the relationship between measurement and process scales. As a result, existing obstacles to progress in both areas are reviewed with the aid of specific case studies. This analysis leads to a set of recommendations on how to develop the use of sensor data in models.

  16. Nonlinear dynamics new directions models and applications

    CERN Document Server

    Ugalde, Edgardo

    2015-01-01

    This book, along with its companion volume, Nonlinear Dynamics New Directions: Theoretical Aspects, covers topics ranging from fractal analysis to very specific applications of the theory of dynamical systems to biology. This second volume contains mostly new applications of the theory of dynamical systems to both engineering and biology. The first volume is devoted to fundamental aspects and includes a number of important new contributions as well as some review articles that emphasize new development prospects. The topics addressed in the two volumes include a rigorous treatment of fluctuations in dynamical systems, topics in fractal analysis, studies of the transient dynamics in biological networks, synchronization in lasers, and control of chaotic systems, among others. This book also: ·         Develops applications of nonlinear dynamics on a diversity of topics such as patterns of synchrony in neuronal networks, laser synchronization, control of chaotic systems, and the study of transient dynam...

  17. New Applications for the Jacchia 77 Model

    Science.gov (United States)

    Wise, J. O.; Burke, W. J.

    2008-12-01

We examine the Jacchia 77 model and compare model densities to the 2001-2005 densities derived from the CHAMP and GRACE accelerometer data. Of particular interest is the model's unique formulation of exospheric temperature directly from the solar flux (F10), as opposed to a nighttime minimum temperature in the earlier Jacchia 70 and Jacchia 71 models. We compare this calculation directly to average global exospheric temperatures derived from the CHAMP and GRACE accelerometer neutral density data using the hydrostatic equation. The average global exospheric temperature is important because the model density profiles are all derived from this quantity. The Jacchia 77 model includes a special 81-day weighted-average F10 as a model proxy. This approach uses the F10 from the last three solar rotations instead of a centered F81 index, which means the model can be used in real time by using an 81-day weighted "boxcar" index. With new solar proxies for EUV and Mg recently introduced, we discuss the possibility of incorporating these indices in a similar manner. Because the drivers of thermospheric density (the semiannual variation, solar EUV, and the solar wind) are treated as separate modules in the model, we examine the strengths and weaknesses of each one as consideration for future model upgrades.
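The real-time advantage of a trailing 81-day index over a centered F81 can be sketched as follows. The uniform weighting below is an assumption for illustration; Jacchia 77's actual weighting over the last three solar rotations differs:

```python
import numpy as np

def trailing_f81(f10, weights=None):
    """Trailing 81-day average of daily F10.7: uses only the last three
    solar rotations (3 x 27 days), so it can be evaluated in real time.
    Uniform weights are an assumption; Jacchia 77's weighting differs."""
    f10 = np.asarray(f10, dtype=float)
    if weights is None:
        weights = np.ones(81) / 81.0   # plain boxcar
    return float(np.dot(f10[-81:], weights))

def centered_f81(f10, day):
    """Centered 81-day mean: needs 40 days of *future* data, so it
    cannot be evaluated in real time."""
    f10 = np.asarray(f10, dtype=float)
    return float(np.mean(f10[day - 40: day + 41]))

# For a constant flux the two estimators agree, but only the trailing
# version is computable at the end of the series.
series = [150.0] * 200
print(trailing_f81(series), centered_f81(series, day=100))
```

The same trailing-window construction is what would let a newer EUV or Mg II proxy be substituted for F10 without giving up real-time operation.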

  18. Modeling Perceived Quality for Imaging Applications

    NARCIS (Netherlands)

    Liu, H.

    2011-01-01

    People of all generations are making more and more use of digital imaging systems in their daily lives. The image content rendered by these digital imaging systems largely differs in perceived quality depending on the system and its applications. To be able to optimize the experience of viewers of

  19. Nuclear reaction modeling, verification experiments, and applications

    Energy Technology Data Exchange (ETDEWEB)

    Dietrich, F.S.

    1995-10-01

    This presentation summarized the recent accomplishments and future promise of the neutron nuclear physics program at the Manuel Lujan Jr. Neutron Scatter Center (MLNSC) and the Weapons Neutron Research (WNR) facility. The unique capabilities of the spallation sources enable a broad range of experiments in weapons-related physics, basic science, nuclear technology, industrial applications, and medical physics.

  20. New advances in statistical modeling and applications

    CERN Document Server

    Santos, Rui; Oliveira, Maria; Paulino, Carlos

    2014-01-01

    This volume presents selected papers from the XIXth Congress of the Portuguese Statistical Society, held in the town of Nazaré, Portugal, from September 28 to October 1, 2011. All contributions were selected after a thorough peer-review process. It covers a broad range of papers in the areas of statistical science, probability and stochastic processes, extremes and statistical applications.

  1. Sparse Multivariate Modeling: Priors and Applications

    DEFF Research Database (Denmark)

    Henao, Ricardo

    to use them as hypothesis generating tools. All of our models start from a family of structures, for instance factor models, directed acyclic graphs, classifiers, etc. Then we let them be selectively sparse as a way to provide them with structural fl exibility and interpretability. Finally, we complement...... modeling, a model for peptide-protein/protein-protein interactions called latent protein tree, a framework for sparse Gaussian process classification based on active set selection and a linear multi-category sparse classifier specially targeted to gene expression data. The thesis is organized to provide...

  2. Application of lumped-parameter models

    DEFF Research Database (Denmark)

    Ibsen, Lars Bo; Liingaard, Morten

This technical report concerns the lumped-parameter models for a suction caisson with a ratio between skirt length and foundation diameter equal to 1/2, embedded into a viscoelastic soil. The models are presented for three different values of the shear modulus of the subsoil (section 1.1).

  3. Optical Coherence Tomography: Modeling and Applications

    DEFF Research Database (Denmark)

    Thrane, Lars

    An analytical model is presented that is able to describe the performance of OCT systems in both the single and multiple scattering regimes simultaneously. This model inherently includes the shower curtain effect, well-known for light propagation through the atmosphere. This effect has been omitted...... in previous theoretical models of OCT systems. It is demonstrated that the shower curtain effect is of utmost importance in the theoretical description of an OCT system. The analytical model, together with proper noise analysis of the OCT system, enables calculation of the SNR, where the optical properties...

  4. Mobile Application Identification based on Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Yang Xinyan

    2018-01-01

Full Text Available With the increasing number of mobile applications, network management tasks have become more challenging. Users also face security issues with mobile Internet applications when enjoying mobile network resources. Identifying the applications that correspond to network traffic can help network operators perform network management effectively. Existing mobile application recognition technology faces two problems: it cannot recognize applications that use encryption protocols, and its scalability is poor. In this paper, a mobile application identification method based on the Hidden Markov Model (HMM) is proposed. It extracts defined statistical characteristics from the different network flows generated when each application starts, uses the timing information of these flows to obtain the corresponding time series, and then builds a separate HMM for each application to be identified. We test the proposed method with 10 common applications. The test results show that it achieves high accuracy and good generalization ability.
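A toy sketch of the likelihood-based classification step: train (here, hand-set) one discrete HMM per application and assign a flow to the model that scores it highest under the forward algorithm. The two-symbol packet-size alphabet and every parameter value are invented for illustration, not measured from real traffic:

```python
import numpy as np

def log_forward(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    via the forward algorithm. pi: initial state probabilities,
    A: state transition matrix, B: emission probability matrix."""
    alpha = np.log(pi) + np.log(B[:, obs[0]])
    for o in obs[1:]:
        alpha = np.logaddexp.reduce(alpha[:, None] + np.log(A), axis=0) \
                + np.log(B[:, o])
    return np.logaddexp.reduce(alpha)

# Two toy per-application HMMs over quantized packet sizes {0: small, 1: large}.
app_models = {
    "chat":  (np.array([0.9, 0.1]),
              np.array([[0.8, 0.2], [0.2, 0.8]]),
              np.array([[0.9, 0.1], [0.3, 0.7]])),    # mostly small packets
    "video": (np.array([0.1, 0.9]),
              np.array([[0.6, 0.4], [0.1, 0.9]]),
              np.array([[0.4, 0.6], [0.05, 0.95]])),  # mostly large packets
}

def identify(obs):
    """Assign the flow to the application whose HMM scores it highest."""
    return max(app_models, key=lambda a: log_forward(obs, *app_models[a]))

print(identify([0, 0, 1, 0, 0]))  # a small-packet flow
print(identify([1, 1, 1, 0, 1]))  # a large-packet flow
```

Because the decision uses only packet-level statistics (sizes and timing), not payload contents, the same scheme still applies when the traffic is encrypted.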

  5. Asteroid thermal modeling: recent developments and applications

    NARCIS (Netherlands)

    Harris, A. W.; Mueller, M.

    2006-01-01

A variety of thermal models are used for the derivation of asteroid physical parameters from thermal-infrared observations. Simple models based on spherical geometry are often adequate for obtaining sizes and albedos when very little information about an object is available. However, sophisticated

  6. Human hand modelling : Kinematics, dynamics, applications

    NARCIS (Netherlands)

    Gustus, A.; Stillfried, G.; Visser, J.; Jörntell, H.; Van der Smagt, P.

    2012-01-01

    An overview of mathematical modelling of the human hand is given. We consider hand models from a specific background: rather than studying hands for surgical or similar goals, we target at providing a set of tools with which human grasping and manipulation capabilities can be studied, and hand

  7. Model Driven Architecture - Foundations and Applications

    NARCIS (Netherlands)

    Rensink, Arend; Warmer, J.

    Model-Driven Architecture, including model-driven approaches in general, holds the big promise of moving software development towards a higher level of abstraction. Given the challenges in the software industry of delivering more complex functionality with less effort, I am convinced that it isn’t a

  8. Applications of Molecular and Materials Modeling

    Science.gov (United States)

    2002-01-01

    Modeling atmospheric chemistry, spectroscopy, adsorption Prof. Marco Antonio Chaer Nascimento http://www.iq.ufrj.br/~chaer/ University of São Paulo ...Jr. (Alkire 1996) was the champion for the start of molecular modeling at Amoco, aided by Joseph F. Gentile (Manager, Information and Computer

  9. Development and application of air quality models at the US ...

    Science.gov (United States)

    Overview of the development and application of air quality models at the U.S. EPA, particularly focused on the development and application of the Community Multiscale Air Quality (CMAQ) model developed within the Computation Exposure Division (CED) of the National Exposure Research Laboratory (NERL). This presentation will provide a simple overview of air quality model development and application geared toward a non-technical student audience. The National Exposure Research Laboratory (NERL) Computational Exposure Division (CED) develops and evaluates data, decision-support tools, and models to be applied to media-specific or receptor-specific problem areas. CED uses modeling-based approaches to characterize exposures, evaluate fate and transport, and support environmental diagnostics/forensics with input from multiple data sources. It also develops media- and receptor-specific models, process models, and decision support tools for use both within and outside of EPA.

  10. Using Model Checking to Generate Test Cases for Android Applications

    Directory of Open Access Journals (Sweden)

    Ana Rosario Espada

    2015-04-01

Full Text Available The behavior of mobile devices is highly non-deterministic and barely predictable due to the interaction of the user with its applications. In consequence, analyzing the correctness of applications running on a smartphone involves dealing with the complexity of its environment. In this paper, we propose the use of model-based testing to describe the potential behaviors of users interacting with mobile applications. These behaviors are modeled by composing specially-designed state machines. The composed state machines can be exhaustively explored using a model checking tool to automatically generate all possible user interactions. Each trace generated by the model checker can be interpreted as a test case to drive a runtime analysis of actual applications. We have implemented a tool that follows the proposed methodology to analyze Android devices, using the model checker Spin as the exhaustive generator of test cases.
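The generate-all-traces idea can be sketched without a full model checker: compose toy user-behavior state machines by interleaving, then enumerate every event sequence up to a bound, each one serving as a test case. The machines and events below are invented for illustration, and Spin's actual exhaustive exploration is far more sophisticated:

```python
from collections import deque

# Toy "user behavior" state machines: each maps a state to its list of
# (event, next_state) transitions. These are illustrative, not tied to
# any real Android application.
login = {"out": [("tap_login", "in")], "in": [("tap_logout", "out")]}
player = {"stopped": [("tap_play", "playing")],
          "playing": [("tap_stop", "stopped")]}

def traces(machines, starts, depth):
    """Breadth-first exhaustive exploration of the interleaved composition,
    returning every event sequence of the given length (one per test case)."""
    out = []
    queue = deque([(starts, ())])
    while queue:
        states, trace = queue.popleft()
        if len(trace) == depth:
            out.append(trace)
            continue
        # Interleaving semantics: at each step, any one machine may fire
        # one of its enabled transitions.
        for i, (machine, state) in enumerate(zip(machines, states)):
            for event, nxt in machine[state]:
                new_states = list(states)
                new_states[i] = nxt
                queue.append((tuple(new_states), trace + (event,)))
    return out

cases = traces([login, player], ("out", "stopped"), depth=3)
print(len(cases))  # 2 enabled events per step, so 2**3 = 8 traces
```

Each generated trace would then be replayed against the real application as a concrete UI test script.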

  11. Fuzzy modeling and control theory and applications

    CERN Document Server

    Matía, Fernando; Jiménez, Emilio

    2014-01-01

    Much work on fuzzy control, covering research, development and applications, has been done in Europe since the 1990s. Nevertheless, the existing books in the field are compilations of articles without interconnection or logical structure, or they express the personal point of view of the author. This book compiles the developments of researchers with demonstrated experience in the field of fuzzy control, following a logical structure and a unified style. The first chapters of the book are dedicated to the introduction of the main fuzzy logic techniques, while the following chapters focus on concrete applications. This book is supported by the EUSFLAT and CEA-IFAC societies, which include a large number of researchers in the field of fuzzy logic and control. The central topic of the book, fuzzy control, is one of the main research and development lines covered by these associations.

  12. Digital provenance - models, systems, and applications

    OpenAIRE

    Sultana, Salmin

    2014-01-01

    Data provenance refers to the history of creation and manipulation of a data object and is widely used in various application domains including scientific experiments, grid computing, file and storage systems, streaming data, etc. However, existing provenance systems operate at a single layer of abstraction (workflow/process/OS) at which they record and store provenance, whereas the provenance captured from different layers provides the highest benefit when integrated through a unified prov...

  13. Modeling Students' Memory for Application in Adaptive Educational Systems

    Science.gov (United States)

    Pelánek, Radek

    2015-01-01

    Human memory has been thoroughly studied and modeled in psychology, but mainly in laboratory settings under simplified conditions. For application in practical adaptive educational systems we need simple and robust models which can cope with aspects like varied prior knowledge or multiple-choice questions. We discuss and evaluate several models of…

  14. Multilevel Modeling: A Review of Methodological Issues and Applications

    Science.gov (United States)

    Dedrick, Robert F.; Ferron, John M.; Hess, Melinda R.; Hogarty, Kristine Y.; Kromrey, Jeffrey D.; Lang, Thomas R.; Niles, John D.; Lee, Reginald S.

    2009-01-01

    This study analyzed the reporting of multilevel modeling applications of a sample of 99 articles from 13 peer-reviewed journals in education and the social sciences. A checklist, derived from the methodological literature on multilevel modeling and focusing on the issues of model development and specification, data considerations, estimation, and…

  15. Crop model usefulness in drylands of southern Africa: an application ...

    African Journals Online (AJOL)

    Crop models are useful tools for simulating impacts of climate and agricultural practices on crops. Models have to demonstrate the ability to simulate actual crop growth response in particular environments before application. Data limitations in southern Africa frequently hinder adequate assessment of crop models before ...

  16. Four Applications of the TIGRIS Model in the Netherlands

    NARCIS (Netherlands)

    Eradus, P.; Schoenmakers, A.; van der Hoorn, A.I.J.M.

    2002-01-01

    This paper presents the land-use transportation interaction model TIGRIS for the Netherlands. Four studies have been conducted in the past few years using increasingly sophisticated versions of the model. The paper places the model applications in their geographical context, provides an overview of

  17. Models in Science Education: Applications of Models in Learning and Teaching Science

    Science.gov (United States)

    Ornek, Funda

    2008-01-01

    In this paper, I discuss different types of models in science education and applications of them in learning and teaching science, in particular physics. Based on the literature, I categorize models as conceptual and mental models according to their characteristics. In addition to these models, there is another model called "physics model" by the…

  18. Software reliability models for critical applications

    Energy Technology Data Exchange (ETDEWEB)

    Pham, H.; Pham, M.

    1991-12-01

    This report presents the results of the first phase of the ongoing EG&G Idaho, Inc. Software Reliability Research Program. The program is studying the existing software reliability models and proposes a state-of-the-art software reliability model that is relevant to the nuclear reactor control environment. This report consists of three parts: (1) summaries of the literature review of existing software reliability and fault tolerant software reliability models and their related issues, (2) proposed technique for software reliability enhancement, and (3) general discussion and future research. The development of this proposed state-of-the-art software reliability model will be performed in the second phase. 407 refs., 4 figs., 2 tabs.

  20. Mathematical modeling and applications in nonlinear dynamics

    CERN Document Server

    Merdan, Hüseyin

    2016-01-01

    The book covers nonlinear physical problems and mathematical modeling, including molecular biology, genetics, neurosciences, and artificial intelligence, together with classical problems in mechanics, astronomy, and physics. The chapters present nonlinear mathematical modeling in life science and physics through nonlinear differential equations, nonlinear discrete equations and hybrid equations. Such modeling can be effectively applied to a wide spectrum of nonlinear physical problems, including the Kolmogorov-Arnold-Moser (KAM) theory, singular differential equations, impulsive dichotomous linear systems, analytical bifurcation trees of periodic motions, and almost or pseudo-almost periodic solutions in nonlinear dynamical systems. Provides methods for mathematical models with switching, thresholds, and impulses, each of particular importance for discontinuous processes. Includes qualitative analysis of behaviors of tumor-immune systems and methods of analysis for DNA, neural networks and epidemiology. Introduces...

  1. The DES-Model and Its Applications

    DEFF Research Database (Denmark)

    Grohnheit, Poul Erik

    This report describes the use of the Danish Energy System (DES) Model, which has been used for several years as the most comprehensive model for energy planning. The structure of the Danish energy system is described, and a number of energy system parameters are explained, in particular the efficiencies and marginal costs of combined heat and power (CHP). Some associated models are briefly outlined, and the use of the model is described by examples concerning scenarios for the primary energy requirements and energy system costs up to the year 2000, planned development of the power and heating systems, assessment of nuclear power, and effects of changes in the energy supply system on the emissions of SO2 and NOx.

  2. HTGR Application Economic Model Users' Manual

    Energy Technology Data Exchange (ETDEWEB)

    A.M. Gandrik

    2012-01-01

    The High Temperature Gas-Cooled Reactor (HTGR) Application Economic Model was developed at the Idaho National Laboratory for the Next Generation Nuclear Plant Project. The HTGR Application Economic Model calculates either the required selling price of power and/or heat for a given internal rate of return (IRR) or the IRR for power and/or heat being sold at the market price. The user can generate these economic results for a range of reactor outlet temperatures; with and without power cycles, including either a Brayton or Rankine cycle; for the demonstration plant, first-of-a-kind, or nth-of-a-kind project phases; for up to 16 reactor modules; and for module ratings of 200, 350, or 600 MWt. This user's manual contains the mathematical models and operating instructions for the HTGR Application Economic Model. Instructions, screenshots, and examples are provided to guide the user through the model. This model was designed for users who are familiar with the HTGR design, Excel, and engineering economics. Modification of the HTGR Application Economic Model should only be performed by users familiar with the HTGR and its applications, Excel, and Visual Basic.
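The "required selling price for a given IRR" calculation can be illustrated with a minimal sketch: at the price where the project NPV is zero at the target discount rate, that rate equals the project's IRR. All inputs below are illustrative, not actual HTGR figures, and the real model handles many more cases (project phases, module counts, power cycles).

```python
def required_price(capex, annual_opex, annual_mwh, rate, years):
    """Selling price (per MWh) at which the project's NPV is zero at the
    target discount rate -- i.e., that rate is the project's IRR."""
    # Capital recovery factor annualizes the up-front investment.
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    return (capex * crf + annual_opex) / annual_mwh

# Illustrative inputs (not actual HTGR figures): $2B plant, $100M/yr O&M,
# 4 TWh/yr output, 10% target IRR, 30-year life.
price = required_price(capex=2.0e9, annual_opex=1.0e8,
                       annual_mwh=4.0e6, rate=0.10, years=30)
```

Discounting the resulting cash flows back at the target rate returns a net present value of zero, which is the defining property of the IRR.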

  3. Application of Simple CFD Models in Smoke Ventilation Design

    DEFF Research Database (Denmark)

    Brohus, Henrik; Nielsen, Peter Vilhelm; la Cour-Harbo, Hans

    2004-01-01

    The paper examines the possibilities of using simple CFD models in practical smoke ventilation design. The aim is to assess whether it is possible with reasonable accuracy to predict the behaviour of smoke transport in case of a fire. A CFD code mainly applicable for “ordinary” ventilation design is used for the examination. The CFD model is compared with benchmark tests and results from a special-application fire simulation CFD code. Apart from the benchmark tests, two practical applications are examined in the shape of modelling a fire in a theatre and a double façade, respectively. The simple CFD model uses a standard k-ε turbulence model. Simulations comprise both steady-state and dynamic approaches. Several boundary conditions are tested. Finally, the paper discusses the prospects of simple CFD models in smoke ventilation design, including the inherent limitations.

  4. ULF Wave Modeling Challenge -Modeling Results and Application to Observations

    Science.gov (United States)

    Rastaetter, L.; Kuznetsova, M. M.; Claudepierre, S. G.; Guild, T. B.; Hartinger, M.; Welling, D. T.; Glocer, A.; Honkonen, I. J.; Raeder, J.

    2015-12-01

    The GEM Metrics and Validation Focus Group has been conducting an Ultra-Low-Frequency (ULF) wave modeling challenge using monochromatic and white-noise solar wind pressure drivers. Using methodology similar to Claudepierre et al. (2010), MHD simulations performed by the SWMF, OpenGGCM and GUMICS models at the Community Coordinated Modeling Center (CCMC) are presented in comparison to LFM model outputs used in the publication and performed at the CCMC. We discuss the effect of inner (near-Earth) boundary conditions on the model results. Event simulations compared to ground-based and in-situ observations will eventually decide which boundary conditions are most realistic.

  5. Recognizing textual entailment models and applications

    CERN Document Server

    Dagan, Ido; Sammons, Mark

    2013-01-01

    In the last few years, a number of NLP researchers have developed and participated in the task of Recognizing Textual Entailment (RTE). This task encapsulates Natural Language Understanding capabilities within a very simple interface: recognizing when the meaning of a text snippet is contained in the meaning of a second piece of text. This simple abstraction of an exceedingly complex problem has broad appeal partly because it can be conceived also as a component in other NLP applications, from Machine Translation to Semantic Search to Information Extraction. It also avoids commitment to any sp

  6. Modeling colorant leakage techniques: application to endodontics.

    Science.gov (United States)

    Romieu, Olivier J; Zimányi, László; Warszyński, Piotr; Levallois, Bernard; Cuisinier, Frédéric J; de Périère, Dominique Deville; Jacquot, Bruno

    2010-09-01

    Our aim was to improve the comprehension of in vitro tracer leakage studies and to determine in which conditions such studies can be reliable. We aimed to develop different theoretical models to describe either an initially dry or a wet interface (slit) between sealer and dentin. Equations based on physical laws were derived to model theoretically in vitro tracer penetration. For the dry interfaces, atmospheric, hydrostatic, tracer gravimetric, capillary and internal air pressures were considered as the underlying forces that control tracer penetration. For wet interfaces, the laws of diffusion were used to model colorant penetration. In both cases penetration is influenced by the width of the interface and by the size of the colorant. Calculations for dry conditions have shown that penetration is quick, mainly driven by the capillary pressure, and the penetration increases as the width of the interface diminishes. Dentinal tubules and the extent of their interconnection modify the penetration depth. For wet conditions, tracer size is the main factor controlling the penetration length and speed (the bigger the tracer, the slower the penetration). Our model calculations demonstrate that tracer penetration studies have to be performed under strict experimental conditions. Dry and wet interfaces are two extreme cases with very different tracer penetration modes. In vitro colorant penetration tests should be performed in both of these conditions avoiding cases where the slit contains both air and water. These models can be adapted to other dental situations as well. Copyright 2010 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
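The capillary driving force described for dry interfaces can be sketched with the Young–Laplace relation for a parallel-sided slit, whose pressure grows as the slit narrows, consistent with the penetration behaviour reported above. The formula choice and the water-like property values are illustrative assumptions, not the paper's exact model.

```python
import math

def capillary_pressure(gamma, theta_deg, width):
    """Young-Laplace capillary pressure for a parallel-sided slit:
    P = 2 * gamma * cos(theta) / width (Pa)."""
    return 2.0 * gamma * math.cos(math.radians(theta_deg)) / width

# Water-like tracer (surface tension 0.072 N/m, 30 deg contact angle):
p_wide   = capillary_pressure(0.072, 30.0, 1e-5)  # 10 um slit
p_narrow = capillary_pressure(0.072, 30.0, 1e-6)  # 1 um slit -- higher pressure
```

Halving the slit width doubles the driving pressure, which is why the width of the interface matters so much in the dry-interface calculations.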

  7. A practical guide on DTA model applications for regional planning

    Science.gov (United States)

    2016-06-07

    This document is intended as a guide for use by Metropolitan Planning Organizations (MPO) and other planning agencies that are interested in applying Dynamic Traffic Assignment (DTA) models for planning applications. The objective of this document is...

  8. Risk measurement and risk modelling using applications of Vine copulas

    NARCIS (Netherlands)

    D.E. Allen (David); M.J. McAleer (Michael); A.K. Singh (Abhay)

    2017-01-01

    textabstractThis paper features an application of Regular Vine copulas which are a novel and recently developed statistical and mathematical tool which can be applied in the assessment of composite financial risk. Copula-based dependence modelling is a popular tool in financial applications, but is

  9. Model castings with composite surface layer - application

    Directory of Open Access Journals (Sweden)

    J. Szajnar

    2008-10-01

    Full Text Available The paper presents a method for improving the usable properties of surface layers of cast carbon steel 200–450 by applying, directly in the founding process, a composite surface layer based on a Fe-Cr-C alloy. The composite surface layer technology mainly guarantees increased hardness and abrasive wear resistance of cast steel castings used as machine elements. This technology can compete with generally applied welding technologies (surfacing by welding and thermal spraying). In the course of the studies, cast steel test castings with a composite surface layer were made, whose usability for industrial applications was estimated by the criteria of hardness, abrasive wear resistance of the metal-mineral type, and quality of the joint between the cast steel and the Fe-Cr-C alloy. Based on the studies conducted, a thesis was formulated that the composite surface layer arises from the liquid state. Moreover, the thickness and hardness of the composite layer can be controlled by suitable selection of parameters, i.e., thickness of the insert, pouring temperature, and solidification modulus of the casting. A possible application of the composite surface layer technology in the manufacture of a cast steel slide bush for a combined cutter loader is presented.

  10. Sparse modeling theory, algorithms, and applications

    CERN Document Server

    Rish, Irina

    2014-01-01

    ""A comprehensive, clear, and well-articulated book on sparse modeling. This book will stand as a prime reference to the research community for many years to come.""-Ricardo Vilalta, Department of Computer Science, University of Houston""This book provides a modern introduction to sparse methods for machine learning and signal processing, with a comprehensive treatment of both theory and algorithms. Sparse Modeling is an ideal book for a first-year graduate course.""-Francis Bach, INRIA - École Normale Supřieure, Paris

  11. A cutting force model for micromilling applications

    DEFF Research Database (Denmark)

    Bissacco, Giuliano; Hansen, Hans Nørgaard; De Chiffre, Leonardo

    2006-01-01

    In micro milling the maximum uncut chip thickness is often smaller than the cutting edge radius. This paper introduces a new cutting force model for ball nose micro milling that is capable of taking into account the effect of the edge radius.

  12. Co-clustering models, algorithms and applications

    CERN Document Server

    Govaert, Gérard

    2013-01-01

    Cluster or co-cluster analyses are important tools in a variety of scientific areas. The introduction of this book presents a state of the art of already well-established, as well as more recent methods of co-clustering. The authors mainly deal with the two-mode partitioning under different approaches, but pay particular attention to a probabilistic approach. Chapter 1 concerns clustering in general and the model-based clustering in particular. The authors briefly review the classical clustering methods and focus on the mixture model. They present and discuss the use of different mixture

  13. Simulation models generator. Applications in scheduling

    Directory of Open Access Journals (Sweden)

    Omar Danilo Castrillón

    2013-08-01

    Rev. Mate. Teor. Aplic. (ISSN 1409-2433) Vol. 20(2): 231–241, July 2013. The prototype generates simulation models that approximate reality so that decisions can be evaluated and made more assertively. To test the prototype, a production system with 9 machines and 5 jobs in a job shop configuration was used as the modeling example, with stochastic processing times and machine stops, measuring machine utilization rates and the average time of jobs in the system as measures of system performance. This test shows the goodness of the prototype, saving the user the building of the simulation model

  14. Model evaluation methodology applicable to environmental assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Shaeffer, D.L.

    1979-08-01

    A model evaluation methodology is presented to provide a systematic framework within which the adequacy of environmental assessment models might be examined. The necessity for such a tool is motivated by the widespread use of models for predicting the environmental consequences of various human activities and by the reliance on these model predictions for deciding whether a particular activity requires the deployment of costly control measures. Consequently, the uncertainty associated with prediction must be established for the use of such models. The methodology presented here consists of six major tasks: model examination, algorithm examination, data evaluation, sensitivity analyses, validation studies, and code comparison. This methodology is presented in the form of a flowchart to show the logical interrelatedness of the various tasks. Emphasis has been placed on identifying those parameters which are most important in determining the predictive outputs of a model. Importance has been attached to the process of collecting quality data. A method has been developed for analyzing multiplicative chain models when the input parameters are statistically independent and lognormally distributed. Latin hypercube sampling has been offered as a promising candidate for doing sensitivity analyses. Several different ways of viewing the validity of a model have been presented. Criteria are presented for selecting models for environmental assessment purposes.
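Two of the techniques named above, Latin hypercube sampling for sensitivity analyses and multiplicative chains of statistically independent lognormal parameters, can be combined in a short sketch. The parameter count, means, and standard deviations below are invented for illustration and are not taken from the report.

```python
import math
import random
from statistics import NormalDist

def latin_hypercube(n_samples, n_params, seed=0):
    """One uniform(0,1) draw per stratum per parameter, shuffled per column,
    so every parameter's range is covered evenly."""
    rng = random.Random(seed)
    cols = []
    for _ in range(n_params):
        strata = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(strata)
        cols.append(strata)
    return list(zip(*cols))  # rows: one parameter vector per sample

norm = NormalDist()

def chain_output(u_vec, mus=(0.0, 0.1), sigmas=(0.2, 0.3)):
    """Multiplicative chain: product of independent lognormal factors,
    each sampled by the inverse-CDF transform of its uniform coordinate."""
    out = 1.0
    for u, mu, sig in zip(u_vec, mus, sigmas):
        out *= math.exp(mu + sig * norm.inv_cdf(u))
    return out

samples = [chain_output(u) for u in latin_hypercube(100, 2)]
```

Because a product of independent lognormals is itself lognormal, the log of the chain output can be checked analytically, which is exactly the property that makes multiplicative chain models tractable for this kind of analysis.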

  15. Network models in optimization and their applications in practice

    CERN Document Server

    Glover, Fred; Phillips, Nancy V

    2011-01-01

    Unique in that it focuses on formulation and case studies rather than solution procedures, covering applications for pure, generalized and integer networks, equivalent formulations, plus successful techniques of network models. Every chapter contains a simple model which is expanded to handle more complicated developments, a synopsis of existing applications, one or more case studies, at least 20 exercises and invaluable references. An Instructor's Manual presenting detailed solutions to all the problems in the book is available upon request from the Wiley editorial department.

  16. Application Feature Model for Geometrical Specification of Assemblies

    OpenAIRE

    Romero, F.; Rosado, P.; Bruscas, G.M.

    2015-01-01

    The work begins with the description of a Domain Meta-Model for collaborative and integrated product development based on a Feature Model that aggregates all Application Features required to support domain specific reasoning. These Application Features are conceived as an aggregation of several Object Features containing all the knowledge about the structure and geometric interface that are the solution for a certain function. Afterwards, the Specification Feature, as a specialisation of the ...

  17. Handbook of mixed membership models and their applications

    CERN Document Server

    Airoldi, Edoardo M; Erosheva, Elena A; Fienberg, Stephen E

    2014-01-01

    In response to scientific needs for more diverse and structured explanations of statistical data, researchers have discovered how to model individual data points as belonging to multiple groups. Handbook of Mixed Membership Models and Their Applications shows you how to use these flexible modeling tools to uncover hidden patterns in modern high-dimensional multivariate data. It explores the use of the models in various application settings, including survey data, population genetics, text analysis, image processing and annotation, and molecular biology.Through examples using real data sets, yo

  18. Applied probability models with optimization applications

    CERN Document Server

    Ross, Sheldon M

    1992-01-01

    Concise advanced-level introduction to stochastic processes that frequently arise in applied probability. Largely self-contained text covers Poisson process, renewal theory, Markov chains, inventory theory, Brownian motion and continuous time optimization models, much more. Problems and references at chapter ends. ""Excellent introduction."" - Journal of the American Statistical Association. Bibliography. 1970 edition.

  19. Application of the Social Interaction Model.

    Science.gov (United States)

    Koning, Cyndie; And Others

    1997-01-01

    Assessment of an adolescent's social skills using the Child and Adolescent Social Perception measure provided information about the client's use of nonverbal cues. The Social Interaction Model was used as the framework for determining why his social interactions were unsuccessful and identified areas for intervention. (SK)

  20. Venusian Applications of 3D Convection Modeling

    Science.gov (United States)

    Bonaccorso, Timary Annie

    2011-01-01

    This study models mantle convection on Venus using the 'cubed sphere' code OEDIPUS, which models one-sixth of the planet in spherical geometry. We are attempting to balance internal heating, bottom mantle viscosity, and the temperature difference across Venus' mantle, in order to create a realistic model that matches current planetary observations. We have also begun to run both lower and upper mantle simulations to determine whether layered (as opposed to whole-mantle) convection might produce more efficient heat transfer, as well as to model coronae formation in the upper mantle. Upper mantle simulations are completed using OEDIPUS' Cartesian counterpart, JOCASTA. This summer's central question has been how to define a mantle plume. Traditionally, we have defined a hot plume as the region with temperature at or above 40% of the difference between the maximum and horizontally averaged temperature, and a cold plume as the region at or below 40% of the difference between the minimum and average temperature. For less viscous cases (10^20 Pa·s), the plumes generated by that definition lacked vigor, displaying buoyancies 1/100th of those found in previous, higher-viscosity simulations (10^21 Pa·s). As the mantle plumes with large buoyancy flux are most likely to produce topographic uplift and volcanism, the low-viscosity cases' plumes may not produce observable deformation. In an effort to eliminate the smallest plumes, we experimented with different lower-bound parameters and temperature percentages.
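The 40% hot-plume threshold described above reduces to a simple mask computation over a temperature field; the temperature values below are synthetic, for illustration only.

```python
def hot_plume_mask(temps):
    """Mark cells whose temperature exceeds the horizontal average by at
    least 40% of the (max - average) difference."""
    t_max = max(temps)
    t_avg = sum(temps) / len(temps)
    cutoff = t_avg + 0.4 * (t_max - t_avg)
    return [t >= cutoff for t in temps]

# Synthetic horizontal slice of mantle temperatures (K), for illustration:
mask = hot_plume_mask([1500.0, 1600.0, 1900.0, 2000.0])
```

With an average of 1750 K and a maximum of 2000 K, the cutoff is 1850 K, so only the two hottest cells are flagged; lowering the 40% parameter admits weaker plumes, which is the tuning discussed above.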

  1. The application of an empowerment model

    NARCIS (Netherlands)

    Molleman, E; van Delft, B; Slomp, J

    2001-01-01

    In this study we applied an empowerment model that focuses on (a) the need for empowerment in light of organizational strategy, (b) job design issues such as job enlargement and job enrichment that facilitate empowerment, and (c) the abilities, and (d) the attitudes of workers that make empowerment

  2. System identification application using Hammerstein model

    Indian Academy of Sciences (India)

    Saban Ozer

    because of its advanced theoretical background [3–5, 10]. However, many systems in real life have nonlinear behaviours. Linear methods can be inadequate in ...

  3. Business model driven service architecture design for enterprise application integration

    OpenAIRE

    Gacitua-Decar, Veronica; Pahl, Claus

    2008-01-01

    Increasingly, organisations are using a Service-Oriented Architecture (SOA) as an approach to Enterprise Application Integration (EAI), which is required for the automation of business processes. This paper presents an architecture development process which guides the transition from business models to a service-based software architecture. The process is supported by business reference models and patterns. Firstly, the business process models are enhanced with domain model elements, applicat...

  4. Hydromechanical modelling with application in sealing for underground waste deposition

    OpenAIRE

    Hasal, M. (Martin); Michalec, Z. (Zdeněk); Blaheta, R. (Radim)

    2015-01-01

    Hydro-mechanical models appear in the simulation of many environmental problems related to the construction of engineering barriers against contaminant spreading. The presented work aims at modelling bentonite-sand barriers, which can be used for nuclear waste isolation and similar problems. In particular, we use a hydro-mechanical model coupling unsaturated flow and (nonlinear) elasticity, implement the model in the COMSOL software, and show an application in the simulation of an infiltration test (2D axisymmetric mo...

  5. Potential model application and planning issues

    Directory of Open Access Journals (Sweden)

    Christiane Weber

    2000-03-01

    Full Text Available The potential model has been and remains a spatial interaction model used for various problems in the human sciences. However, its use by Donnay (1997, 1995, 1994) and Binard (1995), who introduced image-processing results as an application support, opened the way to innovative applications, for example for determining the urban boundary or local hinterlands. The possible articulations between applying the potential model to imagery and using Geographic Information System layers have allowed the temporal evaluation of urban development trends (Weber, 1998). Taking up this idea, the proposed study attempts to identify the forms of urban development of the Urban Community of Strasbourg (CUS), taking into account land use, the characteristics of the communication networks, urban regulations, and the environmental constraints that weigh on the study area. The initial land-use state, obtained by statistical processing, is used as input to the potential model in order to obtain potential surfaces associated with specific spatial characteristics: the extension of the urban form, the preservation of natural or agricultural areas, or the regulations. The results are then combined and classified. This application was carried out to confront the method with the actual development of the CUS, determined by a diachronic study comparing satellite images (SPOT 1986–SPOT 1998). To verify the interest and accuracy of the method, the satellite results were compared with those obtained from the classification of the potential surfaces. The development zones identified with the potential model were confirmed by the results of the temporal analysis of the images. A differentiation of zones in

  6. Adaptive Networks Theory, Models and Applications

    CERN Document Server

    Gross, Thilo

    2009-01-01

    With adaptive, complex networks, the evolution of the network topology and the dynamical processes on the network are equally important and often fundamentally entangled. Recent research has shown that such networks can exhibit a plethora of new phenomena which are ultimately required to describe many real-world networks. Some of those phenomena include robust self-organization towards dynamical criticality, formation of complex global topologies based on simple, local rules, and the spontaneous division of "labor" in which an initially homogenous population of network nodes self-organizes into functionally distinct classes. These are just a few. This book is a state-of-the-art survey of those unique networks. In it, leading researchers set out to define the future scope and direction of some of the most advanced developments in the vast field of complex network science and its applications.

  7. An introduction to queueing theory modeling and analysis in applications

    CERN Document Server

    Bhat, U Narayan

    2015-01-01

    This introductory textbook is designed for a one-semester course on queueing theory that does not require a course on stochastic processes as a prerequisite. By integrating the necessary background on stochastic processes with the analysis of models, the work provides a sound foundational introduction to the modeling and analysis of queueing systems for a wide interdisciplinary audience of students in mathematics, statistics, and applied disciplines such as computer science, operations research, and engineering. This edition includes additional topics in methodology and applications. Key features: • An introductory chapter including a historical account of the growth of queueing theory in more than 100 years. • A modeling-based approach with emphasis on identification of models. • Rigorous treatment of the foundations of basic models commonly used in applications with appropriate references for advanced topics. • Applications in manufacturing, and in computer and communication systems. • A chapter on ...

  8. Fuzzy Stochastic Optimization Theory, Models and Applications

    CERN Document Server

    Wang, Shuming

    2012-01-01

    Covering in detail both theoretical and practical perspectives, this book is a self-contained and systematic depiction of current fuzzy stochastic optimization that deploys the fuzzy random variable as a core mathematical tool to model the integrated fuzzy random uncertainty. It proceeds in an orderly fashion from the requisite theoretical aspects of the fuzzy random variable to fuzzy stochastic optimization models and their real-life case studies.   The volume reflects the fact that randomness and fuzziness (or vagueness) are two major sources of uncertainty in the real world, with significant implications in a number of settings. In industrial engineering, management and economics, the chances are high that decision makers will be confronted with information that is simultaneously probabilistically uncertain and fuzzily imprecise, and optimization in the form of a decision must be made in an environment that is doubly uncertain, characterized by a co-occurrence of randomness and fuzziness. This book begins...

  9. [Technique and applications of stereolithographic cranial models].

    Science.gov (United States)

    Wolf, H P; Lindner, A; Millesi, W; Knabl, J; Watzke, I

    1994-01-01

    3-D modelling of the skull is useful for improving diagnostics, for surgical planning, and for simulating surgical procedures. Stereolithography is a constructive process that produces a model by building it up layer by layer in plastic, using an ultraviolet laser to catalyse the polymerization of a liquid plastic solution. This fast prototyping derives from a new interface between a CT scanner and an SLA. By avoiding the tool-path problems inherent in conventional computer-driven CNC milling machines, we succeeded in producing closed cavities and even the intraosseous course of vessels and nerves. The compact and smooth surface makes manual postprocessing unnecessary. By using new software (interpolation +/- 0.25 mm) we could improve the accuracy. In our studies we found a maximum aberration of 0.25 mm.

  10. Computational hemodynamics theory, modelling and applications

    CERN Document Server

    Tu, Jiyuan; Wong, Kelvin Kian Loong

    2015-01-01

    This book discusses geometric and mathematical models that can be used to study fluid and structural mechanics in the cardiovascular system.  Where traditional research methodologies in the human cardiovascular system are challenging due to its invasive nature, several recent advances in medical imaging and computational fluid and solid mechanics modelling now provide new and exciting research opportunities. This emerging field of study is multi-disciplinary, involving numerical methods, computational science, fluid and structural mechanics, and biomedical engineering. Certainly any new student or researcher in this field may feel overwhelmed by the wide range of disciplines that need to be understood. This unique book is one of the first to bring together knowledge from multiple disciplines, providing a starting point to each of the individual disciplines involved, attempting to ease the steep learning curve. This book presents elementary knowledge on the physiology of the cardiovascular system; basic knowl...

  11. Application of Digital Terrain Model to volcanology

    Directory of Open Access Journals (Sweden)

    V. Achilli

    2006-06-01

    Three-dimensional reconstruction of the ground surface (Digital Terrain Model, DTM), derived from airborne GPS photogrammetric surveys, is a powerful tool for implementing morphological analysis in remote areas. Highly accurate 3D models, with submeter elevation accuracy, can be obtained from images acquired at photo scales between 1:5000 and 1:20000. Multitemporal DTMs acquired periodically over a volcanic area allow the monitoring of areas affected by crustal deformations and the evaluation of mass balance when large instability phenomena or lava flows have occurred. The work describes the results obtained from the analysis of photogrammetric data collected over Vulcano Island from 1971 to 2001. The data, processed by means of the Digital Photogrammetry Workstation DPW 770, provided DTMs with accuracies ranging from a few centimeters to a few decimeters depending on the geometric image resolution, terrain configuration and quality of the photographs.

  12. A clinical application of the training model.

    Science.gov (United States)

    Gianotti, Patricia

    2010-03-01

    This article offers a perspective and a summary of Jack Danielian's (2010) Horneyan training model, highlighting the benefits of a meta-psychological approach for analysts in training and seasoned practitioners alike. To help illustrate the complexity of Karen Horney's views of character structure and character pathology, this article presents a model that reflects the dynamic tensions at play within individuals with narcissistic issues. It suggests that therapeutic listening can be tracked and that thematic material unfolds in a somewhat predictable, sequential, yet altogether systemic manner. Listening is not just art or intuition, nor is it merely interpretation of content based on a theoretical framework. It represents a way of holding the dialectic tension between conscious and unconscious, syntonic and dystonic. If we can better track these dynamic tensions, we can better anticipate and hopefully avoid clinical ruptures through the acting out of negative transference.

  13. Development and application of earth system models

    Science.gov (United States)

    Prinn, Ronald G.

    2013-02-01

    The global environment is a complex and dynamic system. Earth system modeling is needed to help understand changes in interacting subsystems, elucidate the influence of human activities, and explore possible future changes. Integrated assessment of environment and human development is arguably the most difficult and most important "systems" problem faced. To illustrate this approach, we present results from the integrated global system model (IGSM), which consists of coupled submodels addressing economic development, atmospheric chemistry, climate dynamics, and ecosystem processes. An uncertainty analysis implies that without mitigation policies, the global average surface temperature may rise between 3.5 °C and 7.4 °C from 1981-2000 to 2091-2100 (90% confidence limits). Polar temperatures, absent policy, are projected to rise from about 6.4 °C to 14 °C (90% confidence limits). Similar analysis of four increasingly stringent climate mitigation policy cases involving stabilization of greenhouse gases at various levels indicates that the greatest effect of these policies is to lower the probability of extreme changes. The IGSM is also used to elucidate potential unintended environmental consequences of renewable energy at large scales. There are significant reasons for attention to climate adaptation in addition to climate mitigation that earth system models can help inform. These models can also be applied to evaluate whether "climate engineering" is a viable option or a dangerous diversion. We must prepare young people to address this issue: The problem of preserving a habitable planet will engage present and future generations. Scientists must improve communication if research is to inform the public and policy makers better.

  14. Generalized data stacking programming model with applications

    OpenAIRE

    Hala Samir Elhadidy; Rawya Yehia Rizk; Hassen Taher Dorrah

    2016-01-01

    Recent research has shown that, across various sciences, systems follow stack-based stored-change behavior when subjected to events or varying environments “on and above” their normal situations. This paper presents a generalized data stack programming (GDSP) model which is developed to describe system changes under a varying environment. These changes, which are captured in different ways such as sensor readings, are stored in matrices. Extraction algorithm and identif...

  15. Evacuation Dynamics: Empirical Results, Modeling and Applications

    OpenAIRE

    Schadschneider, Andreas; Klingsch, Wolfram; Kluepfel, Hubert; Kretz, Tobias; Rogsch, Christian; Seyfried, Armin

    2008-01-01

    This extensive review was written for the "Encyclopedia of Complexity and System Science" (Springer, 2008) and addresses a broad audience ranging from engineers to applied mathematicians, computer scientists and physicists. It provides an extensive overview of various aspects of pedestrian dynamics, focussing on evacuation processes. First the current status of empirical results is critically reviewed as it forms the basis for the calibration of models needed for quantitative predictions. T...

  16. Nonlinear Inertia Classification Model and Application

    Directory of Open Access Journals (Sweden)

    Mei Wang

    2014-01-01

    The support vector machine (SVM) classification model copes with problems involving a large number of samples, but the kernel parameter and the penalty (punishment) factor strongly influence the quality of the SVM model. Particle swarm optimization (PSO) is an evolutionary search algorithm based on swarm intelligence that is suitable for parameter optimization. Accordingly, a nonlinear inertia convergence classification model (NICCM) is proposed in this paper after a nonlinear inertia convergence PSO (NICPSO) is developed. The velocity of NICPSO is first defined as the weighted velocity of the inertia PSO, with the inertia factor chosen to be a nonlinear function. NICPSO is used to optimize the kernel parameter and the penalty factor of the SVM. The NICCM classifier is then trained using the optimal penalty factor and the optimal kernel parameter obtained from the best particle. Finally, NICCM is applied to the classification of the normal state and fault states of online power cable. It is experimentally shown that the number of iterations for the proposed NICPSO to reach the optimal position decreases from 15 to 5 compared with PSO; the training duration is decreased by 0.0052 s and the recognition precision is increased by 4.12% compared with SVM.
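
    The nonlinear-inertia idea can be sketched as a standard PSO whose inertia weight decays along a nonlinear (here quadratic) schedule, minimising a toy objective that stands in for the SVM cross-validation error over the kernel parameter and penalty factor. The schedule, the acceleration coefficients, and the objective are our assumptions, not the paper's exact NICPSO formulation:

```python
import numpy as np

def nic_pso(f, bounds, n_particles=20, iters=50, seed=0):
    """Minimise f over a box with PSO using a nonlinearly decaying
    inertia weight (a sketch of the NICPSO idea, not the exact scheme)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))    # particle positions
    v = np.zeros_like(x)                           # particle velocities
    pbest = x.copy()                               # personal bests
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()                # global best
    for t in range(iters):
        w = 0.9 - 0.5 * (t / iters) ** 2           # nonlinear inertia factor
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# toy stand-in for the SVM error surface, minimised at (1, 1)
best, err = nic_pso(lambda z: ((z - 1.0) ** 2).sum(),
                    bounds=[(-5, 5), (-5, 5)])
```

    In the paper's setting, the two search dimensions would be the SVM penalty factor and kernel parameter, and `f` would evaluate classifier error on held-out data.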

  17. Model-Driven Approach for Body Area Network Application Development

    Science.gov (United States)

    Venčkauskas, Algimantas; Štuikys, Vytautas; Jusas, Nerijus; Burbaitė, Renata

    2016-01-01

    This paper introduces the sensor-networked IoT model as a prototype to support the design of Body Area Network (BAN) applications for healthcare. Using the model, we analyze the synergistic effect of the functional requirements (data collection from the human body and transferring it to the top level) and non-functional requirements (trade-offs between energy-security-environmental factors, treated as Quality-of-Service (QoS)). We use feature models to represent the requirements at the earliest stage for the analysis and describe a model-driven methodology to design the possible BAN applications. Firstly, we specify the requirements as the problem domain (PD) variability model for the BAN applications. Next, we introduce the generative technology (meta-programming as the solution domain (SD)) and the mapping procedure to map the PD feature-based variability model onto the SD feature model. Finally, we create an executable meta-specification that represents the BAN functionality to describe the variability of the problem domain through transformations. The meta-specification (along with the meta-language processor) is a software generator for multiple BAN-oriented applications. We validate the methodology with experiments and a case study to generate a family of programs for the BAN sensor controllers. This makes it possible to obtain an adequate measure of QoS efficiently through the interactive adjustment of the meta-parameter values and the re-generation process for the concrete BAN application. PMID:27187394

  18. Model-Driven Approach for Body Area Network Application Development.

    Science.gov (United States)

    Venčkauskas, Algimantas; Štuikys, Vytautas; Jusas, Nerijus; Burbaitė, Renata

    2016-05-12

    This paper introduces the sensor-networked IoT model as a prototype to support the design of Body Area Network (BAN) applications for healthcare. Using the model, we analyze the synergistic effect of the functional requirements (data collection from the human body and transferring it to the top level) and non-functional requirements (trade-offs between energy-security-environmental factors, treated as Quality-of-Service (QoS)). We use feature models to represent the requirements at the earliest stage for the analysis and describe a model-driven methodology to design the possible BAN applications. Firstly, we specify the requirements as the problem domain (PD) variability model for the BAN applications. Next, we introduce the generative technology (meta-programming as the solution domain (SD)) and the mapping procedure to map the PD feature-based variability model onto the SD feature model. Finally, we create an executable meta-specification that represents the BAN functionality to describe the variability of the problem domain through transformations. The meta-specification (along with the meta-language processor) is a software generator for multiple BAN-oriented applications. We validate the methodology with experiments and a case study to generate a family of programs for the BAN sensor controllers. This makes it possible to obtain an adequate measure of QoS efficiently through the interactive adjustment of the meta-parameter values and the re-generation process for the concrete BAN application.

  19. Model-Driven Approach for Body Area Network Application Development

    Directory of Open Access Journals (Sweden)

    Algimantas Venčkauskas

    2016-05-01

    This paper introduces the sensor-networked IoT model as a prototype to support the design of Body Area Network (BAN) applications for healthcare. Using the model, we analyze the synergistic effect of the functional requirements (data collection from the human body and transferring it to the top level) and non-functional requirements (trade-offs between energy-security-environmental factors, treated as Quality-of-Service (QoS)). We use feature models to represent the requirements at the earliest stage for the analysis and describe a model-driven methodology to design the possible BAN applications. Firstly, we specify the requirements as the problem domain (PD) variability model for the BAN applications. Next, we introduce the generative technology (meta-programming as the solution domain (SD)) and the mapping procedure to map the PD feature-based variability model onto the SD feature model. Finally, we create an executable meta-specification that represents the BAN functionality to describe the variability of the problem domain through transformations. The meta-specification (along with the meta-language processor) is a software generator for multiple BAN-oriented applications. We validate the methodology with experiments and a case study to generate a family of programs for the BAN sensor controllers. This makes it possible to obtain an adequate measure of QoS efficiently through the interactive adjustment of the meta-parameter values and the re-generation process for the concrete BAN application.

  20. Applications of GARCH models to energy commodities

    Science.gov (United States)

    Humphreys, H. Brett

    This thesis uses GARCH methods to examine different aspects of the energy markets. The first part of the thesis examines seasonality in the variance. This study modifies the standard univariate GARCH models to test for seasonal components in both the constant and the persistence in natural gas, heating oil and soybeans. These commodities exhibit seasonal price movements and, therefore, may exhibit seasonal variances. In addition, the heating oil model is tested for a structural change in variance during the Gulf War. The results indicate the presence of an annual seasonal component in the persistence for all commodities. Out-of-sample volatility forecasting for natural gas outperforms standard forecasts. The second part of this thesis uses a multivariate GARCH model to examine volatility spillovers within the crude oil forward curve and between the London and New York crude oil futures markets. Using these results the effect of spillovers on dynamic hedging is examined. In addition, this research examines cointegration within the oil markets using investable returns rather than fixed prices. The results indicate the presence of strong volatility spillovers between both markets, weak spillovers from the front of the forward curve to the rest of the curve, and cointegration between the long term oil price on the two markets. The spillover dynamic hedge models lead to a marginal benefit in terms of variance reduction, but a substantial decrease in the variability of the dynamic hedge; thereby decreasing the transaction costs associated with the hedge. The final portion of the thesis uses portfolio theory to demonstrate how the energy mix consumed in the United States could be chosen given a national goal to reduce the risks to the domestic macroeconomy of unanticipated energy price shocks. An efficient portfolio frontier of U.S. energy consumption is constructed using a covariance matrix estimated with GARCH models. The results indicate that while the electric
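
    The GARCH(1,1) recursion underlying these models can be simulated in a few lines; the seasonal and multivariate extensions studied in the thesis modify the constant and persistence terms, but the core dynamics are the same. Parameter values here are illustrative, not estimates from the thesis:

```python
import numpy as np

def simulate_garch11(n, omega=0.1, alpha=0.1, beta=0.8, seed=0):
    """Simulate returns r_t = sigma_t * z_t with conditional variance
    sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2.
    A seasonal variant would let omega (and/or alpha + beta, the
    persistence) vary with the calendar; here they are constant."""
    rng = np.random.default_rng(seed)
    r = np.empty(n)
    sig2 = omega / (1.0 - alpha - beta)   # start at the unconditional variance
    for t in range(n):
        r[t] = np.sqrt(sig2) * rng.standard_normal()
        sig2 = omega + alpha * r[t] ** 2 + beta * sig2
    return r

returns = simulate_garch11(10_000)   # persistence alpha + beta = 0.9
```

    Because alpha + beta < 1, the process is covariance-stationary with unconditional variance omega / (1 - alpha - beta), while still producing the volatility clustering that motivates GARCH modelling of energy prices.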

  1. Dimensions for hearing-impaired mobile application usability model

    Science.gov (United States)

    Nathan, Shelena Soosay; Hussain, Azham; Hashim, Nor Laily; Omar, Mohd Adan

    2017-10-01

    This paper discusses the dimensions derived for the hearing-impaired mobile application usability model. General usability models consist of general dimensions for evaluating mobile applications; however, the requirements of the hearing-impaired are overlooked and often neglected. This leads to mobile applications developed for the hearing-impaired being left unused. It is also apparent that these usability models do not consider accessibility dimensions according to the requirements of these special users. This complicates the work of usability practitioners, as well as of academicians who carry out usability research, when applications are developed for specific user needs. To overcome this issue, the dimensions chosen for the hearing-impaired were checked to align with the real needs of hearing-impaired mobile applications. Besides literature studies, the requirements for hearing-impaired mobile applications were identified through interviews conducted with hearing-impaired mobile application users, recorded as video outputs and analyzed using Nvivo. Finally, a total of 6 of the 15 dimensions gathered were chosen for the proposed model and presented.

  2. Molecular modeling and multiscaling issues for electronic material applications

    CERN Document Server

    Iwamoto, Nancy; Yuen, Matthew; Fan, Haibo

    Volume 1: Molecular Modeling and Multiscaling Issues for Electronic Material Applications provides a snapshot of the progression of molecular modeling in the electronics industry and how molecular modeling is currently being used to understand material performance to solve relevant issues in this field. This book is intended to introduce the reader to the evolving role of molecular modeling, especially seen through the eyes of the IEEE community involved in material modeling for electronic applications. Part I presents the role that quantum mechanics can play in performance prediction, such as properties dependent upon electronic structure, but also shows examples of how molecular models may be used in performance diagnostics, especially when chemistry is part of the performance issue. Part II gives examples of large-scale atomistic methods in material failure and shows several examples of transitioning between grain boundary simulations (on the atomistic level) and large-scale models including an example ...

  3. Application of product modelling - seen from a work preparation viewpoint

    DEFF Research Database (Denmark)

    Hvam, Lars

    Manufacturing companies spend an increasing amount of the total work resources in the manufacturing planning system with the activities of e.g. specifying products and methods, scheduling, procurement etc. By this the potential for obtaining increased productivity moves from the direct costs. The other element covers general techniques for analysing and modeling knowledge and information, with special focus on object oriented modeling. The third element covers four different examples of product models. The product models are viewed as reference models for modeling knowledge and information used......, over building a model, and to the final programming of an application. It has been stressed to carry out all the phases in the outline of procedure in the empirical work, one of the reasons being to prove that it is possible, with a reasonable consumption of resources, to build an application......

  4. Functional Modelling for Fault Diagnosis and its application for NPP

    DEFF Research Database (Denmark)

    Lind, Morten; Zhang, Xinxin

    2014-01-01

    The paper presents functional modelling and its application for diagnosis in nuclear power plants. Functional modelling is defined, and its relevance for coping with the complexity of diagnosis in large-scale systems like nuclear plants is explained. The diagnosis task is analyzed...... and it is demonstrated that the levels of abstraction in models for diagnosis must reflect plant knowledge about goals and functions, which is represented in functional modelling. Multilevel flow modelling (MFM), which is a method for functional modelling, is introduced briefly and illustrated with a cooling system example...

  5. [Watershed water environment pollution models and their applications: a review].

    Science.gov (United States)

    Zhu, Yao; Liang, Zhi-Wei; Li, Wei; Yang, Yi; Yang, Mu-Yi; Mao, Wei; Xu, Han-Li; Wu, Wei-Xiang

    2013-10-01

    Watershed water environment pollution models are an important tool for studying watershed environmental problems. Through the quantitative description of the complicated pollution processes of the whole watershed system and its parts, a model can identify the main sources and migration pathways of pollutants, estimate the pollutant loadings, and evaluate their impacts on the water environment, providing a basis for watershed planning and management. This paper reviewed the watershed water environment models widely applied at home and abroad, with a focus on models of pollutant loading (GWLF and PLOAD), water quality of receiving water bodies (QUAL2E and WASP), and watershed models integrating pollutant loadings and water quality (HSPF, SWAT, AGNPS, AnnAGNPS, and SWMM), and introduced the structures, principles, and main characteristics, as well as the limitations in practical applications, of these models. Other water quality models (CE-QUAL-W2, EFDC, and AQUATOX) and watershed models (GLEAMS and MIKE SHE) were also briefly introduced. Through case analysis of the applications of single models and integrated models, the development trend and application prospects of watershed water environment pollution models were discussed.

  6. Management Model Applicable to Metallic Materials Industry

    Directory of Open Access Journals (Sweden)

    Adrian Ioana

    2013-02-01

    This paper presents an algorithmic analysis of the marketing mix in metallurgy. It also analyzes the main correlations and the possibilities of optimizing them through efficient management. Thus, both the effect and the importance of the marketing mix components (the four "P"s) are analyzed in the materials industry, as well as their correlations, with the goal of optimizing the specific management. The main correlations between the four marketing mix components for a product within the materials industry are briefly presented, including aspects regarding specific management. Keywords: Management Model, Materials Industry, Marketing Mix, Correlations.

  7. Optimal control application to an Ebola model

    Directory of Open Access Journals (Sweden)

    Ebenezer Bonyah

    2016-04-01

    Ebola virus causes a severe, frequently fatal illness, with a case fatality rate of up to 90%. The outbreak of the disease has been acknowledged by the World Health Organization as a Public Health Emergency of International Concern. The threat of Ebola in West Africa is still a major setback to socioeconomic development. Optimal control theory is applied to a system of ordinary differential equations modeling Ebola infection through three different routes, including contact between humans and a dead body. In an attempt to reduce infection in the susceptible population, a preventive control is put in the form of education and campaigning, and two treatment controls are applied to the infected and late-stage infected (super) human populations. Pontryagin's maximum principle is employed to characterize the optimal controls, which are then solved for numerically. It is observed that a time-optimal control exists in the model. The activation of each control showed a positive reduction of infection. Activating all the controls simultaneously reduced the effort required to bring the infection down quickly. The obtained results present a good framework for planning and designing cost-effective strategies for interventions in dealing with Ebola disease. It is established that in order to reduce the Ebola threat, all three controls must be applied concurrently.
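
    The effect of a preventive control on epidemic size can be illustrated with a minimal SIR sketch in which a constant education/campaign effort u scales down the contact rate. This is a toy stand-in for the paper's three-control Pontryagin solution, with made-up parameter values:

```python
def sir_with_control(beta=0.3, gamma=0.1, u=0.0, days=200, dt=0.1):
    """Forward-Euler SIR with a constant preventive control u in [0, 1]
    that scales down the contact rate beta (a hypothetical stand-in for
    the paper's education/campaign control; parameters are illustrative)."""
    S, I, R = 0.99, 0.01, 0.0          # population fractions
    for _ in range(int(days / dt)):
        new_inf = (1.0 - u) * beta * S * I   # controlled incidence
        dS, dI, dR = -new_inf, new_inf - gamma * I, gamma * I
        S += dt * dS
        I += dt * dI
        R += dt * dR
    return R   # approximate fraction of the population ever infected

no_control = sir_with_control(u=0.0)
with_control = sir_with_control(u=0.5)   # halving contacts shrinks the epidemic
```

    A time-varying u(t), chosen by Pontryagin's maximum principle to balance infection burden against control cost, would replace the constant effort here; even this constant-control sketch shows why activating controls reduces the final epidemic size.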

  8. Simulation and Modeling Application in Agricultural Mechanization

    Directory of Open Access Journals (Sweden)

    R. M. Hudzari

    2012-01-01

    This experiment was conducted to determine the equations relating the digital Hue values of the oil palm fruit surface to the maturity stage of the fruit in the plantation. The FFB images were zoomed and captured using a Nikon digital camera, and the Hue was calculated from the highest-frequency values of the R, G, and B color components obtained from histogram analysis software. A new procedure for monitoring the image pixel values of the oil palm fruit surface color during real-time growth to maturity was developed. The predicted harvesting day was estimated from the developed model relating Hue values to mesocarp oil content. The simulation model is regressed and predicts the day of harvesting, or the number of days before harvest, of FFB. The results on mesocarp oil content can be used for real-time oil content determination with the MPOB color meter. The graph for determining the day of harvesting the FFB is presented in this research. The oil was found to start developing in the mesocarp at 65 days before the fruit reaches the ripe maturity stage of 75% oil to dry mesocarp.
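
    The Hue computation itself is standard: given the modal R, G and B values from the histogram, convert to HSV and read off the hue angle. A minimal sketch (the colour triple is a hypothetical example, not data from the paper):

```python
import colorsys

def dominant_hue(r, g, b):
    """Hue angle (0-360 degrees) of an 8-bit RGB triple, e.g. the modal
    R, G, B values read off an image histogram."""
    h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0

hue = dominant_hue(200, 80, 40)   # a reddish-orange, FFB-like colour
```

    Tracking this hue angle over time against a fitted hue-vs-oil-content curve is what lets a procedure like the paper's predict the number of days before harvest.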

  9. Pseudojellium model with an application to lithium clusters

    Science.gov (United States)

    Serra, Ll.; Bachelet, G. B.; van Giai, Nguyen; Lipparini, E.

    1993-11-01

    Ionic pseudo-Hamiltonians, which replace core electrons in atomic calculations, are used to build a jelliumlike model which describes the electronic ground state and excitation properties of atomic clusters. As an application, we successfully describe the plasmon resonance in lithium clusters, for which recent experimental data have shown the failure of the conventional jellium model.

  10. Application of Multilevel Logistic Model to Identify Correlates of ...

    African Journals Online (AJOL)

    Implementation of multilevel models is becoming a common analytic technique across a wide range of disciplines, including the social and economic sciences. In this paper, an attempt has been made to assess the application of the multilevel logistic model for the purpose of identifying the effect of household characteristics on poverty ...

  11. Teaching Applications of Mathematics: Mathematical Modelling in Science and Technology.

    Science.gov (United States)

    Burghes, D. N.

    1980-01-01

    Discusses how all applications of mathematics have a unifying theme, namely, that of mathematical modeling. Three examples of models in science and technology are briefly described. These are suggested as suitable project work in the classroom, which involve collaborative work between science and mathematics. Advantages and benefits of modeling…

  12. Physics Based Modeling of Helicopter Brownout for Piloted Simulation Applications

    Science.gov (United States)

    2008-12-01

    for over 40 years in many applications including erosion modeling, sand dune formation and nuclear blast waves (Hartenbaum, 1971; Mirels, 1984...Environment." American Helicopter Society 61st Annual Forum, Grapevine, TX, June 2005. Mirels, H. (1984). "Blowing Model for Turbulent Boundary-Layer Dust

  13. Methods for eigenvalue problems with applications in model order reduction

    NARCIS (Netherlands)

    Rommes, J.

    2007-01-01

    Physical structures and processes are modeled by dynamical systems in a wide range of application areas. The increasing demand for complex components and large structures, together with an increasing demand for detail and accuracy, makes the models larger and more complicated. To be able to simulate

  14. Application of the numerical modelling techniques to the simulation ...

    African Journals Online (AJOL)

    The aquifer was modelled by the application of Finite Element Method (F.E.M), with appropriate initial and boundary conditions. The matrix solver technique adopted for the F.E.M. was that of the Conjugate Gradient Method. After the steady state calibration and transient verification, the model was used to predict the effect of ...

  15. Computer-aided modelling template: Concept and application

    DEFF Research Database (Denmark)

    Fedorova, Marina; Sin, Gürkan; Gani, Rafiqul

    2015-01-01

    decomposition technique which identifies generic steps and workflow involved, the computer-aided template concept has been developed. This concept is implemented as a software tool, which provides a user-friendly interface for following the workflow steps and guidance through the steps providing additional......Modelling is an important enabling technology in modern chemical engineering applications. A template-based approach is presented in this work to facilitate the construction and documentation of the models and enable their maintenance for reuse in a wider application range. Based on a model...

  16. Solutions manual to accompany finite mathematics models and applications

    CERN Document Server

    Morris, Carla C

    2015-01-01

    A solutions manual to accompany Finite Mathematics: Models and Applications In order to emphasize the main concepts of each chapter, Finite Mathematics: Models and Applications features plentiful pedagogical elements throughout such as special exercises, end notes, hints, select solutions, biographies of key mathematicians, boxed key principles, a glossary of important terms and topics, and an overview of use of technology. The book encourages the modeling of linear programs and their solutions and uses common computer software programs such as LINDO. In addition to extensive chapters on pr

  17. Generalized Linear Models with Applications in Engineering and the Sciences

    CERN Document Server

    Myers, Raymond H; Vining, G Geoffrey; Robinson, Timothy J

    2012-01-01

    Praise for the First Edition "The obvious enthusiasm of Myers, Montgomery, and Vining and their reliance on their many examples as a major focus of their pedagogy make Generalized Linear Models a joy to read. Every statistician working in any area of applied science should buy it and experience the excitement of these new approaches to familiar activities."-Technometrics Generalized Linear Models: With Applications in Engineering and the Sciences, Second Edition continues to provide a clear introduction to the theoretical foundations and key applications of generalized linear models (GLMs). Ma

  18. Application of Generic Disposal System Models

    Energy Technology Data Exchange (ETDEWEB)

    Mariner, Paul [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hammond, Glenn Edward [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sevougian, S. David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Stein, Emily [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-11-01

    This report describes specific GDSA activities in fiscal year 2015 (FY2015) toward the development of the enhanced disposal system modeling and analysis capability for geologic disposal of nuclear waste. The GDSA framework employs the PFLOTRAN thermal-hydrologic-chemical multi-physics code (Hammond et al., 2011) and the Dakota uncertainty sampling and propagation code (Adams et al., 2013). Each code is designed for massively-parallel processing in a high-performance computing (HPC) environment. Multi-physics representations in PFLOTRAN are used to simulate various coupled processes including heat flow, fluid flow, waste dissolution, radionuclide release, radionuclide decay and ingrowth, precipitation and dissolution of secondary phases, and radionuclide transport through the engineered barriers and natural geologic barriers to a well location in an overlying or underlying aquifer. Dakota is used to generate sets of representative realizations and to analyze parameter sensitivity.
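Among the coupled processes listed above is radionuclide decay and ingrowth. As a hedged illustration (not the PFLOTRAN implementation), the classic two-member Bateman solution gives the ingrowth of a daughter nuclide from a decaying parent; the decay constants below are made-up values:

```python
import math

def bateman_daughter(n1_0, lam1, lam2, t):
    """Daughter inventory N2(t) for the chain 1 -> 2 -> (stable),
    assuming N2(0) = 0: the classic two-member Bateman solution.

    n1_0: initial parent inventory; lam1, lam2: decay constants.
    """
    return n1_0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))

# Daughter grows in from zero, peaks, then decays with the chain.
for t in (0.0, 0.5, 1.0, 2.0, 5.0):
    print(t, bateman_daughter(1.0, lam1=1.0, lam2=2.0, t=t))
```

Production codes solve much longer chains (and couple them to transport), but the same ingrowth structure underlies each parent-daughter pair.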

  19. Plant growth and architectural modelling and its applications

    Science.gov (United States)

    Guo, Yan; Fourcaud, Thierry; Jaeger, Marc; Zhang, Xiaopeng; Li, Baoguo

    2011-01-01

    Over the last decade, a growing number of scientists around the world have invested in research on plant growth and architectural modelling and applications (often abbreviated to plant modelling and applications, PMA). By combining physical and biological processes, spatially explicit models have shown their ability to help in understanding plant–environment interactions. This Special Issue on plant growth modelling presents new information within this topic, which are summarized in this preface. Research results for a variety of plant species growing in the field, in greenhouses and in natural environments are presented. Various models and simulation platforms are developed in this field of research, opening new features to a wider community of researchers and end users. New modelling technologies relating to the structure and function of plant shoots and root systems are explored from the cellular to the whole-plant and plant-community levels. PMID:21638797

  20. Copula bivariate probit models: with an application to medical expenditures

    OpenAIRE

    Winkelmann, Rainer

    2011-01-01

    The bivariate probit model is frequently used for estimating the effect of an endogenous binary regressor (the "treatment") on a binary health outcome variable. This paper discusses simple modifications that maintain the probit assumption for the marginal distributions while introducing non-normal dependence using copulas. In an application of the copula bivariate probit model to the effect of insurance status on the absence of ambulatory health care expenditure, a model based on the Frank ...

  1. Application of alternating current impedance to fuel cell modeling

    Energy Technology Data Exchange (ETDEWEB)

    Springer, T.E.

    1999-05-02

    AC impedance has provided a useful diagnostic tool in the Los Alamos polymer electrolyte fuel cell (PEFC) program. The author reviews the techniques he has used in ac impedance modeling. These techniques include equation implementation, model simplification and verification, least squares fitting, application of two-dimensional Laplace equation solvers handling complex interfacial boundary conditions, and interpretation of impedance features. The separate features of the complete electrode model are explained by analytic examples.
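The abstract does not give its model equations; as a hedged illustration of the kind of equivalent-circuit model used in ac-impedance diagnostics, a simplified Randles circuit (series resistance Rs, charge-transfer resistance Rct in parallel with a double-layer capacitance Cdl; all values below are hypothetical, not from the Los Alamos PEFC work) can be evaluated with complex arithmetic:

```python
import math

def randles_impedance(freq_hz, r_s=0.01, r_ct=0.1, c_dl=0.5):
    """Impedance (ohms) of a simplified Randles equivalent circuit:

        Z(w) = Rs + 1 / (1/Rct + j*w*Cdl),  w = 2*pi*f

    Parameter values are illustrative placeholders.
    """
    w = 2 * math.pi * freq_hz
    z_parallel = 1 / (1 / r_ct + 1j * w * c_dl)
    return r_s + z_parallel

# Sweep: at high frequency Z -> Rs; at low frequency Z -> Rs + Rct.
for f in (1e-2, 1e2, 1e6):
    z = randles_impedance(f)
    print(f"{f:8.0e} Hz  Re={z.real:.4f}  -Im={-z.imag:.4f}")
```

Fitting such a model to a measured spectrum by least squares is the usual way impedance features are interpreted in terms of physical cell parameters.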

  2. Application of Multiple Evaluation Models in Brazil

    Directory of Open Access Journals (Sweden)

    Rafael Victal Saliba

    2008-07-01

    Based on two different samples, this article tests the performance of a number of Value Drivers commonly used for evaluating companies by finance practitioners, through simple cross-section regression models which estimate the parameters associated with each Value Driver, denominated Market Multiples. We are able to diagnose the behavior of several multiples in the period 1994-2004, with an outlook also on the particularities of the economic activities performed by the sample companies (and their impacts on performance) through a subsequent analysis with segregation of the sample companies by sectors. Extrapolating simple multiples evaluation standards from analysts of the main financial institutions in Brazil, we find that adjusting the ratio formulation to allow for an intercept does not provide satisfactory results in terms of pricing error reduction. The results found, in spite of evidencing certain relative and absolute superiority among the multiples, may not be generically representative, given the samples' limitations.
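The comparison the abstract describes, a plain multiple (regression through the origin) versus a ratio formulation with an intercept, can be sketched with a tiny single-regressor OLS; the price/earnings numbers below are made up for illustration:

```python
def fit_multiple(values, drivers, intercept=False):
    """OLS of value on a single value driver (e.g. earnings).

    intercept=False reproduces the plain multiple P = b*E;
    intercept=True fits P = a + b*E.
    """
    n = len(values)
    if not intercept:
        b = sum(v * d for v, d in zip(values, drivers)) / sum(d * d for d in drivers)
        return 0.0, b
    mx = sum(drivers) / n
    my = sum(values) / n
    b = (sum((d - mx) * (v - my) for v, d in zip(values, drivers))
         / sum((d - mx) ** 2 for d in drivers))
    return my - b * mx, b

earnings = [1.0, 2.0, 3.0, 4.0]   # hypothetical per-share earnings
prices = [12.0, 21.0, 33.0, 41.0]  # hypothetical share prices
a0, b0 = fit_multiple(prices, earnings)            # pure multiple
a1, b1 = fit_multiple(prices, earnings, True)      # intercept-adjusted
```

Comparing the two fits' pricing errors on held-out companies is the kind of test the paper runs at much larger scale.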

  3. Dynamic reactor modeling with applications to SPR and ZEDNA

    Energy Technology Data Exchange (ETDEWEB)

    Suo-Anttila, Ahti Jorma [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2011-12-01

    A dynamic reactor model has been developed for pulse-type reactor applications. The model predicts reactor power, axial and radial fuel expansion, prompt and delayed neutron population, and prompt and delayed gamma population. All model predictions are made as a function of time. The model includes the reactivity effect of fuel expansion on a dynamic timescale as a feedback mechanism for reactor power. All inputs to the model are calculated from first principles, either directly by solving systems of equations, or indirectly from Monte Carlo N-Particle Transport Code (MCNP) derived results. The model does not include any empirical parameters that can be adjusted to match experimental data. Comparisons of model predictions to actual Sandia Pulse Reactor SPR-III pulses show very good agreement for a full range of pulse magnitudes. The model is also applied to Z-pinch externally driven neutron assembly (ZEDNA) type reactor designs to model both normal and off-normal ZEDNA operations.
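The report's model is built from first principles with MCNP-derived inputs; as a much-simplified hedged sketch of the underlying dynamics, one-delayed-group point kinetics (generic textbook parameters, not SPR-III data) can be integrated with forward Euler:

```python
def point_kinetics(rho, beta=0.0065, lam=0.08, gen_time=1e-5,
                   n0=1.0, dt=1e-6, steps=1000):
    """One-delayed-group point-kinetics integration (forward Euler).

        dn/dt = (rho - beta)/Lambda * n + lam * c
        dc/dt = beta/Lambda * n - lam * c

    rho: reactivity; beta: delayed fraction; lam: precursor decay
    constant; gen_time: neutron generation time Lambda. All values
    are generic, not taken from the SPR or ZEDNA models.
    """
    n = n0
    c = beta * n0 / (lam * gen_time)  # start at precursor equilibrium
    for _ in range(steps):
        dn = ((rho - beta) / gen_time * n + lam * c) * dt
        dc = (beta / gen_time * n - lam * c) * dt
        n += dn
        c += dc
    return n

# Zero reactivity holds power flat; a positive step makes it rise.
```

The real model adds fuel-expansion reactivity feedback and gamma populations on top of this neutron-population core.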

  4. Top-Down Enterprise Application Integration with Reference Models

    Directory of Open Access Journals (Sweden)

    Willem-Jan van den Heuvel

    2000-11-01

    For Enterprise Resource Planning (ERP) systems such as SAP R/3 or IBM SanFrancisco, the tailoring of reference models for customizing the ERP system to specific organizational contexts is an established approach. In this paper, we present a methodology that uses such reference models as a starting point for a top-down integration of enterprise applications. The re-engineered models of legacy systems are individually linked via cross-mapping specifications to the forward-engineered reference model's specification. The actual linking of reference and legacy models is done with a methodology for connecting (new) business objects with (old) legacy systems.

  5. Sequential Sampling Models in Cognitive Neuroscience: Advantages, Applications, and Extensions.

    Science.gov (United States)

    Forstmann, B U; Ratcliff, R; Wagenmakers, E-J

    2016-01-01

    Sequential sampling models assume that people make speeded decisions by gradually accumulating noisy information until a threshold of evidence is reached. In cognitive science, one such model--the diffusion decision model--is now regularly used to decompose task performance into underlying processes such as the quality of information processing, response caution, and a priori bias. In the cognitive neurosciences, the diffusion decision model has recently been adopted as a quantitative tool to study the neural basis of decision making under time pressure. We present a selective overview of several recent applications and extensions of the diffusion decision model in the cognitive neurosciences.
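The accumulate-to-threshold process the abstract describes can be sketched as a drifted random walk between two decision bounds; parameter values below are illustrative, not fitted to any dataset:

```python
import random

def ddm_trial(drift=0.3, bound=1.0, start=0.0, dt=1e-3, noise=1.0, rng=random):
    """Simulate one diffusion-decision-model trial.

    Evidence x performs a drifted random walk until it crosses
    +bound (e.g. the 'correct' response) or -bound ('error');
    returns (choice, response_time). Illustrative parameters only.
    """
    x, t = start, 0.0
    while -bound < x < bound:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return (1 if x >= bound else 0), t

rng = random.Random(42)
trials = [ddm_trial(rng=rng) for _ in range(200)]
accuracy = sum(choice for choice, _ in trials) / len(trials)
```

Decomposing observed accuracy and response-time distributions into drift (information quality), bound (caution) and start point (bias) is exactly the inverse problem the diffusion decision model is used for.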

  6. Solar radiation practical modeling for renewable energy applications

    CERN Document Server

    Myers, Daryl Ronald

    2013-01-01

    Written by a leading scientist with over 35 years of experience working at the National Renewable Energy Laboratory (NREL), Solar Radiation: Practical Modeling for Renewable Energy Applications brings together the most widely used, easily implemented concepts and models for estimating broadband and spectral solar radiation data. The author addresses various technical and practical questions about the accuracy of solar radiation measurements and modeling. While the focus is on engineering models and results, the book does review the fundamentals of solar radiation modeling and solar radiation m

  7. An investigation of modelling and design for software service applications.

    Science.gov (United States)

    Anjum, Maria; Budgen, David

    2017-01-01

    Software services offer the opportunity to use a component-based approach for the design of applications. However, this needs a deeper understanding of how to develop service-based applications in a systematic manner, and of the set of properties that need to be included in the 'design model'. We have used a realistic application to explore systematically how service-based designs can be created and described. We first identified the key properties of an SOA (service oriented architecture) and then undertook a single-case case study to explore its use in the development of a design for a large-scale application in energy engineering, modelling this with existing notations wherever possible. We evaluated the resulting design model using two walkthroughs with both domain and application experts. We were able to successfully develop a design model around the ten properties identified, and to describe it by adapting existing design notations. A component-based approach to designing such systems does appear to be feasible. However, it needs the assistance of a more integrated set of notations for describing the resulting design model.

  8. Recent advances and applications of probabilistic topic models

    Science.gov (United States)

    Wood, Ian

    2014-12-01

    I present here an overview of recent advances in probabilistic topic modelling and related Bayesian graphical models as well as some of their more atypical applications outside of their home: text analysis. These techniques allow the modelling of high dimensional count vectors with strong correlations. With such data, simply calculating a correlation matrix is infeasible. Probabilistic topic models address this using mixtures of multinomials estimated via Bayesian inference with Dirichlet priors. The use of conjugate priors allows for efficient inference, and these techniques scale well to data sets with many millions of vectors. The first of these techniques to attract significant attention was Latent Dirichlet Allocation (LDA) [1, 2]. Numerous extensions and adaptations of LDA have been proposed: non-parametric models; assorted models incorporating authors, sentiment and other features; models regularised through the use of extra metadata or extra priors on topic structure, and many more [3]. They have become widely used in the text analysis and population genetics communities, with a number of compelling applications. These techniques are not restricted to text analysis, however, and can be applied to other types of data which can be sensibly discretised and represented as counts of labels/properties/etc. LDA and it's variants have been used to find patterns in data from diverse areas of inquiry, including genetics, plant physiology, image analysis, social network analysis, remote sensing and astrophysics. Nonetheless, it is relatively recently that probabilistic topic models have found applications outside of text analysis, and to date few such applications have been considered. I suggest that there is substantial untapped potential for topic models and models inspired by or incorporating topic models to be fruitfully applied, and outline the characteristics of systems and data for which this may be the case.
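The Dirichlet-multinomial machinery behind LDA can be made concrete with a tiny collapsed Gibbs sampler; this is a minimal pure-Python sketch on a toy corpus, not a production implementation:

```python
import random

def lda_gibbs(docs, n_topics, vocab_size, alpha=0.1, beta=0.01,
              iters=200, seed=0):
    """Tiny collapsed Gibbs sampler for LDA (illustrative only).

    docs: list of documents, each a list of integer word ids.
    Returns topic-word counts; normalizing each row (plus beta
    smoothing) gives the estimated topic-word distributions.
    """
    rng = random.Random(seed)
    z = [[rng.randrange(n_topics) for _ in doc] for doc in docs]
    n_dk = [[0] * n_topics for _ in docs]              # doc-topic counts
    n_kw = [[0] * vocab_size for _ in range(n_topics)]  # topic-word counts
    n_k = [0] * n_topics
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            n_dk[d][k] += 1; n_kw[k][w] += 1; n_k[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                n_dk[d][k] -= 1; n_kw[k][w] -= 1; n_k[k] -= 1
                # Collapsed conditional: P(z=t) prop. to
                # (n_dk + alpha) * (n_kw + beta) / (n_k + V*beta)
                weights = [(n_dk[d][t] + alpha) * (n_kw[t][w] + beta)
                           / (n_k[t] + vocab_size * beta)
                           for t in range(n_topics)]
                k = rng.choices(range(n_topics), weights)[0]
                z[d][i] = k
                n_dk[d][k] += 1; n_kw[k][w] += 1; n_k[k] += 1
    return n_kw

# Toy corpus with two clearly separated "topics": words 0-2 vs 3-5.
docs = [[0, 1, 2, 0, 1, 2]] * 4 + [[3, 4, 5, 3, 4, 5]] * 4
counts = lda_gibbs(docs, n_topics=2, vocab_size=6)
```

The same count-vector formulation is what lets these models transfer from text to genetics, image analysis and the other domains listed above.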

  9. Nuclear model developments in FLUKA for present and future applications

    Directory of Open Access Journals (Sweden)

    Cerutti Francesco

    2017-01-01

    The FLUKA code [1–3] is used in research laboratories all around the world for challenging applications spanning a very wide range of energies, projectiles and targets. FLUKA is also extensively used in hadrontherapy research studies and clinical planning systems. In this paper some of the recent developments in the FLUKA nuclear physics models of relevance for very different application fields, including medical physics, are presented. A few examples are shown demonstrating the effectiveness of the upgraded code.

  10. Practical applications of probabilistic model checking to communication protocols

    OpenAIRE

    Duflot, M.; Kwiatkowska, M.; Norman, G; Parker, D.; Peyronnet, S.; Picaronny, C.; Sproston, J.

    2012-01-01

    Probabilistic model checking is a formal verification technique for the analysis of systems that exhibit stochastic behaviour. It has been successfully employed in an extremely wide array of application domains including, for example, communication and multimedia protocols, security and power management. In this chapter we focus on the applicability of these techniques to the analysis of communication protocols. An analysis of the performance of such systems must successfully incorporate seve...
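The core computation in such analyses, the probability of eventually reaching a target state in a discrete-time Markov chain, can be sketched with a simple fixed-point iteration; the lossy-channel chain below is a made-up toy, not one of the chapter's case studies:

```python
def reach_probability(trans, target, iters=1000):
    """Probability of eventually reaching `target` from each state
    of a discrete-time Markov chain, by fixed-point iteration on
    p(s) = sum over successors t of P(s,t) * p(t), with p(target)=1.

    trans[s] is a dict {successor: probability}.
    """
    states = list(trans)
    p = {s: 1.0 if s == target else 0.0 for s in states}
    for _ in range(iters):
        for s in states:
            if s != target:
                p[s] = sum(pr * p[t] for t, pr in trans[s].items())
    return p

# Toy lossy channel: each send is delivered w.p. 0.9, lost w.p. 0.1,
# and lost messages are retried.
chain = {
    "send":      {"delivered": 0.9, "lost": 0.1},
    "lost":      {"send": 1.0},
    "delivered": {"delivered": 1.0},
}
p = reach_probability(chain, "delivered")
# With unbounded retries, delivery is almost sure: p["send"] -> 1.0.
```

Tools such as PRISM solve exactly this kind of system (plus rewards, nondeterminism and continuous time) for protocol models with millions of states.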

  11. Nuclear model developments in FLUKA for present and future applications

    Science.gov (United States)

    Cerutti, Francesco; Empl, Anton; Fedynitch, Anatoli; Ferrari, Alfredo; Ruben, GarciaAlia; Sala, Paola R.; Smirnov, George; Vlachoudis, Vasilis

    2017-09-01

    The FLUKA code [1-3] is used in research laboratories all around the world for challenging applications spanning a very wide range of energies, projectiles and targets. FLUKA is also extensively used in hadrontherapy research studies and clinical planning systems. In this paper some of the recent developments in the FLUKA nuclear physics models of relevance for very different application fields, including medical physics, are presented. A few examples are shown demonstrating the effectiveness of the upgraded code.

  12. Algebraic Modeling of Topological and Computational Structures and Applications

    CERN Document Server

    Theodorou, Doros; Stefaneas, Petros; Kauffman, Louis

    2017-01-01

    This interdisciplinary book covers a wide range of subjects, from pure mathematics (knots, braids, homotopy theory, number theory) to more applied mathematics (cryptography, algebraic specification of algorithms, dynamical systems) and concrete applications (modeling of polymers and ionic liquids, video, music and medical imaging). The main mathematical focus throughout the book is on algebraic modeling with particular emphasis on braid groups. The research methods include algebraic modeling using topological structures, such as knots, 3-manifolds, classical homotopy groups, and braid groups. The applications address the simulation of polymer chains and ionic liquids, as well as the modeling of natural phenomena via topological surgery. The treatment of computational structures, including finite fields and cryptography, focuses on the development of novel techniques. These techniques can be applied to the design of algebraic specifications for systems modeling and verification. This book is the outcome of a w...

  13. AUTOMOTIVE APPLICATIONS OF EVOLVING TAKAGI-SUGENO-KANG FUZZY MODELS

    Directory of Open Access Journals (Sweden)

    Radu-Emil Precup

    2017-08-01

    This paper presents theoretical and application results concerning the development of evolving Takagi-Sugeno-Kang fuzzy models for two dynamic systems, which will be viewed as controlled processes, in the field of automotive applications. The two dynamic systems modelled are the nonlinear dynamics of the longitudinal slip in Anti-lock Braking Systems (ABS) and of the vehicle speed in vehicles with Continuously Variable Transmission (CVT) systems. The evolving Takagi-Sugeno-Kang fuzzy models are obtained as discrete-time fuzzy models by incremental online identification algorithms. The fuzzy models are validated against experimental results in the case of the ABS and against first-principles simulation results in the case of the vehicle with the CVT.
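The inference step of a Takagi-Sugeno-Kang model, a firing-strength-weighted blend of local linear models, can be sketched for a single input; the two rules below are hypothetical, not the identified ABS/CVT models:

```python
import math

def tsk_predict(x, rules):
    """First-order Takagi-Sugeno-Kang inference for one input.

    Each rule is (center, sigma, a, b): a Gaussian membership
    exp(-(x-center)^2 / (2*sigma^2)) gating the local model
    y = a*x + b. The output is the weighted average of the local
    model outputs. Rule parameters here are made up.
    """
    num = den = 0.0
    for center, sigma, a, b in rules:
        w = math.exp(-((x - center) ** 2) / (2 * sigma ** 2))
        num += w * (a * x + b)
        den += w
    return num / den

rules = [(0.0, 1.0, 1.0, 0.0),    # near x=0: y ~ x
         (5.0, 1.0, -1.0, 10.0)]  # near x=5: y ~ 10 - x
```

Evolving variants add rules and re-estimate the consequent parameters (a, b) online as new input-output samples arrive, which is what the incremental identification algorithms in the paper do.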

  14. Systems modeling and simulation applications for critical care medicine

    Science.gov (United States)

    2012-01-01

    Critical care delivery is a complex, expensive, error-prone medical specialty and remains the focal point of major improvement efforts in healthcare delivery. Various modeling and simulation techniques offer unique opportunities to better understand the interactions between clinical physiology and care delivery. The novel insights gained from the systems perspective can then be used to develop and test new treatment strategies and make critical care delivery more efficient and effective. However, modeling and simulation applications in critical care remain underutilized. This article provides an overview of major computer-based simulation techniques as applied to critical care medicine. We provide three application examples of different simulation techniques, including a) a pathophysiological model of acute lung injury, b) process modeling of critical care delivery, and c) an agent-based model to study the interaction between pathophysiology and healthcare delivery. Finally, we identify certain challenges to, and opportunities for, future research in the area. PMID:22703718

  15. DATA MODEL CUSTOMIZATION FOR YII BASED ERP APPLICATION

    Directory of Open Access Journals (Sweden)

    Andre Leander

    2014-01-01

    As UD. Logam Utama's business grows, it triggers the need for fast and accurate information in order to improve the company's performance, efficiency, control and value. The company needs a system that can integrate each functional area. An ERP system has a centralized database and can be configured according to the company's business processes. The first phase of application development is the analysis and design of the company's business processes. The design phase produces a number of models that are then used to create the application. The final result is an ERP application that can be configured to match the company's business processes, consisting of a warehouse/production module, a purchasing module, a sales module, and an accounting module.

  16. Hydraulic modeling development and application in water resources engineering

    Science.gov (United States)

    Simoes, Francisco J.; Yang, Chih Ted; Wang, Lawrence K.

    2015-01-01

    The use of modeling has become widespread in water resources engineering and science to study rivers, lakes, estuaries, and coastal regions. For example, computer models are commonly used to forecast anthropogenic effects on the environment, and to help provide advanced mitigation measures against catastrophic events such as natural and dam-break floods. Linking hydraulic models to vegetation and habitat models has expanded their use in multidisciplinary applications to the riparian corridor. Implementation of these models in software packages on personal desktop computers has made them accessible to the general engineering community, and their use has been popularized by the need of minimal training due to intuitive graphical user interface front ends. Models are, however, complex and nontrivial, to the extent that even common terminology is sometimes ambiguous and often applied incorrectly. In fact, many efforts are currently under way in order to standardize terminology and offer guidelines for good practice, but none has yet reached unanimous acceptance. This chapter provides a view of the elements involved in modeling surface flows for the application in environmental water resources engineering. It presents the concepts and steps necessary for rational model development and use by starting with the exploration of the ideas involved in defining a model. Tangible form of those ideas is provided by the development of a mathematical and corresponding numerical hydraulic model, which is given with a substantial amount of detail. The issues of model deployment in a practical and productive work environment are also addressed. The chapter ends by presenting a few model applications highlighting the need for good quality control in model validation.

  17. Uncertainty, ensembles and air quality dispersion modeling: applications and challenges

    Science.gov (United States)

    Dabberdt, Walter F.; Miller, Erik

    The past two decades have seen significant advances in mesoscale meteorological modeling research and applications, such as the development of sophisticated and now widely used advanced mesoscale prognostic models, large eddy simulation models, four-dimensional data assimilation, adjoint models, adaptive and targeted observational strategies, and ensemble and probabilistic forecasts. Some of these advances are now being applied to urban air quality modeling and applications. Looking forward, it is anticipated that the high-priority air quality issues for the near-to-intermediate future will likely include: (1) routine operational forecasting of adverse air quality episodes; (2) real-time high-level support to emergency response activities; and (3) quantification of model uncertainty. Special attention is focused here on the quantification of model uncertainty through the use of ensemble simulations. Application to emergency-response dispersion modeling is illustrated using an actual event that involved the accidental release of the toxic chemical oleum. Both surface footprints of mass concentration and the associated probability distributions at individual receptors are seen to provide valuable quantitative indicators of the range of expected concentrations and their associated uncertainty.
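The ensemble approach to dispersion uncertainty described above can be sketched by perturbing a poorly known input (here wind speed) in a standard Gaussian plume formula and reading off a concentration distribution at a receptor; the sigma values and release parameters are hypothetical, not from the oleum case:

```python
import math
import random

def plume_concentration(q, u, sigma_y, sigma_z, y=0.0, h=0.0):
    """Ground-level Gaussian plume concentration with ground reflection.

    q: emission rate, u: wind speed, h: effective release height,
    y: crosswind offset. sigma_y/sigma_z are taken as given here;
    in practice they depend on downwind distance and stability class.
    """
    return (q / (math.pi * u * sigma_y * sigma_z)
            * math.exp(-y ** 2 / (2 * sigma_y ** 2))
            * math.exp(-h ** 2 / (2 * sigma_z ** 2)))

rng = random.Random(1)
# Ensemble: perturb the uncertain wind speed around 3 m/s (clipped
# away from zero) and collect the receptor concentrations.
members = [plume_concentration(q=1.0, u=max(0.5, rng.gauss(3.0, 1.0)),
                               sigma_y=20.0, sigma_z=10.0)
           for _ in range(500)]
members.sort()
p05, p95 = members[25], members[475]  # rough 5th/95th percentiles
```

The percentile band, rather than a single deterministic footprint, is the kind of quantitative uncertainty indicator the article argues for in emergency-response applications.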

  18. Mathematical modeling and computational intelligence in engineering applications

    CERN Document Server

    Silva Neto, Antônio José da; Silva, Geraldo Nunes

    2016-01-01

    This book brings together a rich selection of studies in mathematical modeling and computational intelligence, with application in several fields of engineering, like automation, biomedical, chemical, civil, electrical, electronic, geophysical and mechanical engineering, on a multidisciplinary approach. Authors from five countries and 16 different research centers contribute with their expertise in both the fundamentals and real problems applications based upon their strong background on modeling and computational intelligence. The reader will find a wide variety of applications, mathematical and computational tools and original results, all presented with rigorous mathematical procedures. This work is intended for use in graduate courses of engineering, applied mathematics and applied computation where tools as mathematical and computational modeling, numerical methods and computational intelligence are applied to the solution of real problems.

  19. A complex autoregressive model and application to monthly temperature forecasts

    Directory of Open Access Journals (Sweden)

    X. Gu

    2005-11-01

    A complex autoregressive model was established based on the mathematical derivation of least squares for the complex number domain, referred to as complex least squares. The model differs from the conventional approach in which the real and imaginary parts are calculated separately. An application of this new model shows better forecasts than those from other conventional statistical models in predicting monthly temperature anomalies in July at 160 meteorological stations in mainland China. The conventional statistical models include an autoregressive model in which the real and imaginary parts are treated separately, an autoregressive model in the real number domain, and a persistence-forecast model.
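The idea of least squares in the complex domain can be sketched for an AR(1) model: the complex normal equation estimates the real and imaginary parts of the coefficient jointly rather than in two separate regressions. The synthetic series below is illustrative, not the station data used in the paper:

```python
import random

def fit_complex_ar1(z):
    """Least-squares AR(1) fit in the complex domain: z_t ~ phi * z_{t-1}.

    The complex normal equation gives
        phi = sum(z_t * conj(z_{t-1})) / sum(|z_{t-1}|^2),
    estimating the real and imaginary parts of phi jointly.
    """
    num = sum(z[t] * z[t - 1].conjugate() for t in range(1, len(z)))
    den = sum(abs(z[t - 1]) ** 2 for t in range(1, len(z)))
    return num / den

# Synthetic complex AR(1) series with a known coefficient.
rng = random.Random(7)
true_phi = 0.6 + 0.3j   # |phi| < 1, so the process is stable
z = [1.0 + 0.0j]
for _ in range(4000):
    noise = complex(rng.gauss(0, 0.1), rng.gauss(0, 0.1))
    z.append(true_phi * z[-1] + noise)
phi_hat = fit_complex_ar1(z)
```

Splitting the series into two real AR models discards the cross-coupling that a single complex coefficient captures, which is the paper's motivation for the joint estimator.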

  20. Applications of Nonlinear Dynamics Model and Design of Complex Systems

    CERN Document Server

    In, Visarath; Palacios, Antonio

    2009-01-01

    This edited book is aimed at interdisciplinary, device-oriented applications of nonlinear science theory and methods in complex systems, in particular applications directed at nonlinear phenomena with space and time characteristics. Examples include: complex networks of magnetic sensor systems, coupled nano-mechanical oscillators, nano-detectors, microscale devices, stochastic resonance in multi-dimensional chaotic systems, biosensors, and stochastic signal quantization. "Applications of Nonlinear Dynamics: Model and Design of Complex Systems" brings together the work of scientists and engineers that are applying ideas and methods from nonlinear dynamics to design and fabricate complex systems.

  1. Neural network models: Insights and prescriptions from practical applications

    Energy Technology Data Exchange (ETDEWEB)

    Samad, T. [Honeywell Technology Center, Minneapolis, MN (United States)

    1995-12-31

    Neural networks are no longer just a research topic; numerous applications are now testament to their practical utility. In the course of developing these applications, researchers and practitioners have been faced with a variety of issues. This paper briefly discusses several of these, noting in particular the rich connections between neural networks and other, more conventional technologies. A more comprehensive version of this paper is under preparation that will include illustrations on real examples. Neural networks are being applied in several different ways. Our focus here is on neural networks as modeling technology. However, much of the discussion is also relevant to other types of applications such as classification, control, and optimization.

  2. TASS Model Application for Testing the TDWAP Model

    Science.gov (United States)

    Switzer, George F.

    2009-01-01

    One of the operational modes of the Terminal Area Simulation System (TASS) model simulates the three-dimensional interaction of wake vortices within turbulent domains in the presence of thermal stratification. The model allows the investigation of the effects of turbulence and stratification on vortex transport and decay. The model simulations for this work all assumed fully periodic boundary conditions to remove the effects of any surface interaction. During the Base Period of this contract, NWRA completed generation of these datasets but presented analysis only for the neutral-stratification runs of that set (Task 3.4.1). Phase 1 work began with the analysis of the remaining stratification datasets, and in that analysis we discovered discrepancies in the vortex time-to-link predictions. This finding necessitated investigating the source of the anomaly, and we found a problem with the background turbulence. Using the most up-to-date version of TASS, with some important defect fixes, we regenerated a larger turbulence domain, verified the vortex time to link on a few cases, and then regenerated the entire 25-case set (Task 3.4.2). The effort of Phase 2 (Task 3.4.3) concentrated on the analysis of several scenarios investigating the effects of closely spaced aircraft. The objective was to quantify the minimum aircraft separations necessary to avoid vortex interactions between neighboring aircraft. The results consist of spreadsheets of wake data and presentation figures prepared for NASA technical exchanges. For these formation cases, NASA carried out the actual TASS simulations and NWRA performed the analysis of the results by making animations, line plots, and other presentation figures. This report contains a description of the work performed during this final phase of the contract, the analysis procedures adopted, and sample plots of the results from the analysis performed.

  3. Instruction-level performance modeling and characterization of multimedia applications

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Y. [Los Alamos National Lab., NM (United States). Scientific Computing Group; Cameron, K.W. [Louisiana State Univ., Baton Rouge, LA (United States). Dept. of Computer Science

    1999-06-01

    One of the challenges in characterizing and modeling realistic multimedia applications is the lack of access to source code. On-chip performance counters effectively resolve this problem by monitoring run-time behavior at the instruction level. This paper presents a novel technique for characterizing and modeling workloads at the instruction level for realistic multimedia applications using hardware performance counters. A variety of instruction counts are collected from multimedia applications such as RealPlayer, GSM Vocoder, MPEG encoder/decoder, and a speech synthesizer. These instruction counts can be used to form a set of abstract characteristic parameters directly related to a processor's architectural features. Based on microprocessor architectural constraints and these calculated abstract parameters, the architectural performance bottleneck for a specific application can be estimated. Meanwhile, the bottleneck estimation can provide suggestions about viable architectural/functional improvements for certain workloads. The biggest advantage of this new characterization technique is a better understanding of processor utilization efficiency and of the architectural bottleneck for each application. The technique also provides predictive insight into future architectural enhancements and their effect on current codes. The authors also model the architectural effect on processor utilization without memory influence: they derive formulas for calculating CPI_0 (CPI without memory effect) and quantify the utilization of architectural parameters. These equations are architecturally diagnostic and predictive in nature. The results show promise in code characterization and in empirical/analytical modeling.
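The memory-free CPI_0 idea can be illustrated with a simplified stall-decomposition model in the spirit of the paper (not its actual formulas); all counter values below are made up:

```python
def cpi_decomposition(cycles, instructions, mem_accesses, miss_rate,
                      miss_penalty_cycles):
    """Split measured CPI into a memory-free CPI_0 plus memory stalls.

        CPI_total = cycles / instructions
        CPI_mem   = (mem_accesses / instructions) * miss_rate * penalty
        CPI_0     = CPI_total - CPI_mem

    A simplified textbook-style model; the counts are hypothetical,
    not counter readings from the applications studied in the paper.
    """
    cpi_total = cycles / instructions
    cpi_mem = (mem_accesses / instructions) * miss_rate * miss_penalty_cycles
    return cpi_total - cpi_mem, cpi_total

cpi0, cpi_total = cpi_decomposition(
    cycles=2_500_000, instructions=1_000_000,
    mem_accesses=400_000, miss_rate=0.05, miss_penalty_cycles=50)
# cpi_total = 2.5; memory stalls contribute 0.4 * 0.05 * 50 = 1.0,
# so the memory-free cpi0 = 1.5
```

Comparing CPI_0 against the machine's issue-width limit is what lets such a model point at the architectural (rather than memory-system) bottleneck for a workload.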

  4. Animal models of enterovirus 71 infection: applications and limitations.

    Science.gov (United States)

    Wang, Ya-Fang; Yu, Chun-Keung

    2014-04-17

    Human enterovirus 71 (EV71) has emerged as a neuroinvasive virus that is responsible for several outbreaks in the Asia-Pacific region over the past 15 years. Appropriate animal models are needed to understand EV71 neuropathogenesis better and to facilitate the development of effective vaccines and drugs. Non-human primate models have been used to characterize and evaluate the neurovirulence of EV71 after the early outbreaks in late 1990s. However, these models were not suitable for assessing the neurovirulence level of the virus and were associated with ethical and economic difficulties in terms of broad application. Several strategies have been applied to develop mouse models of EV71 infection, including strategies that employ virus adaption and immunodeficient hosts. Although these mouse models do not closely mimic human disease, they have been applied to determine the pathogenesis of and treatment and prevention of the disease. EV71 receptor-transgenic mouse models have recently been developed and have significantly advanced our understanding of the biological features of the virus and the host-parasite interactions. Overall, each of these models has advantages and disadvantages, and these models are differentially suited for studies of EV71 pathogenesis and/or the pre-clinical testing of antiviral drugs and vaccines. In this paper, we review the characteristics, applications and limitation of these EV71 animal models, including non-human primate and mouse models.

  5. Copula bivariate probit models: with an application to medical expenditures.

    Science.gov (United States)

    Winkelmann, Rainer

    2012-12-01

    The bivariate probit model is frequently used for estimating the effect of an endogenous binary regressor (the 'treatment') on a binary health outcome variable. This paper discusses simple modifications that maintain the probit assumption for the marginal distributions while introducing non-normal dependence using copulas. In an application of the copula bivariate probit model to the effect of insurance status on the absence of ambulatory health care expenditure, a model based on the Frank copula outperforms the standard bivariate probit model. Copyright © 2011 John Wiley & Sons, Ltd.
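
    For readers unfamiliar with the Frank copula favoured above, its bivariate CDF has a simple closed form; a small sketch (the standard textbook formula, not code from the paper):

```python
import math

def frank_copula(u, v, theta):
    """Frank copula CDF C(u, v; theta), defined for theta != 0."""
    num = (math.exp(-theta * u) - 1.0) * (math.exp(-theta * v) - 1.0)
    den = math.exp(-theta) - 1.0
    return -math.log(1.0 + num / den) / theta

# Uniform margins are preserved: C(u, 1) = u for any theta
print(round(frank_copula(0.3, 1.0, 5.0), 6))   # 0.3
# Positive theta induces positive dependence: C(0.5, 0.5) > 0.25 = uv
print(frank_copula(0.5, 0.5, 5.0) > 0.25)      # True
```

    Because the margins stay uniform, the probit assumption for each marginal is untouched while theta alone governs the (non-normal) dependence.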

  6. DYN3D thermal expansion models for SFR applications

    Energy Technology Data Exchange (ETDEWEB)

    Nikitin, Evgeny; Fridman, Emil [Helmholtz-Zentrum Dresden-Rossendorf e.V., Dresden (Germany). Div. Reactor Safety

    2016-07-01

    The nodal diffusion code DYN3D is being extended for SFR applications. As part of the extension, a new model for the axial thermal expansion of fuel rods was developed. The new model provides a flexible way of handling axial fuel rod expansion, because each sub-assembly and node can be treated independently. The performance of the model was tested on a large oxide SFR core, and the results were compared to a reference full-core Serpent solution. The test results indicate that the proposed model can accurately account for axial expansion effects at the full-core level.

  7. Handbook of Real-World Applications in Modeling and Simulation

    CERN Document Server

    Sokolowski, John A

    2012-01-01

    This handbook provides a thorough explanation of modeling and simulation in the most useful, current, and predominant applied areas, such as transportation, homeland security, medicine, operational research, military science, and business modeling. The authors offer a concise look at the key concepts and techniques of modeling and simulation and then discuss how and why the presented domains have become leading applications. The book begins with an introduction of why modeling and simulation is a reliable analysis assessment tool for complex systems...

  8. Complexity, accuracy and practical applicability of different biogeochemical model versions

    Science.gov (United States)

    Los, F. J.; Blaas, M.

    2010-04-01

    The construction of validated biogeochemical model applications as prognostic tools for the marine environment involves a large number of choices, particularly with respect to the level of detail of the physical, chemical and biological aspects. Generally speaking, greater complexity might improve veracity, accuracy and credibility. However, very complex models are not necessarily effective or efficient forecast tools. In this paper, models of varying degrees of complexity are evaluated with respect to their forecast skills. In total, 11 biogeochemical model variants have been considered, based on four different horizontal grids. The applications vary in spatial resolution, in vertical resolution (2DH versus 3D), in nature of transport, in turbidity and in the number of phytoplankton species. Included models range from 15-year-old applications with relatively simple physics up to current state-of-the-art 3D models. With all applications the same year, 2003, has been simulated. During the model intercomparison it was noticed that the 'OSPAR' Goodness of Fit cost function (Villars and de Vries, 1998) leads to insufficient discrimination between different models. This results in models obtaining similar scores, although closer inspection of the results reveals large differences. In this paper, therefore, we have adopted the target diagram by Jolliff et al. (2008), which provides a concise and more contrasting picture of model skill on the entire model domain and for the entire period of the simulations. Correctness in prediction of the mean and of the variability are separated, which enhances insight into model functioning. Using the target diagrams it is demonstrated that recent models are more consistent and have smaller biases. Graphical inspection of time series confirms this, as the level of variability appears more realistic, also given the multi-annual background statistics of the observations.
Nevertheless, whether the improvements are all genuine for the particular
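
    The bias/unbiased-RMSD separation that underlies the Jolliff et al. (2008) target diagram can be sketched as follows (a generic illustration with made-up numbers, not the paper's data):

```python
import math

def target_stats(model, obs):
    """Bias and unbiased RMSD: the two axes of a target diagram."""
    n = len(model)
    bias = sum(m - o for m, o in zip(model, obs)) / n
    urmsd = math.sqrt(sum(((m - o) - bias) ** 2
                          for m, o in zip(model, obs)) / n)
    return bias, urmsd

model = [1.0, 2.0, 3.0, 4.0]   # made-up model series
obs = [0.5, 2.5, 2.5, 3.0]     # made-up observations
bias, urmsd = target_stats(model, obs)
rmsd = math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / len(model))

print(bias)                                               # 0.375
# The decomposition RMSD^2 = bias^2 + uRMSD^2 holds exactly
print(abs(rmsd ** 2 - (bias ** 2 + urmsd ** 2)) < 1e-12)  # True
```

    Plotting bias against unbiased RMSD separates errors of the mean from errors of the variability, which is exactly the extra discrimination the cost-function comparison above was missing.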

  9. A novel kernel regularized nonhomogeneous grey model and its applications

    Science.gov (United States)

    Ma, Xin; Hu, Yi-sheng; Liu, Zhi-bin

    2017-07-01

    The nonhomogeneous grey model (NGM) is a novel tool for time series forecasting which has attracted considerable research interest. However, the existing nonhomogeneous grey models can be inefficient at predicting complex nonlinear time series because of the linearity of the differential or difference equations on which these models are based. In order to enhance the accuracy and applicability of the NGM model, the kernel method from statistical learning theory has been utilized to build a novel kernel regularized nonhomogeneous grey model, abbreviated as the KRNGM model. The KRNGM model is represented by a differential equation which contains a nonlinear function of t. By constructing the regularized problem and using a kernel function which satisfies Mercer's condition, the parameter estimation of the KRNGM model only involves solving a set of linear equations, and the nonlinear function in the KRNGM model can be expressed as a linear combination of the Lagrangian multipliers and the selected kernel function, so that the KRNGM model can be solved numerically. Two case studies of petroleum production forecasting are carried out to illustrate the effectiveness of the KRNGM model in comparison to the existing nonhomogeneous models. The results show that the KRNGM model significantly outperforms the existing NGM, ONGM and NDGM models.
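
    The key computational point above, that kernel-regularized estimation reduces to a set of linear equations, can be illustrated with a generic kernel ridge regression sketch in the same spirit (this is not the KRNGM itself; the grey differential-equation structure is omitted, and the data are synthetic), using an RBF kernel that satisfies Mercer's condition:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=50.0):
    """Gaussian (RBF) kernel matrix; the RBF kernel satisfies Mercer's condition."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit(X, y, lam=1e-6, gamma=50.0):
    """Estimate the multipliers alpha by solving the linear system (K + lam*I) alpha = y."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(X_train, alpha, X_new, gamma=50.0):
    """Prediction is a linear combination of kernel evaluations and multipliers."""
    return rbf_kernel(X_new, X_train, gamma) @ alpha

t = np.linspace(0.0, 1.0, 20)[:, None]   # "time" grid
y = np.sin(2 * np.pi * t[:, 0])          # synthetic nonlinear series
alpha = fit(t, y)
print(np.max(np.abs(predict(t, alpha, t) - y)) < 0.1)   # True
```

    No nonlinear optimization is needed: a single linear solve recovers the multipliers, which is what makes the kernel approach attractive for extending an otherwise linear grey model.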

  10. Constitutive Modeling of Geomaterials Advances and New Applications

    CERN Document Server

    Zhang, Jian-Min; Zheng, Hong; Yao, Yangping

    2013-01-01

    The Second International Symposium on Constitutive Modeling of Geomaterials: Advances and New Applications (IS-Model 2012), is to be held in Beijing, China, during October 15-16, 2012. The symposium is organized by Tsinghua University, the International Association for Computer Methods and Advances in Geomechanics (IACMAG), the Committee of Numerical and Physical Modeling of Rock Mass, Chinese Society for Rock Mechanics and Engineering, and the Committee of Constitutive Relations and Strength Theory, China Institution of Soil Mechanics and Geotechnical Engineering, China Civil Engineering Society. This Symposium follows the first successful International Workshop on Constitutive Modeling held in Hong Kong, which was organized by Prof. JH Yin in 2007.   Constitutive modeling of geomaterials has been an active research area for a long period of time. Different approaches have been used in the development of various constitutive models. A number of models have been implemented in the numerical analyses of geote...

  11. Mathematical and numerical foundations of turbulence models and applications

    CERN Document Server

    Chacón Rebollo, Tomás

    2014-01-01

    With applications to climate, technology, and industry, the modeling and numerical simulation of turbulent flows are rich with history and modern relevance. The complexity of the problems that arise in the study of turbulence requires tools from various scientific disciplines, including mathematics, physics, engineering, and computer science. Authored by two experts in the area with a long history of collaboration, this monograph provides a current, detailed look at several turbulence models from both the theoretical and numerical perspectives. The k-epsilon, large-eddy simulation, and other models are rigorously derived and their performance is analyzed using benchmark simulations for real-world turbulent flows. Mathematical and Numerical Foundations of Turbulence Models and Applications is an ideal reference for students in applied mathematics and engineering, as well as researchers in mathematical and numerical fluid dynamics. It is also a valuable resource for advanced graduate students in fluid dynamics,...

  12. A Comparison of Three Programming Models for Adaptive Applications

    Science.gov (United States)

    Shan, Hong-Zhang; Singh, Jaswinder Pal; Oliker, Leonid; Biswa, Rupak; Kwak, Dochan (Technical Monitor)

    2000-01-01

    We study the performance and programming effort for two major classes of adaptive applications under three leading parallel programming models. We find that all three models can achieve scalable performance on state-of-the-art multiprocessor machines. The basic parallel algorithms needed for the different programming models to deliver their best performance are similar, but the implementations differ greatly, far beyond the fact of using explicit messages versus implicit loads/stores. Compared with MPI and SHMEM, CC-SAS (cache-coherent shared address space) provides substantial ease of programming at the conceptual and program-orchestration level, which often leads to performance gains. However, it may also suffer from the poor spatial locality of physically distributed shared data on large numbers of processors. Our CC-SAS implementation of the PARMETIS partitioner itself runs faster than in the other two programming models, and generates a more balanced result for our application.

  13. Traffic assignment models in large-scale applications

    DEFF Research Database (Denmark)

    Rasmussen, Thomas Kjær

    focuses on large-scale applications and contributes with methods to actualise the true potential of disaggregate models. To achieve this target, contributions are given to several components of traffic assignment modelling, by (i) enabling the utilisation of the increasingly available data sources...... on individual behaviour in the model specification, (ii) proposing a method to use disaggregate Revealed Preference (RP) data to estimate utility functions and provide evidence on the value of congestion and the value of reliability, (iii) providing a method to account for individual mis...... is essential in the development and validation of realistic models for large-scale applications. Nowadays, modern technology facilitates easy access to RP data and allows large-scale surveys. The resulting datasets are, however, usually very large and hence data processing is necessary to extract the pieces...

  14. Distributionally Robust Return-Risk Optimization Models and Their Applications

    Directory of Open Access Journals (Sweden)

    Li Yang

    2014-01-01

    Full Text Available Based on the risk control of conditional value-at-risk, distributionally robust return-risk optimization models with box constraints on the random vector are proposed. They describe uncertainty in both the distribution form and the moments (mean and covariance matrix) of the random vector. Such models are difficult to solve directly. Using conic duality theory and the minimax theorem, the models are reformulated as semidefinite programming problems, which can be solved by interior point algorithms in polynomial time. An important theoretical basis is therefore provided for applications of the models. Moreover, an application of the models to a practical example of portfolio selection is considered, and the example is evaluated using a historical data set of four stocks. Numerical results show that the proposed methods are robust and that the investment strategy is safe.
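
    The conditional value-at-risk that these models control can be illustrated with a simple historical estimator (a generic sketch with made-up loss data, unrelated to the paper's four-stock example or its semidefinite reformulation):

```python
def historical_cvar(losses, alpha=0.95):
    """Mean loss in the worst (1 - alpha) tail of the empirical distribution."""
    s = sorted(losses)
    k = int(len(s) * alpha)      # index of the VaR order statistic
    tail = s[k:] or [s[-1]]      # guard against an empty tail
    return sum(tail) / len(tail)

# Made-up daily portfolio losses (positive = loss)
losses = [-2.0, -1.0, 0.0, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 10.0]
print(historical_cvar(losses, alpha=0.8))   # 7.5 -- mean of the worst two losses
```

    Unlike VaR, CVaR averages over the entire tail, which is what makes it a coherent risk measure and a natural objective for robust optimization.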

  15. Application of postured human model for SAR measurements

    Science.gov (United States)

    Vuchkovikj, M.; Munteanu, I.; Weiland, T.

    2013-07-01

    In the last two decades, the increasing number of electronic devices used in day-to-day life has led to growing interest in the study of electromagnetic field interaction with biological tissues. The design of medical devices and of wireless communication devices such as mobile phones benefits greatly from bio-electromagnetic simulations in which digital human models are used. The digital human models currently available have an upright position, which limits research activities in realistic scenarios where postured human bodies must be considered. For this reason, a software application called "BodyFlex for CST STUDIO SUITE" was developed. In its current version, this application can deform the voxel-based human model named HUGO (Dipp GmbH, 2010) to generate common postures that people adopt in normal life, ensuring the continuity of tissues and conserving mass to an acceptable level. This paper describes an enhancement of the "BodyFlex" application related to movements of the forearm and the wrist of a digital human model. One electromagnetic application in which forearm and wrist movement of a voxel-based human model is significant is the measurement of the specific absorption rate (SAR) when a model is exposed to the radio frequency electromagnetic field produced by a mobile phone. Current SAR measurements of the exposure from mobile phones are performed with the SAM (Specific Anthropomorphic Mannequin) phantom, which is filled with a dispersive but homogeneous material. We are interested in what happens to the SAR values if a realistic inhomogeneous human model is used. To this aim, two human models, a homogeneous and an inhomogeneous one, are used in two simulation scenarios in order to examine and observe the differences in the resulting SAR values.
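
    The point SAR quantity evaluated in such studies follows the standard definition SAR = sigma * |E|^2 / rho; a one-line sketch (the tissue values below are illustrative, not taken from the paper):

```python
def local_sar(sigma, e_rms, rho):
    """Point SAR [W/kg] = sigma [S/m] * E_rms^2 [(V/m)^2] / rho [kg/m^3]."""
    return sigma * e_rms ** 2 / rho

# Illustrative muscle-like conductivity and density (hypothetical values)
print(local_sar(sigma=0.7, e_rms=10.0, rho=1000.0))   # 0.07
```

    Because sigma and rho vary strongly between tissues, an inhomogeneous model can yield local SAR values quite different from those of a homogeneous phantom even for the same field distribution.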

  16. Applications of spatial statistical network models to stream data

    Science.gov (United States)

    Isaak, Daniel J.; Peterson, Erin E.; Ver Hoef, Jay M.; Wenger, Seth J.; Falke, Jeffrey A.; Torgersen, Christian E.; Sowder, Colin; Steel, E. Ashley; Fortin, Marie-Josée; Jordan, Chris E.; Ruesch, Aaron S.; Som, Nicholas; Monestiez, Pascal

    2014-01-01

    Streams and rivers host a significant portion of Earth's biodiversity and provide important ecosystem services for human populations. Accurate information regarding the status and trends of stream resources is vital for their effective conservation and management. Most statistical techniques applied to data measured on stream networks were developed for terrestrial applications and are not optimized for streams. A new class of spatial statistical model, based on valid covariance structures for stream networks, can be used with many common types of stream data (e.g., water quality attributes, habitat conditions, biological surveys) through application of appropriate distributions (e.g., Gaussian, binomial, Poisson). The spatial statistical network models account for spatial autocorrelation (i.e., nonindependence) among measurements, which allows their application to databases with clustered measurement locations. Large amounts of stream data exist in many areas where spatial statistical analyses could be used to develop novel insights, improve predictions at unsampled sites, and aid in the design of efficient monitoring strategies at relatively low cost. We review the topic of spatial autocorrelation and its effects on statistical inference, demonstrate the use of spatial statistics with stream datasets relevant to common research and management questions, and discuss additional applications and development potential for spatial statistics on stream networks. Free software for implementing the spatial statistical network models has been developed that enables custom applications with many stream databases.

  17. Nonlinear Mathematical Modeling in Pneumatic Servo Position Applications

    Directory of Open Access Journals (Sweden)

    Antonio Carlos Valdiero

    2011-01-01

    Full Text Available This paper addresses a new methodology for servo-pneumatic actuator mathematical modeling and selection, based on a study of dynamic behavior in engineering applications. The pneumatic actuator is very common in industrial applications because it has the following advantages: easy and simple maintenance, relatively low cost, self-cooling properties, good power density (power/size ratio), fast action with high accelerations, and installation flexibility. The proposed fifth-order nonlinear mathematical model represents the main characteristics of this nonlinear dynamic system, such as the servo valve dead zone, the air flow-pressure relationship through the valve orifice, air compressibility, and friction effects between contact surfaces in the actuator seals. Simulation results show the dynamic performance of different pneumatic cylinders in order to identify which features contribute to better system behavior. Knowledge of this behavior allows an appropriate choice of pneumatic actuator, contributing to the success of precise control in several applications.
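
    Of the nonlinearities listed above, the servo valve dead zone is the simplest to state; a minimal sketch of the generic textbook form (the band limits below are hypothetical, not the paper's identified parameters):

```python
def dead_zone(u, left=-0.5, right=0.5):
    """Valve dead-zone nonlinearity: zero output while u stays inside [left, right]."""
    if u > right:
        return u - right
    if u < left:
        return u - left
    return 0.0

print(dead_zone(0.2))    # 0.0  (inside the dead band: no flow)
print(dead_zone(1.2))    # 0.7
print(dead_zone(-1.0))   # -0.5
```

    In a servo loop this flat region means small control corrections produce no valve flow at all, which is why the dead zone must appear explicitly in the model for precise position control.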

  18. Towards Model-Driven Engineering Constraint-Based Scheduling Applications

    OpenAIRE

    de Siqueira Teles, Fabrício

    2008-01-01

    de Siqueira Teles, Fabrício; Pierre Louis Robin, Jacques. Towards Model-Driven Engineering Constraint-Based Scheduling Applications. 2008. Dissertação (Mestrado). Programa de Pós-Graduação em Ciência da Computação, Universidade Federal de Pernambuco, Recife, 2008.

  19. Application of stochastic frontier approach model to assess technical ...

    African Journals Online (AJOL)

    Application of stochastic frontier approach model to assess technical efficiency in Kenya's maize production. ... primary school education would enhance maize productivity. Thus, if hybrid seeds, tractor services and agricultural credit ... efficiency would increase. Key words: Socio-economic factors, farm characteristics, maize ...

  20. Modelling primate control of grasping for robotics applications

    CSIR Research Space (South Africa)

    Kleinhans, A

    2014-09-01

    Full Text Available European Conference on Computer Vision (ECCV) Workshops, Zurich, Switzerland, 7 September 2014. Modelling primate control of grasping for robotics applications. Ashley Kleinhans, Serge Thill, Benjamin Rosman, Renaud Detry & Bryan Tripp. CSIR...

  1. Application of a stochastic modelling framework to characterize the ...

    Indian Academy of Sciences (India)

    Application of a stochastic modelling framework to characterize the influence of ... Oxidation is described with a power law (parabolic) approach to quantify the rate of growth of all the three oxide scales. .... activation energy, R is the universal gas constant and T is the absolute temperature. For the case of parabolic oxidation, ...

  2. Geospatial application of the Water Erosion Prediction Project (WEPP) model

    Science.gov (United States)

    At the hillslope profile and/or field scale, a simple Windows graphical user interface (GUI) is available to easily specify the slope, soil, and management inputs for application of the USDA Water Erosion Prediction Project (WEPP) model. Likewise, basic small watershed configurations of a few hillsl...

  3. Geospatial application of the Water Erosion Prediction Project (WEPP) model

    Science.gov (United States)

    D. C. Flanagan; J. R. Frankenberger; T. A. Cochrane; C. S. Renschler; W. J. Elliot

    2013-01-01

    At the hillslope profile and/or field scale, a simple Windows graphical user interface (GUI) is available to easily specify the slope, soil, and management inputs for application of the USDA Water Erosion Prediction Project (WEPP) model. Likewise, basic small watershed configurations of a few hillslopes and channels can be created and simulated with this GUI. However,...

  4. An application of artificial intelligence for rainfall–runoff modeling

    Indian Academy of Sciences (India)

    This study proposes an application of two techniques of artificial intelligence (AI) for rainfall–runoff modeling: the artificial neural networks (ANN) and the ... Statistical parameters such as average, standard deviation, coefficient of variation, skewness, minimum and maximum values, as well as criteria such as mean square ...

  5. Application of GIS-Based Spatially Distributed Hydrologic Model in ...

    African Journals Online (AJOL)

    Application of GIS-Based Spatially Distributed Hydrologic Model in Integrated Watershed Management:A Case Study of Nzoia Basin, Kenya. ... 1986 and 2000 also revealed increased peaks in resulting hydrographs as a result of increased acreage under crops and reduced forest cover for same storm characteristics.

  6. Risk Measurement and Risk Modelling using Applications of Vine Copulas

    NARCIS (Netherlands)

    D.E. Allen (David); M.J. McAleer (Michael); A.K. Singh (Abhay)

    2014-01-01

    This paper features an application of Regular Vine copulas, a novel and recently developed statistical and mathematical tool which can be applied in the assessment of composite financial risk. Copula-based dependence modelling is a popular tool in financial...

  7. Modelling for Bio-,Agro- and Pharma-Applications

    DEFF Research Database (Denmark)

    Morales Rodriguez, Ricardo; Singh, Ravendra; Cameron, Ian

    2011-01-01

    such as mixers, fermenter as well as air compression and filtration. Milk pasteurisation is another application considered in this chapter. The intention is to look at the temperature profile of milk through the process, which has 4 distinct phases. Other case studies in this chapter include a dynamic model...

  8. WEPP Model applications for evaluations of best management practices

    Science.gov (United States)

    D. C. Flanagan; W. J. Elliott; J. R. Frankenberger; C. Huang

    2010-01-01

    The Water Erosion Prediction Project (WEPP) model is a process-based erosion prediction technology for application to small watersheds and hillslope profiles, under agricultural, forested, rangeland, and other land management conditions. Developed by the United States Department of Agriculture (USDA) over the past 25 years, WEPP simulates many of the physical processes...

  9. Model-driven semantic integration of service-oriented applications

    NARCIS (Netherlands)

    Pokraev, S.

    2009-01-01

    In this thesis, we propose a method for the semantic integration of service oriented applications. The distinctive feature of the method is that semantically-enriched service models are employed at different levels of abstraction (from business requirements to software implementation) to deliver

  10. Application of wildfire simulation models for risk analysis

    Science.gov (United States)

    Alan A. Ager; Mark A. Finney

    2009-01-01

    Wildfire simulation models are being widely used by fire and fuels specialists in the U.S. to support tactical and strategic decisions related to the mitigation of wildfire risk. Much of this application has resulted from the development of a minimum travel time (MTT) fire spread algorithm (M. Finney) that makes it computationally feasible to simulate thousands of...

  11. A Case Study Application Of Time Study Model In Paint ...

    African Journals Online (AJOL)

    This paper presents a case study in the development and application of a time study model in a paint manufacturing company. The organization specializes in the production of different grades of paint and paint containers. The paint production activities include; weighing of raw materials, drying of raw materials, dissolving ...

  12. Credibilistic programming an introduction to models and applications

    CERN Document Server

    2014-01-01

    It provides a fuzzy programming approach to solving real-life decision problems in a fuzzy environment. Within the framework of credibility theory, it provides a self-contained, comprehensive and up-to-date presentation of fuzzy programming models, algorithms and applications in portfolio analysis.

  13. On the applicability of models for outdoor sound (A)

    DEFF Research Database (Denmark)

    Rasmussen, Karsten Bo

    1999-01-01

    The suitable prediction model for outdoor sound fields depends on the situation and the application. Computationally intensive methods such as parabolic equation methods, FFP methods, and boundary element methods all have advantages in certain situations. These approaches are accurate and predict...

  14. On the applicability of models for outdoor sound

    DEFF Research Database (Denmark)

    Rasmussen, Karsten Bo

    1999-01-01

    The suitable prediction model for outdoor sound fields depends on the situation and the application. Computationally intensive methods such as Parabolic Equation methods, FFP methods and Boundary Element Methods all have advantages in certain situations. These approaches are accurate and predict...

  15. FUNCTIONAL MODELLING FOR FAULT DIAGNOSIS AND ITS APPLICATION FOR NPP

    Directory of Open Access Journals (Sweden)

    MORTEN LIND

    2014-12-01

    Full Text Available The paper presents functional modelling and its application to diagnosis in nuclear power plants. Functional modelling is defined and its relevance for coping with the complexity of diagnosis in large-scale systems like nuclear plants is explained. The diagnosis task is analyzed, and it is demonstrated that the levels of abstraction in models for diagnosis must reflect plant knowledge about goals and functions, which is represented in functional modelling. Multilevel flow modelling (MFM), a method for functional modelling, is introduced briefly and illustrated with a cooling system example. The use of MFM for reasoning about causes and consequences is explained in detail and demonstrated using the reasoning tool, the MFMSuite. MFM applications in nuclear power systems are described by two examples: a PWR and an FBR reactor. The PWR example shows how MFM can be used to model and reason about operating modes. The FBR example illustrates how the modelling development effort can be managed by proper strategies, including decomposition and reuse.

  16. Semantic Model Driven Architecture Based Method for Enterprise Application Development

    Science.gov (United States)

    Wu, Minghui; Ying, Jing; Yan, Hui

    Enterprise applications must meet the needs of dynamic business processes and adopt the latest technologies flexibly, while solving the problems caused by their heterogeneous nature. Service-Oriented Architecture (SOA) is becoming a leading paradigm for business process integration. This research work focuses on business process modeling and proposes a semantic model-driven development method named SMDA, which combines Ontology and Model-Driven Architecture (MDA) technologies. The architecture of SMDA is presented in three orthogonal perspectives. (1) The vertical axis is the four MDA layers; the focus is UML profiles in M2 (the meta-model layer) for ontology modeling, and three abstraction levels: CIM, PIM and PSM modeling, respectively. (2) The horizontal axis covers the different concerns involved in the development: Process, Application, Information, Organization, and Technology. (3) The traversal axis refers to aspects that influence other models across the cutting axis: Architecture, Semantics, Aspect, and Pattern. The paper also introduces the modeling and transformation process in SMDA, and briefly describes support for dynamic service composition.

  17. Three-dimensional cardiac computational modelling: methods, features and applications.

    Science.gov (United States)

    Lopez-Perez, Alejandro; Sebastian, Rafael; Ferrero, Jose M

    2015-04-17

    The combination of computational models and biophysical simulations can help to interpret an array of experimental data and contribute to the understanding, diagnosis and treatment of complex diseases such as cardiac arrhythmias. For this reason, three-dimensional (3D) cardiac computational modelling is currently a rising field of research. The advance of medical imaging technology over the last decades has allowed the evolution from generic to patient-specific 3D cardiac models that faithfully represent the anatomy and different cardiac features of a given living subject. Here we analyse sixty representative 3D cardiac computational models developed and published during the last fifty years, describing their information sources, features, development methods and online availability. This paper also reviews the components necessary to build a 3D computational model of the heart aimed at biophysical simulation, paying special attention to cardiac electrophysiology (EP), and the existing approaches to incorporate those components. We assess the challenges associated with the different steps of the building process, from the processing of raw clinical or biological data to the final application, including image segmentation, inclusion of substructures and meshing, among others. We briefly outline the personalisation approaches that are currently available in 3D cardiac computational modelling. Finally, we present examples of several specific applications, mainly related to cardiac EP simulation and model-based image analysis, showing the potential usefulness of 3D cardiac computational modelling in clinical environments as a tool to aid in the prevention, diagnosis and treatment of cardiac diseases.

  18. Development and Application of Nonlinear Land-Use Regression Models

    Science.gov (United States)

    Champendal, Alexandre; Kanevski, Mikhail; Huguenot, Pierre-Emmanuel

    2014-05-01

    The problem of air pollution modelling in urban zones is of great importance from both scientific and applied points of view. At present there are several fundamental approaches, either based on science-based modelling (air pollution dispersion) or on the application of space-time geostatistical methods (e.g. the family of kriging models or conditional stochastic simulations). Recently, there have been important developments in so-called Land Use Regression (LUR) models. These models take into account geospatial information (e.g. traffic network, sources of pollution, average traffic, population census, land use, etc.) at different scales, for example, using buffering operations. Usually the dimension of the input space (number of independent variables) is within the range of 10-100. It has been shown that LUR models have some potential to model complex and highly variable patterns of air pollution in urban zones. Most LUR models currently used are linear models. In the present research, nonlinear LUR models are developed and applied to the city of Geneva. Two nonlinear data-driven models were elaborated: a multilayer perceptron and a random forest. An important part of the research also deals with a comprehensive exploratory data analysis using statistical, geostatistical and time series tools. Unsupervised self-organizing maps were applied to better understand space-time patterns of the pollution. The real data case study deals with spatial-temporal air pollution data of Geneva (2002-2011). Nitrogen dioxide (NO2) has caught our attention. It has effects on human health and on plants, and NO2 contributes to the phenomenon of acid rain. The negative effects of nitrogen dioxide on plants include reduced growth, production and pesticide resistance. Finally, regarding effects on materials, nitrogen dioxide increases corrosion. The data used for this study consist of a set of 106 NO2 passive sensors.
80 were used to build the models and the remaining 36 have constituted

  19. Extensions and applications of the Cox-Aalen survival model.

    Science.gov (United States)

    Scheike, Thomas H; Zhang, Mei-Jie

    2003-12-01

    Cox's regression model is the standard regression tool for survival analysis in most applications. Often, however, the model provides only a rough summary of the effect of some covariates. Therefore, if the aim is to give a detailed description of covariate effects and consequently to calculate predicted probabilities, more flexible models are needed. In an earlier article (Scheike and Zhang, 2002, Scandinavian Journal of Statistics 29, 75-88), we suggested a flexible extension of Cox's regression model which aimed at extending the Cox model only for those covariates where additional flexibility is needed. One important advantage of the suggested approach is that even though covariates are allowed a nonparametric effect, the difficulty of finding smoothing parameters is avoided. We show how the extended model also leads to simple formulae for predicted probabilities and their standard errors, for example, in the competing risks framework.
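
    The predicted probabilities mentioned above come, in the basic Cox model, from S(t | x) = exp(-Lambda0(t) * exp(beta'x)); a minimal sketch (illustrative numbers, not the Cox-Aalen extension itself):

```python
import math

def predicted_survival(cum_base_hazard, beta, x):
    """Cox model predicted survival: S(t | x) = exp(-Lambda0(t) * exp(beta' x))."""
    lp = sum(b * xi for b, xi in zip(beta, x))   # linear predictor beta'x
    return math.exp(-cum_base_hazard * math.exp(lp))

# Hypothetical values: baseline cumulative hazard 0.2 at some time t, one covariate
print(round(predicted_survival(0.2, [0.5], [1.0]), 4))   # 0.7191
```

    The Cox-Aalen extension replaces parts of this fixed-coefficient structure with nonparametric covariate effects, but the route from estimated quantities to a predicted probability is equally direct.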

  20. Storm surge modeling and applications in coastal areas

    Science.gov (United States)

    Dube, Shisir K.; Murty, Tad S.; Feyen, Jesse C.; Cabrera, Reggina; Harper, Bruce A.; Bales, Jerad D.; Amer, Saud A.

    2010-01-01

    This chapter introduces the reader to a wide spectrum of storm surge modeling systems used to assess the impact of tropical cyclones, covering a range of numerical methods, model domains, forcing and boundary conditions, and purposes. New technologies to obtain data, such as deployment of temporary sensors and remote sensing practices to support modeling, are also presented. Extensive storm surge modeling applications have been made with existing modeling systems, and some of them are described in this chapter. The authors recognize the importance of evaluating river-ocean interactions in coastal environments during tropical cyclones. Therefore, the coupling of hydraulic (riverine) and storm surge models is discussed. In addition, results are shown from studies performed on the coast of India, which generated maps that help emergency managers reduce risk due to coastal inundation.

  1. Modelling and simulation of diffusive processes methods and applications

    CERN Document Server

    Basu, SK

    2014-01-01

    This book addresses the key issues in the modeling and simulation of diffusive processes from a wide spectrum of different applications across a broad range of disciplines. Features: discusses diffusion and molecular transport in living cells and suspended sediment in open channels; examines the modeling of peristaltic transport of nanofluids, and isotachophoretic separation of ionic samples in microfluidics; reviews thermal characterization of non-homogeneous media and scale-dependent porous dispersion resulting from velocity fluctuations; describes the modeling of nitrogen fate and transport

  2. Costs equations for cost modeling: application of ABC Matrix

    Directory of Open Access Journals (Sweden)

    Alex Fabiano Bertollo Santana

    2016-03-01

    Full Text Available This article aimed at providing an application of the ABC Matrix model, a management tool that models processes and activities. The ABC Matrix is based on matrix multiplication, using a fast algorithm for the development of costing systems and the subsequent translation of the costs into cost equations and systems. The research methodology is classified as a case study, using simulation data to validate the model. The conclusion of the research is that the algorithm presented is an important development, because it is an effective approach to calculating product costs and because it provides simple and flexible algorithm design software for controlling the cost of products.
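    The matrix-multiplication idea behind the ABC Matrix — resource costs pooled into activities, then traced to products, each step a single matrix product — can be illustrated in a few lines. The resource, activity and product figures below are invented for illustration, not taken from the article:

```python
import numpy as np

# Total cost of each resource (e.g. salaries, energy, rent).
resource_cost = np.array([10_000.0, 4_000.0, 6_000.0])

# Resource-to-activity allocation matrix: rows are resources, columns are
# activities; each row sums to 1 (share of the resource each activity uses).
R = np.array([
    [0.5, 0.3, 0.2],
    [0.6, 0.2, 0.2],
    [0.1, 0.4, 0.5],
])

# Activity-to-product allocation matrix: rows are activities, columns are
# products; each row sums to 1 (share of the activity each product consumes).
A = np.array([
    [0.7, 0.3],
    [0.4, 0.6],
    [0.5, 0.5],
])

activity_cost = resource_cost @ R   # cost pooled into each activity
product_cost = activity_cost @ A    # cost traced to each product
print(product_cost)                 # -> [10980.  9020.]
```

    Because each allocation row sums to one, total cost is conserved at every stage — a useful built-in consistency check when deriving cost equations this way.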

  3. Model Oriented Application Generation for Industrial Control Systems

    CERN Document Server

    Copy, B; Blanco Vinuela, E; Fernandez Adiego, B; Nogueira Ferandes, R; Prieto Barreiro, I

    2011-01-01

    The CERN Unified Industrial Control Systems framework (UNICOS) is a software generation methodology and a collection of development tools that standardizes the design of industrial control applications [1]. A Software Factory, named the UNICOS Application Builder (UAB) [2], was introduced to ease extensibility and maintenance of the framework, introducing a stable metamodel, a set of platform-independent models and platform-specific configurations against which code generation plugins and configuration generation plugins can be written. Such plugins currently target PLC programming environments (Schneider and SIEMENS PLCs) as well as SIEMENS WinCC Open Architecture SCADA (previously known as ETM PVSS) but are being expanded to cover more and more aspects of process control systems. We present what constitutes the UNICOS metamodel and the models in use, how these models can be used to capture knowledge about industrial control systems and how this knowledge can be leveraged to generate both code and configuration...

  4. Statistical modelling for recurrent events: an application to sports injuries

    Science.gov (United States)

    Ullah, Shahid; Gabbett, Tim J; Finch, Caroline F

    2014-01-01

    Background Injuries are often recurrent, with subsequent injuries influenced by previous occurrences and hence correlation between events needs to be taken into account when analysing such data. Objective This paper compares five different survival models (Cox proportional hazards (CoxPH) model and the following generalisations to recurrent event data: Andersen-Gill (A-G), frailty, Wei-Lin-Weissfeld total time (WLW-TT) marginal, Prentice-Williams-Peterson gap time (PWP-GT) conditional models) for the analysis of recurrent injury data. Methods Empirical evaluation and comparison of different models were performed using model selection criteria and goodness-of-fit statistics. Simulation studies assessed the size and power of each model fit. Results The modelling approach is demonstrated through direct application to Australian National Rugby League recurrent injury data collected over the 2008 playing season. Of the 35 players analysed, 14 (40%) players had more than 1 injury and 47 contact injuries were sustained over 29 matches. The CoxPH model provided the poorest fit to the recurrent sports injury data. The fit was improved with the A-G and frailty models, compared to WLW-TT and PWP-GT models. Conclusions Despite little difference in model fit between the A-G and frailty models, in the interest of fewer statistical assumptions it is recommended that, where relevant, future studies involving modelling of recurrent sports injury data use the frailty model in preference to the CoxPH model or its other generalisations. The paper provides a rationale for future statistical modelling approaches for recurrent sports injury. PMID:22872683

  5. Statistical modelling for recurrent events: an application to sports injuries.

    Science.gov (United States)

    Ullah, Shahid; Gabbett, Tim J; Finch, Caroline F

    2014-09-01

    Injuries are often recurrent, with subsequent injuries influenced by previous occurrences and hence correlation between events needs to be taken into account when analysing such data. This paper compares five different survival models (Cox proportional hazards (CoxPH) model and the following generalisations to recurrent event data: Andersen-Gill (A-G), frailty, Wei-Lin-Weissfeld total time (WLW-TT) marginal, Prentice-Williams-Peterson gap time (PWP-GT) conditional models) for the analysis of recurrent injury data. Empirical evaluation and comparison of different models were performed using model selection criteria and goodness-of-fit statistics. Simulation studies assessed the size and power of each model fit. The modelling approach is demonstrated through direct application to Australian National Rugby League recurrent injury data collected over the 2008 playing season. Of the 35 players analysed, 14 (40%) players had more than 1 injury and 47 contact injuries were sustained over 29 matches. The CoxPH model provided the poorest fit to the recurrent sports injury data. The fit was improved with the A-G and frailty models, compared to WLW-TT and PWP-GT models. Despite little difference in model fit between the A-G and frailty models, in the interest of fewer statistical assumptions it is recommended that, where relevant, future studies involving modelling of recurrent sports injury data use the frailty model in preference to the CoxPH model or its other generalisations. The paper provides a rationale for future statistical modelling approaches for recurrent sports injury. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  6. Semantic Information Modeling for Emerging Applications in Smart Grid

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Qunzhi; Natarajan, Sreedhar; Simmhan, Yogesh; Prasanna, Viktor

    2012-04-16

    Smart Grid modernizes the power grid by integrating digital and information technologies. Millions of smart meters, intelligent appliances and communication infrastructures are being deployed, allowing advanced IT applications to be developed to secure and manage power grid operations. Demand response (DR) is one such emerging application to optimize electricity demand by curtailing/shifting power load when peak load occurs. Existing DR approaches are mostly based on static plans such as pricing policies and load shedding schedules. However, improvements to power management applications rely on data emanating from existing and new information sources with the growth of the Smart Grid information space. In particular, dynamic DR algorithms depend on information from smart meters that report interval-based power consumption measurements, HVAC systems that monitor buildings' heat and humidity, and even weather forecast services. In order for emerging Smart Grid applications to take advantage of the diverse data influx, extensible information integration is required. In this paper, we develop an integrated Smart Grid information model using Semantic Web techniques and present case studies of using semantic information for dynamic DR. We show that the semantic model facilitates information integration and knowledge representation for developing the next generation of Smart Grid applications.

  7. Application and Performance Analysis of a New Bundle Adjustment Model

    Science.gov (United States)

    Sun, Y.; Liu, X.; Chen, R.; Wan, J.; Wang, Q.; Wang, H.; Li, Y.; Yan, L.

    2017-09-01

    As the basis for photogrammetry, Bundle Adjustment (BA) can accurately restore the pose of cameras, reconstruct 3D models of the environment, and serve as the criterion for digital production. The classical nonlinear optimization of the BA model based on Euclidean coordinates suffers from serious dependence on the initial values, which can prevent it from converging quickly or from converging to a global minimum. This paper first introduces a new BA model based on parallax angle feature parametrization, and then analyses the applications and performance of the model in the photogrammetry field. To assess the impact and performance of the model (especially in aerial photogrammetry), experiments using two aerial datasets under different initial values were conducted. The experimental results are better than those of some well-known BA software packages, and the simulation results illustrate the greater stability of the new model compared with normal BA under Euclidean coordinates. In all, the new BA model shows promise for faster and more efficient aerial photogrammetry, with good convergence behaviour and fast convergence speed.

  8. Challenges of Microgrids in Remote Communities: A STEEP Model Application

    Directory of Open Access Journals (Sweden)

    Daniel Akinyele

    2018-02-01

    Full Text Available There is a growing interest in the application of microgrids around the world because of their potential for achieving a flexible, reliable, efficient and smart electrical grid system and supplying energy to off-grid communities, including their economic benefits. Several research studies have examined the application issues of microgrids. However, a lack of in-depth consideration of the enabling planning conditions has been identified as a major reason why microgrids fail in several off-grid communities. This development requires research efforts that consider better strategies and frameworks for sustainable microgrids in remote communities. This paper first presents a comprehensive review of microgrid technologies and their applications. It then proposes the STEEP model to critically examine the failure factors from the social, technical, economic, environmental and policy (STEEP) perspectives. The model details the key dimensions and actions necessary for addressing the challenge of microgrid failure in remote communities. The study uses remote communities within Nigeria, West Africa, as case studies and demonstrates the need for the STEEP approach for a better understanding of microgrid planning and development. Better insights into microgrid systems are expected to address the drawbacks and improve the situation, leading to widespread and sustainable applications in off-grid communities around the world in the future. The paper introduces the sustainable planning framework (SPF) based on the STEEP model, which can form a general basis for planning microgrids in any remote location.

  9. Application distribution model and related security attacks in VANET

    Science.gov (United States)

    Nikaein, Navid; Kanti Datta, Soumya; Marecar, Irshad; Bonnet, Christian

    2013-03-01

    In this paper, we present a model for application distribution and related security attacks in dense vehicular ad hoc networks (VANET) and sparse VANET which forms a delay tolerant network (DTN). We study the vulnerabilities of VANET to evaluate the attack scenarios and introduce a new attacker's model as an extension to the work done in [6]. Then a VANET model has been proposed that supports the application distribution through proxy app stores on top of mobile platforms installed in vehicles. The steps of application distribution have been studied in detail. We have identified key attacks (e.g. malware, spamming and phishing, software attack and threat to location privacy) for dense VANET and two attack scenarios for sparse VANET. It has been shown that attacks can be launched by distributing malicious applications and injecting malicious codes to the On Board Unit (OBU) by exploiting OBU software security holes. Consequences of such security attacks have been described. Finally, countermeasures including the concept of a sandbox have also been presented in depth.

  10. Application of online modeling to the operation of SLC

    Energy Technology Data Exchange (ETDEWEB)

    Woodley, M.D.; Sanchez-Chopitea, L.; Shoaee, H.

    1987-02-01

    Online computer models of first order beam optics have been developed for the commissioning, control and operation of the entire SLC including Damping Rings, Linac, Positron Return Line and Collider Arcs. A generalized online environment utilizing these models provides the capability for interactive selection of a desired optics configuration and for the study of its properties. Automated procedures have been developed which calculate and load beamline component set-points and which can scale magnet strengths to achieve desired beam properties for any Linac energy profile. Graphic displays facilitate comparison of design, desired and actual optical characteristics of the beamlines. Measured beam properties, such as beam emittance and dispersion, can be incorporated interactively into the models and used for beamline matching and optimization of injection and extraction efficiencies and beam transmission. The online optics modeling facility also serves as the foundation for many model-driven applications such as autosteering, calculation of beam launch parameters, emittance measurement and dispersion correction.

  11. Road Assessment Model and Pilot Application in China

    Directory of Open Access Journals (Sweden)

    Tiejun Zhang

    2014-01-01

    Full Text Available Risk assessment of roads is an effective approach for road agencies to determine safety improvement investments. It can increase the cost-effective returns in crash and injury reductions. To develop a powerful Chinese risk assessment model, the Research Institute of Highway (RIOH) is developing the China Road Assessment Programme (ChinaRAP) model to show the traffic crashes in China, in partnership with the International Road Assessment Programme (iRAP). The ChinaRAP model is based upon RIOH's achievements and iRAP models. This paper documents part of ChinaRAP's research work, mainly including the RIOH model and its pilot application in a province in China.

  12. Small-signal neural models and their applications.

    Science.gov (United States)

    Basu, Arindam

    2012-02-01

    This paper introduces the use of the concept of small-signal analysis, commonly used in circuit design, for understanding neural models. We show that neural models, varying in complexity from Hodgkin-Huxley to integrate-and-fire, have similar small-signal models when their corresponding differential equations are close to the same bifurcation with respect to input current. Three applications of small-signal neural models are shown. First, some of the properties of cortical neurons described by Izhikevich are explained intuitively through small-signal analysis. Second, we use small-signal models for deriving parameters for a simple neural model (such as resonate and fire) from a more complicated but biophysically relevant one like Morris-Lecar. We show similarity in the subthreshold behavior of the simple and complicated model when they are close to a Hopf bifurcation and a saddle-node bifurcation. Hence, this is useful to correctly tune simple neural models for large-scale cortical simulations. Finally, the biasing regime of a silicon ion channel is derived by comparing its small-signal model with a Hodgkin-Huxley-type model.
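    The core small-signal step — linearise the neuron's differential equations around a subthreshold equilibrium and inspect the Jacobian's eigenvalues — can be sketched on the FitzHugh-Nagumo model, used here as a simple stand-in for the models discussed in the abstract; the parameter values are illustrative:

```python
import numpy as np
from scipy.optimize import fsolve

# FitzHugh-Nagumo: dv/dt = v - v^3/3 - w + I,  dw/dt = eps*(v + a - b*w)
a, b, eps, I = 0.7, 0.8, 0.08, 0.0

def f(state):
    v, w = state
    return [v - v**3 / 3 - w + I, eps * (v + a - b * w)]

# Resting (subthreshold) equilibrium of the full nonlinear model.
v0, w0 = fsolve(f, [-1.0, -1.0])

# Small-signal model = Jacobian of f evaluated at the equilibrium.
J = np.array([
    [1 - v0**2, -1.0],
    [eps,       -eps * b],
])
eigvals = np.linalg.eigvals(J)
print(eigvals)
```

    At these parameters the eigenvalues form a complex pair with negative real part: small perturbations decay with damped oscillations, the "resonator" subthreshold behaviour that simpler resonate-and-fire models are tuned to reproduce.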

  13. Risk Measurement and Risk Modelling Using Applications of Vine Copulas

    Directory of Open Access Journals (Sweden)

    David E. Allen

    2017-09-01

    Full Text Available This paper features an application of Regular Vine copulas, a novel and recently developed statistical and mathematical tool which can be applied in the assessment of composite financial risk. Copula-based dependence modelling is a popular tool in financial applications, but is usually applied to pairs of securities. By contrast, Vine copulas provide greater flexibility and permit the modelling of complex dependency patterns using the rich variety of bivariate copulas, which may be arranged and analysed in a tree structure to explore multiple dependencies. The paper features the use of Regular Vine copulas in an analysis of the co-dependencies of 10 major European stock markets, as represented by individual market indices and the composite STOXX 50 index. The sample runs from 2005 to the end of 2013 to permit an exploration of how correlations change in different economic circumstances using three different sample periods: pre-GFC (January 2005–July 2007), GFC (July 2007–September 2009), and post-GFC (September 2009–December 2013). The empirical results suggest that the dependencies change in a complex manner, and are subject to change in different economic circumstances. One of the attractions of this approach to risk modelling is the flexibility in the choice of distributions used to model co-dependencies. The practical application of Regular Vine metrics is demonstrated via an example of the calculation of the VaR of a portfolio made up of the indices.
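    Whatever dependence model generates the scenarios — a fitted vine copula in the paper's case — the final VaR step reduces to taking a tail quantile of the simulated or historical portfolio returns. A minimal historical-simulation sketch with synthetic index returns (not vine-copula-based, and not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic daily returns for a portfolio of 10 indices (n_days x n_assets).
returns = rng.normal(0.0003, 0.01, size=(1000, 10))
weights = np.full(10, 0.1)  # equally weighted portfolio

portfolio_returns = returns @ weights

# 99% one-day Value-at-Risk: the loss exceeded on only 1% of days.
var_99 = -np.quantile(portfolio_returns, 0.01)
print(f"99% one-day VaR: {var_99:.4%}")
```

    Replacing the `returns` matrix with draws from a fitted dependence model (pair copulas arranged in a vine, as in the paper) changes the tail shape of `portfolio_returns` — and hence the VaR — without changing this final quantile computation.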

  14. Model-Driven Development of Automation and Control Applications: Modeling and Simulation of Control Sequences

    Directory of Open Access Journals (Sweden)

    Timo Vepsäläinen

    2014-01-01

    Full Text Available The scope and responsibilities of control applications are increasing due to, for example, the emergence of the industrial internet. To meet this challenge, model-driven development techniques have been actively researched in the application domain. Simulations that have been traditionally used in the domain, however, have not yet been sufficiently integrated into model-driven control application development. In this paper, a model-driven development process that includes support for design-time simulations is complemented with support for simulating sequential control functions. The approach is implemented with open source tools and demonstrated by creating and simulating a control system model in closed-loop with a large and complex model of a paper industry process.

  15. Language Model Applications to Spelling with Brain-Computer Interfaces

    Science.gov (United States)

    Mora-Cortes, Anderson; Manyakov, Nikolay V.; Chumerin, Nikolay; Van Hulle, Marc M.

    2014-01-01

    Within the Ambient Assisted Living (AAL) community, Brain-Computer Interfaces (BCIs) have raised great hopes as they provide alternative communication means for persons with disabilities, bypassing the need for speech and other motor activities. Although significant advancements have been realized in the last decade, applications of language models (e.g., word prediction, completion) have only recently started to appear in BCI systems. The main goal of this article is to review the language model applications that supplement non-invasive BCI-based communication systems by discussing their potential and limitations, and to discern future trends. First, a brief overview of the most prominent BCI spelling systems is given, followed by an in-depth discussion of the language models applied to them. These language models are classified according to their functionality in the context of BCI-based spelling: the static/dynamic nature of the user interface, the use of error correction and predictive spelling, and the potential to improve their classification performance by using language models. To conclude, the review offers an overview of the advantages and challenges of implementing language models in BCI-based communication systems in conjunction with other AAL technologies. PMID:24675760

  16. Language Model Applications to Spelling with Brain-Computer Interfaces

    Directory of Open Access Journals (Sweden)

    Anderson Mora-Cortes

    2014-03-01

    Full Text Available Within the Ambient Assisted Living (AAL) community, Brain-Computer Interfaces (BCIs) have raised great hopes as they provide alternative communication means for persons with disabilities, bypassing the need for speech and other motor activities. Although significant advancements have been realized in the last decade, applications of language models (e.g., word prediction, completion) have only recently started to appear in BCI systems. The main goal of this article is to review the language model applications that supplement non-invasive BCI-based communication systems by discussing their potential and limitations, and to discern future trends. First, a brief overview of the most prominent BCI spelling systems is given, followed by an in-depth discussion of the language models applied to them. These language models are classified according to their functionality in the context of BCI-based spelling: the static/dynamic nature of the user interface, the use of error correction and predictive spelling, and the potential to improve their classification performance by using language models. To conclude, the review offers an overview of the advantages and challenges of implementing language models in BCI-based communication systems in conjunction with other AAL technologies.

  17. Language model applications to spelling with Brain-Computer Interfaces.

    Science.gov (United States)

    Mora-Cortes, Anderson; Manyakov, Nikolay V; Chumerin, Nikolay; Van Hulle, Marc M

    2014-03-26

    Within the Ambient Assisted Living (AAL) community, Brain-Computer Interfaces (BCIs) have raised great hopes as they provide alternative communication means for persons with disabilities, bypassing the need for speech and other motor activities. Although significant advancements have been realized in the last decade, applications of language models (e.g., word prediction, completion) have only recently started to appear in BCI systems. The main goal of this article is to review the language model applications that supplement non-invasive BCI-based communication systems by discussing their potential and limitations, and to discern future trends. First, a brief overview of the most prominent BCI spelling systems is given, followed by an in-depth discussion of the language models applied to them. These language models are classified according to their functionality in the context of BCI-based spelling: the static/dynamic nature of the user interface, the use of error correction and predictive spelling, and the potential to improve their classification performance by using language models. To conclude, the review offers an overview of the advantages and challenges of implementing language models in BCI-based communication systems in conjunction with other AAL technologies.

  18. A review of toxicity models for realistic atmospheric applications

    Science.gov (United States)

    Gunatilaka, Ajith; Skvortsov, Alex; Gailis, Ralph

    2014-02-01

    There are many applications that need to study human health effects caused by exposure to toxic chemicals. Risk analysis for industrial sites, study of population health impacts of atmospheric pollutants, and operations research for assessing the potential impacts of chemical releases in military contexts are some examples. Because of safety risks and the high cost of field trials involving hazardous chemical releases, computer simulations are widely used for such studies. Modelling of atmospheric transport and dispersion of chemicals released into the atmosphere to determine the toxic chemical concentrations to which individuals will be exposed is one main component of these simulations, and there are well established atmospheric dispersion models for this purpose. Estimating the human health effects caused by the exposure to these predicted toxic chemical concentrations is the other main component. A number of different toxicity models for assessing the health effects of toxic chemical exposure are found in the literature. Because these different models have been developed based on different assumptions about the plume characteristics, chemical properties, and physiological response, there is a need to review and compare these models to understand their applicability. This paper reviews several toxicity models described in the literature. The paper also presents results of applying different toxicity models to simulated concentration time series data. These results show that the use of ensemble mean concentrations, which are what atmospheric dispersion models typically provide, to estimate human health effects of exposure to hazardous chemical releases may underestimate their impact when the toxic exponent, n, of the chemical is greater than one; the opposite phenomenon appears to hold when n is less than one. Models that only implicitly account for biological recovery processes may predict greater toxicity than the explicitly parameterised models. Despite the wide variety of models of varying degrees of complexity that is ...
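    The underestimation effect for toxic exponent n > 1 follows from Jensen's inequality: raising the ensemble-mean concentration to the power n discards the contribution of fluctuations, whereas toxic-load models integrate C^n over time. A small numerical check with a synthetic concentration series and an illustrative exponent:

```python
import numpy as np

rng = np.random.default_rng(1)

# Fluctuating concentration time series (stand-in for dispersion model output).
dt = 1.0
conc = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)

n = 2.0  # toxic exponent of the chemical (> 1)

# Toxic load from the full fluctuating series: sum of C^n * dt.
load_fluctuating = np.sum(conc**n) * dt

# Toxic load computed from the ensemble-mean concentration instead.
load_mean = len(conc) * np.mean(conc) ** n * dt

print(load_fluctuating > load_mean)  # -> True for n > 1
```

    For n < 1 the power function is concave and the inequality reverses, matching the "opposite phenomenon" noted in the abstract.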

  19. Handbook of EOQ inventory problems stochastic and deterministic models and applications

    CERN Document Server

    Choi, Tsan-Ming

    2013-01-01

    This book explores deterministic and stochastic EOQ-model based problems and applications, presenting technical analyses of single-echelon EOQ model based inventory problems, and applications of the EOQ model for multi-echelon supply chain inventory analysis.
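    The classical single-echelon result underlying these models is the EOQ formula Q* = sqrt(2DS/H), for annual demand D, fixed ordering cost S, and per-unit annual holding cost H; it balances ordering cost against holding cost. A minimal sketch with illustrative figures (not taken from the book):

```python
from math import sqrt

def eoq(demand, order_cost, holding_cost):
    """Economic order quantity minimising ordering + holding cost per year."""
    return sqrt(2 * demand * order_cost / holding_cost)

# Example: 1200 units/year demand, $100 per order, $6 per unit per year holding.
q_star = eoq(1200, 100, 6)
print(q_star)  # -> 200.0
```

    The stochastic and multi-echelon problems the book analyses generalise this expression rather than replace it, which is why the deterministic formula remains the usual starting point.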

  20. Nonparametric model validations for hidden Markov models with applications in financial econometrics.

    Science.gov (United States)

    Zhao, Zhibiao

    2011-06-01

    We address the nonparametric model validation problem for hidden Markov models with partially observable variables and hidden states. We achieve this goal by constructing a nonparametric simultaneous confidence envelope for the transition density function of the observable variables and checking whether the parametric density estimate is contained within such an envelope. Our specification test procedure is motivated by a functional connection between the transition density of the observable variables and the Markov transition kernel of the hidden states. Our approach is applicable to continuous time diffusion models, stochastic volatility models, nonlinear time series models, and models with market microstructure noise.

  1. Instructional Storytelling: Application of the Clinical Judgment Model in Nursing.

    Science.gov (United States)

    Timbrell, Jessica

    2017-05-01

    Little is known about the teaching and learning implications of instructional storytelling (IST) in nursing education or its potential connection to nursing theory. The literature establishes storytelling as a powerful teaching-learning method in the educational, business, humanities, and health sectors, but little exploration exists that is specific to nursing. An example of a story demonstrating application of the domains of Tanner's clinical judgment model links storytelling with learning outcomes appropriate for the novice nursing student. Application of Tanner's clinical judgment model offers consistency of learning experience while preserving the creativity inherent in IST. Further research into student learning outcomes achievement using IST is warranted as a step toward establishing best practices with IST in nursing education. [J Nurs Educ. 2017;56(5):305-308.]. Copyright 2017, SLACK Incorporated.

  2. Powder consolidation using cold spray process modeling and emerging applications

    CERN Document Server

    Moridi, Atieh

    2017-01-01

    This book first presents different approaches to modeling of the cold spray process with the aim of extending current understanding of its fundamental principles and then describes emerging applications of cold spray. In the coverage of modeling, careful attention is devoted to the assessment of critical and erosion velocities. In order to reveal the phenomenological characteristics of interface bonding, severe, localized plastic deformation and material jet formation are studied. Detailed consideration is also given to the effect of macroscopic defects such as interparticle boundaries and subsequent splat boundary cracking on the mechanical behavior of cold spray coatings. The discussion of applications focuses in particular on the repair of damaged parts and additive manufacturing in various disciplines from aerospace to biomedical engineering. Key aspects include a systematic study of defect shape and the ability of cold spray to fill the defect, examination of the fatigue behavior of coatings for structural...

  3. Generalized Bogoliubov Polariton Model: An Application to Stock Exchange Market

    Science.gov (United States)

    Thuy Anh, Chu; Anh, Truong Thi Ngoc; Lan, Nguyen Tri; Viet, Nguyen Ai

    2016-06-01

    A generalized Bogoliubov method for investigating non-simple and complex systems was developed. We take a two-branch polariton Hamiltonian model in the second quantization representation and replace the energies of quasi-particles by two distribution functions of the research objects. An application to the stock exchange market was taken as an example, where the change in the form of the return distribution functions from Boltzmann-like to Gaussian-like was studied.

  4. Modeling of bubble dynamics in relation to medical applications

    Energy Technology Data Exchange (ETDEWEB)

    Amendt, P.A.; London, R.A. (Lawrence Livermore National Lab., CA, United States); Strauss, M. (California Univ., Davis, CA, United States; Israel Atomic Energy Commission, Nuclear Research Center-Negev, Beersheba, Israel); and others

    1997-03-12

    In various pulsed-laser medical applications, strong stress transients can be generated in advance of vapor bubble formation. To better understand the evolution of stress transients and subsequent formation of vapor bubbles, two-dimensional simulations are presented in channel or cylindrical geometry with the LATIS (LAser TISsue) computer code. Differences with one-dimensional modeling are explored, and simulated experimental conditions for vapor bubble generation are presented and compared with data. 22 refs., 8 figs.

  5. Generalisation of geographic information cartographic modelling and applications

    CERN Document Server

    Mackaness, William A; Sarjakoski, L Tiina

    2011-01-01

    Theoretical and Applied Solutions in Multi-Scale Mapping. Users have come to expect instant access to up-to-date geographical information, with global coverage, presented at widely varying levels of detail as digital and paper products; customisable data that can readily be combined with other geographic information. These requirements present an immense challenge to those supporting the delivery of such services (National Mapping Agencies (NMAs), government departments, and private businesses). Generalisation of Geographic Information: Cartographic Modelling and Applications provides a detailed review

  6. Ozone modeling within plasmas for ozone sensor applications

    OpenAIRE

    Arshak, Khalil; Forde, Edward; Guiney, Ivor

    2007-01-01

    Peer-reviewed. Ozone (O3) is potentially hazardous to human health, and accurate prediction and measurement of this gas are essential in addressing its associated health risks. This paper presents theory to predict the levels of ozone concentration emitted from a dielectric barrier discharge (DBD) plasma for ozone sensing applications. This is done by postulating the kinetic model for ozone generation, with a DBD plasma at atmospheric pressure in air, in the form of a set of rate equations....

  7. Modeling of Facial Wrinkles for Applications in Computer Vision

    OpenAIRE

    Batool, Nazre; Chellappa, Rama

    2016-01-01

    International audience; Analysis and modeling of aging human faces have been extensively studied in the past decade for applications in computer vision such as age estimation, age progression and face recognition across aging. Most of this research work is based on facial appearance and facial features such as face shape, geometry, location of landmarks and patch-based texture features. Despite the recent availability of higher resolution, high quality facial images, we do not find much work ...

  8. Introduction to the papers of TWG06: Applications and modelling

    OpenAIRE

    Carreira, Susana; Barquero, Berta; Kaiser, Gabriele; Lingefjard, Thomas; Wake, Geoff

    2015-01-01

    International audience; The contributions that formed the working basis of the Thematic Working Group on Applications and modelling were characterized by a strong diversity in the topics and issues that were addressed. The group’s research field has thus shown to be both active and fruitful whilst maintaining its distinctive facet of being inclusive in terms of the different theoretical, methodological and philosophical perspectives taken by its researchers.

  9. Mathematical problem solving, modelling, applications, and links to other subjects

    OpenAIRE

    Blum, Werner; Niss, Mogens

    1989-01-01

    The paper will consist of three parts. In part I we shall present some background considerations which are necessary as a basis for what follows. We shall try to clarify some basic concepts and notions, and we shall collect the most important arguments (and related goals) in favour of problem solving, modelling and applications to other subjects in mathematics instruction. In the main part II we shall review the present state, recent trends, and prospective lines of developm...

  10. Advances in Intelligent Modelling and Simulation: Simulation Tools and Applications

    CERN Document Server

    Oplatková, Zuzana; Carvalho, Marco; Kisiel-Dorohinicki, Marek

    2012-01-01

    The human capacity to abstract complex systems and phenomena into simplified models has played a critical role in the rapid evolution of our modern industrial processes and scientific research. As a science and an art, Modelling and Simulation have been one of the core enablers of this remarkable human trait, and have become a topic of great importance for researchers and practitioners. This book was created to compile some of the most recent concepts, advances, challenges and ideas associated with Intelligent Modelling and Simulation frameworks, tools and applications. The first chapter discusses the important aspects of human interaction and the correct interpretation of results during simulations. The second chapter gets to the heart of the analysis of entrepreneurship by means of agent-based modelling and simulations. The following three chapters bring together the central theme of simulation frameworks, first describing an agent-based simulation framework, then a simulator for electrical machines, and...

  11. Modern methodology and applications in spatial-temporal modeling

    CERN Document Server

    Matsui, Tomoko

    2015-01-01

    This book provides a modern introductory tutorial on specialized methodological and applied aspects of spatial and temporal modeling. The areas covered involve a range of topics which reflect the diversity of this domain of research across a number of quantitative disciplines. For instance, the first chapter deals with non-parametric Bayesian inference via a recently developed framework known as kernel mean embedding which has had a significant influence in machine learning disciplines. The second chapter takes up non-parametric statistical methods for spatial field reconstruction and exceedance probability estimation based on Gaussian process-based models in the context of wireless sensor network data. The third chapter presents signal-processing methods applied to acoustic mood analysis based on music signal analysis. The fourth chapter covers models that are applicable to time series modeling in the domain of speech and language processing. This includes aspects of factor analysis, independent component an...

  12. Ionocovalency and Applications 1. Ionocovalency Model and Orbital Hybrid Scales

    Directory of Open Access Journals (Sweden)

    Yonghe Zhang

    2010-11-01

    Full Text Available Ionocovalency (IC), a quantitative dual nature of the atom, is defined and correlated with quantum-mechanical potential to describe quantitatively the dual properties of the bond. An orbital hybrid IC model scale, IC, and an IC electronegativity scale, XIC, are proposed, wherein the ionicity and the covalent radius are determined by spectroscopy. Being composed of the ionic function I and the covalent function C, the model describes quantitatively the dual properties of bond strength, charge density and ionic potential. Based on the atomic electron configuration and the various quantum-mechanically built-up dual parameters, the model forms a Dual Method of multiple-functional prediction, which has far more versatile and exceptional applications than traditional electronegativity scales and molecular properties. Hydrogen has unconventional values of IC and XIC, lower than those of boron. The IC model agrees fairly well with data on bond properties and satisfactorily explains chemical observations of elements throughout the Periodic Table.

  13. Radiation Belt Environment Model: Application to Space Weather and Beyond

    Science.gov (United States)

    Fok, Mei-Ching H.

    2011-01-01

    Understanding the dynamics and variability of the radiation belts is of great scientific and space weather significance. A physics-based Radiation Belt Environment (RBE) model has been developed to simulate and predict radiation particle intensities. The RBE model considers the influences of the solar wind, ring current and plasmasphere. It takes into account particle drift in realistic, time-varying magnetic and electric fields, and includes diffusive effects of wave-particle interactions with various wave modes in the magnetosphere. The RBE model has been used to perform event studies and real-time prediction of energetic electron fluxes. In this talk, we will describe the RBE model equation, inputs and capabilities. Recent advances in space weather applications and artificial radiation belt studies will be discussed as well.

  14. Mathematical modelling of anaerobic digestion processes: applications and future needs

    DEFF Research Database (Denmark)

    Batstone, Damien J.; Puyol, Daniel; Flores Alsina, Xavier

    2015-01-01

    Anaerobic process modelling is a mature and well-established field, largely guided by a mechanistic model structure that is defined by our understanding of underlying processes. This led to publication of the IWA ADM1, and strong supporting, analytical, and extension research in the 15 years since… of the role of the central carbon catabolic metabolism in anaerobic digestion, with an increased importance of phosphorus, sulfur, and metals as electron source and sink, and consideration of hydrogen and methane as potential electron sources. The paradigm of anaerobic digestion is challenged by anoxygenic phototrophism, where energy is relatively cheap, but electron transfer is expensive. These new processes are commonly not compatible with the existing structure of anaerobic digestion models. These core issues extend to application of anaerobic digestion in domestic plant-wide modelling, with the need…

  15. Anatomical models for space radiation applications: an overview.

    Science.gov (United States)

    Atwell, W

    1994-10-01

    Extremely detailed computerized anatomical male (CAM) and female (CAF) models that have been developed for use in space radiation analyses are discussed and reviewed. Recognizing that the level of detail may currently be inadequate for certain radiological applications, one of the purposes of this paper is to elicit specific model improvements or requirements from the scientific user-community. Methods and rationale are presented which describe the approach used in the Space Shuttle program to extrapolate dosimetry measurements (skin doses) to realistic astronaut body organ doses. Several mission scenarios are presented which demonstrate the utility of the anatomical models for obtaining specific body organ exposure estimates and can be used for establishing cancer morbidity and mortality risk assessments. These exposure estimates are based on the trapped Van Allen belt and galactic cosmic radiation environment models and data from the major historical solar particle events.

  16. Development and application of new quality model for software projects.

    Science.gov (United States)

    Karnavel, K; Dillibabu, R

    2014-01-01

    The IT industry tries to employ a number of models to identify the defects in the construction of software projects. In this paper, we present COQUALMO and its limitations and aim to increase the quality without increasing the cost and time. The computation time, cost, and effort to predict the residual defects are very high; this was overcome by developing an appropriate new quality model named the software testing defect corrective model (STDCM). The STDCM was used to estimate the number of remaining residual defects in the software product; a few assumptions and the detailed steps of the STDCM are highlighted. The application of the STDCM is explored in software projects. The implementation of the model is validated using statistical inference, which shows there is a significant improvement in the quality of the software projects.

  17. Systems Engineering Model and Training Application for Desktop Environment

    Science.gov (United States)

    May, Jeffrey T.

    2010-01-01

    Provides a graphical user interface (GUI)-based simulator for desktop training, operations, procedure development and system reference. This simulator allows engineers to train on and further understand the dynamics of their system from their local desktops. It allows users to train on and evaluate their system at a pace and skill level matched to the user's competency, and from a perspective based on the user's need. The simulator will not require any special resources to execute and should generally be available for use. The interface is based on the concept of presenting the model of the system in the ways that best suit the user's application or training needs. The three levels of views are the Component View, the System View (overall system), and the Console View (monitor). These views are portals into a single model, so changing the model from one view, or from a model-manager Graphical User Interface, will be reflected in all other views.

  18. Sensors advancements in modeling, design issues, fabrication and practical applications

    CERN Document Server

    Mukhopadhyay, Subhash Chandra

    2008-01-01

    Sensors are the most important component in any system and engineers in any field need to understand the fundamentals of how these components work, how to select them properly and how to integrate them into an overall system. This book has outlined the fundamentals, analytical concepts, modelling and design issues, technical details and practical applications of different types of sensors, electromagnetic, capacitive, ultrasonic, vision, Terahertz, displacement, fibre-optic and so on. The book: addresses the identification, modeling, selection, operation and integration of a wide variety of se

  19. Models of Hydrogel Swelling with Applications to Hydration Sensing

    Directory of Open Access Journals (Sweden)

    Kathryn Morton

    2007-09-01

    Full Text Available Hydrogels, polymers and various other composite materials may be used in sensing applications in which the swelling or de-swelling of the material in response to some analyte is converted via a transducer to a measurable signal. In this paper, we analyze models used to predict the swelling behavior of hydrogels that may be used in applications related to hydration monitoring in humans. Preliminary experimental data related to osmolality changes in fluids are presented for comparison with the theoretical models. Overall, good experimental agreement with the models is achieved.

  20. Models and applications of chaos theory in modern sciences

    CERN Document Server

    Zeraoulia, Elhadj

    2011-01-01

    This book presents a select group of papers that provide a comprehensive view of the models and applications of chaos theory in medicine, biology, ecology, economy, electronics, mechanical, and the human sciences. Covering both the experimental and theoretical aspects of the subject, it examines a range of current topics of interest. It considers the problems arising in the study of discrete and continuous time chaotic dynamical systems modeling the several phenomena in nature and society-highlighting powerful techniques being developed to meet these challenges that stem from the area of nonli

  1. Identifying influences on model uncertainty: an application using a forest carbon budget model

    Science.gov (United States)

    James E. Smith; Linda S. Heath

    2001-01-01

    Uncertainty is an important consideration for both developers and users of environmental simulation models. Establishing quantitative estimates of uncertainty for deterministic models can be difficult when the underlying bases for such information are scarce. We demonstrate an application of probabilistic uncertainty analysis that provides for refinements in...

  2. The DO ART Model: An Ethical Decision-Making Model Applicable to Art Therapy

    Science.gov (United States)

    Hauck, Jessica; Ling, Thomson

    2016-01-01

    Although art therapists have discussed the importance of taking a positive stance in terms of ethical decision making (Hinz, 2011), an ethical decision-making model applicable for the field of art therapy has yet to emerge. As the field of art therapy continues to grow, an accessible, theoretically grounded, and logical decision-making model is…

  3. Multivariate logit copula model with an application to dental data.

    Science.gov (United States)

    Nikoloulopoulos, Aristidis K; Karlis, Dimitris

    2008-12-30

    Applications of copulas for multivariate continuous data abound, but there are only a few that treat multivariate binary data. In the present paper, we model multivariate binary data based on copulas using mixtures of max-infinitely divisible copulas, introduced by Joe and Hu (J. Multivar. Anal. 1996; 57(2): 240-265). When applying copulas to binary data, the marginal distributions also contribute to the dependence measures. We propose the use of covariate information in the copula parameters to obtain a direct effect of a covariate on dependence. To deal with model uncertainty due to selecting among several candidate models, we use a model averaging technique. We apply the model to data from the Signal-Tandmobiel dental study and, in particular, to four binary responses that refer to caries experience in the mandibular and maxillary left and right molars. We aim to model Kendall's tau associations between them and examine how covariate information affects these associations. We found that there are systematically larger associations between the two mandibular and the two maxillary molars. Using covariates to model these associations more closely, we found that systemic fluoride intake and the age of the children affect the associations. Note that such relationships could not have been revealed by methods that focus on the marginal models. Copyright 2008 John Wiley & Sons, Ltd.
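    The abstract's point that binary margins contribute to dependence measures can be seen in a small simulation. This sketch uses a plain Gaussian copula and sample Kendall's tau, not the authors' max-infinitely divisible copula mixtures; all numbers are illustrative.

```python
import numpy as np
from scipy.stats import norm, kendalltau

# Draw from a bivariate Gaussian copula with latent correlation 0.6, then
# threshold to binary margins with P(Y = 1) = 0.1.  The marginal thresholds
# visibly attenuate the observed Kendall's tau.
rng = np.random.default_rng(0)
rho = 0.6
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal([0.0, 0.0], cov, size=20000)
u = norm.cdf(z)                       # uniforms on the copula scale
y = (u < 0.1).astype(int)             # binary responses

tau_latent, _ = kendalltau(z[:, 0], z[:, 1])   # ~ (2/pi) * arcsin(rho)
tau_binary, _ = kendalltau(y[:, 0], y[:, 1])   # attenuated by the margins
```

    Because the binary tau depends on the thresholds as well as the copula parameter, covariates that shift the margins also shift the observed association, which is why the paper models covariate effects inside the copula directly.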

  4. Implementation and validation of model-based multi-threaded Java applications and Web services

    OpenAIRE

    Xue, Pengfei

    2008-01-01

    In the software engineering world, many modelling notations and languages have been developed to aid application development. The technologies, Java and Web services, play an increasingly important role in web applications. However, because of issues of complexity, it is difficult to build multi-threaded Java applications and Web Service applications, and even more difficult to model. Furthermore, it is difficult to reconcile the directly-coded application with the model-based application....

  5. A double continuum hydrological model for glacier applications

    Directory of Open Access Journals (Sweden)

    B. de Fleurian

    2014-01-01

    Full Text Available The flow of glaciers and ice streams is strongly influenced by the presence of water at the interface between ice and bed. In this paper, a hydrological model evaluating the subglacial water pressure is developed with the final aim of estimating the sliding velocities of glaciers. The global model fully couples the subglacial hydrology and the ice dynamics through a water-dependent friction law. The hydrological part of the model follows a double continuum approach which relies on the use of porous layers to compute water heads in inefficient and efficient drainage systems. This method has the advantage of a relatively low computational cost that would allow its application to large ice bodies such as Greenland or Antarctica ice streams. The hydrological model has been implemented in the finite element code Elmer/Ice, which simultaneously computes the ice flow. Herein, we present an application to the Haut Glacier d'Arolla for which we have a large number of observations, making it well suited to the purpose of validating both the hydrology and ice flow model components. The selection of hydrological, under-determined parameters from a wide range of values is guided by comparison of the model results with available glacier observations. Once this selection has been performed, the coupling between subglacial hydrology and ice dynamics is undertaken throughout a melt season. Results indicate that this new modelling approach for subglacial hydrology is able to reproduce the broad temporal and spatial patterns of the observed subglacial hydrological system. Furthermore, the coupling with the ice dynamics shows good agreement with the observed spring speed-up.

  6. Case studies of computer model applications in consulting practice

    Science.gov (United States)

    Siebein, Gary; Paek, Hyun; Lorang, Mark; McGuinnes, Courtney

    2002-05-01

    Six case studies of computer model applications in a consulting practice will be presented to illustrate the range of issues that can be studied with computer models, as well as the limitations of the technique at the present time. Case studies of elliptical conference rooms demonstrate basic acoustic ray principles and suggest remediation strategies. Models of a large themed entertainment venue with multiple amplified sound sources show how visualization of the acoustic ray paths can assist a consultant and client in value-engineering the locations and amounts of acoustic materials. The acoustic problems of an angled ceiling and a large rear wall were studied when a historic church was converted to a music performance hall. The computer model of a historic hall did not provide enough detailed information and was supplemented with physical model studies and full-size mock-up tests of the insertion of an elevator door that would open directly into the concert room. Studies to determine the amount of room-model detail needed to obtain realistic auralizations were also conducted. The integration of architectural acoustic design and audio system design was studied in computer models of a large church sanctuary.

  7. Hybrid Geoid Model: Theory and Application in Brazil.

    Science.gov (United States)

    Arana, Daniel; Camargo, Paulo O; Guimarães, Gabriel N

    2017-01-01

    Determination of the ellipsoidal height by Global Navigation Satellite Systems (GNSS) is becoming better known and used for purposes of leveling with the aid of geoid models. However, the disadvantage of this method is the quality of the geoid models, which degrades heights and limits the application of the method. In order to provide better quality in height transformation using GNSS leveling, this research aims to develop a hybridization methodology for the gravimetric geoid models EGM08, MAPGEO2015 and GEOIDSP2014 for the State of São Paulo, providing models more consistent with GNSS technology. Radial Basis Function (RBF) neural networks were used to obtain the corrector surface, based on differences between geoid model undulations and the undulations obtained by GNSS tracking on benchmarks. The experiments showed that the most suitable interpolation for correction modeling is the linear RBF. Checkpoints indicate that the hybrid geoid models feature root mean square deviations of ±0.107, ±0.104 and ±0.098 m, respectively. The results show an improvement of 30 to 40% in consistency compared with the gravimetric geoids, providing users with better quality in the transformation of geometric to orthometric heights.
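    The corrector-surface idea can be sketched with a linear radial basis function fit to benchmark residuals. The coordinates and residuals below are synthetic placeholders, not data from the study, and SciPy's `RBFInterpolator` stands in for the paper's RBF neural network.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Synthetic benchmarks: (lat, lon) of GNSS-on-benchmark points and the
# residual N_GNSS - N_model there, i.e. the quantity the corrector models.
rng = np.random.default_rng(42)
pts = rng.uniform([-25.0, -53.0], [-20.0, -44.0], size=(40, 2))  # lat, lon (deg)
resid = 0.05 * np.sin(pts[:, 0]) + 0.03 * np.cos(pts[:, 1])      # residuals (m)

# Linear RBF corrector surface, the kernel the abstract found most suitable
corrector = RBFInterpolator(pts, resid, kernel="linear")

# Hybrid undulation at a query point = gravimetric N + corrector(lat, lon)
query = np.array([[-22.5, -48.0]])
correction = corrector(query)[0]
```

    With zero smoothing the surface reproduces the benchmark residuals exactly, so the hybrid model agrees with GNSS/leveling at the benchmarks and interpolates the correction elsewhere.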

  8. Equivalent-Continuum Modeling With Application to Carbon Nanotubes

    Science.gov (United States)

    Odegard, Gregory M.; Gates, Thomas S.; Nicholson, Lee M.; Wise, Kristopher E.

    2002-01-01

    A method has been proposed for developing structure-property relationships of nano-structured materials. This method serves as a link between computational chemistry and solid mechanics by substituting discrete molecular structures with equivalent-continuum models. It has been shown that this substitution may be accomplished by equating the vibrational potential energy of a nano-structured material with the strain energy of representative truss and continuum models. As important examples with direct application to the development and characterization of single-walled carbon nanotubes and the design of nanotube-based devices, the modeling technique has been applied to determine the effective-continuum geometry and bending rigidity of a graphene sheet. A representative volume element of the chemical structure of graphene has been substituted with equivalent-truss and equivalent-continuum models. As a result, an effective thickness of the continuum model has been determined; this effective thickness has been shown to be significantly larger than the inter-planar spacing of graphite. The effective bending rigidity of the equivalent-continuum model of a graphene sheet was determined by equating the vibrational potential energy of the molecular model of a graphene sheet subjected to cylindrical bending with the strain energy of an equivalent continuum plate subjected to cylindrical bending.

  9. Functional dynamic factor models with application to yield curve forecasting

    KAUST Repository

    Hays, Spencer

    2012-09-01

    Accurate forecasting of zero coupon bond yields for a continuum of maturities is paramount to bond portfolio management and derivative security pricing. Yet a universal model for yield curve forecasting has been elusive, and prior attempts often resulted in a trade-off between goodness of fit and consistency with economic theory. To address this, herein we propose a novel formulation which connects the dynamic factor model (DFM) framework with concepts from functional data analysis: a DFM with functional factor loading curves. This results in a model capable of forecasting functional time series. Further, in the yield curve context we show that the model retains economic interpretation. Model estimation is achieved through an expectation-maximization algorithm, where the time series parameters and factor loading curves are simultaneously estimated in a single step. Efficient computing is implemented and a data-driven smoothing parameter is nicely incorporated. We show that our model performs very well on forecasting actual yield data compared with existing approaches, especially in regard to profit-based assessment for an innovative trading exercise. We further illustrate the viability of our model for applications outside of yield forecasting.
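    The factor-loading-curve idea can be roughly illustrated on a synthetic yield panel using plain PCA via SVD, a far simpler stand-in for the paper's functional EM estimation; all data below are simulated.

```python
import numpy as np

# Synthetic yield-curve panel driven by two latent factors (level and slope).
# PCA recovers factor loading "curves" over maturity and factor time series.
rng = np.random.default_rng(1)
maturities = np.linspace(0.25, 10.0, 20)         # maturities in years
T = 300                                          # number of time points
level = 0.03 + 0.0002 * rng.standard_normal(T).cumsum()
slope = 0.0005 * rng.standard_normal(T).cumsum()
curves = (level[:, None]
          + slope[:, None] * np.exp(-maturities / 3.0)[None, :]
          + 1e-4 * rng.standard_normal((T, len(maturities))))

X = curves - curves.mean(axis=0)                 # center the panel
U, s, Vt = np.linalg.svd(X, full_matrices=False)
loadings = Vt[:2]                                # two loading curves (over maturity)
factors = U[:, :2] * s[:2]                       # corresponding factor time series
explained = (s[:2] ** 2).sum() / (s ** 2).sum()  # variance captured by 2 factors
```

    Forecasting the low-dimensional factor series and mapping back through the loading curves then yields a forecast of the whole curve, which is the basic mechanics the DFM formalizes.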

  10. Hybrid Geoid Model: Theory and Application in Brazil

    Directory of Open Access Journals (Sweden)

    DANIEL ARANA

    Full Text Available Determination of the ellipsoidal height by Global Navigation Satellite Systems (GNSS) is becoming better known and used for purposes of leveling with the aid of geoid models. However, the disadvantage of this method is the quality of the geoid models, which degrades heights and limits the application of the method. In order to provide better quality in height transformation using GNSS leveling, this research aims to develop a hybridization methodology for the gravimetric geoid models EGM08, MAPGEO2015 and GEOIDSP2014 for the State of São Paulo, providing models more consistent with GNSS technology. Radial Basis Function (RBF) neural networks were used to obtain the corrector surface, based on differences between geoid model undulations and the undulations obtained by GNSS tracking on benchmarks. The experiments showed that the most suitable interpolation for correction modeling is the linear RBF. Checkpoints indicate that the hybrid geoid models feature root mean square deviations of ±0.107, ±0.104 and ±0.098 m, respectively. The results show an improvement of 30 to 40% in consistency compared with the gravimetric geoids, providing users with better quality in the transformation of geometric to orthometric heights.

  11. Bayesian statistic methods and their application in probabilistic simulation models

    Directory of Open Access Journals (Sweden)

    Sergio Iannazzo

    2007-03-01

    Full Text Available Bayesian statistical methods are facing a rapidly growing level of interest and acceptance in the field of health economics. The reasons for this success are probably to be found in the theoretical foundations of the discipline, which make these techniques more appealing for decision analysis. To this should be added modern IT progress, which has produced various flexible and powerful statistical software frameworks. Among them, probably one of the most notable is the BUGS language project and its standalone application for MS Windows, WinBUGS. The scope of this paper is to introduce the subject and to show some interesting applications of WinBUGS in developing complex economic models based on Markov chains. The advantages of this approach reside in the elegance of the code produced and in its capability to easily develop probabilistic simulations. Moreover, an example of the integration of Bayesian inference models into a Markov model is shown. This last feature lets the analyst conduct statistical analyses on the available sources of evidence and exploit them directly as inputs to the economic model.
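    At its core, a Markov-chain economic model of the kind described reduces to repeated multiplication of a cohort vector by a transition matrix; WinBUGS would express the same structure (plus priors) in the BUGS language. This Python sketch uses an illustrative three-state model whose probabilities are assumptions, not values from the paper.

```python
import numpy as np

# Three-state Markov cohort model (Well, Sick, Dead).  Row i holds the
# transition probabilities out of state i, so each row sums to 1.
P = np.array([
    [0.85, 0.10, 0.05],   # from Well
    [0.00, 0.70, 0.30],   # from Sick
    [0.00, 0.00, 1.00],   # Dead is absorbing
])

state = np.array([1.0, 0.0, 0.0])   # entire cohort starts in Well
trace = [state]
for _cycle in range(20):            # run 20 model cycles
    state = state @ P
    trace.append(state)

# state now holds the cohort distribution after 20 cycles; attaching costs
# and utilities to each state would turn this into a cost-effectiveness model.
```

    A probabilistic version replaces the fixed entries of P with draws from posterior (e.g. Beta or Dirichlet) distributions and repeats the loop per draw, which is the simulation style the paper implements in WinBUGS.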

  12. Influence of rainfall observation network on model calibration and application

    Directory of Open Access Journals (Sweden)

    A. Bárdossy

    2008-01-01

    Full Text Available The objective of this study is to investigate the influence of the spatial resolution of the rainfall input on model calibration and application. The analysis is carried out by varying the distribution of the raingauge network. A meso-scale catchment located in southwest Germany has been selected for this study. First, the semi-distributed HBV model is calibrated with precipitation interpolated from the available observed rainfall of the different raingauge networks. An automatic calibration method based on the combinatorial optimization algorithm simulated annealing is applied. The performance of the hydrological model is analyzed as a function of raingauge density. Secondly, the calibrated model is validated using interpolated precipitation from the same raingauge density used for calibration, as well as interpolated precipitation based on networks of reduced and increased raingauge density. Lastly, the effect of missing rainfall data is investigated by using a multiple linear regression approach for filling in the missing measurements. The model, calibrated with the complete set of observed data, is then run in the validation period using the above-described precipitation fields. The simulated hydrographs obtained in the three sets of experiments are analyzed through comparison of the computed Nash-Sutcliffe coefficients and several goodness-of-fit indexes. The results show that a model using different raingauge networks might need re-calibration of the model parameters: specifically, a model calibrated on relatively sparse precipitation information might perform well with dense precipitation information, while a model calibrated on dense precipitation information fails with sparse precipitation information.
    Also, the model calibrated with the complete set of observed precipitation and run with incomplete observed data associated with the data estimated using multiple linear regressions, at the locations treated as
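    The regression-based infilling step can be sketched as follows: regress the gauge with missing records on neighbouring gauges over periods when all gauges reported, then use the fitted coefficients to estimate the missing values. All gauge values and weights here are synthetic, for illustration only.

```python
import numpy as np

# Synthetic rainfall at three neighbouring gauges and one target gauge
rng = np.random.default_rng(7)
neigh = rng.gamma(2.0, 3.0, size=(200, 3))            # neighbour readings (mm)
true_w = np.array([0.5, 0.3, 0.2])                    # assumed "true" weights
target = neigh @ true_w + rng.normal(0.0, 0.1, 200)   # gauge to be in-filled

# Ordinary least squares with an intercept, fitted on complete records
X = np.column_stack([np.ones(len(neigh)), neigh])
coef, *_ = np.linalg.lstsq(X, target, rcond=None)

# Estimate one missing time step from the neighbours' simultaneous readings
new_obs = np.array([1.0, 4.2, 3.1, 5.0])              # [1, g1, g2, g3]
estimate = new_obs @ coef
```

    In practice one would fit a separate regression per gauge (or per season) and guard against negative estimates; the sketch shows only the core least-squares mechanics.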

  13. Modeling of shape memory alloys and application to porous materials

    Science.gov (United States)

    Panico, Michele

    In the last two decades the number of innovative applications for advanced materials has been rapidly increasing. Shape memory alloys (SMAs) are an exciting class of these materials which exhibit large reversible stresses and strains due to a thermoelastic phase transformation. SMAs have been employed in the biomedical field for producing cardiovascular stents, shape memory foams have been successfully tested as bone implant material, and SMAs are being used as deployable switches in aerospace applications. The behavior of shape memory alloys is intrinsically complex due to the coupling of phase transformation with thermomechanical loading, so it is critical for constitutive models to correctly simulate their response over a wide range of stress and temperature. In the first part of this dissertation, we propose a macroscopic phenomenological model for SMAs that is based on the classical framework of thermodynamics of irreversible processes and accounts for the effect of multiaxial stress states and non-proportional loading histories. The model is able to account for the evolution of both self-accommodated and oriented martensite. Moreover, reorientation of the product phase according to loading direction is specifically accounted for. Computational tests demonstrate the ability of the model to simulate the main aspects of the shape memory response in a one-dimensional setting and some of the features that have been experimentally found in the case of multi-axial non-proportional loading histories. In the second part of this dissertation, this constitutive model has been used to study the mesoscopic behavior of porous shape memory alloys with particular attention to the mechanical response under cyclic loading conditions. In order to perform numerical simulations, the model was implemented into the commercial finite element code ABAQUS. Due to stress concentrations in a porous microstructure, the constitutive law was enhanced to account for the development of

  14. Elasto-geometrical modeling and calibration of robot manipulators: Application to machining and forming applications

    OpenAIRE

    Marie, Stéphane; Courteille, Eric; Maurine, Patrick

    2013-01-01

This paper proposes an original elasto-geometrical calibration method to improve the static pose accuracy of industrial robots involved in machining, forming or assembly applications. Two approaches are presented, based respectively on an analytical parametric model and on a Takagi-Sugeno fuzzy inference system. Both are described and discussed, allowing the main drawbacks and advantages of each to be listed with respect to the task and the user requirements...

  15. Practical Application of Model Checking in Software Verification

    Science.gov (United States)

    Havelund, Klaus; Skakkebaek, Jens Ulrik

    1999-01-01

This paper presents our experiences in applying JAVA PATHFINDER (JPF), a recently developed JAVA-to-SPIN translator, to finding synchronization bugs in a Chinese Chess game server application written in JAVA. We give an overview of JPF and the subset of JAVA that it supports, and describe the abstraction and verification of the game server. Finally, we analyze the results of the effort. We argue that abstraction by under-approximation is necessary to obtain models small enough for verification purposes; that user guidance is crucial for effective abstraction; and that current model checkers do not conveniently support the computational models of software in general and JAVA in particular.

  16. Improved dual sided doped memristor: modelling and applications

    Directory of Open Access Journals (Sweden)

    Anup Shrivastava

    2014-05-01

The memristor, a novel and emerging electronic device with a vast range of applications, suffers from poor frequency response and limited saturation length. In this paper, the authors present a novel device structure for the memristor with two active layers, together with its non-linear ionic drift model, for improved frequency response and saturation length. The authors investigated and compared the I–V characteristics of the proposed model with those of conventional memristors and found better results in each case (i.e., for different window functions) for the proposed dual-sided doped memristor. For circuit-level simulation, they developed a SPICE model of the proposed memristor and designed some logic gates based on hybrid complementary metal oxide semiconductor memristive logic (memristor-ratioed logic). The proposed memristor yields improved results in terms of noise margin, delay time and dynamic hazards compared with conventional (single active layer) memristors.
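The non-linear ionic drift modelling that the record describes can be sketched numerically. The code below integrates the conventional single-active-layer HP-style model with the Joglekar window function, i.e. the baseline the dual-layer device is compared against; all parameter values are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

# Baseline HP-style memristor with nonlinear ionic drift (Joglekar window).
# The dual-layer structure proposed in the paper is NOT reproduced here;
# this is the conventional single-active-layer model it improves upon.
R_ON, R_OFF = 100.0, 16e3     # resistance of doped / undoped regions (ohm)
D = 10e-9                     # film thickness (m)
MU_V = 1e-14                  # dopant mobility (m^2 s^-1 V^-1)
P = 5                         # window-function exponent (assumed)

def joglekar(x, p=P):
    """Window f(x) = 1 - (2x - 1)^(2p): suppresses drift at the boundaries."""
    return 1.0 - (2.0 * x - 1.0) ** (2 * p)

def simulate(v_of_t, t):
    """Integrate dx/dt = mu*R_ON/D^2 * i(t) * f(x) with forward Euler."""
    x = 0.1                   # initial normalized doped-region width
    i_out = np.zeros_like(t)
    for k in range(len(t) - 1):
        r = R_ON * x + R_OFF * (1.0 - x)   # memristance M(x)
        i_out[k] = v_of_t[k] / r
        dt = t[k + 1] - t[k]
        x += MU_V * R_ON / D**2 * i_out[k] * joglekar(x) * dt
        x = min(max(x, 0.0), 1.0)          # state stays in [0, 1]
    i_out[-1] = v_of_t[-1] / (R_ON * x + R_OFF * (1.0 - x))
    return i_out

t = np.linspace(0.0, 2.0, 20000)
v = np.sin(2 * np.pi * 1.0 * t)            # 1 Hz sinusoidal drive
i = simulate(v, t)
# Plotting i against v traces the pinched hysteresis loop characteristic
# of memristive devices.
```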

  17. Clinical application of the five-factor model.

    Science.gov (United States)

    Widiger, Thomas A; Presnall, Jennifer Ruth

    2013-12-01

    The Five-Factor Model (FFM) has become the predominant dimensional model of general personality structure. The purpose of this paper is to suggest a clinical application. A substantial body of research indicates that the personality disorders included within the American Psychiatric Association's (APA) Diagnostic and Statistical Manual of Mental Disorders (DSM) can be understood as extreme and/or maladaptive variants of the FFM (the acronym "DSM" refers to any particular edition of the APA DSM). In addition, the current proposal for the forthcoming fifth edition of the DSM (i.e., DSM-5) is shifting closely toward an FFM dimensional trait model of personality disorder. Advantages of this shifting conceptualization are discussed, including treatment planning. © 2012 Wiley Periodicals, Inc.

  18. Application of simplified model to sensitivity analysis of solidification process

    Directory of Open Access Journals (Sweden)

    R. Szopa

    2007-12-01

The sensitivity models of thermal processes proceeding in the casting-mould-environment system give essential information concerning the influence of physical and technological parameters on the course of solidification. Knowledge of the time-dependent sensitivity field is also very useful in the numerical solution of inverse problems. Sensitivity models can be constructed using the direct approach, that is, by differentiating the basic energy equations and boundary-initial conditions with respect to the parameter considered. Unfortunately, the analytical form of the resulting equations and conditions can be very complex from both the mathematical and numerical points of view. An alternative approach, based on the differential quotient, can then be applied. In the paper the exact and approximate approaches to the modelling of sensitivity fields are discussed, and examples of computations are shown.
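The contrast between the two approaches in the abstract, direct differentiation versus the differential quotient, can be illustrated on a toy lumped cooling model. The exponential model and all parameter values below are stand-ins chosen for the sketch, not the paper's casting-mould energy equations.

```python
import numpy as np

# Direct vs. differential-quotient sensitivity for a lumped cooling model
# dT/dt = -h*(T - T_amb), a stand-in for the casting-mould energy equation.
T0, T_AMB = 1500.0, 20.0      # initial and ambient temperature (assumed)

def temperature(t, h):
    """Closed-form cooling curve T(t) for cooling coefficient h."""
    return T_AMB + (T0 - T_AMB) * np.exp(-h * t)

def sensitivity_exact(t, h):
    """Direct approach: differentiate the solution analytically w.r.t. h."""
    return -(T0 - T_AMB) * t * np.exp(-h * t)

def sensitivity_quotient(t, h, dh=1e-6):
    """Approximate approach: central differential quotient."""
    return (temperature(t, h + dh) - temperature(t, h - dh)) / (2.0 * dh)

t = np.linspace(0.0, 100.0, 11)
exact = sensitivity_exact(t, h=0.05)
approx = sensitivity_quotient(t, h=0.05)
# For a well-chosen step dh, both sensitivity fields agree closely.
```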

  19. Application of Bayesian Hierarchical Prior Modeling to Sparse Channel Estimation

    DEFF Research Database (Denmark)

    Pedersen, Niels Lovmand; Manchón, Carles Navarro; Shutin, Dmitriy

    2012-01-01

Existing methods for sparse channel estimation typically provide an estimate computed as the solution maximizing an objective function defined as the sum of the log-likelihood function and a penalization term proportional to the l1-norm of the parameter of interest. However, other penalization terms have proven to have strong sparsity-inducing properties. In this work, we design pilot-assisted channel estimators for OFDM wireless receivers within the framework of sparse Bayesian learning by defining hierarchical Bayesian prior models that lead to sparsity-inducing penalization terms. The estimators result as an application of the variational message-passing algorithm on the factor graph representing the signal model extended with the hierarchical prior models. Numerical results demonstrate the superior performance of our channel estimators as compared to traditional and state-of-the-art approaches.
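The l1-penalized baseline that the abstract's hierarchical Bayesian estimators are compared against can be sketched as follows. The pilot matrix, dimensions, sparsity level, and the ISTA solver are illustrative assumptions, not the paper's OFDM setup.

```python
import numpy as np

# l1-penalized sparse channel estimation (the baseline the paper improves
# upon), solved with iterative soft thresholding (ISTA).
rng = np.random.default_rng(0)
n_pilots, n_taps = 64, 128
A = rng.standard_normal((n_pilots, n_taps)) / np.sqrt(n_pilots)  # pilot matrix
h_true = np.zeros(n_taps)
h_true[rng.choice(n_taps, 5, replace=False)] = rng.standard_normal(5)  # 5-sparse
y = A @ h_true + 0.01 * rng.standard_normal(n_pilots)            # noisy pilots

def ista(A, y, lam=0.02, n_iter=500):
    """Minimize 0.5*||y - A h||^2 + lam*||h||_1 by soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    h = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = h + A.T @ (y - A @ h) / L      # gradient step
        h = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return h

h_hat = ista(A, y)
# Most coefficients shrink to exactly zero; the dominant taps survive.
```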

  20. Applications of modeling in polymer-property prediction

    Science.gov (United States)

    Case, F. H.

    1996-08-01

A number of molecular modeling techniques have been applied to the prediction of polymer properties and behavior. Five examples illustrate the range of methodologies used. A simple atomistic simulation of small polymer fragments is used to estimate drug compatibility with a polymer matrix. The analysis of molecular dynamics results from a more complex model of a swollen hydrogel system is used to study gas diffusion in contact lenses. Statistical mechanics is used to predict conformation-dependent properties; an example is the prediction of liquid-crystal formation. The effect of the molecular weight distribution on phase separation in polyalkanes is predicted using thermodynamic models. In some cases, the properties of interest cannot be directly predicted using simulation methods or polymer theory. Correlation methods may be used to bridge the gap between molecular structure and macroscopic properties. The final example shows how connectivity-index-based quantitative structure-property relationships were used to predict properties for candidate polyimides in an electronics application.

  1. A review of visual MODFLOW applications in groundwater modelling

    Science.gov (United States)

    Hariharan, V.; Shankar, M. Uma

    2017-11-01

Visual MODFLOW is a graphical user interface for the USGS MODFLOW. It is commercial software that is popular among hydrogeologists for its user-friendly features. The software is mainly used for groundwater flow and contaminant transport models under different conditions. This article reviews the versatility of its applications in groundwater modelling over the last 22 years. Agriculture, airfields, constructed wetlands, climate change, drought studies, Environmental Impact Assessment (EIA), landfills, mining operations, river and flood plain monitoring, salt water intrusion, soil profile surveys, watershed analyses, etc., are the areas where the software has reportedly been used to date. The review provides clarity on the scope of the software in groundwater modelling and research.

  2. Time Series Analysis, Modeling and Applications A Computational Intelligence Perspective

    CERN Document Server

    Chen, Shyi-Ming

    2013-01-01

Temporal and spatiotemporal data form an inherent fabric of society, as we are faced with streams of data coming from numerous sensors, data feeds, and recordings associated with numerous areas of application embracing physical and human-generated phenomena (environmental data, financial markets, Internet activities, etc.). A quest for a thorough analysis, interpretation, modeling and prediction of time series comes with an ongoing challenge for developing models that are both accurate and user-friendly (interpretable). The volume aims to exploit the conceptual and algorithmic framework of Computational Intelligence (CI) to form a cohesive and comprehensive environment for building models of time series. The contributions covered in the volume are fully reflective of the wealth of the CI technologies by bringing together ideas, algorithms, and numeric studies, which convincingly demonstrate their relevance, maturity and visible usefulness. It reflects upon the truly remarkable diversity of methodological a...

  3. Application of Z-Number Based Modeling in Psychological Research

    Directory of Open Access Journals (Sweden)

    Rafik Aliev

    2015-01-01

Pilates exercises have been shown to have a beneficial impact on the physical, physiological, and mental characteristics of human beings. In this paper, a Z-number-based fuzzy approach is applied to model the effect of Pilates exercises on motivation, attention, anxiety, and educational achievement. The psychological parameters are measured using internationally recognized instruments: the Academic Motivation Scale (AMS), the Test of Attention (D2 Test), and Spielberger's Anxiety Test completed by students. The GPA of students was used as the measure of educational achievement. Application of Z-information modeling allows us to increase the precision and reliability of data processing results in the presence of uncertainty in the input data created from completed questionnaires. The basic steps of Z-number-based modeling with numerical solutions are presented.

  4. Explicit Nonlinear Model Predictive Control Theory and Applications

    CERN Document Server

    Grancharova, Alexandra

    2012-01-01

Nonlinear Model Predictive Control (NMPC) has become the accepted methodology to solve complex control problems related to process industries. The main motivation behind explicit NMPC is that an explicit state feedback law avoids the need for executing a numerical optimization algorithm in real time. The benefits of an explicit solution, in addition to the efficient on-line computations, include also verifiability of the implementation and the possibility to design embedded control systems with low software and hardware complexity. This book considers the multi-parametric Nonlinear Programming (mp-NLP) approaches to explicit approximate NMPC of constrained nonlinear systems, developed by the authors, as well as their applications to various NMPC problem formulations and several case studies. The following types of nonlinear systems are considered, resulting in different NMPC problem formulations: nonlinear systems described by first-principles models and nonlinear systems described by black-box models; ...

  5. On the application of copula in modeling maintenance contract

    Science.gov (United States)

    Iskandar, B. P.; Husniah, H.

    2016-02-01

This paper deals with the application of copula in maintenance contracts for a nonrepayable item. Failures of the item are modeled using a two-dimensional approach in which failures depend on both the age and usage of the item, which requires a bivariate distribution for modelling failures. When the item fails, corrective maintenance (CM) is carried out as minimal repair. CM can be outsourced to an external agent or done in house. The decision problem for the owner is to find the maximum total profit, whilst that for the agent is to determine the optimal price of the contract. We obtain mathematical models of the decision problems for the owner as well as the agent using a Nash game theory formulation.

  6. A model of moral identity: applications for education.

    Science.gov (United States)

    Matsuba, M Kyle; Murzyn, Theresa; Hart, Daniel

    2011-01-01

    The purpose of this chapter is to build an intellectual bridge between moral psychology and education. Our hope is that the findings from moral psychology will inform and explain best practices in moral education. With that end in mind, we briefly and selectively review the moral education and character education literature highlighting some of the challenges these domains have faced. Next, we review the moral identity literature and offer our own model of moral identity formation emphasizing the "characteristic adaptations" (i.e., moral orientation, moral self, moral emotions, and social relationships and opportunities) of the model. Finally, we illustrate and explain how some of these "characteristic adaptations" have been or could be used in the development of successful moral education programs, and provide specific examples for application of our model in the domain of sex education.

  7. The Logistic Maturity Model: Application to a Fashion Company

    Directory of Open Access Journals (Sweden)

    Claudia Battista

    2013-08-01

This paper describes the structure of the logistic maturity model (LMM) in detail and shows the possible improvements that can be achieved by using this model, in terms of identifying the most appropriate actions to be taken to increase the performance of logistics processes in industrial companies. The paper also gives an example of the LMM's application to a famous Italian female fashion firm, which decided to use the model as a guideline for the optimization of its supply chain. Relying on a 5-level maturity staircase, specific achievement indicators as well as key performance indicators and best practices are defined and related to each logistics area/process/sub-process, allowing any user to easily and rapidly understand the most critical logistical issues in terms of process immaturity.

  8. Polycrystalline CVD diamond device level modeling for particle detection applications

    Science.gov (United States)

    Morozzi, A.; Passeri, D.; Kanxheri, K.; Servoli, L.; Lagomarsino, S.; Sciortino, S.

    2016-12-01

Diamond is a promising material whose excellent physical properties foster its use for radiation detection applications, in particular in hostile operating environments where the behavior of silicon-based detectors is limited by the high radiation fluence. Within this framework, the application of Technology Computer Aided Design (TCAD) simulation tools is highly desirable for the study, optimization and predictive analysis of sensing devices. Because diamond is a novel material in electronics, it is not included in the material libraries of commercial, state-of-the-art TCAD software tools. In this work, we propose the development, application and validation of numerical models to simulate the electrical behavior of polycrystalline (pc)CVD diamond conceived for diamond sensors for particle detection. The model focuses on the characterization of a physically-based pcCVD diamond bandgap, taking into account deep-level defects acting as recombination centers and/or trap states. While a definite picture of the polycrystalline diamond bandgap is still debated, the effect of the main parameters (e.g. trap densities, capture cross-sections, etc.) can be investigated in depth thanks to the simulation approach. The charge collection efficiency under β-particle irradiation of diamond materials provided by different vendors and with different electrode configurations has been selected as the figure of merit for model validation. The good agreement between measurements and simulation findings, with trap density as the only fitting parameter, demonstrates the suitability of the TCAD modeling approach as a predictive tool for the design and optimization of diamond-based radiation detectors.
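As background for the figure of merit used in this record, trap-limited charge collection efficiency (CCE) for a single carrier species is commonly described by the first-order Hecht relation, in which the drift length lambda = mu*tau*E lumps the trap parameters. The mu-tau products, thickness and bias below are illustrative, not fitted pcCVD values.

```python
import numpy as np

# Single-carrier Hecht relation: CCE = (lambda/d) * (1 - exp(-d/lambda)),
# where lambda = mu*tau*E is the trap-limited drift length.
def cce_hecht(thickness_um, mu_tau_cm2_V, bias_V):
    """Charge collection efficiency for one carrier species."""
    d = thickness_um * 1e-4            # detector thickness (cm)
    e_field = bias_V / d               # applied field (V/cm), assumed uniform
    lam = mu_tau_cm2_V * e_field       # drift length before trapping (cm)
    return (lam / d) * (1.0 - np.exp(-d / lam))

# Higher mu*tau (fewer / shallower traps) drives CCE toward 1.
cce_poor = cce_hecht(500.0, 1e-6, 400.0)   # heavily trapped material
cce_good = cce_hecht(500.0, 5e-5, 400.0)   # higher-quality material
```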

  9. Linear mixed effects models under inequality constraints with applications.

    Directory of Open Access Journals (Sweden)

    Laura Farnan

Constraints arise naturally in many scientific experiments/studies, such as in epidemiology, biology, and toxicology, and researchers often ignore such information when analyzing their data, using standard methods such as the analysis of variance (ANOVA). Such methods may not only result in a loss of power and efficiency in the costs of experimentation but may also result in poor interpretation of the data. In this paper we discuss constrained statistical inference in the context of linear mixed effects models that arise naturally in many applications, such as repeated measurements designs, familial studies and others. We introduce a novel methodology that is broadly applicable for a variety of constraints on the parameters. Since in many applications sample sizes are small and/or the data are not necessarily normally distributed, and furthermore error variances need not be homoscedastic (i.e., there is heterogeneity in the data), we use an empirical best linear unbiased predictor (EBLUP) type residual-based bootstrap methodology for deriving critical values of the proposed test. Our simulation studies suggest that the proposed procedure maintains the desired nominal Type I error while competing well with other tests in terms of power. We illustrate the proposed methodology by re-analyzing clinical trial data on blood mercury levels. The methodology introduced in this paper can be easily extended to other settings such as nonlinear and generalized regression models.

  10. Current developments in soil organic matter modeling and the expansion of model applications: a review

    Science.gov (United States)

    Campbell, Eleanor E.; Paustian, Keith

    2015-12-01

    Soil organic matter (SOM) is an important natural resource. It is fundamental to soil and ecosystem functions across a wide range of scales, from site-specific soil fertility and water holding capacity to global biogeochemical cycling. It is also a highly complex material that is sensitive to direct and indirect human impacts. In SOM research, simulation models play an important role by providing a mathematical framework to integrate, examine, and test the understanding of SOM dynamics. Simulation models of SOM are also increasingly used in more ‘applied’ settings to evaluate human impacts on ecosystem function, and to manage SOM for greenhouse gas mitigation, improved soil health, and sustainable use as a natural resource. Within this context, there is a need to maintain a robust connection between scientific developments in SOM modeling approaches and SOM model applications. This need forms the basis of this review. In this review we first provide an overview of SOM modeling, focusing on SOM theory, data-model integration, and model development as evidenced by a quantitative review of SOM literature. Second, we present the landscape of SOM model applications, focusing on examples in climate change policy. We conclude by discussing five areas of recent developments in SOM modeling including: (1) microbial roles in SOM stabilization; (2) modeling SOM saturation kinetics; (3) temperature controls on decomposition; (4) SOM dynamics in deep soil layers; and (5) SOM representation in earth system models. Our aim is to comprehensively connect SOM model development to its applications, revealing knowledge gaps in need of focused interdisciplinary attention and exposing pitfalls that, if avoided, can lead to best use of SOM models to support policy initiatives and sustainable land management solutions.

  11. Twin support vector machines models, extensions and applications

    CERN Document Server

    Jayadeva; Chandra, Suresh

    2017-01-01

    This book provides a systematic and focused study of the various aspects of twin support vector machines (TWSVM) and related developments for classification and regression. In addition to presenting most of the basic models of TWSVM and twin support vector regression (TWSVR) available in the literature, it also discusses the important and challenging applications of this new machine learning methodology. A chapter on “Additional Topics” has been included to discuss kernel optimization and support tensor machine topics, which are comparatively new but have great potential in applications. It is primarily written for graduate students and researchers in the area of machine learning and related topics in computer science, mathematics, electrical engineering, management science and finance.

  12. A Global Modeling Framework for Plasma Kinetics: Development and Applications

    Science.gov (United States)

    Parsey, Guy Morland

The modern study of plasmas, and applications thereof, has developed synchronously with computer capabilities since the mid-1950s. Complexities inherent to these charged-particle, many-body systems have resulted in the development of multiple simulation methods (particle-in-cell, fluid, global modeling, etc.) in order to both explain observed phenomena and predict outcomes of plasma applications. Recognizing that different algorithms are chosen to best address specific topics of interest, this thesis centers around the development of an open-source global model framework for the focused study of non-equilibrium plasma kinetics. After verification and validation of the framework, it was used to study two physical phenomena: plasma-assisted combustion and the recently proposed optically-pumped rare gas metastable laser. Global models permeate chemistry and plasma science, relying on spatial averaging to focus attention on the dynamics of reaction networks. Defined by a set of species continuity and energy conservation equations, the required data and constructed systems are conceptually similar across most applications, providing a light platform for exploratory and result-search parameter scanning. Unfortunately, it is common practice for custom code to be developed for each application, an enormous duplication of effort which negatively affects the quality of the software produced. Presented herein, the Python-based Kinetic Global Modeling framework (KGMf) was designed to support all modeling phases: collection and analysis of reaction data, construction of an exportable system of model ODEs, and a platform for interactive evaluation and post-processing analysis. A symbolic ODE system is constructed for interactive manipulation and generation of a Jacobian, both of which are compiled as operation-optimized C-code. Plasma-assisted combustion and ignition (PAC/PAI) embody the modernization of burning fuel by opening up new avenues of control and optimization
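The workflow this record describes, building a symbolic ODE system, auto-generating its Jacobian, and compiling both for fast evaluation, can be sketched with SymPy. The two-reaction chemistry and rate-constant values below are hypothetical, and the sketch compiles to NumPy callables rather than optimized C code.

```python
import numpy as np
import sympy as sp

# Toy global model: species continuity equations built symbolically,
# with the Jacobian generated automatically (the KGMf pattern in miniature).
ne, nA = sp.symbols("n_e n_A", positive=True)     # electron / neutral densities
k_ion, k_rec = sp.symbols("k_ion k_rec", positive=True)

# dn/dt for a hypothetical ionization / recombination network
rhs = sp.Matrix([
    k_ion * nA - k_rec * ne**2,   # electrons: ionization source - recombination
    -k_ion * nA,                  # neutrals consumed by ionization
])
jac = rhs.jacobian([ne, nA])      # symbolic Jacobian, derived automatically

# Compile both to fast numeric callables, ready for a stiff ODE integrator.
f = sp.lambdify((ne, nA, k_ion, k_rec), rhs, "numpy")
J = sp.lambdify((ne, nA, k_ion, k_rec), jac, "numpy")

rates = np.asarray(f(1e16, 1e20, 1e-15, 1e-13), dtype=float).ravel()
jac_val = np.asarray(J(1e16, 1e20, 1e-15, 1e-13), dtype=float)
```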

  13. Modeling Phosphorous Losses from Seasonal Manure Application Schemes

    Science.gov (United States)

    Menzies, E.; Walter, M. T.

    2015-12-01

Excess nutrient loading, especially of nitrogen and phosphorus, to surface waters is a common and significant problem throughout the United States. While pollution remediation efforts are continuously improving, the most effective treatment remains to limit the source. Appropriate timing of fertilizer application to reduce nutrient losses is currently a hotly debated topic in the Northeastern United States; winter spreading of manure is under special scrutiny. We plan to evaluate the loss of phosphorus to surface waters from agricultural systems under varying seasonal fertilization schemes in an effort to determine the impacts of fertilizers applied throughout the year. The Cayuga Lake basin, located in the Finger Lakes region of New York State, is a watershed dominated by agriculture where a wide array of land management strategies can be found. The evaluation will be conducted on the Fall Creek Watershed, a large sub-basin of the Cayuga Lake Watershed. The Fall Creek Watershed covers approximately 33,000 ha in central New York State, with approximately 50% of this land used for agriculture. We plan to use the Soil and Water Assessment Tool (SWAT) to model a number of seasonal fertilization regimes, such as summer-only spreading and year-round spreading (including winter applications), as well as others. We will use the model to quantify the phosphorus load to surface waters from these different fertilization schemes and determine the impacts of manure applied at different times throughout the year. More detailed knowledge about how seasonal fertilization schemes impact phosphorus losses will provide more information to stakeholders concerning the impacts of agriculture on surface water quality. Our results will help farmers and extensionists make more informed decisions about appropriate timing of manure application for reduced phosphorus losses and surface water degradation, as well as aid lawmakers in improving policy surrounding manure application.

  14. Supply chain management models, applications, and research directions

    CERN Document Server

    Pardalos, Panos; Romeijn, H

    2005-01-01

This work brings together some of the most up-to-date research in the application of operations research and mathematical modeling techniques to problems arising in supply chain management and e-Commerce. While research in the broad area of supply chain management encompasses a wide range of topics and methodologies, we believe this book provides a good snapshot of current quantitative modeling approaches, issues, and trends within the field. Each chapter is a self-contained study of a timely and relevant research problem in supply chain management. The individual works place a heavy emphasis on the application of modeling techniques to real world management problems. In many instances, the actual results from applying these techniques in practice are highlighted. In addition, each chapter provides important managerial insights that apply to general supply chain management practice. The book is divided into three parts. The first part contains chapters that address the new and rapidly growing role of the inte...

  15. Bilayer Graphene Application on NO2 Sensor Modelling

    Directory of Open Access Journals (Sweden)

    Elnaz Akbari

    2014-01-01

Graphene is a carbon allotrope: a single-atom-thin layer with sp2 hybridization and a two-dimensional (2D) honeycomb structure of carbon. As an outstanding material exhibiting unique mechanical, electrical, and chemical characteristics, including high strength, high conductivity, and high surface area, graphene has earned a remarkable position in today's experimental and theoretical studies as well as industrial applications. One such application incorporates the idea of using graphene to achieve higher accuracy and speed in detection devices utilized in cases where gas sensing is required. Although there are plenty of experimental studies in this field, analytical models remain scarce. As a starting point for modelling, a field-effect transistor (FET) based structure has been chosen as the platform, and the effect of NO2 injection on the bilayer graphene density of states is discussed. The chemical reaction between graphene and the gas creates new carriers in graphene, which causes density changes and eventually changes in the carrier velocity. In the presence of NO2 gas, electrons are donated to the FET channel, which is employed as the sensing mechanism. In order to evaluate the accuracy of the proposed models, the results obtained are compared with existing experimental data, and acceptable agreement is reported.
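One way to sketch the sensing mechanism described in this record, gas adsorption donating carriers and shifting channel conductance, is a Langmuir-type carrier-density model. The isotherm and every parameter value below are assumptions made for illustration, not the paper's derived FET model.

```python
import numpy as np

# Illustrative gas-sensor response: adsorbed NO2 donates carriers to the
# channel, raising sheet carrier density and hence conductance.
Q = 1.602e-19        # elementary charge (C)
MU = 0.2             # carrier mobility (m^2/Vs), assumed plausible value
N0 = 1e16            # baseline sheet carrier density (m^-2), assumed

def carrier_density(c_ppm, n_max=5e16, k=0.05):
    """Extra carriers from adsorbed gas with Langmuir-type saturation."""
    theta = k * c_ppm / (1.0 + k * c_ppm)   # fractional surface coverage
    return N0 + n_max * theta

def sheet_conductance(c_ppm):
    """Sheet conductance sigma = q * mu * n (siemens per square)."""
    return Q * MU * carrier_density(c_ppm)

conc = np.array([0.0, 1.0, 10.0, 100.0])    # NO2 concentration (ppm)
g = sheet_conductance(conc)
response = (g - g[0]) / g[0]                 # relative sensor response
# Response rises with concentration and saturates as adsorption sites fill.
```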

  16. Applicability of dual-route reading models to Spanish.

    Science.gov (United States)

    Ardila, Alfredo; Cuetos, Fernando

    2016-01-01

Two opposing points of view have been presented with regard to the applicability of dual-route reading models to Spanish. Some authors maintain that, given the transparency of the reading system, non-lexical reading is the strategy followed predominantly by Spanish readers, and for that reason these models are not appropriate to explain alexias (acquired dyslexias) in Spanish. Other authors consider that, since several cases of phonological, surface and deep alexia have been reported, dual-route reading models are applicable to Spanish in the same way as to irregular writing systems. In order to contrast these two points of view, an analysis of the two main factors that influence reading is made: the characteristics of Spanish orthography and the characteristics of Spanish readers. It is concluded that: (1) due to its transparency, non-lexical reading represents, as in other transparent orthographies, the initial reading strategy in Spanish; (2) the "reading threshold" (i.e., the time required to become literate) is lower in Spanish because there are no irregular words to learn; (3) as reading experience increases, speed increases and lexical reading is used more; and (4) given the characteristics of the Spanish reading system, it is understandable that the frequency of deep dyslexia is so low.

  17. The determination of the most applicable PWV model for Turkey

    Science.gov (United States)

    Deniz, Ilke; Gurbuz, Gokhan; Mekik, Cetin

    2016-07-01

Water vapor is a key component in modelling the atmosphere and in climate studies. Moreover, long-term water vapor changes can be an independent source for detecting climate change. Since Global Navigation Satellite Systems (GNSS) use microwaves passing through the atmosphere, atmospheric effects can be modeled with high accuracy. Tropospheric effects on GNSS signals are estimated with the zenith total delay parameter (ZTD), which is the sum of the hydrostatic (ZHD) and wet (ZWD) zenith delays. The first component can be obtained from meteorological observations with high accuracy; the second, however, can be computed by subtracting ZHD from ZTD (ZWD = ZTD - ZHD). Afterwards, the weighted mean temperature (Tm) or the conversion factor (Q) is used for the conversion between precipitable water vapor (PWV) and ZWD. The parameters Tm and Q are derived from the analysis of radiosonde stations' profile observations. Numerous Q and Tm models have been developed for individual radiosonde stations, radiosonde station groups, countries and global fields, such as the Bevis Tm model and Emardson and Derks' Q models. Accordingly, PWV models (Tm and Q models) for Turkey have been developed using a year of radiosonde data (2011) from 8 radiosonde stations. In this study the models developed are tested by comparing PWV(GNSS), computed by applying the Tm and Q models to the ZTD estimates derived with the Bernese and GAMIT/GLOBK software at GNSS stations established in Istanbul and Ankara, with that from the collocated radiosonde stations (PWV(RS)) from October 2013 to December 2014, with data obtained from a project (no 112Y350) supported by the Scientific and Technological Research Council of Turkey (TUBITAK). The comparison results show that PWV(GNSS) and PWV(RS) are highly correlated (86% for Ankara and 90% for Istanbul). Thus, the most applicable model for Turkey and the accuracy of GNSS meteorology are investigated. In addition, the Tm model was applied to the ZTD estimates of 20 TUSAGA-Active (CORS-TR) stations in
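The ZTD-to-PWV conversion chain this record relies on follows a standard recipe: subtract a Saastamoinen-type hydrostatic delay, estimate Tm from surface temperature with the Bevis relation, and scale ZWD by the conversion factor. The sketch below uses the commonly cited constants; the site values are made up for illustration.

```python
import numpy as np

# Standard GNSS-meteorology chain: ZWD = ZTD - ZHD, then PWV = Pi(Tm) * ZWD.
def zhd_saastamoinen(p_hpa, lat_rad, h_km):
    """Hydrostatic zenith delay (m) from surface pressure (Saastamoinen)."""
    return 0.0022768 * p_hpa / (
        1.0 - 0.00266 * np.cos(2.0 * lat_rad) - 0.00028 * h_km
    )

def pwv_from_ztd(ztd_m, p_hpa, t_surf_k, lat_rad, h_km):
    """Convert zenith total delay (m) to precipitable water vapor (m)."""
    zwd = ztd_m - zhd_saastamoinen(p_hpa, lat_rad, h_km)
    tm = 70.2 + 0.72 * t_surf_k              # Bevis weighted mean temperature (K)
    k2p, k3 = 22.1, 3.739e5                  # refractivity constants (K/hPa, K^2/hPa)
    rho_w, r_v = 1000.0, 461.5               # water density, vapor gas constant (SI)
    # Dimensionless conversion factor Pi ~ 0.15; 1e-2 converts K/hPa -> K/Pa.
    pi_factor = 1e6 / (rho_w * r_v * (k3 / tm + k2p) * 1e-2)
    return pi_factor * zwd

# Hypothetical Ankara-like site: ZTD 2.40 m, 900 hPa, 288 K, 39.9 N, 0.9 km
pwv_mm = 1000.0 * pwv_from_ztd(2.40, 900.0, 288.0, np.deg2rad(39.9), 0.9)
```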

  18. Building energy modeling for green architecture and intelligent dashboard applications

    Science.gov (United States)

    DeBlois, Justin

Buildings are responsible for 40% of the carbon emissions in the United States. Energy efficiency in this sector is key to reducing overall greenhouse gas emissions. This work studied a passive technique, the roof solar chimney, for architecturally reducing the cooling load in homes. Three models of the chimney were created: a zonal building energy model, a computational fluid dynamics model, and a numerical analytic model. The study estimated the error introduced to the building energy model (BEM) through key assumptions, and then used a sensitivity analysis to examine the impact on the model outputs. The conclusion was that the error in the building energy model is small enough to use it reliably for building simulation. Further studies simulated the roof solar chimney in a whole building, integrated into one side of the roof. Comparisons were made between high- and low-efficiency constructions and three ventilation strategies. The results showed that in four US climates, the roof solar chimney yields significant cooling load energy savings of up to 90%. After developing this new method for the small-scale representation of a passive architecture technique in BEM, the study expanded its scope to address a fundamental issue in modeling: the treatment of uncertainty from, and improvement of, occupant behavior. This is believed to be one of the weakest links in both accurate modeling and proper, energy-efficient building operation. A calibrated model of the Mascaro Center for Sustainable Innovation's LEED Gold, 3,400 m2 building was created. Then algorithms were developed for integration into the building's dashboard application that show the occupant the energy savings for a variety of behaviors in real time. An approach using neural networks to act on real-time building automation system data was found to be the most accurate and efficient way to predict the current energy savings for each scenario. A stochastic study examined the impact of the

  19. Modern EMC analysis techniques II models and applications

    CERN Document Server

    Kantartzis, Nikolaos V

    2008-01-01

    The objective of this two-volume book is the systematic and comprehensive description of the most competitive time-domain computational methods for the efficient modeling and accurate solution of modern real-world EMC problems. Intended to be self-contained, it gives a detailed presentation of all well-known algorithms, elucidating their merits and weaknesses, and accompanies the theoretical content with a variety of applications. In the present volume, numerical investigations delve into printed circuit boards, monolithic microwave integrated circuits, radio frequency microelectro…

  20. Application of a mathematical model for ergonomics in lean manufacturing.

    Science.gov (United States)

    Botti, Lucia; Mora, Cristina; Regattieri, Alberto

    2017-10-01

    The data presented in this article are related to the research article "Integrating ergonomics and lean manufacturing principles in a hybrid assembly line" (Botti et al., 2017) [1]. The results refer to the application of the mathematical model for the design of lean processes in hybrid assembly lines, meeting both the lean principles and the ergonomic requirements for safe assembly work. Data show that the success of a lean strategy is possible when ergonomics of workers is a parameter of the assembly process design.

  1. Modeling & imaging of bioelectrical activity principles and applications

    CERN Document Server

    He, Bin

    2010-01-01

    Over the past several decades, much progress has been made in understanding the mechanisms of electrical activity in biological tissues and systems, and for developing non-invasive functional imaging technologies to aid clinical diagnosis of dysfunction in the human body. The book will provide full basic coverage of the fundamentals of modeling of electrical activity in various human organs, such as heart and brain. It will include details of bioelectromagnetic measurements and source imaging technologies, as well as biomedical applications. The book will review the latest trends in

  2. Application of a mathematical model for ergonomics in lean manufacturing

    Directory of Open Access Journals (Sweden)

    Lucia Botti

    2017-10-01

    Full Text Available The data presented in this article are related to the research article “Integrating ergonomics and lean manufacturing principles in a hybrid assembly line” (Botti et al., 2017) [1]. The results refer to the application of the mathematical model for the design of lean processes in hybrid assembly lines, meeting both the lean principles and the ergonomic requirements for safe assembly work. Data show that the success of a lean strategy is possible when ergonomics of workers is a parameter of the assembly process design.

  3. Cognitive interference modeling with applications in power and admission control

    KAUST Repository

    Mahmood, Nurul Huda

    2012-10-01

    One of the key design challenges in a cognitive radio network is controlling the interference generated at coexisting primary receivers. In order to design efficient cognitive radio systems and to minimize their unwanted consequences, it is therefore necessary to effectively control the secondary interference at the primary receivers. In this paper, a generalized framework for the interference analysis of a cognitive radio network, where the different secondary transmitters may transmit with different powers and transmission probabilities, is presented and various applications of this interference model are demonstrated. The findings of the analytical performance analyses are confirmed through selected computer-based Monte-Carlo simulations. © 2012 IEEE.
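The kind of aggregate-interference analysis described above can be illustrated with a short Monte-Carlo sketch (this is not the paper's framework; the path-loss exponent, powers, distances and transmission probabilities below are illustrative assumptions):

```python
import random

# Hypothetical sketch of an aggregate-interference Monte Carlo: secondary
# transmitter i is active with its own probability p[i] and transmits at its
# own power P[i]; its contribution at the primary receiver decays with
# distance as d**-alpha (a simple path-loss assumption).
def simulate_interference(p, P, d, alpha=3.5, trials=20000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        inst = 0.0
        for pi, Pi, di in zip(p, P, d):
            if rng.random() < pi:        # is this transmitter active now?
                inst += Pi * di ** -alpha
        total += inst
    return total / trials                # Monte-Carlo mean interference

p = [0.3, 0.6, 0.9]                      # transmission probabilities
P = [1.0, 0.5, 0.1]                      # transmit powers
d = [2.0, 3.0, 1.5]                      # distances to the primary receiver
mc = simulate_interference(p, P, d)
# Closed-form mean for comparison: sum of p_i * P_i * d_i**-alpha.
analytic = sum(pi * Pi * di ** -3.5 for pi, Pi, di in zip(p, P, d))
```

Because each interferer contributes independently, the simulated mean converges to the closed-form sum, which is the kind of analysis/simulation cross-check the abstract mentions.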

  4. Applications of amorphous track models in radiation biology

    Science.gov (United States)

    Cucinotta, F. A.; Nikjoo, H.; Goodhead, D. T.; Wilson, J. W. (Principal Investigator)

    1999-01-01

    The average or amorphous track model uses the response of a system to gamma-rays and the radial distribution of dose about an ion's path to describe survival and other cellular endpoints from proton, heavy ion, and neutron irradiation. This model has been used for over 30 years to successfully fit many radiobiology data sets. We review several extensions of this approach that address objections to the original model, and consider applications of interest in radiobiology and space radiation risk assessment. In the light of present views of important cellular targets, the role of target size as manifested through the relative contributions from ion-kill (intra-track) and gamma-kill (inter-track) remains a critical question in understanding the success of the amorphous track model. Several variations of the amorphous model are discussed, including ones that consider the radial distribution of event-sizes rather than average electron dose, damage clusters rather than multiple targets, and a role for repair or damage processing.

  5. Applications of amorphous track models in radiation biology

    Energy Technology Data Exchange (ETDEWEB)

    Cucinotta, F.A. [National Aeronautics and Space Administration, Houston, TX (United States). Lyndon B. Johnson Space Center; Nikjoo, H.; Goodhead, D.T. [Medical Research Council, Harwell (United Kingdom). Radiation and Genome Stability Unit

    1999-07-01

    The average or amorphous track model uses the response of a system to gamma-rays and the radial distribution of dose about an ion's path to describe survival and other cellular endpoints from proton, heavy ion, and neutron irradiation. This model has been used for over 30 years to successfully fit many radiobiology data sets. We review several extensions of this approach that address objections to the original model, and consider applications of interest in radiobiology and space radiation risk assessment. In the light of present views of important cellular targets, the role of target size as manifested through the relative contributions from ion-kill (intra-track) and gamma-kill (inter-track) remains a critical question in understanding the success of the amorphous track model. Several variations of the amorphous model are discussed, including ones that consider the radial distribution of event-sizes rather than average electron dose, damage clusters rather than multiple targets, and a role for repair or damage processing. (orig.)

  6. Application of the ACASA model for urban development studies

    Science.gov (United States)

    Marras, S.; Pyles, R. D.; Falk, M.; Snyder, R. L.; Paw U, K. T.; Blecic, I.; Trunfio, G. A.; Cecchini, A.; Spano, D.

    2012-04-01

    Since urban population is growing fast and urban areas are recognized as the major source of CO2 emissions, more attention has been dedicated to the topic of urban sustainability and its connection with the climate. Urban flows of energy, water and carbon have an important impact on climate change, and their quantification is pivotal in city design and management. A large effort has been devoted to quantitative estimates of the urban metabolism components, and several advanced models have been developed and used at different spatial and temporal scales for this purpose. However, it is necessary to develop suitable tools and indicators to effectively support urban planning and management with the goal of achieving a more sustainable metabolism in the urban environment. In this study, the multilayer model ACASA (Advanced Canopy-Atmosphere-Soil Algorithm) was chosen to simulate the exchanges of heat, water vapour and CO2 within and above the urban canopy. After several calibration and evaluation tests over natural and agricultural ecosystems, the model was recently modified for application in urban and peri-urban areas. New equations to account for the anthropogenic contribution to heat exchange and carbon production, as well as key parameterizations of leaf-facet scale interactions to separate both biogenic and anthropogenic flux sources and sinks, were added to test changes in land use or urban planning strategies. The analysis was based on the evaluation of the ACASA model performance in estimating urban metabolism components at the local scale. Simulated sensible heat, latent heat, and carbon fluxes were compared with in situ Eddy Covariance measurements collected in the city centre of Florence (Italy). Statistical analysis was performed to test the model accuracy and reliability. A sensitivity analysis with respect to soil types and increased population density values was conducted to investigate the potential use of ACASA for evaluating the impact of alternative planning scenarios.

  7. Solid modeling and applications rapid prototyping, CAD and CAE theory

    CERN Document Server

    Um, Dugan

    2016-01-01

    The lessons in this fundamental text equip students with the theory of Computer Assisted Design (CAD), Computer Assisted Engineering (CAE), the essentials of Rapid Prototyping, as well as practical skills needed to apply this understanding in real world design and manufacturing settings. The book includes three main areas: CAD, CAE, and Rapid Prototyping, each enriched with numerous examples and exercises. In the CAD section, Professor Um outlines the basic concept of geometric modeling, Hermite and Bezier spline curve theory, and 3-dimensional surface theories as well as rendering theory. The CAE section explores mesh generation theory, matrix notation for FEM, the stiffness method, and truss equations. And in Rapid Prototyping, the author illustrates stereolithographic theory and introduces popular modern RP technologies. Solid Modeling and Applications: Rapid Prototyping, CAD and CAE Theory is ideal for university students in various engineering disciplines as well as design engineers involved in product...

  8. Economic model predictive control theory, formulations and chemical process applications

    CERN Document Server

    Ellis, Matthew; Christofides, Panagiotis D

    2017-01-01

    This book presents general methods for the design of economic model predictive control (EMPC) systems for broad classes of nonlinear systems that address key theoretical and practical considerations, including recursive feasibility, closed-loop stability, closed-loop performance, and computational efficiency. Specifically, the book proposes: Lyapunov-based EMPC methods for nonlinear systems; two-tier EMPC architectures that are highly computationally efficient; and EMPC schemes that explicitly handle uncertainty, time-varying cost functions, time delays and multiple-time-scale dynamics. The proposed methods employ a variety of tools ranging from nonlinear systems analysis, through Lyapunov-based control techniques, to nonlinear dynamic optimization. The applicability and performance of the proposed methods are demonstrated through a number of chemical process examples. The book presents state-of-the-art methods for the design of economic model predictive control systems for chemical processes. In addition to being...

  9. Applications of the International Space Station Probabilistic Risk Assessment Model

    Science.gov (United States)

    Grant, Warren; Lutomski, Michael G.

    2011-01-01

    Recently the International Space Station (ISS) has incorporated more Probabilistic Risk Assessments (PRAs) in the decision making process for significant issues. Future PRAs will have a major impact on ISS and on future spacecraft development and operations. These PRAs will have their foundation in the current complete ISS PRA model and the current PRA trade studies that are being analyzed as requested by ISS Program stakeholders. ISS PRAs have recently helped in the decision making process for determining reliability requirements for future NASA spacecraft and commercial spacecraft, making crew rescue decisions, as well as defining operational requirements for ISS orbital orientation, planning extravehicular activities (EVAs) and robotic operations. This paper will describe some applications of the ISS PRA model and how they impacted the final decision, and will discuss future analysis topics such as life extension and the requirements of new commercial vehicles visiting ISS.

  10. Joint Dynamics Modeling and Parameter Identification for Space Robot Applications

    Directory of Open Access Journals (Sweden)

    Adenilson R. da Silva

    2007-01-01

    Full Text Available Long-term mission identification and model validation for in-flight manipulator control systems in almost zero gravity with a hostile space environment are extremely important for robotic applications. In this paper, a robot joint mathematical model is developed in which several nonlinearities have been taken into account. In order to identify all the required system parameters, an integrated identification strategy is derived. This strategy makes use of a robust version of the least-squares procedure (LS) for getting the initial conditions and a general nonlinear optimization method (MCS, the multilevel coordinate search algorithm) to estimate the nonlinear parameters. The approach is applied to the intelligent robot joint (IRJ) experiment that was developed at DLR for a utilization opportunity on the International Space Station (ISS). The results using real and simulated measurements have shown that the developed algorithm and strategy have remarkable features in identifying all the parameters with good accuracy.
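The two-stage strategy described, least squares for the linearly entering parameters followed by a global nonlinear search, can be sketched as follows (the joint model, the Stribeck-type friction term and all parameter values are illustrative assumptions; a simple grid refinement stands in for the MCS algorithm, which is not reimplemented here):

```python
import numpy as np

# Sketch (not the paper's code) of two-stage identification: parameters that
# enter the joint torque model linearly (inertia J, viscous friction b,
# Coulomb level c) are fitted by least squares, while a nonlinear parameter
# (a hypothetical Stribeck velocity vs) is found by an outer search.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2000)
qd, qdd = np.sin(t), np.cos(t)                  # joint velocity/acceleration
J_true, b_true, c_true, vs_true = 0.05, 0.2, 0.3, 0.4
tau = (J_true * qdd + b_true * qd
       + c_true * np.sign(qd) * np.exp(-(qd / vs_true) ** 2)
       + 0.002 * rng.standard_normal(t.size))   # simulated noisy torque

def fit_linear(vs):
    """Least-squares fit of [J, b, c] for a fixed nonlinear parameter vs."""
    A = np.column_stack([qdd, qd, np.sign(qd) * np.exp(-(qd / vs) ** 2)])
    theta, _, _, _ = np.linalg.lstsq(A, tau, rcond=None)
    return theta, float(np.sum((A @ theta - tau) ** 2))

# Outer stage: scan vs, refitting the linear part at each candidate.
best_vs, best_cost, best_theta = None, np.inf, None
for vs in np.linspace(0.05, 1.0, 96):
    theta, cost = fit_linear(vs)
    if cost < best_cost:
        best_vs, best_cost, best_theta = vs, cost, theta
```

The inner least-squares step keeps the outer search one-dimensional, which is the practical appeal of separating linear from nonlinear parameters.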

  11. Modelling of a cross flow evaporator for CSP application

    DEFF Research Database (Denmark)

    Sørensen, Kim; Franco, Alessandro; Pelagotti, Leonardo

    2016-01-01

    … For a coil-type steam generator specifically designed for solar applications, this paper analyzes the use of several heat transfer, void fraction and pressure drop correlations for modelling the operation of such a steam generator. After a brief review of the literature on the available correlations for two-phase flow heat transfer, void fraction and pressure drop in connection with the operation of steam generators, the paper focuses on a comparison of the results obtained using several different models resulting from different combinations of correlations. The influence on the analysis of the evaporator performance, the impact on significant design variables, and the effective lifetime of critical components in different operating conditions, simulating the daily start-up procedures of the steam generator, are evaluated. The importance of a good calibration…

  12. Klaim-DB: A Modeling Language for Distributed Database Applications

    DEFF Research Database (Denmark)

    Wu, Xi; Li, Ximeng; Lluch Lafuente, Alberto

    2015-01-01

    We present the modelling language Klaim-DB for distributed database applications. Klaim-DB borrows the distributed nets of the coordination language Klaim but essentially re-incarnates the tuple spaces of Klaim as databases, and provides high-level language abstractions for the access and manipulation of structured data, with integrity and atomicity considerations. We present the formal semantics of Klaim-DB and illustrate the use of the language in a scenario where the sales from different branches of a chain of department stores are aggregated from their local databases. It can be seen that raising the abstraction level and encapsulating integrity checks (concerning the schema of tables, etc.) in the language primitives for database operations benefit the modelling task considerably.

  13. Application of data envelopment analysis models in supply chain management

    DEFF Research Database (Denmark)

    Soheilirad, Somayeh; Govindan, Kannan; Mardani, Abbas

    2017-01-01

    Supply chain management aims at designing, managing and coordinating material/product, information and financial flows to fulfill customer requirements at low cost, thereby increasing supply chain profitability. In recent decades, data envelopment analysis has become a main topic of interest as a mathematical tool to evaluate supply chain management. While various data envelopment analysis models have been suggested to measure and evaluate supply chain management, there is a lack of research regarding systematic literature review and classification of studies in this field. … have been attained to reach a comprehensive review of data envelopment analysis models in the evaluation of supply chain management. Consequently, the selected published articles have been categorized by author name, year of publication, technique, application area, country, scope, data envelopment…

  14. Soft Computing Models in Industrial and Environmental Applications

    CERN Document Server

    Abraham, Ajith; Corchado, Emilio; 7th International Conference, SOCO’12

    2013-01-01

    This volume of Advances in Intelligent and Soft Computing contains accepted papers presented at SOCO 2012, held in the beautiful and historic city of Ostrava (Czech Republic), in September 2012. Soft computing represents a collection or set of computational techniques in machine learning, computer science and some engineering disciplines, which investigate, simulate, and analyze very complex issues and phenomena. After a thorough peer-review process, the SOCO 2012 International Program Committee selected 75 papers for publication in these conference proceedings, representing an acceptance rate of 38%. In this edition, special emphasis was put on the organization of special sessions. Three special sessions were organized on relevant topics: Soft computing models for Control Theory & Applications in Electrical Engineering, Soft computing models for biomedical signals and data processing, and Advanced Soft Computing Methods in Computer Vision and Data Processing. The selecti...

  15. Chemical kinetic modeling of H2 applications

    Energy Technology Data Exchange (ETDEWEB)

    Marinov, N.M.; Westbrook, C.K.; Cloutman, L.D. [Lawrence Livermore National Lab., CA (United States)] [and others]

    1995-09-01

    Work being carried out at LLNL has concentrated on studies of the role of chemical kinetics in a variety of problems related to hydrogen combustion in practical combustion systems, with an emphasis on vehicle propulsion. Use of hydrogen offers significant advantages over fossil fuels, and computer modeling provides advantages when used in concert with experimental studies. Many numerical "experiments" can be carried out quickly and efficiently, reducing the cost and time of system development, and many new and speculative concepts can be screened to identify those with sufficient promise to pursue experimentally. This project uses chemical kinetic and fluid dynamic computational modeling to examine the combustion characteristics of systems burning hydrogen, either as the only fuel or mixed with natural gas. Oxidation kinetics are combined with pollutant formation kinetics, including formation of oxides of nitrogen but also including air toxics in natural gas combustion. We have refined many of the elementary kinetic reaction steps in the detailed reaction mechanism for hydrogen oxidation. To extend the model to pressures characteristic of internal combustion engines, it was necessary to apply theoretical pressure falloff formalisms for several key steps in the reaction mechanism. We have continued development of simplified reaction mechanisms for hydrogen oxidation, we have implemented those mechanisms into multidimensional computational fluid dynamics models, and we have used models of chemistry and fluid dynamics to address selected application problems. At the present time, we are using computed high pressure flame and auto-ignition data to further refine the simplified kinetics models that are then to be used in multidimensional fluid mechanics models. Detailed kinetics studies have investigated hydrogen flames and ignition of hydrogen behind shock waves, intended to refine the detailed reaction mechanisms.
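The "pressure falloff formalisms" mentioned above can be illustrated with the simplest one, the Lindemann form, which blends the low- and high-pressure limiting rate constants (the Arrhenius parameters below are illustrative, not values from the LLNL mechanism):

```python
import math

# Illustrative (not from the LLNL mechanism) Lindemann falloff: a
# pressure-dependent rate constant interpolates between its low-pressure
# limit k0*[M] and its high-pressure limit kinf via the reduced pressure
# Pr = k0*[M]/kinf.
def arrhenius(A, n, Ea, T, R=8.314):
    """Modified Arrhenius rate k = A * T**n * exp(-Ea/(R*T))."""
    return A * T ** n * math.exp(-Ea / (R * T))

def lindemann(k0, kinf, M):
    """Effective rate constant in the falloff region ([M] = bath gas conc.)."""
    Pr = k0 * M / kinf
    return kinf * Pr / (1.0 + Pr)

# Illustrative parameters for a falloff reaction at 1000 K.
T = 1000.0
k0 = arrhenius(1.0e10, 0.0, 40e3, T)    # low-pressure limit (per unit [M])
kinf = arrhenius(1.0e8, 0.0, 60e3, T)   # high-pressure limit
k_low = lindemann(k0, kinf, M=1e-9)     # dilute limit: k -> k0*[M]
k_high = lindemann(k0, kinf, M=1e9)     # dense limit:  k -> kinf
```

Real mechanisms typically add a broadening factor (e.g. the Troe form) to this interpolation, but the limiting behaviour is the same.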

  16. Land Surface Modeling Applications for Famine Early Warning

    Science.gov (United States)

    McNally, A.; Verdin, J. P.; Peters-Lidard, C. D.; Arsenault, K. R.; Wang, S.; Kumar, S.; Shukla, S.; Funk, C. C.; Pervez, M. S.; Fall, G. M.; Karsten, L. R.

    2015-12-01

    AGU 2015 Fall Meeting Session ID#: 7598 Remote Sensing Applications for Water Resources Management Land Surface Modeling Applications for Famine Early Warning James Verdin, USGS EROS Christa Peters-Lidard, NASA GSFC Amy McNally, NASA GSFC, UMD/ESSIC Kristi Arsenault, NASA GSFC, SAIC Shugong Wang, NASA GSFC, SAIC Sujay Kumar, NASA GSFC, SAIC Shrad Shukla, UCSB Chris Funk, USGS EROS Greg Fall, NOAA Logan Karsten, NOAA, UCAR Famine early warning has traditionally required close monitoring of agro-climatological conditions, putting them in historical context, and projecting them forward to anticipate end-of-season outcomes. In recent years, it has become necessary to factor in the effects of a changing climate as well. There has also been a growing appreciation of the linkage between food security and water availability. In 2009, Famine Early Warning Systems Network (FEWS NET) science partners began developing land surface modeling (LSM) applications to address these needs. With support from the NASA Applied Sciences Program, an instance of the Land Information System (LIS) was developed to specifically support FEWS NET. A simple crop water balance model (GeoWRSI) traditionally used by FEWS NET took its place alongside the Noah land surface model and the latest version of the Variable Infiltration Capacity (VIC) model, and LIS data readers were developed for FEWS NET precipitation forcings (NOAA's RFE and USGS/UCSB's CHIRPS). The resulting system was successfully used to monitor and project soil moisture conditions in the Horn of Africa, foretelling poor crop outcomes in the OND 2013 and MAM 2014 seasons. In parallel, NOAA created another instance of LIS to monitor snow water resources in Afghanistan, which are an early indicator of water availability for irrigation and crop production. These successes have been followed by investment in LSM implementations to track and project water availability in Sub-Saharan Africa and Yemen, work that is now underway. 

  17. Acoustic Propagation Modeling for Marine Hydro-Kinetic Applications

    Science.gov (United States)

    Johnson, C. N.; Johnson, E.

    2014-12-01

    The combination of riverine, tidal, and wave energy has the potential to supply over one third of the United States' annual electricity demand. However, in order to deploy and test prototypes and commercial installations, marine hydrokinetic (MHK) devices must meet strict regulatory guidelines that determine the maximum amount of noise that can be generated and set particular thresholds for determining disturbance and injury caused by noise. An accurate model for predicting the propagation of a MHK source in a real-life hydro-acoustic environment has been established. This model will help promote the growth and viability of marine, water, and hydrokinetic energy by confidently assuring federal regulations are met and harmful impacts to marine fish and wildlife are minimal. Paracousti, a finite difference solution to the acoustic equations, was originally developed for sound propagation in atmospheric environments and has been successfully validated for a number of different geophysical activities. The three-dimensional numerical implementation is advantageous over other acoustic propagation techniques for a MHK application where the domains of interest have complex 3D interactions from the seabed, banks, and other shallow water effects. A number of different cases for hydro-acoustic environments have been validated by both analytical and numerical results from canonical and benchmark problems. This includes a variety of hydrodynamic and physical environments that may be present in a potential MHK application, including shallow and deep water, sloping and canyon-type bottoms, and varying sound speed and density profiles. With the model successfully validated for hydro-acoustic environments, more complex and realistic MHK sources from turbines and/or arrays can be modeled.
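The finite-difference approach behind a solver of this kind can be illustrated with a one-dimensional sketch (Paracousti itself is three-dimensional and far more capable; the grid, medium and source parameters below are illustrative assumptions):

```python
import numpy as np

# Minimal 1-D sketch of a finite-difference acoustic solver: staggered-grid
# leapfrog update of pressure p and particle velocity v for the linear
# acoustic equations in a water-like medium. All values are illustrative.
nx, nt = 400, 600
dx, c, rho = 1.0, 1500.0, 1000.0        # metres, m/s, kg/m^3
dt = 0.4 * dx / c                       # respects the CFL stability limit
p = np.zeros(nx)                        # pressure at cell centres
v = np.zeros(nx + 1)                    # velocity at cell faces (rigid ends)

for it in range(nt):
    # velocity update from the pressure gradient (interior faces only)
    v[1:-1] -= (dt / (rho * dx)) * (p[1:] - p[:-1])
    # pressure update from the velocity divergence
    p -= (rho * c ** 2) * (dt / dx) * (v[1:] - v[:-1])
    # soft source: a zero-mean Ricker-like pulse injected near the left end
    t0 = 80 * dt
    arg = ((it * dt - t0) * c / (10 * dx)) ** 2
    p[20] += (1.0 - 2.0 * arg) * np.exp(-arg)

peak_cell = int(np.argmax(np.abs(p)))   # where the pulse has travelled to
```

The same leapfrog structure extends to 3D with heterogeneous sound speed and density, which is where realistic seabed and bank interactions enter.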

  18. Open Data in Mobile Applications, New Models for Service Information

    Directory of Open Access Journals (Sweden)

    Manuel GÉRTRUDIX BARRIO

    2016-06-01

    Full Text Available The combination of open data generated by government and the proliferation of mobile devices enables the creation of new information services and improved delivery of existing ones. Significantly, it gives citizens simple, quick and effective access to information. Free applications that use open data provide useful information in real time, tailored to the user experience and/or geographic location. This changes the concept of “service information”. Both the infomediary sector and citizens now have new models of production and dissemination of this type of information. From the theoretical contextualization of aspects such as the datification of reality, the mobile registration of everyday experience, and the reinterpretation of service information, we analyze the role of open data in the public sector in Spain and its concrete application in building apps based on these data sets. The findings indicate that this is a phenomenon that will continue to grow because these applications provide useful and efficient information for decision-making in everyday life.

  19. Application of Model Predictive Control to BESS for Microgrid Control

    Directory of Open Access Journals (Sweden)

    Thai-Thanh Nguyen

    2015-08-01

    Full Text Available Battery energy storage systems (BESSs) have been widely used for microgrid control. Generally, BESS control systems are based on proportional-integral (PI) control techniques with the outer and inner control loops based on PI regulators. Recently, model predictive control (MPC) has attracted attention for application to future energy processing and control systems because it can easily deal with multivariable cases, system constraints, and nonlinearities. This study considers the application of MPC-based BESSs to microgrid control. Two types of MPC are presented in this study: MPC based on predictive power control (PPC), and MPC based on PI control in the outer and predictive current control (PCC) in the inner control loops. In particular, the effective application of MPC for microgrids with multiple BESSs should be considered because of the differences in their control performance. In this study, microgrids with two BESSs based on two MPC techniques are considered as an example. The control performance of the MPC used for microgrid control is compared to that of PI control. The proposed control strategy is investigated through simulations using MATLAB/Simulink software. The simulation results show that the response time, power and voltage ripples, and frequency spectrum could be improved significantly by using MPC.
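The receding-horizon idea that distinguishes MPC from a fixed-gain PI loop can be sketched on a toy first-order converter model (the model, horizon, cost weights and gains below are illustrative assumptions, not taken from the paper):

```python
# Toy sketch of receding-horizon control next to a PI loop, on an assumed
# first-order power model x+ = a*x + b*u. All numbers are illustrative.
a, b = 0.8, 0.5
ref = 1.0                                         # power set-point (per-unit)
candidates = [u / 10.0 for u in range(-20, 21)]   # discretized inputs

def mpc_step(x, horizon=3):
    """Pick the first input of the best constant-input candidate sequence."""
    best_u, best_cost = 0.0, float("inf")
    for u in candidates:
        xp, cost = x, 0.0
        for _ in range(horizon):          # predict over the horizon
            xp = a * xp + b * u
            cost += (ref - xp) ** 2 + 0.01 * u ** 2
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

def pi(x, state, kp=0.4, ki=0.2):
    """Discrete PI regulator with an accumulated integral term."""
    e = ref - x
    state["i"] += e
    return kp * e + ki * state["i"]

def run(controller, steps=40):
    x, out, state = 0.0, [], {"i": 0.0}
    for _ in range(steps):
        u = controller(x, state)
        x = a * x + b * u                 # plant update
        out.append(x)
    return out

mpc_traj = run(lambda x, s: mpc_step(x))
pi_traj = run(pi)
```

With integral action the PI loop also reaches the set-point here; the practical differences the paper reports (response time, ripple) come from MPC re-optimizing over predicted trajectories, with constraints, at every step.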

  20. Applications of Skew Models Using Generalized Logistic Distribution

    Directory of Open Access Journals (Sweden)

    Pushpa Narayan Rathie

    2016-04-01

    Full Text Available We use the skew distribution generation procedure proposed by Azzalini [Scand. J. Stat., 1985, 12, 171–178] to create three new probability distribution functions. These models make use of the normal, Student-t and generalized logistic distributions; see Rathie and Swamee [Technical Research Report No. 07/2006. Department of Statistics, University of Brasilia: Brasilia, Brazil, 2006]. Expressions for the moments about the origin are derived. Graphical illustrations are also provided. The distributions derived in this paper can be seen as generalizations of the distributions given by Nadarajah and Kotz [Acta Appl. Math., 2006, 91, 1–37]. Applications with unimodal and bimodal data are given to illustrate the applicability of the results derived in this paper. The applications include the analysis of the following data sets: (a) spending on public education in various countries in 2003; (b) total expenditure on health in 2009 in various countries; and (c) waiting time between eruptions of the Old Faithful Geyser in Yellowstone National Park, Wyoming, USA. We compare the fit of the distributions introduced in this paper with the distributions given by Nadarajah and Kotz [Acta Appl. Math., 2006, 91, 1–37]. The results show that our distributions, in general, fit the data sets better. The general R codes for fitting the distributions introduced in this paper are given in Appendix A.
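Azzalini's generation procedure referenced above takes a symmetric density phi with CDF Phi and forms f(x) = 2·phi(x)·Phi(alpha·x), a valid density for any real alpha (alpha = 0 recovers the symmetric base). A sketch for the normal base case (the paper's generalized-logistic variants are not reproduced here):

```python
import math

# Azzalini's skewing construction, shown for the normal base density.
def phi(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def skew_normal_pdf(x, alpha):
    """f(x) = 2 * phi(x) * Phi(alpha * x); alpha controls the skew."""
    return 2.0 * phi(x) * Phi(alpha * x)

def integral(alpha, n=4000):
    """Trapezoid check on [-10, 10] that the density integrates to ~1."""
    h = 20.0 / n
    xs = [-10.0 + i * h for i in range(n + 1)]
    ys = [skew_normal_pdf(x, alpha) for x in xs]
    return h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))
```

The construction works because phi(x)·Phi(alpha·x) + phi(-x)·Phi(-alpha·x) = phi(x) for symmetric phi, so the factor 2 restores unit mass for every alpha.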

  1. Multi-level decision making models, methods and applications

    CERN Document Server

    Zhang, Guangquan; Gao, Ya

    2015-01-01

    This monograph presents new developments in multi-level decision-making theory, technique and method, in both modeling and solution issues. It especially presents how a decision support system can support managers in reaching a solution to a multi-level decision problem in practice. This monograph combines decision theories, methods, algorithms and applications effectively. It discusses in detail the models and solution algorithms of each issue of bi-level and tri-level decision-making, such as multi-leaders, multi-followers, multi-objectives, rule-set-based, and fuzzy parameters. Potential readers include organizational managers and practicing professionals, who can use the methods and software provided to solve their real decision problems; PhD students and researchers in the areas of bi-level and multi-level decision-making and decision support systems; and students at advanced undergraduate or master’s level in information systems, business administration, or the application of computer science.

  2. 78 FR 11858 - Applications for New Awards; Arts in Education Model Development and Dissemination Program

    Science.gov (United States)

    2013-02-20

    ... Applications for New Awards; Arts in Education Model Development and Dissemination Program AGENCY: Office of... Applications. Applications for Grants under the Arts in Education Model Development and Dissemination program... for the Arts in Education Model Development and Dissemination program at www.Grants.gov . You must...

  3. Applications of computational modeling in metabolic engineering of yeast.

    Science.gov (United States)

    Kerkhoven, Eduard J; Lahtvee, Petri-Jaan; Nielsen, Jens

    2015-02-01

    Generally, a microorganism's phenotype can be described by its pattern of metabolic fluxes. Although fluxes cannot be measured directly, inference of fluxes is well established. In biotechnology the aim is often to increase the capacity of specific fluxes. For this, metabolic engineering methods have been developed and applied extensively. Many of these rely on balancing of intracellular metabolites, redox, and energy fluxes, using genome-scale models (GEMs) that, in combination with appropriate objective functions and constraints, can be used to predict potential gene targets for obtaining a preferred flux distribution. These methods point to strategies for altering gene expression; however, fluxes are often controlled by post-transcriptional events. Moreover, GEMs usually do not take into account metabolic regulation, thermodynamics and enzyme kinetics. To facilitate metabolic engineering, tools from synthetic biology have emerged, enabling integration and assembly of naturally nonexistent, but well-characterized, components into a living organism. To describe these systems, kinetic models are often used, and to integrate them with the standard metabolic engineering approach it is necessary to expand the modeling of metabolism to consider the kinetics of individual processes. This review gives an overview of the models available for metabolic engineering of yeast and discusses their applications. © FEMS 2015. All rights reserved. For permissions, please e-mail: journals.permission@oup.com.

  4. Analysis and application of opinion model with multiple topic interactions.

    Science.gov (United States)

    Xiong, Fei; Liu, Yun; Wang, Liang; Wang, Ximeng

    2017-08-01

    To reveal heterogeneous behaviors of opinion evolution in different scenarios, we propose an opinion model with topic interactions. Individual opinions and topic features are represented by a multidimensional vector. We measure an agent's action towards a specific topic by the product of opinion and topic feature. When pairs of agents interact for a topic, their actions are introduced to opinion updates with bounded confidence. Simulation results show that a transition from a disordered state to a consensus state occurs at a critical point of the tolerance threshold, which depends on the opinion dimension. The critical point increases as the dimension of opinions increases. Multiple topics promote opinion interactions and lead to the formation of macroscopic opinion clusters. In addition, more topics accelerate the evolutionary process and weaken the effect of network topology. We use two sets of large-scale real data to evaluate the model, and the results prove its effectiveness in characterizing a real evolutionary process. Our model achieves high performance in individual action prediction and even outperforms state-of-the-art methods. Meanwhile, our model has much smaller computational complexity. This paper provides a demonstration for possible practical applications of theoretical opinion dynamics.
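
    The bounded-confidence update described above (agents interact only when their multidimensional opinions are close enough, then move toward each other) can be sketched as a Deffuant-style simulation. The parameters and the symmetric pairwise update rule below are a generic illustration, not the authors' exact model with topic features:

```python
import numpy as np

rng = np.random.default_rng(0)

def bounded_confidence(n=100, dim=2, eps=0.5, mu=0.3, steps=20000):
    """Deffuant-style pairwise updates on d-dimensional opinions in [0, 1]^d.

    A random pair interacts only if their opinion distance is below the
    tolerance threshold eps; both agents then move toward each other at
    rate mu, so opinions stay inside the convex hull of the initial ones.
    """
    x = rng.random((n, dim))
    for _ in range(steps):
        i, j = rng.choice(n, size=2, replace=False)
        if np.linalg.norm(x[i] - x[j]) < eps:
            d = x[j] - x[i]
            x[i] = x[i] + mu * d
            x[j] = x[j] - mu * d
    return x

final = bounded_confidence()
spread = final.std(axis=0).max()  # small spread indicates near-consensus
```

    Sweeping `eps` in such a simulation reproduces the qualitative transition the abstract describes: below a critical tolerance the population fragments into clusters, above it a consensus forms.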

  5. Global Modeling of CO2 Discharges with Aerospace Applications

    Directory of Open Access Journals (Sweden)

    Chloe Berenguer

    2014-01-01

    Full Text Available We developed a global model aiming to study discharges in CO2 under various conditions, covering a large spectrum of pressure, absorbed energy, and feed values. Various physical conditions and form factors have been investigated. The model was applied to a radiofrequency discharge and to helicon-type devices operating in low and high feed conditions. In general, the main charged species were found to be CO2+ for sufficiently low pressures and O− for higher pressures, followed by CO2+, CO+, and O2+ in the latter case. The dominant reaction is dissociation of CO2, resulting in CO production. Electronegativity, important for radiofrequency discharges, increases with pressure, reaching values up to 3 at high flow rates for an absorbed power of 250 W, and diminishes with increasing absorbed power. Model results pertaining to radiofrequency plasma discharges are in satisfactory agreement with those available from an existing experiment. Application to low and high flow rate feed cases of a helicon thruster allowed evaluation of thruster operating conditions for absorbed powers from 50 W to 1.8 kW. The model allows a detailed evaluation of the potential of CO2 to be used as a propellant in electric propulsion devices.

  6. Educational and Scientific Applications of Climate Model Diagnostic Analyzer

    Science.gov (United States)

    Lee, S.; Pan, L.; Zhai, C.; Tang, B.; Kubar, T. L.; Zhang, J.; Bao, Q.

    2016-12-01

    Climate Model Diagnostic Analyzer (CMDA) is a web-based information system designed for the climate modeling and model analysis community to analyze climate data from models and observations. CMDA provides tools to diagnostically analyze climate data for model validation and improvement, and to systematically manage analysis provenance for sharing results with other investigators. CMDA utilizes cloud computing resources, multi-threaded computing, machine-learning algorithms, web service technologies, and provenance-supporting technologies to address technical challenges that the Earth science modeling and model analysis community faces in evaluating and diagnosing climate models. As the CMDA infrastructure and technology have matured, we have developed educational and scientific applications of CMDA. Educationally, CMDA has supported the summer school of the JPL Center for Climate Sciences since 2014. In the summer school, students work on group research projects for which CMDA provides datasets and analysis tools. Each student is assigned a virtual machine with CMDA installed in Amazon Web Services. A provenance management system for CMDA has been developed to keep track of students' usage of CMDA and to recommend datasets and analysis tools for their research topics. The provenance system also allows students to revisit their analysis results and share them with their group. Scientifically, we have developed several science use cases of CMDA covering various topics, datasets, and analysis types. Each use case is described in terms of a scientific goal, the datasets and analysis tools used, the scientific results discovered, an analysis result such as output plots and data files, and a link to the exact analysis service call with all input arguments filled in. For example, one science use case is the evaluation of the NCAR CAM5 model with MODIS total cloud fraction. The analysis service used is Difference Plot Service of

  7. Brookhaven Regional Energy Facility Siting Model (REFS): model development and application

    Energy Technology Data Exchange (ETDEWEB)

    Meier, P.; Hobbs, B.; Ketcham, G.; McCoy, M.; Stern, R.

    1979-06-01

    A siting methodology developed specifically to bridge the gap between regional energy system scenarios and environmental transport models is documented. Development of the model is described in Chapter 1. Chapter 2 describes the basic structure of the model. Additional chapters on model development cover: generation, transmission, demand disaggregation, the interface to other models, computational aspects, the coal sector, water resource considerations, and air quality considerations. These subjects comprise Part I. Part II, Model Applications, covers: analysis of water resource constraints; water resource issues in the New York Power Pool, the New England Power Pool, and the Pennsylvania-Jersey-Maryland Power Pool; and a summary of water resource constraint analysis. (MCW)

  8. Predicting aquifer response time for application in catchment modeling.

    Science.gov (United States)

    Walker, Glen R; Gilfedder, Mat; Dawes, Warrick R; Rassam, David W

    2015-01-01

    It is well established that changes in catchment land use can lead to significant impacts on water resources. Where land-use changes increase evapotranspiration, there is a resultant decrease in groundwater recharge, which in turn decreases groundwater discharge to streams. The response time of groundwater discharge to a change in recharge is a key aspect of predicting the impacts of land-use change on catchment water yield. Predicting these impacts across the large catchments relevant to water resource planning can require the estimation of groundwater response times for hundreds of aquifers. At this scale, detailed site-specific measured data are often absent, and available spatial data are limited. While numerical models can be applied, there is little advantage if there are no detailed data to parameterize them. Simple analytical methods are useful in this situation, as they allow the variability in groundwater response to be incorporated into catchment hydrological models with minimal modeling overhead. This paper describes an analytical model developed to capture some of the features of real, sloping aquifer systems. The derived groundwater response timescale can be used to parameterize a groundwater discharge function, allowing groundwater response to be predicted in relation to different broad catchment characteristics at a level of complexity that matches the available data. The results from the analytical model are compared to published field data and numerical model results, and provide an approach with broad application to inform water resource planning in other large, data-scarce catchments. © 2014, Commonwealth of Australia. Groundwater © 2014, National Ground Water Association.
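
    A widely used analytical estimate of the kind this record builds on is the first-mode response time of a linearised 1D aquifer with a no-flow divide and a fixed-head stream boundary, tau = 4 S L^2 / (pi^2 T). The paper's own formula accounts for slope and differs in detail, so the following is a generic sketch, not the authors' model:

```python
import math

def aquifer_response_time(S, T, L):
    """First-mode response time of a linearised 1D aquifer (no-flow divide,
    fixed-head stream): tau = 4 S L^2 / (pi^2 T).
    S: storativity (-), T: transmissivity (m^2/d), L: flow-path length (m)."""
    return 4.0 * S * L**2 / (math.pi**2 * T)

def discharge_after_recharge_change(q0, q_new, t, tau):
    """First-mode exponential approximation of groundwater discharge
    relaxing from q0 toward its new steady state q_new after a step
    change in recharge."""
    return q_new + (q0 - q_new) * math.exp(-t / tau)

# Illustrative parameter values, not from the paper:
tau = aquifer_response_time(S=0.05, T=100.0, L=2000.0)  # ~810 days
```

    Plugging such a timescale into an exponential discharge function is exactly the kind of low-overhead parameterization the abstract advocates for catchment-scale models.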

  9. Micromagnetic Modeling and Analysis for Memory and Processing Applications

    Science.gov (United States)

    Lubarda, Marko V.

    Magnetic nanostructures are vital components of numerous existing and prospective magnetic devices, including hard disk drives, magnetic sensors, and microwave generators. The ability to examine and predict the behavior of magnetic nanostructures is essential for improving existing devices and exploring new technologies and areas of application. This thesis consists of three parts. In part I, key concepts of magnetism are covered (chapter 1), followed by an introduction to micromagnetics (chapter 2). Key interactions are discussed. The Landau-Lifshitz-Gilbert equation is introduced, and the variational approach of W. F. Brown is presented. Part II is devoted to computational micromagnetics. Interaction energies, fields and torques, introduced in part I, are transcribed from the continuum to their finite element form. The validity of developed models is discussed with reference to physical assumptions and discretization criteria. Chapter 3 introduces finite element modeling, and provides derivations of micromagnetic fields in the linear basis representation. Spin transfer torques are modeled in chapter 4. Thermal effects are included in the computational framework in chapter 5. Chapter 6 discusses an implementation of the nudged elastic band method for the computation of energy barriers. A model accounting for polycrystallinity is developed in chapter 7. The model takes into account the wide variety of distributions and imperfections which characterize true systems. The modeling presented in chapters 3-7 forms a general framework for the computational study of diverse magnetic phenomena in contemporary structures and devices. Chapter 8 concludes part II with an outline of powerful acceleration schemes, which were essential for the large-scale micromagnetic simulations presented in part III. Part III begins with the analysis of the perpendicular magnetic recording system (chapter 9). A simulation study of the recording process with readback analysis is presented

  10. [Application of three compartment model and response surface model to clinical anesthesia using Microsoft Excel].

    Science.gov (United States)

    Abe, Eiji; Abe, Mari

    2011-08-01

    With the spread of total intravenous anesthesia, clinical pharmacology has become more important. We report a Microsoft Excel file applying a three-compartment model and a response surface model to clinical anesthesia. On the Microsoft Excel sheet, propofol, remifentanil, and fentanyl effect-site concentrations are predicted (three-compartment model), and the probabilities of no response to prodding, shaking, surrogates of painful stimuli, and laryngoscopy are calculated using the predicted effect-site drug concentrations. Time-dependent changes in these calculated values are shown graphically. Recent developments in anesthetic drug interaction studies are remarkable, and their application to clinical anesthesia with this Excel file is simple and helpful.
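
    The spreadsheet computation described here reduces to stepping a three-compartment pharmacokinetic model plus an effect-site compartment through time. A forward-Euler sketch follows; the rate constants and volume are illustrative placeholders, not published propofol or remifentanil parameters:

```python
import numpy as np

def simulate_effect_site(infusion, k10, k12, k21, k13, k31, ke0, v1,
                         dt=1.0, t_end=600.0):
    """Forward-Euler integration of a three-compartment PK model with an
    added effect-site compartment. infusion(t) gives the drug input rate;
    returns the effect-site concentration at each time step."""
    n = int(t_end / dt)
    a1 = a2 = a3 = ce = 0.0            # compartment amounts, effect-site conc.
    ce_out = np.zeros(n)
    for i in range(n):
        c1 = a1 / v1                   # central (plasma) concentration
        da1 = infusion(i * dt) - (k10 + k12 + k13) * a1 + k21 * a2 + k31 * a3
        da2 = k12 * a1 - k21 * a2      # fast peripheral compartment
        da3 = k13 * a1 - k31 * a3      # slow peripheral compartment
        dce = ke0 * (c1 - ce)          # effect site lags the plasma
        a1 += da1 * dt; a2 += da2 * dt; a3 += da3 * dt; ce += dce * dt
        ce_out[i] = ce
    return ce_out

# Constant infusion with hypothetical rate constants (per second):
ce = simulate_effect_site(lambda t: 1.0, k10=0.01, k12=0.005, k21=0.002,
                          k13=0.003, k31=0.001, ke0=0.02, v1=10.0)
```

    The same recurrence can be written cell-by-cell in Excel, which is presumably how the reported file implements it; the response surface probabilities are then functions of the predicted effect-site concentrations.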

  11. GSTARS computer models and their applications, part I: theoretical development

    Science.gov (United States)

    Yang, C.T.; Simoes, F.J.M.

    2008-01-01

    GSTARS is a series of computer models developed by the U.S. Bureau of Reclamation for alluvial river and reservoir sedimentation studies while the authors were employed by that agency. The first version of GSTARS was released in 1986 using Fortran IV for mainframe computers. GSTARS 2.0 was released in 1998 for personal computer application with most of the code in the original GSTARS revised, improved, and expanded using Fortran IV/77. GSTARS 2.1 is an improved and revised GSTARS 2.0 with graphical user interface. The unique features of all GSTARS models are the conjunctive use of the stream tube concept and of the minimum stream power theory. The application of minimum stream power theory allows the determination of optimum channel geometry with variable channel width and cross-sectional shape. The use of the stream tube concept enables the simulation of river hydraulics using one-dimensional numerical solutions to obtain a semi-two-dimensional presentation of the hydraulic conditions along and across an alluvial channel. According to the stream tube concept, no water or sediment particles can cross the walls of stream tubes, which is valid for many natural rivers. At and near sharp bends, however, sediment particles may cross the boundaries of stream tubes. GSTARS3, based on FORTRAN 90/95, addresses this phenomenon and further expands the capabilities of GSTARS 2.1 for cohesive and non-cohesive sediment transport in rivers and reservoirs. This paper presents the concepts, methods, and techniques used to develop the GSTARS series of computer models, especially GSTARS3. © 2008 International Research and Training Centre on Erosion and Sedimentation and the World Association for Sedimentation and Erosion Research.

  12. Exploring the Application of Capital Facility Investment Justification Model

    Directory of Open Access Journals (Sweden)

    Marijan Karić

    2013-07-01

    Full Text Available For decades now, models for identifying and quantifying the level of risk of investment projects and for evaluating investment justification have been the subject of investigation by members of the professional and research communities. It is important to quantify the level of risk because, by evaluating investment justification in terms of the risks involved, the decision-maker (investor) is able to choose from the available alternatives the one that will achieve the most favourable ratio of expected profit to assumed risk. In this way, the economic entity can raise its productivity, profitability, and the quality of its business operation in general. The aim of this paper was to investigate the extent to which medium and large companies use modern methods of investment justification evaluation in their decision-making process and to determine the quality of the application of the selected methods in practice. The study was conducted on a sample of medium and large enterprises in eastern Croatia during 2011 and 2012, and it was established that, despite the fact that a large number of modern investment project profitability and risk assessment models have been developed, the level of their application in practice is not high enough. The analyzed investment proposals included only basic methods of capital budgeting, without risk assessment. Hence, it was concluded that individual investors were presented with low-quality and incomplete investment justification evaluation results, on the basis of which decisions of key importance for the development of the economic entity as a whole were made. This paper aims to underline the need for financial managers to get informed and educate themselves about contemporary investment project profitability and risk assessment models, as well as the need to create educational programmes and computer solutions that will encourage key people in companies to acquire new knowledge and apply modern

  13. Soil erosion by water - model concepts and application

    Science.gov (United States)

    Schmidt, Juergen

    2010-05-01

    approaches will be discussed taking account of the models WEPP, EUROSEM, LISEM and EROSION 3D. In order to provide a better representation of spatially heterogeneous catchments in terms of land use, soil, slope, and rainfall, most recently developed models operate on a grid-cell basis or on other kinds of sub-units, each having uniform characteristics. These so-called "distributed models" accept inputs from raster-based geographic information systems (GIS). The cell-based structure of the models also makes it possible to generate drainage paths by which water and sediment can be routed from the top to the bottom of the respective watershed. One of the open problems in soil erosion modelling concerns the spontaneous generation of erosion rills without the need for pre-existing morphological contours. A promising approach to this problem was first realized in the RILLGROW model, which uses a cellular automaton system to generate realistic rill patterns. Selected applications of the above-mentioned models will be presented and discussed with regard to their usability for soil and water conservation purposes.

  14. Mass and charge conservation check in dynamic models: application to the new ADM1 model.

    Science.gov (United States)

    de Gracia, M; Sancho, L; García-Heras, J L; Vanrolleghem, P; Ayesa, E

    2006-01-01

    This paper proposes a systematic methodology for the analysis of the mass and charge balances in dynamic models expressed using the Petersen matrix notation. The methodology is based on the definition of the model components via elemental mass fractions and on the estimation of the COD as a function of the redox equations associated with these elements. This approach simplifies the automatic calculation of all the stoichiometric coefficients in different measurement units and the study of COD, charge, or mass fluxes. As an example, the methodology was applied to the ADM1 in order to illustrate its usefulness for the analysis of organic matter characterisation, nitrogen release, and biogas composition in anaerobic digestion. The application of the methodology for a rigorous integration of different IWA models is proposed for further study.
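
    The continuity check described here can be expressed compactly: given a Petersen matrix of stoichiometric coefficients and a table of elemental mass fractions per component, every process conserves each element exactly when the weighted coefficients cancel row-wise. A toy sketch (the component compositions below are invented for illustration, not ADM1 values):

```python
import numpy as np

def check_element_balance(petersen, elem_frac, tol=1e-9):
    """petersen: processes x components stoichiometric matrix (mass units).
    elem_frac: components x elements mass-fraction matrix.
    Each process conserves every element iff petersen @ elem_frac == 0."""
    residual = petersen @ elem_frac      # processes x elements
    return bool(np.all(np.abs(residual) < tol)), residual

# Toy process: 1 g of substrate (50% C, 50% O by mass) converts to
# 0.5 g of a pure-carbon product plus 0.5 g of a pure-oxygen product.
elem_frac = np.array([[0.5, 0.5],    # substrate:  C, O mass fractions
                      [1.0, 0.0],    # product A:  pure C
                      [0.0, 1.0]])   # product B:  pure O
petersen = np.array([[-1.0, 0.5, 0.5]])

balanced, _ = check_element_balance(petersen, elem_frac)
broken, _ = check_element_balance(np.array([[-1.0, 1.0, 0.0]]), elem_frac)
```

    A nonzero residual pinpoints both the offending process (row) and the element that fails to balance (column), which is what makes the check useful for auditing large Petersen-matrix models such as ADM1.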

  15. Application of soft computing based hybrid models in hydrological variables modeling: a comprehensive review

    Science.gov (United States)

    Fahimi, Farzad; Yaseen, Zaher Mundher; El-shafie, Ahmed

    2017-05-01

    Since the middle of the twentieth century, artificial intelligence (AI) models have been used widely in engineering and science problems. Water resource variable modeling and prediction are among the most challenging issues in water engineering. The artificial neural network (ANN) is a common approach used to tackle this problem with viable and efficient models. Numerous ANN models have been successfully developed to achieve more accurate results. In the current review, different ANN models in water resource applications and hydrological variable predictions are reviewed and outlined. In addition, recent hybrid models and their structures, input preprocessing, and optimization techniques are discussed, and the results are compared with similar previous studies. Moreover, to achieve a comprehensive view of the literature, many articles that applied ANN models together with other techniques are included. Consequently, the coupling procedure, model evaluation, and performance comparison of hybrid models with conventional ANN models are assessed, as well as the taxonomy and structures of hybrid ANN models. Finally, current challenges and recommendations for future research are indicated and new hybrid approaches are proposed.

  16. Toward modeling locomotion using electromyography-informed 3D models: application to cerebral palsy.

    Science.gov (United States)

    Sartori, M; Fernandez, J W; Modenese, L; Carty, C P; Barber, L A; Oberhofer, K; Zhang, J; Handsfield, G G; Stott, N S; Besier, T F; Farina, D; Lloyd, D G

    2017-03-01

    This position paper proposes a modeling pipeline to develop clinically relevant neuromusculoskeletal models to understand and treat complex neurological disorders. Although applicable to a variety of neurological conditions, we provide direct pipeline applicative examples in the context of cerebral palsy (CP). This paper highlights technologies in: (1) patient-specific segmental rigid body models developed from magnetic resonance imaging for use in inverse kinematics and inverse dynamics pipelines; (2) efficient population-based approaches to derive skeletal models and muscle origins/insertions that are useful for population statistics and consistent creation of continuum models; (3) continuum muscle descriptions to account for complex muscle architecture including spatially varying material properties with muscle wrapping; (4) muscle and tendon properties specific to CP; and (5) neural-based electromyography-informed methods for muscle force prediction. This represents a novel modeling pipeline that couples for the first time electromyography extracted features of disrupted neuromuscular behavior with advanced numerical methods for modeling CP-specific musculoskeletal morphology and function. The translation of such pipeline to the clinical level will provide a new class of biomarkers that objectively describe the neuromusculoskeletal determinants of pathological locomotion and complement current clinical assessment techniques, which often rely on subjective judgment. WIREs Syst Biol Med 2017, 9:e1368. doi: 10.1002/wsbm.1368 For further resources related to this article, please visit the WIREs website. © 2016 Wiley Periodicals, Inc.

  17. Management model application at nested spatial levels in Mediterranean Basins

    Science.gov (United States)

    Lo Porto, Antonio; De Girolamo, Anna Maria; Froebrich, Jochen

    2014-05-01

    In the EU Water Framework Directive (WFD) implementation process, hydrological and water quality models can be powerful tools that allow alternative management strategies to be designed and tested, and their general feasibility and acceptance to be judged. Although several models have been developed in recent decades, their use in Mediterranean basins, where rivers have a temporary character, is quite complex, and there is limited information in the literature to facilitate model applications and result evaluations in this region. The high spatial variability that characterizes rainfall events, soil hydrological properties, and land uses in Mediterranean basins makes it more difficult to simulate hydrology and water quality in this region than in other countries. This variability also has several implications for modeling results, especially when simulations at different spatial scales are needed for watershed management purposes. It is well known that environmental processes operating at different spatial scales determine diverse impacts on water quality status (hydrological, chemical, ecological). Hence, the development of management strategies has to include both large-scale (watershed) and local-scale approaches (e.g. stream reach). This paper presents the results of a study that analyzes how spatial scale affects the results of hydrologic and water quality model simulations in a Mediterranean watershed. Several aspects involved in modeling hydrological and water quality processes at different spatial scales for river basin management are investigated, including model data requirements, data availability, model results, and uncertainty. A hydrologic and water quality model (SWAT) was used to simulate hydrologic processes and water quality at different spatial scales in the Candelaro river basin (Puglia, S-E Italy) and to design management strategies to reach the WFD goals as far as possible. When studying a basin to assess its current status

  18. Three essays on multi-level optimization models and applications

    Science.gov (United States)

    Rahdar, Mohammad

    The general form of a multi-level mathematical programming problem is a set of nested optimization problems, in which each level controls a series of decision variables independently. However, the values of the decision variables may also affect the objective functions of other levels. A two-level model is called a bilevel model and can be considered a Stackelberg game with a leader and a follower. The leader anticipates the response of the follower and optimizes its objective function, and the follower then reacts to the leader's action. The multi-level decision-making model has many real-world applications, such as government decisions, energy policies, market economies, and network design. However, there is a lack of capable algorithms for solving medium- and large-scale problems of these types. The dissertation is devoted to both theoretical research and applications of multi-level mathematical programming models, and consists of three parts, each in paper format. The first part studies the renewable energy portfolio under two major renewable energy policies. The potential competition for biomass for the growth of the renewable energy portfolio in the United States and other interactions between the two policies over the next twenty years are investigated. This problem has two main levels of decision makers: the government/policy makers and biofuel producers/electricity generators/farmers. We focus on the lower-level problem to predict the amount of capacity expansion, fuel production, and power generation. In the second part, we address uncertainty over demand and lead time in a multi-stage mathematical programming problem. We propose a two-stage tri-level optimization model based on a rolling horizon approach to reduce the dimensionality of the multi-stage problem. In the third part of the dissertation, we introduce a new branch and bound algorithm to solve bilevel linear programming problems. The total time is reduced by solving a smaller relaxation
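
    The leader-follower structure described above can be illustrated with a deliberately tiny Stackelberg toy: the leader picks x, the follower solves its own LP given x, and the leader's problem is handled here by naive grid search over the follower's best responses (real bilevel solvers use KKT reformulations or branch and bound, as in the dissertation's third essay). The problem data are invented for illustration:

```python
import numpy as np
from scipy.optimize import linprog

def follower_best_response(x):
    """Follower LP for a toy bilevel problem: min y  s.t.  y >= 2 - x, y >= 0.
    Rewritten for linprog as -y <= x - 2."""
    res = linprog(c=[1.0], A_ub=[[-1.0]], b_ub=[x - 2.0],
                  bounds=[(0.0, None)], method="highs")
    return res.x[0]

# Leader: choose x in [0, 2] to minimise x + 2 * y*(x), anticipating the
# follower's best response y*(x) = max(0, 2 - x).  Naive grid search:
grid = np.linspace(0.0, 2.0, 201)
values = [x + 2.0 * follower_best_response(x) for x in grid]
best_x = grid[int(np.argmin(values))]
```

    Here the leader's objective 4 - x decreases on the grid, so the optimum sits at x = 2 where the follower's constraint becomes slack; even this toy shows why bilevel problems cannot be solved by optimizing the two levels independently.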

  19. Global CLEWs model - A novel application of OSeMOSYS

    Science.gov (United States)

    Avgerinopoulos, Georgios; Pereira Ramos, Eunice; Howells, Mark

    2017-04-01

    Over the past years, studies that analyse nexus issues from a holistic point of view, rather than energy, land, or water separately, have been gaining momentum. This project aims at giving insights into global issues through the application and analysis of a global-scale OSeMOSYS model. The latter, which is based on a fully open and amendable code, has been used successfully in recent years, producing fully accessible energy models suitable for capacity building and policy-making suggestions. This study develops a CLEWs (climate, land, energy and water) model with the objective of interrogating global challenges (e.g. increasing food demand) and international trade features, with policy priorities on food security, resource efficiency, low-carbon energy and climate change mitigation, water availability and vulnerability to water stress and floods, water quality, biodiversity, and ecosystem services. It will, for instance, assess (i) the impact of water constraints on food security and human development (clean water for human use; industrial and energy water demands), as well as (ii) the impact of climate change on aggravating or relieving water problems.

  20. Performance Analysis, Modeling and Scaling of HPC Applications and Tools

    Energy Technology Data Exchange (ETDEWEB)

    Bhatele, Abhinav [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-13

    Efficient use of supercomputers at DOE centers is vital for maximizing system throughput, minimizing energy costs and enabling science breakthroughs faster. This requires complementary efforts along several directions to optimize the performance of scientific simulation codes and the underlying runtimes and software stacks. This in turn requires providing scalable performance analysis tools and modeling techniques that can provide feedback to physicists and computer scientists developing the simulation codes and runtimes respectively. The PAMS project is using time allocations on supercomputers at ALCF, NERSC and OLCF to further the goals described above by performing research along the following fronts: 1. Scaling Study of HPC Applications; 2. Evaluation of Programming Models; 3. Hardening of Performance Tools; 4. Performance Modeling of Irregular Codes; and 5. Statistical Analysis of Historical Performance Data. We are a team of computer and computational scientists funded by both DOE/NNSA and DOE/ASCR programs such as ECRP, XStack (Traleika Glacier, PIPER), ExaOSR (ARGO), SDMAV II (MONA) and PSAAP II (XPACC). This allocation will enable us to study big data issues when analyzing performance on leadership computing class systems and to assist the HPC community in making the most effective use of these resources.

  1. Application of a theoretical model to evaluate COPD disease management.

    Science.gov (United States)

    Lemmens, Karin M M; Nieboer, Anna P; Rutten-Van Mölken, Maureen P M H; van Schayck, Constant P; Asin, Javier D; Dirven, Jos A M; Huijsman, Robbert

    2010-03-26

    Disease management programmes are heterogeneous in nature and often lack a theoretical basis. An evaluation model has been developed in which theoretically driven inquiries link disease management interventions to outcomes. The aim of this study is to methodically evaluate the impact of a disease management programme for patients with chronic obstructive pulmonary disease (COPD) on process, intermediate, and final outcomes of care in a general practice setting. A quasi-experimental study was performed with 12-month follow-up of 189 COPD patients in primary care in the Netherlands. The programme included patient education, protocolised assessment and treatment of COPD, structured follow-up, and coordination by practice nurses at 3, 6 and 12 months. Data on intermediate outcomes (knowledge, psychosocial mediators, self-efficacy and behaviour) and final outcomes (dyspnoea, quality of life, measured by the CRQ and CCQ, and patient experiences) were obtained from questionnaires and electronic registries. Implementation of the programme was associated with significant improvements in dyspnoea, and the model showed associations between significantly improved intermediate outcomes and improvements in quality of life and dyspnoea. The application of a theory-driven model enhances the design and evaluation of disease management programmes aimed at improving health outcomes. This study supports the notion that a theoretical approach strengthens the evaluation designs of complex interventions. Moreover, it provides prudent evidence that the implementation of COPD disease management programmes can positively influence outcomes of care.

  2. Can We Trust Computational Modeling for Medical Applications?

    Science.gov (United States)

    Mulugeta, Lealem; Walton, Marlei; Nelson, Emily; Myers, Jerry

    2015-01-01

    Operations in extreme environments such as spaceflight pose human health risks that are currently not well understood and potentially unanticipated. In addition, there are limited clinical and research data to inform the development and implementation of therapeutics for these unique health risks. In this light, NASA's Human Research Program (HRP) is leveraging biomedical computational models and simulations (M&S) to help inform, predict, assess and mitigate spaceflight health and performance risks, and to enhance countermeasure development. To ensure that these M&S can be applied with confidence to the space environment, it is imperative to incorporate a rigorous verification, validation and credibility assessment (VV&C) process to ensure that the computational tools are sufficiently reliable to answer questions within their intended use domain. In this presentation, we will discuss how NASA's Integrated Medical Model (IMM) and Digital Astronaut Project (DAP) have successfully adapted NASA's Standard for Models and Simulations, NASA-STD-7009 (7009), to achieve this goal. These VV&C methods are also being leveraged by organizations such as the Food and Drug Administration (FDA), the National Institutes of Health (NIH) and the American Society of Mechanical Engineers (ASME) to establish new M&S VV&C standards and guidelines for healthcare applications. Similarly, we hope to provide some insight to the greater aerospace medicine community on how to develop and implement M&S with sufficient confidence to augment medical research and operations.

  3. Application of Molecular Modeling to Urokinase Inhibitors Development

    Directory of Open Access Journals (Sweden)

    V. B. Sulimov

    2014-01-01

    Full Text Available Urokinase-type plasminogen activator (uPA) plays an important role in the regulation of diverse physiologic and pathologic processes. Experimental research has shown that elevated uPA expression is associated with cancer progression, metastasis, and shortened survival in patients, whereas suppression of the proteolytic activity of uPA leads to an evident decrease in metastasis. Therefore, uPA has been considered a promising molecular target for the development of anticancer drugs. The present study sets out to develop new selective uPA inhibitors using computer-aided structure-based drug design methods. The investigation involves the following stages: computer modeling of the protein active site; development and validation of computer molecular modeling methods, namely docking (SOL program), postprocessing (DISCORE program), direct generalized docking (FLM program) and quantum chemical calculations (MOPAC package); a search for uPA inhibitors among molecules from databases of ready-made compounds; and the design of new chemical structures, their optimization and experimental examination. On the basis of known uPA inhibitors and modeling results, 18 new compounds have been designed, calculated using the programs mentioned above, synthesized, and tested in vitro. Eight of them display inhibitory activity, and two of them display activity of about 10 μM.

  4. Computational multiscale modeling of fluids and solids theory and applications

    CERN Document Server

    Steinhauser, Martin Oliver

    2017-01-01

    The idea of the book is to provide a comprehensive overview of computational physics methods and techniques that are used for materials modeling on different length and time scales. Each chapter first provides an overview of the basic physical principles which are the basis for the numerical and mathematical modeling on the respective length scale. The book includes the micro-scale, the meso-scale and the macro-scale, and the chapters follow this classification. The book explains in detail many tricks of the trade of some of the most important methods and techniques that are used to simulate materials on the respective levels of spatial and temporal resolution. Case studies are included to further illustrate some methods or theoretical considerations. Example applications for all techniques are provided, some of which are from the author’s own contributions to some of the research areas. The second edition has been expanded by new sections in computational models on meso/macroscopic scales for ocean and a...

  5. Parallel computer processing and modeling: applications for the ICU

    Science.gov (United States)

    Baxter, Grant; Pranger, L. Alex; Draghic, Nicole; Sims, Nathaniel M.; Wiesmann, William P.

    2003-07-01

    Current patient monitoring procedures in hospital intensive care units (ICUs) generate vast quantities of medical data, much of which is considered extemporaneous and not evaluated. Although sophisticated monitors to analyze individual types of patient data are routinely used in the hospital setting, this equipment lacks high-order signal analysis tools for detecting long-term trends and correlations between different signals within a patient data set. Without the ability to continuously analyze disjoint sets of patient data, it is difficult to detect slow-forming complications. As a result, the early onset of conditions such as pneumonia or sepsis may not be apparent until the advanced stages. We report here on the development of a distributed software architecture test bed and software medical models to analyze both asynchronous and continuous patient data in real time. Hardware and software have been developed to support a multi-node distributed computer cluster capable of amassing data from multiple patient monitors and projecting near- and long-term outcomes based upon the application of physiologic models to the incoming patient data stream. One computer acts as a central coordinating node; additional computers accommodate processing needs. A simple, non-clinical model for sepsis detection was implemented on the system for demonstration purposes. This work shows exceptional promise as a highly effective means to rapidly predict and thereby mitigate the effect of nosocomial infections.
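
The long-term trend analysis described above can be illustrated with a deliberately simple sketch: an exponentially weighted moving average (EWMA) that flags slow drift in a vital-sign stream. The function names, parameters and synthetic heart-rate data below are hypothetical illustrations, not part of the reported system or its sepsis model.

```python
# Illustrative sketch (hypothetical names and data): EWMA-based drift
# detection over a vital-sign stream, the simplest form of long-term
# trend analysis. Not the non-clinical sepsis model from the abstract.

def ewma(samples, alpha=0.05):
    """Exponentially weighted moving average of a measurement stream."""
    avg, out = samples[0], []
    for x in samples:
        avg = alpha * x + (1 - alpha) * avg
        out.append(avg)
    return out

def drift_alarm(samples, baseline, threshold, alpha=0.05):
    """Index of the first sample where the smoothed signal drifts too far."""
    for i, avg in enumerate(ewma(samples, alpha)):
        if abs(avg - baseline) > threshold:
            return i
    return None

# Synthetic heart-rate stream creeping upward by 0.1 bpm per sample:
hr = [72 + 0.1 * t for t in range(300)]
print("alarm at sample", drift_alarm(hr, baseline=72, threshold=10))
```

The smoothing suppresses sample-to-sample noise, so a slow creep raises the alarm while short spikes do not; a real ICU model would of course combine many signals rather than one.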

  6. Modelling of dielectric polymers for energy scavenging applications

    Science.gov (United States)

    Jean-Mistral, C.; Basrour, S.; Chaillout, J.-J.

    2010-10-01

    An increasing number of scavenging applications use dielectric polymers: for instance, on the heel of a shoe, behind the knee, on a navy buoy, etc. This emerging technology has the potential to be an alternative to traditional, well-known solutions using piezoelectricity or electromagnetism. Indeed, dielectric polymers are suitable for creating flexible and innovative structures working in a quasi-static range. Nevertheless, current analytical models of dielectric polymers in generator mode are too simple and not sufficiently predictive. This paper reports a more reliable method for modelling dielectric generators. This method is a tool for designing any plane structure. It can be used to calculate performance or to optimize a given structure. Moreover, it is modular and can be adapted to any kind of dielectric material and any plane structure. The method is illustrated on a biaxial plane generator comprising 3M's VHB 4910 polymer and conductive silver grease electrodes. Experimental data are provided to validate the analytical model and thus the whole method.

  7. A novel modular multilevel converter modelling technique based on semi-analytical models for HVDC application

    Directory of Open Access Journals (Sweden)

    Ahmed Zama

    2016-12-01

    Full Text Available Thanks to its scalability, performance and efficiency, the Modular Multilevel Converter (MMC) has, since its invention, become an attractive topology in industrial applications such as high voltage direct current (HVDC) transmission systems. However, modelling challenges related to the high number of switching elements in the MMC are highlighted when such systems are integrated into large simulated networks for testing stability or protection algorithms. In this work, novel dynamic models for the MMC are proposed. The proposed models are intended to simplify the representation of the many switching elements in the MMC. The models can easily be used to simulate the converter for stability analysis or for testing protection algorithms for HVDC grids.
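
To illustrate why averaged or semi-analytical MMC models remove the switching-element burden, here is a minimal sketch of the common arm-averaged simplification, in which the N submodules of one arm are lumped into a single equivalent capacitor driven by a continuous insertion index. This is a generic, textbook-style averaging, not the specific models proposed in the paper, and all numbers are invented for illustration.

```python
# Generic arm-averaged MMC sketch (not the paper's models): the N
# submodules of one arm are lumped into one equivalent capacitor, so no
# individual switching events need to be simulated. Numbers are invented.
import math

N = 400                  # submodules per arm
C_SM = 10e-3             # submodule capacitance [F]
C_ARM = C_SM / N         # equivalent series capacitance of the whole arm
V_NOM = 640e3            # nominal summed capacitor voltage of the arm [V]
F = 50.0                 # grid frequency [Hz]
DT = 1e-5                # Euler integration step [s]

v_c, v_hist = V_NOM, []
for k in range(int(0.04 / DT)):               # two 50 Hz cycles
    t = k * DT
    s = math.sin(2 * math.pi * F * t)
    m = 0.5 * (1 - 0.9 * s)                   # arm insertion index
    i_arm = 270 + 600 * s                     # arm current [A], chosen so the
                                              # cycle-average charging power is zero
    v_c += DT * m * i_arm / C_ARM             # averaged capacitor dynamics
    v_hist.append(v_c)

ripple = (max(v_hist) - min(v_hist)) / V_NOM
print(f"peak-to-peak capacitor voltage ripple: {ripple:.1%}")
```

Because the state is one continuous voltage per arm instead of hundreds of switch states, such models integrate cheaply into large network simulations, which is the motivation stated in the abstract.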

  8. Using the object modeling system for hydrological model development and application

    Directory of Open Access Journals (Sweden)

    S. Kralisch

    2005-01-01

    Full Text Available State-of-the-art challenges in the sustainable management of water resources have created demand for integrated, flexible and easy-to-use hydrological models which are able to simulate the quantitative and qualitative aspects of the hydrological cycle with a sufficient degree of certainty. Existing models which have been developed to fit these needs are often constrained to specific scales or purposes and thus cannot be easily adapted to meet different challenges. As a solution for flexible and modularised model development and application, the Object Modeling System (OMS) has been developed in a joint approach by the USDA-ARS GPSRU (Fort Collins, CO, USA), USGS (Denver, CO, USA), and the FSU (Jena, Germany). The OMS provides a modern modelling framework which allows the implementation of single process components to be compiled and applied as custom-tailored model assemblies. This paper describes the basic principles of the OMS and its main components and explains in more detail how problems arising during the coupling of models or model components are solved inside the system. It highlights the integration of different spatial and temporal scales by their representation as spatial modelling entities embedded into time compound components. As an example, the implementation of the hydrological model J2000 is discussed.

  9. Stochastic Model Predictive Control with Applications in Smart Energy Systems

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Edlund, Kristian; Mølbak, Tommy

    2012-01-01

    to cover more than 50% of the total consumption by 2050. Energy systems based on significant amounts of renewable energy sources are subject to uncertainties. To accommodate the need for model predictive control (MPC) of such systems, the effect of the stochastic effects on the constraints must...... function). This is convenient for energy systems, since some constraints are very important to satisfy with a high probability, whereas violation of others are less prone to have a large economic penalty. In MPC applications the control action is obtained by solving an optimization problem at each sampling......, we show that tailored interior point algorithms are well suited to handle this type of problems. Namely, by utilizing structure-exploiting methods, we implement a special-purpose solver for control of smart energy systems. The solver is compared against general-purpose implementations. As a case...
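
The per-constraint violation probabilities described above can be sketched with the standard Gaussian chance-constraint reformulation: a probabilistic bound P(x + w <= x_max) >= 1 - eps with w ~ N(0, sigma^2) is equivalent to the deterministic, tightened bound x <= x_max - sigma * Phi^{-1}(1 - eps). The numbers below are hypothetical, and this is a generic illustration rather than the authors' formulation.

```python
# Sketch (hypothetical numbers): Gaussian chance-constraint tightening.
# P(x + w <= x_max) >= 1 - eps, with w ~ N(0, sigma^2), is equivalent to
# the deterministic "backed-off" bound x <= x_max - sigma * Phi^{-1}(1 - eps).
from statistics import NormalDist

def tightened_bound(x_max, sigma, eps):
    """Deterministic equivalent of the chance constraint."""
    return x_max - sigma * NormalDist().inv_cdf(1 - eps)

# A hard constraint (allowed to fail 1% of the time) backs off much
# further than a soft one (20%), mirroring the idea that important
# constraints get small violation probabilities:
print(tightened_bound(100.0, 5.0, 0.01))
print(tightened_bound(100.0, 5.0, 0.20))
```

Once every chance constraint has been tightened this way, the MPC problem becomes an ordinary deterministic optimization solved at each sampling instant, which is where the structure-exploiting interior-point solvers mentioned in the abstract come in.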

  10. Urban design and modeling: applications and perspectives on GIS

    Directory of Open Access Journals (Sweden)

    Roberto Mingucci

    2013-05-01

    Full Text Available In recent years, GIS systems have evolved because of technological advancements that make possible the simultaneous management of large amounts of information. Interesting aspects of their application concern site documentation at the territorial scale, taking advantage of CAD/BIM systems that usually operate at the building scale. In this sense, surveying with sophisticated equipment such as laser scanners or UAV drones quickly captures data that can be accessed even through new “mobile” technologies operating in the context of web-based information systems. This paper aims to investigate the use and perspectives of geographic information technologies and of analysis and design tools meant for modeling at different scales, referring to the results of research experiences conducted at the University of Bologna.

  11. High speed railway track dynamics models, algorithms and applications

    CERN Document Server

    Lei, Xiaoyan

    2017-01-01

    This book systematically summarizes the latest research findings on high-speed railway track dynamics, made by the author and his research team over the past decade. It explores cutting-edge issues concerning the basic theory of high-speed railways, covering the dynamic theories, models, algorithms and engineering applications of the high-speed train and track coupling system. Presenting original concepts, systematic theories and advanced algorithms, the book places great emphasis on the precision and completeness of its content. The chapters are interrelated yet largely self-contained, allowing readers to either read through the book as a whole or focus on specific topics. It also combines theories with practice to effectively introduce readers to the latest research findings and developments in high-speed railway track dynamics. It offers a valuable resource for researchers, postgraduates and engineers in the fields of civil engineering, transportation, highway & railway engineering.

  12. Application Issues of the Semi-Markov Reliability Model

    Directory of Open Access Journals (Sweden)

    Rudnicki Jacek

    2015-01-01

    Full Text Available Predicting the reliability of, for instance, marine internal combustion engines is of particular importance, as it makes it possible to predict their future reliability states based on information about their past states. Correct reliability prediction is a complex process which consists in processing empirical results obtained from operating practice, complemented by analytical considerations. The process of technical state changes of each mechanical device is stochastic and continuous in states and time, hence the need to divide this infinite set of engine states into a finite number of subsets (classes) which can be clearly and permanently identified using the existing diagnosing system. Using the engine piston-crankshaft system as an example, the article presents a proposal for a mathematical model of reliability which, on the one hand, takes into account the random nature of the phenomena leading to damage and, at the same time, offers a certain application flexibility and the resulting practical usability.
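
The idea of predicting future reliability states over a finite set of state classes can be sketched with an ordinary discrete-time Markov chain. (The article's semi-Markov model additionally draws a random holding time in each state; that refinement is omitted here, and the transition probabilities below are invented for illustration.)

```python
# Sketch with invented numbers: a discrete-time Markov chain over three
# reliability-state classes. (A semi-Markov model would also draw a
# random holding time in each state; that is omitted here.)

# States: 0 = fully serviceable, 1 = partially serviceable, 2 = failed
P = [
    [0.95, 0.04, 0.01],
    [0.00, 0.90, 0.10],
    [0.00, 0.00, 1.00],   # failure is absorbing until repair
]

def step(dist, P):
    """One inspection interval: propagate the state distribution."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0, 0.0]    # engine starts fully serviceable
for _ in range(12):       # look 12 inspection intervals ahead
    dist = step(dist, P)

print([round(p, 3) for p in dist])
```

The resulting distribution answers exactly the question posed in the abstract: given the current diagnosed class, what is the probability of each reliability class several inspection intervals ahead.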

  13. Mathematical and numerical modeling in porous media applications in geosciences

    CERN Document Server

    Diaz Viera, Martin A; Coronado, Manuel; Ortiz Tapia, Arturo

    2012-01-01

    Porous media are broadly found in nature and their study is of high relevance in our present lives. In geosciences porous media research is fundamental in applications to aquifers, mineral mines, contaminant transport, soil remediation, waste storage, oil recovery and geothermal energy deposits. Despite their importance, there is as yet no complete understanding of the physical processes involved in fluid flow and transport. This fact can be attributed to the complexity of the phenomena which include multicomponent fluids, multiphasic flow and rock-fluid interactions. Since its formulation in 1856, Darcy's law has been generalized to describe multi-phase compressible fluid flow through anisotropic and heterogeneous porous and fractured rocks. Due to the scarcity of information, a high degree of uncertainty on the porous medium properties is commonly present. Contributions to the knowledge of modeling flow and transport, as well as to the characterization of porous media at field scale are of great relevance. ...

  14. Low Dimensional Semiconductor Structures Characterization, Modeling and Applications

    CERN Document Server

    Horing, Norman

    2013-01-01

    Starting with the first transistor in 1949, the world has experienced a technological revolution which has permeated most aspects of modern life, particularly over the last generation. Yet another such revolution looms up before us with the newly developed capability to control matter on the nanometer scale. A truly extraordinary research effort, by scientists, engineers, technologists of all disciplines, in nations large and small throughout the world, is directed and vigorously pressed to develop a full understanding of the properties of matter at the nanoscale and its possible applications, to bring to fruition the promise of nanostructures to introduce a new generation of electronic and optical devices. The physics of low dimensional semiconductor structures, including heterostructures, superlattices, quantum wells, wires and dots is reviewed and their modeling is discussed in detail. The truly exceptional material, Graphene, is reviewed; its functionalization and Van der Waals interactions are included h...

  15. Modelling of gecko foot for future robot application

    Science.gov (United States)

    Kamaruddin, A.; Ong, N. R.; Aziz, M. H. A.; Alcain, J. B.; Haimi, W. M. W. N.; Sauli, Z.

    2017-09-01

    Every gecko has approximately a million microscale hairs called setae which make it easy to cling to different surfaces at any orientation, with van der Waals forces as the primary mechanism used to adhere to contact surfaces. In this paper, a strain simulation using the COMSOL Multiphysics software was conducted on a 3D MEMS model of an actuated gecko foot, with the aim of achieving optimal sticking with various polymeric materials for future robot applications. Based on the stress and strain analyses done on the seven different polymers, it was found that polysilicon had the best result, nearest to 0%, indicating the strongest elasticity among the materials. PDMS, on the other hand, failed in the simulation due to its bulk-like nature. Thus, PDMS is not suitable for further study of a gecko-foot robot.

  16. Application of the evolution theory in modelling of innovation diffusion

    Directory of Open Access Journals (Sweden)

    Krstić Milan

    2016-01-01

    Full Text Available The theory of evolution has found numerous analogies and applications in scientific disciplines other than biology. In that sense, the so-called 'memetic evolution' has today been widely accepted. Memes represent a complex adaptable system, where one 'meme' represents an evolutionary cultural element, i.e. the smallest unit of information which can be identified and used in order to explain the evolution process. Among others, the field of innovation has proved to be a suitable area where the theory of evolution can be successfully applied. In this work the authors have started from the assumption that it is also possible to apply the theory of evolution in modelling the process of innovation diffusion. Based on the conducted theoretical research, the authors conclude that the process of innovation diffusion, in the interpretation of a 'meme', is actually the process of imitation of the 'meme' of innovation. Since during the process of their replication certain 'memes' show greater success compared to others, this eventually leads to their natural selection. For the survival of innovation 'memes', their manifestations are of key importance in the sense of their longevity, fruitfulness and faithful replication. The results of the conducted research confirm the assumption that the theory of evolution can be applied to innovation diffusion through innovation 'memes', which opens up perspectives for new research on the subject.

  17. Potential biodefense model applications for portable chlorine dioxide gas production.

    Science.gov (United States)

    Stubblefield, Jeannie M; Newsome, Anthony L

    2015-01-01

    Development of decontamination methods and strategies to address potential infectious disease outbreaks and bioterrorism events is pertinent to this nation's biodefense strategies and general biosecurity. Chlorine dioxide (ClO2) gas has a history of use as a decontamination agent in response to an act of bioterrorism. However, the more widespread use of ClO2 gas to meet current and unforeseen decontamination needs has been hampered because the gas is too unstable for shipment and must be prepared at the application site. Newer technology allows for easy, onsite gas generation without the need for dedicated equipment, electricity, water, or personnel with advanced training. In a laboratory model system, 2 unique applications (personal protective equipment [PPE] and animal skin) were investigated in the context of potential development of decontamination protocols. Such protocols could serve to reduce human exposure to bacteria in a decontamination response effort. Chlorine dioxide gas was capable of reducing culturable bacteria (by 2-7 logs of vegetative and spore-forming bacteria), and in some instances eliminating them, from difficult-to-clean areas on PPE facepieces. The gas was effective in eliminating naturally occurring bacteria on animal skin and also on skin inoculated with Bacillus spores. The culturable bacteria, including Bacillus spores, were eliminated in a time- and dose-dependent manner. Results of these studies suggested portable, easily used ClO2 gas generation systems have excellent potential for protocol development to contribute to biodefense strategies and decontamination responses to infectious disease outbreaks or other biothreat events.
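
Time- and dose-dependent inactivation of the kind reported above is commonly summarized with the Chick-Watson model, log10(N/N0) = -k * C * t, where C is gas concentration and t exposure time. The sketch below uses a hypothetical rate constant, not values fitted to this study's data.

```python
# Sketch: Chick-Watson first-order inactivation, log10(N/N0) = -k*C*t.
# The rate constant k below is hypothetical, not fitted to this study.

def log_reduction(k, conc_mg_l, minutes):
    """Predicted log10 reduction for a concentration-time (CT) dose."""
    return k * conc_mg_l * minutes

def survivors(n0, k, conc_mg_l, minutes):
    """Culturable count remaining after exposure."""
    return n0 * 10 ** (-log_reduction(k, conc_mg_l, minutes))

K = 0.01   # hypothetical rate constant [L/(mg*min)]
for t in (30, 60, 120):
    print(f"{t:4d} min -> {log_reduction(K, 5.0, t):.1f} log reduction")
```

Doubling either the concentration or the exposure time doubles the predicted log reduction, which is the simplest reading of the "time- and dose-dependent" behaviour the study observed.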

  18. Modelling and Designing Cryogenic Hydrogen Tanks for Future Aircraft Applications

    Directory of Open Access Journals (Sweden)

    Christopher Winnefeld

    2018-01-01

    Full Text Available In the near future, the challenge of reducing economic and social dependency on fossil fuels must increasingly be faced. A sustainable and efficient energy supply based on renewable energies enables large-scale applications of electro-fuels in, e.g., the transport sector. Its high gravimetric energy density makes liquefied hydrogen a reasonable candidate for energy storage in a lightweight application such as aviation. Current aircraft structures are designed to accommodate jet fuel and gas turbines, allowing only limited retrofitting. New designs, such as the blended-wing-body, enable a more flexible integration of new storage technologies and energy converters, e.g., cryogenic hydrogen tanks and fuel cells. Against this background, a tank-design model is formulated which considers geometrical, mechanical and thermal aspects as well as specific mission profiles, assuming a power supply by a fuel cell. This design approach enables the determination of the required tank mass and storage density. A new evaluation metric is defined that includes the hydrogen mass vented throughout the flight, enabling more transparent insight into mass shares. Subsequently, a systematic approach to tank partitioning reveals the associated compromises regarding tank weight. The analysis shows that cryogenic hydrogen tanks are highly competitive with kerosene tanks in terms of overall mass, which is further improved by the use of a fuel cell.
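
Two of the quantities such a tank-design model evaluates can be sketched in simplified form: the gravimetric storage density (stored hydrogen mass over total hydrogen-plus-structure mass) and the hydrogen mass vented over a mission when a steady heat leak boils off liquid hydrogen. All input numbers below are hypothetical placeholders, not results from the paper.

```python
# Sketch with placeholder numbers: gravimetric storage density and the
# hydrogen mass vented over a mission due to a steady heat leak.

H_VAP = 446e3   # latent heat of vaporization of LH2 near 1 bar [J/kg] (approx.)

def gravimetric_index(m_h2, m_tank):
    """Stored hydrogen mass over total (hydrogen + structure) mass."""
    return m_h2 / (m_h2 + m_tank)

def vented_mass(heat_leak_w, mission_s):
    """Boil-off vented to hold tank pressure, if all leaked heat vaporizes H2."""
    return heat_leak_w * mission_s / H_VAP

m_h2, m_tank = 1000.0, 500.0            # [kg], hypothetical
vent = vented_mass(150.0, 5 * 3600)     # 150 W leak over a 5 h mission
print(f"gravimetric index = {gravimetric_index(m_h2, m_tank):.2f}, "
      f"vented = {vent:.1f} kg")
```

Counting the vented mass alongside the structural mass, as the paper's evaluation metric does, prevents a heavily insulated but heavy tank and a light but leaky tank from being compared on structure alone.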

  19. Modeling Movement Primitives with Hidden Markov Models for Robotic and Biomedical Applications.

    Science.gov (United States)

    Karg, Michelle; Kulić, Dana

    2017-01-01

    Movement primitives are elementary motion units that can be combined sequentially or simultaneously to compose more complex movement sequences. A movement primitive time series consists of a sequence of motion phases. This progression through a set of motion phases can be modeled by Hidden Markov Models (HMMs). HMMs are stochastic processes that model time-series data as the evolution of a hidden state variable through a discrete set of possible values, where each state value is associated with an observation (emission) probability. Each motion phase is represented by one of the hidden states, and the sequential order by their transition probabilities. The observations of the MP-HMM are the sensor measurements of the human movement, for example motion capture or inertial measurements. The emission probabilities are modeled as Gaussians. In this chapter, the MP-HMM modeling framework is described and applications to motion recognition and motion performance assessment are discussed. The selected applications include parametric MP-HMMs for explicitly modeling variability in movement performance and the comparison of MP-HMMs based on the log-likelihood, the Kullback-Leibler divergence, the extended HMM-based F-statistic, and gait-specific reference-based measures.
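
The log-likelihood evaluation underlying motion recognition (classify a movement by whichever trained model scores the observation sequence highest) can be sketched with the scaled forward recursion and Gaussian emissions. This is a generic HMM sketch with one-dimensional observations and toy parameters, not the chapter's trained models.

```python
# Generic MP-HMM-style sketch: scaled forward recursion with Gaussian
# emissions, 1-D observations, toy parameters (not a trained model).
import math

def gauss_logpdf(x, mu, sigma):
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def forward_loglik(obs, pi, A, means, sigmas):
    """log P(obs | HMM) via the scaled forward algorithm."""
    n = len(pi)
    alpha = [pi[i] * math.exp(gauss_logpdf(obs[0], means[i], sigmas[i]))
             for i in range(n)]
    loglik = 0.0
    for t in range(1, len(obs)):
        c = sum(alpha)                    # scaling avoids numerical underflow
        loglik += math.log(c)
        alpha = [a / c for a in alpha]
        alpha = [sum(alpha[i] * A[i][j] for i in range(n))
                 * math.exp(gauss_logpdf(obs[t], means[j], sigmas[j]))
                 for j in range(n)]
    return loglik + math.log(sum(alpha))

# Two motion phases, e.g. "reach" (around 0.0) then "grasp" (around 1.0),
# with a left-to-right transition structure:
pi = [1.0, 0.0]
A = [[0.8, 0.2], [0.0, 1.0]]
obs = [0.1, -0.1, 0.0, 0.9, 1.1, 1.0]
print(forward_loglik(obs, pi, A, [0.0, 1.0], [0.3, 0.3]))
```

A sequence that visits the phases in the modeled order scores much higher than one that reverses them, which is exactly the property that makes per-model log-likelihoods usable for recognition.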

  20. Parametric Model for Astrophysical Proton-Proton Interactions and Applications

    Energy Technology Data Exchange (ETDEWEB)

    Karlsson, Niklas [KTH Royal Institute of Technology, Stockholm (Sweden)

    2007-01-01

    Observations of gamma-rays have been made from celestial sources such as active galaxies, gamma-ray bursts and supernova remnants, as well as the Galactic ridge. The study of gamma rays can provide information about production mechanisms and cosmic-ray acceleration. In the high-energy regime, one of the dominant mechanisms for gamma-ray production is the decay of neutral pions produced in interactions of ultra-relativistic cosmic-ray nuclei and interstellar matter. Presented here is a parametric model for calculations of inclusive cross sections and transverse momentum distributions for secondary particles -- gamma rays, e±, νe, ν̄e, νμ and ν̄μ -- produced in proton-proton interactions. This parametric model is derived from the proton-proton interaction model proposed by Kamae et al.; it includes the diffraction dissociation process, Feynman-scaling violation and the logarithmically rising inelastic proton-proton cross section. To improve fidelity to experimental data at lower energies, two baryon resonance excitation processes were added: one representing the Δ(1232) and the other multiple resonances with masses around 1600 MeV/c². The model predicts the power-law spectral index for all secondary particles to be about 0.05 lower in absolute value than that of the incident proton, and their inclusive cross sections to be larger than those predicted by previous models based on the Feynman-scaling hypothesis. The applications of the presented model in astrophysics are plentiful. It has been implemented into the Galprop code to calculate the contribution due to pion decays in the Galactic plane. The model has also been used to estimate the cosmic-ray flux in the Large Magellanic Cloud based on HI, CO and gamma-ray observations. The transverse momentum distributions enable calculations when the proton distribution is anisotropic. It is shown that the gamma-ray spectrum and flux due to a

  1. Modifications and Applications of the HERMES model: June - October 2010

    Energy Technology Data Exchange (ETDEWEB)

    Reaugh, J E

    2010-11-16

    The HERMES (High Explosive Response to MEchanical Stimulus) model has been developed to describe the response of energetic materials to low-velocity mechanical stimulus, referred to as HEVR (High Explosive Violent Response) or BVR (Burn to Violent Reaction). For tests performed with an HMX-based UK explosive, at sample sizes less than 200 g, the response was sometimes an explosion, but was not observed to be a detonation. The distinction between explosion and detonation can be important in assessing the effects of the HE response on nearby structures. A detonation proceeds as a supersonic shock wave supported by the release of energy that accompanies the transition from solid to high-pressure gas. For military high explosives, the shock wave velocity generally exceeds 7 km/s, and the pressure behind the shock wave generally exceeds 30 GPa. A kilogram of explosive would be converted to gas in 10 to 15 microseconds. An HEVR explosion proceeds much more slowly. Much of the explosive remains unreacted after the event. Peak pressures have been measured and calculated at less than 1 GPa, and the time for the portion of the solid that does react to form gas is about a millisecond. The explosion will, however, launch the confinement to a velocity that depends on the confinement mass, the mass of explosive converted, and the time required to form gas products. In many tests, the air blast signal and confinement velocity are comparable to those measured when an amount of explosive equal to that which is converted in an HEVR is deliberately detonated in the comparable confinement. The number of confinement fragments from an HEVR is much less than from the comparable detonation. The HERMES model comprises several submodels including a constitutive model for strength, a model for damage that includes the creation of porosity and surface area through fragmentation, an ignition model, an ignition front propagation model, and a model for burning after ignition. 
We have used HERMES

  2. X-ray ablation measurements and modeling for ICF applications

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Andrew Thomas [Univ. of California, Berkeley, CA (United States)

    1996-09-01

    X-ray ablation of material from the first wall and other components of an ICF (Inertial Confinement Fusion) chamber is a major threat to the laser final optics. Material condensing on these optics after a shot may cause damage with subsequent laser shots. To ensure the successful operation of the ICF facility, removal rates must be predicted accurately. The goal for this dissertation is to develop an experimentally validated x-ray response model, with particular application to the National Ignition Facility (NIF). Accurate knowledge of the x-ray and debris emissions from ICF targets is a critical first step in the process of predicting the performance of the target chamber system. A number of 1-D numerical simulations of NIF targets have been run to characterize target output in terms of energy, angular distribution, spectrum, and pulse shape. Scaling of output characteristics with variations of both target yield and hohlraum wall thickness are also described. Experiments have been conducted at the Nova laser on the effects of relevant x-ray fluences on various materials. The response was diagnosed using post-shot examinations of the surfaces with scanning electron microscope and atomic force microscope instruments. Judgments were made about the dominant removal mechanisms for each material. Measurements of removal depths were made to provide data for the modeling. The finite difference ablation code developed here (ABLATOR) combines the thermomechanical response of materials to x-rays with models of various removal mechanisms. The former aspect refers to energy deposition in such small characteristic depths (~1 micron) that thermal conduction and hydrodynamic motion are significant effects on the nanosecond time scale. The material removal models use the resulting time histories of temperature and pressure profiles, along with ancillary local conditions, to predict rates of surface vaporization and the onset of conditions that would lead to spallation.

  3. Animal models of osteogenesis imperfecta: applications in clinical research

    Directory of Open Access Journals (Sweden)

    Enderli TA

    2016-09-01

    Full Text Available Tanya A Enderli, Stephanie R Burtch, Jara N Templet, Alessandra Carriero Department of Biomedical Engineering, Florida Institute of Technology, Melbourne, FL, USA Abstract: Osteogenesis imperfecta (OI), commonly known as brittle bone disease, is a genetic disease characterized by extreme bone fragility and consequent skeletal deformities. This connective tissue disorder is caused by mutations that affect the quality and quantity of collagen, which in turn affect the overall mechanical integrity of the bone, increasing its vulnerability to fracture. Animal models of the disease have played a critical role in the understanding of the pathology and causes of OI and in the investigation of a broad range of clinical therapies for the disease. Currently, at least 20 animal models have been officially recognized to represent the phenotype and biochemistry of the 17 different types of OI in humans. These include mice, dogs, and fish. Here, we describe each of the animal models and the type of OI they represent, and present their application in clinical research for treatments of OI, such as drug therapies (ie, bisphosphonates and sclerostin) and mechanical therapies (ie, vibrational loading). In the future, different dosages and lengths of treatment need to be further investigated in different animal models of OI using potentially promising treatments, such as cellular and chaperone therapies. A combination of therapies may also offer a viable treatment regime to improve bone quality and reduce fragility in animals before being introduced into clinical trials for OI patients. Keywords: OI, brittle bone, clinical research, mouse, dog, zebrafish

  4. Application of Physically based landslide susceptibility models in Brazil

    Science.gov (United States)

    Carvalho Vieira, Bianca; Martins, Tiago D.

    2017-04-01

    Shallow landslides and floods are the processes responsible for most material and environmental damage in Brazil. In recent decades, some landslide events have caused a high number of deaths (e.g. over 1000 deaths in a single event) and incalculable social and economic losses. Therefore, the prediction of these processes is considered an important tool for land-use planning. Among the different methods, physically based landslide susceptibility models have been widely used in many countries, but in Brazil their use is still incipient compared to other approaches, such as statistical tools and frequency analyses. Thus, the main objective of this research was to assess the application of physically based landslide susceptibility models in Brazil, identifying their main results, the efficiency of the susceptibility mapping, the parameters used and the limitations of the tropical humid environment. To achieve that, the SHALSTAB, SINMAP and TRIGRS models were evaluated in studies in Brazil, along with the geotechnical values, scales, DEM grid resolutions and the results, based on the analysis of the agreement between predicted susceptibility and the landslide scar maps. Most of the studies in Brazil applied SHALSTAB, SINMAP and, to a lesser extent, the TRIGRS model. The majority of the research is concentrated in the Serra do Mar mountain range, a system of escarpments and rugged mountains that extends more than 1,500 km along the southern and southeastern Brazilian coast and is regularly affected by heavy rainfall that generates widespread mass movements. Most of these studies used conventional topographic maps at scales ranging from 1:2000 to 1:50000 and DEM grid resolutions between 2 and 20 m. Regarding the geotechnical and hydrological values, few studies use field-collected data, which could produce more efficient results, as indicated by the international literature.
Therefore, even though these models have enormous potential in susceptibility mapping, even for comparison
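As an illustration of the stability criterion SHALSTAB evaluates at each grid cell, the sketch below computes the critical steady-state rainfall for a cohesionless infinite slope coupled to steady-state shallow subsurface flow. All parameter values are illustrative, not taken from the Brazilian studies surveyed.

```python
import math

def shalstab_critical_steady_rainfall(T, a_over_b, slope_deg, phi_deg,
                                      density_ratio=1.6):
    """Critical steady-state recharge q_cr (e.g. m/day if T is in m^2/day
    and a_over_b in m) above which a cohesionless infinite slope becomes
    unstable, following the SHALSTAB formulation:
        q_cr = (T*sin(theta)/(a/b)) * (rho_s/rho_w) * (1 - tan(theta)/tan(phi))
    where a/b is the contributing area per unit contour width."""
    theta = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    return (T * math.sin(theta) / a_over_b) * density_ratio * (
        1.0 - math.tan(theta) / math.tan(phi))
```

Cells whose critical rainfall falls below a design storm are mapped as susceptible; note how steepening the slope lowers the threshold.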

  5. Modeling the Benchmark Active Control Technology Wind-Tunnel Model for Active Control Design Applications

    Science.gov (United States)

    Waszak, Martin R.

    1998-01-01

    This report describes the formulation of a model of the dynamic behavior of the Benchmark Active Controls Technology (BACT) wind tunnel model for active control design and analysis applications. The model is formed by combining the equations of motion for the BACT wind tunnel model with actuator models and a model of wind tunnel turbulence. The primary focus of this report is the development of the equations of motion from first principles by using Lagrange's equations and the principle of virtual work. A numerical form of the model is generated by making use of parameters obtained from both experiment and analysis. Comparisons between experimental and analytical data obtained from the numerical model show excellent agreement and suggest that simple coefficient-based aerodynamics are sufficient to accurately characterize the aeroelastic response of the BACT wind tunnel model. The equations of motion developed herein have been used to aid in the design and analysis of a number of flutter suppression controllers that have been successfully implemented.
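The structural core of such Lagrange-derived equations of motion is the familiar two-degree-of-freedom pitch-plunge section. The sketch below finds its coupled natural frequencies from det(K - w^2 M) = 0, which for a 2x2 system reduces to a quadratic in w^2; the numbers used are illustrative, not the BACT model's actual parameters.

```python
import math

def coupled_frequencies(m, I_alpha, S, k_h, k_alpha):
    """Natural frequencies (rad/s) of the undamped 2-DOF pitch-plunge section
        [[m, S], [S, I_alpha]] q'' + diag(k_h, k_alpha) q = 0,
    where S is the static mass imbalance coupling plunge and pitch.
    det(K - w^2 M) = 0 gives a*lam^2 - b*lam + c = 0 with lam = w^2."""
    a = m * I_alpha - S * S            # must be > 0 for a physical mass matrix
    b = m * k_alpha + I_alpha * k_h
    c = k_h * k_alpha
    disc = math.sqrt(b * b - 4.0 * a * c)
    lam1, lam2 = (b - disc) / (2.0 * a), (b + disc) / (2.0 * a)
    return math.sqrt(lam1), math.sqrt(lam2)
```

The inertial coupling pushes the two frequencies apart relative to the uncoupled plunge and pitch values sqrt(k_h/m) and sqrt(k_alpha/I_alpha).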

  6. Modeling the Benchmark Active Control Technology Wind-Tunnel Model for Application to Flutter Suppression

    Science.gov (United States)

    Waszak, Martin R.

    1996-01-01

    This paper describes the formulation of a model of the dynamic behavior of the Benchmark Active Controls Technology (BACT) wind-tunnel model for application to design and analysis of flutter suppression controllers. The model is formed by combining the equations of motion for the BACT wind-tunnel model with actuator models and a model of wind-tunnel turbulence. The primary focus of this paper is the development of the equations of motion from first principles using Lagrange's equations and the principle of virtual work. A numerical form of the model is generated using values for parameters obtained from both experiment and analysis. A unique aspect of the BACT wind-tunnel model is that it has upper- and lower-surface spoilers for active control. Comparisons with experimental frequency responses and other data show excellent agreement and suggest that simple coefficient-based aerodynamics are sufficient to accurately characterize the aeroelastic response of the BACT wind-tunnel model. The equations of motion developed herein have been used to assist the design and analysis of a number of flutter suppression controllers that have been successfully implemented.

  7. Modeling Markov switching ARMA-GARCH neural networks models and an application to forecasting stock returns.

    Science.gov (United States)

    Bildirici, Melike; Ersin, Özgür

    2014-01-01

    The study has two aims. The first aim is to propose a family of nonlinear GARCH models that incorporate fractional integration and asymmetric power properties to MS-GARCH processes. The second purpose of the study is to augment the MS-GARCH type models with artificial neural networks to benefit from the universal approximation properties to achieve improved forecasting accuracy. Therefore, the proposed Markov-switching MS-ARMA-FIGARCH, APGARCH, and FIAPGARCH processes are further augmented with MLP, Recurrent NN, and Hybrid NN type neural networks. The MS-ARMA-GARCH family and MS-ARMA-GARCH-NN family are utilized for modeling the daily stock returns in an emerging market, the Istanbul Stock Index (ISE100). Forecast accuracy is evaluated in terms of MAE, MSE, and RMSE error criteria and Diebold-Mariano equal forecast accuracy tests. The results suggest that the fractionally integrated and asymmetric power counterparts of Gray's MS-GARCH model provided promising results, while the best results are obtained for their neural network based counterparts. Further, among the models analyzed, the models based on the Hybrid-MLP and Recurrent-NN, the MS-ARMA-FIAPGARCH-HybridMLP, and MS-ARMA-FIAPGARCH-RNN provided the best forecast performances over the baseline single regime GARCH models and further, over the Gray's MS-GARCH model. Therefore, the models are promising for various economic applications.
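Every member of this model family builds on the GARCH(1,1) conditional-variance recursion; a minimal sketch of that building block follows (single regime, no ARMA mean equation, illustrative parameters rather than fitted ISE100 estimates).

```python
def garch11_variances(returns, omega, alpha, beta):
    """Conditional variance recursion of a GARCH(1,1) process,
        h_t = omega + alpha * r_{t-1}^2 + beta * h_{t-1},
    started at the unconditional variance omega / (1 - alpha - beta)."""
    h = [omega / (1.0 - alpha - beta)]
    for r in returns[:-1]:
        h.append(omega + alpha * r * r + beta * h[-1])
    return h
```

The MS-GARCH extensions discussed above run several such recursions, one per Markov regime, and the NN variants replace parts of the recursion with network outputs.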

  8. Modeling Markov Switching ARMA-GARCH Neural Networks Models and an Application to Forecasting Stock Returns

    Directory of Open Access Journals (Sweden)

    Melike Bildirici

    2014-01-01

Full Text Available The study has two aims. The first aim is to propose a family of nonlinear GARCH models that incorporate fractional integration and asymmetric power properties to MS-GARCH processes. The second purpose of the study is to augment the MS-GARCH type models with artificial neural networks to benefit from the universal approximation properties to achieve improved forecasting accuracy. Therefore, the proposed Markov-switching MS-ARMA-FIGARCH, APGARCH, and FIAPGARCH processes are further augmented with MLP, Recurrent NN, and Hybrid NN type neural networks. The MS-ARMA-GARCH family and MS-ARMA-GARCH-NN family are utilized for modeling the daily stock returns in an emerging market, the Istanbul Stock Index (ISE100). Forecast accuracy is evaluated in terms of MAE, MSE, and RMSE error criteria and Diebold-Mariano equal forecast accuracy tests. The results suggest that the fractionally integrated and asymmetric power counterparts of Gray’s MS-GARCH model provided promising results, while the best results are obtained for their neural network based counterparts. Further, among the models analyzed, the models based on the Hybrid-MLP and Recurrent-NN, the MS-ARMA-FIAPGARCH-HybridMLP, and MS-ARMA-FIAPGARCH-RNN provided the best forecast performances over the baseline single regime GARCH models and further, over the Gray’s MS-GARCH model. Therefore, the models are promising for various economic applications.

  9. Application of GPS Measurements for Ionospheric and Tropospheric Modelling

    Science.gov (United States)

    Rajendra Prasad, P.; Abdu, M. A.; Furlan, Benedito. M. P.; Koiti Kuga, Hélio

solar maximum period. In the equatorial region the irregularity structures are highly elongated in the north-south direction and discrete in the east-west direction, with dimensions of several hundred km. Given such a spatial distribution of irregularities, it is necessary to determine how often GPS receivers fail to provide navigation aid with the available constellation. The effects of scintillation on the performance of GPS navigation systems in the equatorial region can be analyzed by commissioning a few ground receivers; incidentally, there are few GPS receivers near these latitudes. Despite the recent advances in ionospheric and tropospheric delay modeling for geodetic applications of GPS, the models currently used are not very precise. The conventional operational ionosphere models, viz. the Klobuchar, Bent, and IRI models, have certain limitations in providing very precise accuracies at all latitudes. Tropospheric delay modeling also suffers in accuracy. The advances made both in computing power and in knowledge of the atmosphere motivate an effort to upgrade some of these models to improve delay corrections in GPS navigation. The ionospheric group delay corrections for orbit determination can be minimized using dual frequencies; in single-frequency measurements, however, the group delay correction is an involved task. In this paper an investigation is carried out to estimate the model coefficients of the ionosphere along with precise orbit determination modeling using GPS measurements. The locations of the ground-based receivers near the equator are known very precisely. Measurements from these ground stations to a precisely known satellite carrying a dual-frequency receiver are used for orbit determination. The ionosphere model parameters can be refined using spatially distributed GPS receivers spread over Brazil. The tropospheric delay effects are not significant for the satellites if an appropriate elevation angle is chosen. 
However it needs to be analyzed for user like
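The dual-frequency cancellation mentioned above works because the first-order ionospheric group delay scales as 1/f^2, so a fixed linear combination of the two pseudoranges removes it. A sketch using the GPS L1/L2 carrier frequencies (the range and delay values are illustrative):

```python
def ionosphere_free(P1, P2, f1=1575.42e6, f2=1227.60e6):
    """Dual-frequency ionosphere-free pseudorange combination
        P_IF = (f1^2*P1 - f2^2*P2) / (f1^2 - f2^2),
    written in an algebraically equivalent form that is numerically
    kinder for large ranges. The first-order group delay, which is
    proportional to 1/f^2, cancels exactly."""
    return P1 + f2 ** 2 * (P1 - P2) / (f1 ** 2 - f2 ** 2)
```

Because the delay on L2 is gamma = (f1/f2)^2 times the delay on L1, combining the two measurements recovers the geometric range.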

  10. Identification of periodic autoregressive moving average models and their application to the modeling of river flows

    Science.gov (United States)

    Tesfaye, Yonas Gebeyehu; Meerschaert, Mark M.; Anderson, Paul L.

    2006-01-01

    The generation of synthetic river flow samples that can reproduce the essential statistical features of historical river flows is useful for the planning, design, and operation of water resource systems. Most river flow series are periodically stationary; that is, their mean and covariance functions are periodic with respect to time. This article develops model identification and simulation techniques based on a periodic autoregressive moving average (PARMA) model to capture the seasonal variations in river flow statistics. The innovations algorithm is used to obtain parameter estimates. An application to monthly flow data for the Fraser River in British Columbia is included. A careful statistical analysis of the PARMA model residuals, including a truncated Pareto model for the extreme tails, produces a realistic simulation of these river flows.
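A periodic AR(1), the simplest member of the PARMA family, can be simulated directly: each season has its own mean, AR coefficient, and innovation scale. The seasonal parameters below are illustrative, not the Fraser River estimates.

```python
import random

def simulate_par1(phi, sigma, mu, n_years, seed=1):
    """Simulate a periodic AR(1) process with season length s = len(phi):
        X_t = phi[t mod s] * X_{t-1} + sigma[t mod s] * Z_t,  Z_t ~ N(0,1),
    reported as mu[t mod s] + X_t so each season has its own mean level."""
    rng = random.Random(seed)
    s = len(phi)
    x, series = 0.0, []
    for t in range(n_years * s):
        m = t % s
        x = phi[m] * x + sigma[m] * rng.gauss(0.0, 1.0)
        series.append(mu[m] + x)
    return series
```

Averaging the simulated values by calendar month recovers the periodic mean function, which is the defining feature of periodically stationary series.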

  11. Robust optimization modelling with applications to industry and environmental problems

    Science.gov (United States)

    Chaerani, Diah; Dewanto, Stanley P.; Lesmana, Eman

    2017-10-01

Robust Optimization (RO) modeling is one of the existing methodologies for handling data uncertainty in optimization problems. The main challenge in RO is how and when we can reformulate the robust counterpart of an uncertain problem as a computationally tractable optimization problem, or at least approximate the robust counterpart by a tractable problem. By its definition, the robust counterpart depends strongly on how we choose the uncertainty set; as a consequence, we can meet this challenge only if this set is chosen in a suitable way. Development of RO has been rapid: in 2004 a new approach called Adjustable Robust Optimization (ARO) was introduced to handle uncertain problems in which some decision variables must be decided as "wait and see" variables, unlike classic RO, which models all decision variables as "here and now". In ARO, the uncertain problem can be considered a multistage decision problem, and the decision variables involved become wait-and-see variables. In this paper we present applications of both RO and ARO, and we briefly present all results to underline the importance of RO and ARO in many real-life problems.
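For the simplest uncertainty set, the robust counterpart is available in closed form: under box (interval) uncertainty on a linear constraint's coefficient vector, the worst case of a'x <= b over all a_i in [a_nom_i - delta_i, a_nom_i + delta_i] is a_nom'x + sum_i delta_i*|x_i| <= b. A sketch of that tractable reformulation:

```python
def robust_feasible(x, a_nom, delta, b):
    """Robust counterpart of the uncertain constraint a'x <= b under box
    uncertainty a_i in [a_nom_i - delta_i, a_nom_i + delta_i]:
    the worst-case left-hand side is a_nom'x + sum_i delta_i * |x_i|."""
    worst = sum(an * xi + d * abs(xi) for an, xi, d in zip(a_nom, x, delta))
    return worst <= b
```

Note that a point can be feasible for the nominal data yet robust-infeasible, which is exactly the "immunization against uncertainty" RO trades optimality for.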

  12. Simultaneous Clustering and Model Selection: Algorithm, Theory and Applications.

    Science.gov (United States)

    Li, Zhuwen; Cheong, Loong-Fah; Yang, Shuoguang; Toh, Kim-Chuan

    2017-08-14

While clustering has been well studied in the past decade, model selection has drawn much less attention due to the difficulty of the problem. In this paper, we address both problems in a joint manner by recovering an ideal affinity tensor from an imperfect input. By taking into account the relationship of the affinities induced by the cluster structures, we are able to significantly improve the affinity input, for example by repairing entries corrupted by gross outliers. More importantly, the recovered ideal affinity tensor also directly indicates the number of clusters and their membership, thus solving model selection and clustering jointly. To enforce the requisite global consistency in the affinities demanded by the cluster structure, we impose a number of constraints; among others, the tensor should be low rank and sparse, and it should obey what we call the rank-1 sum constraint. To solve this highly non-smooth and non-convex problem, we exploit the mathematical structures, and express the original problem in an equivalent form amenable to numerical optimization and convergence analysis. To scale to large problem sizes, we also propose an alternative formulation, so that those problems can be efficiently solved via stochastic optimization in an online fashion. We evaluate our algorithm with different applications to demonstrate its superiority, and show it can adapt to a large variety of settings.

  13. A general method for modeling population dynamics and its applications.

    Science.gov (United States)

    Shestopaloff, Yuri K

    2013-12-01

Studying populations, be it a microbe colony or mankind, is important for understanding how complex systems evolve and exist. Such knowledge also often provides insights into evolution, history and different aspects of human life. By and large, populations' prosperity and decline are about the transformation of certain resources into the quantity and other characteristics of populations through growth, replication, expansion and acquisition of resources. We introduce a general model of population change, applicable to different types of populations, which interconnects numerous factors influencing population dynamics, such as nutrient influx and nutrient consumption, reproduction period, reproduction rate, etc. It is also possible to take into account specific growth features of individual organisms. We considered two recently discovered distinct growth scenarios: first, when organisms do not change their grown mass regardless of nutrient availability, and second, when organisms can reduce their grown mass several-fold in a nutritionally poor environment. We found that nutrient supply and reproduction period are the two major factors influencing the shape of population growth curves. There is also a difference in population dynamics between these two groups. Organisms belonging to the second group are significantly more adaptive to reduction of nutrients and far more resistant to extinction. Also, such organisms have substantially more frequent, smaller-amplitude fluctuations of population quantity for the same periodic nutrient supply (compared to the first group). The proposed model adequately describes virtually any possible growth scenario, including complex ones with periodic and irregular nutrient supply and other changing parameters, which present approaches cannot do.
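A hypothetical minimal sketch in the spirit of this model class: a nutrient pool fed at a constant rate is consumed by the population, consumption is converted into new biomass, and individuals die at a constant per-capita rate. The parameter names and values are illustrative, not the paper's formulation.

```python
def simulate_population(n0, influx, uptake, conversion, mortality, steps):
    """Discrete-time nutrient-driven population model. Each period the pool
    gains `influx` units of nutrient; the population consumes up to
    uptake*n of it (limited by what is available); intake is converted
    to new biomass at rate `conversion`; a fraction `mortality` dies."""
    n, pool, history = n0, 0.0, []
    for _ in range(steps):
        pool += influx
        intake = min(pool, uptake * n)   # consumption limited by supply
        pool -= intake
        n = max(n + conversion * intake - mortality * n, 0.0)
        history.append(n)
    return history
```

Once the population becomes supply-limited, it settles at the equilibrium conversion*influx/mortality, illustrating how nutrient supply shapes the growth curve.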

  14. Laboratory tests of IEC DER object models for grid applications.

    Energy Technology Data Exchange (ETDEWEB)

    Blevins, John D. (PE Salt River Project, Phoenix, AZ); Menicucci, David F.; Byrd, Thomas, Jr. (,; .); Gonzalez, Sigifredo; Ginn, Jerry W.; Ortiz-Moyet, Juan (Primecore, Inc.)

    2007-02-01

    This report describes a Cooperative Research and Development Agreement (CRADA) between Salt River Project Agricultural Improvement and Power District (SRP) and Sandia National Laboratories to jointly develop advanced methods of controlling distributed energy resources (DERs) that may be located within SRP distribution systems. The controls must provide a standardized interface to allow plug-and-play capability and should allow utilities to take advantage of advanced capabilities of DERs to provide a value beyond offsetting load power. To do this, Sandia and SRP field-tested the IEC 61850-7-420 DER object model (OM) in a grid environment, with the goal of validating whether the model is robust enough to be used in common utility applications. The diesel generator OM tested was successfully used to accomplish basic genset control and monitoring. However, as presently constituted it does not enable plug-and-play functionality. Suggestions are made of aspects of the standard that need further development and testing. These problems are far from insurmountable and do not imply anything fundamentally unsound or unworkable in the standard.

  15. Modelling the application of integrated photonic spectrographs to astronomy

    Science.gov (United States)

    Harris, R. J.; Allington-Smith, J. R.

    2012-09-01

One of the well-known problems of producing instruments for Extremely Large Telescopes is that their size (and hence cost) scales rapidly with telescope aperture. To try to break this relation, alternative new technologies have been proposed, such as the Integrated Photonic Spectrograph (IPS). Due to its diffraction-limited nature, the IPS is claimed to defeat the harsh scaling law applying to conventional instruments. The problem with astronomical applications is that, unlike conventional photonics, they are not usually fed by diffraction-limited sources. This means that in order to retain throughput and spatial information the IPS will require multiple Arrayed Waveguide Gratings (AWGs) and a photonic lantern. We investigate the implications of these extra components for the size of the instrument. We also investigate the potential size advantage of using an IPS as opposed to conventional monolithic optics. To do this, we have constructed toy models of IPS and conventional image-sliced spectrographs to calculate the relative instrument sizes and their requirements in terms of numbers of detector pixels. Using these models we can quantify the relative size/cost advantage for different types of instrument by varying different parameters, e.g. multiplex gain and spectral resolution. This is accompanied by an assessment of the uncertainties in these predictions, which may prove crucial for the planning of future instrumentation for highly multiplexed spectroscopy.

  16. Application of Stochastic Partial Differential Equations to Reservoir Property Modelling

    KAUST Repository

    Potsepaev, R.

    2010-09-06

Existing algorithms of geostatistics for stochastic modelling of reservoir parameters require a mapping (the 'uvt-transform') into the parametric space and reconstruction of a stratigraphic co-ordinate system. The parametric space can be considered to represent a pre-deformed and pre-faulted depositional environment. Existing approximations of this mapping in many cases cause significant distortions to the correlation distances. In this work we propose a coordinate-free approach for modelling stochastic textures through the application of stochastic partial differential equations. By avoiding the construction of a uvt-transform and stratigraphic coordinates, one can generate realizations directly in the physical space in the presence of deformations and faults. In particular, the solution of the modified Helmholtz equation driven by Gaussian white noise is a zero-mean Gaussian stationary random field with exponential correlation function (in 3-D). This equation can be used to generate realizations in parametric space. In order to sample in physical space we introduce a stochastic elliptic PDE with tensor coefficients, where the tensor is related to correlation anisotropy and its variation in physical space.
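A hypothetical one-dimensional analogue of this construction: the stationary solution of the corresponding 1-D SPDE is an Ornstein-Uhlenbeck process, whose exact grid discretization is an AR(1) recursion with exponential correlation exp(-kappa*|h|). The sketch below generates such a field and checks the correlation empirically; all parameter values are illustrative.

```python
import math
import random

def sample_exponential_field(n, dx, kappa, sigma=1.0, seed=7):
    """Stationary Gaussian field on a 1-D grid with correlation
    exp(-kappa*|h|) and variance sigma^2, via the exact AR(1)
    discretization of the Ornstein-Uhlenbeck process."""
    rng = random.Random(seed)
    a = math.exp(-kappa * dx)                 # lag-one correlation
    innov_sd = sigma * math.sqrt(1.0 - a * a)
    u = rng.gauss(0.0, sigma)                 # start in the stationary law
    field = [u]
    for _ in range(n - 1):
        u = a * u + rng.gauss(0.0, innov_sd)
        field.append(u)
    return field

def empirical_corr(field, lag):
    """Sample autocorrelation of the field at a given grid lag."""
    n = len(field) - lag
    mean = sum(field) / len(field)
    cov = sum((field[i] - mean) * (field[i + lag] - mean) for i in range(n)) / n
    var = sum((x - mean) ** 2 for x in field) / len(field)
    return cov / var
```

In 3-D the paper's modified Helmholtz equation plays the same role, with the tensor coefficients steering the correlation anisotropy.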

  17. Risk management modeling and its application in maritime safety

    Science.gov (United States)

    Qin, Ting-Rong; Chen, Wei-Jiong; Zeng, Xiang-Kun

    2008-12-01

Quantified risk assessment (QRA) needs mathematicization of risk theory. However, attention has been paid almost exclusively to applications of assessment methods, which has led to neglect of research into fundamental theories, such as the relationships among risk, safety, danger, and so on. In order to solve this problem, as a first step, fundamental theoretical relationships about risk and risk management were analyzed for this paper in the light of mathematics, and then illustrated with some charts. Second, man-machine-environment-management (MMEM) theory was introduced into risk theory to analyze some properties of risk. On the basis of this, a three-dimensional model of risk management was established that includes a goal dimension, a management dimension, and an operation dimension. This goal-management-operation (GMO) model was explained, and emphasis was then laid on the discussion of the risk flowchart (operation dimension), which lays the groundwork for further study of risk management and qualitative and quantitative assessment. Next, the relationship between Formal Safety Assessment (FSA) and risk management was researched. This revealed that the FSA method, which the International Maritime Organization (IMO) is actively promoting, comes from risk management theory. Finally, conclusions were drawn about how to apply this risk management method to concrete fields efficiently and conveniently, as well as areas where further research is required.

  18. Permeability of Two Parachute Fabrics: Measurements, Modeling, and Application

    Science.gov (United States)

    Cruz, Juan R.; O'Farrell, Clara; Hennings, Elsa; Runnells, Paul

    2017-01-01

    Two parachute fabrics, described by Parachute Industry Specifications PIA-C-7020D Type I and PIA-C-44378D Type I, were tested to obtain their permeabilities in air (i.e., flow-through volume of air per area per time) over the range of differential pressures from 0.146 psf (7 Pa) to 25 psf (1197 Pa). Both fabrics met their specification permeabilities at the standard differential pressure of 0.5 inch of water (2.60 psf, 124 Pa). The permeability results were transformed into an effective porosity for use in calculations related to parachutes. Models were created that related the effective porosity to the unit Reynolds number for each of the fabrics. As an application example, these models were used to calculate the total porosities for two geometrically-equivalent subscale Disk-Gap-Band (DGB) parachutes fabricated from each of the two fabrics, and tested at the same operating conditions in a wind tunnel. Using the calculated total porosities and the results of the wind tunnel tests, the drag coefficient of a geometrically-equivalent full-scale DGB operating on Mars was estimated.
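The permeability-to-effective-porosity transformation described above can be sketched as the ratio of the measured through-flow velocity to the ideal velocity an inviscid jet would reach across the same pressure drop, a common definition in the parachute literature. The numerical values below are illustrative, not the PIA specification data.

```python
import math

def effective_porosity(U, dp, rho=1.225):
    """Effective porosity c = U / sqrt(2*dp/rho): measured through-flow
    velocity U (m/s, i.e. volume flow per unit area) divided by the ideal
    Bernoulli velocity across pressure difference dp (Pa); rho is the air
    density (kg/m^3)."""
    return U / math.sqrt(2.0 * dp / rho)
```

Because U grows more slowly than sqrt(dp) for real fabrics, the effective porosity falls with increasing pressure difference, which is why the paper models it as a function of the unit Reynolds number rather than a constant.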

  19. Permeability of Two Parachute Fabrics - Measurements, Modeling, and Application

    Science.gov (United States)

    Cruz, Juan R.; O'Farrell, Clara; Hennings, Elsa; Runnells, Paul

    2016-01-01

    Two parachute fabrics, described by Parachute Industry Specifications PIA-C-7020D Type I and PIA-C-44378D Type I, were tested to obtain their permeabilities in air (i.e., flow-through volume of air per area per time) over the range of differential pressures from 0.146 psf (7 Pa) to 25 psf (1197 Pa). Both fabrics met their specification permeabilities at the standard differential pressure of 0.5 inch of water (2.60 psf, 124 Pa). The permeability results were transformed into an effective porosity for use in calculations related to parachutes. Models were created that related the effective porosity to the unit Reynolds number for each of the fabrics. As an application example, these models were used to calculate the total porosities for two geometrically-equivalent subscale Disk-Gap-Band (DGB) parachutes fabricated from each of the two fabrics, and tested at the same operating conditions in a wind tunnel. Using the calculated total porosities and the results of the wind tunnel tests, the drag coefficient of a geometrically-equivalent full-scale DGB operating on Mars was estimated.

  20. 3-dimensional modeling of transcranial magnetic stimulation: Design and application

    Science.gov (United States)

    Salinas, Felipe Santiago

    Over the past three decades, transcranial magnetic stimulation (TMS) has emerged as an effective tool for many research, diagnostic and therapeutic applications in humans. TMS delivers highly localized brain stimulations via non-invasive externally applied magnetic fields. This non-invasive, painless technique provides researchers and clinicians a unique tool capable of stimulating both the central and peripheral nervous systems. However, a complete analysis of the macroscopic electric fields produced by TMS has not yet been performed. In this dissertation, we present a thorough examination of the total electric field induced by TMS in air and a realistic head model with clinically relevant coil poses. In the first chapter, a detailed account of TMS coil wiring geometry was shown to provide significant improvements in the accuracy of primary E-field calculations. Three-dimensional models which accounted for the TMS coil's wire width, height, shape and number of turns clearly improved the fit of calculated-to-measured E-fields near the coil body. Detailed primary E-field models were accurate up to the surface of the coil body (within 0.5% of measured values) whereas simple models were often inadequate (up to 32% different from measured). In the second chapter, we addressed the importance of the secondary E-field created by surface charge accumulation during TMS using the boundary element method (BEM). 3-D models were developed using simple head geometries in order to test the model and compare it with measured values. The effects of tissue geometry, size and conductivity were also investigated. Finally, a realistic head model was used to assess the effect of multiple surfaces on the total E-field. We found that secondary E-fields have the greatest impact at areas in close proximity to each tissue layer. Throughout the head, the secondary E-field magnitudes were predominantly between 25% and 45% of the primary E-fields magnitude. The direction of the secondary E

  1. Modeling survival: application of the Andersen-Gill model to Yellowstone grizzly bears

    Science.gov (United States)

    Johnson, Christopher J.; Boyce, Mark S.; Schwartz, Charles C.; Haroldson, Mark A.

    2004-01-01

     Wildlife ecologists often use the Kaplan-Meier procedure or Cox proportional hazards model to estimate survival rates, distributions, and magnitude of risk factors. The Andersen-Gill formulation (A-G) of the Cox proportional hazards model has seen limited application to mark-resight data but has a number of advantages, including the ability to accommodate left-censored data, time-varying covariates, multiple events, and discontinuous intervals of risks. We introduce the A-G model including structure of data, interpretation of results, and assessment of assumptions. We then apply the model to 22 years of radiotelemetry data for grizzly bears (Ursus arctos) of the Greater Yellowstone Grizzly Bear Recovery Zone in Montana, Idaho, and Wyoming, USA. We used Akaike's Information Criterion (AICc) and multi-model inference to assess a number of potentially useful predictive models relative to explanatory covariates for demography, human disturbance, and habitat. Using the most parsimonious models, we generated risk ratios, hypothetical survival curves, and a map of the spatial distribution of high-risk areas across the recovery zone. Our results were in agreement with past studies of mortality factors for Yellowstone grizzly bears. Holding other covariates constant, mortality was highest for bears that were subjected to repeated management actions and inhabited areas with high road densities outside Yellowstone National Park. Hazard models developed with covariates descriptive of foraging habitats were not the most parsimonious, but they suggested that high-elevation areas offered lower risks of mortality when compared to agricultural areas.
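Under any proportional-hazards formulation (Cox or Andersen-Gill), covariate effects enter multiplicatively as hazard ratios, and survival curves follow from the cumulative hazard. A hypothetical sketch with a constant baseline hazard; the coefficient and covariate values are illustrative, not the paper's fitted grizzly-bear estimates.

```python
import math

def hazard_ratio(beta, dx):
    """Hazard ratio for a covariate change dx under proportional hazards:
    h(t | x + dx) / h(t | x) = exp(beta * dx), independent of t."""
    return math.exp(beta * dx)

def survival(t, baseline_rate, beta, x):
    """Survival function when the baseline hazard is a constant lambda0
    (exponential model): S(t | x) = exp(-lambda0 * t * exp(beta * x))."""
    return math.exp(-baseline_rate * t * math.exp(beta * x))
```

A positive coefficient (e.g. road density) scales the hazard up and pulls the survival curve down, which is how the paper's risk ratios and hypothetical survival curves are read.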

  2. The "Biopsychosocial Model": 40 years of application in Psychiatry.

    Science.gov (United States)

    Papadimitriou, G

    2017-01-01

of onset of an illness's manifestation, and they can also protect a vulnerable person from the disease. Stressful experiences modify immunological response and influence treatment compliance. Non-adherence to pharmacotherapy, as well as to the psychosocial interventions, may cause defective recovery of psychosocial functioning, recurrence of the disorder, as well as insufficient use of health resources and a higher health care cost. The psychoeducation of patients and their relatives by the application of the biopsychosocial model plays an important role in psychiatric therapeutics, and it may also be used via the Internet in the frame of telepsychiatry. Results from neuroimaging studies have shown that the different kinds of human experiences, traumatic or therapeutic, have measurable influences on brain function. Psychotherapy may modify the neuronal connections of the brain in the frame of its plasticity, as was found by the discovery of synaptogenesis in response to learning, and can thus be considered not only as a strictly psychological but also as a biopsychosocial form of treatment. Among the disadvantages of the biopsychosocial model have been reported the lack of a concise theoretical framework regarding its function and content, that it is complicated, difficulties in its coordination and assignment of responsibilities, as well as problems with the education on it being multifaceted. The biopsychosocial model has been criticized on the grounds that it does not constitute a scientific or philosophical model, that it does not provide an answer to the crucial question of how the biological, psychological and social variables interact in the disease's expression, that it does not provide guidance on the exact time of its application and, finally, that it allows for a wide range of interventions without providing specific guidelines for a concrete therapeutic scheme. 
The person-centered diagnosis is based on the biopsychosocial model, connects science with humanism and uses all the possible ways so

  3. Application of a free parameter model to plastic scintillation samples

    Energy Technology Data Exchange (ETDEWEB)

    Tarancon Sanz, Alex, E-mail: alex.tarancon@ub.edu [Departament de Quimica Analitica, Universitat de Barcelona, Diagonal 647, E-08028 Barcelona (Spain); Kossert, Karsten, E-mail: Karsten.Kossert@ptb.de [Physikalisch-Technische Bundesanstalt (PTB), Bundesallee 100, 38116 Braunschweig (Germany)

    2011-08-21

    In liquid scintillation (LS) counting, the CIEMAT/NIST efficiency tracing method and the triple-to-double coincidence ratio (TDCR) method have proved their worth for reliable activity measurements of a number of radionuclides. In this paper, an extended approach to apply a free-parameter model to samples containing a mixture of solid plastic scintillation microspheres and radioactive aqueous solutions is presented. Several beta-emitting radionuclides were measured in a TDCR system at PTB. For the application of the free parameter model, the energy loss in the aqueous phase must be taken into account, since this portion of the particle energy does not contribute to the creation of scintillation light. The energy deposit in the aqueous phase is determined by means of Monte Carlo calculations applying the PENELOPE software package. To this end, great efforts were made to model the geometry of the samples. Finally, a new geometry parameter was defined, which was determined by means of a tracer radionuclide with known activity. This makes the analysis of experimental TDCR data of other radionuclides possible. The deviations between the determined activity concentrations and reference values were found to be lower than 3%. The outcome of this research work is also important for a better understanding of liquid scintillation counting. In particular the influence of (inverse) micelles, i.e. the aqueous spaces embedded in the organic scintillation cocktail, can be investigated. The new approach makes clear that it is important to take the energy loss in the aqueous phase into account. In particular for radionuclides emitting low-energy electrons (e.g. M-Auger electrons from {sup 125}I), this effect can be very important.
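The TDCR idea can be sketched with a simplified free-parameter calculation: assume a mean number of detectable photons, Poisson statistics, and light shared equally among three photomultipliers, so each PMT misses with probability exp(-m/3). The real model additionally folds in the emitter's energy spectrum, ionization quenching, and (here) the energy lost in the aqueous phase; this sketch keeps only the counting statistics.

```python
import math

def tdcr(light_yield):
    """Triple and (logical sum of) double coincidence efficiencies for a
    3-PMT counter. With mean light_yield photons shared equally, each PMT
    fires with probability p = 1 - exp(-light_yield/3); returns
    (triple efficiency, double-or-more efficiency, their TDCR ratio)."""
    p = 1.0 - math.exp(-light_yield / 3.0)
    triple = p ** 3
    double = 3.0 * p * p * (1.0 - p) + triple   # at least two PMTs fire
    return triple, double, triple / double
```

The measured TDCR ratio pins down the free parameter (the effective light yield), from which the absolute counting efficiency, and hence the activity, follows.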

  4. The development of a sports statistics web application : Sports Analytics and Data Models for a sports data web application

    OpenAIRE

    Alvarsson, Andreas

    2017-01-01

Sports and technology have always co-operated to bring better and more specific sports statistics. The collection of sports game data, as well as the ability to generate valuable sports statistics from it, is growing. This thesis investigates the development of a sports statistics application that should be able to collect sports game data, structure the data according to suitable data models and show statistics in a proper way. The application was set to be a web application that was developed u...

  5. Physiologically Based Pharmacokinetic (PBPK) Modeling and Simulation Approaches: A Systematic Review of Published Models, Applications, and Model Verification.

    Science.gov (United States)

    Sager, Jennifer E; Yu, Jingjing; Ragueneau-Majlessi, Isabelle; Isoherranen, Nina

    2015-11-01

    Modeling and simulation of drug disposition has emerged as an important tool in drug development, clinical study design and regulatory review, and the number of physiologically based pharmacokinetic (PBPK) modeling related publications and regulatory submissions has risen dramatically in recent years. However, the extent of use of PBPK modeling by researchers and the public availability of models have not been systematically evaluated. This review evaluates PBPK-related publications to 1) identify the common applications of PBPK modeling; 2) determine ways in which models are developed; 3) establish how model quality is assessed; and 4) provide a list of publicly available PBPK models for sensitive P450 and transporter substrates as well as selective inhibitors and inducers. PubMed searches were conducted using the terms "PBPK" and "physiologically based pharmacokinetic model" to collect published models. Only papers on PBPK modeling of pharmaceutical agents in humans published in English between 2008 and May 2015 were reviewed. A total of 366 PBPK-related articles met the search criteria, with the number of articles published per year rising steadily. Published models were most commonly used for drug-drug interaction predictions (28%), followed by interindividual variability and general clinical pharmacokinetic predictions (23%), formulation or absorption modeling (12%), and predicting age-related changes in pharmacokinetics and disposition (10%). In total, 106 models of sensitive substrates, inhibitors, and inducers were identified. An in-depth analysis of the model development and verification revealed a lack of consistency in model development and quality assessment practices, demonstrating a need for development of best-practice guidelines. Copyright © 2015 by The American Society for Pharmacology and Experimental Therapeutics.

  6. Plant growth modelling and applications: the increasing importance of plant architecture in growth models.

    Science.gov (United States)

    Fourcaud, Thierry; Zhang, Xiaopeng; Stokes, Alexia; Lambers, Hans; Körner, Christian

    2008-05-01

    Modelling plant growth allows us to test hypotheses and carry out virtual experiments concerning plant growth processes that could otherwise take years in field conditions. The visualization of growth simulations allows us to see directly and vividly the outcome of a given model and provides us with an instructive tool useful for agronomists and foresters, as well as for teaching. Functional-structural (FS) plant growth models are nowadays particularly important for integrating biological processes with environmental conditions in 3-D virtual plants, and provide the basis for more advanced research in plant sciences. In this viewpoint paper, we ask the following questions. Are we modelling the correct processes that drive plant growth, and is growth driven mostly by sink or source activity? In current models, is the importance of soil resources (nutrients, water, temperature and their interaction with meristematic activity) considered adequately? Do classic models account for architectural adjustment as well as integrating the fundamental principles of development? Whilst answering these questions with the available data in the literature, we put forward the opinion that plant architecture and sink activity must be pushed to the centre of plant growth models. In natural conditions, sinks will more often drive growth than source activity, because sink activity is often controlled by finite soil resources or developmental constraints. PMA06: This viewpoint paper also serves as an introduction to this Special Issue devoted to plant growth modelling, which includes new research covering areas stretching from cell growth to biomechanics. All papers were presented at the Second International Symposium on Plant Growth Modeling, Simulation, Visualization and Applications (PMA06), held in Beijing, China, from 13-17 November, 2006. Although a large number of papers are devoted to FS models of agricultural and forest crop species, physiological and genetic processes have

  7. A GUIDED SWAT MODEL APPLICATION ON SEDIMENT YIELD MODELING IN PANGANI RIVER BASIN: LESSONS LEARNT

    Directory of Open Access Journals (Sweden)

    Preksedis M. Ndomba

    2008-01-01

    Full Text Available The overall objective of this paper is to report on the lessons learnt from applying the Soil and Water Assessment Tool (SWAT) in a well-guided sediment yield modelling study. The study area is the upstream part of the Pangani River Basin (PRB), the Nyumba Ya Mungu (NYM) reservoir catchment, located in the north-eastern part of Tanzania. It should be noted that previous modelling exercises in the region applied SWAT on the presumption that inter-rill or sheet erosion was the dominant erosion type. In contrast, in this study the SWAT model application was guided by the results of an analysis of high-temporal-resolution sediment flow data and hydro-meteorological data. The runoff component of the SWAT model was calibrated on six years (1977-1982) of historical daily streamflow data. The sediment component of the model was calibrated using one year (1977-1978) of daily sediment loads estimated from a rating curve derived from a one-hydrological-year sampling programme (March to November 2005). Long-term (1969-2005, i.e. 37 years) simulation results of the SWAT model were validated against downstream NYM reservoir sediment accumulation information. The SWAT model captured 56 percent of the variance (CE) and underestimated the observed daily sediment loads by 0.9 percent according to the Total Mass Control (TMC) performance index during a normal wet hydrological year, i.e. between November 1, 1977 and October 31, 1978, the calibration period. The SWAT model predicted the long-term sediment catchment yield satisfactorily, with a relative error of 2.6 percent. The model has also identified erosion sources spatially and has replicated some erosion processes as determined in other studies and field observations in the PRB. This result suggests that for catchments where sheet erosion is dominant, the SWAT model may substitute for a sediment rating curve. However, the SWAT model could not capture the dynamics of sediment load delivery to the catchment outlet in some seasons.
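
The CE and TMC performance indices quoted above can be computed as follows. CE is the standard coefficient-of-efficiency definition; the paper's exact TMC formula is not given, so the sign convention here is an assumption:

```python
def nash_sutcliffe(observed, simulated):
    """Coefficient of efficiency (CE): the fraction of the variance of
    the observations explained by the model; 1.0 is a perfect fit,
    0.0 means no better than the observed mean."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

def total_mass_error(observed, simulated):
    """Relative error of the simulated cumulative load, in percent
    (negative = underestimation), in the spirit of the Total Mass
    Control (TMC) index."""
    return 100.0 * (sum(simulated) - sum(observed)) / sum(observed)
```
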

  8. Plant Growth Modelling and Applications: The Increasing Importance of Plant Architecture in Growth Models

    Science.gov (United States)

    Fourcaud, Thierry; Zhang, Xiaopeng; Stokes, Alexia; Lambers, Hans; Körner, Christian

    2008-01-01

    Background Modelling plant growth allows us to test hypotheses and carry out virtual experiments concerning plant growth processes that could otherwise take years in field conditions. The visualization of growth simulations allows us to see directly and vividly the outcome of a given model and provides us with an instructive tool useful for agronomists and foresters, as well as for teaching. Functional–structural (FS) plant growth models are nowadays particularly important for integrating biological processes with environmental conditions in 3-D virtual plants, and provide the basis for more advanced research in plant sciences. Scope In this viewpoint paper, we ask the following questions. Are we modelling the correct processes that drive plant growth, and is growth driven mostly by sink or source activity? In current models, is the importance of soil resources (nutrients, water, temperature and their interaction with meristematic activity) considered adequately? Do classic models account for architectural adjustment as well as integrating the fundamental principles of development? Whilst answering these questions with the available data in the literature, we put forward the opinion that plant architecture and sink activity must be pushed to the centre of plant growth models. In natural conditions, sinks will more often drive growth than source activity, because sink activity is often controlled by finite soil resources or developmental constraints. PMA06 This viewpoint paper also serves as an introduction to this Special Issue devoted to plant growth modelling, which includes new research covering areas stretching from cell growth to biomechanics. All papers were presented at the Second International Symposium on Plant Growth Modeling, Simulation, Visualization and Applications (PMA06), held in Beijing, China, from 13–17 November, 2006. Although a large number of papers are devoted to FS models of agricultural and forest crop species, physiological and genetic

  9. Improved Nuclear Reactor and Shield Mass Model for Space Applications

    Science.gov (United States)

    Robb, Kevin

    2004-01-01

    New technologies are being developed to explore the distant reaches of the solar system. Beyond Mars, solar energy is inadequate to power advanced scientific instruments. One technology that can meet the energy requirements is the space nuclear reactor. The nuclear reactor is used as a heat source for which a heat-to-electricity conversion system is needed. Examples of such conversion systems are the Brayton, Rankine, and Stirling cycles. Since launch cost is proportional to the amount of mass to lift, mass is always a concern in designing spacecraft. Estimations of system masses are an important part in determining the feasibility of a design. I worked under Michael Barrett in the Thermal Energy Conversion Branch of the Power & Electric Propulsion Division. An in-house Closed Cycle Engine Program (CCEP) is used for the design and performance analysis of closed-Brayton-cycle energy conversion systems for space applications. This program also calculates the system mass including the heat source. CCEP uses the subroutine RSMASS, which has been updated to RSMASS-D, to estimate the mass of the reactor. RSMASS was developed in 1986 at Sandia National Laboratories to quickly estimate the mass of multi-megawatt nuclear reactors for space applications. In response to an emphasis on lower-power reactors, RSMASS-D was developed in 1997 and is based on the SP-100 liquid metal cooled reactor. The subroutine calculates the mass of reactor components such as the safety systems, instrumentation and control, radiation shield, structure, reflector, and core. The major improvements in RSMASS-D are that it uses higher fidelity calculations, is easier to use, and automatically optimizes the system mass. RSMASS-D is accurate within 15% of actual data while RSMASS is only accurate within 50%. My goal this summer was to learn the FORTRAN 77 programming language and update the CCEP program with the RSMASS-D model.

  10. Modeling of Photonic Band Gap Crystals and Applications

    Energy Technology Data Exchange (ETDEWEB)

    El-Kady, Ihab Fathy [Iowa State Univ., Ames, IA (United States)

    2002-01-01

    In this work, the authors have undertaken a theoretical approach to the complex problem of modeling the flow of electromagnetic waves in photonic crystals. The focus is to address the feasibility of using the exciting phenomena of photonic band gaps (PBG) in actual applications. The authors start by providing analytical derivations of the computational electromagnetic methods used in their work. They also present a detailed explanation of the physics underlying each approach, as well as a comparative study of the strengths and weaknesses of each method. The Plane Wave Expansion, Transfer Matrix, and Finite-Difference Time-Domain methods are addressed. They also introduce a new theoretical approach, the Modal Expansion Method. They then shift the attention to actual applications. They begin with a discussion of 2D photonic crystal waveguides. The structure addressed consists of a 2D hexagonal structure of air cylinders in a layered dielectric background. Comparison with the performance of a conventional guide is made, as well as suggestions for enhancing it. The studies provide an upper theoretical limit on the performance of such guides, as they assume no crystal imperfections and non-absorbing media. Next, they study 3D metallic PBG materials at near infrared and optical wavelengths. The main objective is to study the importance of absorption in the metal and the suitability of observing photonic band gaps in such structures. They study simple cubic structures where the metallic scatterers are either cubes or interconnected metallic rods. Several metals are studied (aluminum, gold, copper, and silver). The effect of topology is addressed and isolated metallic cubes are found to be less lossy than the connected rod structures. The results reveal that the best performance is obtained by choosing metals with a large negative real part of the dielectric function, together with a relatively small imaginary part. Finally, they point out a new direction in photonic crystal
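
Although the thesis concerns 2D and 3D structures, the transfer-matrix method it lists is easiest to see in its 1D form, where the stop band of a quarter-wave Bragg stack appears as near-total reflectance. The refractive indices and stack depth below are illustrative assumptions, not values from the thesis:

```python
import cmath
import math

def bragg_reflectance(wavelength, n_hi=2.30, n_lo=1.38, periods=8,
                      design_wl=1.0, n_in=1.0, n_sub=1.52):
    """Normal-incidence reflectance of a quarter-wave (n_hi/n_lo) Bragg
    stack, computed with the 2x2 characteristic matrix of thin-film
    optics. Wavelengths are in units of design_wl."""
    m00, m01, m10, m11 = 1.0, 0.0, 0.0, 1.0    # start from the identity
    for n in [n_hi, n_lo] * periods:
        d = design_wl / (4.0 * n)              # quarter-wave thickness
        delta = 2.0 * math.pi * n * d / wavelength
        c, s = cmath.cos(delta), cmath.sin(delta)
        l00, l01, l10, l11 = c, 1j * s / n, 1j * n * s, c
        m00, m01, m10, m11 = (m00 * l00 + m01 * l10,
                              m00 * l01 + m01 * l11,
                              m10 * l00 + m11 * l10,
                              m10 * l01 + m11 * l11)
    b = m00 + m01 * n_sub                      # (B, C) = M . (1, n_sub)
    c_out = m10 + m11 * n_sub
    r = (n_in * b - c_out) / (n_in * b + c_out)
    return abs(r) ** 2
```

At the design wavelength the stack sits in its photonic gap and reflects almost everything; at half the design wavelength every layer is a half-wave "absentee" layer and the reflectance collapses to that of the bare substrate.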

  11. Optimisation of ionic models to fit tissue action potentials: application to 3D atrial modelling.

    Science.gov (United States)

    Al Abed, Amr; Guo, Tianruo; Lovell, Nigel H; Dokos, Socrates

    2013-01-01

    A 3D model of atrial electrical activity has been developed with spatially heterogeneous electrophysiological properties. The atrial geometry, reconstructed from the male Visible Human dataset, included gross anatomical features such as the central and peripheral sinoatrial node (SAN), intra-atrial connections, pulmonary veins, inferior and superior vena cava, and the coronary sinus. Membrane potentials of myocytes from spontaneously active or electrically paced in vitro rabbit cardiac tissue preparations were recorded using intracellular glass microelectrodes. Action potentials of central and peripheral SAN, right and left atrial, and pulmonary vein myocytes were each fitted using a generic ionic model having three phenomenological ionic current components: one time-dependent inward, one time-dependent outward, and one leakage current. To bridge the gap between the single-cell ionic models and the gross electrical behaviour of the 3D whole-atrial model, a simplified 2D tissue disc with heterogeneous regions was optimised to arrive at parameters for each cell type under electrotonic load. Parameters were then incorporated into the 3D atrial model, which as a result exhibited a spontaneously active SAN able to rhythmically excite the atria. The tissue-based optimisation of ionic models and the modelling process outlined are generic and applicable to image-based computer reconstruction and simulation of excitable tissue.
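
The generic three-current formulation described above can be illustrated with a toy membrane model integrated by forward Euler. The sigmoidal gating functions and every parameter value below are illustrative placeholders, not the optimised values from the paper; like the SAN cells discussed, the toy cell is spontaneously active:

```python
import math

def simulate_generic_ionic_model(t_end=500.0, dt=0.01):
    """Forward-Euler integration of a toy membrane model with three
    phenomenological currents: one time-dependent inward, one
    time-dependent outward, and one leakage current."""
    V = -60.0                  # membrane potential (mV)
    m, h, n = 0.0, 1.0, 0.0    # activation, inactivation, outward gates
    Cm = 1.0                   # membrane capacitance (uF/cm^2)
    trace = []
    for step in range(int(t_end / dt)):
        t = step * dt
        stim = -10.0 if 50.0 <= t < 52.0 else 0.0   # brief inward stimulus
        # Voltage-dependent steady states (Boltzmann sigmoids).
        m_inf = 1.0 / (1.0 + math.exp(-(V + 35.0) / 5.0))
        h_inf = 1.0 / (1.0 + math.exp((V + 45.0) / 5.0))
        n_inf = 1.0 / (1.0 + math.exp(-(V + 30.0) / 5.0))
        # First-order gate kinetics with fixed time constants (ms).
        m += dt * (m_inf - m) / 1.0
        h += dt * (h_inf - h) / 50.0
        n += dt * (n_inf - n) / 100.0
        # The three phenomenological currents (uA/cm^2).
        i_in = 4.0 * m * h * (V - 50.0)    # time-dependent inward
        i_out = 0.6 * n * (V + 80.0)       # time-dependent outward
        i_leak = 0.1 * (V + 60.0)          # leakage
        V += -dt * (i_in + i_out + i_leak + stim) / Cm
        trace.append(V)
    return trace
```

In the paper such a model is not hand-tuned but fitted to recorded action potentials, first in single cells and then under electrotonic load in a 2D tissue disc.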

  12. Optimisation of Ionic Models to Fit Tissue Action Potentials: Application to 3D Atrial Modelling

    Science.gov (United States)

    Lovell, Nigel H.; Dokos, Socrates

    2013-01-01

    A 3D model of atrial electrical activity has been developed with spatially heterogeneous electrophysiological properties. The atrial geometry, reconstructed from the male Visible Human dataset, included gross anatomical features such as the central and peripheral sinoatrial node (SAN), intra-atrial connections, pulmonary veins, inferior and superior vena cava, and the coronary sinus. Membrane potentials of myocytes from spontaneously active or electrically paced in vitro rabbit cardiac tissue preparations were recorded using intracellular glass microelectrodes. Action potentials of central and peripheral SAN, right and left atrial, and pulmonary vein myocytes were each fitted using a generic ionic model having three phenomenological ionic current components: one time-dependent inward, one time-dependent outward, and one leakage current. To bridge the gap between the single-cell ionic models and the gross electrical behaviour of the 3D whole-atrial model, a simplified 2D tissue disc with heterogeneous regions was optimised to arrive at parameters for each cell type under electrotonic load. Parameters were then incorporated into the 3D atrial model, which as a result exhibited a spontaneously active SAN able to rhythmically excite the atria. The tissue-based optimisation of ionic models and the modelling process outlined are generic and applicable to image-based computer reconstruction and simulation of excitable tissue. PMID:23935704

  13. Optimisation of Ionic Models to Fit Tissue Action Potentials: Application to 3D Atrial Modelling

    Directory of Open Access Journals (Sweden)

    Amr Al Abed

    2013-01-01

    Full Text Available A 3D model of atrial electrical activity has been developed with spatially heterogeneous electrophysiological properties. The atrial geometry, reconstructed from the male Visible Human dataset, included gross anatomical features such as the central and peripheral sinoatrial node (SAN), intra-atrial connections, pulmonary veins, inferior and superior vena cava, and the coronary sinus. Membrane potentials of myocytes from spontaneously active or electrically paced in vitro rabbit cardiac tissue preparations were recorded using intracellular glass microelectrodes. Action potentials of central and peripheral SAN, right and left atrial, and pulmonary vein myocytes were each fitted using a generic ionic model having three phenomenological ionic current components: one time-dependent inward, one time-dependent outward, and one leakage current. To bridge the gap between the single-cell ionic models and the gross electrical behaviour of the 3D whole-atrial model, a simplified 2D tissue disc with heterogeneous regions was optimised to arrive at parameters for each cell type under electrotonic load. Parameters were then incorporated into the 3D atrial model, which as a result exhibited a spontaneously active SAN able to rhythmically excite the atria. The tissue-based optimisation of ionic models and the modelling process outlined are generic and applicable to image-based computer reconstruction and simulation of excitable tissue.

  14. RF tunable devices and subsystems: methods of modeling, analysis, and applications

    CERN Document Server

    Gu, Qizheng

    2015-01-01

    This book serves as a hands-on guide to RF tunable devices, circuits and subsystems. An innovative method of modeling for tunable devices and networks is described, along with a new tuning algorithm, adaptive matching network control approach, and novel filter frequency automatic control loop. The author provides readers with the necessary background and methods for designing and developing tunable RF networks/circuits and tunable RF front-ends, with an emphasis on applications to cellular communications. · Discusses the methods of characterizing, modeling, analyzing, and applying RF tunable devices and subsystems; · Explains the necessary methods of utilizing RF tunable devices and subsystems, rather than discussing the RF tunable devices themselves; · Presents and applies methods for MEMS tunable capacitors, which can be used for any RF tunable device; · Uses analytic methods wherever possible and provides numerous closed-form solutions; · Includ...

  15. Application of molecular modeling to polymer grafted nanostructures

    Science.gov (United States)

    Adiga, Shashishekar P.

    Polymer chains undergo conformational transitions in response to a change in the solvent quality of their environment, making them strong candidates for use in smart nanometer-scale devices. In the present work, molecular modeling is used to explore grafted polymer structures with various functionalities. The first part of this research focuses on two examples of selective transport through nanopores modified with polymer brush structures. The first is the investigation of solvent flow through nanopores grafted with linear chains. Molecular dynamics (MD) simulations are used to demonstrate how a stretch-collapse transition in grafted polymer chains can be used to control the solvent flow rate through a nanopore in response to environmental stimuli. A continuum fluid dynamics method based on a porous layer model for describing flow through the smart nanopore is described and its accuracy is analyzed by comparing with the results from MD simulations. The continuum method is then applied to determine the regulation of water permeation in response to pH through a poly(L-glutamic acid) grafted nanoporous membrane. A second example is the use of a rod-coil transition in "bottle brush" molecules that are grafted to the inside of a nanopore to size-select macromolecules as they diffuse through the functionalized nanopores. These stimuli-responsive nanopores have a variety of potential applications including molecular sorting, smart drug delivery, and ultrafiltration, as well as controlled chemical release. Tethered polymers play an important role in biological structures as well. In the second part of the research, the application of atomistic simulations to characterize the effect of phosphorylation on neurofilament structure is presented. Neurofilaments are intermediate filaments that regulate axonal diameter through their long, flexible side arms extending from the central core. Their functionality is imparted by a polymer-brush-like structure that causes steric repulsion between the

  16. BUILDING MODEL ANALYSIS APPLICATIONS WITH THE JOINT UNIVERSAL PARAMETER IDENTIFICATION AND EVALUATION OF RELIABILITY (JUPITER) API

    Science.gov (United States)

    The open-source, public domain JUPITER (Joint Universal Parameter IdenTification and Evaluation of Reliability) API (Application Programming Interface) provides conventions and Fortran-90 modules to develop applications (computer programs) for analyzing process models. The input ...

  17. Solving Enterprise Applications Performance Puzzles Queuing Models to the Rescue

    CERN Document Server

    Grinshpan, Leonid

    2012-01-01

    A groundbreaking scientific approach to solving enterprise applications performance problems. Enterprise applications are the information backbone of today's corporations, supporting vital business functions such as operational management, supply chain maintenance, customer relationship administration, business intelligence, accounting, procurement logistics, and more. Acceptable performance of enterprise applications is critical for a company's day-to-day operations as well as for its profitability. Unfortunately, troubleshooting poorly performing enterprise applications has traditionally
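
Queuing models of the kind the book advocates can be as simple as the closed-form M/M/1 formulas; a minimal sketch (the function name is ours):

```python
def mm1_metrics(arrival_rate, service_rate):
    """Closed-form steady-state metrics of an M/M/1 queue: utilization,
    mean number of requests in the system (L), and mean response time
    (W). Rates are in requests per unit time."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate >= service rate")
    utilization = arrival_rate / service_rate            # rho
    avg_in_system = utilization / (1.0 - utilization)    # L = rho/(1-rho)
    response_time = 1.0 / (service_rate - arrival_rate)  # W = 1/(mu-lambda)
    return utilization, avg_in_system, response_time
```

Little's law (L = lambda * W) ties the two output metrics together, and the hyperbolic growth of W as utilization approaches 1 is the classic performance cliff that makes queuing analysis so useful for capacity planning.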

  18. Advancement of Global-scale River Hydrodynamics Modelling and Its Potential Applications to Earth System Models

    Science.gov (United States)

    Yamazaki, D.

    2015-12-01

    Global river routing models have been developed to represent freshwater discharge from land to ocean in Earth System Models. Initially, global river models simulated river discharge along a prescribed river network map using a linear-reservoir assumption. Recently, in parallel with advances in remote sensing and computational power, many advanced global river models have started to represent floodplain inundation by assuming sub-grid floodplain topography. Some of them further pursue a physically appropriate representation of river and floodplain dynamics, and have succeeded in utilizing "hydrodynamic flow equations" to realistically simulate channel/floodplain and upstream/downstream interactions. State-of-the-art global river hydrodynamic models can reproduce flood stage (e.g. inundated areas and water levels) well, in addition to river discharge. Flood stage simulation by global river models can potentially be coupled with land surface processes in Earth System Models. For example, evaporation from inundated water areas is not negligible for land-atmosphere interactions in arid areas (such as the Niger River). Surface water levels and groundwater levels are correlated with each other in flat topography, and this interaction could dominate the wetting and drying of many small lakes in flatland and could also affect biogeochemical processes in these lakes. These land/surface-water interactions had not been implemented in Earth System Models, but they have a potential impact on the global climate and carbon cycle. In the AGU presentation, recent advancements in global river hydrodynamic modelling, including super-high-resolution river topography datasets, will be introduced. Potential applications of river and surface water modules within Earth System Models will also be discussed.
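
The linear-reservoir routing that the early global river models relied on fits in a few lines; the function name and default parameters below are illustrative, and the explicit scheme assumes dt is well below the storage constant k:

```python
def route_linear_reservoir(inflow, k=5.0, dt=1.0, storage0=0.0):
    """Single linear-reservoir routing with outflow Q = S/k, the classic
    scheme used before hydrodynamic solvers were feasible globally.
    inflow -- inflow series (e.g. m^3/s), one value per time step
    k      -- storage constant, in the same time unit as dt"""
    storage, outflow = storage0, []
    for q_in in inflow:
        storage += dt * (q_in - storage / k)   # water balance dS/dt = I - S/k
        outflow.append(storage / k)
    return outflow
```

An impulse of inflow is attenuated into an exponentially receding outflow, which is exactly the behaviour (and the limitation) that motivated the move to hydrodynamic flow equations for flood-stage simulation.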

  19. Application of wildfire simulation models for risk analysis

    Science.gov (United States)

    Ager, A.; Finney, M.

    2009-04-01

    Wildfire simulation models are being widely used by fire and fuels specialists in the U.S. to support tactical and strategic decisions related to the mitigation of wildfire risk. Much of this application has resulted from the development of a minimum travel time (MTT) fire spread algorithm (M. Finney) that makes it computationally feasible to simulate thousands of fires and generate burn probability and intensity maps over large areas (10,000 - 2,000,000 ha). The MTT algorithm is parallelized for multi-threaded processing and is embedded in a number of research and applied fire modeling applications. High-performance computers (e.g., 32-way 64 bit SMP) are typically used for MTT simulations, although the algorithm is also implemented in the 32 bit desktop FlamMap3 program (www.fire.org). Extensive testing has shown that this algorithm can replicate large fire boundaries in the heterogeneous landscapes that typify much of the wildlands in the western U.S. In this paper, we describe the application of the MTT algorithm to understand spatial patterns of burn probability (BP), and to analyze wildfire risk to key human and ecological values. The work is focused on a federally-managed 2,000,000 ha landscape in the central interior region of Oregon State, USA. The fire-prone study area encompasses a wide array of topography and fuel types and a number of highly valued resources that are susceptible to fire. We quantitatively defined risk as the product of the probability of a fire and the resulting consequence. Burn probabilities at specific intensity classes were estimated for each 100 x 100 m pixel by simulating 100,000 wildfires under burn conditions that replicated recent severe wildfire events that occurred under conditions where fire suppression was generally ineffective (97th percentile, August weather). We repeated the simulation under milder weather (70th percentile, August weather) to replicate a "wildland fire use scenario" where suppression is minimized to
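
The burn-probability and risk calculation described above reduces to counting, per pixel, the fraction of simulated fires that burned it, then multiplying probability by consequence. In this sketch the fire-spread model itself (MTT in the paper) is abstracted behind a caller-supplied function, and the names are ours:

```python
import random

def burn_probability_grid(width, height, n_fires, simulate_fire, seed=0):
    """Monte Carlo burn-probability map: run many simulated fires and
    record, per pixel, the fraction of fires that burned it.
    simulate_fire(rng) must return the set of burned (x, y) pixels."""
    rng = random.Random(seed)
    counts = [[0] * width for _ in range(height)]
    for _ in range(n_fires):
        for x, y in simulate_fire(rng):
            counts[y][x] += 1
    return [[c / n_fires for c in row] for row in counts]

def expected_loss(burn_prob, consequence):
    """Risk = burn probability x consequence, summed over the grid."""
    return sum(p * v
               for prob_row, value_row in zip(burn_prob, consequence)
               for p, v in zip(prob_row, value_row))
```
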

  20. Identification of PARMA Models and Their Application to the Modeling of River flows

    Science.gov (United States)

    Tesfaye, Y. G.; Meerschaert, M. M.; Anderson, P. L.

    2004-05-01

    The generation of synthetic river flow samples that can reproduce the essential statistical features of historical river flows is essential to the planning, design and operation of water resource systems. Most river flow series are periodically stationary; that is, their mean and covariance functions are periodic with respect to time. We employ a periodic ARMA (PARMA) model. The innovation algorithm can be used to obtain parameter estimates for PARMA models with finite fourth moment as well as infinite fourth moment but finite variance. Anderson and Meerschaert (2003) provide a method for model identification when the time series has finite fourth moment. This article, an extension of the previous work by Anderson and Meerschaert, demonstrates the effectiveness of the technique using simulated data. An application to monthly flow data for the Frazier River in British Columbia is also included to illustrate the use of these methods.
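
A minimal member of the PARMA family, a periodic AR(1), can be simulated to show the periodic variance structure the paper exploits; the coefficient values used here are illustrative, not estimates for any real river:

```python
import random
import statistics

def simulate_par1(phi, sigma, n_years, seed=0):
    """Simulate a periodic AR(1) series X_t = phi_m X_{t-1} + sigma_m Z_t,
    where the coefficients cycle with the season m = t mod period and
    Z_t is standard Gaussian noise."""
    period = len(phi)
    rng = random.Random(seed)
    x, series = 0.0, []
    for t in range(n_years * period):
        m = t % period
        x = phi[m] * x + sigma[m] * rng.gauss(0.0, 1.0)
        series.append(x)
    return series

def seasonal_std(series, period):
    """Season-by-season sample standard deviation across all years,
    the periodic second moment a PARMA fit must reproduce."""
    return [statistics.stdev(series[m::period]) for m in range(period)]
```

Identification then runs in the opposite direction: given an observed periodic series, the innovations algorithm estimates the seasonal coefficients that reproduce these periodic moments.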

  1. HeMoLab--Hemodynamics Modelling Laboratory: an application for modelling the human cardiovascular system.

    Science.gov (United States)

    Larrabide, I; Blanco, P J; Urquiza, S A; Dari, E A; Vénere, M J; de Souza e Silva, N A; Feijóo, R A

    2012-10-01

    In this work we present HeMoLab (Hemodynamics Modeling Laboratory), a computational environment for modeling the human cardiovascular system. It integrates novel computational tools, ranging from medical image processing to numerical simulation and visualization. As a simulation tool, it can accommodate complex physiological and/or pathophysiological (virtual) scenarios aimed at retrieving detailed information from the numerical computations. Such an application makes it possible to speed up research in the study and analysis of the cardiovascular system, and to provide a virtual laboratory for medical training, education, and specialized human resources development. To demonstrate the modeling and simulation capabilities of HeMoLab, some cases of use are presented. Copyright © 2012 Elsevier Ltd. All rights reserved.

  2. Review of Development Survey of Phase Change Material Models in Building Applications

    OpenAIRE

    Hussein J. Akeiber; Mazlan A. Wahid; Hussen, Hasanen M.; Abdulrahman Th. Mohammad

    2014-01-01

    The application of phase change materials (PCMs) in green buildings has been increasing rapidly. PCM applications in green buildings include several development models. This paper briefly surveys the recent research and development activities of PCM technology in building applications. Firstly, a basic description of phase changes and their principles is provided; the classification and applications of PCMs are also included. Secondly, PCM models in buildings are reviewed and discussed accordi...

  3. Syntheses of the current model applications for managing water and needs for experimental data and model improvements to enhance these applications

    Science.gov (United States)

    This volume of the Advances in Agricultural Systems Modeling series presents 14 different case studies of model applications to help make the best use of limited water in agriculture. These examples show that models have tremendous potential and value in enhancing site-specific water management for ...

  4. Towards metagenome-scale models for industrial applications - the case of Lactic Acid Bacteria

    NARCIS (Netherlands)

    Teusink, B.; Branco dos Santos, F.; de Vos, W.M.

    2013-01-01

    We review the uses and limitations of modelling approaches in the field of Lactic Acid Bacteria (LAB). We describe recent developments in model construction and computational methods, starting from the application of such models to monocultures. However, since most applications in food

  5. Political economy models and agricultural policy formation : empirical applicability and relevance for the CAP

    NARCIS (Netherlands)

    Zee, van der F.A.

    1997-01-01

    This study explores the relevance and applicability of political economy models for the explanation of agricultural policies. Part I (chapters 4-7) takes a general perspective and evaluates the empirical applicability of voting models and interest group models to agricultural policy

  6. CRISPR-Cas9 technology: applications and human disease modelling.

    Science.gov (United States)

    Torres-Ruiz, Raul; Rodriguez-Perales, Sandra

    2017-01-01

    Genome engineering is a powerful tool for a wide range of applications in biomedical research and medicine. The development of the clustered regularly interspaced short palindromic repeats (CRISPR)-Cas9 system has revolutionized the field of gene editing, facilitating efficient genome editing through the creation of targeted double-strand breaks in almost any organism and cell type. In addition, CRISPR-Cas9 technology has been used successfully for many other purposes, including regulation of endogenous gene expression, epigenome editing, live-cell labelling of chromosomal loci, editing of single-stranded RNA and high-throughput gene screening. The implementation of the CRISPR-Cas9 system has increased the number of available technological alternatives for studying gene function, thus enabling the generation of CRISPR-based disease models. Although many mechanistic questions remain to be answered and several challenges have yet to be addressed, the use of CRISPR-Cas9-based genome engineering technologies will increase our knowledge of disease processes and their treatment in the near future. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  7. MODELLING OF SAFETY-RELATED COMMUNICATIONS FOR RAILWAY APPLICATIONS

    Directory of Open Access Journals (Sweden)

    M. Franekowa

    2016-10-01

    Full Text Available Purpose. Following the requirements of the standard governing safety-related communication between interlocking systems (EN 50159), the authors analyze the case of an undetected corrupted message transmitted across a binary symmetric channel (BSC). The main part is devoted to the description of a model in which a CRC code is used to protect messages sent across a noisy communication channel. Methodology. The algebra of polynomials is used for the mathematical description of encoding, decoding, error detection and error correction. Findings. The intensity of dangerous failure caused by electromagnetic interference was calculated. Originality. A program with a graphical interface was created to obtain, from the model, information on the probability of undetected error in the transmission code and the safety code, and on the intensity of dangerous failure. A supporting program was created to calculate and display the probability of undetected error for any block code (n, k) over a selected interval of bit error rates. Practical value. The measured and calculated values obtained by simulation show that the intensity of dangerous failures grows as the bit error rate grows. The transmission code did not detect all corrupted messages; it is therefore necessary in safety-related applications to use a safety code independent of the transmission code. A CRC is not able to detect an error if all bits of the received message are logical 0.
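    The undetected-error behaviour described in this abstract can be illustrated with a short Monte Carlo sketch (not the authors' program; the generator polynomial, message length and bit error rates below are illustrative): a systematic CRC is appended to random messages, each codeword bit is flipped independently with probability p (the BSC model), and an error counts as undetected when the corrupted message still passes the CRC check.

```python
import random

def crc_remainder(bits: int, nbits: int, poly: int, poly_deg: int) -> int:
    """Polynomial division over GF(2): remainder of bits * x^poly_deg mod poly."""
    reg = bits << poly_deg
    for i in range(nbits + poly_deg - 1, poly_deg - 1, -1):
        if reg & (1 << i):
            reg ^= poly << (i - poly_deg)   # cancel the leading term at bit i
    return reg  # remainder fits in poly_deg bits

def undetected_rate(k: int, poly: int, poly_deg: int, p: float,
                    trials: int, seed: int = 1) -> float:
    """Monte Carlo estimate of P(undetected error) for k-bit messages over a BSC(p)."""
    rng = random.Random(seed)
    n = k + poly_deg
    mask = (1 << poly_deg) - 1
    undetected = 0
    for _ in range(trials):
        msg = rng.getrandbits(k)
        code = (msg << poly_deg) | crc_remainder(msg, k, poly, poly_deg)
        # BSC: flip each of the n codeword bits independently with probability p
        err = 0
        for i in range(n):
            if rng.random() < p:
                err |= 1 << i
        recv = code ^ err
        # undetected: the corrupted message still matches its corrupted check bits
        if err and crc_remainder(recv >> poly_deg, k, poly, poly_deg) == (recv & mask):
            undetected += 1
    return undetected / trials
```

Because the codeword is an exact multiple of the generator polynomial, every single-bit error is detected; only error patterns that are themselves codewords slip through, which is why the abstract's conclusion (an independent safety code on top of the transmission code) matters.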

  8. Penetrating keratoplasty in the cat. A clinically applicable model.

    Science.gov (United States)

    Bahn, C F; Meyer, R F; MacCallum, D K; Lillie, J H; Lovett, E J; Sugar, A; Martonyi, C L

    1982-06-01

    A series of 28 consecutive penetrating keratoplasties were performed on adult cats. Donor corneas (n = 14) were maintained in culture medium for 14-24 hours prior to transplantation. Rotational autografts (n = 7) were used to control for cell loss caused by culture maintenance as well as for the effects of surgery. Additional homografts (n = 7) were transplanted following removal of the corneal endothelium to study the extent of host corneal endothelial cell regeneration. Pre- and post-operative endothelial cell counts of the homografts made from specular micrographs demonstrated an average cell loss of 30% one month following surgery. A similar 30% average cell loss was present in the rotational autografts. Clinically, both homografts and autografts remained clear and were near normal in thickness. Homografts lacking endothelium exhibited persistent, severe edema that correlated with the inability of the host corneal endothelium to resurface the graft. Clinical and morphologic evidence of mild homograft rejection was observed in 15% of the animals that received normal homografts. Corneal endothelial cell loss following penetrating keratoplasty in the cat approximates that observed following the same procedure in the human. Additionally, the regenerative capacity of the corneal endothelium in the cat, like that of the human, is limited. These features suggest that this cooperative, hardy animal is an excellent model in which to study many aspects of corneal transplantation that have direct application to the treatment of human corneal disease.

  9. Application of Total Productivity Model within Croatia Airlines

    Directory of Open Access Journals (Sweden)

    Željko Radačić

    2005-09-01

    Full Text Available By defining and selecting adequate factors of the total productivity model and by assigning specific relevance to each factor, the initial preconditions for the analysis and monitoring of the model application efficiency within the Croatia Airlines business policy have been established. Since the majority of the analyzed factors realized more intensive growth than planned, the business year 2004 can be assessed as the most successful one in Croatia Airlines history. Consequently, the gap relative to the productivity indicators of the Association of European Airlines has been reduced: aircraft productivity to a remnant of 5 to 10 percent, employee productivity to a remnant of 15 to 20 percent, and fuel productivity to the AEA level when expressed as quantity, though still below that level when expressed as value. Finally, although there is no expressed correlation between the quantitative productivity indicators and business profitability, the highest net profit realized since the foundation of Croatia Airlines fully supplements the solid level of the comparison indicators, confirming its complete readiness and maturity to join the Star Alliance.

  10. Models to Study NK Cell Biology and Possible Clinical Application.

    Science.gov (United States)

    Zamora, Anthony E; Grossenbacher, Steven K; Aguilar, Ethan G; Murphy, William J

    2015-08-03

    Natural killer (NK) cells are large granular lymphocytes of the innate immune system, responsible for direct targeting and killing of both virally infected and transformed cells. NK cells rapidly recognize and respond to abnormal cells in the absence of prior sensitization due to their wide array of germline-encoded inhibitory and activating receptors, which differs from the receptor diversity found in B and T lymphocytes that is due to the use of recombination-activation gene (RAG) enzymes. Although NK cells have traditionally been described as natural killers that provide a first line of defense prior to the induction of adaptive immunity, a more complex view of NK cells is beginning to emerge, indicating they may also function in various immunoregulatory roles and have the capacity to shape adaptive immune responses. With the growing appreciation for the diverse functions of NK cells, and recent technological advancements that allow for a more in-depth understanding of NK cell biology, we can now begin to explore new ways to manipulate NK cells to increase their clinical utility. In this overview unit, we introduce the reader to various aspects of NK cell biology by reviewing topics ranging from NK cell diversity and function, mouse models, and the roles of NK cells in health and disease, to potential clinical applications. © 2015 by John Wiley & Sons, Inc.

  11. Predictive market segmentation model: An application of logistic regression model and CHAID procedure

    Directory of Open Access Journals (Sweden)

    Soldić-Aleksić Jasna

    2009-01-01

    Full Text Available Market segmentation is one of the key concepts of modern marketing. Its main goal is to create groups (segments) of customers that have similar characteristics, needs, wishes and/or similar behavior regarding the purchase of a concrete product/service. Companies can create a specific marketing plan for each of these segments and thereby gain short- or long-term competitive advantage on the market. Depending on the concrete marketing goal, different segmentation schemes and techniques may be applied. This paper presents a predictive market segmentation model based on the application of a logistic regression model and CHAID analysis. The logistic regression model was used to select, from the initial pool of eleven variables, those that are statistically significant for explaining the dependent variable. The selected variables were afterwards included in the CHAID procedure that generated the predictive market segmentation model. The model results are presented on a concrete empirical example in the following form: summary model results, CHAID tree, gain chart, index chart, and risk and classification tables.
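    The core step of the CHAID procedure used in this paper is, at each tree node, a chi-square test of association between each candidate predictor and the dependent variable; the predictor with the strongest association defines the split. A minimal sketch of that step (full CHAID additionally merges predictor categories and applies Bonferroni-adjusted p-values; the customer records and field names below are hypothetical):

```python
from collections import Counter

def chi_square(pairs):
    """Pearson chi-square statistic for a list of (category, target) pairs."""
    n = len(pairs)
    row = Counter(c for c, _ in pairs)   # predictor-category totals
    col = Counter(t for _, t in pairs)   # target-class totals
    obs = Counter(pairs)                 # joint counts (0 for unseen pairs)
    stat = 0.0
    for c in row:
        for t in col:
            expected = row[c] * col[t] / n
            stat += (obs[(c, t)] - expected) ** 2 / expected
    return stat

def best_split(records, predictors, target):
    """CHAID-style choice: the predictor with the largest chi-square vs. the target."""
    scores = {p: chi_square([(r[p], r[target]) for r in records]) for p in predictors}
    return max(scores, key=scores.get), scores

# Hypothetical purchase data: age band drives buying, region does not
records = (
    [{"region": "N", "age_band": "young", "bought": "yes"}] * 20 +
    [{"region": "S", "age_band": "young", "bought": "yes"}] * 20 +
    [{"region": "N", "age_band": "old", "bought": "no"}] * 20 +
    [{"region": "S", "age_band": "old", "bought": "no"}] * 20
)
winner, scores = best_split(records, ["region", "age_band"], "bought")
```

In this toy data the target is perfectly balanced across regions (chi-square 0), so `age_band` wins the split, which is the behaviour the CHAID tree in the paper exploits recursively.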

  12. Modelling and applications in mathematics education the 14th ICMI study

    CERN Document Server

    Galbraith, Peter L; Niss, Mogens

    2007-01-01

    The book aims at showing the state-of-the-art in the field of modeling and applications in mathematics education. This is the first volume to do this. The book deals with the question of how key competencies of applications and modeling at the heart of mathematical literacy may be developed; with the roles that applications and modeling may play in mathematics teaching, making mathematics more relevant for students.

  13. Modeling of surface roughness: application to physical properties of paper

    Science.gov (United States)

    Bloch, Jean-Francis; Butel, Marc

    2000-09-01

    The papermaking process consists of a succession of unit operations whose main objective is the expression of water out of the wet paper pad. The three main stages are, successively, the forming section, the press section and finally the drying section. Furthermore, another operation (calendering) may be used to improve surface smoothness. Forming, pressing and drying are not within the scope of this paper, but the influence of formation and calendering on surface roughness is analyzed. The main objective is to characterize the material and especially its surface structure. The proposed model is described in order to analyze this topographical aspect. Some experimental results are presented in order to illustrate the interest of this method for better understanding physical properties. This work is therefore dedicated to the description of the proposed model: the studied surface is measured at a microscopic scale using, for example, a classical stylus profilometry method. The measured surface is then transformed using a conformal mapping that retains the surface orientations. Due to the anisotropy of the fiber distribution in the plane of the sheet, the resulting surface is often not isotropic. Hence, the micro facets that identify the interfaces between pores and solid (fibers in the studied case) at the micro level are transformed into a macroscopic equivalent structure. Furthermore, an ellipsoid may be fitted to the experimental data in order to obtain a simple model. For paper, the ellipticities are shown to be linked both to fiber orientation (through other optical methods) and to roughness. These parameters (ellipticities) are shown to be very significant for different end-use properties: indeed, they are shown to be correlated to printing or optical properties, such as gloss. We present in a first part the method to obtain a macroscopic description from physical microscopic measurements.
Then measurements carried out on different paper samples, using a classical

  14. Generalized linear mixed models modern concepts, methods and applications

    CERN Document Server

    Stroup, Walter W

    2012-01-01

    PART I: The Big Picture. Modeling Basics: What Is a Model?; Two Model Forms: Model Equation and Probability Distribution; Types of Model Effects; Writing Models in Matrix Form; Summary: Essential Elements for a Complete Statement of the Model. Design Matters: Introductory Ideas for Translating Design and Objectives into Models; Describing "Data Architecture" to Facilitate Model Specification; From Plot Plan to Linear Predictor; Distribution Matters; More Complex Example: Multiple Factors with Different Units of Replication. Setting the Stage: Goals for Inference with Models: Overview; Basic Tools of Inference; Issue I: Data

  15. Application of Model-Based Signal Processing Methods to Computational Electromagnetics Simulators

    National Research Council Canada - National Science Library

    Ling, Hao

    2000-01-01

    This report summarizes the scientific progress on the research grant "Application of Model-Based Signal Processing Methods to Computational Electromagnetics Simulators" during the period 1 December...

  16. Annual Report on Application of Model-Based Signal Processing Methods to Computational Electromagnetics Simulators

    National Research Council Canada - National Science Library

    Ling, Hao

    1998-01-01

    This report summarizes the scientific progress on the research grant "Application of Model-Based Signal Processing Methods to Computational Electromagnetics Simulators" during the period 1 December...

  17. Application of Model-Based Signal Processing Methods to Computational Electromagnetics Simulators

    National Research Council Canada - National Science Library

    Ling, Hao

    1999-01-01

    This report summarizes the scientific progress on the research grant "Application of Model-Based Signal Processing Methods to Computational Electromagnetics Simulators" during the period 1 December...

  18. Remote sensing and GIS applications for modeling species distributions

    Science.gov (United States)

    Harris, Grant

    Habitat loss is the leading cause of species endangerment. It fragments what remains (most harmful for habitat specialists) and isolates populations (applicable to all species). The fragments, parks and other protected areas where species remain are often too small for the long-term persistence of many species. Although these effects are more pronounced in tropical forests, where most species live, the problem is so widespread that it manifests itself across suites of ecosystems and taxa. Mitigating the problems caused by habitat and population fragmentation requires more information. Specifically, we must determine which species are most extinction prone, find ways to cheaply and quickly determine priority areas for conservation, quantify the minimum areas required for species persistence, and identify the key variables needed for species presence. Here, I analyze each of these four key points, using a spectrum of species, and a variety of remote sensing and GIS techniques. For habitat specialists, exemplified by tropical forest birds, I quantify habitat loss directly. It's simply a matter of measuring the remaining forest. To model habitat generalists, such as African elephants, I incorporate habitat and other variables (water, people, greenness) that dictate their presence. For birds, I find that habitat loss affects all forest endemic species equally. Species not threatened have large remaining ranges and high abundances in their ranges. My methods also refine conservation priorities in biological hotspots. The key lies in finding where species live now, and broad-scale natural history information plus coarse-scale imagery suits this purpose. Coarse imagery is also sufficient to understand the minimum range size at which birds become threatened. Be it habitat loss directly or induced by climate change, bird ranges must be over 20,000 km2 in lowland species, and 10,000 km2 for montane birds to avoid threat. For elephants, it is water and people that predict

  19. Growth Mixture Modeling: Application to Reading Achievement Data from a Large-Scale Assessment

    Science.gov (United States)

    Bilir, Mustafa Kuzey; Binici, Salih; Kamata, Akihito

    2008-01-01

    The popularity of growth modeling has increased in psychological and cognitive development research as a means to investigate patterns of changes and differences between observation units over time. Random coefficient modeling, such as multilevel modeling and latent growth curve modeling as a special application of structural equation modeling are…

  20. Application of Statistical Model in Wastewater Treatment Process Modeling Using Data Analysis

    Directory of Open Access Journals (Sweden)

    Alireza Raygan Shirazinezhad

    2015-06-01

    Full Text Available Background: Wastewater treatment includes very complex and interrelated physical, chemical and biological processes which, using data analysis techniques, can be rigorously modeled by non-complex mathematical models. Materials and Methods: In this study, data on wastewater treatment processes from the water and wastewater company of Kohgiluyeh and Boyer Ahmad were used. A total of 3306 data points for COD, TSS, pH and turbidity were collected, then analyzed with SPSS-16 software (descriptive statistics) and IBM SPSS Modeler 14.2 (data analysis), using 9 algorithms. Results: According to the results, the logistic regression, neural network, Bayesian network, discriminant analysis, C5 decision tree, C&R tree, CHAID, QUEST and SVM algorithms had accuracies of 90.16, 94.17, 81.37, 70.48, 97.89, 96.56, 96.46, 96.84 and 88.92 percent, respectively. Discussion and conclusion: The C5 algorithm, with an accuracy of 97.89 percent, was chosen as the best and most applicable algorithm for modeling the wastewater treatment processes, and the most influential variables in this model were pH, COD, TSS and turbidity.

  1. Low dimensional semiconductor structures. Characterization, modeling and applications

    Energy Technology Data Exchange (ETDEWEB)

    Uenlue, Hilmi [Istanbul Technical Univ. (Turkey). Dept. of Physics Engineering; Horing, Norman J.M. (eds.) [Stevens Institute of Technology, Hoboken, NJ (United States). Dept. of Physics and Engineering Physics

    2013-09-01

    Gives a state-of-the-art report of important topics in nanoscience. Includes a broad spectrum of areas developing rapidly in nanostructures science and technology. Delivers a tutorial- and review-like presentation. Starting with the first transistor in 1949, the world has experienced a technological revolution which has permeated most aspects of modern life, particularly over the last generation. Yet another such revolution looms up before us with the newly developed capability to control matter on the nanometer scale. A truly extraordinary research effort, by scientists, engineers, technologists of all disciplines, in nations large and small throughout the world, is directed and vigorously pressed to develop a full understanding of the properties of matter at the nanoscale and its possible applications, to bring to fruition the promise of nanostructures to introduce a new generation of electronic and optical devices. The physics of low dimensional semiconductor structures, including heterostructures, superlattices, quantum wells, wires and dots is reviewed and their modeling is discussed in detail. The truly exceptional material, graphene, is reviewed; its functionalization and Van der Waals interactions are included here. Recent research on optical studies of quantum dots and on the physical properties of one-dimensional quantum wires is also reported. Chapters on the fabrication of nanowire-based nanogap devices by the dielectrophoretic assembly approach are included. The broad spectrum of research reported here incorporates chapters on nanoengineering and nanophysics. In its presentation of tutorial chapters as well as advanced research on nanostructures, this book is ideally suited to meet the needs of newcomers to the field as well as experienced researchers interested in viewing colleagues' recent advances.

  2. Peripheral arterial disease: application of the chronic care model.

    Science.gov (United States)

    Lovell, Marge; Myers, Kathryn; Forbes, Thomas L; Dresser, George; Weiss, Ed

    2011-12-01

    Management of chronic diseases is one of the greatest challenges facing health care professionals globally. With the aging population increasing worldwide, the number of patients afflicted with chronic diseases will increase. Peripheral Arterial Disease (PAD) is a common, chronic atherosclerotic vascular disease that is associated with a high risk of stroke, myocardial infarction and cardiovascular death. The objective of this study was to determine if a multidisciplinary Vascular Risk Management Clinic (VRMC) would improve risk factor management and health outcomes for patients with PAD with poorly-controlled risk factors. A multidisciplinary VRMC was established utilizing a novel application of the Chronic Care Model to meet the needs of PAD patients. Interventions included optimization of medical therapy, investigations for undiagnosed atherosclerosis in other vascular distributions, smoking cessation therapy, dietary assessment and counseling, and active involvement of patients in evaluating progress towards their risk factor target goals. Assessment of risk factor control was done at each clinic visit and included measures of symptom severity, blood pressure, fasting blood sugar (FBS), lipid profile, body mass index (BMI), and smoking status. Analysis of risk factors was performed for the first 103 patients followed in the clinic. Average follow-up time was 528 days, and statistically significant improvements were seen in blood pressure, LDL, HDL, total cholesterol (TC), and TC/HDL ratio, while BMI, FBS, and triglycerides remained stable. Participation in a specialized vascular risk management clinic resulted in significant improvement in risk factors for disease progression compared to baseline status. Copyright © 2011 Society for Vascular Nursing, Inc. Published by Mosby, Inc. All rights reserved.

  3. Modeling calcium sulfate chemistries with applications to Mars

    Science.gov (United States)

    Marion, G. M.; Catling, D. C.; Kargel, J. S.; Crowley, J. K.

    2016-11-01

    On Mars, evidence indicates widespread calcium sulfate minerals. Gypsum (CaSO4·2H2O) seems to be the dominant calcium sulfate mineral in the north polar region of Mars. On the other hand, anhydrite (CaSO4) and bassanite (CaSO4·0.5H2O) appear to be more common in large sedimentary deposits in the lower latitudes. The tropics are generally warmer and drier, and at least locally show evidence of acidic environments in the past. FREZCHEM is a thermodynamic modeling tool used for assessment of equilibrium involving high salinity solutions and salts, designed especially for low temperatures below 298 K (with one version adapted for temperatures up to 373 K), and we have used it to investigate many Earth, Mars, and other planetary science problems. Gypsum and anhydrite were included in earlier versions of FREZCHEM and our model Mars applications, but bassanite (the CaSO4 hemihydrate) has not previously been included. The objectives of this work are to (1) add bassanite to the FREZCHEM model, (2) examine the environments in which thermodynamic equilibrium precipitation of calcium sulfate minerals would be favored on Mars, and (3) use FREZCHEM to model situations where metastable equilibrium might be favored and promote the formation or persistence of one of these phases over the others in violation of an idealized equilibrium state. We added a bassanite equation based on high temperatures (343-373 K). A Mars simulation was based on a previously published Na-Ca-Mg-Cl-SO4 system over the temperature range of 273 to 373 K. With declining temperatures, the first solid phase under equilibrium precipitation is anhydrite at 373 K, then gypsum forms at 319 K (46 °C), and epsomite (MgSO4·7H2O) at 277 K. This sequence could reflect, for example, the precipitation sequence in a saturated solution that is slowly cooled in a deep, warm aquifer. Because FREZCHEM is based on thermodynamic equilibrium, a crude approach to problems involving metastable equilibria is

  4. The NASA Lightning Nitrogen Oxides Model (LNOM): Application to Air Quality Modeling

    Science.gov (United States)

    Koshak, William; Peterson, Harold; Khan, Maudood; Biazar, Arastoo; Wang, Lihua

    2011-01-01

    Recent improvements to the NASA Marshall Space Flight Center Lightning Nitrogen Oxides Model (LNOM) and its application to the Community Multiscale Air Quality (CMAQ) modeling system are discussed. The LNOM analyzes Lightning Mapping Array (LMA) and National Lightning Detection Network™ (NLDN) data to estimate the raw (i.e., unmixed and otherwise environmentally unmodified) vertical profile of lightning NO(x) (= NO + NO2). The latest LNOM estimates of lightning channel length distributions, lightning 1-m segment altitude distributions, and the vertical profile of lightning NO(x) are presented. The primary improvement to the LNOM is the inclusion of non-return-stroke lightning NO(x) production due to: (1) hot core stepped and dart leaders, and (2) the stepped leader corona sheath, K-changes, continuing currents, and M-components. The impact of including LNOM estimates of lightning NO(x) for an August 2006 run of CMAQ is discussed.

  5. Modelling nitrogen dynamics and distributions in the River Tweed, Scotland: an application of the INCA model

    Directory of Open Access Journals (Sweden)

    H. P. Jarvie

    2002-01-01

    Full Text Available The INCA (Integrated Nitrogen in Catchments) model was applied to the River Tweed in the Scottish Borders, a large-scale (4400 km2), spatially heterogeneous catchment, draining a wide range of agricultural land-use types, and which contributes approximately 20% of UK river flows to the North Sea. The model was calibrated for the first four years' data record (1994 to 1997) and tested over the following three years (1998 to 2000). The model calibration and testing periods incorporated a high degree of variability in climatic conditions and river flows within the Tweed catchment. The ability of the INCA model to reproduce broad-scale spatial patterns and seasonal dynamics in river flows and nitrate concentrations suggests that the processes controlling first order variability in river water nitrate concentrations have been represented successfully within the model. The tendency of the model to overestimate summer/early autumn baseflow nitrate concentrations during dry years may be linked to the operation of aquatic plant uptake effects. It is, therefore, suggested that consideration be given to incorporating a spatially and temporally variable in-stream plant uptake term for the application of INCA to lowland eutrophic rivers. Scenarios to examine possible impacts of environmental change on nitrate concentrations on the Tweed are examined. These include the effects of (i) implementing different recommendations for fertiliser use and land use change under the Nitrate Sensitive Areas (NSA) Scheme and the Scottish Code of Good Agricultural Practice, (ii) worst-case scenario changes linked to a dramatic reduction in livestock numbers as a result of a crisis in UK livestock farming and (iii) changes in atmospheric nitrogen deposition. Keywords: Nitrate, nitrogen, modelling, Tweed, INCA

  6. Addressing challenges in obtaining high coverage when model checking android applications

    CSIR Research Space (South Africa)

    Botha, Heila-Marie

    2017-07-01

    Full Text Available use model checking to systematically explore application paths while reducing the analysis size using state matching and backtracking. In particular, we extend the Java PathFinder (JPF) model checking environment for Android. We describe...

  7. Learning Objects, Type II Applications, and Embedded Pedagogical Models

    Science.gov (United States)

    Gadanidis, George; Schindler, Karen

    2006-01-01

    In this paper we consider the extent to which learning objects that focus on higher level thinking might be seen as Type II applications, as defined by Maddux, Johnson, and Willis (2001). We conclude that learning objects are at best hybrid applications, with some Type I and some Type II characteristics. We also consider whether the educational…

  8. Generic dialogue modeling for multi-application dialogue systems

    NARCIS (Netherlands)

    Bui Huu Trung, B.H.T.; Zwiers, Jakob; Nijholt, Antinus; Poel, Mannes; Renals, S.; Bengio, S.

    2006-01-01

    We present a novel approach to developing interfaces for multi-application dialogue systems. The targeted interfaces allow transparent switching between a large number of applications within one system. The approach, based on the Rapid Dialogue Prototyping Methodology (RDPM) and the Vector Space

  9. Applicability of land use models for the Houston area test site

    Science.gov (United States)

    Petersburg, R. K.; Bradford, L. H.

    1973-01-01

    Descriptions of land use models are presented which were considered for their applicability to the Houston Area Test Site. These models are representative both of the prevailing theories of land use dynamics and of basic approaches to simulation. The models considered are: a model of metropolis, a land use simulation model, the EMPIRIC land use forecasting model, a probabilistic model for residential growth, and the regional environmental management allocation process. Sources of environmental/resource information are listed.

  10. The application of single particle hydrodynamics in continuum models of multiphase flow

    Science.gov (United States)

    Decker, Rand

    1988-01-01

    A review of the application of single particle hydrodynamics in models for the exchange of interphase momentum in continuum models of multiphase flow is presented. Considered are the equations of motion for a laminar, mechanical two phase flow. Inherent to this theory is a model for the interphase exchange of momentum due to drag between the dispersed particulate and continuous fluid phases. In addition, applications of two phase flow theory to de-mixing flows require the modeling of interphase momentum exchange due to lift forces. The applications of single particle analysis in deriving models for drag and lift are examined.

  11. Usability evaluation model for mobile e-book applications

    Science.gov (United States)

    Matraf, Munya Saleh Ba; Hussain, Azham

    2017-10-01

    Evaluations of mobile e-book applications are limited and do not address all the important usability measurements. Hence, this study aimed to identify the characteristics that affect user satisfaction with the usability of mobile e-book applications. Five characteristics that have a significant effect on user satisfaction with mobile e-book applications were identified, namely readability, effectiveness, accessibility, efficiency, and navigation. A usability evaluation was conducted on three mobile e-book applications, namely Adobe Acrobat Reader, Ebook Reader, and Amazon Kindle. 30 students from Universiti Utara Malaysia evaluated the mobile e-book applications, and their satisfaction was measured using a questionnaire. The results showed that the five characteristics have a significant positive relationship with user satisfaction. This provides insights into the main characteristics that increase user satisfaction.
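    The "significant positive relationship" reported in studies of this kind is typically quantified with a correlation coefficient between each characteristic's score and the satisfaction score. A minimal sketch (the Likert-scale responses below are invented for illustration, not the study's data):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical 5-point Likert scores from a handful of respondents
readability  = [5, 4, 3, 4, 5, 2, 4, 3]
satisfaction = [5, 4, 3, 5, 5, 2, 4, 3]
r = pearson_r(readability, satisfaction)   # close to +1: strong positive relationship
```

A value of r near +1 for each of the five characteristics is what supports the study's conclusion; in practice the coefficient would be paired with a significance test against the sample size (n = 30 here).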

  12. Analysing model fit of psychometric process models: An overview, a new test and an application to the diffusion model.

    Science.gov (United States)

    Ranger, Jochen; Kuhn, Jörg-Tobias; Szardenings, Carsten

    2017-05-01

    Cognitive psychometric models embed cognitive process models into a latent trait framework in order to allow for individual differences. Due to their close relationship to the response process the models allow for profound conclusions about the test takers. However, before such a model can be used its fit has to be checked carefully. In this manuscript we give an overview of existing tests of model fit and show their relation to the generalized moment test of Newey (Econometrica, 53, 1985, 1047) and Tauchen (J. Econometrics, 30, 1985, 415). We also present a new test, the Hausman test of misspecification (Hausman, Econometrica, 46, 1978, 1251). The Hausman test consists of a comparison of two estimates of the same item parameters which should be similar if the model holds. The performance of the Hausman test is evaluated in a simulation study. In this study we illustrate its application to two popular models in cognitive psychometrics, the Q-diffusion model and the D-diffusion model (van der Maas, Molenaar, Maris, Kievit, & Borsboom, Psychol. Rev., 118, 2011, 339; Molenaar, Tuerlinckx, & van der Maas, J. Stat. Softw., 66, 2015, 1). We also compare the performance of the test to four alternative tests of model fit, namely the M2 test (Molenaar et al., J. Stat. Softw., 66, 2015, 1), the moment test (Ranger et al., Br. J. Math. Stat. Psychol., 2016) and the test for binned time (Ranger & Kuhn, Psychol. Test. Assess., 56, 2014b, 370). The simulation study indicates that the Hausman test is superior to the latter tests. The test closely adheres to the nominal Type I error rate and has higher power in most simulation conditions. © 2017 The British Psychological Society.
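    The Hausman comparison described in this abstract can be sketched for a single parameter: under the null hypothesis both estimators are consistent and the efficient one attains the smaller variance, so Var(b_c − b_e) = v_c − v_e and H = (b_c − b_e)² / (v_c − v_e) is asymptotically chi-square with 1 degree of freedom. This is a scalar illustration only (the cited work tests vectors of item parameters with a covariance-matrix version of the same formula, and the numbers in the test case are invented):

```python
from math import erfc, sqrt

def hausman_scalar(b_consistent, v_consistent, b_efficient, v_efficient):
    """Scalar Hausman statistic comparing a consistent and an efficient estimate
    of the same parameter. Under H0, Var(b_c - b_e) = v_c - v_e > 0."""
    dv = v_consistent - v_efficient
    if dv <= 0:
        raise ValueError("variance difference must be positive for the scalar test")
    return (b_consistent - b_efficient) ** 2 / dv

def hausman_p_value(h):
    """p-value of the scalar statistic against chi-square with 1 df:
    P(Z^2 > h) = 2 * (1 - Phi(sqrt(h))) = erfc(sqrt(h / 2))."""
    return erfc(sqrt(h / 2))
```

A large H (small p-value) signals that the two estimates diverge by more than sampling error allows, i.e. the model is misspecified, which is exactly the comparison the abstract describes for item parameters.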

  13. Conceptual model of an application and its use for application documentation

    Directory of Open Access Journals (Sweden)

    Martin Vonka

    2015-04-01

    Full Text Available The following article proposes a methodology for the conceptual design of a software application. This form of design is suitable for dynamic development environments and agile principles of software development. The article discusses the required scope and style used for describing the application. Unification of the documentation significantly reduces the time required for communication within the development team. Some parts of the documentation are obtained using reverse-engineering methods, for example by analysis of the application structure or its source code.

  14. Physically unclonable functions (PUFs) applications, models, and future directions

    CERN Document Server

    Wachsmann, Christian

    2014-01-01

    Today, embedded systems are used in many security-critical applications, from access control, electronic tickets, sensors, and smart devices (e.g., wearables) to automotive applications and critical infrastructures. These systems are increasingly used to produce and process both security-critical and privacy-sensitive data, which bear many security and privacy risks. Establishing trust in the underlying devices and making them resistant to software and hardware attacks is a fundamental requirement in many applications and a challenging, yet unsolved, task. Solutions solely based on software ca

  15. An overview of topic modeling and its current applications in bioinformatics.

    Science.gov (United States)

    Liu, Lin; Tang, Lin; Dong, Wen; Yao, Shaowen; Zhou, Wei

    2016-01-01

    With the rapid accumulation of biological datasets, machine learning methods designed to automate data analysis are urgently needed. In recent years, so-called topic models that originated from the field of natural language processing have been receiving much attention in bioinformatics because of their interpretability. Our aim was to review the application and development of topic models for bioinformatics. This paper starts with the description of a topic model, with a focus on the understanding of topic modeling. A general outline is provided on how to build an application in a topic model and how to develop a topic model. Meanwhile, the literature on application of topic models to biological data was searched and analyzed in depth. According to the types of models and the analogy between the concept of document-topic-word and a biological object (as well as the tasks of a topic model), we categorized the related studies and provided an outlook on the use of topic models for the development of bioinformatics applications. Topic modeling is a useful method (in contrast to the traditional means of data reduction in bioinformatics) and enhances researchers' ability to interpret biological information. Nevertheless, due to the lack of topic models optimized for specific biological data, the studies on topic modeling in biological data still have a long and challenging road ahead. We believe that topic models are a promising method for various applications in bioinformatics research.

  16. The social networking application success model : An empirical study of Facebook and Twitter

    NARCIS (Netherlands)

    Ou, Carol; Davison, R.M.; Huang, Q.

    2016-01-01

    Social networking applications (SNAs) are among the fastest growing web applications of recent years. In this paper, we propose a causal model to assess the success of SNAs, grounded on DeLone and McLean’s updated information systems (IS) success model. In addition to their original three dimensions

  17. Soft robots for healthcare applications design, modeling, and control

    CERN Document Server

    Xie, Shane; Meng, Wei

    2017-01-01

    This book presents novel applications of mechatronics to provide better clinical rehabilitation services and new insights into emerging technologies utilized in soft robots for healthcare, and is essential reading for researchers and students working in these and related fields.

  18. Microwave-assisted rock breaking modelling and application

    CSIR Research Space (South Africa)

    Monchusi, B

    2012-10-01

    Full Text Available As part of the ongoing development of novel mining methods, the CSIR has developed alternative methods to break rocks. In this case, we show the application of microwave energy to break narrow tabular ore bodies....

  19. Performance prediction model for distributed applications on multicore clusters

    CSIR Research Space (South Africa)

    Khanyile, NP

    2012-07-01

    Full Text Available Distributed processing offers a way of successfully dealing with computationally demanding applications such as scientific problems. Over the years, researchers have investigated ways to predict the performance of parallel algorithms. Amdahl’s law...

  20. Multimedia Teleservices Modelled with the OSI Application Layer Structure

    NARCIS (Netherlands)

    van Rijssen, Erwin; Widya, I.A.; Michiels, E.F.; Hutchison, D.; Christiansen, H.; Coulson, G.; Danthine, A.A.S.

    This paper looks into the communications capabilities that are required by distributed multimedia applications to achieve relation preserving information exchange. These capabilities are derived by analyzing the notion of information exchange and are embodied in communications functionalities. To

  1. Bilayer Graphene Application on NO2 Sensor Modelling

    OpenAIRE

    Elnaz Akbari; Yusof, R.; M. T. Ahmadi; Enzevaee, A.; M. J. Kiani; H. Karimi; Rahmani, M.

    2014-01-01

    Graphene is one of the carbon allotropes which is a single atom thin layer with sp2 hybridized and two-dimensional (2D) honeycomb structure of carbon. As an outstanding material exhibiting unique mechanical, electrical, and chemical characteristics including high strength, high conductivity, and high surface area, graphene has earned a remarkable position in today’s experimental and theoretical studies as well as industrial applications. One such application incorporates the idea of using gra...

  2. Modeling and management of usage-aware distributed datasets for global Smart City Application Ecosystems

    OpenAIRE

    Johannes M. Schleicher; Michael Vögler; Christian Inzinger; Schahram Dustdar

    2017-01-01

    The ever-growing amount of data produced by and in today’s smart cities offers significant potential for novel applications created by city stakeholders as well as third parties. Current smart city application models mostly assume that data is exclusively managed by and bound to its original application and location. We argue that smart city data must not be constrained to such data silos so that future smart city applications can seamlessly access and integrate data from multiple sources acr...

  3. A Systematic Review of Agent-Based Modelling and Simulation Applications in the Higher Education Domain

    Science.gov (United States)

    Gu, X.; Blackmore, K. L.

    2015-01-01

    This paper presents the results of a systematic review of agent-based modelling and simulation (ABMS) applications in the higher education (HE) domain. Agent-based modelling is a "bottom-up" modelling paradigm in which system-level behaviour (macro) is modelled through the behaviour of individual local-level agent interactions (micro).…

  4. Application of black-box models to HVAC systems for fault detection

    NARCIS (Netherlands)

    Peitsman, H.C.; Bakker, V.E.

    1996-01-01

    This paper describes the application of black-box models for fault detection and diagnosis (FDD) in heating, ventilating, and air-conditioning (HVAC) systems. In this study, multiple-input/single-output (MISO) ARX models and artificial neural network (ANN) models are used. The ARX models are

  5. Mediation models and their application in intercultural mediation

    Directory of Open Access Journals (Sweden)

    Carlos Giménez Romero

    2016-10-01

    Full Text Available The article comments on three models of mediation, always from the perspective of intercultural mediation: that of the Harvard School of Negotiation, the transformative model of Bush and Folger, and the circular narrative model of Sara Cobb, while also taking into account the ideas of other models put forward by Lederach, Galtung, Fitzduff, etc. Together with other authors and mediation practitioners, the author considers that all models contribute valuable elements and that, whatever the preference, it is always convenient to incorporate useful elements from the other models when acting in specific situations.

  6. Thermodynamic Model Formulations for Inhomogeneous Solids with Application to Non-isothermal Phase Field Modelling

    Science.gov (United States)

    Gladkov, Svyatoslav; Kochmann, Julian; Reese, Stefanie; Hütter, Markus; Svendsen, Bob

    2016-04-01

    The purpose of the current work is the comparison of thermodynamic model formulations for chemically and structurally inhomogeneous solids at finite deformation based on "standard" non-equilibrium thermodynamics [SNET: e. g. S. de Groot and P. Mazur, Non-equilibrium Thermodynamics, North Holland, 1962] and the general equation for non-equilibrium reversible-irreversible coupling (GENERIC) [H. C. Öttinger, Beyond Equilibrium Thermodynamics, Wiley Interscience, 2005]. In the process, non-isothermal generalizations of standard isothermal conservative [e. g. J. W. Cahn and J. E. Hilliard, Free energy of a non-uniform system. I. Interfacial energy. J. Chem. Phys. 28 (1958), 258-267] and non-conservative [e. g. S. M. Allen and J. W. Cahn, A macroscopic theory for antiphase boundary motion and its application to antiphase domain coarsening. Acta Metall. 27 (1979), 1085-1095; A. G. Khachaturyan, Theory of Structural Transformations in Solids, Wiley, New York, 1983] diffuse interface or "phase-field" models [e. g. P. C. Hohenberg and B. I. Halperin, Theory of dynamic critical phenomena, Rev. Modern Phys. 49 (1977), 435-479; N. Provatas and K. Elder, Phase Field Methods in Material Science and Engineering, Wiley-VCH, 2010.] for solids are obtained. The current treatment is consistent with, and includes, previous works [e. g. O. Penrose and P. C. Fife, Thermodynamically consistent models of phase-field type for the kinetics of phase transitions, Phys. D 43 (1990), 44-62; O. Penrose and P. C. Fife, On the relation between the standard phase-field model and a "thermodynamically consistent" phase-field model. Phys. D 69 (1993), 107-113] on non-isothermal systems as a special case. In the context of no-flux boundary conditions, the SNET- and GENERIC-based approaches are shown to be completely consistent with each other and result in equivalent temperature evolution relations.
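    For orientation, the standard isothermal phase-field models referred to above can be written in their familiar textbook forms; this is a sketch of the baseline conservative (Cahn-Hilliard) and non-conservative (Allen-Cahn) dynamics, not the non-isothermal generalizations derived in the paper:

    ```latex
    % Free energy functional with bulk density f and gradient (interfacial) energy
    F[\phi] = \int_\Omega \left( f(\phi) + \frac{\kappa}{2}\,\lvert \nabla \phi \rvert^2 \right) \mathrm{d}V

    % Non-conservative (Allen-Cahn) relaxation of a non-conserved order parameter
    \frac{\partial \phi}{\partial t} = -L\,\frac{\delta F}{\delta \phi}

    % Conservative (Cahn-Hilliard) dynamics of a conserved concentration field
    \frac{\partial c}{\partial t} = \nabla \cdot \left( M\,\nabla \frac{\delta F}{\delta c} \right)
    ```

    Here $L$ and $M$ are a kinetic coefficient and a mobility, respectively; with no-flux boundary conditions the Cahn-Hilliard form conserves the total amount of $c$ while the Allen-Cahn form does not.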

  7. Improving Stochastic Modelling of Daily Rainfall Using the ENSO Index: Model Development and Application in Chile

    Directory of Open Access Journals (Sweden)

    Diego Urdiales

    2018-02-01

    Full Text Available Stochastic weather simulators, or weather generators (WGs), have gained wide acceptance and been used for a variety of purposes, including climate change studies and the evaluation of the effects of climate variability and uncertainty. The two major challenges in WGs are improving the estimation of interannual variability and reducing overdispersion in the synthetic series of simulated weather. The objective of this work is to develop a WG model of daily rainfall that incorporates a covariate accounting for interannual variability, and to apply it in three climate regions (arid, Mediterranean, and temperate) of Chile. Precipitation occurrence was modeled using a two-stage, first-order Markov chain, whose parameters are fitted with a generalized linear model (GLM) using a logistic function. This function takes monthly values of the observed sea surface temperature anomalies of Region 3.4 of the El Niño-Southern Oscillation (ENSO) index as a covariate. Precipitation intensity was simulated with a mixed exponential distribution, fitted using a maximum likelihood approach. The stochastic simulations show that applying the approach to Mediterranean and arid climates largely eliminates the overdispersion problem, resulting in much improved interannual variability in the simulated values.
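    The generator described above can be sketched in a few lines: a first-order Markov chain decides wet/dry occurrence with a logistic link on an ENSO covariate, and wet-day amounts come from a mixed exponential distribution. All parameter values below are hypothetical placeholders, not the fitted values from the study:

    ```python
    import math
    import random

    def simulate_daily_rainfall(enso, occ_params, int_params, seed=42):
        """Stochastic daily rainfall sketch: first-order Markov chain for
        occurrence (logistic link on an ENSO covariate) and a mixed
        exponential distribution for wet-day amounts (in mm)."""
        a0, a1, b0, b1 = occ_params      # logistic coefficients: dry->wet, wet->wet
        w, mean1, mean2 = int_params     # mixing weight and component means (mm)
        rng = random.Random(seed)
        sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
        series, wet_prev = [], False
        for e in enso:
            # transition probability depends on yesterday's state and the ENSO index
            p_wet = sigmoid((b0 + b1 * e) if wet_prev else (a0 + a1 * e))
            wet = rng.random() < p_wet
            amount = 0.0
            if wet:
                mean = mean1 if rng.random() < w else mean2
                amount = rng.expovariate(1.0 / mean)
            series.append(round(amount, 2))
            wet_prev = wet
        return series
    ```

    Fitting the four logistic coefficients by GLM and the mixture parameters by maximum likelihood, as the paper does, would replace the placeholder values; the simulation loop itself stays the same.
    
    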

  8. Airline Sustainability Modeling: A New Framework with Application of Bayesian Structural Equation Modeling

    Directory of Open Access Journals (Sweden)

    Hashem Salarzadeh Jenatabadi

    2016-11-01

    Full Text Available There are many factors which could influence the sustainability of airlines. The main purpose of this study is to introduce a framework for a financial sustainability index and model it based on structural equation modeling (SEM) with maximum likelihood and Bayesian predictors. The introduced framework includes economic performance, operational performance, cost performance, and financial performance. Based on both Bayesian SEM (Bayesian-SEM) and Classical SEM (Classical-SEM), it was found that economic performance together with both operational performance and cost performance is significantly related to the financial performance index. Four mathematical indices are employed, namely root mean square error, coefficient of determination, mean absolute error, and mean absolute percentage error, to compare the efficiency of Bayesian-SEM and Classical-SEM in predicting airline financial performance. The outputs confirmed that the framework with Bayesian prediction delivered a good fit with the data, whereas the framework predicted with a Classical-SEM approach did not produce a well-fitting model. The reasons for this discrepancy between Classical and Bayesian predictions, as well as the potential advantages and caveats of applying the Bayesian approach in airline sustainability studies, are discussed.

  9. Application of Multilevel Models to Morphometric Data. Part 1. Linear Models and Hypothesis Testing

    Directory of Open Access Journals (Sweden)

    O. Tsybrovskyy

    2003-01-01

    Full Text Available Morphometric data usually have a hierarchical structure (i.e., cells are nested within patients), which should be taken into consideration in the analysis. In recent years, special methods for handling hierarchical data, called multilevel models (MM), as well as corresponding software, have undergone considerable development. However, these methods have not yet been applied to morphometric data. In this paper we report our first experience of analyzing karyometric data by means of MLwiN, a dedicated program for multilevel modeling. Our data were obtained from 34 follicular adenomas and 44 follicular carcinomas of the thyroid. We show examples of fitting and interpreting MM of different complexity, and draw a number of interesting conclusions about the differences in nuclear morphology between follicular thyroid adenomas and carcinomas. We also demonstrate substantial advantages of multilevel models over the conventional, single-level statistics previously adopted to analyze karyometric data. In addition, some theoretical issues related to MM as well as major statistical software for MM are briefly reviewed.

  10. Modeling and Application of Customer Lifetime Value in Online Retail

    Directory of Open Access Journals (Sweden)

    Pavel Jasek

    2018-01-01

    Full Text Available This article provides an empirical statistical analysis and discussion of the predictive abilities of selected customer lifetime value (CLV models that could be used in online shopping within e-commerce business settings. The comparison of CLV predictive abilities, using selected evaluation metrics, is made on selected CLV models: Extended Pareto/NBD model (EP/NBD, Markov chain model and Status Quo model. The article uses six online store datasets with annual revenues in the order of tens of millions of euros for the comparison. The EP/NBD model has outperformed other selected models in a majority of evaluation metrics and can be considered good and stable for non-contractual relations in online shopping. The implications for the deployment of selected CLV models in practice, as well as suggestions for future research, are also discussed.
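    One of the compared approaches, the Markov chain CLV model, has a compact core: customers move between states with fixed transition probabilities, and CLV is the discounted sum of expected per-period margins. This is a minimal illustrative sketch with hypothetical numbers, not the article's fitted models or datasets:

    ```python
    def clv_markov(transitions, margins, start_state, discount, horizon):
        """Finite-horizon customer lifetime value under a Markov chain model:
        expected per-period margin, discounted and summed over the horizon."""
        n = len(transitions)
        probs = [1.0 if i == start_state else 0.0 for i in range(n)]
        clv = 0.0
        for t in range(horizon + 1):
            clv += (discount ** t) * sum(p * m for p, m in zip(probs, margins))
            # advance the state distribution one period: probs' = probs @ P
            probs = [sum(probs[i] * transitions[i][j] for i in range(n))
                     for j in range(n)]
        return clv

    # Hypothetical two-state example: an "active" customer yields a 100 EUR
    # margin per period and churns with probability 0.2; "churned" is absorbing.
    P = [[0.8, 0.2],
         [0.0, 1.0]]
    value = clv_markov(P, margins=[100.0, 0.0], start_state=0,
                       discount=0.9, horizon=3)
    ```

    The EP/NBD model favoured in the article replaces the fixed transition matrix with probabilistic purchase and dropout processes estimated per customer, which is what makes it better suited to non-contractual settings.
    
    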

  11. Model order reduction techniques with applications in finite element analysis

    CERN Document Server

    Qu, Zu-Qing

    2004-01-01

    Despite the continued rapid advance in computing speed and memory the increase in the complexity of models used by engineers persists in outpacing them. Even where there is access to the latest hardware, simulations are often extremely computationally intensive and time-consuming when full-blown models are under consideration. The need to reduce the computational cost involved when dealing with high-order/many-degree-of-freedom models can be offset by adroit computation. In this light, model-reduction methods have become a major goal of simulation and modeling research. Model reduction can also ameliorate problems in the correlation of widely used finite-element analyses and test analysis models produced by excessive system complexity. Model Order Reduction Techniques explains and compares such methods focusing mainly on recent work in dynamic condensation techniques: - Compares the effectiveness of static, exact, dynamic, SEREP and iterative-dynamic condensation techniques in producing valid reduced-order mo...
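    The static (Guyan) condensation compared in the book reduces a model by expressing "slave" degrees of freedom in terms of retained "master" ones via the stiffness matrix. A minimal two-DOF sketch, with a made-up spring-chain example rather than anything from the book:

    ```python
    def guyan_condense_2dof(K, M, master=0):
        """Static (Guyan) condensation of a 2-DOF system to the master DOF.
        With T = [1, -K_ss^{-1} K_sm]: K_red = T^T K T and M_red = T^T M T."""
        s = 1 - master
        t = -K[s][master] / K[s][s]   # slave displacement per unit master motion
        T = [0.0, 0.0]
        T[master], T[s] = 1.0, t
        k_red = sum(T[i] * K[i][j] * T[j] for i in range(2) for j in range(2))
        m_red = sum(T[i] * M[i][j] * T[j] for i in range(2) for j in range(2))
        return k_red, m_red

    # Two springs of stiffness k in series (ground-k-m1-k-m2), condensing out DOF 2:
    k = 10.0
    K = [[2 * k, -k], [-k, k]]
    M = [[2.0, 0.0], [0.0, 3.0]]
    k_red, m_red = guyan_condense_2dof(K, M, master=0)
    ```

    Because the reduction is exact only for static loads, the dynamic, SEREP, and iterative-dynamic variants discussed in the book refine this transformation to better preserve the reduced model's eigenstructure.
    
    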

  12. Fast All-Sky Radiation Model for Solar Applications (FARMS): A Brief Overview of Mechanisms, Performance, and Applications: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Yu; Sengupta, Manajit

    2016-06-01

    Solar radiation can be computed using radiative transfer models, such as the Rapid Radiative Transfer Model (RRTM) and its general circulation model applications, and used for various energy applications. Due to the complexity of computing radiation fields in aerosol and cloudy atmospheres, simulating solar radiation can be extremely time-consuming, but many approximations--e.g., the two-stream approach and the delta-M truncation scheme--can be utilized. To provide a new fast option for computing solar radiation, we developed the Fast All-sky Radiation Model for Solar applications (FARMS) by parameterizing the simulated diffuse horizontal irradiance and direct normal irradiance for cloudy conditions from the RRTM runs using a 16-stream discrete ordinates radiative transfer method. The solar irradiance at the surface was simulated by combining the cloud irradiance parameterizations with a fast clear-sky model, REST2. To understand the accuracy and efficiency of the newly developed fast model, we analyzed FARMS runs using cloud optical and microphysical properties retrieved using GOES data from 2009-2012. The global horizontal irradiance for cloudy conditions was simulated using FARMS and RRTM for general circulation modeling with a two-stream approximation and compared to measurements taken from the U.S. Department of Energy's Atmospheric Radiation Measurement Climate Research Facility Southern Great Plains site. Our results indicate that the accuracy of FARMS is comparable to or better than the two-stream approach; however, FARMS is approximately 400 times more efficient because it does not explicitly solve the radiative transfer equation for each individual cloud condition. Radiative transfer model runs are computationally expensive, but this model is promising for broad applications in solar resource assessment and forecasting. It is currently being used in the National Solar Radiation Database, which is publicly available from the National Renewable Energy Laboratory.

  13. Quantitative Structure-Use Relationship Model thresholds for Model Validation, Domain of Applicability, and Candidate Alternative Selection

    Data.gov (United States)

    U.S. Environmental Protection Agency — This file contains values for the model training set confusion matrix and the domain of applicability evaluation, based on training set to predicted chemicals structural...

  14. Application of queueing models to multiprogrammed computer systems operating in a time-critical environment

    Science.gov (United States)

    Eckhardt, D. E., Jr.

    1979-01-01

    A model of a central processor (CPU) that services background applications in the presence of time-critical activity is presented. The CPU is viewed as an M/M/1 queueing system subject to periodic interrupts by a deterministic, time-critical process. The Laplace transform of the distribution of service times for the background applications is developed. The use of state-of-the-art queueing models for studying the background processing capability of time-critical computer systems is discussed, and the results of a model validation study that support this application of queueing models are presented.
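    The basic M/M/1 quantities underlying such a model are closed-form. As a rough sketch, the periodic time-critical load can be approximated as a reduction of the effective service rate; this is an assumption for illustration, not the paper's exact Laplace-transform treatment of interrupted service times:

    ```python
    def mm1_background_metrics(lam, mu, interrupt_fraction=0.0):
        """M/M/1 sketch of background processing on a CPU whose capacity is
        eroded by a periodic time-critical workload, approximated here as a
        fractional reduction of the service rate."""
        mu_eff = mu * (1.0 - interrupt_fraction)
        if lam >= mu_eff:
            raise ValueError("unstable: arrival rate exceeds effective capacity")
        rho = lam / mu_eff              # utilization
        L = rho / (1.0 - rho)           # mean number of jobs in system
        W = 1.0 / (mu_eff - lam)        # mean response time (Little's law: L = lam * W)
        return rho, L, W

    # Hypothetical numbers: 2 background jobs/s arriving, 5 jobs/s service capacity.
    rho, L, W = mm1_background_metrics(lam=2.0, mu=5.0, interrupt_fraction=0.0)
    ```

    Raising `interrupt_fraction` shows the qualitative effect studied in the paper: as the time-critical duty cycle grows, the effective capacity shrinks and background queue length and response time blow up near saturation.
    
    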

  15. Application of Physiographic Soil Erosion–Deposition Model in ...

    Indian Academy of Sciences (India)

    69

    (2010) applied the WASA-SED model to simulate the runoff, erosion, and transport and retention processes of ... pure mathematical models used to simulate soil erosion processes based on the conservation of energy and ... Chaudhry, M. A., Habib-ur-Rehman, M., Akhtar, M. N., et al., 2014. Modeling sediment.

  16. Applicability of deterministic propagation models for mobile operators

    NARCIS (Netherlands)

    Mantel, O.C.; Oostveen, J.C.; Popova, M.P.

    2007-01-01

    Deterministic propagation models based on ray tracing or ray launching are widely studied in the scientific literature, because of their high accuracy. Also many commercial propagation modelling tools include ray-based models. In spite of this, they are hardly used in commercial operations by

  17. Application of an Aesthetic Evaluation Model to Data Entry Screens.

    Science.gov (United States)

    Ngo, D. C. L.; Byrne, J. G.

    2001-01-01

    Describes a new model for quantitatively assessing screen formats. Results of applying the model to data entry screens support the use of the model. Also described is a critiquing mechanism embedded in a user interface design environment as a demonstration of this approach. (Author/AEF)

  18. A parametric daily precipitation model application in Botswana ...

    African Journals Online (AJOL)

    A parametric precipitation model is developed for generation of daily rainfall time series based on historic data. The precipitation model is a composite model of Markov-chain (MC) and probability distribution (PD). Thirty nine rain gauge stations in Botswana that have daily rainfall record length in the range of 11 to 83 years ...

  19. Review of forest landscape models: types, methods, development and applications

    Science.gov (United States)

    Weimin Xi; Robert N. Coulson; Andrew G. Birt; Zong-Bo Shang; John D. Waldron; Charles W. Lafon; David M. Cairns; Maria D. Tchakerian; Kier D. Klepzig

    2009-01-01

    Forest landscape models simulate forest change through time using spatially referenced data across a broad spatial scale (i.e. landscape scale) generally larger than a single forest stand. Spatial interactions between forest stands are a key component of such models. These models can incorporate other spatio-temporal processes such as...

  20. A Leadership Identity Development Model: Applications from a Grounded Theory

    Science.gov (United States)

    Komives, Susan R.; Mainella, Felicia C.; Longerbeam, Susan D.; Osteen, Laura; Owen, Julie E.

    2006-01-01

    This article describes a stage-based model of leadership identity development (LID) that resulted from a grounded theory study on developing a leadership identity (Komives, Owen, Longerbeam, Mainella, & Osteen, 2005). The LID model expands on the leadership identity stages, integrates the categories of the grounded theory into the LID model, and…