WorldWideScience

Sample records for modeling results based

  1. Photovoltaic Grid-Connected Modeling and Characterization Based on Experimental Results.

    Science.gov (United States)

    Humada, Ali M; Hojabri, Mojgan; Sulaiman, Mohd Herwan Bin; Hamada, Hussein M; Ahmed, Mushtaq N

    2016-01-01

    A grid-connected photovoltaic (PV) system operating under fluctuating weather conditions has been modeled and characterized based on a specific test bed. A mathematical model of a small-scale PV system has been developed, mainly for residential use, and the potential results have been simulated. The proposed PV model is based on three PV parameters: the photocurrent, IL, the reverse diode saturation current, Io, and the diode ideality factor, n. The accuracy of the proposed model and its parameters was evaluated against different benchmarks. The results showed that the proposed model fits the experimental results, including the I-V characteristic curve, with high accuracy compared to the other models. The results of this study can be considered valuable for the installation of grid-connected PV systems under fluctuating climatic conditions.
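
    For context, the three parameters named in the abstract enter the standard single-diode PV equation I = IL − Io·(exp(V/(n·Vt)) − 1). The sketch below simply evaluates that equation; the parameter values, cell count, and temperature are illustrative placeholders, not figures from the paper, and series/shunt resistances are ignored.

```python
# Minimal sketch of the standard single-diode PV equation built from the three
# parameters named in the abstract (IL, Io, n). Numeric values are illustrative
# placeholders, not taken from the paper; series/shunt resistance is ignored.
import numpy as np

K_B = 1.380649e-23      # Boltzmann constant, J/K
Q_E = 1.602176634e-19   # elementary charge, C

def single_diode_current(v, i_l, i_o, n, n_cells=36, temp_k=298.15):
    """Module current I (A) for terminal voltage v (V)."""
    v_t = n_cells * K_B * temp_k / Q_E          # thermal voltage of the cell string
    return i_l - i_o * (np.exp(v / (n * v_t)) - 1.0)

voltages = np.linspace(0.0, 22.0, 200)          # illustrative voltage sweep
currents = single_diode_current(voltages, i_l=8.2, i_o=1e-7, n=1.3)
print(f"Isc ~ {currents[0]:.2f} A, I at 22 V ~ {currents[-1]:.2f} A")
```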

  2. Photovoltaic Grid-Connected Modeling and Characterization Based on Experimental Results

    Science.gov (United States)

    Humada, Ali M.; Hojabri, Mojgan; Sulaiman, Mohd Herwan Bin; Hamada, Hussein M.; Ahmed, Mushtaq N.

    2016-01-01

    A grid-connected photovoltaic (PV) system operating under fluctuating weather conditions has been modeled and characterized based on a specific test bed. A mathematical model of a small-scale PV system has been developed, mainly for residential use, and the potential results have been simulated. The proposed PV model is based on three PV parameters: the photocurrent, IL, the reverse diode saturation current, Io, and the diode ideality factor, n. The accuracy of the proposed model and its parameters was evaluated against different benchmarks. The results showed that the proposed model fits the experimental results, including the I-V characteristic curve, with high accuracy compared to the other models. The results of this study can be considered valuable for the installation of grid-connected PV systems under fluctuating climatic conditions. PMID:27035575

  3. A Risk Prediction Model for Sporadic CRC Based on Routine Lab Results.

    Science.gov (United States)

    Boursi, Ben; Mamtani, Ronac; Hwang, Wei-Ting; Haynes, Kevin; Yang, Yu-Xiao

    2016-07-01

    Current risk scores for colorectal cancer (CRC) are based on demographic and behavioral factors and have limited predictive value. The aim was to develop a novel risk prediction model for sporadic CRC using clinical and laboratory data in electronic medical records. We conducted a nested case-control study in a UK primary care database. Cases included those with a diagnostic code of CRC, aged 50-85. Each case was matched with four controls using incidence density sampling. CRC predictors were examined using univariate conditional logistic regression, and variables that met the p-value threshold were carried into the prediction models. A demographic and behavioral model that included age, sex, height, obesity, ever smoking, alcohol dependence, and previous screening colonoscopy had an AUC of 0.58 (0.57-0.59) with poor goodness of fit. A laboratory-based model including hematocrit, MCV, lymphocytes, and neutrophil-lymphocyte ratio (NLR) had an AUC of 0.76 (0.76-0.77), a McFadden's R2 of 0.21, and an NRI of 47.6%. A combined model including sex, hemoglobin, MCV, white blood cells, platelets, NLR, and oral hypoglycemic use had an AUC of 0.80 (0.79-0.81), a McFadden's R2 of 0.27, and an NRI of 60.7%. Similar results were obtained in an internal validation set. A laboratory-based risk model had good predictive power for sporadic CRC risk.
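
    As a rough illustration of the kind of laboratory-based model described (the study itself used conditional logistic regression on matched case-control data, which this sketch does not replicate), the following fits an ordinary logistic regression on synthetic lab values and reports an AUC. Variable names follow the abstract; the data and coefficients are invented.

```python
# Hedged sketch: logistic regression on routine lab values with AUC evaluation.
# Synthetic data only; not the study's dataset, matching scheme, or coefficients.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.normal(41, 4, n),     # hematocrit (%)
    rng.normal(90, 6, n),     # MCV (fL)
    rng.normal(2.0, 0.6, n),  # lymphocytes (10^9/L)
    rng.normal(2.5, 1.0, n),  # neutrophil-lymphocyte ratio (NLR)
])
# Synthetic outcome loosely tied to low hematocrit and high NLR
logit = -4 + 0.15 * (40 - X[:, 0]) + 0.4 * (X[:, 3] - 2.5)
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```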

  4. Dynamic analysis of ITER tokamak. Based on results of vibration test using scaled model

    International Nuclear Information System (INIS)

    Takeda, Nobukazu; Kakudate, Satoshi; Nakahira, Masataka

    2005-01-01

    Vibration experiments of the support structures with flexible plates for the ITER major components, such as the toroidal field coils (TF coils) and the vacuum vessel (VV), were performed using small-sized flexible plates, aiming to obtain basic mechanical characteristics such as the dependence of the stiffness on the loading angle. The experimental results were compared with analytical ones in order to establish an adequate analytical model for the ITER support structure with flexible plates. As a result, the bolted connection of the flexible plates to the base plate strongly affected the stiffness of the flexible plates. After studying how to model the bolted connection, it was found that analytical results modeling the bolts with finite stiffness only in the axial direction and infinite stiffness in the other directions agree well with the experimental ones. Based on this, numerical analysis of the actual support structures of the ITER VV and TF coils was performed. The support structure composed of flexible plates and connection bolts was modeled as a spring composed of only two spring elements simulating the in-plane and out-of-plane stiffness of the support structure with flexible plates, including the effect of the connection bolts. The stiffness of both spring models for the VV and TF coil agrees well with that of shell models simulating the actual structures, such as flexible plates and connection bolts, based on the experimental results. It is therefore found that the spring model with only two values of stiffness makes it possible to simplify the complicated support structure with flexible plates for the dynamic analysis of the VV and TF coil. Using the proposed spring model, dynamic analyses of the VV and TF coil of ITER were performed to estimate their integrity under the design earthquake. As a result, it is found that the maximum relative displacement of 8.6 mm between the VV and TF coil is much less than 100 mm, so that the integrity of the VV and TF coil of ITER is ensured.

  5. Theoretical results on the tandem junction solar cell based on its Ebers-Moll transistor model

    Science.gov (United States)

    Goradia, C.; Vaughn, J.; Baraona, C. R.

    1980-01-01

    A one-dimensional theoretical model of the tandem junction solar cell (TJC) with base resistivity greater than about 1 ohm-cm and under low-level injection has been derived. This model extends a previously published conceptual model which treats the TJC as an npn transistor. The model gives theoretical expressions for each of the Ebers-Moll type currents of the illuminated TJC and allows for the calculation of the spectral response, I(sc), V(oc), FF and eta under variation of one or more of the geometrical and material parameters and 1-MeV electron fluence. Results of computer calculations based on this model are presented and discussed. These results indicate that for space applications, both a high beginning-of-life efficiency, greater than 15% AM0, and a high radiation tolerance can be achieved only with thin (less than 50 microns) TJCs with high base resistivity (greater than 10 ohm-cm).
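
    For reference, the Ebers-Moll description on which the TJC model builds writes the terminal currents of an npn transistor in the standard textbook form below; the paper's expressions for the illuminated cell add photogenerated terms that are not reproduced here.

```latex
% Standard Ebers--Moll currents for an npn transistor (textbook form only).
\[
\begin{aligned}
I_E &= I_{ES}\left(e^{V_{BE}/V_T}-1\right) - \alpha_R\, I_{CS}\left(e^{V_{BC}/V_T}-1\right),\\
I_C &= \alpha_F\, I_{ES}\left(e^{V_{BE}/V_T}-1\right) - I_{CS}\left(e^{V_{BC}/V_T}-1\right),
\end{aligned}
\qquad \alpha_F I_{ES} = \alpha_R I_{CS}.
\]
```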

  6. Global Monthly CO2 Flux Inversion Based on Results of Terrestrial Ecosystem Modeling

    Science.gov (United States)

    Deng, F.; Chen, J.; Peters, W.; Krol, M.

    2008-12-01

    Most of our understanding of the sources and sinks of atmospheric CO2 has come from inverse studies of atmospheric CO2 concentration measurements. However, the number of currently available observation stations and our ability to simulate the diurnal planetary boundary layer evolution over continental regions essentially limit the number of regions that can be reliably inverted globally, especially over continental areas. In order to overcome these restrictions, a nested inverse modeling system was developed based on the Bayesian principle for estimating carbon fluxes of 30 regions in North America and 20 regions for the rest of the globe. Inverse modeling was conducted in monthly steps using CO2 concentration measurements from 5 years (2000-2005) with the following two models: (a) an atmospheric transport model (TM5) is used to generate the transport matrix, where the diurnal variation of atmospheric CO2 concentration is considered to enhance the use of the afternoon-hour average CO2 concentration measurements over the continental sites; (b) a process-based terrestrial ecosystem model (BEPS) is used to produce hourly carbon fluxes as the background of our inversion, which minimizes the limitation due to our inability to solve the inverse problem at high resolution. We will present our recent results achieved through a combination of the bottom-up modeling with BEPS and the top-down modeling based on TM5 driven by offline meteorological fields generated by the European Centre for Medium-Range Weather Forecasts (ECMWF).
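
    For reference, a Bayesian synthesis inversion of this kind typically estimates regional flux adjustments from concentration observations with the standard least-squares update below; this is the generic form, not the specific TM5/BEPS implementation.

```latex
% Generic Bayesian (least-squares) flux inversion update.
\[
\hat{x} = x_a + \left(\mathbf{H}^{\mathsf T}\mathbf{R}^{-1}\mathbf{H} + \mathbf{B}^{-1}\right)^{-1}
\mathbf{H}^{\mathsf T}\mathbf{R}^{-1}\left(y - \mathbf{H}x_a\right)
\]
```

    Here x_a is the prior flux (in this study, the BEPS background), H the transport operator (built with TM5), y the concentration observations, and B and R the prior and observation error covariances.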

  7. Atmospheric Deposition Modeling Results

    Data.gov (United States)

    U.S. Environmental Protection Agency — This asset provides data on model results for dry and total deposition of sulfur, nitrogen and base cation species. Components include deposition velocities, dry...

  8. Pedestrian simulation model based on principles of bounded rationality: results of validation tests

    NARCIS (Netherlands)

    Zhu, W.; Timmermans, H.J.P.; Lo, H.P.; Leung, Stephen C.H.; Tan, Susanna M.L.

    2009-01-01

    Over the years, different modelling approaches to simulating pedestrian movement have been suggested. The majority of pedestrian decision models are based on the concept of utility maximization. To explore alternatives, we developed the heterogeneous heuristic model (HHM), based on principles of bounded rationality.

  9. An Associative Index Model for the Results List Based on Vannevar Bush's Selection Concept

    Science.gov (United States)

    Cole, Charles; Julien, Charles-Antoine; Leide, John E.

    2010-01-01

    Introduction: We define the results list problem in information search and suggest the "associative index model", an ad-hoc, user-derived indexing solution based on Vannevar Bush's description of an associative indexing approach for his memex machine. We further define what selection means in indexing terms with reference to Charles…

  10. Exploring the uncertainties of early detection results: model-based interpretation of mayo lung project

    Directory of Open Access Journals (Sweden)

    Berman Barbara

    2011-03-01

    Full Text Available Abstract Background The Mayo Lung Project (MLP), a randomized controlled clinical trial of lung cancer screening conducted between 1971 and 1986 among male smokers aged 45 or above, demonstrated an increase in lung cancer survival from the time of diagnosis, but no reduction in lung cancer mortality. Whether this result necessarily indicates a lack of mortality benefit for screening remains controversial. A number of hypotheses have been proposed to explain the observed outcome, including over-diagnosis, screening sensitivity, and population heterogeneity (initial difference in lung cancer risks between the two trial arms). This study is intended to provide model-based testing for some of these important arguments. Method Using a micro-simulation model, the MISCAN-lung model, we explore the possible influence of screening sensitivity, systematic error, over-diagnosis and population heterogeneity. Results Calibrating screening sensitivity, systematic error, or over-diagnosis does not noticeably improve the fit of the model, whereas calibrating population heterogeneity helps the model predict lung cancer incidence better. Conclusions Our conclusion is that the hypothesized imperfections in screening sensitivity, systematic error, and over-diagnosis do not in themselves explain the observed trial results. Model fit improvement achieved by accounting for population heterogeneity suggests a higher risk of cancer incidence in the intervention group as compared with the control group.

  11. XML-based formulation of field theoretical models. A proposal for a future standard and data base for model storage, exchange and cross-checking of results

    International Nuclear Information System (INIS)

    Demichev, A.; Kryukov, A.; Rodionov, A.

    2002-01-01

    We propose an XML-based standard for the formulation of field theoretical models. The goal of creating such a standard is to provide a way for unambiguous exchange and cross-checking of the results of computer calculations in high energy physics. At the moment, the suggested standard implies that the models under consideration are of the SM or MSSM type (i.e., they are just the SM or MSSM, their submodels, smooth modifications or straightforward generalizations). (author)

  12. Comparison of rate theory based modeling calculations with the surveillance test results of Korean light water reactors

    International Nuclear Information System (INIS)

    Lee, Gyeong Geun; Lee, Yong Bok; Kim, Min Chul; Kwon, Junh Yun

    2012-01-01

    Neutron irradiation of reactor pressure vessel (RPV) steels causes a decrease in fracture toughness and an increase in yield strength while in service. It is generally accepted that the growth of point defect clusters (PDCs) and copper-rich precipitates (CRPs) affects the radiation hardening of RPV steels. A number of models have been proposed to account for the embrittlement of RPV steels. Rate-theory-based modeling mathematically describes the evolution of radiation-induced microstructures of ferritic steels under neutron irradiation. In this work, we compared rate-theory-based modeling calculations with the surveillance test results of Korean Light Water Reactors (LWRs).

  13. Effects of naloxone distribution to likely bystanders: Results of an agent-based model.

    Science.gov (United States)

    Keane, Christopher; Egan, James E; Hawk, Mary

    2018-05-01

    Opioid overdose deaths in the US rose dramatically in the past 16 years, creating an urgent national health crisis with no signs of immediate relief. In 2017, the President of the US officially declared the opioid epidemic to be a national emergency and called for additional resources to respond to the crisis. Distributing naloxone to community laypersons and people at high risk for opioid overdose can prevent overdose death, but optimal distribution methods have not yet been pinpointed. We conducted a sequential exploratory mixed methods design using qualitative data to inform an agent-based model to improve understanding of effective community-based naloxone distribution to laypersons to reverse opioid overdose. The individuals in the model were endowed with cognitive and behavioral variables and accessed naloxone via community sites such as pharmacies, hospitals, and urgent-care centers. We compared overdose deaths over a simulated 6-month period while varying the number of distribution sites (0, 1, and 10) and number of kits given to individuals per visit (1 versus 10). Specifically, we ran thirty simulations for each of thirteen distribution models and report average overdose deaths for each. The baseline comparator was no naloxone distribution. Our simulations explored the effects of distribution through syringe exchange sites with and without secondary distribution, which refers to distribution of naloxone kits by laypersons within their social networks and enables ten additional laypersons to administer naloxone to reverse opioid overdose. Our baseline model with no naloxone distribution predicted there would be 167.9 deaths in a six month period. A single distribution site, even with 10 kits picked up per visit, decreased overdose deaths by only 8.3% relative to baseline. However, adding secondary distribution through social networks to a single site resulted in 42.5% fewer overdose deaths relative to baseline. That is slightly higher than the 39

  14. Encouraging Sustainable Transport Choices in American Households: Results from an Empirically Grounded Agent-Based Model

    Directory of Open Access Journals (Sweden)

    Davide Natalini

    2013-12-01

    Full Text Available The transport sector needs to go through an extended process of decarbonisation to counter the threat of climate change. Unfortunately, the International Energy Agency forecasts an enormous growth in the number of cars and greenhouse gas emissions by 2050. Two issues can thus be identified: (1) the need for a new methodology that could evaluate policy performance ex-ante, and (2) the need for more effective policies. To help address these issues, we developed an Agent-Based Model called Mobility USA aimed at: (1) testing whether this could be an effective approach in analysing ex-ante policy implementation in the transport sector; and (2) evaluating the effects of alternative policy scenarios on commuting behaviours in the USA. In particular, we tested the effects of two sets of policies, namely market-based and preference-change ones. The model results suggest that this type of agent-based approach will provide a useful tool for testing policy interventions and their effectiveness.

  15. Deep Learning Based Solar Flare Forecasting Model. I. Results for Line-of-sight Magnetograms

    Science.gov (United States)

    Huang, Xin; Wang, Huaning; Xu, Long; Liu, Jinfu; Li, Rong; Dai, Xinghua

    2018-03-01

    Solar flares originate from the release of the energy stored in the magnetic field of solar active regions; the triggering mechanism for these flares, however, remains unknown. For this reason, conventional solar flare forecasting is essentially based on the statistical relationship between solar flares and measures extracted from observational data. In the current work, a deep learning method is applied to set up the solar flare forecasting model, in which forecasting patterns can be learned from line-of-sight magnetograms of solar active regions. In order to obtain a large amount of observational data to train the forecasting model and test its performance, a data set is created from line-of-sight magnetograms of active regions observed by SOHO/MDI and SDO/HMI from 1996 April to 2015 October and the corresponding soft X-ray solar flares observed by GOES. The testing results of the forecasting model indicate that (1) the forecasting patterns can be automatically reached with the MDI data and they can also be applied to the HMI data; furthermore, these forecasting patterns are robust to the noise in the observational data; (2) the performance of the deep learning forecasting model is not sensitive to the given forecasting periods (6, 12, 24, or 48 hr); (3) the performance of the proposed forecasting model is comparable to that of the state-of-the-art flare forecasting models, even though the duration of the total magnetograms continuously spans 19.5 years. Case analyses demonstrate that the deep learning based solar flare forecasting model pays attention to areas with the magnetic polarity-inversion line or strong magnetic field in the magnetograms of active regions.
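
    As a hedged illustration of a deep-learning flare classifier of this general kind (the paper's actual network architecture, input resolution, and training setup are not reproduced here), a minimal PyTorch sketch could look like the following.

```python
# Illustrative CNN classifier for line-of-sight magnetogram patches
# (flare / no-flare within a forecast window). Layer sizes and the 128x128
# input resolution are placeholders, not the paper's architecture.
import torch
import torch.nn as nn

class MagnetogramCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 4 * 4, 64), nn.ReLU(),
            nn.Linear(64, 2),   # two classes: flare within the window, or not
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = MagnetogramCNN()
dummy_batch = torch.randn(8, 1, 128, 128)   # 8 single-channel magnetogram patches
logits = model(dummy_batch)
print(logits.shape)                          # torch.Size([8, 2])
```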

  16. Financial analysis and forecasting of the results of small businesses performance based on regression model

    Directory of Open Access Journals (Sweden)

    Svetlana O. Musienko

    2017-03-01

    Full Text Available Objective: to develop an economic-mathematical model of the dependence of revenue on other balance sheet items, taking into account the sectoral affiliation of the companies. Methods: using comparative analysis, the article studies the existing approaches to the construction of company management models. Applying regression analysis and the least squares method, which is widely used for financial management of enterprises in Russia and abroad, the author builds a model of the dependence of revenue on other balance sheet items, taking into account the sectoral affiliation of the companies, which can be used in the financial analysis and prediction of small enterprises' performance. Results: the article states the need to identify factors affecting financial management efficiency. The author analyzed scientific research and revealed the lack of comprehensive studies on the methodology for assessing the management of small enterprises, while the methods used for large companies are not always suitable for the task. The systematized approaches of various authors to the formation of regression models describe the influence of certain factors on company activity. It is revealed that the resulting indicators in the studies were revenue, profit, or the company's relative profitability. The main drawback of most models is the mathematical, not economic, approach to the definition of the dependent and independent variables. Based on the analysis, it was determined that the most correct is the model of dependence between revenue and total assets of the company using the decimal logarithm. The model was built using data on the activities of 507 small businesses operating in three spheres of economic activity. Using the presented model, it was proved that there is a direct dependence between the sales proceeds and the main items of the asset balance, as well as differences in the degree of this effect depending on the economic activity of small enterprises.
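
    The model form singled out in the abstract, a linear relation between the decimal logarithms of revenue and total assets, can be fitted with ordinary least squares as in the sketch below; the data are synthetic and the coefficients are not those of the study.

```python
# Minimal sketch of the reported model form: log10(revenue) regressed on
# log10(total assets). Synthetic data only; coefficients are not the study's.
import numpy as np

rng = np.random.default_rng(1)
total_assets = rng.uniform(1e5, 5e7, 200)                    # balance-sheet totals
revenue = 10 ** (0.9 * np.log10(total_assets) + 0.3
                 + rng.normal(0, 0.1, total_assets.size))    # synthetic revenues

# Fit log10(revenue) = a * log10(assets) + b by ordinary least squares
a, b = np.polyfit(np.log10(total_assets), np.log10(revenue), deg=1)
print(f"log10(revenue) ~ {a:.2f} * log10(total_assets) + {b:.2f}")
```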

  17. On the use of Empirical Data to Downscale Non-scientific Scepticism About Results From Complex Physical Based Models

    Science.gov (United States)

    Germer, S.; Bens, O.; Hüttl, R. F.

    2008-12-01

    The scepticism of non-scientific local stakeholders about results from complex physically based models is a major problem for the development and implementation of local climate change adaptation measures. This scepticism originates from the high complexity of such models. Local stakeholders perceive complex models as black-box models, as it is impossible to grasp all underlying assumptions and mathematically formulated processes at a glance. The use of physically based models is, however, indispensable to study complex underlying processes and to predict future environmental changes. The increase of climate change adaptation efforts following the release of the latest IPCC report indicates that the communication of facts about what has already changed is an appropriate tool to trigger climate change adaptation. Therefore we suggest increasing the practice of empirical data analysis in addition to modelling efforts. The analysis of time series can generate results that are easier to comprehend for non-scientific stakeholders. Temporal trends and seasonal patterns of selected hydrological parameters (precipitation, evapotranspiration, groundwater levels and river discharge) can be identified, and the dependence of trends and seasonal patterns on land use, topography and soil type can be highlighted. A discussion of lag times between the hydrological parameters can increase the awareness of local stakeholders for delayed environmental responses.

  18. An Outcrop-based Detailed Geological Model to Test Automated Interpretation of Seismic Inversion Results

    NARCIS (Netherlands)

    Feng, R.; Sharma, S.; Luthi, S.M.; Gisolf, A.

    2015-01-01

    Previously, Tetyukhina et al. (2014) developed a geological and petrophysical model based on the Book Cliffs outcrops that contained eight lithotypes. For reservoir modelling purposes, this model is judged to be too coarse because in the same lithotype it contains reservoir and non-reservoir

  19. Event-Based Conceptual Modeling

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    The paper demonstrates that a wide variety of event-based modeling approaches are based on special cases of the same general event concept, and that the general event concept can be used to unify the otherwise unrelated fields of information modeling and process modeling. A set of event-based modeling approaches are analyzed and the results are used to formulate a general event concept that can be used for unifying the seemingly unrelated event concepts. Events are characterized as short-duration processes that have participants, consequences, and properties, and that may be modeled in terms of information structures. The general event concept can be used to guide systems analysis and design and to improve modeling approaches.

  20. Issues in practical model-based diagnosis

    NARCIS (Netherlands)

    Bakker, R.R.; Bakker, R.R.; van den Bempt, P.C.A.; van den Bempt, P.C.A.; Mars, Nicolaas; Out, D.-J.; Out, D.J.; van Soest, D.C.; van Soes, D.C.

    1993-01-01

    The model-based diagnosis project at the University of Twente has been directed at improving the practical usefulness of model-based diagnosis. In cooperation with industrial partners, the research addressed the modeling problem and the efficiency problem in model-based reasoning. Main results of

  1. Modelling of plasma-based dry reforming: how do uncertainties in the input data affect the calculation results?

    Science.gov (United States)

    Wang, Weizong; Berthelot, Antonin; Zhang, Quanzhi; Bogaerts, Annemie

    2018-05-01

    One of the main issues in plasma chemistry modeling is that the cross sections and rate coefficients are subject to uncertainties, which yields uncertainties in the modeling results and hence hinders the predictive capabilities. In this paper, we reveal the impact of these uncertainties on the model predictions of plasma-based dry reforming in a dielectric barrier discharge. For this purpose, we performed a detailed uncertainty analysis and sensitivity study. 2000 different combinations of rate coefficients, based on the uncertainty from a log-normal distribution, are used to predict the uncertainties in the model output. The uncertainties in the electron density and electron temperature are around 11% and 8% at the maximum of the power deposition for a 70% confidence level. Still, this can have a major effect on the electron impact rates and hence on the calculated conversions of CO2 and CH4, as well as on the selectivities of CO and H2. For the CO2 and CH4 conversion, we obtain uncertainties of 24% and 33%, respectively. For the CO and H2 selectivity, the corresponding uncertainties are 28% and 14%, respectively. We also identify which reactions contribute most to the uncertainty in the model predictions. In order to improve the accuracy and reliability of plasma chemistry models, we recommend using only verified rate coefficients, and we point out the need for dedicated verification experiments.
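
    A minimal sketch of the propagation idea, drawing rate-coefficient sets from log-normal distributions around nominal values and recording the spread of a derived output, is shown below; the toy "model" and all numbers are placeholders for the full plasma-chemistry code.

```python
# Sketch of the uncertainty-propagation idea: sample rate coefficients from
# log-normal distributions around nominal values and examine the output spread.
# The toy function stands in for the full DBD plasma-chemistry model.
import numpy as np

rng = np.random.default_rng(42)
nominal_k = np.array([1.0e-16, 3.0e-17, 5.0e-18])   # placeholder rate coefficients
uncertainty_factor = 2.0                             # assumed multiplicative uncertainty
sigma = np.log(uncertainty_factor)

def toy_conversion(k):
    """Stand-in for the chemistry model: some nonlinear function of the rates."""
    return 100.0 * k[0] / (k[0] + k[1] + 10.0 * k[2])

samples = []
for _ in range(2000):                                # 2000 combinations, as in the paper
    k = nominal_k * rng.lognormal(mean=0.0, sigma=sigma, size=nominal_k.size)
    samples.append(toy_conversion(k))

samples = np.array(samples)
print(f"conversion = {samples.mean():.1f}% +/- {samples.std():.1f}% (1 sigma)")
```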

  2. Does folic acid supplementation prevent or promote colorectal cancer? Results from model-based predictions.

    Science.gov (United States)

    Luebeck, E Georg; Moolgavkar, Suresh H; Liu, Amy Y; Boynton, Alanna; Ulrich, Cornelia M

    2008-06-01

    Folate is essential for nucleotide synthesis, DNA replication, and methyl group supply. Low folate status has been associated with increased risks of several cancer types, suggesting a chemopreventive role of folate. However, recent findings on giving folic acid to patients with a history of colorectal polyps raise concerns about the efficacy and safety of folate supplementation and the long-term health effects of folate fortification. Results suggest that undetected precursor lesions may progress under folic acid supplementation, consistent with the role of folate in nucleotide synthesis and cell proliferation. To better understand the possible trade-offs between the protective effects of folic acid due to decreased mutation rates and the possibly concomitant detrimental effects due to increased cell proliferation, we used a biologically based mathematical model of colorectal carcinogenesis. We predict changes in cancer risk based on the timing of treatment start and the potential effect of folic acid on cell proliferation and mutation rates. Changes in colorectal cancer risk in response to folic acid supplementation are likely a complex function of treatment start, duration, and effect on cell proliferation and mutation rates. Predicted colorectal cancer incidence rates under supplementation are mostly higher than rates without folic acid supplementation unless supplementation is initiated early in life (before age 20 years). To the extent to which this model predicts reality, it indicates that the effect on cancer risk when starting folic acid supplementation late in life is small, yet mostly detrimental. Experimental studies are needed to provide direct evidence for this dual role of folate in colorectal cancer and to validate and improve the model predictions.

  3. Modelling an exploited marine fish community with 15 parameters - results from a simple size-based model

    NARCIS (Netherlands)

    Pope, J.G.; Rice, J.C.; Daan, N.; Jennings, S.; Gislason, H.

    2006-01-01

    To measure and predict the response of fish communities to exploitation, it is necessary to understand how the direct and indirect effects of fishing interact. Because fishing and predation are size-selective processes, the potential response can be explored with size-based models. We use a

  4. V and V Efforts of Auroral Precipitation Models: Preliminary Results

    Science.gov (United States)

    Zheng, Yihua; Kuznetsova, Masha; Rastaetter, Lutz; Hesse, Michael

    2011-01-01

    Auroral precipitation models have been valuable both in terms of space weather applications and space science research. Yet very limited testing has been performed regarding model performance. A variety of auroral models are available, including empirical models that are parameterized by geomagnetic indices or upstream solar wind conditions, nowcasting models that are based on satellite observations, and those derived from physics-based, coupled global models. In this presentation, we will show our preliminary results regarding V&V efforts for some of the models.

  5. Percentile-Based ETCCDI Temperature Extremes Indices for CMIP5 Model Output: New Results through Semiparametric Quantile Regression Approach

    Science.gov (United States)

    Li, L.; Yang, C.

    2017-12-01

    Climate extremes often manifest as rare events in terms of surface air temperature and precipitation with an annual reoccurrence period. In order to represent the manifold characteristics of climate extremes for monitoring and analysis, the Expert Team on Climate Change Detection and Indices (ETCCDI) has worked out a set of 27 core indices based on daily temperature and precipitation data, describing extreme weather and climate events on an annual basis. The CLIMDEX project (http://www.climdex.org) has produced public domain datasets of such indices for data from a variety of sources, including output from global climate models (GCMs) participating in the Coupled Model Intercomparison Project Phase 5 (CMIP5). Among the 27 ETCCDI indices, there are six percentile-based temperature extremes indices that fall into two groups: exceedance rates (ER) (TN10p, TN90p, TX10p and TX90p) and durations (CSDI and WSDI). Percentiles must be estimated prior to the calculation of the indices, and could be more or less biased by the adopted algorithm. Such biases will in turn be propagated to the final results of the indices. CLIMDEX used an empirical quantile estimator combined with a bootstrap resampling procedure to reduce the inhomogeneity in the annual series of the ER indices. However, some problems remain in the CLIMDEX datasets, namely the overestimated climate variability due to unaccounted autocorrelation in the daily temperature data, seasonally varying biases, and inconsistency between the algorithms applied to the ER indices and to the duration indices. We now present new results for the six indices obtained through a semiparametric quantile regression approach for the CMIP5 model output. By using the base-period data as a whole and taking seasonality and autocorrelation into account, this approach successfully addresses the aforementioned issues and produces consistent results. The new datasets cover the historical and three projected (RCP2.6, RCP4.5 and RCP
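
    As a simplified illustration of a percentile-based exceedance-rate index such as TX90p (the operational ETCCDI definition uses calendar-day percentiles within a 5-day window and, in CLIMDEX, a bootstrap for the base period; both are omitted here), one can compute:

```python
# Simplified TX90p: percentage of days with daily maximum temperature above
# the 90th percentile of a base period. Single fixed percentile, no calendar-day
# window, no bootstrap; synthetic data only.
import numpy as np

rng = np.random.default_rng(7)
base_period = rng.normal(25.0, 5.0, size=30 * 365)   # 30 years of daily Tmax (degC)
target_year = rng.normal(26.0, 5.0, size=365)        # one year to evaluate

threshold = np.percentile(base_period, 90)            # base-period 90th percentile
tx90p = 100.0 * np.mean(target_year > threshold)      # % of days above the threshold
print(f"TX90p = {tx90p:.1f}% of days")
```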

  6. Functional results-oriented healthcare leadership: a novel leadership model.

    Science.gov (United States)

    Al-Touby, Salem Said

    2012-03-01

    This article modifies the traditional functional leadership model to accommodate contemporary needs in healthcare leadership based on two findings. First, the article argues that it is important that the ideal healthcare leadership emphasizes the outcomes of patient care more than the processes and structures used to deliver such care; and secondly, that the leadership must strive to attain effectiveness of their care provision and not merely target the attractive option of efficient operations. Based on these premises, the paper reviews the traditional Functional Leadership Model and the three elements that define the type of leadership an organization has, namely the tasks, the individuals, and the team. The article argues that concentrating on any one of these elements is not ideal and proposes adding a new element to the model to construct a novel Functional Results-Oriented healthcare leadership model. The recommended Functional Results-Oriented leadership model embosses the results element on top of the other three elements so that every effort in healthcare leadership is directed towards attaining excellent patient outcomes.

  7. Validation of model-based brain shift correction in neurosurgery via intraoperative magnetic resonance imaging: preliminary results

    Science.gov (United States)

    Luo, Ma; Frisken, Sarah F.; Weis, Jared A.; Clements, Logan W.; Unadkat, Prashin; Thompson, Reid C.; Golby, Alexandra J.; Miga, Michael I.

    2017-03-01

    The quality of brain tumor resection surgery is dependent on the spatial agreement between the preoperative image and intraoperative anatomy. However, brain shift compromises the aforementioned alignment. Currently, the clinical standard to monitor brain shift is intraoperative magnetic resonance (iMR). While iMR provides a better understanding of brain shift, its cost and encumbrance are a consideration for medical centers. Hence, we are developing a model-based method that can be a complementary technology to address brain shift in standard resections, with resource-intensive cases as referrals for iMR facilities. Our strategy constructs a deformation 'atlas' containing potential deformation solutions derived from a biomechanical model that account for variables such as cerebrospinal fluid drainage and mannitol effects. Volumetric deformation is estimated with an inverse approach that determines the optimal combinatory 'atlas' solution fit to best match measured surface deformation. Accordingly, the preoperative image is updated based on the computed deformation field. This study is the latest development to validate our methodology with iMR. Briefly, preoperative and intraoperative MR images of 2 patients were acquired. Homologous surface points were selected on preoperative and intraoperative scans as measurements of surface deformation and used to drive the inverse problem. To assess the model accuracy, the subsurface shift of targets between preoperative and intraoperative states was measured and compared to the model prediction. Considering subsurface shift above 3 mm, the proposed strategy provides an average shift correction of 59% across the 2 cases. While further improvements in both the model and the ability to validate with iMR are desired, the reported results are encouraging.

  8. Vulnerability of hydropower generation to climate change in China: Results based on Grey forecasting model

    International Nuclear Information System (INIS)

    Wang, Bing; Liang, Xiao-Jie; Zhang, Hao; Wang, Lu; Wei, Yi-Ming

    2014-01-01

    This paper analyzes the long-term relationships between hydropower generation, climate factors (precipitation), and hydropower generation capacity (installed capacity of hydropower stations) to quantify the vulnerability of renewable energy production in China for the case of hydropower generation. Furthermore, this study applies a Grey forecasting model to forecast precipitation in different provinces, and then sets up different scenarios for precipitation based on the IPCC Special Report on Emissions Scenarios and results from the PRECIS (Providing Regional Climates for Impacts Studies) model. The most important result of this research is the increasing vulnerability of hydropower in the poorest regions and in the main hydropower-generating provinces of China to climate change. Other empirical results reveal that the impacts of climate change on the supply of hydropower generation in China will be noteworthy for society. Different scenarios have different effects on hydropower generation, of which the A2 scenario (pessimistic, high emissions) has the largest. Meanwhile, the impacts of climate change on the hydropower generation of each province are distinctly different: the Southwest has a vulnerability higher than the average level, while the central part is lower. - Highlights: • The hydropower vulnerability will be enlarged with the rapid increase of hydropower capacity. • Modeling the vulnerability of hydropower in different scenarios and different provinces. • The increasing hydropower vulnerability of the poorest regions to climate change. • The increasing hydropower vulnerability of the main hydropower generation provinces. • The rainfall pattern change caused by climate change would be the reason for the increasing vulnerability
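
    The abstract names only a "Grey forecasting model"; the most common variant is GM(1,1), so the sketch below assumes that form. The precipitation values are illustrative, not data from the study.

```python
# Assumed GM(1,1) grey forecasting scheme (the paper says only "Grey forecasting
# model"). Input values are illustrative annual precipitation totals (mm/year).
import numpy as np

def gm11_forecast(x0, steps):
    """Fit GM(1,1) to series x0 and forecast `steps` values ahead."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                  # accumulated (AGO) series
    z1 = 0.5 * (x1[1:] + x1[:-1])                       # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]         # developing and grey-input coefficients

    def x1_hat(k):                                      # k = 0, 1, 2, ...
        return (x0[0] - b / a) * np.exp(-a * k) + b / a

    n = x0.size
    x1_pred = x1_hat(np.arange(n + steps))
    x0_pred = np.concatenate([[x1_pred[0]], np.diff(x1_pred)])   # de-accumulate
    return x0_pred[n:]

precip = [820, 845, 810, 905, 870, 930]   # illustrative values only
print(gm11_forecast(precip, steps=3))
```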

  9. Model-Based Learning Environment Based on The Concept IPS School-Based Management

    Directory of Open Access Journals (Sweden)

    Hamid Darmadi

    2017-03-01

    Full Text Available The results showed: (1) the environment-based IPS learning model can foster love of local cultural values as a basis for the development of national culture; (2) community participation and the role of government in implementing the environment-based IPS learning model have a positive impact on improving the management of school resources; (3) the environment-based IPS learning model is effective in creating a way of living together peacefully, increasing the intensity of togetherness and mutual respect; (4) the environment-based IPS learning model can improve student learning outcomes; (5) there are differences in expressed attitudes and learning results between students located in the conflict area and students outside the conflict area; (6) analysis of the attitude scales shows that high school students appreciate the values of unity and nationhood, respect for diversity, and peaceful coexistence. It is recommended that the Department of Education, as the institution responsible for fostering and developing social and cultural values in the province, apply the environment-based IPS learning model.

  10. Comparison of results from dispersion models for regulatory purposes based on Gaussian-and Lagrangian-algorithms: an evaluating literature study

    International Nuclear Information System (INIS)

    Walter, H.

    2004-01-01

    Powerful tools to describe atmospheric transport processes for radiation protection can be provided by meteorology; these are atmospheric flow and dispersion models. Concerning dispersion models, Gaussian plume models have been used for a long time to describe atmospheric dispersion processes. Advantages of Gaussian plume models are short computation time, good validation and broad acceptance worldwide. However, some limitations and their implications for the interpretation of model results have to be taken into account, as the mathematical derivation of an analytic solution of the equations of motion leads to severe constraints. In order to minimise these constraints, various dispersion models for scientific and regulatory purposes have been developed and applied. Among these, the Lagrangian particle models are of special interest, because these models are able to simulate atmospheric transport processes close to reality, e.g. the influence of orography, topography, wind shear and other meteorological phenomena. Within this study, the characteristics and computational results of Gaussian dispersion models as well as of Lagrangian models have been compared and evaluated on the basis of numerous papers and reports published in the literature. Special emphasis has been laid on the intention that dispersion models should comply with EU requirements (Richtlinie 96/29/Euratom, 1996) on a more realistic assessment of the radiation exposure of the population. (orig.)
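
    For reference, the Gaussian plume models discussed here evaluate concentrations downwind of a continuous point source with the standard ground-reflection expression below.

```latex
% Standard Gaussian plume concentration for a continuous point source with
% ground reflection. Q: source strength, u: wind speed, H: effective release
% height, sigma_y and sigma_z: crosswind and vertical dispersion parameters.
\[
C(x,y,z) = \frac{Q}{2\pi u\,\sigma_y(x)\,\sigma_z(x)}
\exp\!\left(-\frac{y^{2}}{2\sigma_y^{2}(x)}\right)
\left[\exp\!\left(-\frac{(z-H)^{2}}{2\sigma_z^{2}(x)}\right)
+\exp\!\left(-\frac{(z+H)^{2}}{2\sigma_z^{2}(x)}\right)\right]
\]
```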

  11. Three-dimensional model of plate geometry and velocity model for Nankai Trough seismogenic zone based on results from structural studies

    Science.gov (United States)

    Nakanishi, A.; Shimomura, N.; Kodaira, S.; Obana, K.; Takahashi, T.; Yamamoto, Y.; Yamashita, M.; Takahashi, N.; Kaneda, Y.

    2012-12-01

    In the Nankai Trough subduction seismogenic zone, the Nankai and Tonankai earthquakes have often occurred simultaneously, causing great events. In order to reduce the damage to coastal areas from both strong ground motion and tsunami generation, it is necessary to understand the rupture synchronization and segmentation of the Nankai megathrust earthquake. For a precise estimate of the rupture zone of the Nankai megathrust event based on knowledge of the realistic earthquake cycle and variation of magnitude, it is important to know the geometry and properties of the plate boundary of the subduction seismogenic zone. To improve the physical model of the Nankai Trough seismogenic zone, large-scale high-resolution wide-angle and reflection (MCS) seismic studies and long-term observation have been conducted since 2008. Marine active-source seismic data have been acquired along gridded two-dimensional profiles with a total length of ~800 km every year. A three-dimensional seismic tomography using active and passive seismic data observed at both land and ocean-bottom stations has also been performed. From those data, we found several strong lateral variations of the subducting Philippine Sea plate and the overriding plate corresponding to the margins of the coseismic rupture zones of historical large events that occurred along the Nankai Trough. In particular, a possible prominent reflector for the forearc Moho has recently been imaged on the offshore side of the Kii channel at a depth of ~18 km, which is shallower than in other areas along the Nankai Trough. Such a drastic variation of the overriding plate might be related to the existence of the segmentation of the Nankai megathrust earthquake. Based on our results derived from seismic studies, we have tried to make a geometrical model of the Philippine Sea plate and a three-dimensional velocity structure model of the Nankai Trough seismogenic zone. In this presentation, we will summarize the major results of our seismic studies, and

  12. Modeling Dynamic Systems with Efficient Ensembles of Process-Based Models.

    Directory of Open Access Journals (Sweden)

    Nikola Simidjievski

    Full Text Available Ensembles are a well-established machine learning paradigm, leading to accurate and robust models, predominantly applied to predictive modeling tasks. Ensemble models comprise a finite set of diverse predictive models whose combined output is expected to yield an improved predictive performance as compared to an individual model. In this paper, we propose a new method for learning ensembles of process-based models of dynamic systems. The process-based modeling paradigm employs domain-specific knowledge to automatically learn models of dynamic systems from time-series observational data. Previous work has shown that ensembles based on sampling observational data (i.e., bagging and boosting) significantly improve predictive performance of process-based models. However, this improvement comes at the cost of a substantial increase of the computational time needed for learning. To address this problem, the paper proposes a method that aims at efficiently learning ensembles of process-based models, while maintaining their accurate long-term predictive performance. This is achieved by constructing ensembles by sampling domain-specific knowledge instead of sampling data. We apply the proposed method to and evaluate its performance on a set of problems of automated predictive modeling in three lake ecosystems using a library of process-based knowledge for modeling population dynamics. The experimental results identify the optimal design decisions regarding the learning algorithm. The results also show that the proposed ensembles yield significantly more accurate predictions of population dynamics as compared to individual process-based models. Finally, while their predictive performance is comparable to that of ensembles obtained with the state-of-the-art methods of bagging and boosting, they are substantially more efficient.

  13. Modelling Extortion Racket Systems: Preliminary Results

    Science.gov (United States)

    Nardin, Luis G.; Andrighetto, Giulia; Székely, Áron; Conte, Rosaria

    Mafias are highly powerful and deeply entrenched organised criminal groups that cause both economic and social damage. Overcoming, or at least limiting, their harmful effects is a societally beneficial objective, which makes understanding their dynamics an objective of both scientific and political interest. We propose an agent-based simulation model aimed at understanding how independent and combined effects of legal and social norm-based processes help to counter mafias. Our results show that legal processes are effective in directly countering mafias by reducing their activities and changing the behaviour of the rest of the population, yet they are not able to change people's mind-set, which renders the change fragile. When combined with social norm-based processes, however, people's mind-set shifts towards a culture of legality, rendering the observed behaviour resilient to change.

  14. Study on driver model for hybrid truck based on driving simulator experimental results

    Directory of Open Access Journals (Sweden)

    Dam Hoang Phuc

    2018-04-01

    Full Text Available In this paper, a proposed car-following driver model that takes into account features of both the compensatory and the anticipatory models of human pedal operation has been verified by driving simulator experiments with several real drivers. The comparison of computer simulations performed with the identified model parameters against the experimental results confirms the correctness of this mathematical driver model and the identified parameters. The driver model is then joined to a hybrid vehicle dynamics model, and moderate car-following maneuver simulations with various driver parameters are conducted to investigate the influence of driver parameters on vehicle dynamic response and fuel economy. Finally, the major driver parameters involved in the longitudinal control of drivers are clarified. Keywords: Driver model, Driver-vehicle closed-loop system, Car following, Driving simulator, Hybrid electric vehicle

  15. Incident Duration Modeling Using Flexible Parametric Hazard-Based Models

    Directory of Open Access Journals (Sweden)

    Ruimin Li

    2014-01-01

    Full Text Available Assessing and prioritizing the duration and effects of traffic incidents on major roads present significant challenges for road network managers. This study examines the effect of numerous factors associated with various types of incidents on their duration and proposes an incident duration prediction model. Several parametric accelerated failure time hazard-based models were examined, including Weibull, log-logistic, log-normal, and generalized gamma, as well as all models with gamma heterogeneity and flexible parametric hazard-based models with degrees of freedom ranging from one to ten, by analyzing a traffic incident dataset obtained from the Incident Reporting and Dispatching System in Beijing in 2008. Results show that different factors significantly affect different incident time phases, and the best-fitting distributions differed across phases. Given the best hazard-based model for each incident time phase, the prediction results are reasonable for most incidents. The results of this study can aid traffic incident management agencies not only in implementing strategies that would reduce incident duration, and thus reduce congestion, secondary incidents, and the associated human and economic losses, but also in effectively predicting incident duration.
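
    As a hedged, covariate-free illustration of comparing candidate duration distributions (the study fits full accelerated failure time models with explanatory factors, which this sketch omits), the following fits Weibull, log-logistic, and log-normal distributions to synthetic durations and compares them by AIC.

```python
# Illustrative comparison of candidate incident-duration distributions by AIC.
# Synthetic durations stand in for the Beijing incident dataset; no covariates.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
durations = rng.weibull(1.5, size=500) * 45.0      # synthetic durations (minutes)

candidates = {
    "weibull": stats.weibull_min,
    "log-logistic": stats.fisk,                    # scipy's name for the log-logistic
    "log-normal": stats.lognorm,
}

for name, dist in candidates.items():
    params = dist.fit(durations, floc=0)           # MLE fit with location fixed at 0
    loglik = np.sum(dist.logpdf(durations, *params))
    aic = 2 * (len(params) - 1) - 2 * loglik       # loc is fixed, so one fewer free parameter
    print(f"{name:>12s}: AIC = {aic:.1f}")
```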

  16.  Functional Results-Oriented Healthcare Leadership: A Novel Leadership Model

    Directory of Open Access Journals (Sweden)

    Salem Said Al-Touby

    2012-03-01

    Full Text Available This article modifies the traditional functional leadership model to accommodate contemporary needs in healthcare leadership based on two findings. First, the article argues that it is important that the ideal healthcare leadership emphasizes the outcomes of patient care more than the processes and structures used to deliver such care; and secondly, that the leadership must strive to attain effectiveness of their care provision and not merely target the attractive option of efficient operations. Based on these premises, the paper reviews the traditional Functional Leadership Model and the three elements that define the type of leadership an organization has, namely the tasks, the individuals, and the team. The article argues that concentrating on any one of these elements is not ideal and proposes adding a new element to the model to construct a novel Functional Results-Oriented healthcare leadership model. The recommended Functional Results-Oriented leadership model embosses the results element on top of the other three elements so that every effort in healthcare leadership is directed towards attaining excellent patient outcomes.

  17. Convex-based void filling method for CAD-based Monte Carlo geometry modeling

    International Nuclear Information System (INIS)

    Yu, Shengpeng; Cheng, Mengyun; Song, Jing; Long, Pengcheng; Hu, Liqin

    2015-01-01

    Highlights: • We present a new void filling method named CVF for CAD-based MC geometry modeling. • We describe convex-based void description and quality-based space subdivision. • The results show the improvements provided by CVF for both modeling and MC calculation efficiency. - Abstract: CAD-based automatic geometry modeling tools have been widely applied to generate Monte Carlo (MC) calculation geometry for complex systems according to CAD models. Automatic void filling is one of the main functions in CAD-based MC geometry modeling tools, because the void space between parts in CAD models is traditionally not modeled, while MC codes such as MCNP need all of the problem space to be described. A dedicated void filling method, named Convex-based Void Filling (CVF), is proposed in this study for efficient void filling and concise void descriptions. The method subdivides all the problem space into disjoint regions using Quality-based Subdivision (QS) and describes the void space in each region with complementary descriptions of the convex volumes intersecting that region. It has been implemented in SuperMC/MCAM, the Multiple-Physics Coupling Analysis Modeling Program, and tested on the International Thermonuclear Experimental Reactor (ITER) Alite model. The results showed that the new method reduced both automatic modeling time and MC calculation time

  18. Storm-time ring current: model-dependent results

    Directory of Open Access Journals (Sweden)

    N. Yu. Ganushkina

    2012-01-01

    Full Text Available The main point of the paper is to investigate how much the modeled ring current depends on the representations of the magnetic and electric fields and the boundary conditions used in simulations. Two storm events, one moderate (SymH minimum of −120 nT, on 6–7 November 1997) and one intense (SymH minimum of −230 nT, on 21–22 October 1999), are modeled. A rather simple ring current model is employed, namely, the Inner Magnetosphere Particle Transport and Acceleration model (IMPTAM), in order to make the results most evident. Four different magnetic field and two electric field representations and four boundary conditions are used. We find that different combinations of the magnetic and electric field configurations and boundary conditions result in very different modeled ring currents, and, therefore, the physical conclusions based on simulation results can differ significantly. A time-dependent boundary outside of 6.6 RE makes it possible to take into account the particles in the transition region (between dipole and stretched field lines) forming the partial ring current and the near-Earth tail current in that region. Calculating the model SymH* by Biot-Savart's law instead of the widely used Dessler-Parker-Sckopke (DPS) relation gives larger and more realistic values, since the currents are calculated in regions with a nondipolar magnetic field. Therefore, the boundary location and the method of SymH* calculation are of key importance for ring current data-model comparisons to be correctly interpreted.
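
    For reference, the Dessler-Parker-Sckopke relation mentioned here connects the equatorial magnetic depression to the total kinetic energy of the ring-current particles in its standard form below; the paper instead computes SymH* by direct Biot-Savart integration of the model currents.

```latex
% Dessler--Parker--Sckopke relation: equatorial field depression vs. total
% ring-current particle kinetic energy E_RC; E_m is the energy of the dipole
% field above the Earth's surface.
\[
\frac{\Delta B(0)}{B_{0}} = -\frac{2\,E_{RC}}{3\,E_{m}},
\qquad
E_{m} = \frac{4\pi B_{0}^{2} R_{E}^{3}}{3\mu_{0}} \approx 8\times 10^{17}\ \mathrm{J}
\]
```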

  19. Experimental and modelling results of a parallel-plate based active magnetic regenerator

    DEFF Research Database (Denmark)

    Tura, A.; Nielsen, Kaspar Kirstein; Rowe, A.

    2012-01-01

    The performance of a permanent magnet magnetic refrigerator (PMMR) using gadolinium parallel plates is described. The configuration and operating parameters are described in detail. Experimental results are compared to simulations using an established two-dimensional model of an active magnetic regenerator.

  20. Model-Based Knowing: How Do Students Ground Their Understanding About Climate Systems in Agent-Based Computer Models?

    Science.gov (United States)

    Markauskaite, Lina; Kelly, Nick; Jacobson, Michael J.

    2017-12-01

    This paper gives a grounded cognition account of model-based learning of complex scientific knowledge related to socio-scientific issues, such as climate change. It draws on the results from a study of high school students learning about the carbon cycle through computational agent-based models and investigates two questions: First, how do students ground their understanding about the phenomenon when they learn and solve problems with computer models? Second, what are common sources of mistakes in students' reasoning with computer models? Results show that students ground their understanding in computer models in five ways: direct observation, straight abstraction, generalisation, conceptualisation, and extension. Students also incorporate into their reasoning their knowledge and experiences that extend beyond phenomena represented in the models, such as attitudes about unsustainable carbon emission rates, human agency, external events, and the nature of computational models. The most common difficulties of the students relate to seeing the modelled scientific phenomenon and connecting results from the observations with other experiences and understandings about the phenomenon in the outside world. An important contribution of this study is the constructed coding scheme for establishing different ways of grounding, which helps to understand some challenges that students encounter when they learn about complex phenomena with agent-based computer models.

  1. Relationship Marketing results: proposition of a cognitive mapping model

    Directory of Open Access Journals (Sweden)

    Iná Futino Barreto

    2015-12-01

    Full Text Available Objective – This research sought to develop a cognitive model that expresses how marketing professionals understand the relationship between the constructs that define relationship marketing (RM). It also tried to understand, using the obtained model, how objectives in this field are achieved. Design/methodology/approach – Through cognitive mapping, we traced 35 individual mental maps, highlighting how each respondent understands the interactions between RM elements. Based on the views of these individuals, we established an aggregate mental map. Theoretical foundation – The topic is based on a literature review that explores the RM concept and its main elements. Based on this review, we listed eleven main constructs. Findings – We established an aggregate mental map that represents the RM structural model. Model analysis identified that CLV is understood as the final result of RM. We also observed that the impact of most of the RM elements on CLV is mediated by loyalty. Personalization and quality, on the other hand, proved to be process input elements, and are the ones that most strongly impact the others. Finally, we highlight that elements that punish customers are much less effective than elements that benefit them. Contributions – The model was able to incorporate core elements of RM that are absent from most formal models: CLV and customization. The analysis allowed us to understand the interactions between the RM elements and how the end result of RM (CLV) is formed. This understanding improves knowledge on the subject and helps guide, assess and correct actions.

  2. Value of the distant future: Model-independent results

    Science.gov (United States)

    Katz, Yuri A.

    2017-01-01

    This paper shows that a model-independent account of correlations in an interest rate process or a log-consumption growth process leads to declining long-term tails of discount curves. Under the assumption of an exponentially decaying memory in fluctuations of risk-free real interest rates, I derive the analytical expression for an apt value of the long run discount factor and provide a detailed comparison of the obtained result with the outcome of the benchmark risk-free interest rate models. Utilizing the standard consumption-based model with an isoelastic power utility of the representative economic agent, I derive the non-Markovian generalization of the Ramsey discounting formula. The obtained analytical results, which allow simple calibration, may augment the rigorous cost-benefit and regulatory impact analysis of long-term environmental and infrastructure projects.
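
    The mechanics behind a declining long-term discount curve can be illustrated numerically. The sketch below is a simplified illustration, not Katz's actual derivation: it computes the certainty-equivalent discount curve when the risk-free rate follows an Ornstein-Uhlenbeck process, whose exponentially decaying autocorrelation matches the memory assumption described above. All parameter values are illustrative.

    import numpy as np

    # Certainty-equivalent discounting for r_t = rbar + x_t, where x_t is an
    # Ornstein-Uhlenbeck process dx = -lam*x dt + sig dW (exponentially
    # decaying memory in the rate fluctuations).
    rbar, lam, sig = 0.03, 0.1, 0.01
    t = np.linspace(1.0, 200.0, 200)

    # Variance of the integrated OU deviation Y_t = int_0^t x_s ds (x_0 = 0).
    var_Y = (sig / lam) ** 2 * (
        t - 2.0 * (1.0 - np.exp(-lam * t)) / lam
          + (1.0 - np.exp(-2.0 * lam * t)) / (2.0 * lam))

    # D(t) = E[exp(-int_0^t r_s ds)] = exp(-rbar*t + Var(Y_t)/2), Y_t Gaussian.
    D = np.exp(-rbar * t + 0.5 * var_Y)
    R_eff = -np.log(D) / t          # term structure of effective discount rates

    print(R_eff[0], R_eff[-1])      # the long-horizon rate is the lower one

    Because var_Y grows roughly linearly in t, the effective rate declines from rbar toward rbar - sig**2/(2*lam**2), which is the qualitative "declining tail" behaviour the abstract describes.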

  3. Event-Based Conceptual Modeling

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    2009-01-01

    The purpose of the paper is to obtain insight into and provide practical advice for event-based conceptual modeling. We analyze a set of event concepts and use the results to formulate a conceptual event model that is used to identify guidelines for creation of dynamic process models and static...... information models. We characterize events as short-duration processes that have participants, consequences, and properties, and that may be modeled in terms of information structures. The conceptual event model is used to characterize a variety of event concepts and it is used to illustrate how events can...... be used to integrate dynamic modeling of processes and static modeling of information structures. The results are unique in the sense that no other general event concept has been used to unify a similar broad variety of seemingly incompatible event concepts. The general event concept can be used...

  4. Model of Procedure Usage – Results from a Qualitative Study to Inform Design of Computer-Based Procedures

    Energy Technology Data Exchange (ETDEWEB)

    Johanna H Oxstrand; Katya L Le Blanc

    2012-07-01

    The nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. As a step toward the goal of improving procedure use performance, researchers, together with the nuclear industry, have been looking at replacing the current paper-based procedures with computer-based procedure systems. The concept of computer-based procedures is not new by any means; however, most research has focused on procedures used in the main control room. Procedures reviewed in these efforts are mainly emergency operating procedures and normal operating procedures. Based on lessons learned from these previous efforts, we are now exploring a less familiar application of computer-based procedures: field procedures, i.e., procedures used by nuclear equipment operators and maintenance technicians. The Idaho National Laboratory, the Institute for Energy Technology, and participants from the U.S. commercial nuclear industry are collaborating in an applied research effort with the objective of developing requirements and specifications for a computer-based procedure system to be used by field operators. The goal is to identify the types of human errors that can be mitigated by using computer-based procedures and how to best design the computer-based procedures to do this. The underlying philosophy in the research effort is “Stop – Start – Continue”, i.e., what features from the use of paper-based procedures should we not incorporate (Stop), what should we keep (Continue), and what new features or work processes should be added (Start). One step in identifying the Stop – Start – Continue was to conduct a baseline study in which affordances related to the current usage of paper-based procedures were identified. The purpose of the study was to develop a model of paper-based procedure use which will help to identify desirable features for computer-based procedure prototypes. Affordances such as note taking, markups

  5. Effects of Problem-Based Learning Model versus Expository Model and Motivation to Achieve for Student's Physic Learning Result of Senior High School at Class XI

    Science.gov (United States)

    Prayekti

    2016-01-01

    "Problem-based learning" (PBL) is one of an innovative learning model which can provide an active learning to student, include the motivation to achieve showed by student when the learning is in progress. This research is aimed to know: (1) differences of physic learning result for student group which taught by PBL versus expository…

  6. Measurement-based reliability/performability models

    Science.gov (United States)

    Hsueh, Mei-Chen

    1987-01-01

    Measurement-based models based on real error data collected on a multiprocessor system are described. Model development from the raw error data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation, in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.
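
    The distinction the study draws between Markov and semi-Markov models comes down to the holding-time distributions. A minimal illustration, with invented states and Weibull holding times standing in for the measured non-exponential ones (none of the numbers below come from the paper):

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy semi-Markov model of an operational/error/recovery cycle.
    states = ["normal", "error", "recovery"]
    P = np.array([[0.0, 1.0, 0.0],      # transition probabilities at jumps
                  [0.1, 0.0, 0.9],
                  [1.0, 0.0, 0.0]])
    # Weibull holding times; shape != 1 makes them non-exponential,
    # which is exactly what forces the semi-Markov treatment.
    shape = {"normal": 0.7, "error": 1.5, "recovery": 2.0}
    scale = {"normal": 100.0, "error": 0.5, "recovery": 2.0}

    def simulate(t_end=1000.0):
        t, s, history = 0.0, 0, []
        while t < t_end:
            name = states[s]
            hold = scale[name] * rng.weibull(shape[name])
            history.append((name, t, hold))
            t += hold
            s = rng.choice(len(states), p=P[s])
        return history

    for name, t, hold in simulate()[:5]:
        print(f"{name:9s} entered at {t:8.2f}, held {hold:6.2f}")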

  7. The Design of Model-Based Training Programs

    Science.gov (United States)

    Polson, Peter; Sherry, Lance; Feary, Michael; Palmer, Everett; Alkin, Marty; McCrobie, Dan; Kelley, Jerry; Rosekind, Mark (Technical Monitor)

    1997-01-01

    This paper proposes a model-based training program for the skills necessary to operate advanced avionics systems that incorporate advanced autopilots and flight management systems. The training model is based on a formalism, the operational procedure model, that represents the mission model, the rules, and the functions of a modern avionics system. This formalism has been defined such that it can be understood and shared by pilots, the avionics software, and design engineers. Each element of the software is defined in terms of its intent (What?), the rationale (Why?), and the resulting behavior (How?). The Advanced Computer Tutoring project at Carnegie Mellon University has developed a type of model-based, computer-aided instructional technology called cognitive tutors. They summarize numerous studies showing that training to a specified level of competence can be achieved in one third the time of conventional classroom instruction. We are developing a similar model-based training program for the skills necessary to operate the avionics. The model underlying the instructional program, which simulates the effects of pilots' entries and the behavior of the avionics, is based on the operational procedure model. Pilots are given a series of vertical flightpath management problems. Entries that result in violations, such as failure to make a crossing restriction or violating the speed limits, result in error messages with instruction. At any time, the flightcrew can request suggestions on the appropriate set of actions. A similar and successful training program for basic skills for the FMS on the Boeing 737-300 was developed and evaluated. The results strongly support the claim that the training methodology can be adapted to the cockpit.

  8. Ocean EcoSystem Modelling Based on Observations from Satellite and In-Situ Data: First Results from the OSMOSIS Project

    Science.gov (United States)

    Rio, M.-H.; Buongiorno-Nardelli, B.; Calmettes, B.; Conchon, A.; Droghei, R.; Guinehut, S.; Larnicol, G.; Lehodey, P.; Matthieu, P. P.; Mulet, S.; Santoleri, R.; Senina, I.; Stum, J.; Verbrugge, N.

    2015-12-01

    Micronekton organisms are both the prey of large ocean predators and themselves predators of the eggs and larvae of many species, including most fishes. The micronekton biomass concentration is therefore a key explanatory variable that is usually missing in fish population and ecosystem models for understanding individual behaviour and population dynamics of large oceanic predators. In that context, the OSMOSIS (Ocean ecoSystem Modelling based on Observations from Satellite and In-Situ data) ESA project aims at demonstrating the feasibility of, and prototyping, an integrated system going from the synergetic use of many different variables measured from space to the modelling of the distribution of micronektonic organisms. In this paper, we present how data from CRYOSAT, GOCE, SMOS, ENVISAT, together with other non-ESA satellites and in-situ data, can be merged to provide the required key variables needed as input to the micronekton model. Also, first results from the optimization of the micronekton model are presented and discussed.

  9. EPR-based material modelling of soils

    Science.gov (United States)

    Faramarzi, Asaad; Alani, Amir M.

    2013-04-01

    In the past few decades, as a result of the rapid developments in computational software and hardware, alternative computer-aided pattern recognition approaches have been introduced to modelling many engineering problems, including constitutive modelling of materials. The main idea behind pattern recognition systems is that they learn adaptively from experience and extract various discriminants, each appropriate for its purpose. In this work an approach is presented for developing material models for soils based on evolutionary polynomial regression (EPR). EPR is a recently developed hybrid data mining technique that searches for structured mathematical equations (representing the behaviour of a system) using a genetic algorithm and the least squares method. Stress-strain data from triaxial tests are used to train and develop EPR-based material models for soil. The developed models are compared with some of the well-known conventional material models and it is shown that EPR-based models can provide a better prediction for the behaviour of soils. The main benefits of EPR-based material models are that they provide a unified approach to constitutive modelling of all materials (i.e., all aspects of material behaviour can be implemented within the unified environment of an EPR model) and that they do not require any arbitrary choice of constitutive (mathematical) models. In EPR-based material models there are no material parameters to be identified. As the model is trained directly from experimental data, EPR-based material models are the shortest route from experimental research (data) to numerical modelling. Another advantage of EPR-based constitutive models is that as more experimental data become available, the quality of the EPR prediction can be improved by learning from the additional data, and therefore the EPR model can become more effective and robust. The developed EPR-based material models can be incorporated in finite element (FE) analysis.
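
    EPR's core loop, structure search plus coefficient fitting by least squares, can be sketched in a few lines. The fragment below is a deliberately simplified stand-in (random structure search instead of a full genetic algorithm, monomial terms only, synthetic data) to show how candidate polynomial structures are scored; it is not the authors' implementation.

    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic "triaxial" data: inputs (strain, confining pressure) and a
    # target stress; purely illustrative, not real test data.
    X = rng.uniform(0.0, 1.0, size=(200, 2))
    y = 50.0 * X[:, 0] ** 0.5 * (1.0 + X[:, 1]) + rng.normal(0, 0.5, 200)

    EXPONENTS = np.array([0.0, 0.5, 1.0, 2.0])    # candidate exponents

    def build_terms(X, structure):
        # structure: one exponent tuple per term; each term is a monomial.
        return np.column_stack(
            [np.prod(X ** np.asarray(e), axis=1) for e in structure])

    best = (np.inf, None, None)
    for _ in range(500):                          # random structure search
        structure = [tuple(rng.choice(EXPONENTS, 2)) for _ in range(3)]
        A = build_terms(X, structure)
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        sse = np.sum((A @ coef - y) ** 2)         # least-squares fitness
        if sse < best[0]:
            best = (sse, structure, coef)

    print("best structure:", best[1], "coefficients:", np.round(best[2], 2))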

  10. CEAI: CCM-based email authorship identification model

    Directory of Open Access Journals (Sweden)

    Sarwat Nizamani

    2013-11-01

    Full Text Available In this paper we present a model for email authorship identification (EAI) by employing a Cluster-based Classification (CCM) technique. Traditionally, stylometric features have been successfully employed in various authorship analysis tasks; we extend the traditional feature set to include some more interesting and effective features for email authorship identification (e.g., the last punctuation mark used in an email, the tendency of an author to use capitalization at the start of an email, or the punctuation after a greeting or farewell). We also included content features selected using Information Gain. It is observed that the use of such features in the authorship identification process has a positive impact on the accuracy of the authorship identification task. We performed experiments to justify our arguments and compared the results with other baseline models. Experimental results reveal that the proposed CCM-based email authorship identification model, along with the proposed feature set, outperforms the state-of-the-art support vector machine (SVM)-based models, as well as the models proposed by Iqbal et al. (2010, 2013) [1,2]. The proposed model attains an accuracy rate of 94% for 10 authors, 89% for 25 authors, and 81% for 50 authors, respectively, on the Enron dataset, while 89.5% accuracy has been achieved on a real email dataset constructed by the authors. The results on the Enron dataset have been achieved on quite a large number of authors as compared to the models proposed by Iqbal et al. [1,2].
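
    To make the extended feature set concrete, here is a hedged sketch of how the email-specific features mentioned above (last punctuation mark, capitalization at the start of the message) might be extracted; the function name, feature names and exact definitions are ours, not the paper's.

    import re

    def email_features(text):
        """Toy extraction of email-specific stylometric features of the
        kind the paper describes; definitions here are illustrative."""
        lines = [ln for ln in text.splitlines() if ln.strip()]
        punct = re.findall(r"[.!?,;:]", text)
        first = lines[0].strip() if lines else ""
        return {
            "last_punctuation": punct[-1] if punct else "",
            "starts_capitalized": bool(first) and first[0].isupper(),
            "greeting_has_comma": bool(re.match(r"^(hi|hello|dear)\b.*,",
                                                first, re.IGNORECASE)),
            "exclamation_rate": text.count("!") / max(len(text), 1),
        }

    print(email_features("Hi John,\nthanks for the update!\ncheers, ann"))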

  11. Model-based machine learning.

    Science.gov (United States)

    Bishop, Christopher M

    2013-02-13

    Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications.

  12. Modelling Inter-Particle Forces and Resulting Agglomerate Sizes in Cement-Based Materials

    DEFF Research Database (Denmark)

    Kjeldsen, Ane Mette; Geiker, Mette Rica

    2005-01-01

    The theory of inter-particle forces versus external shear in cement-based materials is reviewed. On this basis, calculations on maximum agglomerate size present after the combined action of superplasticizers and shear are carried out. Qualitative experimental results indicate that external shear ...

  13. A model-based approach to adjust microwave observations for operational applications: results of a campaign at Munich Airport in winter 2011/2012

    Directory of Open Access Journals (Sweden)

    J. Güldner

    2013-10-01

    Full Text Available In the frame of the project "LuFo iPort VIS", which focuses on the implementation of a site-specific visibility forecast, a field campaign was organised to offer detailed information to a numerical fog model. As part of additional observing activities, a 22-channel microwave radiometer profiler (MWRP) was operated at the Munich Airport site in Germany from October 2011 to February 2012 in order to provide vertical temperature and humidity profiles as well as cloud liquid water information. Independently from the model-related aims of the campaign, the MWRP observations were used to study their capabilities to work in operational meteorological networks. Over the past decade a growing number of MWRPs have been introduced and a user community (MWRnet) was established to encourage activities directed at the setup of an operational network. On that account, the comparability of observations from different network sites plays a fundamental role for any applications in climatology and numerical weather forecast. In practice, however, systematic temperature and humidity differences (bias) between MWRP retrievals and co-located radiosonde profiles were observed and reported by several authors. This bias can be caused by instrumental offsets and by the absorption model used in the retrieval algorithms, as well as by applying a non-representative training data set. At the Lindenberg observatory, besides a neural network provided by the manufacturer, a measurement-based regression method was developed to reduce the bias. These regression operators are calculated on the basis of coincident radiosonde observations and MWRP brightness temperature (TB) measurements. However, MWRP applications in a network require comparable results at any site, even if no radiosondes are available. The motivation of this work is the verification of the suitability of the operational local forecast model COSMO-EU of the Deutscher Wetterdienst (DWD) for the calculation
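
    The measurement-based regression method described above amounts to fitting linear operators that map MWRP brightness temperatures to radiosonde profiles. A minimal sketch, with invented array shapes and synthetic data standing in for the Lindenberg observations:

    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic stand-ins: 500 coincident samples of 22-channel brightness
    # temperatures (TB) and radiosonde temperatures on 40 vertical levels.
    n, n_chan, n_lev = 500, 22, 40
    TB = rng.normal(250.0, 5.0, size=(n, n_chan))
    T_sonde = (TB @ rng.normal(0, 0.1, size=(n_chan, n_lev)) + 260.0
               + rng.normal(0, 0.5, size=(n, n_lev)))

    # Fit one linear regression operator (with an offset term) per level.
    A = np.column_stack([TB, np.ones(n)])              # design matrix
    W, *_ = np.linalg.lstsq(A, T_sonde, rcond=None)    # (n_chan + 1, n_lev)

    T_retrieved = A @ W
    print("rms error:", np.sqrt(np.mean((T_retrieved - T_sonde) ** 2)))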

  14. Probabilistic Model-based Background Subtraction

    DEFF Research Database (Denmark)

    Krüger, Volker; Anderson, Jakob; Prehn, Thomas

    2005-01-01

    is the correlation between pixels. In this paper we introduce a model-based background subtraction approach which facilitates prior knowledge of pixel correlations for clearer and better results. Model knowledge is learned from good training video data; the data is stored for fast access in a hierarchical...
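
    For contrast with the pixel-correlation model used in the paper, the pixel-independent baseline it improves on, per-pixel background subtraction with a running Gaussian model, can be sketched as follows (thresholds and learning rate are illustrative):

    import numpy as np

    def update_background(frame, mean, var, alpha=0.02, k=2.5):
        """Classic per-pixel running-Gaussian background subtraction."""
        dist = np.abs(frame - mean)
        foreground = dist > k * np.sqrt(var)
        # Update background statistics only where the pixel looks background.
        bg = ~foreground
        mean[bg] += alpha * (frame[bg] - mean[bg])
        var[bg] += alpha * ((frame[bg] - mean[bg]) ** 2 - var[bg])
        return foreground, mean, var

    # Toy 4x4 "video": static background plus one bright moving blob.
    rng = np.random.default_rng(3)
    mean = np.full((4, 4), 100.0)
    var = np.full((4, 4), 25.0)
    for t in range(10):
        frame = 100.0 + rng.normal(0, 2, (4, 4))
        frame[t % 4, 0] = 200.0                      # the "object"
        fg, mean, var = update_background(frame, mean, var)
    print(fg.astype(int))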

  15. Identification of walking human model using agent-based modelling

    Science.gov (United States)

    Shahabpoor, Erfan; Pavic, Aleksandar; Racic, Vitomir

    2018-03-01

    The interaction of walking people with large vibrating structures, such as footbridges and floors, in the vertical direction is an important yet challenging phenomenon to describe mathematically. Several different models have been proposed in the literature to simulate the interaction of stationary people with vibrating structures. However, research on moving (walking) human models, explicitly identified for vibration serviceability assessment of civil structures, is still sparse. In this study, the results of a comprehensive set of FRF-based modal tests were used, in which over a hundred test subjects walked in different group sizes and walking patterns on a test structure. An agent-based model was used to simulate discrete traffic-structure interactions. The occupied structure modal parameters found in tests were used to identify the parameters of the walking individual's single-degree-of-freedom (SDOF) mass-spring-damper model using a 'reverse engineering' methodology. The analysis of the results suggested that the normal distribution with an average of μ = 2.85 Hz and a standard deviation of σ = 0.34 Hz can describe the human SDOF model natural frequency. Similarly, the normal distribution with μ = 0.295 and σ = 0.047 can describe the human model damping ratio. Compared to previous studies, the agent-based modelling methodology proposed in this paper offers significant flexibility in simulating multi-pedestrian walking traffic, external forces, and different mechanisms of human-structure and human-environment interaction at the same time.
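
    The identified distributions translate directly into a sampling recipe for crowd simulation. A short sketch, assuming the two SDOF parameters are drawn independently and assuming a modal mass value (the mass is our placeholder, not a result of the paper):

    import numpy as np

    rng = np.random.default_rng(4)

    def sample_walker(mass=75.0):
        """Draw one walking-human SDOF model from the identified
        distributions; independence and the mass value are assumptions."""
        f_n = rng.normal(2.85, 0.34)          # natural frequency [Hz]
        zeta = rng.normal(0.295, 0.047)       # damping ratio [-]
        k = mass * (2.0 * np.pi * f_n) ** 2   # stiffness [N/m]
        c = 2.0 * zeta * np.sqrt(k * mass)    # damping [N s/m]
        return {"m": mass, "k": k, "c": c, "f_n": f_n, "zeta": zeta}

    crowd = [sample_walker() for _ in range(20)]   # a 20-person traffic case
    print(crowd[0])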

  16. Reconstructing Holocene climate using a climate model: Model strategy and preliminary results

    Science.gov (United States)

    Haberkorn, K.; Blender, R.; Lunkeit, F.; Fraedrich, K.

    2009-04-01

    An Earth system model of intermediate complexity (Planet Simulator; PlaSim) is used to reconstruct Holocene climate based on proxy data. The Planet Simulator is a user-friendly general circulation model (GCM) suitable for palaeoclimate research. Its easy handling and the modular structure allow for fast and problem-dependent simulations. The spectral model is based on the moist primitive equations conserving momentum, mass, energy and moisture. Besides the atmospheric part, a mixed layer-ocean with sea ice and a land surface with biosphere are included. The present-day climate of PlaSim, based on an AMIP II control-run (T21/10L resolution), shows reasonable agreement with ERA-40 reanalysis data. Combining PlaSim with a socio-technological model (GLUES; DFG priority project INTERDYNAMIK) provides improved knowledge on the shift from hunting-gathering to agropastoral subsistence societies. This is achieved by a data assimilation approach, incorporating proxy time series into PlaSim to initialize palaeoclimate simulations during the Holocene. For this, the following strategy is applied: The sensitivities of the terrestrial PlaSim climate are determined with respect to sea surface temperature (SST) anomalies. Here, the focus is the impact of regionally varying SST both in the tropics and the Northern Hemisphere mid-latitudes. The inverse of these sensitivities is used to determine the SST conditions necessary for the nudging of land and coastal proxy climates. Preliminary results indicate the potential, the uncertainty and the limitations of the method.

  17. ANFIS-Based Modeling for Photovoltaic Characteristics Estimation

    Directory of Open Access Journals (Sweden)

    Ziqiang Bi

    2016-09-01

    Full Text Available Due to the high cost of photovoltaic (PV) modules, an accurate performance estimation method is significantly valuable for studying the electrical characteristics of PV generation systems. Conventional analytical PV models are usually composed of nonlinear exponential functions, and a good number of unknown parameters must be identified before use. In this paper, an adaptive-network-based fuzzy inference system (ANFIS)-based modeling method is proposed to predict the current-voltage characteristics of PV modules. The effectiveness of the proposed modeling method is evaluated through comparison with Villalva's model, a radial basis function neural network (RBFNN)-based model, and a support vector regression (SVR)-based model. Simulation and experimental results confirm both the feasibility and the effectiveness of the proposed method.

  18. An Investigation Of The Influence Of Leadership And Processes On Basic Performance Results Using A Decision Model Based On Efqm

    Directory of Open Access Journals (Sweden)

    Ahmet Talat İnan

    2013-06-01

    Full Text Available The EFQM Excellence Model is a quality approach from which companies benefit in achieving success. It is an assessment tool that helps determine strengths and missing aspects on the path to excellence. In this study, based on the EFQM Excellence Model, the influence of the leadership and processes variables on key performance results was investigated for a firm providing maintenance and repair services to a large-scale company. A survey covering the company's employees and managers was conducted. The data obtained from the survey were analysed with the SPSS 16.0 statistics software using factor analysis, reliability analysis, and correlation and regression analysis. The relations between the variables were evaluated taking the results of these analyses into account.

  19. Event-based model diagnosis of rainfall-runoff model structures

    International Nuclear Information System (INIS)

    Stanzel, P.

    2012-01-01

    The objective of this research is a comparative evaluation of different rainfall-runoff model structures. Comparative model diagnostics facilitate the assessment of strengths and weaknesses of each model. The application of multiple models allows an analysis of simulation uncertainties arising from the selection of model structure, as compared with effects of uncertain parameters and precipitation input. Four different model structures, including conceptual and physically based approaches, are compared. In addition to runoff simulations, results for soil moisture and the runoff components of overland flow, interflow and base flow are analysed. Catchment runoff is simulated satisfactorily by all four model structures and shows only minor differences. Systematic deviations from runoff observations provide insight into model structural deficiencies. While physically based model structures capture some single runoff events better, they do not generally outperform conceptual model structures. Contributions to uncertainty in runoff simulations stemming from the choice of model structure show similar dimensions to those arising from parameter selection and the representation of precipitation input. Variations in precipitation mainly affect the general level and peaks of runoff, while different model structures lead to different simulated runoff dynamics. Large differences between the four analysed models are detected for simulations of soil moisture and, even more pronounced, runoff components. Soil moisture changes are more dynamic in the physically based model structures, which is in better agreement with observations. Streamflow contributions of overland flow are considerably lower in these models than in the more conceptual approaches. Observations of runoff components are rarely made and are not available in this study, but are shown to have high potential for an effective selection of appropriate model structures (author)

  20. Space Launch System Base Heating Test: Experimental Operations & Results

    Science.gov (United States)

    Dufrene, Aaron; Mehta, Manish; MacLean, Matthew; Seaford, Mark; Holden, Michael

    2016-01-01

    NASA's Space Launch System (SLS) uses four clustered liquid rocket engines along with two solid rocket boosters. The interaction between all six rocket exhaust plumes will produce a complex and severe thermal environment in the base of the vehicle. This work focuses on a recent 2% scale, hot-fire SLS base heating test. These base heating tests are short-duration tests executed with chamber pressures near the full-scale values with gaseous hydrogen/oxygen engines and RSRMV analogous solid propellant motors. The LENS II shock tunnel/Ludwieg tube tunnel was used at or near flight duplicated conditions up to Mach 5. Model development was based on the Space Shuttle base heating tests with several improvements including doubling of the maximum chamber pressures and duplication of freestream conditions. Test methodology and conditions are presented, and base heating results from 76 runs are reported in non-dimensional form. Regions of high heating are identified and comparisons of various configuration and conditions are highlighted. Base pressure and radiometer results are also reported.

  1. Model-based sensor diagnosis

    International Nuclear Information System (INIS)

    Milgram, J.; Dormoy, J.L.

    1994-09-01

    Running a nuclear power plant involves monitoring data provided by the installation's sensors. Operators and computerized systems then use these data to establish a diagnostic of the plant. However, the instrumentation system is complex, and is not immune to faults and failures. This paper presents a system for detecting sensor failures using a topological description of the installation and a set of component models. This model of the plant implicitly contains relations between sensor data. These relations must always be checked if all the components are functioning correctly. The failure detection task thus consists of checking these constraints. The constraints are extracted in two stages. Firstly, a qualitative model of their existence is built using structural analysis. Secondly, the models are formally handled according to the results of the structural analysis, in order to establish the constraints on the sensor data. This work constitutes an initial step in extending model-based diagnosis, as the information on which it is based is suspect. This work will be followed by surveillance of the detection system. When the instrumentation is assumed to be sound, the unverified constraints indicate errors on the plant model. (authors). 8 refs., 4 figs
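
    The failure-detection task described, checking constraints that must hold between sensor readings whenever all components function correctly, can be illustrated with a toy example. The plant, constraints and tolerances below are invented for illustration, not taken from the paper's plant model:

    # Toy analytical-redundancy check: each constraint must hold (within a
    # tolerance) when the components are healthy; a violated constraint
    # points at the sensors it involves.
    readings = {"flow_in": 10.2, "flow_out": 9.1, "dp_pump": 4.8, "rpm": 980}

    constraints = [
        # (name, residual function, involved sensors)
        ("mass_balance", lambda r: r["flow_in"] - r["flow_out"],
         ["flow_in", "flow_out"]),
        ("pump_law", lambda r: r["dp_pump"] - 5.0e-6 * r["rpm"] ** 2,
         ["dp_pump", "rpm"]),
    ]

    TOL = 0.5
    for name, f, sensors in constraints:
        res = f(readings)
        if abs(res) > TOL:
            print(f"{name}: residual {res:+.2f} -> suspect sensors {sensors}")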

  2. Profile control simulations and experiments on TCV: a controller test environment and results using a model-based predictive controller

    Science.gov (United States)

    Maljaars, E.; Felici, F.; Blanken, T. C.; Galperti, C.; Sauter, O.; de Baar, M. R.; Carpanese, F.; Goodman, T. P.; Kim, D.; Kim, S. H.; Kong, M.; Mavkov, B.; Merle, A.; Moret, J. M.; Nouailletas, R.; Scheffer, M.; Teplukhina, A. A.; Vu, N. M. T.; The EUROfusion MST1-team; The TCV-team

    2017-12-01

    The successful performance of a model predictive profile controller is demonstrated in simulations and experiments on the TCV tokamak, employing a profile controller test environment. Stable high-performance tokamak operation in hybrid and advanced plasma scenarios requires control over the safety factor profile (q-profile) and kinetic plasma parameters such as the plasma beta. This demands that reliable profile control routines be established in presently operational tokamaks. We present a model predictive profile controller that controls the q-profile and plasma beta using power requests to two clusters of gyrotrons and the plasma current request. The performance of the controller is analyzed in both simulations and TCV L-mode discharges, where successful tracking of the estimated inverse q-profile as well as plasma beta is demonstrated under uncertain plasma conditions and in the presence of disturbances. The controller exploits the knowledge of the time-varying actuator limits in the actuator input calculation itself, such that fast transitions between targets are achieved without overshoot. A software environment is employed to prepare and test this and three other profile controllers in parallel in simulations and experiments on TCV. This set of tools includes the rapid plasma transport simulator RAPTOR and various algorithms to reconstruct the plasma equilibrium and plasma profiles by merging the available measurements with model-based predictions. In this work the estimated q-profile is merely based on RAPTOR model predictions due to the absence of internal current density measurements in TCV. These results encourage further exploitation of model predictive profile control in experiments on TCV and other (future) tokamaks.
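
    The controller property highlighted above, respecting time-varying actuator limits inside the input calculation itself, is the defining feature of constrained model predictive control. A generic sketch for a toy linear discrete-time plant (not RAPTOR's transport model; system matrices, horizon and limits are all illustrative):

    import numpy as np
    from scipy.optimize import minimize

    # Receding-horizon tracking for x+ = a*x + b*u with hard input bounds
    # enforced directly in the optimisation.
    a, b, H = 0.9, 0.5, 10
    u_min, u_max = -1.0, 1.0
    target = 2.0

    def cost(u_seq, x0):
        x, c = x0, 0.0
        for u in u_seq:
            x = a * x + b * u
            c += (x - target) ** 2 + 0.01 * u ** 2
        return c

    x = 0.0
    for step in range(15):
        res = minimize(cost, np.zeros(H), args=(x,),
                       bounds=[(u_min, u_max)] * H, method="L-BFGS-B")
        u0 = res.x[0]                 # apply only the first planned input
        x = a * x + b * u0
        print(f"step {step:2d}: u = {u0:+.2f}, x = {x:.3f}")

    Because the bounds sit inside the optimisation rather than being applied as a clip afterwards, the planned trajectory already anticipates saturation, which is what avoids overshoot during fast target transitions.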

  3. 3-D model-based vehicle tracking.

    Science.gov (United States)

    Lou, Jianguang; Tan, Tieniu; Hu, Weiming; Yang, Hao; Maybank, Steven J

    2005-10-01

    This paper aims at tracking vehicles from monocular intensity image sequences and presents an efficient and robust approach to three-dimensional (3-D) model-based vehicle tracking. Under the weak perspective assumption and the ground-plane constraint, the movements of model projection in the two-dimensional image plane can be decomposed into two motions: translation and rotation. They are the results of the corresponding movements of 3-D translation on the ground plane (GP) and rotation around the normal of the GP, which can be determined separately. A new metric based on point-to-line segment distance is proposed to evaluate the similarity between an image region and an instantiation of a 3-D vehicle model under a given pose. Based on this, we provide an efficient pose refinement method to refine the vehicle's pose parameters. An improved EKF is also proposed to track and to predict vehicle motion with a precise kinematics model. Experimental results with both indoor and outdoor data show that the algorithm obtains desirable performance even under severe occlusion and clutter.
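
    The similarity metric named above is built on the point-to-line-segment distance, which has a standard closed form. A small sketch of that building block (ours, not the authors' exact evaluation function):

    import numpy as np

    def point_to_segment(p, a, b):
        """Distance from point p to the segment ab (2-D numpy arrays)."""
        ab, ap = b - a, p - a
        t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
        return np.linalg.norm(p - (a + t * ab))

    print(point_to_segment(np.array([1.0, 1.0]),
                           np.array([0.0, 0.0]),
                           np.array([2.0, 0.0])))   # -> 1.0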

  4. Unifying Model-Based and Reactive Programming within a Model-Based Executive

    Science.gov (United States)

    Williams, Brian C.; Gupta, Vineet; Norvig, Peter (Technical Monitor)

    1999-01-01

    Real-time, model-based, deduction has recently emerged as a vital component in AI's tool box for developing highly autonomous reactive systems. Yet one of the current hurdles towards developing model-based reactive systems is the number of methods simultaneously employed, and their corresponding melange of programming and modeling languages. This paper offers an important step towards unification. We introduce RMPL, a rich modeling language that combines probabilistic, constraint-based modeling with reactive programming constructs, while offering a simple semantics in terms of hidden state Markov processes. We introduce probabilistic, hierarchical constraint automata (PHCA), which allow Markov processes to be expressed in a compact representation that preserves the modularity of RMPL programs. Finally, a model-based executive, called Reactive Burton, is described that exploits this compact encoding to perform efficient simulation, belief state update and control sequence generation.

  5. Result-Based Public Governance

    DEFF Research Database (Denmark)

    Boll, Karen

Within the public sector, many institutions are either steered by governance by targets or result-based governance. The former sets up quantitative internal production targets, while the latter advocates that production is planned according to outcomes which are defined as institution-produced effects on individuals or businesses in society; effects which are often produced by ‘nudging’ the citizenry in a certain direction. With point of departure in these two governance-systems, the paper explores a case of controversial inspection of businesses’ negative VAT accounts and it describes...... explores how and why this state of affairs appears and problematizes the widespread use of result-based governance and nudging-techniques by public sector institutions....

  6. Model-Based Requirements Management in Gear Systems Design Based On Graph-Based Design Languages

    Directory of Open Access Journals (Sweden)

    Kevin Holder

    2017-10-01

    specification list and were analyzed in detail. As a second basis, the research method uses a conscious expansion of graph-based design languages towards their applicability for requirements management. This expansion allows the handling of requirements through a graph-based design language model. The first two results of the presented research consist of a model of the gear system and a detailed model of requirements, both modelled in a graph-based design language. Further results are generated by a combination of the two models into one holistic model.

  7. Anisotropy in wavelet-based phase field models

    KAUST Repository

    Korzec, Maciek; Münch, Andreas; Süli, Endre; Wagner, Barbara

    2016-01-01

    When describing the anisotropic evolution of microstructures in solids using phase-field models, the anisotropy of the crystalline phases is usually introduced into the interfacial energy by directional dependencies of the gradient energy coefficients. We consider an alternative approach based on a wavelet analogue of the Laplace operator that is intrinsically anisotropic and linear. The paper focuses on the classical coupled temperature/Ginzburg–Landau type phase-field model for dendritic growth. For the model based on the wavelet analogue, existence, uniqueness and continuous dependence on initial data are proved for weak solutions. Numerical studies of the wavelet based phase-field model show dendritic growth similar to the results obtained for classical phase-field models.

  8. Anisotropy in wavelet-based phase field models

    KAUST Repository

    Korzec, Maciek

    2016-04-01

    When describing the anisotropic evolution of microstructures in solids using phase-field models, the anisotropy of the crystalline phases is usually introduced into the interfacial energy by directional dependencies of the gradient energy coefficients. We consider an alternative approach based on a wavelet analogue of the Laplace operator that is intrinsically anisotropic and linear. The paper focuses on the classical coupled temperature/Ginzburg–Landau type phase-field model for dendritic growth. For the model based on the wavelet analogue, existence, uniqueness and continuous dependence on initial data are proved for weak solutions. Numerical studies of the wavelet based phase-field model show dendritic growth similar to the results obtained for classical phase-field models.

  9. Testing R&D-Based Endogenous Growth Models

    DEFF Research Database (Denmark)

    Kruse-Andersen, Peter Kjær

    2017-01-01

    R&D-based growth models are tested using US data for the period 1953-2014. A general growth model is developed which nests the model varieties of interest. The model implies a cointegrating relationship between multifactor productivity, research intensity, and employment. This relationship...... is estimated using cointegrated VAR models. The results provide evidence against the widely used fully endogenous variety and in favor of the semi-endogenous variety. Forecasts based on the empirical estimates suggest that the slowdown in US productivity growth will continue. Particularly, the annual long...

  10. An Emotional Agent Model Based on Granular Computing

    Directory of Open Access Journals (Sweden)

    Jun Hu

    2012-01-01

    Full Text Available Affective computing is of great significance for intelligent information processing and for harmonious communication between human beings and computers. A new model for an emotional agent is proposed in this paper to give agents the ability to handle emotions, based on granular computing theory and the traditional BDI agent model. Firstly, a new emotion knowledge base based on granular computing for emotion expression is presented in the model. Secondly, a new emotional reasoning algorithm based on granular computing is proposed. Thirdly, a new emotional agent model based on granular computing is presented. Finally, based on the model, an emotional agent for patient assistance in a hospital is realized; experimental results show that it handles simple emotions efficiently.

  11. Models of Automation Surprise: Results of a Field Survey in Aviation

    Directory of Open Access Journals (Sweden)

    Robert De Boer

    2017-09-01

    Full Text Available Automation surprises in aviation continue to be a significant safety concern and the community's search for effective strategies to mitigate them is ongoing. The literature has offered two fundamentally divergent directions, based on different ideas about the nature of cognition and collaboration with automation. In this paper, we report the results of a field study that empirically compared and contrasted two models of automation surprises: a normative individual-cognition model and a sensemaking model based on distributed cognition. Our data are a good fit for the sensemaking model. This finding is relevant for aviation safety, since our understanding of the cognitive processes that govern human interaction with automation drives what we need to do to reduce the frequency of automation-induced events.

  12. Model-based Abstraction of Data Provenance

    DEFF Research Database (Denmark)

    Probst, Christian W.; Hansen, René Rydhof

    2014-01-01

    Identifying provenance of data provides insights to the origin of data and intermediate results, and has recently gained increased interest due to data-centric applications. In this work we extend a data-centric system view with actors handling the data and policies restricting actions. This extension is based on provenance analysis performed on system models. System models have been introduced to model and analyse spatial and organisational aspects of organisations, to identify, e.g., potential insider threats. Both the models and analyses are naturally modular; models can be combined...... to bigger models, and the analyses adapt accordingly. Our approach extends provenance both with the origin of data, the actors and processes involved in the handling of data, and policies applied while doing so. The model and corresponding analyses are based on a formal model of spatial and organisational......

  13. Data-based Non-Markovian Model Inference

    Science.gov (United States)

    Ghil, Michael

    2015-04-01

    This talk concentrates on obtaining stable and efficient data-based models for simulation and prediction in the geosciences and life sciences. The proposed model derivation relies on using a multivariate time series of partial observations from a large-dimensional system, and the resulting low-order models are compared with the optimal closures predicted by the non-Markovian Mori-Zwanzig formalism of statistical physics. Multilayer stochastic models (MSMs) are introduced as both a very broad generalization and a time-continuous limit of existing multilevel, regression-based approaches to data-based closure, in particular of empirical model reduction (EMR). We show that the multilayer structure of MSMs can provide a natural Markov approximation to the generalized Langevin equation (GLE) of the Mori-Zwanzig formalism. A simple correlation-based stopping criterion for an EMR-MSM model is derived to assess how well it approximates the GLE solution. Sufficient conditions are given for the nonlinear cross-interactions between the constitutive layers of a given MSM to guarantee the existence of a global random attractor. This existence ensures that no blow-up can occur for a very broad class of MSM applications. The EMR-MSM methodology is first applied to a conceptual, nonlinear, stochastic climate model of coupled slow and fast variables, in which only slow variables are observed. The resulting reduced model with energy-conserving nonlinearities captures the main statistical features of the slow variables, even when there is no formal scale separation and the fast variables are quite energetic. Second, an MSM is shown to successfully reproduce the statistics of a partially observed, generalized Lotka-Volterra model of population dynamics in its chaotic regime. The positivity constraint on the solutions' components replaces here the quadratic-energy-preserving constraint of fluid-flow problems and it successfully prevents blow-up. This work is based on a close

  14. Atmospheric greenhouse gases retrieved from SCIAMACHY: comparison to ground-based FTS measurements and model results

    Directory of Open Access Journals (Sweden)

    O. Schneising

    2012-02-01

    Full Text Available SCIAMACHY onboard ENVISAT (launched in 2002) enables the retrieval of global long-term column-averaged dry air mole fractions of the two most important anthropogenic greenhouse gases, carbon dioxide and methane (denoted XCO2 and XCH4). In order to assess the quality of the greenhouse gas data obtained with the recently introduced v2 of the scientific retrieval algorithm WFM-DOAS, we present validations with ground-based Fourier Transform Spectrometer (FTS) measurements and comparisons with model results at eight Total Carbon Column Observing Network (TCCON) sites providing realistic error estimates of the satellite data. Such validation is a prerequisite to assess the suitability of data sets for their use in inverse modelling.

    It is shown that there are generally no significant differences between the carbon dioxide annual increases of SCIAMACHY and the assimilation system CarbonTracker (2.00 ± 0.16 ppm yr⁻¹ compared to 1.94 ± 0.03 ppm yr⁻¹ on global average). The XCO2 seasonal cycle amplitudes derived from SCIAMACHY are typically larger than those from TCCON, which are in turn larger than those from CarbonTracker. The absolute values of the northern hemispheric TCCON seasonal cycle amplitudes are closer to SCIAMACHY than to CarbonTracker, and the corresponding differences are not significant when compared with SCIAMACHY, whereas they can be significant for a subset of the analysed TCCON sites when compared with CarbonTracker. At Darwin we find discrepancies of the seasonal cycle derived from SCIAMACHY compared to the other data sets which can probably be ascribed to occurrences of undetected thin clouds. Based on the comparison with the reference data, we conclude that the carbon dioxide data set can be characterised by a regional relative precision (mean standard deviation of the differences) of about 2.2 ppm and a relative accuracy (standard deviation of the mean differences

  15. Model-Based Motion Tracking of Infants

    DEFF Research Database (Denmark)

    Olsen, Mikkel Damgaard; Herskind, Anna; Nielsen, Jens Bo

    2014-01-01

    Even though motion tracking is a widely used technique to analyze and measure human movements, only a few studies focus on motion tracking of infants. In recent years, a number of studies have emerged focusing on analyzing the motion pattern of infants, using computer vision. Most of these studies...... are based on 2D images, but few are based on 3D information. In this paper, we present a model-based approach for tracking infants in 3D. The study extends a novel study on graph-based motion tracking of infants and we show that the extension improves the tracking results. A 3D model is constructed...

  16. Segment-based Eyring-Wilson viscosity model for polymer solutions

    International Nuclear Information System (INIS)

    Sadeghi, Rahmat

    2005-01-01

    A theory-based model is presented for correlating the viscosity of polymer solutions; it is based on the segment-based Eyring mixture viscosity model together with the segment-based Wilson model for describing deviations from ideality. The model has been applied to several polymer solutions and the results show that it is reliable both for correlation and for prediction of the viscosity of polymer solutions at different polymer molar masses and temperatures.
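
    The backbone of such segment-based Eyring models is the Eyring mixture relation, in which deviations from ideality enter through an excess activation free energy supplied by the local-composition (here, segment-based Wilson) model. As a recalled general form, stated as an assumption rather than as the paper's exact equations:

        \ln(\eta_m V_m) = \sum_i x_i \ln(\eta_i V_i) + \frac{g^{E*}}{RT}

    where \eta_m and V_m are the viscosity and molar volume of the mixture, \eta_i, V_i and x_i refer to the pure components, and g^{E*} is the excess activation free energy of flow, here computed from the segment-based Wilson expression.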

  17. Application of the IPCC model to a Brazilian landfill: First results

    International Nuclear Information System (INIS)

    Penteado, Roger; Cavalli, Massimo; Magnano, Enrico; Chiampo, Fulvia

    2012-01-01

    The Intergovernmental Panel on Climate Change provides a methodology to estimate the methane emissions from Municipal Solid Waste landfills, based on a First Order Decay (FOD) model that assumes biodegradation kinetics depending on the type of wastes. This model can be used to estimate both the national greenhouse gas emissions in industrialized countries and the reductions of these emissions in developing ones when the Clean Development Mechanism, as defined by the Kyoto Protocol, is implemented. In this paper, the FOD model has been used to evaluate the biogas flow rates emitted by a Brazilian landfill and the results have been compared to the extracted ones: the first results help to show the weight of key parameters and to support a correct use of the model. - Highlights: ► Landfill biogas is greenhouse gas and fuel at the same time. ► In developing countries its collection can implement Kyoto Protocol mechanisms. ► Biogas collection and exploitation become part of energy policy. ► Project economic balance is based on reliable estimates of generated quantities.
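
    The first-order decay at the heart of the IPCC method says that the decomposable mass deposited in year i releases methane at a rate proportional to what remains of it. A compact sketch of this bookkeeping, with a single waste category and illustrative default parameters rather than the values calibrated for the Brazilian site:

    import numpy as np

    def fod_methane(deposits, k=0.17, L0=100.0, year_max=30):
        """IPCC-style first-order decay bookkeeping. deposits[i] is waste
        landfilled in year i (arbitrary mass units); k [1/yr] is the decay
        constant and L0 the methane yield per unit waste -- both
        illustrative single-category values."""
        years = np.arange(year_max)
        q = np.zeros(year_max)
        for t_i, w in enumerate(deposits):
            age = years - t_i
            mask = age > 0                 # generation starts after deposit
            q[mask] += w * L0 * k * np.exp(-k * age[mask])
        return q                           # CH4 generation per year

    print(np.round(fod_methane([100, 120, 130, 140, 150]), 1))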

  18. Néron Models and Base Change

    DEFF Research Database (Denmark)

    Halle, Lars Halvard; Nicaise, Johannes

    Presenting the first systematic treatment of the behavior of Néron models under ramified base change, this book can be read as an introduction to various subtle invariants and constructions related to Néron models of semi-abelian varieties, motivated by concrete research problems and complemented...... on Néron component groups, Edixhoven’s filtration and the base change conductor of Chai and Yu, and we study these invariants using various techniques such as models of curves, sheaves on Grothendieck sites and non-archimedean uniformization. We then apply our results to the study of motivic zeta functions...

  19. Implications of the Abolition of Milk Quota System for Polish Agriculture – Simulation Results Based on the AG MEMOD Model

    Directory of Open Access Journals (Sweden)

    Mariusz Hamulczuk

    2009-09-01

    Full Text Available The objective of the study was to assess the economic effects of the dairy policy reform sanctioned by the CAP Health Check on the agricultural market in Poland. The paper presents a theoretical study of the production control program as well as a model-based quantitative analysis of the implications of the reform on the agricultural markets. The partial equilibrium model AGMEMOD was used for the simulation. The results obtained indicate that the expansion and subsequent elimination of the milk quota system lead to growth of milk production and consumption in Poland, which confirms the hypothesis derived from the theoretical study. As a consequence, growth of the production of most dairy products and a decrease of their prices is expected. As the growth of dairy consumption is smaller than the growth of milk production, an increase of self-sufficiency in the dairy market is predicted. The comparison of the scale of price adjustment resulting from the dairy reform to the market price changes observed recently leads to the conclusion that global market factors will probably be more important for the future development of milk production and prices in Poland than the milk quota abolition. Nevertheless, the reform constitutes a significant change in business conditions for producers and consumers of milk and dairy products. As a consequence, milk production will become more market based, as far as market prices, production costs and milk yields are concerned. Simulation results from the AGMEMOD model confirm the opinion brought by other authors that the abolition of milk quotas will lead to a decline in dairy farmer income. The main beneficiaries of the reform would be the consumers, who could take advantage of the decline in prices of dairy products.

  20. Identification of Anisotropic Criteria for Stratified Soil Based on Triaxial Tests Results

    Science.gov (United States)

    Tankiewicz, Matylda; Kawa, Marek

    2017-09-01

    The paper presents an identification methodology for anisotropic strength criteria based on triaxial test results. The considered material is varved clay - a sedimentary soil occurring in central Poland which is characterized by the so-called "layered microstructure". The strength characteristics were determined by standard triaxial tests. The results include the estimated peak strength obtained for a wide range of orientations and confining pressures. Two models were chosen as potentially adequate for the description of the tested material, namely the Pariseau criterion and its conjunction with the Jaeger weakness plane. Material constants were obtained by fitting the models to the experimental results. The identification procedure is based on the least squares method. The optimal values of parameters are searched for between specified bounds by sequentially decreasing the distance between points and reducing the length of the searched range. For both considered models the optimal parameters have been obtained. The comparison of theoretical and experimental results, as well as an assessment of the suitability of the selected criteria for the specified range of confining pressures, are presented.
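
    The parameter search described, least squares within specified bounds over a progressively narrowed range, resembles standard bounded nonlinear least squares. A generic sketch with scipy; the residual function below is a toy anisotropic strength expression and the bounds are placeholders, not the Pariseau criterion itself:

    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(6)

    # Toy anisotropic "criterion": peak strength varies with the bedding
    # orientation theta and the confining pressure s3.
    def model(params, theta, s3):
        c0, c1, c2, theta0 = params
        return c0 + c1 * s3 + c2 * np.cos(2.0 * (theta - theta0))

    theta = np.radians(rng.uniform(0, 90, 40))
    s3 = rng.uniform(50, 400, 40)
    true = np.array([200.0, 1.5, -60.0, 0.3])
    q_peak = model(true, theta, s3) + rng.normal(0, 5, 40)   # "measurements"

    res = least_squares(
        lambda p: model(p, theta, s3) - q_peak,              # residuals
        x0=[150.0, 1.0, -10.0, 0.0],
        bounds=([0.0, 0.0, -200.0, -np.pi], [500.0, 5.0, 200.0, np.pi]))
    print(np.round(res.x, 2))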

  1. Model-based testing for software safety

    NARCIS (Netherlands)

    Gurbuz, Havva Gulay; Tekinerdogan, Bedir

    2017-01-01

    Testing safety-critical systems is crucial since a failure or malfunction may result in death or serious injuries to people, equipment, or environment. An important challenge in testing is the derivation of test cases that can identify the potential faults. Model-based testing adopts models of a

  2. Automated extraction of knowledge for model-based diagnostics

    Science.gov (United States)

    Gonzalez, Avelino J.; Myler, Harley R.; Towhidnejad, Massood; Mckenzie, Frederic D.; Kladke, Robin R.

    1990-01-01

    The concept of accessing computer aided design (CAD) design databases and extracting a process model automatically is investigated as a possible source for the generation of knowledge bases for model-based reasoning systems. The resulting system, referred to as automated knowledge generation (AKG), uses an object-oriented programming structure and constraint techniques as well as internal database of component descriptions to generate a frame-based structure that describes the model. The procedure has been designed to be general enough to be easily coupled to CAD systems that feature a database capable of providing label and connectivity data from the drawn system. The AKG system is capable of defining knowledge bases in formats required by various model-based reasoning tools.

  3. Results of the Marine Ice Sheet Model Intercomparison Project, MISMIP

    Directory of Open Access Journals (Sweden)

    F. Pattyn

    2012-05-01

    Full Text Available Predictions of marine ice-sheet behaviour require models that are able to robustly simulate grounding line migration. We present results of an intercomparison exercise for marine ice-sheet models. Verification is effected by comparison with approximate analytical solutions for flux across the grounding line using simplified geometrical configurations (no lateral variations, no effects of lateral buttressing). Unique steady state grounding line positions exist for ice sheets on a downward sloping bed, while hysteresis occurs across an overdeepened bed, and stable steady state grounding line positions only occur on the downward-sloping sections. Models based on the shallow ice approximation, which does not resolve extensional stresses, do not reproduce the approximate analytical results unless appropriate parameterizations for ice flux are imposed at the grounding line. For extensional-stress resolving "shelfy stream" models, differences between model results were mainly due to the choice of spatial discretization. Moving grid methods were found to be the most accurate at capturing grounding line evolution, since they track the grounding line explicitly. Adaptive mesh refinement can further improve accuracy, including for fixed grid models that generally perform poorly at coarse resolution. Fixed grid models, with nested grid representations of the grounding line, are able to generate accurate steady state positions, but can be inaccurate over transients. Only one full-Stokes model was included in the intercomparison, and consequently the accuracy of shelfy stream models as approximations of full-Stokes models remains to be determined in detail, especially during transients.
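
    The approximate analytical solutions referred to above are commonly taken from Schoof's boundary-layer analysis. Recalled from the literature (Schoof, 2007) and stated here with that caveat, the steady grounding-line flux for a Weertman-type friction law is

        q_g = \left[ \frac{A\,(\rho_i g)^{n+1}\,(1 - \rho_i/\rho_w)^{n}}{4^{n} C} \right]^{\frac{1}{m+1}} h_g^{\frac{m+n+3}{m+1}}

    where h_g is the ice thickness at the grounding line, A and n are Glen's flow-law parameters, C and m are basal friction parameters, and \rho_i, \rho_w are the ice and water densities. Because q_g increases steeply with h_g, beds that deepen inland produce the hysteresis behaviour described above.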

  4. Quality prediction modeling for sintered ores based on mechanism models of sintering and extreme learning machine based error compensation

    Science.gov (United States)

    Tiebin, Wu; Yunlian, Liu; Xinjun, Li; Yi, Yu; Bin, Zhang

    2018-06-01

    Aiming at the difficulty of quality prediction for sintered ores, a hybrid prediction model is established based on mechanism models of sintering and time-weighted error compensation using the extreme learning machine (ELM). First, mechanism models of drum index, total iron, and alkalinity are constructed according to the chemical reaction mechanism and conservation of matter in the sintering process. As the process is simplified in the mechanism models, these models are not able to describe high nonlinearity; therefore, errors are inevitable. For this reason, the time-weighted ELM based error compensation model is established. Simulation results verify that the hybrid model has high accuracy and can meet the requirements for industrial applications.
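
    The ELM used for the error compensation step is one of the simplest trainable networks: the hidden-layer weights are random and fixed, and only the output layer is solved by least squares. A minimal generic implementation on synthetic data (the paper's time-weighted variant adds a sample-weighting scheme that is not reproduced here):

    import numpy as np

    rng = np.random.default_rng(7)

    def elm_fit(X, y, n_hidden=50):
        """Plain extreme learning machine: random fixed hidden layer,
        least-squares output layer."""
        W = rng.normal(size=(X.shape[1], n_hidden))
        b = rng.normal(size=n_hidden)
        H = np.tanh(X @ W + b)                    # random feature map
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)
        return W, b, beta

    def elm_predict(X, W, b, beta):
        return np.tanh(X @ W + b) @ beta

    X = rng.uniform(-1, 1, size=(300, 4))          # e.g. process variables
    y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2]        # synthetic "error" target
    W, b, beta = elm_fit(X, y)
    rms = np.sqrt(np.mean((elm_predict(X, W, b, beta) - y) ** 2))
    print("train rms:", rms)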

  5. Modeling the ionosphere-thermosphere response to a geomagnetic storm using physics-based magnetospheric energy input: OpenGGCM-CTIM results

    Directory of Open Access Journals (Sweden)

    Connor Hyunju Kim

    2016-01-01

    Full Text Available The magnetosphere is a major source of energy for the Earth's ionosphere and thermosphere (IT) system. Current IT models drive the upper atmosphere using empirically calculated magnetospheric energy input. Thus, they do not sufficiently capture the storm-time dynamics, particularly at high latitudes. To improve the prediction capability of IT models, a physics-based magnetospheric input is necessary. Here, we use the Open Global General Circulation Model (OpenGGCM) coupled with the Coupled Thermosphere Ionosphere Model (CTIM). OpenGGCM calculates a three-dimensional global magnetosphere and a two-dimensional high-latitude ionosphere by solving resistive magnetohydrodynamic (MHD) equations with solar wind input. CTIM calculates a global thermosphere and a high-latitude ionosphere in three dimensions using realistic magnetospheric inputs from the OpenGGCM. We investigate whether the coupled model improves the storm-time IT responses by simulating a geomagnetic storm that is preceded by a strong solar wind pressure front on August 24, 2005. We compare the OpenGGCM-CTIM results with low-earth-orbit satellite observations and with the model results of Coupled Thermosphere-Ionosphere-Plasmasphere electrodynamics (CTIPe). CTIPe is an up-to-date version of CTIM that incorporates more IT dynamics such as a low-latitude ionosphere and a plasmasphere, but uses empirical magnetospheric input. OpenGGCM-CTIM reproduces localized neutral density peaks at ~ 400 km altitude in the high-latitude dayside regions in agreement with in situ observations during the pressure shock and the early phase of the storm. Although CTIPe is in some sense a much superior model than CTIM, it misses these localized enhancements. Unlike the CTIPe empirical input models, OpenGGCM-CTIM more faithfully produces localized increases of both auroral precipitation and ionospheric electric fields near the high-latitude dayside region after the pressure shock and after the storm onset

  6. Research on Turbofan Engine Model above Idle State Based on NARX Modeling Approach

    Science.gov (United States)

    Yu, Bing; Shu, Wenjun

    2017-03-01

    The nonlinear modeling of a turbofan engine above idle state based on NARX is studied. First, data sets for the JT9D engine are obtained via simulation of an existing model. Then, a nonlinear modeling scheme based on NARX is proposed and several models with different parameters are built from these data sets. Finally, simulations are carried out to verify the steady-state accuracy and dynamic performance of the models; the results show that the NARX model reflects the dynamic characteristics of the turbofan engine with high accuracy.
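
    A hedged sketch of the NARX idea on synthetic data (not the JT9D engine or the paper's network): the output at each step is regressed on lagged outputs and lagged inputs through a nonlinear map, here a small scikit-learn MLP; the lag orders and the surrogate dynamics below are illustrative assumptions.

```python
# Illustrative NARX sketch: nonlinear autoregression with exogenous input.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

u = rng.uniform(0.0, 1.0, 500)                 # input, e.g. a fuel-flow command
y = np.zeros(500)
for k in range(2, 500):                        # surrogate "true" engine dynamics
    y[k] = 0.6 * y[k - 1] - 0.1 * y[k - 2] + 0.5 * u[k - 1] ** 2

def make_regressors(u, y, na=2, nb=2):
    """Build the NARX regressor matrix [y(k-1)..y(k-na), u(k-1)..u(k-nb)]."""
    start = max(na, nb)
    X = [np.concatenate([y[k - na:k][::-1], u[k - nb:k][::-1]])
         for k in range(start, len(y))]
    return np.array(X), y[start:]

X, target = make_regressors(u, y)
narx = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
narx.fit(X[:400], target[:400])                # identify the NARX map on training data

rmse = np.sqrt(np.mean((narx.predict(X[400:]) - target[400:]) ** 2))
print("one-step-ahead validation RMSE:", rmse)
```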

  7. GENERAL APPROACH TO MODELING NONLINEAR AMPLITUDE AND FREQUENCY DEPENDENT HYSTERESIS EFFECTS BASED ON EXPERIMENTAL RESULTS

    Directory of Open Access Journals (Sweden)

    Christopher Heine

    2014-08-01

    A detailed description of the properties of rubber parts is gaining importance in current multi-body simulation models. One application example is a multi-body simulation of washing machine movement. Inside the washing machine there are different force transmission elements which consist completely or partly of rubber. Rubber parts, and elastomers in general, usually have amplitude-dependent and frequency-dependent force transmission properties. Rheological models are used to describe these properties. A method for characterizing the amplitude and frequency dependence of such a rheological model is presented in this paper. Within this method, the underlying rheological model can be reduced or expanded in order to capture various non-linear effects. An original contribution is the automated parameter identification, which is fully implemented in Matlab. The identified rheological models are intended for subsequent implementation in a multi-body model, which allows a significant enhancement of the overall model quality.

  8. A subchannel based annular flow dryout model

    International Nuclear Information System (INIS)

    Hammouda, Najmeddine; Cheng, Zhong; Rao, Yanfei F.

    2016-01-01

    Highlights: • A modified annular flow dryout model for subchannel thermalhydraulic analysis. • Implementation of the model in the Canadian subchannel code ASSERT-PV. • Assessment of the model against tube CHF experiments. • Assessment of the model against CANDU-bundle CHF experiments. - Abstract: This paper assesses a popular tube-based mechanistic critical heat flux model (Hewitt and Govan’s annular flow model, based on that of Whalley et al.), and modifies and implements it for bundle geometries. It describes the results of ASSERT subchannel code predictions using the modified model, as applied to a single tube and to the 28-element, 37-element and 43-element (CANFLEX) CANDU bundles. A quantitative comparison between the model predictions and experimental data indicates good agreement over a wide range of flow conditions. The comparison yields an overall average error of −0.15% and an overall root-mean-square error of 5.46% against tube data representing annular film dryout type critical heat flux, and an overall average error of −0.9% and an overall RMS error of 9.9% against Stern Laboratories’ CANDU-bundle data.

  9. A pedagogical model for simulation-based learning in healthcare

    Directory of Open Access Journals (Sweden)

    Tuulikki Keskitalo

    2015-11-01

    The aim of this study was to design a pedagogical model for a simulation-based learning environment (SBLE) in healthcare. Currently, simulation and virtual reality are a major focus in healthcare education. However, when and how these learning environments should be applied is not well known. The present study tries to fill that gap. We pose the following research question: What kind of pedagogical model supports and facilitates students’ meaningful learning in SBLEs? The study used design-based research (DBR) and case study approaches. We report the results from our second case study and how the pedagogical model was developed based on the lessons learned. The study involved nine facilitators and 25 students. Data were collected and analysed using mixed methods. The main result of this study is the refined pedagogical model, which is based on the socio-cultural theory of learning, the characteristics of meaningful learning, and previous pedagogical models. The model provides a more holistic and meaningful approach to teaching and learning in SBLEs. However, the model requires evidence and further development.

  10. Agent-Based Modelling of Agricultural Water Abstraction in Response to Climate, Policy, and Demand Changes: Results from East Anglia, UK

    Science.gov (United States)

    Swinscoe, T. H. A.; Knoeri, C.; Fleskens, L.; Barrett, J.

    2014-12-01

    Freshwater is a vital natural resource for multiple needs, such as drinking water for the public, industrial processes, hydropower for energy companies, and irrigation for agriculture. In the UK, crop production is largest in East Anglia, yet the region is also the driest, with average annual rainfall between 560 and 720 mm (1971 to 2000). Many water catchments in East Anglia are reported as over-licensed or over-abstracted. Freshwater available for agricultural irrigation abstraction in this region is therefore becoming both increasingly scarce, due to competing demands, and increasingly variable and uncertain, due to climate and policy changes. It is vital for water users and policy makers to understand how these factors will affect individual abstractors and water resource management at the system level. We present the first results of an agent-based model that captures the complexity of this system as individual abstractors interact, learn and adapt to these internal and external changes. The purpose of this model is to simulate what patterns of water resource management emerge at the system level from local interactions, adaptations and behaviours, and what policies lead to a sustainable water resource management system. The model is based on an irrigation abstractor typology derived from a survey in the study area, which captures individual behavioural intentions under a range of water availability scenarios, in addition to farm attributes and demographics. Regional climate change scenarios, current and new abstraction licence reforms by the UK regulator (such as water trading and water shares), and estimated demand increases from other sectors were used as additional input data. Findings from the integrated model provide new understanding of the patterns of water resource management likely to emerge at the system level.
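
    A minimal agent-based sketch of the abstraction problem under stated assumptions: the agent attributes, decision rule and pro-rata curtailment below are hypothetical illustrations, not the surveyed typology or the licensing reforms used in the study.

```python
# Toy ABM: irrigator agents request water from a shared catchment under varying
# seasonal availability; a regulator curtails requests when capacity is exceeded.
import random

random.seed(42)

class Abstractor:
    def __init__(self, licence, risk_aversion):
        self.licence = licence                # licensed annual volume (ML)
        self.risk_aversion = risk_aversion    # 0 = bold, 1 = cautious
        self.abstracted = 0.0

    def decide(self, availability):
        """Request less water in dry years, scaled by the agent's risk attitude."""
        return self.licence * (1 - self.risk_aversion * (1 - availability))

def simulate(years=10, n_agents=20, catchment_capacity=1500.0):
    agents = [Abstractor(licence=random.uniform(50, 150),
                         risk_aversion=random.random()) for _ in range(n_agents)]
    for year in range(years):
        availability = random.uniform(0.4, 1.0)          # climate-driven scarcity
        requests = [a.decide(availability) for a in agents]
        cap = catchment_capacity * availability
        scale = min(1.0, cap / sum(requests))             # regulator curtails pro rata
        for agent, request in zip(agents, requests):
            agent.abstracted += request * scale
        print(f"year {year}: availability={availability:.2f}, "
              f"total abstraction={sum(requests) * scale:.0f} ML")

simulate()
```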

  11. On-line monitoring and modelling based process control of high rate nitrification - lab scale experimental results

    Energy Technology Data Exchange (ETDEWEB)

    Pirsing, A. [Technische Univ. Berlin (Germany). Inst. fuer Verfahrenstechnik; Wiesmann, U. [Technische Univ. Berlin (Germany). Inst. fuer Verfahrenstechnik; Kelterbach, G. [Technische Univ. Berlin (Germany). Inst. fuer Mess- und Regelungstechnik; Schaffranietz, U. [Technische Univ. Berlin (Germany). Inst. fuer Mess- und Regelungstechnik; Roeck, H. [Technische Univ. Berlin (Germany). Inst. fuer Mess- und Regelungstechnik; Eichner, B. [Technische Univ. Berlin (Germany). Inst. fuer Anorganische und Analytische Chemie; Szukal, S. [Technische Univ. Berlin (Germany). Inst. fuer Anorganische und Analytische Chemie; Schulze, G. [Technische Univ. Berlin (Germany). Inst. fuer Anorganische und Analytische Chemie

    1996-09-01

    This paper presents a new concept for the control of nitrification in highly polluted waste waters. The approach is based on mathematical modelling: a model driven by gas measurements is used to determine the substrate degradation rates of the microorganisms involved, and a fuzzy controller maximises capacity utilisation. Experiments carried out in a lab-scale reactor demonstrate that, even with highly varying ammonia concentrations in the influent, the nitrogen concentrations in the effluent can be kept within legal limits. (orig.). With 11 figs.

  12. Individual-based modeling of fish: Linking to physical models and water quality.

    Energy Technology Data Exchange (ETDEWEB)

    Rose, K.A.

    1997-08-01

    The individual-based modeling approach for simulating fish population and community dynamics is gaining popularity. Individual-based modeling has been used in many other fields, such as forest succession and astronomy. The popularity of the individual-based approach is partly a result of the limited success of the more aggregate modeling approaches traditionally used for simulating fish population and community dynamics. The recent recognition that it is often the atypical individual that survives has also fostered interest in the individual-based approach. Two general types of individual-based models are distribution and configuration models. Distribution models follow the probability distributions of individual characteristics, such as length and age. Configuration models explicitly simulate each individual, the sum over individuals being the population. DeAngelis et al. (1992) showed that, when distribution and configuration models were formulated from the same common pool of information, both approaches generated similar predictions. The distribution approach was more compact and general, while the configuration approach was more flexible. Simple biological changes, such as making growth rate dependent on previous days' growth rates, were easy to implement in the configuration version but prevented simple analytical solution of the distribution version.
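
    A toy configuration-type sketch (illustrative parameters only, not the models discussed above): each fish is simulated explicitly and its daily growth depends on its own previous growth, the kind of individual-level rule that is straightforward in a configuration model but hard to treat analytically in a distribution model.

```python
# Toy configuration individual-based model: explicit individuals with
# autocorrelated growth and size-dependent survival.
import random

random.seed(7)

class Fish:
    def __init__(self):
        self.length = 10.0      # mm
        self.last_growth = 0.5  # mm/day

    def step(self):
        # growth depends on the previous day's growth (autocorrelated)
        growth = max(0.0, random.gauss(0.7 * self.last_growth + 0.15, 0.1))
        self.length += growth
        self.last_growth = growth
        # simple size-dependent survival: larger fish survive slightly better
        return random.random() < min(1.0, 0.95 + 0.0005 * self.length)

population = [Fish() for _ in range(1000)]
for day in range(30):
    population = [f for f in population if f.step()]

print("survivors:", len(population))
print("mean length:", sum(f.length for f in population) / len(population))
```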

  13. Springer handbook of model-based science

    CERN Document Server

    Bertolotti, Tommaso

    2017-01-01

    The handbook offers the first comprehensive reference guide to the interdisciplinary field of model-based reasoning. It highlights the role of models as mediators between theory and experimentation, and as educational devices, as well as their relevance in testing hypotheses and explanatory functions. The Springer Handbook merges philosophical, cognitive and epistemological perspectives on models with the more practical needs related to the application of this tool across various disciplines and practices. The result is a unique, reliable source of information that guides readers toward an understanding of different aspects of model-based science, such as the theoretical and cognitive nature of models, as well as their practical and logical aspects. The inferential role of models in hypothetical reasoning, abduction and creativity once they are constructed, adopted, and manipulated for different scientific and technological purposes is also discussed. Written by a group of internationally renowned experts in ...

  14. Results of the naive quark model

    International Nuclear Information System (INIS)

    Gignoux, C.

    1987-10-01

    The hypotheses and limits of the naive quark model are recalled, and results on nucleon-nucleon scattering and possible multiquark states are presented. The results show that Roper resonances do not emerge in this model. For hadron-hadron interactions, the model predicts Van der Waals forces that the resonance group method does not allow. Known many-body forces are not found in the model. The lack of mesons shows up in the absence of a long-range force. However, the model does have strengths: it is free from center-of-mass spuriousness, it allows a democratic handling of flavor, it has few parameters, and its predictions are very good [fr]

  15. Some important results from the air pollution distribution model STACKS (1988-1992)

    International Nuclear Information System (INIS)

    Erbrink, J.J.

    1993-01-01

    Attention is paid to the results of a study on the distribution of air pollutants from the tall chimney stacks of electric power plants. An important product of the study is the integrated distribution model STACKS (Short Term Air-pollutant Concentrations Kema modelling System). The improvements and extensions of STACKS are described in relation to the National Model, which has been used to estimate the environmental effects of individual chimney stacks. The National Model shows unacceptable variations for tall pollutant sources. Based on the results of STACKS, a revision of the National Model has been taken into consideration. With the revised National Model, a more realistic estimation of the environmental effects of electric power plants can be carried out.

  16. Modeling and knowledge acquisition processes using case-based inference

    Directory of Open Access Journals (Sweden)

    Ameneh Khadivar

    2017-03-01

    The acquisition and presentation of organizational process knowledge has been considered by many knowledge management researchers. In this research, a model for process knowledge acquisition and presentation is presented using a case-based reasoning approach. The validity of the model was evaluated by an expert panel. A software system was then developed based on the model and implemented at Eghtesad Novin Bank of Iran. Following the stages of the model, the knowledge-intensive processes were first identified, and the process knowledge was then stored in a knowledge base in a problem/solution/consequence format. Knowledge retrieval was based on nearest-neighbour similarity. To validate the implemented system, its outputs were compared with the decisions made by the experts of the process.

  17. Comparisons of complex network based models and real train flow model to analyze Chinese railway vulnerability

    International Nuclear Information System (INIS)

    Ouyang, Min; Zhao, Lijing; Hong, Liu; Pan, Zhezhe

    2014-01-01

    Recently, numerous studies have applied complex network based models to study the performance and vulnerability of infrastructure systems under various types of attacks and hazards, but how effectively these models capture real performance responses is still a question worthy of research. Taking the Chinese railway system as an example, this paper selects three typical complex network based models, including a purely topological model (PTM), a purely shortest path model (PSPM), and a weight (link length) based shortest path model (WBSPM), to analyze railway accessibility and flow-based vulnerability, and compares their results with those from the real train flow model (RTFM). The results show that the WBSPM can produce train routes with 83% of stations and 77% of railway links identical to the real routes and approaches the RTFM best for railway vulnerability under both single and multiple component failures. The correlation coefficient for accessibility vulnerability between WBSPM and RTFM under single station failures is 0.96, while it is 0.92 for flow-based vulnerability; under multiple station failures, where each station has the same failure probability fp, the WBSPM produces almost identical vulnerability results to those from the RTFM under almost all failure scenarios when fp is larger than 0.62 for accessibility vulnerability and 0.86 for flow-based vulnerability.
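
    A hedged sketch of the shortest-path style of vulnerability analysis on a toy weighted network (illustrative stations and link lengths, not the Chinese railway data); it uses networkx and a simple efficiency-style accessibility measure rather than the paper's exact metrics.

```python
# Toy weight-based shortest-path vulnerability analysis under single-station failures.
import itertools
import networkx as nx

# Toy rail network: nodes are stations, edge weights are link lengths in km
G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 120), ("B", "C", 80), ("C", "D", 150),
    ("A", "E", 200), ("E", "D", 90), ("B", "E", 60),
])

def efficiency(graph, nodes):
    """Mean inverse weighted shortest-path length over the given station pairs;
    pairs that are unreachable (or missing from the graph) contribute zero."""
    total, count = 0.0, 0
    for u, v in itertools.combinations(nodes, 2):
        count += 1
        if u in graph and v in graph and nx.has_path(graph, u, v):
            total += 1.0 / nx.shortest_path_length(graph, u, v, weight="weight")
    return total / count

stations = sorted(G.nodes)
baseline = efficiency(G, stations)
for station in stations:
    damaged = G.copy()
    damaged.remove_node(station)                      # single-station failure
    loss = (baseline - efficiency(damaged, stations)) / baseline
    print(f"remove {station}: relative accessibility loss = {loss:.2f}")
```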

  18. Evaluating Emulation-based Models of Distributed Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Stephen T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Cyber Initiatives; Gabert, Kasimir G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Cyber Initiatives; Tarman, Thomas D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Emulytics Initiatives

    2017-08-01

    Emulation-based models of distributed computing systems are collections of virtual machines, virtual networks, and other emulation components configured to stand in for operational systems when performing experimental science, training, analysis of design alternatives, test and evaluation, or idea generation. As with any tool, we should carefully evaluate whether our uses of emulation-based models are appropriate and justified; otherwise, we run the risk of using a model incorrectly and creating meaningless results. The various uses of emulation-based models each have their own goals and deserve thoughtful evaluation. In this paper, we enumerate some of these uses and describe approaches that one can take to build an evidence-based case that a use of an emulation-based model is credible. Predictive uses of emulation-based models, where we expect a model to tell us something true about the real world, set the bar especially high, and the principal evaluation method, called validation, is commensurately rigorous. We spend the majority of our time describing and demonstrating the validation of a simple predictive model using a well-established methodology inherited from decades of development in the computational science and engineering community.

  19. Spreading of intolerance under economic stress: Results from a reputation-based model

    Science.gov (United States)

    Martinez-Vaquero, Luis A.; Cuesta, José A.

    2014-08-01

    When a population is engaged in successive prisoner's dilemmas, indirect reciprocity through reputation fosters cooperation through the emergence of moral and action rules. A simplified model has recently been proposed where individuals choose between helping others or not and are judged good or bad for it by the rest of the population. The reputation so acquired will condition future actions. In this model, eight strategies (referred to as "leading eight") enforce a high level of cooperation, generate high payoffs, and are therefore resistant to invasions by other strategies. Here we show that, by assigning each individual one of two labels that peers can distinguish (e.g., political ideas, religion, and skin color) and allowing moral and action rules to depend on the label, intolerant behaviors can emerge within minorities under sufficient economic stress. We analyze the sets of conditions where this can happen and also discuss the circumstances under which tolerance can be restored. Our results agree with empirical observations that correlate intolerance and economic stress and predict a correlation between the degree of tolerance of a population and its composition and ethical stance.

  20. Using Evidence Based Practice in LIS Education: Results of a Test of a Communities of Practice Model

    Directory of Open Access Journals (Sweden)

    Joyce Yukawa

    2010-03-01

    Objective ‐ This study investigated the use of a communities of practice (CoP) model for blended learning in library and information science (LIS) graduate courses. The purposes were to: (1) test the model’s efficacy in supporting student growth related to core LIS concepts, practices, professional identity, and leadership skills, and (2) develop methods for formative and summative assessment using the model. Methods ‐ Using design‐based research principles to guide the formative and summative assessments, pre‐, mid‐, and post‐course questionnaires were constructed to test the model and administered to students in three LIS courses taught by the author. Participation was voluntary and anonymous. A total of 34 students completed the three courses; the response rate for the questionnaires ranged from 47% to 95%. The pre‐course questionnaire addressed attitudes toward technology and the use of technology for learning. The mid‐course questionnaire addressed strengths and weaknesses of the course and suggestions for improvement. The post‐course questionnaire addressed what students valued about their learning and any changes in attitude toward technology for learning. Data were analyzed on three levels. Micro‐level analysis addressed technological factors related to usability and participant skills and attitudes. Meso‐level analysis addressed social and pedagogical factors influencing community learning. Macro‐level analysis addressed CoP learning outcomes, namely, knowledge of core concepts and practices, and the development of professional identity and leadership skills. Results ‐ The students can be characterized as adult learners who were neither early nor late adopters of technology. At the micro‐level, responses indicate that the online tools met high standards of usability and effectively supported online communication and learning. Moreover, the increase in positive attitudes toward the use of technology for learning at

  1. Combustion synthesis of TiB2-based cermets: modeling and experimental results

    International Nuclear Information System (INIS)

    Martinez Pacheco, M.; Bouma, R.H.B.; Katgerman, L.

    2008-01-01

    TiB2-based cermets are prepared by combustion synthesis followed by a pressing stage in a granulate medium. Products obtained by combustion synthesis are characterized by a large remaining porosity (typically 50%). To produce dense cermets, a subsequent densification step is performed after the combustion process and while the reacted material is still hot. To design the process, numerical simulations are carried out and compared to experimental results. In addition, physical and electrical properties of the products related to electrical contact applications are evaluated. (orig.)

  2. Fuzzy rule-based model for hydropower reservoirs operation

    Energy Technology Data Exchange (ETDEWEB)

    Moeini, R.; Afshar, A.; Afshar, M.H. [School of Civil Engineering, Iran University of Science and Technology, Tehran (Iran, Islamic Republic of)

    2011-02-15

    Real-time hydropower reservoir operation is a continuous decision-making process of determining the water level of a reservoir or the volume of water released from it. Hydropower operation is usually based on operating policies and rules defined and decided upon in strategic planning. This paper presents a fuzzy rule-based model for the operation of hydropower reservoirs. The proposed fuzzy rule-based model provides a set of suitable operating rules for release from the reservoir based on ideal or target storage levels. The model operates on an 'if-then' principle, in which the 'if' is a vector of fuzzy premises and the 'then' is a vector of fuzzy consequences. In this paper, reservoir storage, inflow, and period are used as premises and the release as the consequence. The steps involved in the development of the model include construction of membership functions for the inflow, storage and release, formulation of fuzzy rules, implication, aggregation and defuzzification. The knowledge base required for formulating the fuzzy rules is obtained from a stochastic dynamic programming (SDP) model with a steady state policy. The proposed model is applied to the hydropower operation of the Dez reservoir in Iran and the results are presented and compared with those of the SDP model. The results indicate the ability of the method to solve hydropower reservoir operation problems. (author)
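
    A hedged, minimal illustration of the if-then machinery (triangular membership functions, min implication, weighted-average defuzzification over singleton releases); the sets, rules and numbers below are invented for illustration and are not the paper's calibrated rule base, which also uses the time period as a premise.

```python
# Tiny fuzzy rule-based release policy: "if storage is X and inflow is Y then release is Z".
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Fuzzy sets (illustrative numbers, e.g. storage and inflow in Mm^3)
storage_sets = {"low": (0, 0, 50), "high": (30, 100, 100)}
inflow_sets  = {"low": (0, 0, 20), "high": (10, 40, 40)}
release_singletons = {"small": 5.0, "medium": 15.0, "large": 30.0}

# Rule base: (storage label, inflow label) -> release label
rules = {
    ("low",  "low"):  "small",
    ("low",  "high"): "medium",
    ("high", "low"):  "medium",
    ("high", "high"): "large",
}

def release(storage, inflow):
    num, den = 0.0, 0.0
    for (s_lab, i_lab), r_lab in rules.items():
        # firing strength: min of the two premise memberships
        w = min(tri(storage, *storage_sets[s_lab]), tri(inflow, *inflow_sets[i_lab]))
        num += w * release_singletons[r_lab]
        den += w
    return num / den if den > 0 else 0.0

print(release(storage=20, inflow=5))    # mostly "low/low"  -> small release
print(release(storage=80, inflow=35))   # mostly "high/high" -> large release
```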

  3. Multivariate statistical modelling based on generalized linear models

    CERN Document Server

    Fahrmeir, Ludwig

    1994-01-01

    This book is concerned with the use of generalized linear models for univariate and multivariate regression analysis. Its emphasis is to provide a detailed introductory survey of the subject based on the analysis of real data drawn from a variety of subjects including the biological sciences, economics, and the social sciences. Where possible, technical details and proofs are deferred to an appendix in order to provide an accessible account for non-experts. Topics covered include: models for multi-categorical responses, model checking, time series and longitudinal data, random effects models, and state-space models. Throughout, the authors have taken great pains to discuss the underlying theoretical ideas in ways that relate well to the data at hand. As a result, numerous researchers whose work relies on the use of these models will find this an invaluable account to have on their desks. "The basic aim of the authors is to bring together and review a large part of recent advances in statistical modelling of m...

  4. Elastoplastic cup model for cement-based materials

    Directory of Open Access Journals (Sweden)

    Yan Zhang

    2010-03-01

    Based on experimental data obtained from triaxial tests and a hydrostatic test, a cup model was formulated. Two plastic mechanisms, deviatoric shearing and pore collapse, are taken into account. The model also considers the influence of confining pressure. In this paper, the calibration of the model is detailed, and numerical simulations of the main mechanical behavior of cement paste over a large range of stress are described, showing good agreement with experimental results. The case study shows that this cup model is widely applicable to cement-based materials and other quasi-brittle, high-porosity materials in complex stress states.

  5. Flying Training Capacity Model: Initial Results

    National Research Council Canada - National Science Library

    Lynch, Susan

    2005-01-01

    OBJECTIVE: (1) Determine the flying training capacity for 6 bases: Sheppard AFB, Randolph AFB, Moody AFB, Columbus AFB, Laughlin AFB, and Vance AFB. (2) Develop a versatile flying training capacity simulation model for AETC...

  6. Dependability modeling and assessment in UML-based software development.

    Science.gov (United States)

    Bernardi, Simona; Merseguer, José; Petriu, Dorina C

    2012-01-01

    Assessment of software nonfunctional properties (NFP) is an important problem in software development. In the context of model-driven development, an emerging approach for the analysis of different NFPs consists of the following steps: (a) to extend the software models with annotations describing the NFP of interest; (b) to transform automatically the annotated software model to the formalism chosen for NFP analysis; (c) to analyze the formal model using existing solvers; (d) to assess the software based on the results and give feedback to designers. Such a modeling→analysis→assessment approach can be applied to any software modeling language, be it general purpose or domain specific. In this paper, we focus on UML-based development and on the dependability NFP, which encompasses reliability, availability, safety, integrity, and maintainability. The paper presents the profile used to extend UML with dependability information, the model transformation to generate a DSPN formal model, and the assessment of the system properties based on the DSPN results.

  7. Cluster Based Text Classification Model

    DEFF Research Database (Denmark)

    Nizamani, Sarwat; Memon, Nasrullah; Wiil, Uffe Kock

    2011-01-01

    We propose a cluster based classification model for suspicious email detection and other text classification tasks. The text classification tasks comprise many training examples that require a complex classification model. Using clusters for classification makes the model simpler and increases the accuracy at the same time. The test example is classified using a simpler and smaller model. The training examples in a particular cluster share a common vocabulary. At the time of clustering, we do not take into account the labels of the training examples. After the clusters have been created, the classifier is trained on each cluster, having reduced dimensionality and fewer examples. The experimental results show that the proposed model outperforms the existing classification models for the task of suspicious email detection and topic categorization on the Reuters-21578 and 20 Newsgroups datasets.
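
    A sketch of the cluster-then-classify idea under stated assumptions (scikit-learn, the 20 Newsgroups corpus as a stand-in, k-means with an arbitrary k, logistic regression inside each cluster); the paper's feature set, cluster count and classifiers may differ.

```python
# Cluster the training documents without labels, then train a small classifier per cluster.
import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

cats = ["sci.space", "rec.autos", "talk.politics.misc"]
train = fetch_20newsgroups(subset="train", categories=cats)
test = fetch_20newsgroups(subset="test", categories=cats)

vec = TfidfVectorizer(max_features=5000, stop_words="english")
Xtr, Xte = vec.fit_transform(train.data), vec.transform(test.data)
ytr = np.array(train.target)

k = 5
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(Xtr)  # labels are not used here

cluster_clf = {}
for c in range(k):
    idx = np.where(km.labels_ == c)[0]
    labels = ytr[idx]
    if len(np.unique(labels)) == 1:
        cluster_clf[c] = ("const", labels[0])     # pure cluster: always predict its class
    else:
        cluster_clf[c] = ("model", LogisticRegression(max_iter=1000).fit(Xtr[idx], labels))

def predict(x, cluster):
    kind, m = cluster_clf[cluster]
    return m if kind == "const" else m.predict(x)[0]

pred = np.array([predict(Xte[i], c) for i, c in enumerate(km.predict(Xte))])
print("test accuracy:", round(float((pred == test.target).mean()), 3))
```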

  8. Polyphonic Piano Transcription with a Note-Based Music Language Model

    Directory of Open Access Journals (Sweden)

    Qi Wang

    2018-03-01

    This paper proposes a note-based music language model (MLM) for improving note-level polyphonic piano transcription. The MLM is based on the recurrent structure, which could model the temporal correlations between notes in music sequences. To combine the outputs of the note-based MLM and acoustic model directly, an integrated architecture is adopted in this paper. We also propose an inference algorithm, in which the note-based MLM is used to predict notes at the blank onsets in the thresholding transcription results. The experimental results show that the proposed inference algorithm improves the performance of note-level transcription. We also observe that the combination of the restricted Boltzmann machine (RBM) and recurrent structure outperforms a single recurrent neural network (RNN) or long short-term memory network (LSTM) in modeling the high-dimensional note sequences. Among all the MLMs, LSTM-RBM helps the system yield the best results on all evaluation metrics regardless of the performance of acoustic models.

  9. Physics-based models of the plasmasphere

    Energy Technology Data Exchange (ETDEWEB)

    Jordanova, Vania K [Los Alamos National Laboratory]; Pierrard, Viviane [BELGIUM]; Goldstein, Jerry [SWRI]; André, Nicolas [ESTEC/ESA]; Kotova, Galina A [SRI, RUSSIA]; Lemaire, Joseph F [BELGIUM]; Liemohn, Mike W [U OF MICHIGAN]; Matsui, H [UNIV OF NEW HAMPSHIRE]

    2008-01-01

    We describe recent progress in physics-based models of the plasmasphere using the fluid and the kinetic approaches. Global modeling of the dynamics and influence of the plasmasphere is presented. Results from global plasmasphere simulations are used to understand and quantify (i) the electric potential pattern and its evolution during geomagnetic storms, and (ii) the influence of the plasmasphere on the excitation of electromagnetic ion cyclotron (EMIC) waves and the precipitation of energetic ions in the inner magnetosphere. The interactions of the plasmasphere with the ionosphere and the other regions of the magnetosphere are pointed out. We show the results of simulations for the formation of the plasmapause and discuss the influence of plasmaspheric wind and of ultra low frequency (ULF) waves on the transport of plasmaspheric material. Theoretical formulations used to model the electric field and plasma distribution in the plasmasphere are given. Model predictions are compared to recent CLUSTER and IMAGE observations, as well as to results of earlier models and satellite observations.

  10. Structuring Qualitative Data for Agent-Based Modelling

    NARCIS (Netherlands)

    Ghorbani, Amineh; Dijkema, Gerard P.J.; Schrauwen, Noortje

    2015-01-01

    Using ethnography to build agent-based models may result in more empirically grounded simulations. Our study on innovation practice and culture in the Westland horticulture sector served to explore what information and data from ethnographic analysis could be used in models and how. MAIA, a

  11. Health effects models for nuclear power plant accident consequence analysis. Modification of models resulting from addition of effects of exposure to alpha-emitting radionuclides: Revision 1, Part 2, Scientific bases for health effects models, Addendum 2

    Energy Technology Data Exchange (ETDEWEB)

    Abrahamson, S. [Wisconsin Univ., Madison, WI (United States); Bender, M.A. [Brookhaven National Lab., Upton, NY (United States); Boecker, B.B.; Scott, B.R. [Lovelace Biomedical and Environmental Research Inst., Albuquerque, NM (United States). Inhalation Toxicology Research Inst.; Gilbert, E.S. [Pacific Northwest Lab., Richland, WA (United States)

    1993-05-01

    The Nuclear Regulatory Commission (NRC) has sponsored several studies to identify and quantify, through the use of models, the potential health effects of accidental releases of radionuclides from nuclear power plants. The Reactor Safety Study provided the basis for most of the earlier estimates related to these health effects. Subsequent efforts by NRC-supported groups resulted in improved health effects models that were published in the report entitled "Health Effects Models for Nuclear Power Plant Consequence Analysis", NUREG/CR-4214, 1985 and revised further in the 1989 report NUREG/CR-4214, Rev. 1, Part 2. The health effects models presented in the 1989 NUREG/CR-4214 report were developed for exposure to low-linear energy transfer (LET) (beta and gamma) radiation based on the best scientific information available at that time. Since the 1989 report was published, two addenda to that report have been prepared to (1) incorporate other scientific information related to low-LET health effects models and (2) extend the models to consider the possible health consequences of the addition of alpha-emitting radionuclides to the exposure source term. The first addendum report, entitled "Health Effects Models for Nuclear Power Plant Accident Consequence Analysis, Modifications of Models Resulting from Recent Reports on Health Effects of Ionizing Radiation, Low LET Radiation, Part 2: Scientific Bases for Health Effects Models," was published in 1991 as NUREG/CR-4214, Rev. 1, Part 2, Addendum 1. This second addendum addresses the possibility that some fraction of the accident source term from an operating nuclear power plant comprises alpha-emitting radionuclides. Consideration of chronic high-LET exposure from alpha radiation as well as acute and chronic exposure to low-LET beta and gamma radiations is a reasonable extension of the health effects model.

  12. A receptor model for urban aerosols based on oblique factor analysis

    DEFF Research Database (Denmark)

    Keiding, Kristian; Sørensen, Morten S.; Pind, Niels

    1987-01-01

    A procedure is outlined for the construction of receptor models of urban aerosols, based on factor analysis. The advantage of the procedure is that the covariation of source impacts is included in the construction of the models. The results are compared with results obtained by other receptor-modelling procedures. It was found that procedures based on correlating sources were physically sound as well as in mutual agreement. Procedures based on non-correlating sources were found to generate physically obscure models.

  13. New global ICT-based business models

    DEFF Research Database (Denmark)

    The New Global Business Model (NEWGIBM) book describes the background, theory references, case studies, results and learning imparted by the NEWGIBM project, which is supported by ICT, to a research group during the period from 2005 to 2011. The book is a result of the efforts and the collaborative ... The NEWGIBM Cases Show?; The Strategy Concept in Light of the Increased Importance of Innovative Business Models; Successful Implementation of Global BM Innovation; Globalisation Of ICT Based Business Models: Today And In 2020 ... The NEWGIBM book serves as a part of the final evaluation and documentation of the NEWGIBM project and is supported by results from the following projects: M-commerce, Global Innovation, Global Ebusiness & M-commerce, The Blue Ocean project, International Center for Innovation and Women in Business, NEFFICS

  14. Individual-based modelling and control of bovine brucellosis

    Science.gov (United States)

    Nepomuceno, Erivelton G.; Barbosa, Alípio M.; Silva, Marcos X.; Perc, Matjaž

    2018-05-01

    We present a theoretical approach to control bovine brucellosis. We have used individual-based modelling, which is a network-type alternative to compartmental models. Our model thus considers heterogeneous populations, and spatial aspects such as migration among herds and control actions described as pulse interventions are also easily implemented. We show that individual-based modelling reproduces the mean field behaviour of an equivalent compartmental model. Details of this process, as well as flowcharts, are provided to facilitate the reproduction of the presented results. We further investigate three numerical examples using real parameters of herds in the São Paulo state of Brazil, in scenarios which explore eradication, continuous and pulsed vaccination and meta-population effects. The obtained results are in good agreement with the expected behaviour of this disease, which ultimately showcases the effectiveness of our theory.

  15. Derivation of Continuum Models from An Agent-based Cancer Model: Optimization and Sensitivity Analysis.

    Science.gov (United States)

    Voulgarelis, Dimitrios; Velayudhan, Ajoy; Smith, Frank

    2017-01-01

    Agent-based models provide a formidable tool for exploring complex and emergent behaviour of biological systems and give accurate results, but with the drawback of requiring substantial computational power and time for subsequent analysis. Equation-based models, on the other hand, can more easily be used for complex analysis on much shorter timescales. This paper formulates an ordinary differential equation and stochastic differential equation model to capture the behaviour of an existing agent-based model of tumour cell reprogramming, and applies it to the optimization of possible treatments as well as dosage sensitivity analysis. For certain values of the parameter space, a close match between the equation-based and agent-based models is achieved. The need for a division of labour between the two approaches is explored.

  16. Cloud model construct for transaction-based cooperative systems ...

    African Journals Online (AJOL)

    Cloud model construct for transaction-based cooperative systems. ... procure cutting edge Information Technology infrastructure are some of the problems faced ... Results also reveal that credit cooperatives will benefit from the model by taking ...

  17. Application of model-based and knowledge-based measuring methods as analytical redundancy

    International Nuclear Information System (INIS)

    Hampel, R.; Kaestner, W.; Chaker, N.; Vandreier, B.

    1997-01-01

    The safe operation of nuclear power plants requires the application of modern and intelligent methods of signal processing for normal operation as well as for the management of accident conditions. Such modern and intelligent methods are model-based and knowledge-based ones, founded on analytical knowledge (mathematical models) as well as experience (fuzzy information). In addition to the existing hardware redundancies, analytical redundancies will be established with the help of these modern methods. These analytical redundancies support the operating staff during decision-making. The design of a hybrid model-based and knowledge-based measuring method will be demonstrated by the example of a fuzzy-supported observer. Within the fuzzy-supported observer, a classical linear observer is connected with a fuzzy-supported adaptation of the model matrices of the observer model. This application is realized for the estimation of non-measurable variables such as steam content and mixture level within pressure vessels containing a water-steam mixture during accidental depressurizations. For this example the existing non-linearities will be classified and the verification of the model will be explained. The advantages of the hybrid method in comparison to classical model-based measuring methods will be demonstrated by the estimation results. The consideration of the parameters which have an important influence on the non-linearities requires the inclusion of high-dimensional structures of fuzzy logic within the model-based measuring methods. Therefore, methods will be presented which allow the conversion of these high-dimensional structures into two-dimensional structures of fuzzy logic. As an efficient solution of this problem, a method based on cascaded fuzzy controllers will be presented. (author). 2 refs, 12 figs, 5 tabs

  18. SR-Site groundwater flow modelling methodology, setup and results

    International Nuclear Information System (INIS)

    Selroos, Jan-Olof; Follin, Sven

    2010-12-01

    As a part of the license application for a final repository for spent nuclear fuel at Forsmark, the Swedish Nuclear Fuel and Waste Management Company (SKB) has undertaken three groundwater flow modelling studies. These are performed within the SR-Site project and represent time periods with different climate conditions. The simulations carried out contribute to the overall evaluation of the repository design and long-term radiological safety. Three time periods are addressed: the Excavation and operational phases, the Initial period of temperate climate after closure, and the Remaining part of the reference glacial cycle. The present report is a synthesis of the background reports describing the modelling methodology, setup, and results. It is the primary reference for the conclusions drawn in an SR-Site specific context concerning groundwater flow during the three climate periods. These conclusions are not necessarily provided explicitly in the background reports, but are based on the results provided in these reports. The main results and comparisons presented in the present report are summarised in the SR-Site Main report

  19. SR-Site groundwater flow modelling methodology, setup and results

    Energy Technology Data Exchange (ETDEWEB)

    Selroos, Jan-Olof (Swedish Nuclear Fuel and Waste Management Co., Stockholm (Sweden)); Follin, Sven (SF GeoLogic AB, Taeby (Sweden))

    2010-12-15

    As a part of the license application for a final repository for spent nuclear fuel at Forsmark, the Swedish Nuclear Fuel and Waste Management Company (SKB) has undertaken three groundwater flow modelling studies. These are performed within the SR-Site project and represent time periods with different climate conditions. The simulations carried out contribute to the overall evaluation of the repository design and long-term radiological safety. Three time periods are addressed: the Excavation and operational phases, the Initial period of temperate climate after closure, and the Remaining part of the reference glacial cycle. The present report is a synthesis of the background reports describing the modelling methodology, setup, and results. It is the primary reference for the conclusions drawn in an SR-Site specific context concerning groundwater flow during the three climate periods. These conclusions are not necessarily provided explicitly in the background reports, but are based on the results provided in these reports. The main results and comparisons presented in the present report are summarised in the SR-Site Main report.

  20. Modeling of driver's collision avoidance maneuver based on controller switching model.

    Science.gov (United States)

    Kim, Jong-Hae; Hayakawa, Soichiro; Suzuki, Tatsuya; Hayashi, Koji; Okuma, Shigeru; Tsuchida, Nuio; Shimizu, Masayuki; Kido, Shigeyuki

    2005-12-01

    This paper presents a modeling strategy of human driving behavior based on the controller switching model focusing on the driver's collision avoidance maneuver. The driving data are collected by using the three-dimensional (3-D) driving simulator based on the CAVE Automatic Virtual Environment (CAVE), which provides stereoscopic immersive virtual environment. In our modeling, the control scenario of the human driver, that is, the mapping from the driver's sensory information to the operation of the driver such as acceleration, braking, and steering, is expressed by Piecewise Polynomial (PWP) model. Since the PWP model includes both continuous behaviors given by polynomials and discrete logical conditions, it can be regarded as a class of Hybrid Dynamical System (HDS). The identification problem for the PWP model is formulated as the Mixed Integer Linear Programming (MILP) by transforming the switching conditions into binary variables. From the obtained results, it is found that the driver appropriately switches the "control law" according to the sensory information. In addition, the driving characteristics of the beginner driver and the expert driver are compared and discussed. These results enable us to capture not only the physical meaning of the driving skill but the decision-making aspect (switching conditions) in the driver's collision avoidance maneuver as well.

  1. Complexity and agent-based modelling in urban research

    DEFF Research Database (Denmark)

    Fertner, Christian

    Urbanisation processes result from a broad variety of actors or actor groups and their behaviour and decisions, based on different experiences, knowledge, resources, values etc. The decisions made are often at a micro/individual level but result in macro/collective behaviour with influence on the bigger system. Traditional scientific methods or theories have often tried to simplify, not accounting for the complex relations of actors and decision-making. The introduction of computers into simulation made new approaches in modelling possible, for example agent-based modelling (ABM), dealing ... In urban research ...

  2. Rule-based decision making model

    International Nuclear Information System (INIS)

    Sirola, Miki

    1998-01-01

    A rule-based decision making model is designed in the G2 environment. A theoretical and methodological frame for the model is composed and motivated. The rule-based decision making model is based on object-oriented modelling, knowledge engineering and decision theory. The idea of a safety objective tree is utilized. Advanced rule-based methodologies are applied. A general decision making model, the 'decision element', is constructed. The strategy planning of the decision element is based on, e.g., value theory and utility theory. A hypothetical process model is built to give input data for the decision element. The basic principle of the object model in decision making is division into tasks. Probability models are used in characterizing component availabilities. Bayes' theorem is used to recalculate the probability figures when new information is obtained. The model includes simple learning features to save the solution path. A decision analytic interpretation is given to the decision making process. (author)

  3. Feedback loops and temporal misalignment in component-based hydrologic modeling

    Science.gov (United States)

    Elag, Mostafa M.; Goodall, Jonathan L.; Castronova, Anthony M.

    2011-12-01

    In component-based modeling, a complex system is represented as a series of loosely integrated components with defined interfaces and data exchanges that allow the components to be coupled together through shared boundary conditions. Although the component-based paradigm is commonly used in software engineering, it has only recently been applied for modeling hydrologic and earth systems. As a result, research is needed to test and verify the applicability of the approach for modeling hydrologic systems. The objective of this work was therefore to investigate two aspects of using component-based software architecture for hydrologic modeling: (1) simulation of feedback loops between components that share a boundary condition and (2) data transfers between temporally misaligned model components. We investigated these topics using a simple case study where diffusion of mass is modeled across a water-sediment interface. We simulated the multimedia system using two model components, one for the water and one for the sediment, coupled using the Open Modeling Interface (OpenMI) standard. The results were compared with a more conventional numerical approach for solving the system where the domain is represented by a single multidimensional array. Results showed that the component-based approach was able to produce the same results obtained with the more conventional numerical approach. When the two components were temporally misaligned, we explored the use of different interpolation schemes to minimize mass balance error within the coupled system. The outcome of this work provides evidence that component-based modeling can be used to simulate complicated feedback loops between systems and guidance as to how different interpolation schemes minimize mass balance error introduced when components are temporally misaligned.
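
    A conceptual sketch of the coupling pattern (not OpenMI or the authors' code): two single-box components with different time steps exchange a diffusive flux across a shared interface, and the faster component linearly interpolates the slower component's boundary values in time; all parameters are illustrative. Printing the total mass at the end illustrates the small mass balance error that temporal misalignment can introduce.

```python
# Two loosely coupled components exchanging a boundary condition with interpolation.
import numpy as np

class Box:
    def __init__(self, conc, volume, dt):
        self.conc, self.volume, self.dt = conc, volume, dt
        self.history = [(0.0, conc)]          # (time, boundary concentration)

    def boundary(self, t):
        """Linearly interpolate this component's boundary concentration at time t."""
        times, vals = zip(*self.history)
        return np.interp(t, times, vals)

    def step(self, t, other, k=0.5):
        flux = k * (other.boundary(t) - self.conc)     # diffusive exchange
        self.conc += flux * self.dt / self.volume
        self.history.append((t + self.dt, self.conc))

water = Box(conc=10.0, volume=1.0, dt=0.1)    # fast component
sediment = Box(conc=0.0, volume=1.0, dt=0.5)  # slow component (temporally misaligned)

t = 0.0
while t < 10.0:
    water.step(t, sediment)
    if abs(t / sediment.dt - round(t / sediment.dt)) < 1e-9:
        sediment.step(t, water)               # sediment advances every 5th water step
    t = round(t + water.dt, 10)

total_mass = water.conc * water.volume + sediment.conc * sediment.volume
print("water:", round(water.conc, 3), "sediment:", round(sediment.conc, 3),
      "total mass (initially 10.0):", round(total_mass, 3))
```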

  4. Nonlinear system modeling based on bilinear Laguerre orthonormal bases.

    Science.gov (United States)

    Garna, Tarek; Bouzrara, Kais; Ragot, José; Messaoud, Hassani

    2013-05-01

    This paper proposes a new representation of the discrete bilinear model by expanding its coefficients associated with the input, the output and the crossed product on three independent Laguerre orthonormal bases. Compared to the classical bilinear model, the resulting model, called the bilinear-Laguerre model, ensures a significant reduction in the number of parameters as well as a simple recursive representation. However, this reduction is still conditioned by an optimal choice of the Laguerre pole characterizing each basis. To this end, we develop a pole optimization algorithm which constitutes an extension of that proposed by Tanguy et al. The bilinear-Laguerre model as well as the proposed pole optimization algorithm are illustrated and tested on numerical simulations and validated on a Continuous Stirred Tank Reactor (CSTR) system.
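
    A sketch of one ingredient, the discrete Laguerre orthonormal basis, assuming a chosen pole value: the first filter is a normalized first-order low-pass and each further filter adds an all-pass section. Expanding the input, output and cross terms on such bases is what the bilinear-Laguerre model builds on; the pole and test signal below are illustrative.

```python
# Generate discrete Laguerre filter outputs by cascaded filtering.
import numpy as np
from scipy.signal import lfilter

def laguerre_outputs(u, pole, n_filters):
    """Outputs of the first n_filters discrete Laguerre filters driven by u."""
    a = pole
    x = lfilter([np.sqrt(1.0 - a ** 2)], [1.0, -a], u)   # L1(z) = sqrt(1-a^2)/(1 - a z^-1)
    outputs = [x]
    for _ in range(1, n_filters):
        x = lfilter([-a, 1.0], [1.0, -a], x)              # multiply by (z^-1 - a)/(1 - a z^-1)
        outputs.append(x)
    return np.array(outputs)

u = np.random.default_rng(3).standard_normal(5000)
L = laguerre_outputs(u, pole=0.6, n_filters=4)

# For a white-noise input the sample Gram matrix should be close to the identity,
# reflecting the orthonormality of the basis.
print(np.round(L @ L.T / len(u), 2))
```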

  5. A mathematical model for camera calibration based on straight lines

    Directory of Open Access Journals (Sweden)

    Antonio M. G. Tommaselli

    2005-12-01

    In order to facilitate the automation of the camera calibration process, a mathematical model using straight lines was developed, based on the equivalent-planes mathematical model. Parameter estimation for the developed model is achieved by the least squares method with conditions and observations. The same adjustment method was used to implement camera calibration with bundles, which is based on points. Experiments using simulated and real data have shown that the developed model based on straight lines gives results comparable to the conventional method with points. Details concerning the mathematical development of the model and experiments with simulated and real data are presented, and the results of both camera calibration methods, with straight lines and with points, are compared.

  6. Néron Models and Base Change

    DEFF Research Database (Denmark)

    Halle, Lars Halvard; Nicaise, Johannes

    Presenting the first systematic treatment of the behavior of Néron models under ramified base change, this book can be read as an introduction to various subtle invariants and constructions related to Néron models of semi-abelian varieties, motivated by concrete research problems and complemented with explicit examples. Néron models of abelian and semi-abelian varieties have become an indispensable tool in algebraic and arithmetic geometry since Néron introduced them in his seminal 1964 paper. Applications range from the theory of heights in Diophantine geometry to Hodge theory. We focus specifically on Néron component groups, Edixhoven’s filtration and the base change conductor of Chai and Yu, and we study these invariants using various techniques such as models of curves, sheaves on Grothendieck sites and non-archimedean uniformization. We then apply our results to the study of motivic zeta functions...

  7. Simulation-Based Internal Models for Safer Robots

    Directory of Open Access Journals (Sweden)

    Christian Blum

    2018-01-01

    In this paper, we explore the potential of mobile robots with simulation-based internal models for safety in highly dynamic environments. We propose a robot with a simulation of itself, other dynamic actors and its environment, inside itself. Operating in real time, this simulation-based internal model is able to look ahead and predict the consequences of both the robot’s own actions and those of the other dynamic actors in its vicinity. Hence, the robot continuously modifies its own actions in order to actively maintain its own safety while also achieving its goal. Inspired by the problem of how mobile robots could move quickly and safely through crowds of moving humans, we present experimental results which compare the performance of our internal simulation-based controller with a purely reactive approach as a proof-of-concept study for the practical use of simulation-based internal models.

  8. [GSH fermentation process modeling using entropy-criterion based RBF neural network model].

    Science.gov (United States)

    Tan, Zuoping; Wang, Shitong; Deng, Zhaohong; Du, Guocheng

    2008-05-01

    The prediction accuracy and generalization of GSH fermentation process models are often degraded by noise in the corresponding experimental data. To avoid this problem, we present a novel RBF neural network modeling approach based on an entropy criterion. Compared with traditional MSE-criterion based parameter learning, it considers the whole distribution structure of the training data set during parameter learning, and thus effectively avoids weak generalization and over-learning. The proposed approach is then applied to modeling the GSH fermentation process. Our results demonstrate that the proposed method has better prediction accuracy, generalization and robustness, and therefore offers potential merit for GSH fermentation process modeling.

  9. Free-Suspension Residual Flexibility Testing of Space Station Pathfinder: Comparison to Fixed-Base Results

    Science.gov (United States)

    Tinker, Michael L.

    1998-01-01

    Application of the free-suspension residual flexibility modal test method to the International Space Station Pathfinder structure is described. The Pathfinder, a large structure of the general size and weight of Space Station module elements, was also tested in a large fixed-base fixture to simulate Shuttle Orbiter payload constraints. After correlation of the Pathfinder finite element model to residual flexibility test data, the model was coupled to a fixture model, and constrained modes and frequencies were compared to fixed-base test modes. The residual flexibility model compared very favorably to results of the fixed-base test. This is the first known direct comparison of free-suspension residual flexibility and fixed-base test results for a large structure. The model correlation approach used by the author for residual flexibility data is presented. Frequency response functions (FRF) for the regions of the structure that interface with the environment (a test fixture or another structure) are shown to be the primary tools for model correlation that distinguish or characterize the residual flexibility approach. A number of critical issues related to use of the structure interface FRF for correlating the model are then identified and discussed, including (1) the requirement of prominent stiffness lines, (2) overcoming problems with measurement noise which makes the antiresonances or minima in the functions difficult to identify, and (3) the use of interface stiffness and lumped mass perturbations to bring the analytical responses into agreement with test data. It is shown that good comparison of analytical-to-experimental FRF is the key to obtaining good agreement of the residual flexibility values.

  10. Introducing Model-Based System Engineering Transforming System Engineering through Model-Based Systems Engineering

    Science.gov (United States)

    2014-03-31

    Web Presentation Software; Figure 6: Published Web Page from Data Collection; the terms Model-Based Engineering (MBE), Model-Driven Engineering (MDE), and Model-Based Systems Engineering (MBSE).

  11. Thermodynamics-based models of transcriptional regulation with gene sequence.

    Science.gov (United States)

    Wang, Shuqiang; Shen, Yanyan; Hu, Jinxing

    2015-12-01

    Quantitative models of gene regulatory activity have the potential to improve our mechanistic understanding of transcriptional regulation. However, the few models available today have been based on simplistic assumptions about the sequences being modeled or on heuristic approximations of the underlying regulatory mechanisms. In this work, we have developed a thermodynamics-based model to predict gene expression driven by any DNA sequence. The proposed model relies on a continuous-time, differential equation description of transcriptional dynamics. Sequence features of the promoter are exploited to derive the binding affinities based on statistical molecular thermodynamics. Experimental results show that the proposed model can effectively identify the activity levels of transcription factors and the regulatory parameters. Compared with previous models, the proposed model reveals more biological insight.
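
    A hedged sketch of the generic ingredients such models combine (not the paper's actual formulation): an equilibrium, statistical-thermodynamic occupancy of a transcription factor binding site drives a simple mRNA balance ODE; the dissociation constant, rates and concentrations below are illustrative assumptions.

```python
# Thermodynamic site occupancy feeding a continuous-time transcription model.
import numpy as np

def occupancy(tf_conc, K_d):
    """Equilibrium binding probability of the site, from statistical thermodynamics."""
    return (tf_conc / K_d) / (1.0 + tf_conc / K_d)

def simulate_mrna(tf_conc, K_d=50.0, k_max=2.0, gamma=0.1, t_end=100.0, dt=0.1):
    """d[mRNA]/dt = k_max * P_bound - gamma * [mRNA], integrated with Euler steps."""
    m, trace = 0.0, []
    for _ in np.arange(0.0, t_end, dt):
        p_bound = occupancy(tf_conc, K_d)        # activator occupancy of the promoter
        m += (k_max * p_bound - gamma * m) * dt
        trace.append(m)
    return np.array(trace)

for tf in (10.0, 50.0, 200.0):
    print(f"TF = {tf:>5}: steady-state mRNA ≈ {simulate_mrna(tf)[-1]:.2f}")
```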

  12. Integration of Simulink Models with Component-based Software Models

    DEFF Research Database (Denmark)

    Marian, Nicolae

    2008-01-01

    Model based development aims to facilitate the development of embedded control systems by emphasizing the separation of the design level from the implementation level. Model based design involves the use of multiple models that represent different views of a system, having different semantics...... of abstract system descriptions. Usually, in mechatronics systems, design proceeds by iterating model construction, model analysis, and model transformation. Constructing a MATLAB/Simulink model, a plant and controller behavior is simulated using graphical blocks to represent mathematical and logical...... constraints. COMDES (Component-based Design of Software for Distributed Embedded Systems) is such a component-based system framework developed by the software engineering group of Mads Clausen Institute for Product Innovation (MCI), University of Southern Denmark. Once specified, the software model has...

  13. New Temperature-based Models for Predicting Global Solar Radiation

    International Nuclear Information System (INIS)

    Hassan, Gasser E.; Youssef, M. Elsayed; Mohamed, Zahraa E.; Ali, Mohamed A.; Hanafy, Ahmed A.

    2016-01-01

    Highlights: • New temperature-based models for estimating solar radiation are investigated. • The models are validated against 20-years measured data of global solar radiation. • The new temperature-based model shows the best performance for coastal sites. • The new temperature-based model is more accurate than the sunshine-based models. • The new model is highly applicable with weather temperature forecast techniques. - Abstract: This study presents new ambient-temperature-based models for estimating global solar radiation as alternatives to the widely used sunshine-based models owing to the unavailability of sunshine data at all locations around the world. Seventeen new temperature-based models are established, validated and compared with other three models proposed in the literature (the Annandale, Allen and Goodin models) to estimate the monthly average daily global solar radiation on a horizontal surface. These models are developed using a 20-year measured dataset of global solar radiation for the case study location (Lat. 30°51′N and long. 29°34′E), and then, the general formulae of the newly suggested models are examined for ten different locations around Egypt. Moreover, the local formulae for the models are established and validated for two coastal locations where the general formulae give inaccurate predictions. Mostly common statistical errors are utilized to evaluate the performance of these models and identify the most accurate model. The obtained results show that the local formula for the most accurate new model provides good predictions for global solar radiation at different locations, especially at coastal sites. Moreover, the local and general formulas of the most accurate temperature-based model also perform better than the two most accurate sunshine-based models from the literature. The quick and accurate estimations of the global solar radiation using this approach can be employed in the design and evaluation of performance for
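    (Hedged illustration: the Annandale and Allen models cited above belong to the Hargreaves-type family, which estimates global radiation from the daily temperature range and extraterrestrial radiation. The sketch below shows that general form; the coefficient value is a textbook placeholder, not one of the coefficients fitted in this study.)

```python
import math

def hargreaves_type_radiation(t_max, t_min, ra, k_rs=0.17):
    """Hargreaves-type estimate of daily global solar radiation (MJ m-2 day-1).

    ra   : extraterrestrial radiation for the day and latitude (MJ m-2 day-1)
    k_rs : empirical coefficient (roughly 0.16 inland, 0.19 coastal in the literature);
           the study fits location-specific coefficients that are not reproduced here.
    """
    return k_rs * math.sqrt(t_max - t_min) * ra

# Example: a day with a 10 degC temperature range and Ra = 40 MJ m-2 day-1.
print(hargreaves_type_radiation(t_max=32.0, t_min=22.0, ra=40.0))
```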

  14. Promoting Model-based Definition to Establish a Complete Product Definition.

    Science.gov (United States)

    Ruemler, Shawn P; Zimmerman, Kyle E; Hartman, Nathan W; Hedberg, Thomas; Feeny, Allison Barnard

    2017-05-01

    The manufacturing industry is evolving and starting to use 3D models as the central knowledge artifact for product data and product definition, or what is known as Model-based Definition (MBD). The Model-based Enterprise (MBE) uses MBD as a way to transition away from using traditional paper-based drawings and documentation. As MBD grows in popularity, it is imperative to understand what information is needed in the transition from drawings to models so that models represent all the relevant information needed for processes to continue efficiently. Finding this information can help define what data is common amongst different models in different stages of the lifecycle, which could help establish a Common Information Model. The Common Information Model is a source that contains common information from domain specific elements amongst different aspects of the lifecycle. To help establish this Common Information Model, information about how models are used in industry within different workflows needs to be understood. To retrieve this information, a survey mechanism was administered to industry professionals from various sectors. Based on the results of the survey a Common Information Model could not be established. However, the results gave great insight that will help in further investigation of the Common Information Model.

  15. Analytical results for a stochastic model of gene expression with arbitrary partitioning of proteins

    Science.gov (United States)

    Tschirhart, Hugo; Platini, Thierry

    2018-05-01

    In biophysics, the search for analytical solutions of stochastic models of cellular processes is often a challenging task. In recent work on models of gene expression, it was shown that a mapping based on partitioning of Poisson arrivals (PPA-mapping) can lead to exact solutions for previously unsolved problems. While the approach can be used in general when the model involves Poisson processes corresponding to creation or degradation, current applications of the method and new results derived using it have been limited to date. In this paper, we present the exact solution of a variation of the two-stage model of gene expression (with time dependent transition rates) describing the arbitrary partitioning of proteins. The methodology proposed makes full use of the PPA-mapping by transforming the original problem into a new process describing the evolution of three biological switches. Based on a succession of transformations, the method leads to a hierarchy of reduced models. We give an integral expression of the time dependent generating function as well as explicit results for the mean, variance, and correlation function. Finally, we discuss how results for time dependent parameters can be extended to the three-stage model and used to make inferences about models with parameter fluctuations induced by hidden stochastic variables.
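    (Background only, not the paper's result: the underlying two-stage model of gene expression consists of Poisson production and first-order degradation of mRNA and protein. With constant rates its reactions and stationary means take the standard form below; the paper solves a time-dependent variation of this scheme with arbitrary partitioning of proteins, which these constant-rate formulas do not capture.)

```latex
\emptyset \xrightarrow{k_m} m,\quad m \xrightarrow{\gamma_m} \emptyset,\quad
m \xrightarrow{k_p} m + p,\quad p \xrightarrow{\gamma_p} \emptyset,
\qquad
\langle m \rangle = \frac{k_m}{\gamma_m},\quad
\langle p \rangle = \frac{k_m\,k_p}{\gamma_m\,\gamma_p}
```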

  16. Discrete Discriminant analysis based on tree-structured graphical models

    DEFF Research Database (Denmark)

    Perez de la Cruz, Gonzalo; Eslava, Guillermina

    The purpose of this paper is to illustrate the potential use of discriminant analysis based on tree-structured graphical models for discrete variables. This is done by comparing its empirical performance using estimated error rates for real and simulated data. The results show that discriminant analysis based on tree-structured graphical models is a simple nonlinear method competitive with, and sometimes superior to, other well-known linear methods like those assuming mutual independence between variables and linear logistic regression.

  17. An Analysis of Turkey's PISA 2015 Results Using Two-Level Hierarchical Linear Modelling

    Science.gov (United States)

    Atas, Dogu; Karadag, Özge

    2017-01-01

    In the field of education, most of the data collected are multi-level structured. Cities, city-based schools, school-based classes and finally students in the classrooms constitute a hierarchical structure. Hierarchical linear models give more accurate results than standard models when the data set has a structure going as far as individuals,…

  18. Uniform design based SVM model selection for face recognition

    Science.gov (United States)

    Li, Weihong; Liu, Lijuan; Gong, Weiguo

    2010-02-01

    The support vector machine (SVM) has proved to be a powerful tool for face recognition. The generalization capacity of an SVM depends on choosing the model with optimal hyperparameters. The computational cost of SVM model selection makes it difficult to apply in face recognition. To overcome this shortcoming, we exploit uniform design, a space-filling design based on uniform scattering theory, to search for optimal SVM hyperparameters. We then propose a face recognition scheme based on the SVM with the optimal model obtained by replacing the grid and gradient-based methods with uniform design. Experimental results on the Yale and PIE face databases show that the proposed method significantly improves the efficiency of SVM model selection.
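    (Rough sketch of the idea only: replace an exhaustive grid over (C, gamma) with a small space-filling set of points scored by cross-validation. A Latin-hypercube sample stands in for a proper uniform-design table, and the data set, parameter ranges and scoring are placeholders rather than the face-recognition setup used in the paper.)

```python
from scipy.stats import qmc
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

# Space-filling sample of log10(C) in [-1, 3] and log10(gamma) in [-5, -1];
# a Latin hypercube is used here as a stand-in for a uniform-design table.
sampler = qmc.LatinHypercube(d=2, seed=0)
points = qmc.scale(sampler.random(n=12), [-1, -5], [3, -1])

# Pick the candidate with the best 3-fold cross-validation accuracy.
best = max(
    ((10 ** c, 10 ** g) for c, g in points),
    key=lambda cg: cross_val_score(SVC(C=cg[0], gamma=cg[1]), X, y, cv=3).mean(),
)
print("selected (C, gamma):", best)
```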

  19. Néron models and base change

    CERN Document Server

    Halle, Lars Halvard

    2016-01-01

    Presenting the first systematic treatment of the behavior of Néron models under ramified base change, this book can be read as an introduction to various subtle invariants and constructions related to Néron models of semi-abelian varieties, motivated by concrete research problems and complemented with explicit examples. Néron models of abelian and semi-abelian varieties have become an indispensable tool in algebraic and arithmetic geometry since Néron introduced them in his seminal 1964 paper. Applications range from the theory of heights in Diophantine geometry to Hodge theory. We focus specifically on Néron component groups, Edixhoven’s filtration and the base change conductor of Chai and Yu, and we study these invariants using various techniques such as models of curves, sheaves on Grothendieck sites and non-archimedean uniformization. We then apply our results to the study of motivic zeta functions of abelian varieties. The final chapter contains a list of challenging open questions. This book is a...

  20. Inference-based procedural modeling of solids

    KAUST Repository

    Biggers, Keith

    2011-11-01

    As virtual environments become larger and more complex, there is an increasing need for more automated construction algorithms to support the development process. We present an approach for modeling solids by combining prior examples with a simple sketch. Our algorithm uses an inference-based approach to incrementally fit patches together in a consistent fashion to define the boundary of an object. This algorithm samples and extracts surface patches from input models, and develops a Petri net structure that describes the relationship between patches along an imposed parameterization. Then, given a new parameterized line or curve, we use the Petri net to logically fit patches together in a manner consistent with the input model. This allows us to easily construct objects of varying sizes and configurations using arbitrary articulation, repetition, and interchanging of parts. The result of our process is a solid model representation of the constructed object that can be integrated into a simulation-based environment. © 2011 Elsevier Ltd. All rights reserved.

  1. Correlation between the model accuracy and model-based SOC estimation

    International Nuclear Information System (INIS)

    Wang, Qianqian; Wang, Jiao; Zhao, Pengju; Kang, Jianqiang; Yan, Few; Du, Changqing

    2017-01-01

    State-of-charge (SOC) estimation is a core technology for battery management systems. Considerable progress has been achieved in the study of SOC estimation algorithms, especially the algorithm on the basis of Kalman filter to meet the increasing demand of model-based battery management systems. The Kalman filter weakens the influence of white noise and initial error during SOC estimation but cannot eliminate the existing error of the battery model itself. As such, the accuracy of SOC estimation is directly related to the accuracy of the battery model. Thus far, the quantitative relationship between model accuracy and model-based SOC estimation remains unknown. This study summarizes three equivalent circuit lithium-ion battery models, namely, Thevenin, PNGV, and DP models. The model parameters are identified through hybrid pulse power characterization test. The three models are evaluated, and SOC estimation conducted by EKF-Ah method under three operating conditions are quantitatively studied. The regression and correlation of the standard deviation and normalized RMSE are studied and compared between the model error and the SOC estimation error. These parameters exhibit a strong linear relationship. Results indicate that the model accuracy affects the SOC estimation accuracy mainly in two ways: dispersion of the frequency distribution of the error and the overall level of the error. On the basis of the relationship between model error and SOC estimation error, our study provides a strategy for selecting a suitable cell model to meet the requirements of SOC precision using Kalman filter.

  2. Model-free and model-based reward prediction errors in EEG.

    Science.gov (United States)

    Sambrook, Thomas D; Hardwick, Ben; Wills, Andy J; Goslin, Jeremy

    2018-05-24

    Learning theorists posit two reinforcement learning systems: model-free and model-based. Model-based learning incorporates knowledge about structure and contingencies in the world to assign candidate actions with an expected value. Model-free learning is ignorant of the world's structure; instead, actions hold a value based on prior reinforcement, with this value updated by expectancy violation in the form of a reward prediction error. Because they use such different learning mechanisms, it has been previously assumed that model-based and model-free learning are computationally dissociated in the brain. However, recent fMRI evidence suggests that the brain may compute reward prediction errors to both model-free and model-based estimates of value, signalling the possibility that these systems interact. Because of its poor temporal resolution, fMRI risks confounding reward prediction errors with other feedback-related neural activity. In the present study, EEG was used to show the presence of both model-based and model-free reward prediction errors and their place in a temporal sequence of events including state prediction errors and action value updates. This demonstration of model-based prediction errors questions a long-held assumption that model-free and model-based learning are dissociated in the brain. Copyright © 2018 Elsevier Inc. All rights reserved.

  3. Intelligent Transportation and Evacuation Planning A Modeling-Based Approach

    CERN Document Server

    Naser, Arab

    2012-01-01

    Intelligent Transportation and Evacuation Planning: A Modeling-Based Approach provides a new paradigm for evacuation planning strategies and techniques. Recently, evacuation planning and modeling have increasingly attracted interest among researchers as well as government officials. This interest stems from the recent catastrophic hurricanes and weather-related events that occurred in the southeastern United States (Hurricane Katrina and Rita). The evacuation methods that were in place before and during the hurricanes did not work well and resulted in thousands of deaths. This book offers insights into the methods and techniques that allow for implementing mathematical-based, simulation-based, and integrated optimization and simulation-based engineering approaches for evacuation planning. This book also: Comprehensively discusses the application of mathematical models for evacuation and intelligent transportation modeling Covers advanced methodologies in evacuation modeling and planning Discusses principles a...

  4. Guide to APA-Based Models

    Science.gov (United States)

    Robins, Robert E.; Delisi, Donald P.

    2008-01-01

    In Robins and Delisi (2008), a linear decay model, a new IGE model by Sarpkaya (2006), and a series of APA-Based models were scored using data from three airports. This report is a guide to the APA-based models.

  5. Research on light rail electric load forecasting based on ARMA model

    Science.gov (United States)

    Huang, Yifan

    2018-04-01

    The article compares a variety of time series models in light of the characteristics of power load forecasting. A light rail load forecasting model based on the ARMA model is then established, and the load of a light rail system is forecasted with it. The prediction results show that the accuracy of the model is high.
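    (Minimal sketch of ARMA-based load forecasting with statsmodels; the synthetic hourly load series and the (2, 1) order are placeholders, not the data or the model order identified in the study.)

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical hourly traction-load series (kW); real data would come from the light rail system.
rng = np.random.default_rng(1)
hours = pd.date_range("2018-01-01", periods=24 * 14, freq="H")
load = 500 + 150 * np.sin(2 * np.pi * hours.hour / 24) + 20 * rng.standard_normal(hours.size)
series = pd.Series(load, index=hours)

# ARMA(p, q) is ARIMA(p, 0, q): fit on the history and forecast the next 24 hours.
model = ARIMA(series, order=(2, 0, 1)).fit()
forecast = model.forecast(steps=24)
print(forecast.head())
```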

  6. Interpreting Results from the Multinomial Logit Model

    DEFF Research Database (Denmark)

    Wulff, Jesper

    2015-01-01

    This article provides guidelines and illustrates practical steps necessary for an analysis of results from the multinomial logit model (MLM). The MLM is a popular model in the strategy literature because it allows researchers to examine strategic choices with multiple outcomes. However, there seem to be systematic issues with regard to how researchers interpret their results when using the MLM. In this study, I present a set of guidelines critical to analyzing and interpreting results from the MLM. The procedure involves intuitive graphical representations of predicted probabilities and marginal effects suitable for both interpretation and communication of results. The practical steps are illustrated through an application of the MLM to the choice of foreign market entry mode.
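    (Concrete sketch of the recommended outputs, assuming statsmodels and a simulated three-alternative entry-mode data set: it fits a multinomial logit and reports predicted probabilities and average marginal effects, the two quantities the article recommends for interpretation.)

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated data: choice among three entry modes (0 = export, 1 = joint venture, 2 = acquisition)
# explained by firm size and host-country risk (placeholders, not the article's data set).
rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({"size": rng.normal(size=n), "risk": rng.normal(size=n)})
util = np.column_stack([np.zeros(n), 0.8 * X["size"] - 0.5 * X["risk"], 1.2 * X["size"]])
y = np.array([rng.choice(3, p=np.exp(u) / np.exp(u).sum()) for u in util])

model = sm.MNLogit(y, sm.add_constant(X)).fit(disp=False)
print(model.predict(sm.add_constant(X))[:5])   # predicted probability of each outcome
print(model.get_margeff().summary())           # average marginal effects
```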

  7. The ITER magnets: Preparation for full size construction based on the results of the model coil programme

    International Nuclear Information System (INIS)

    Huguet, M.

    2003-01-01

    The ITER magnets are long-lead time items and the preparation of their construction is the subject of a major and coordinated effort of the ITER International Team and Participant Teams. The results of the ITER model coil programme constitute the basis and the main source of data for the preparation of the technical specifications for the procurement of the ITER magnets. A review of the salient results of the ITER model coil programme is given and the significance of these results for the preparation of full size industrial production is explained. The model coil programme has confirmed the validity of the design and the manufacturer's ability to produce the coils with the required quality level. The programme has also allowed the optimisation of the conductor design and the identification of further development which would lead to cost reductions of the toroidal field coil case. (author)

  8. Modeling of Lightning Strokes Using Two-Peaked Channel-Base Currents

    Directory of Open Access Journals (Sweden)

    V. Javor

    2012-01-01

    Full Text Available The lightning electromagnetic field is obtained by using “engineering” models of lightning return strokes and new channel-base current functions, and the results are presented in this paper. Experimentally measured channel-base currents are approximated not only with functions having two-peaked waveshapes but also with a one-peaked function, as is usually done in the literature. These functions are simple to apply in any “engineering” or electromagnetic model. For the three “engineering” models, namely the transmission line model (without peak current decay), the transmission line model with linear decay, and the transmission line model with exponential decay with height, the comparison of electric and magnetic field components at different distances from the lightning channel base is presented for the case of a perfectly conducting ground. Different heights of lightning channels are also considered. These results enable analysis of the advantages and shortcomings of the return stroke models used, according to the electromagnetic field features to be achieved, as obtained by measurements.

  9. Understanding Group/Party Affiliation Using Social Networks and Agent-Based Modeling

    Science.gov (United States)

    Campbell, Kenyth

    2012-01-01

    The dynamics of group affiliation and group dispersion is a concept that is most often studied in order for political candidates to better understand the most efficient way to conduct their campaigns. While political campaigning in the United States is a very hot topic that most politicians analyze and study, the concept of group/party affiliation presents its own area of study that produces very interesting results. One tool for examining party affiliation on a large scale is agent-based modeling (ABM), a paradigm in the modeling and simulation (M&S) field perfectly suited for aggregating individual behaviors to observe large swaths of a population. For this study, agent-based modeling was used to look at a community of agents and determine what factors can affect the group/party affiliation patterns that are present. In the agent-based model used for this experiment, many factors were present, but two main factors were used to determine the results. The results of this study show that it is possible to use agent-based modeling to explore group/party affiliation and construct a model that can mimic real world events. More importantly, the model in the study allows for the results found in a smaller community to be translated into larger experiments to determine if the results will remain present on a much larger scale.
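    (The record does not give the model's rules, so the sketch below is only a generic illustration of the kind of agent-based affiliation dynamics described: agents on a random social network re-evaluate their party each step, usually adopting the majority affiliation among their neighbours but occasionally switching at random. All parameters are hypothetical.)

```python
import random

random.seed(42)
N_AGENTS, N_PARTIES, AVG_DEGREE, NOISE, STEPS = 500, 3, 6, 0.02, 50

# Random social network (undirected) and random initial affiliations.
neighbours = {i: set() for i in range(N_AGENTS)}
for _ in range(N_AGENTS * AVG_DEGREE // 2):
    a, b = random.sample(range(N_AGENTS), 2)
    neighbours[a].add(b)
    neighbours[b].add(a)
party = [random.randrange(N_PARTIES) for _ in range(N_AGENTS)]

for _ in range(STEPS):
    for i in random.sample(range(N_AGENTS), N_AGENTS):
        if random.random() < NOISE or not neighbours[i]:
            party[i] = random.randrange(N_PARTIES)      # occasional random switch
        else:                                           # otherwise follow the local majority
            counts = [0] * N_PARTIES
            for j in neighbours[i]:
                counts[party[j]] += 1
            party[i] = counts.index(max(counts))

print("final party shares:", [party.count(p) / N_AGENTS for p in range(N_PARTIES)])
```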

  10. Population Physiologically-Based Pharmacokinetic Modeling for the Human Lactational Transfer of PCB 153 with Consideration of Worldwide Human Biomonitoring Results

    Energy Technology Data Exchange (ETDEWEB)

    Redding, Laurel E.; Sohn, Michael D.; McKone, Thomas E.; Wang, Shu-Li; Hsieh, Dennis P. H.; Yang, Raymond S. H.

    2008-03-01

    We developed a physiologically based pharmacokinetic model of PCB 153 in women, and predict its transfer via lactation to infants. The model is the first human, population-scale lactational model for PCB 153. Data in the literature provided estimates for model development and for performance assessment. Physiological parameters were taken from a cohort in Taiwan and from reference values in the literature. We estimated partition coefficients based on chemical structure and the lipid content in various body tissues. Using exposure data in Japan, we predicted acquired body burden of PCB 153 at an average childbearing age of 25 years and compare predictions to measurements from studies in multiple countries. Forward-model predictions agree well with human biomonitoring measurements, as represented by summary statistics and uncertainty estimates. The model successfully describes the range of possible PCB 153 dispositions in maternal milk, suggesting a promising option for back estimating doses for various populations. One example of reverse dosimetry modeling was attempted using our PBPK model for possible exposure scenarios in Canadian Inuits who had the highest level of PCB 153 in their milk in the world.

  11. Assessing flood risk at the global scale: model setup, results, and sensitivity

    International Nuclear Information System (INIS)

    Ward, Philip J; Jongman, Brenden; Weiland, Frederiek Sperna; Winsemius, Hessel C; Bouwman, Arno; Ligtvoet, Willem; Van Beek, Rens; Bierkens, Marc F P

    2013-01-01

    Globally, economic losses from flooding exceeded $19 billion in 2012, and are rising rapidly. Hence, there is an increasing need for global-scale flood risk assessments, also within the context of integrated global assessments. We have developed and validated a model cascade for producing global flood risk maps, based on numerous flood return-periods. Validation results indicate that the model simulates interannual fluctuations in flood impacts well. The cascade involves: hydrological and hydraulic modelling; extreme value statistics; inundation modelling; flood impact modelling; and estimating annual expected impacts. The initial results estimate global impacts for several indicators, for example annual expected exposed population (169 million); and annual expected exposed GDP ($1383 billion). These results are relatively insensitive to the extreme value distribution employed to estimate low frequency flood volumes. However, they are extremely sensitive to the assumed flood protection standard; developing a database of such standards should be a research priority. Also, results are sensitive to the use of two different climate forcing datasets. The impact model can easily accommodate new, user-defined, impact indicators. We envisage several applications, for example: identifying risk hotspots; calculating macro-scale risk for the insurance industry and large companies; and assessing potential benefits (and costs) of adaptation measures. (letter)

  12. Power-Based Setpoint Control : Experimental Results on a Planar Manipulator

    NARCIS (Netherlands)

    Dirksz, D. A.; Scherpen, J. M. A.

    In the last years the power-based modeling framework, developed in the sixties to model nonlinear electrical RLC networks, has been extended for modeling and control of a larger class of physical systems. In this brief we apply power-based integral control to a planar manipulator experimental setup.

  13. Model-predictive control based on Takagi-Sugeno fuzzy model for electrical vehicles delayed model

    DEFF Research Database (Denmark)

    Khooban, Mohammad-Hassan; Vafamand, Navid; Niknam, Taher

    2017-01-01

    Electric vehicles (EVs) play a significant role in different applications, such as commuter vehicles and short distance transport applications. This study presents a new structure of model-predictive control based on the Takagi-Sugeno fuzzy model, linear matrix inequalities, and a non-quadratic Lyapunov function for the speed control of EVs including time-delay states and parameter uncertainty. Experimental data, using the Federal Test Procedure (FTP-75), is applied to test the performance and robustness of the suggested controller in the presence of time-varying parameters. Besides, a comparison is made between the results of the suggested robust strategy and those obtained from some of the most recent studies on the same topic, to assess the efficiency of the suggested controller. Finally, the experimental results based on a TMS320F28335 DSP are performed on a direct current motor. Simulation...

  14. Traceability in Model-Based Testing

    Directory of Open Access Journals (Sweden)

    Mathew George

    2012-11-01

    Full Text Available The growing complexities of software and the demand for shorter time to market are two important challenges that face today’s IT industry. These challenges demand the increase of both productivity and quality of software. Model-based testing is a promising technique for meeting these challenges. Traceability modeling is a key issue and challenge in model-based testing. Relationships between the different models will help to navigate from one model to another, and trace back to the respective requirements and the design model when the test fails. In this paper, we present an approach for bridging the gaps between the different models in model-based testing. We propose relation definition markup language (RDML for defining the relationships between models.

  15. Evaluating Water Demand Using Agent-Based Modeling

    Science.gov (United States)

    Lowry, T. S.

    2004-12-01

    based on its own condition and the condition of the world around it. For example, residential agents can make decisions to convert to or from xeriscaping and/or low-flow appliances based on policy implementation, economic status, weather, and climatic conditions. Agricultural agents may vary their usage by making decisions on crop distribution and irrigation design. Preliminary results show that water usage can be highly irrational under certain conditions. Results also identify sub-sectors within each group that have the highest influence on ensemble group behavior, providing a means for policy makers to target their efforts. Finally, the model is able to predict the impact of low-probability, high-impact events such as catastrophic denial of service due to natural and/or man-made events.

  16. Process-Based Modeling of Constructed Wetlands

    Science.gov (United States)

    Baechler, S.; Brovelli, A.; Rossi, L.; Barry, D. A.

    2007-12-01

    Constructed wetlands (CWs) are widespread facilities for wastewater treatment. In subsurface flow wetlands, contaminated wastewater flows through a porous matrix, where oxidation and detoxification phenomena occur. Despite the large number of working CWs, system design and optimization are still mainly based upon empirical equations or simplified first-order kinetics. This results from an incomplete understanding of the system functioning, and may in turn hinder the performance and effectiveness of the treatment process. As a result, CWs are often considered not suitable to meet high water quality-standards, or to treat water contaminated with recalcitrant anthropogenic contaminants. To date, only a limited number of detailed numerical models have been developed and successfully applied to simulate constructed wetland behavior. Among these, one of the most complete and powerful is CW2D, which is based on Hydrus2D. The aim of this work is to develop a comprehensive simulator tailored to model the functioning of horizontal flow constructed wetlands and in turn provide a reliable design and optimization tool. The model is based upon PHWAT, a general reactive transport code for saturated flow. PHWAT couples MODFLOW, MT3DMS and PHREEQC-2 using an operator-splitting approach. The use of PHREEQC to simulate reactions allows great flexibility in simulating biogeochemical processes. The biogeochemical reaction network is similar to that of CW2D, and is based on the Activated Sludge Model (ASM). Kinetic oxidation of carbon sources and nutrient transformations (nitrogen and phosphorous primarily) are modeled via Monod-type kinetic equations. Oxygen dissolution is accounted for via a first-order mass-transfer equation. While the ASM model only includes a limited number of kinetic equations, the new simulator permits incorporation of an unlimited number of both kinetic and equilibrium reactions. Changes in pH, redox potential and surface reactions can be easily incorporated
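    (For reference, the Monod-type substrate limitation and the first-order oxygen transfer mentioned above typically take the general forms below; parameter values are site- and model-specific and are not given in this record.)

```latex
% Monod-type degradation of a substrate S by biomass X (yield Y), and first-order oxygen dissolution.
\frac{dS}{dt} = -\,\mu_{\max}\,\frac{S}{K_S + S}\,\frac{X}{Y},
\qquad
\frac{dO_2}{dt} = k_L a \,\bigl(O_2^{\mathrm{sat}} - O_2\bigr)
```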

  17. On Process Modelling Using Physical Oriented And Phenomena Based Principles

    Directory of Open Access Journals (Sweden)

    Mihai Culea

    2000-12-01

    Full Text Available This work presents a modelling framework based on a phenomena-oriented description of the process. The approach is intended to make process models easy to understand and construct in heterogeneous, possibly distributed, modelling and simulation environments. A simplified case study of a heat exchanger is considered, using the Modelica modelling language to check the proposed concept. The partial results are promising, and the research effort will be extended into a computer-aided modelling environment based on phenomena.

  18. Predicting ecosystem functioning from plant traits: Results from a multi-scale ecophsiological modeling approach

    NARCIS (Netherlands)

    Wijk, van M.T.

    2007-01-01

    Ecosystem functioning is the result of processes working at a hierarchy of scales. The representation of these processes in a model that is mathematically tractable and ecologically meaningful is a big challenge. In this paper I describe an individual based model (PLACO - PLAnt COmpetition) that

  19. Activity-based DEVS modeling

    DEFF Research Database (Denmark)

    Alshareef, Abdurrahman; Sarjoughian, Hessam S.; Zarrin, Bahram

    2018-01-01

    Use of model-driven approaches has been increasing to significantly benefit the process of building complex systems. Recently, an approach for specifying model behavior using UML activities has been devised to support the creation of DEVS models in a disciplined manner based on the model driven architecture and the UML concepts. In this paper, we further this work by grounding Activity-based DEVS modeling and developing a fully-fledged modeling engine to demonstrate applicability. We also detail the relevant aspects of the created metamodel in terms of modeling and simulation. A significant number of the artifacts of the UML 2.5 activities and actions, from the vantage point of DEVS behavioral modeling, are covered in detail. Their semantics are discussed to the extent of time-accurate requirements for simulation. We characterize them in correspondence with the specification of the atomic model behavior. We...

  20. Lévy-based growth models

    DEFF Research Database (Denmark)

    Jónsdóttir, Kristjana Ýr; Schmiegel, Jürgen; Jensen, Eva Bjørn Vedel

    2008-01-01

    In the present paper, we give a condensed review, for the nonspecialist reader, of a new modelling framework for spatio-temporal processes, based on Lévy theory. We show the potential of the approach in stochastic geometry and spatial statistics by studying Lévy-based growth modelling of planar objects. The growth models considered are spatio-temporal stochastic processes on the circle. As a by-product, flexible new models for space–time covariance functions on the circle are provided. An application of the Lévy-based growth models to tumour growth is discussed.

  1. Determination of High-Frequency Current Distribution Using EMTP-Based Transmission Line Models with Resulting Radiated Electromagnetic Fields

    Energy Technology Data Exchange (ETDEWEB)

    Mork, B; Nelson, R; Kirkendall, B; Stenvig, N

    2009-11-30

    Application of BPL technologies to existing overhead high-voltage power lines would benefit greatly from improved simulation tools capable of predicting performance - such as the electromagnetic fields radiated from such lines. Existing EMTP-based frequency-dependent line models are attractive since their parameters are derived from physical design dimensions which are easily obtained. However, to calculate the radiated electromagnetic fields, detailed current distributions need to be determined. This paper presents a method of using EMTP line models to determine the current distribution on the lines, as well as a technique for using these current distributions to determine the radiated electromagnetic fields.

  2. Semi-active control of magnetorheological elastomer base isolation system utilising learning-based inverse model

    Science.gov (United States)

    Gu, Xiaoyu; Yu, Yang; Li, Jianchun; Li, Yancheng

    2017-10-01

    Magnetorheological elastomer (MRE) base isolations have attracted considerable attention over the last two decades thanks to its self-adaptability and high-authority controllability in semi-active control realm. Due to the inherent nonlinearity and hysteresis of the devices, it is challenging to obtain a reasonably complicated mathematical model to describe the inverse dynamics of MRE base isolators and hence to realise control synthesis of the MRE base isolation system. Two aims have been achieved in this paper: i) development of an inverse model for MRE base isolator based on optimal general regression neural network (GRNN); ii) numerical and experimental validation of a real-time semi-active controlled MRE base isolation system utilising LQR controller and GRNN inverse model. The superiority of GRNN inverse model lays in fewer input variables requirement, faster training process and prompt calculation response, which makes it suitable for online training and real-time control. The control system is integrated with a three-storey shear building model and control performance of the MRE base isolation system is compared with bare building, passive-on isolation system and passive-off isolation system. Testing results show that the proposed GRNN inverse model is able to reproduce desired control force accurately and the MRE base isolation system can effectively suppress the structural responses when compared to the passive isolation system.

  3. Improving the natural gas transporting based on the steady state simulation results

    International Nuclear Information System (INIS)

    Szoplik, Jolanta

    2016-01-01

    The work presents an example of the practical application of gas flow modeling results for an existing gas network, using real data on network load as a function of the time of day and air temperature. The gas network load at network connections was estimated from real data on gas consumption by customers and weather data from 2010, using a two-parameter model based on the number of degree-days of heating. The aim of this study was to derive a relationship between the pressure and the gas stream introduced into the gas network. It was demonstrated that the practical application of this relationship in a gas reduction station allows the gas pressure in the network to be adjusted automatically to the network load, keeping the pressure throughout the network at the lowest possible level. Based on the results obtained, it was concluded that such an approach reduces the amount of gas supplied to the network by 0.4% of the annual network load. - Highlights: • Determination of the hourly nodal demand for gas by the consumers. • Analysis of the results of gas flow simulation in the pipeline network. • Elaboration of the relationship between gas pressure and the gas stream feeding the network. • Automatic gas pressure steering in the network depending on the network load. • Comparison of input gas pressure in the system without and with pressure steering.
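    (The two-parameter degree-day load model referred to above typically has the form sketched below; the base temperature and coefficients are placeholders, since the values fitted for the studied network are not reproduced in this record.)

```python
def hourly_gas_demand(t_out, a=120.0, b=35.0, t_base=15.0):
    """Two-parameter degree-day model: base load plus a heating term.

    a      : temperature-independent demand (e.g. hot water, cooking) [m3/h]
    b      : heating sensitivity [m3/h per degree-day]
    t_base : outdoor temperature above which no heating is needed [degC]
    All values are hypothetical placeholders.
    """
    degree_days = max(0.0, t_base - t_out)
    return a + b * degree_days

print(hourly_gas_demand(t_out=-5.0))   # cold day -> high network load
print(hourly_gas_demand(t_out=18.0))   # warm day -> base load only
```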

  4. Model-Based Development of Control Systems for Forestry Cranes

    Directory of Open Access Journals (Sweden)

    Pedro La Hera

    2015-01-01

    Full Text Available Model-based methods are used in industry for prototyping concepts based on mathematical models. With our forest industry partners, we have established a model-based workflow for rapid development of motion control systems for forestry cranes. Applying this working method, we can verify control algorithms, both theoretically and practically. This paper is an example of this workflow and presents four topics related to the application of nonlinear control theory. The first topic presents the system of differential equations describing the motion dynamics. The second topic presents nonlinear control laws formulated according to sliding mode control theory. The third topic presents a procedure for model calibration and control tuning that are a prerequisite to realize experimental tests. The fourth topic presents the results of tests performed on an experimental crane specifically equipped for these tasks. Results of these studies show the advantages and disadvantages of these control algorithms, and they highlight their performance in terms of robustness and smoothness.

  5. Embracing model-based designs for dose-finding trials.

    Science.gov (United States)

    Love, Sharon B; Brown, Sarah; Weir, Christopher J; Harbron, Chris; Yap, Christina; Gaschler-Markefski, Birgit; Matcham, James; Caffrey, Louise; McKevitt, Christopher; Clive, Sally; Craddock, Charlie; Spicer, James; Cornelius, Victoria

    2017-07-25

    Dose-finding trials are essential to drug development as they establish recommended doses for later-phase testing. We aim to motivate wider use of model-based designs for dose finding, such as the continual reassessment method (CRM). We carried out a literature review of dose-finding designs and conducted a survey to identify perceived barriers to their implementation. We describe the benefits of model-based designs (flexibility, superior operating characteristics, extended scope), their current uptake, and existing resources. The most prominent barriers to implementation of a model-based design were lack of suitable training, chief investigators' preference for algorithm-based designs (e.g., 3+3), and limited resources for study design before funding. We use a real-world example to illustrate how these barriers can be overcome. There is overwhelming evidence for the benefits of CRM. Many leading pharmaceutical companies routinely implement model-based designs. Our analysis identified barriers for academic statisticians and clinical academics in mirroring the progress industry has made in trial design. Unified support from funders, regulators, and journal editors could result in more accurate doses for later-phase testing, and increase the efficiency and success of clinical drug development. We give recommendations for increasing the uptake of model-based designs for dose-finding trials in academia.

  6. Linkage of PRA models. Phase 1, Results

    Energy Technology Data Exchange (ETDEWEB)

    Smith, C.L.; Knudsen, J.K.; Kelly, D.L.

    1995-12-01

    The goal of the Phase I work of the "Linkage of PRA Models" project was to postulate methods of providing guidance for US Nuclear Regulatory Commission (NRC) personnel on the selection and usage of probabilistic risk assessment (PRA) models that are best suited to the analysis they are performing. In particular, methods and associated features are provided for (a) the selection of an appropriate PRA model for a particular analysis, (b) complementary evaluation tools for the analysis, and (c) a PRA model cross-referencing method. As part of this work, three areas adjoining "linking" analyses to PRA models were investigated: (a) the PRA models that are currently available, (b) the various types of analyses that are performed within the NRC, and (c) the difficulty in trying to provide a "generic" classification scheme to group plants based upon a particular plant attribute.

  7. Linkage of PRA models. Phase 1, Results

    International Nuclear Information System (INIS)

    Smith, C.L.; Knudsen, J.K.; Kelly, D.L.

    1995-12-01

    The goal of the Phase I work of the "Linkage of PRA Models" project was to postulate methods of providing guidance for US Nuclear Regulatory Commission (NRC) personnel on the selection and usage of probabilistic risk assessment (PRA) models that are best suited to the analysis they are performing. In particular, methods and associated features are provided for (a) the selection of an appropriate PRA model for a particular analysis, (b) complementary evaluation tools for the analysis, and (c) a PRA model cross-referencing method. As part of this work, three areas adjoining "linking" analyses to PRA models were investigated: (a) the PRA models that are currently available, (b) the various types of analyses that are performed within the NRC, and (c) the difficulty in trying to provide a "generic" classification scheme to group plants based upon a particular plant attribute

  8. On the upscaling of process-based models in deltaic applications

    Science.gov (United States)

    Li, L.; Storms, J. E. A.; Walstra, D. J. R.

    2018-03-01

    Process-based numerical models are increasingly used to study the evolution of marine and terrestrial depositional environments. Whilst a detailed description of small-scale processes provides an accurate representation of reality, application on geological timescales is restrained by the associated increase in computational time. In order to reduce the computational time, a number of acceleration methods are combined and evaluated for a schematic supply-driven delta (static base level) and an accommodation-driven delta (variable base level). The performance of the combined acceleration methods is evaluated by comparing the morphological indicators such as distributary channel networking and delta volumes derived from the model predictions for various levels of acceleration. The results of the accelerated models are compared to the outcomes from a series of simulations to capture autogenic variability. Autogenic variability is quantified by re-running identical models on an initial bathymetry with 1 cm added noise. The overall results show that the variability of the accelerated models fall within the autogenic variability range, suggesting that the application of acceleration methods does not significantly affect the simulated delta evolution. The Time-scale compression method (the acceleration method introduced in this paper) results in an increased computational efficiency of 75% without adversely affecting the simulated delta evolution compared to a base case. The combination of the Time-scale compression method with the existing acceleration methods has the potential to extend the application range of process-based models towards geologic timescales.

  9. Gradient-based model calibration with proxy-model assistance

    Science.gov (United States)

    Burrows, Wesley; Doherty, John

    2016-02-01

    Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with large run times and problematic numerical behaviour is described. The methodology is general, and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purposes of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on calculation of local gradients is mitigated, this allowing access to the benefits of gradient-based analysis where lack of integrity in finite-difference derivatives calculation would otherwise have impeded such access. Construction of a proxy model, and its subsequent use in calibration of a complex model, and in analysing the uncertainties of predictions made by that model, is implemented in the PEST suite.

  10. Applying 3-PG, a simple process-based model designed to produce practical results, to data from loblolly pine experiments

    Science.gov (United States)

    Joe J. Landsberg; Kurt H. Johnsen; Timothy J. Albaugh; H. Lee Allen; Steven E. McKeand

    2001-01-01

    3-PG is a simple process-based model that requires few parameter values and only readily available input data. We tested the structure of the model by calibrating it against loblolly pine data from the control treatment of the SETRES experiment in Scotland County, NC, then altered the fertility rating to simulate the effects of fertilization. There was excellent...

  11. Bayesian based Diagnostic Model for Condition based Maintenance of Offshore Wind Farms

    DEFF Research Database (Denmark)

    Asgarpour, Masoud; Sørensen, John Dalsgaard

    2018-01-01

    Operation and maintenance costs are a major contributor to the Levelized Cost of Energy for electricity produced by offshore wind and can be significantly reduced if existing corrective actions are performed as efficiently as possible and if future corrective actions are avoided by performing...... sufficient preventive actions. This paper presents an applied and generic diagnostic model for fault detection and condition based maintenance of offshore wind components. The diagnostic model is based on two probabilistic matrices; first, a confidence matrix, representing the probability of detection using...... for a wind turbine component based on vibration, temperature, and oil particle fault detection methods. The last part of the paper will have a discussion of the case study results and present conclusions....

  12. HMM-based Trust Model

    DEFF Research Database (Denmark)

    ElSalamouny, Ehab; Nielsen, Mogens; Sassone, Vladimiro

    2010-01-01

    Probabilistic trust has been adopted as an approach to taking security-sensitive decisions in modern global computing environments. Existing probabilistic trust frameworks either assume fixed behaviour for the principals or incorporate the notion of 'decay' as an ad hoc approach to cope with their dynamic behaviour. Using Hidden Markov Models (HMMs) for both modelling and approximating the behaviours of principals, we introduce the HMM-based trust model as a new approach to evaluating trust in systems exhibiting dynamic behaviour. This model avoids the fixed behaviour assumption, which is considered the major limitation of the existing Beta trust model. We show the consistency of the HMM-based trust model and contrast it against the well-known Beta trust model with the decay principle in terms of the estimation precision.
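    (Hedged sketch of the general idea, not the paper's model: the snippet below uses the forward algorithm of a two-state HMM, with a "cooperative" and an "uncooperative" behavioural state, to update the probability that a principal's next interaction is satisfactory given its observed history. The transition and emission probabilities are illustrative values rather than parameters learned as in the paper.)

```python
import numpy as np

# Two hidden behavioural states: 0 = cooperative, 1 = uncooperative (illustrative values).
A = np.array([[0.9, 0.1],    # state transition probabilities
              [0.2, 0.8]])
B = np.array([[0.95, 0.05],  # P(observation | state); columns: 0 = satisfactory, 1 = unsatisfactory
              [0.30, 0.70]])
pi = np.array([0.5, 0.5])

def predict_next_satisfactory(observations):
    """Forward algorithm: filter the hidden state, then predict the next outcome."""
    alpha = pi * B[:, observations[0]]
    alpha /= alpha.sum()
    for o in observations[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()
    next_state = alpha @ A                 # one-step-ahead state distribution
    return float(next_state @ B[:, 0])     # probability the next interaction is satisfactory

history = [0, 0, 1, 1, 1]                  # recent run of unsatisfactory interactions
print(predict_next_satisfactory(history))
```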

  13. Agent Based Modelling for Social Simulation

    NARCIS (Netherlands)

    Smit, S.K.; Ubink, E.M.; Vecht, B. van der; Langley, D.J.

    2013-01-01

    This document is the result of an exploratory project looking into the status of, and opportunities for Agent Based Modelling (ABM) at TNO. The project focussed on ABM applications containing social interactions and human factors, which we termed ABM for social simulation (ABM4SS). During the course

  14. The Research of Clinical Decision Support System Based on Three-Layer Knowledge Base Model

    Directory of Open Access Journals (Sweden)

    Yicheng Jiang

    2017-01-01

    Full Text Available In many clinical decision support systems, a two-layer knowledge base model (disease-symptom) for rule reasoning is used. This model often does not express knowledge very well since it simply infers disease from the presence of certain symptoms. In this study, we propose a three-layer knowledge base model (disease-symptom-property) to utilize more useful information in inference. The system iteratively calculates the probability that patients suffer from particular diseases based on a multisymptom naive Bayes algorithm, in which the specificity of the disease symptoms is weighted by an estimate of their degree of contribution to diagnosing the disease. It significantly reduces the dependencies between attributes so that the naive Bayes algorithm can be applied more properly. Then, the online learning process for parameter optimization of the inference engine was completed. Finally, our decision support system utilizing the three-layer model was formally evaluated by two experienced doctors. By comparing prediction results with clinical results, our system can provide effective clinical recommendations to doctors. Moreover, we found that the three-layer model can improve the accuracy of predictions compared with the two-layer model. In light of some of the limitations of this study, we also identify and discuss several areas that need continued improvement.
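    (The record does not specify the exact weighting scheme, so the sketch below only illustrates the general multisymptom naive Bayes computation that such a disease-symptom knowledge base supports: each disease's posterior is proportional to its prior times the product of per-symptom likelihoods. The miniature knowledge base and its numbers are hypothetical.)

```python
# Hypothetical miniature knowledge base: P(symptom | disease) and disease priors.
priors = {"flu": 0.05, "common_cold": 0.20}
likelihoods = {
    "flu":         {"fever": 0.90, "cough": 0.80, "headache": 0.60},
    "common_cold": {"fever": 0.30, "cough": 0.70, "headache": 0.25},
}

def diagnose(observed_symptoms):
    """Multisymptom naive Bayes: posterior(disease) ~ prior * prod(P(symptom | disease))."""
    scores = {}
    for disease, prior in priors.items():
        score = prior
        for s in observed_symptoms:
            score *= likelihoods[disease].get(s, 0.01)  # small default for unlisted symptoms
        scores[disease] = score
    total = sum(scores.values())
    return {d: round(v / total, 3) for d, v in scores.items()}

print(diagnose(["fever", "cough"]))
```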

  15. Base Station Performance Model

    OpenAIRE

    Walsh, Barbara; Farrell, Ronan

    2005-01-01

    At present the testing of power amplifiers within base station transmitters is limited to testing at component level as opposed to testing at the system level. While the detection of catastrophic failure is possible, that of performance degradation is not. This paper proposes a base station model with respect to transmitter output power with the aim of introducing system level monitoring of the power amplifier behaviour within the base station. Our model reflects the expe...

  16. Model based energy benchmarking for glass furnace

    International Nuclear Information System (INIS)

    Sardeshpande, Vishal; Gaitonde, U.N.; Banerjee, Rangan

    2007-01-01

    Energy benchmarking of processes is important for setting energy efficiency targets and planning energy management strategies. Most approaches used for energy benchmarking are based on statistical methods by comparing with a sample of existing plants. This paper presents a model based approach for benchmarking of energy intensive industrial processes and illustrates this approach for industrial glass furnaces. A simulation model for a glass furnace is developed using mass and energy balances, and heat loss equations for the different zones and empirical equations based on operating practices. The model is checked with field data from end fired industrial glass furnaces in India. The simulation model enables calculation of the energy performance of a given furnace design. The model results show the potential for improvement and the impact of different operating and design preferences on specific energy consumption. A case study for a 100 TPD end fired furnace is presented. An achievable minimum energy consumption of about 3830 kJ/kg is estimated for this furnace. The useful heat carried by glass is about 53% of the heat supplied by the fuel. Actual furnaces operating at these production scales have a potential for reduction in energy consumption of about 20-25%

  17. First results from the International Urban Energy Balance Model Comparison: Model Complexity

    Science.gov (United States)

    Blackett, M.; Grimmond, S.; Best, M.

    2009-04-01

    A great variety of urban energy balance models has been developed. These vary in complexity from simple schemes that represent the city as a slab, through those which model various facets (i.e. road, walls and roof) to more complex urban forms (including street canyons with intersections) and features (such as vegetation cover and anthropogenic heat fluxes). Some schemes also incorporate detailed representations of momentum and energy fluxes distributed throughout various layers of the urban canopy layer. The models each differ in the parameters they require to describe the site and the in demands they make on computational processing power. Many of these models have been evaluated using observational datasets but to date, no controlled comparisons have been conducted. Urban surface energy balance models provide a means to predict the energy exchange processes which influence factors such as urban temperature, humidity, atmospheric stability and winds. These all need to be modelled accurately to capture features such as the urban heat island effect and to provide key information for dispersion and air quality modelling. A comparison of the various models available will assist in improving current and future models and will assist in formulating research priorities for future observational campaigns within urban areas. In this presentation we will summarise the initial results of this international urban energy balance model comparison. In particular, the relative performance of the models involved will be compared based on their degree of complexity. These results will inform us on ways in which we can improve the modelling of air quality within, and climate impacts of, global megacities. The methodology employed in conducting this comparison followed that used in PILPS (the Project for Intercomparison of Land-Surface Parameterization Schemes) which is also endorsed by the GEWEX Global Land Atmosphere System Study (GLASS) panel. In all cases, models were run

  18. Model based design introduction: modeling game controllers to microprocessor architectures

    Science.gov (United States)

    Jungwirth, Patrick; Badawy, Abdel-Hameed

    2017-04-01

    We present an introduction to model based design. Model based design is a visual representation, generally a block diagram, to model and incrementally develop a complex system. Model based design is a commonly used design methodology for digital signal processing, control systems, and embedded systems. Model based design's philosophy is: to solve a problem - a step at a time. The approach can be compared to a series of steps to converge to a solution. A block diagram simulation tool allows a design to be simulated with real world measurement data. For example, if an analog control system is being upgraded to a digital control system, the analog sensor input signals can be recorded. The digital control algorithm can be simulated with the real world sensor data. The output from the simulated digital control system can then be compared to the old analog based control system. Model based design can be compared to Agile software development. The Agile software development goal is to develop working software in incremental steps. Progress is measured in completed and tested code units. Progress is measured in model based design by completed and tested blocks. We present a concept for a video game controller and then use model based design to iterate the design towards a working system. We will also describe a model based design effort to develop an OS Friendly Microprocessor Architecture based on the RISC-V.

  19. The impact of design-based modeling instruction on seventh graders' spatial abilities and model-based argumentation

    Science.gov (United States)

    McConnell, William J.

    Due to the call of current science education reform for the integration of engineering practices within science classrooms, design-based instruction is receiving much attention in science education literature. Although some aspect of modeling is often included in well-known design-based instructional methods, it is not always a primary focus. The purpose of this study was to better understand how design-based instruction with an emphasis on scientific modeling might impact students' spatial abilities and their model-based argumentation abilities. In the following mixed-method multiple case study, seven seventh grade students attending a secular private school in the Mid-Atlantic region of the United States underwent an instructional intervention involving design-based instruction, modeling and argumentation. Through the course of a lesson involving students in exploring the interrelatedness of the environment and an animal's form and function, students created and used multiple forms of expressed models to assist them in model-based scientific argument. Pre/post data were collected through the use of The Purdue Spatial Visualization Test: Rotation, the Mental Rotation Test and interviews. Other data included a spatial activities survey, student artifacts in the form of models, notes, exit tickets, and video recordings of students throughout the intervention. Spatial abilities tests were analyzed using descriptive statistics while students' arguments were analyzed using the Instrument for the Analysis of Scientific Curricular Arguments and a behavior protocol. Models were analyzed using content analysis and interviews and all other data were coded and analyzed for emergent themes. Findings in the area of spatial abilities included increases in spatial reasoning for six out of seven participants, and an immense difference in the spatial challenges encountered by students when using CAD software instead of paper drawings to create models. Students perceived 3D printed

  20. Surrogate-Based Optimization of Biogeochemical Transport Models

    Science.gov (United States)

    Prieß, Malte; Slawig, Thomas

    2010-09-01

    First approaches towards a surrogate-based optimization method for a one-dimensional marine biogeochemical model of NPZD type are presented. The model, developed by Oschlies and Garcon [1], simulates the distribution of nitrogen, phytoplankton, zooplankton and detritus in a water column and is driven by ocean circulation data. A key issue is to minimize the misfit between the model output and given observational data. Our aim is to reduce the overall optimization cost by avoiding expensive function and derivative evaluations, using a surrogate model in place of the high-fidelity model in focus. This becomes particularly important for more complex three-dimensional models. We analyse a coarsening in the discretization of the model equations as one way to create such a surrogate. Here the numerical stability crucially depends upon the discrete stepsize in time and space and the biochemical terms. We show that for given model parameters the level of grid coarsening can be chosen accordingly, yielding a stable and satisfactory surrogate. As one example of a surrogate-based optimization method we present results of the Aggressive Space Mapping technique (developed by John W. Bandler [2, 3]) applied to the optimization of this one-dimensional biogeochemical transport model.
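
    A minimal, hedged sketch of the surrogate-assisted idea (not the authors' NPZD code): a cheap coarse model stands in for an expensive fine model during most of the parameter search, and the fine model is only used to polish the result. The two toy "models" and all parameter values below are invented for illustration.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Toy stand-ins for a fine discretization and a cheaper, slightly biased coarse one.
    def fine_model(p, t):
        return p[0] * np.exp(-p[1] * t)

    def coarse_model(p, t):
        return p[0] * np.exp(-p[1] * t) + 0.05

    t_obs = np.linspace(0.0, 10.0, 50)
    data = fine_model([1.0, 0.3], t_obs)          # synthetic "observations"

    def misfit(p, model):
        return np.sum((model(p, t_obs) - data) ** 2)

    # Search mostly on the cheap surrogate, then polish with a few fine-model evaluations.
    p0 = np.array([0.5, 0.1])
    p_surrogate = minimize(misfit, p0, args=(coarse_model,)).x
    p_final = minimize(misfit, p_surrogate, args=(fine_model,)).x
    print(p_surrogate, p_final)
    ```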

  1. Symbolic Processing Combined with Model-Based Reasoning

    Science.gov (United States)

    James, Mark

    2009-01-01

    A computer program for the detection of present and prediction of future discrete states of a complex, real-time engineering system utilizes a combination of symbolic processing and numerical model-based reasoning. One of the biggest weaknesses of a purely symbolic approach is that it enables prediction of only future discrete states while missing all unmodeled states or leading to incorrect identification of an unmodeled state as a modeled one. A purely numerical approach is based on a combination of statistical methods and mathematical models of the applicable physics and necessitates development of a complete model to the level of fidelity required for prediction. In addition, a purely numerical approach does not afford the ability to qualify its results without some form of symbolic processing. The present software implements numerical algorithms to detect unmodeled events and symbolic algorithms to predict expected behavior, correlate the expected behavior with the unmodeled events, and interpret the results in order to predict future discrete states. The approach embodied in this software differs from that of the BEAM methodology (aspects of which have been discussed in several prior NASA Tech Briefs articles), which provides for prediction of future measurements in the continuous-data domain.

  2. Model-based quality assessment and base-calling for second-generation sequencing data.

    Science.gov (United States)

    Bravo, Héctor Corrada; Irizarry, Rafael A

    2010-09-01

    Second-generation sequencing (sec-gen) technology can sequence millions of short fragments of DNA in parallel, making it capable of assembling complex genomes for a small fraction of the price and time of previous technologies. In fact, a recently formed international consortium, the 1000 Genomes Project, plans to fully sequence the genomes of approximately 1200 people. The prospect of comparative analysis at the sequence level of a large number of samples across multiple populations may be achieved within the next five years. These data present unprecedented challenges in statistical analysis. For instance, analysis operates on millions of short nucleotide sequences, or reads (strings of A, C, G, or T, between 30 and 100 characters long), which are the result of complex processing of noisy continuous fluorescence intensity measurements known as base-calling. The complexity of the base-calling discretization process results in reads of widely varying quality within and across sequence samples. This variation in processing quality results in infrequent but systematic errors that we have found to mislead downstream analysis of the discretized sequence read data. For instance, a central goal of the 1000 Genomes Project is to quantify across-sample variation at the single nucleotide level. At this resolution, small error rates in sequencing prove significant, especially for rare variants. Sec-gen sequencing is a relatively new technology for which potential biases and sources of obscuring variation are not yet fully understood. Therefore, modeling and quantifying the uncertainty inherent in the generation of sequence reads is of utmost importance. In this article, we present a simple model to capture uncertainty arising in the base-calling procedure of the Illumina/Solexa GA platform. Model parameters have a straightforward interpretation in terms of the chemistry of base-calling allowing for informative and easily interpretable metrics that capture the variability in

  3. Storm surge model based on variational data assimilation method

    Directory of Open Access Journals (Sweden)

    Shi-li Huang

    2010-06-01

    Full Text Available By combining computation and observation information, the variational data assimilation method has the ability to eliminate errors caused by the uncertainty of parameters in practical forecasting. It was applied to a storm surge model based on unstructured grids with high spatial resolution, intended to improve the forecasting accuracy of the storm surge. By controlling the wind stress drag coefficient, the variation-based model was developed and validated through data assimilation tests in an actual storm surge induced by a typhoon. In the data assimilation tests, the model accurately identified the wind stress drag coefficient and obtained results close to the true state. Then, the actual storm surge induced by Typhoon 0515 was forecast by the developed model, and the results demonstrate its efficiency in practical application.
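
    The variational idea can be sketched in a few lines. The toy model, forcing values and observations below are invented stand-ins (the paper uses an unstructured-grid surge model): the wind stress drag coefficient is adjusted so that the squared model-observation misfit is minimized.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    wind_speed = np.array([12.0, 15.0, 18.0, 20.0])   # hypothetical wind forcing (m/s)
    obs_surge = np.array([0.35, 0.55, 0.80, 1.00])    # hypothetical observed surge levels (m)

    def surge_model(cd):
        # Toy stand-in for the surge model: surge proportional to Cd * U^2.
        return cd * wind_speed ** 2

    def cost(cd):
        # Variational cost function: squared model-observation misfit.
        return np.sum((surge_model(cd) - obs_surge) ** 2)

    result = minimize_scalar(cost, bounds=(1e-4, 1e-2), method="bounded")
    print("identified drag coefficient:", result.x)
    ```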

  4. Image based 3D city modeling : Comparative study

    Directory of Open Access Journals (Sweden)

    S. P. Singh

    2014-06-01

    result. For large city reconstruction, CityEngine is a good product. Agisoft PhotoScan software creates a much better 3D model with good texture quality and automatic processing. So this image based comparative study is useful for the 3D city user community. Thus this study will provide a good roadmap for the geomatics user community to create photo-realistic virtual 3D city models by using image based techniques.

  5. A continuum based fem model for friction stir welding-model development

    Energy Technology Data Exchange (ETDEWEB)

    Buffa, G. [Ohio State University, Department of Industrial, Welding and Systems Engineering, 1971 Neil Avenue, 210 Baker Systems, Columbus, OH 43210 (United States) and Dipartimento di Tecnologia Meccanica, Produzione e Ingegneria Gestionale, Universita di Palermo, Viale delle Scienze, 90128 Palermo (Italy)]. E-mail: g.buffa@dtpm.unipa.it; Hua, J. [Ohio State University, Department of Industrial, Welding and Systems Engineering, 1971 Neil Avenue, 210 Baker Systems, Columbus, OH 43210 (United States)]. E-mail: hua.14@osu.edu; Shivpuri, R. [Ohio State University, Department of Industrial, Welding and Systems Engineering, 1971 Neil Avenue, 210 Baker Systems, Columbus, OH 43210 (United States)]. E-mail: shivpuri.1@osu.edu; Fratini, L. [Dipartimento di Tecnologia Meccanica, Produzione e Ingegneria Gestionale, Universita di Palermo, Viale delle Scienze, 90128 Palermo (Italy)]. E-mail: abaqus@dtpm.unipa.it

    2006-03-15

    Although friction stir welding (FSW) has been successfully used to join materials that are difficult to weld or unweldable by fusion welding methods, it is still in its early development stage and, therefore, a scientific knowledge based predictive model is of significant help for a thorough understanding of the FSW process. In this paper, a continuum based FEM model of the friction stir welding process is proposed, which is 3D Lagrangian, implicit, coupled and rigid-viscoplastic. This model is calibrated by comparison with experimental results for force and temperature distribution, and is then used to investigate the distribution of temperature and strain in the heat affected zone and the weld nugget. The model correctly predicts the non-symmetric nature of the FSW process, and the relationships between the tool forces and the variation in the process parameters. It is found that the effective strain distribution is non-symmetric about the weld line while the temperature profile is almost symmetric in the weld zone.

  6. A continuum based fem model for friction stir welding-model development

    International Nuclear Information System (INIS)

    Buffa, G.; Hua, J.; Shivpuri, R.; Fratini, L.

    2006-01-01

    Although friction stir welding (FSW) has been successfully used to join materials that are difficult to weld or unweldable by fusion welding methods, it is still in its early development stage and, therefore, a scientific knowledge based predictive model is of significant help for a thorough understanding of the FSW process. In this paper, a continuum based FEM model of the friction stir welding process is proposed, which is 3D Lagrangian, implicit, coupled and rigid-viscoplastic. This model is calibrated by comparison with experimental results for force and temperature distribution, and is then used to investigate the distribution of temperature and strain in the heat affected zone and the weld nugget. The model correctly predicts the non-symmetric nature of the FSW process, and the relationships between the tool forces and the variation in the process parameters. It is found that the effective strain distribution is non-symmetric about the weld line while the temperature profile is almost symmetric in the weld zone

  7. Markov chain aggregation for agent-based models

    CERN Document Server

    Banisch, Sven

    2016-01-01

    This self-contained text develops a Markov chain approach that makes the rigorous analysis of a class of microscopic models that specify the dynamics of complex systems at the individual level possible. It presents a general framework of aggregation in agent-based and related computational models, one which makes use of lumpability and information theory in order to link the micro and macro levels of observation. The starting point is a microscopic Markov chain description of the dynamical process in complete correspondence with the dynamical behavior of the agent-based model (ABM), which is obtained by considering the set of all possible agent configurations as the state space of a huge Markov chain. An explicit formal representation of a resulting “micro-chain” including microscopic transition rates is derived for a class of models by using the random mapping representation of a Markov process. The type of probability distribution used to implement the stochastic part of the model, which defines the upd...
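
    A hedged toy illustration of the lumping idea (not taken from the book): configurations of a small binary-state agent model are aggregated by the number of agents in state 1, and the empirical macro-level transition matrix of that lumped chain is estimated by simulation. The noisy imitation rule and all numbers are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, steps = 10, 200_000
    agents = rng.integers(0, 2, N)

    counts = np.zeros((N + 1, N + 1))          # macro transition counts k -> k'
    for _ in range(steps):
        k_before = agents.sum()
        if rng.random() < 0.05:
            agents[rng.integers(0, N)] ^= 1    # small idiosyncratic noise keeps the chain ergodic
        else:
            i, j = rng.integers(0, N, 2)
            agents[i] = agents[j]              # imitation (voter-type) update
        counts[k_before, agents.sum()] += 1

    # Row-normalize to obtain the empirical macro (lumped) transition matrix.
    row_sums = counts.sum(axis=1, keepdims=True)
    P_macro = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
    print(np.round(P_macro[5], 3))             # transition probabilities out of macro state k = 5
    ```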

  8. Reaction time for trimolecular reactions in compartment-based reaction-diffusion models

    Science.gov (United States)

    Li, Fei; Chen, Minghan; Erban, Radek; Cao, Yang

    2018-05-01

    Trimolecular reaction models are investigated in the compartment-based (lattice-based) framework for stochastic reaction-diffusion modeling. The formulae for the first collision time and the mean reaction time are derived for the case where three molecules are present in the solution under periodic boundary conditions. For the case of reflecting boundary conditions, similar formulae are obtained using a computer-assisted approach. The accuracy of these formulae is further verified through comparison with numerical results. The presented derivation is based on the first passage time analysis of Montroll [J. Math. Phys. 10, 753 (1969)]. Montroll's results for two-dimensional lattice-based random walks are adapted and applied to compartment-based models of trimolecular reactions, which are studied in one-dimensional or pseudo one-dimensional domains.

  9. Event-based Simulation Model for Quantum Optics Experiments

    NARCIS (Netherlands)

    De Raedt, H.; Michielsen, K.; Jaeger, G; Khrennikov, A; Schlosshauer, M; Weihs, G

    2011-01-01

    We present a corpuscular simulation model of optical phenomena that does not require the knowledge of the solution of a wave equation of the whole system and reproduces the results of Maxwell's theory by generating detection events one-by-one. The event-based corpuscular model gives a unified

  10. Influence of delayed neutron parameter calculation accuracy on results of modeled WWER scram experiments

    International Nuclear Information System (INIS)

    Artemov, V.G.; Gusev, V.I.; Zinatullin, R.E.; Karpov, A.S.

    2007-01-01

    Using modeled WWER scram rod drop experiments, performed at the Rostov NPP, as an example, the influence of delayed neutron parameters on the modeling results was investigated. The delayed neutron parameter values were taken from both domestic and foreign nuclear databases. Numerical modeling was carried out on the basis of the SAPFIR_95&WWER program package. Parameters of delayed neutrons were acquired from ENDF/B-VI and BNAB-78 validated data files. It was demonstrated that using delayed fraction data from different databases in reactivity meters led to significantly different reactivity results. Based on the results of the numerically modeled experiments, delayed neutron parameters providing the best agreement between calculated and measured data were selected and recommended for use in reactor calculations (Authors)

  11. Agent-based modeling and network dynamics

    CERN Document Server

    Namatame, Akira

    2016-01-01

    The book integrates agent-based modeling and network science. It is divided into three parts, namely, foundations, primary dynamics on and of social networks, and applications. The book begins with the network origin of agent-based models, known as cellular automata, and introduce a number of classic models, such as Schelling’s segregation model and Axelrod’s spatial game. The essence of the foundation part is the network-based agent-based models in which agents follow network-based decision rules. Under the influence of the substantial progress in network science in late 1990s, these models have been extended from using lattices into using small-world networks, scale-free networks, etc. The book also shows that the modern network science mainly driven by game-theorists and sociophysicists has inspired agent-based social scientists to develop alternative formation algorithms, known as agent-based social networks. The book reviews a number of pioneering and representative models in this family. Upon the gi...

  12. Reactor kinetics revisited: a coefficient based model (CBM)

    International Nuclear Information System (INIS)

    Ratemi, W.M.

    2011-01-01

    In this paper, a nuclear reactor kinetics model based on Guelph expansion coefficient calculation (Coefficients Based Model, CBM) for n groups of delayed neutrons is developed. The accompanying characteristic equation is a polynomial form of the Inhour equation with the same coefficients as the CBM kinetics model. Those coefficients depend on Universal abc-values, which in turn depend on the type of fuel used in the nuclear reactor. Furthermore, such coefficients are linearly dependent on the inserted reactivity. In this paper, the Universal abc-values have been presented symbolically, for the first time, as well as with their numerical values for U-235 fueled reactors for one, two, three, and six groups of delayed neutrons. Simulation studies for constant and variable reactivity insertions are made with the CBM kinetics model, and a comparison of results with numerical solutions of classical kinetics models for one, two, three, and six groups of delayed neutrons is presented. The results show good agreement, especially for a single step insertion of reactivity, with the advantage that the CBM solution does not encounter the stiffness problem accompanying the numerical solutions of the classical kinetics model. (author)
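
    For context, the sketch below integrates the classical one-delayed-group point-kinetics equations numerically; this is the kind of stiff reference solution against which a coefficient-based (CBM) solution would be compared. The constants are merely illustrative U-235-like values and are not taken from the paper.

    ```python
    from scipy.integrate import solve_ivp

    beta, lam, Lam = 0.0065, 0.08, 1e-4    # delayed fraction, decay constant (1/s), generation time (s)
    rho = 0.002                            # step reactivity insertion (illustrative, rho < beta)

    def kinetics(t, y):
        n, c = y
        dn = (rho - beta) / Lam * n + lam * c
        dc = beta / Lam * n - lam * c
        return [dn, dc]

    # The classical equations are stiff, hence an implicit (BDF) integrator; a CBM-style
    # solution is reported to avoid this stiffness for step insertions.
    y0 = [1.0, beta / (lam * Lam)]         # equilibrium precursor concentration for n = 1
    sol = solve_ivp(kinetics, (0.0, 10.0), y0, method="BDF")
    print(sol.y[0, -1])                    # relative neutron density after 10 s
    ```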

  13. Segment-based acoustic models for continuous speech recognition

    Science.gov (United States)

    Ostendorf, Mari; Rohlicek, J. R.

    1993-07-01

    This research aims to develop new and more accurate stochastic models for speaker-independent continuous speech recognition, by extending previous work in segment-based modeling and by introducing a new hierarchical approach to representing intra-utterance statistical dependencies. These techniques, which are more costly than traditional approaches because of the large search space associated with higher order models, are made feasible through rescoring a set of HMM-generated N-best sentence hypotheses. We expect these different modeling techniques to result in improved recognition performance over that achieved by current systems, which handle only frame-based observations and assume that these observations are independent given an underlying state sequence. In the fourth quarter of the project, we have completed the following: (1) ported our recognition system to the Wall Street Journal task, a standard task in the ARPA community; (2) developed an initial dependency-tree model of intra-utterance observation correlation; and (3) implemented baseline language model estimation software. Our initial results on the Wall Street Journal task are quite good and represent significantly improved performance over most HMM systems reporting on the Nov. 1992 5k vocabulary test set.

  14. An analytical model for backscattered luminance in fog: comparisons with Monte Carlo computations and experimental results

    International Nuclear Information System (INIS)

    Taillade, Frédéric; Dumont, Eric; Belin, Etienne

    2008-01-01

    We propose an analytical model for backscattered luminance in fog and derive an expression for the visibility signal-to-noise ratio as a function of meteorological visibility distance. The model uses single scattering processes. It is based on the Mie theory and the geometry of the optical device (emitter and receiver). In particular, we present an overlap function and take the phase function of fog into account. The results for the backscattered luminance obtained with our analytical model are compared to simulations made using the Monte Carlo method based on multiple scattering processes. An excellent agreement is found in that the discrepancy between the results is smaller than the Monte Carlo standard uncertainties. If we take no account of the geometry of the optical device, the results of the model-estimated backscattered luminance differ from the simulations by a factor of 20. We also conclude that the signal-to-noise ratio computed with the Monte Carlo method and our analytical model is in good agreement with experimental results since the mean difference between the calculations and experimental measurements is smaller than the experimental uncertainty

  15. A physiological production model for cacao : results of model simulations

    NARCIS (Netherlands)

    Zuidema, P.A.; Leffelaar, P.A.

    2002-01-01

    CASE2 is a physiological model for cocoa (Theobroma cacao L.) growth and yield. This report introduces the CAcao Simulation Engine for water-limited production in a non-technical way and presents simulation results obtained with the model.

  16. DEVELOPMENT OF SCIENCE PROCESS SKILLS STUDENTS WITH PROJECT BASED LEARNING MODEL- BASED TRAINING IN LEARNING PHYSICS

    Directory of Open Access Journals (Sweden)

    Ratna Malawati

    2016-06-01

    Full Text Available This study aims to improve students' science process skills in physics, in both cognitive and psychomotor aspects, by using a training-based Project Based Learning model. The object of this study is the Project Based Learning model used in the learning process of Computational Physics. The method used is classroom action research through two learning cycles, each cycle consisting of the stages of planning, implementation, observation and reflection. In the first cycle, the treatment emphasized training in the first to third phases of the Project Based Learning model, while in the second cycle additional treatment was given with an emphasis on collaborative discussion to achieve the best product from each group. The results of the data analysis showed an increase in students' cognitive thinking abilities and in their psychomotor science process skills.

  17. An analytical model for nanoparticles concentration resulting from infusion into poroelastic brain tissue.

    Science.gov (United States)

    Pizzichelli, G; Di Michele, F; Sinibaldi, E

    2016-02-01

    We consider the infusion of a diluted suspension of nanoparticles (NPs) into poroelastic brain tissue, in view of relevant biomedical applications such as intratumoral thermotherapy. Indeed, the high impact of the related pathologies motivates the development of advanced therapeutic approaches, whose design also benefits from theoretical models. This study provides an analytical expression for the time-dependent NP concentration during infusion into poroelastic brain tissue, which also accounts for particle binding onto cells (by recalling relevant results from colloid filtration theory). Our model is computationally inexpensive and, compared to fully numerical approaches, makes it possible to explicitly elucidate the role of the involved physical aspects (tissue poroelasticity, infusion parameters, NP physico-chemical properties, and the NP-tissue interactions underlying binding). We also present illustrative results based on parameters taken from the literature, considering clinically relevant ranges for the infusion parameters. Moreover, we thoroughly assess the model's working assumptions and discuss its limitations. While not claiming generality, our model can be used to support the development of more ambitious numerical approaches, towards the preliminary design of novel therapies based on NP infusion into brain tissue. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. A Full-Body Layered Deformable Model for Automatic Model-Based Gait Recognition

    Science.gov (United States)

    Lu, Haiping; Plataniotis, Konstantinos N.; Venetsanopoulos, Anastasios N.

    2007-12-01

    This paper proposes a full-body layered deformable model (LDM) inspired by manually labeled silhouettes for automatic model-based gait recognition from part-level gait dynamics in monocular video sequences. The LDM is defined for the fronto-parallel gait with 22 parameters describing the human body part shapes (widths and lengths) and dynamics (positions and orientations). There are four layers in the LDM and the limbs are deformable. Algorithms for LDM-based human body pose recovery are then developed to estimate the LDM parameters from both manually labeled and automatically extracted silhouettes, where the automatic silhouette extraction is through a coarse-to-fine localization and extraction procedure. The estimated LDM parameters are used for model-based gait recognition by employing the dynamic time warping for matching and adopting the combination scheme in AdaBoost.M2. While the existing model-based gait recognition approaches focus primarily on the lower limbs, the estimated LDM parameters enable us to study full-body model-based gait recognition by utilizing the dynamics of the upper limbs, the shoulders and the head as well. In the experiments, the LDM-based gait recognition is tested on gait sequences with differences in shoe-type, surface, carrying condition and time. The results demonstrate that the recognition performance benefits from not only the lower limb dynamics, but also the dynamics of the upper limbs, the shoulders and the head. In addition, the LDM can serve as an analysis tool for studying factors affecting the gait under various conditions.
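
    The matching step mentioned above relies on dynamic time warping (DTW). The following hedged sketch computes a textbook DTW distance between two hypothetical one-dimensional gait parameter traces; it is illustrative only and does not reproduce the paper's 22-parameter LDM features or the AdaBoost.M2 combination scheme.

    ```python
    import numpy as np

    def dtw_distance(a, b):
        """Classic dynamic time warping cost between two 1-D sequences."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    # Hypothetical gait parameter traces (e.g. a limb orientation angle over one cycle).
    probe = np.sin(np.linspace(0, 2 * np.pi, 40))
    gallery = np.sin(np.linspace(0, 2 * np.pi, 55) + 0.2)
    print(dtw_distance(probe, gallery))   # smaller distance = more similar gait
    ```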

  19. Model-based safety analysis of a control system using Simulink and Simscape extended models

    Directory of Open Access Journals (Sweden)

    Shao Nian

    2017-01-01

    Full Text Available The aircraft or system safety assessment process is an integral part of the overall aircraft development cycle. It is usually characterized by a very high effort in time and cost and can become a critical design driver in certain cases. Therefore, an increasing demand for effective methods to assist the safety assessment process arises within the aerospace community. One approach is the utilization of model-based technology, which is already well-established in system development, for safety assessment purposes. This paper describes a new tool for model-based safety analysis. A formal model of an example system is generated and enriched with extended models. Then, system safety analyses are performed on the model with the assistance of automation tools and compared to the results of a manual analysis. The objective of this paper is to improve the development process of increasingly complex aircraft systems. This paper develops a new model-based analysis tool in the Simulink/Simscape environment.

  20. Solar Deployment System (SolarDS) Model: Documentation and Sample Results

    Energy Technology Data Exchange (ETDEWEB)

    Denholm, P.; Drury, E.; Margolis, R.

    2009-09-01

    The Solar Deployment System (SolarDS) model is a bottom-up, market penetration model that simulates the potential adoption of photovoltaics (PV) on residential and commercial rooftops in the continental United States through 2030. NREL developed SolarDS to examine the market competitiveness of PV based on regional solar resources, capital costs, electricity prices, utility rate structures, and federal and local incentives. The model uses the projected financial performance of PV systems to simulate PV adoption for building types and regions then aggregates adoption to state and national levels. The main components of SolarDS include a PV performance simulator, a PV annual revenue calculator, a PV financial performance calculator, a PV market share calculator, and a regional aggregator. The model simulates a variety of installed PV capacity for a range of user-specified input parameters. PV market penetration levels from 15 to 193 GW by 2030 were simulated in preliminary model runs. SolarDS results are primarily driven by three model assumptions: (1) future PV cost reductions, (2) the maximum PV market share assumed for systems with given financial performance, and (3) PV financing parameters and policy-driven assumptions, such as the possible future cost of carbon emissions.

  1. Characteristic Model-Based Robust Model Predictive Control for Hypersonic Vehicles with Constraints

    Directory of Open Access Journals (Sweden)

    Jun Zhang

    2017-06-01

    Full Text Available Designing robust control for hypersonic vehicles in reentry is difficult, due to the features of the vehicles including strong coupling, non-linearity, and multiple constraints. This paper proposes a characteristic model-based robust model predictive control (MPC) for hypersonic vehicles with reentry constraints. First, the hypersonic vehicle is modeled by a characteristic model composed of a linear time-varying system and a lumped disturbance. Then, the identification data are regenerated using the accumulative sum idea from gray theory, which weakens the effects of random noise and strengthens the regularity of the identification data. Based on the regenerated data, the time-varying parameters and the disturbance are estimated online according to gray identification. Finally, the mixed H2/H∞ robust predictive control law is proposed based on linear matrix inequalities (LMIs) and receding horizon optimization techniques. By exploiting MPC's ability to actively handle system constraints, the input and state constraints are satisfied in the closed-loop control system. The validity of the proposed control is verified theoretically according to Lyapunov theory and illustrated by simulation results.

  2. Event-Based Corpuscular Model for Quantum Optics Experiments

    NARCIS (Netherlands)

    Michielsen, K.; Jin, F.; Raedt, H. De

    A corpuscular simulation model of optical phenomena that does not require the knowledge of the solution of a wave equation of the whole system and reproduces the results of Maxwell's theory by generating detection events one-by-one is presented. The event-based corpuscular model is shown to give a

  3. PARTICIPATION BASED MODEL OF SHIP CREW MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Toni Bielić

    2014-10-01

    Full Text Available This paper analyses the participation-based model on board the ship as a possibly optimal leadership model existing in the shipping industry, with an accent on the decision-making process. In the paper the authors have tried to define the master's behaviour model and management style, identifying drawbacks and disadvantages of a vertical, pyramidal organization with the master at the top. The paper describes the efficiency of decision making within a team organization and the optimization of a ship's organisation by introducing teamwork on board the ship. Three examples of ship accidents are studied and evaluated through the “Leader - participation” model. The model of participation based management as a model of teamwork has been applied in studying the cause-and-effect of accidents, with a critical review of the communication and managing of human resources on a ship. The results showed that the cause of all three accidents is the autocratic behaviour of the leaders and a lack of communication within teams.

  4. Evaluation of Clear Sky Models for Satellite-Based Irradiance Estimates

    Energy Technology Data Exchange (ETDEWEB)

    Sengupta, Manajit [National Renewable Energy Lab. (NREL), Golden, CO (United States); Gotseff, Peter [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2013-12-01

    This report describes an intercomparison of three popular broadband clear sky solar irradiance model results with measured data, as well as satellite-based model clear sky results compared to measured clear sky data. The authors conclude that one of the popular clear sky models (the Bird clear sky model developed by Richard Bird and Roland Hulstrom) could serve as a more accurate replacement for current satellite-model clear sky estimations. Additionally, the analysis of the model results with respect to model input parameters indicates that rather than climatological, annual, or monthly mean input data, higher-time-resolution input parameters improve the general clear sky model performance.

  5. Cloud-Based Model Calibration Using OpenStudio: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Hale, E.; Lisell, L.; Goldwasser, D.; Macumber, D.; Dean, J.; Metzger, I.; Parker, A.; Long, N.; Ball, B.; Schott, M.; Weaver, E.; Brackney, L.

    2014-03-01

    OpenStudio is a free, open source Software Development Kit (SDK) and application suite for performing building energy modeling and analysis. The OpenStudio Parametric Analysis Tool has been extended to allow cloud-based simulation of multiple OpenStudio models parametrically related to a baseline model. This paper describes the new cloud-based simulation functionality and presents a model calibration case study. Calibration is initiated by entering actual monthly utility bill data into the baseline model. Multiple parameters are then varied over multiple iterations to reduce the difference between actual energy consumption and model simulation results, as calculated and visualized by billing period and by fuel type. Simulations are performed in parallel using the Amazon Elastic Cloud service. This paper highlights model parameterizations (measures) used for calibration, but the same multi-nodal computing architecture is available for other purposes, for example, recommending combinations of retrofit energy saving measures using the calibrated model as the new baseline.

  6. An approach to model validation and model-based prediction -- polyurethane foam case study.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistical literatures. In this report we have provided a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the "philosophy" behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical

  7. Principles of models based engineering

    Energy Technology Data Exchange (ETDEWEB)

    Dolin, R.M.; Hefele, J.

    1996-11-01

    This report describes a Models Based Engineering (MBE) philosophy and implementation strategy that has been developed at Los Alamos National Laboratory's Center for Advanced Engineering Technology. A major theme in this discussion is that models based engineering is an information management technology enabling the development of information driven engineering. Unlike other information management technologies, models based engineering encompasses the breadth of engineering information, from design intent through product definition to consumer application.

  8. Hybrid Modeling Method for a DEP Based Particle Manipulation

    Directory of Open Access Journals (Sweden)

    Mohamad Sawan

    2013-01-01

    Full Text Available In this paper, a new modeling approach for Dielectrophoresis (DEP) based particle manipulation is presented. The proposed method fulfills missing links in finite element modeling between the multiphysics simulation and the biological behavior. This technique is among the first steps in developing a more complex platform covering several types of manipulation, such as magnetophoresis and optics. The modeling approach is based on a hybrid interface using both ANSYS and MATLAB to link the propagation of the electrical field in the micro-channel to the particle motion. ANSYS is used to simulate the electrical propagation while MATLAB interprets the results to calculate cell displacement and sends the new information to ANSYS for the next iteration. The beta version of the proposed technique takes into account particle shape, weight and electrical properties. The first results obtained are consistent with experimental results.

  9. Consentaneous agent-based and stochastic model of the financial markets.

    Science.gov (United States)

    Gontis, Vygintas; Kononovicius, Aleksejus

    2014-01-01

    We are looking for an agent-based treatment of the financial markets, considering the necessity to build bridges between microscopic, agent based, and macroscopic, phenomenological modeling. The acknowledgment that the agent-based modeling framework, which may provide qualitative and quantitative understanding of the financial markets, is very ambiguous emphasizes the exceptional value of well-defined, analytically tractable agent systems. Herding, one of the behavioral peculiarities considered in behavioral finance, is the main property of the agent interactions we deal with in this contribution. Looking for a consentaneous agent-based and macroscopic approach, we combine two origins of the noise: an exogenous one, related to the information flow, and an endogenous one, arising from the complex stochastic dynamics of agents. As a result we propose a three-state agent-based herding model of the financial markets. From this agent-based model we derive a set of stochastic differential equations, which describes the underlying macroscopic dynamics of the agent population and the log price in the financial markets. The obtained solution is then subjected to the exogenous noise, which shapes instantaneous return fluctuations. We test both Gaussian and q-Gaussian noise as a source of the short term fluctuations. The resulting model of the return in the financial markets with the same set of parameters reproduces empirical probability and spectral densities of absolute return observed in the New York, Warsaw and NASDAQ OMX Vilnius Stock Exchanges. Our result confirms the prevalent idea in behavioral finance that herding interactions may be dominant over agent rationality and contribute towards bubble formation.
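
    A hedged, simplified sketch of the general recipe (simulate an endogenous herding state with a stochastic differential equation, then let exogenous noise shape instantaneous returns): the two-state Kirman-type diffusion below is a stand-in, not the authors' three-state model, and every parameter value is invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    eps1, eps2, h = 0.2, 0.2, 5.0        # idiosyncratic switching rates and herding strength
    dt, steps = 1e-3, 100_000
    x = np.empty(steps)                  # fraction of agents in one of the two states
    x[0] = 0.5
    for k in range(steps - 1):
        drift = eps1 * (1.0 - x[k]) - eps2 * x[k]
        diffusion = np.sqrt(max(2.0 * h * x[k] * (1.0 - x[k]), 0.0))
        step = drift * dt + diffusion * np.sqrt(dt) * rng.standard_normal()
        x[k + 1] = min(max(x[k] + step, 0.0), 1.0)   # Euler-Maruyama step, clipped to [0, 1]

    # Exogenous Gaussian noise modulated by the endogenous herding state gives a crude return series.
    returns = 0.01 * (1.0 + 5.0 * x[:-1]) * rng.standard_normal(steps - 1)
    print(returns.std())
    ```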

  10. The design, results and future development of the National Energy Strategy Environmental Analysis Model (NESEAM)

    International Nuclear Information System (INIS)

    Fisher, R.E.; Boyd, G.A.; Breed, W.S.

    1991-01-01

    The National Energy Strategy Environmental Analysis Model (NESEAM) has been developed to project emissions for the National Energy Strategy (NES). Two scenarios were evaluated for the NES, a Current Policy Base Case and a NES Action Case. The results from the NES Action Case project much lower emissions than the Current Policy Base Case. Future enhancements to NESEAM will focus on fuel cycle analysis, including future technologies and additional pollutants to model. NESEAM's flexibility will allow it to model other future legislative issues. 7 refs., 4 figs., 2 tabs

  11. Exact results for survival probability in the multistate Landau-Zener model

    International Nuclear Information System (INIS)

    Volkov, M V; Ostrovsky, V N

    2004-01-01

    An exact formula is derived for survival probability in the multistate Landau-Zener model in the special case where the initially populated state corresponds to the extremal (maximum or minimum) slope of a linear diabatic potential curve. The formula was originally guessed by S Brundobler and V Elzer (1993 J. Phys. A: Math. Gen. 26 1211) based on numerical calculations. It is a simple generalization of the expression for the probability of diabatic passage in the famous two-state Landau-Zener model. Our result is obtained via analysis and summation of the entire perturbation theory series

  12. Linear regression metamodeling as a tool to summarize and present simulation model results.

    Science.gov (United States)

    Jalal, Hawre; Dowd, Bryan; Sainfort, François; Kuntz, Karen M

    2013-10-01

    Modelers lack a tool to systematically and clearly present complex model results, including those from sensitivity analyses. The objective was to propose linear regression metamodeling as a tool to increase transparency of decision analytic models and better communicate their results. We used a simplified cancer cure model to demonstrate our approach. The model computed the lifetime cost and benefit of 3 treatment options for cancer patients. We simulated 10,000 cohorts in a probabilistic sensitivity analysis (PSA) and regressed the model outcomes on the standardized input parameter values in a set of regression analyses. We used the regression coefficients to describe measures of sensitivity analyses, including threshold and parameter sensitivity analyses. We also compared the results of the PSA to deterministic full-factorial and one-factor-at-a-time designs. The regression intercept represented the estimated base-case outcome, and the other coefficients described the relative parameter uncertainty in the model. We defined simple relationships that compute the average and incremental net benefit of each intervention. Metamodeling produced outputs similar to traditional deterministic 1-way or 2-way sensitivity analyses but was more reliable since it used all parameter values. Linear regression metamodeling is a simple, yet powerful, tool that can assist modelers in communicating model characteristics and sensitivity analyses.
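
    The core computation is easy to sketch. Assuming a hypothetical matrix of standardized PSA input draws and a simulated outcome vector (none of the numbers below come from the paper), the regression intercept estimates the base-case outcome and the standardized coefficients rank parameter influence:

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Hypothetical PSA sample: 10,000 parameter draws and a simulated net-benefit outcome.
    rng = np.random.default_rng(42)
    n = 10_000
    X = rng.normal(size=(n, 3))                        # already standardized input parameters
    outcome = 5.0 + 1.8 * X[:, 0] - 0.7 * X[:, 1] + rng.normal(scale=0.5, size=n)

    meta = LinearRegression().fit(X, outcome)          # the linear regression metamodel
    print("base case (intercept):", meta.intercept_)
    print("standardized sensitivities:", meta.coef_)   # larger |coefficient| = more influential parameter
    ```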

  13. Model-based Software Engineering

    DEFF Research Database (Denmark)

    Kindler, Ekkart

    2010-01-01

    The vision of model-based software engineering is to make models the main focus of software development and to automatically generate software from these models. Part of that idea works already today. But, there are still difficulties when it comes to behaviour. Actually, there is no lack in models...

  14. Landscape Epidemiology Modeling Using an Agent-Based Model and a Geographic Information System

    Directory of Open Access Journals (Sweden)

    S. M. Niaz Arifin

    2015-05-01

    Full Text Available A landscape epidemiology modeling framework is presented which integrates the simulation outputs from an established spatial agent-based model (ABM of malaria with a geographic information system (GIS. For a study area in Kenya, five landscape scenarios are constructed with varying coverage levels of two mosquito-control interventions. For each scenario, maps are presented to show the average distributions of three output indices obtained from the results of 750 simulation runs. Hot spot analysis is performed to detect statistically significant hot spots and cold spots. Additional spatial analysis is conducted using ordinary kriging with circular semivariograms for all scenarios. The integration of epidemiological simulation-based results with spatial analyses techniques within a single modeling framework can be a valuable tool for conducting a variety of disease control activities such as exploring new biological insights, monitoring epidemiological landscape changes, and guiding resource allocation for further investigation.

  15. A Causal Model of Consumer-Based Brand Equity

    Directory of Open Access Journals (Sweden)

    Szőcs Attila

    2015-12-01

    Full Text Available Branding literature suggests that consumer-based brand equity (CBBE) is a multidimensional construct. Starting from this approach and developing a conceptual multidimensional model, this study finds that CBBE is best modelled with a two-dimensional structure, and claims that it achieves this result by choosing a theoretically based causal specification. On the contrary, with a reflective specification one would be able to fit almost any valid construct because of the halo effect and common method bias. In the final model, Trust (in quality) and Advantage cause the second-order Brand Equity. The two-dimensional brand equity model is intuitive, easy to interpret and easy to measure, and may thus be a much more attractive means for management as well.

  16. A semi-analytical bearing model considering outer race flexibility for model based bearing load monitoring

    Science.gov (United States)

    Kerst, Stijn; Shyrokau, Barys; Holweg, Edward

    2018-05-01

    This paper proposes a novel semi-analytical bearing model addressing the flexibility of the bearing outer race structure. It furthermore presents the application of this model in a bearing load condition monitoring approach. The bearing model is developed because current computationally low-cost bearing models fail to provide an accurate description of the increasingly common flexible, size- and weight-optimized bearing designs, due to their assumptions of rigidity. In the proposed bearing model, raceway flexibility is described by the use of static deformation shapes. The excitation of the deformation shapes is calculated based on the modelled rolling element loads and a Fourier series based compliance approximation. The resulting model is computationally low cost and provides an accurate description of the rolling element loads for flexible outer raceway structures. The latter is validated by a simulation-based comparison study with a well-established bearing simulation software tool. An experimental study finally shows the potential of the proposed model in a bearing load monitoring approach.

  17. Generalised Chou-Yang model and recent results

    International Nuclear Information System (INIS)

    Fazal-e-Aleem; Rashid, H.

    1996-01-01

    It is shown that most recent results of the E710 and UA4/2 collaborations for the total cross section and ρ, together with earlier measurements, give good agreement with measurements for the differential cross section at 546 and 1800 GeV within the framework of the Generalised Chou-Yang model. These results are also compared with the predictions of other models. (author)

  18. Similar words analysis based on POS-CBOW language model

    Directory of Open Access Journals (Sweden)

    Dongru RUAN

    2015-10-01

    Full Text Available Similar words analysis is one of the important aspects in the field of natural language processing, and it has important research and application value in text classification, machine translation and information recommendation. Focusing on the features of Sina Weibo's short texts, this paper presents a language model named POS-CBOW, which is a continuous bag-of-words language model with a filtering layer and a part-of-speech tagging layer. The proposed approach can adjust the word vectors' similarity according to the cosine similarity and the word vectors' part-of-speech metrics. It can also filter the set of similar words on the basis of the statistical analysis model. The experimental results show that the similar words analysis algorithm based on the proposed POS-CBOW language model performs better than that based on the traditional CBOW language model.
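
    The ranking step itself is straightforward. A hedged sketch with made-up three-dimensional word vectors (a real POS-CBOW model would supply much higher-dimensional vectors and part-of-speech adjustments) ranks candidate words by cosine similarity to a query word:

    ```python
    import numpy as np

    def cosine(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    # Hypothetical word vectors; in the paper these would come from the trained POS-CBOW model.
    vectors = {
        "movie":  np.array([0.90, 0.10, 0.30]),
        "film":   np.array([0.85, 0.15, 0.35]),
        "banana": np.array([0.10, 0.90, 0.20]),
    }
    query = "movie"
    ranked = sorted((w for w in vectors if w != query),
                    key=lambda w: cosine(vectors[query], vectors[w]), reverse=True)
    print(ranked)   # most similar words first
    ```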

  19. Bayesian Based Diagnostic Model for Condition Based Maintenance of Offshore Wind Farms

    Directory of Open Access Journals (Sweden)

    Masoud Asgarpour

    2018-01-01

    Full Text Available Operation and maintenance costs are a major contributor to the Levelized Cost of Energy for electricity produced by offshore wind, and they can be significantly reduced if existing corrective actions are performed as efficiently as possible and if future corrective actions are avoided by performing sufficient preventive actions. This paper presents an applied and generic diagnostic model for fault detection and condition based maintenance of offshore wind components. The diagnostic model is based on two probabilistic matrices: first, a confidence matrix, representing the probability of detection using each fault detection method, and second, a diagnosis matrix, representing the individual outcome of each fault detection method. Once the confidence and diagnosis matrices of a component are defined, the individual diagnoses of each fault detection method are combined into a final verdict on the fault state of that component. Furthermore, this paper introduces a Bayesian updating model, based on observations collected by inspections, to decrease the uncertainty of the initial confidence matrix. The framework and implementation of the presented diagnostic model are further explained within a case study for a wind turbine component based on vibration, temperature, and oil particle fault detection methods. The last part of the paper discusses the case study results and presents conclusions.
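
    A hedged toy sketch of how individual detection-method verdicts might be fused by Bayesian updating (the reliability numbers, prior and verdicts below are invented and are not the paper's confidence or diagnosis matrices):

    ```python
    # Hypothetical detection-method reliabilities for one component:
    # P(method reports "fault" | component faulty) and P("fault" | healthy).
    methods = {
        "vibration":   (0.90, 0.10),
        "temperature": (0.70, 0.05),
        "oil":         (0.80, 0.15),
    }
    observations = {"vibration": True, "temperature": False, "oil": True}

    posterior = 0.05   # prior probability that the component is faulty
    for name, (p_hit, p_false) in methods.items():
        reported = observations[name]
        like_fault = p_hit if reported else 1 - p_hit
        like_ok = p_false if reported else 1 - p_false
        # Bayes update of the fault probability with each method's verdict.
        num = like_fault * posterior
        posterior = num / (num + like_ok * (1 - posterior))

    print("posterior fault probability:", round(posterior, 3))
    ```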

  20. Statistical methods applied to the study of opinion formation models: a brief overview and results of a numerical study of a model based on the social impact theory

    Science.gov (United States)

    Bordogna, Clelia María; Albano, Ezequiel V.

    2007-02-01

    The aim of this paper is twofold. On the one hand we present a brief overview on the application of statistical physics methods to the modelling of social phenomena focusing our attention on models for opinion formation. On the other hand, we discuss and present original results of a model for opinion formation based on the social impact theory developed by Latané. The presented model accounts for the interaction among the members of a social group under the competitive influence of a strong leader and the mass media, both supporting two different states of opinion. Extensive simulations of the model are presented, showing that they led to the observation of a rich scenery of complex behaviour including, among others, critical behaviour and phase transitions between a state of opinion dominated by the leader and another dominated by the mass media. The occurrence of interesting finite-size effects reveals that, in small communities, the opinion of the leader may prevail over that of the mass media. This observation is relevant for the understanding of social phenomena involving a finite number of individuals, in contrast to actual physical phase transitions that take place in the thermodynamic limit. Finally, we give a brief outlook of open questions and lines for future work.

  1. Statistical methods applied to the study of opinion formation models: a brief overview and results of a numerical study of a model based on the social impact theory

    International Nuclear Information System (INIS)

    Bordogna, Clelia Maria; Albano, Ezequiel V

    2007-01-01

    The aim of this paper is twofold. On the one hand we present a brief overview on the application of statistical physics methods to the modelling of social phenomena focusing our attention on models for opinion formation. On the other hand, we discuss and present original results of a model for opinion formation based on the social impact theory developed by Latane. The presented model accounts for the interaction among the members of a social group under the competitive influence of a strong leader and the mass media, both supporting two different states of opinion. Extensive simulations of the model are presented, showing that they led to the observation of a rich scenery of complex behaviour including, among others, critical behaviour and phase transitions between a state of opinion dominated by the leader and another dominated by the mass media. The occurrence of interesting finite-size effects reveals that, in small communities, the opinion of the leader may prevail over that of the mass media. This observation is relevant for the understanding of social phenomena involving a finite number of individuals, in contrast to actual physical phase transitions that take place in the thermodynamic limit. Finally, we give a brief outlook of open questions and lines for future work

  2. Model-based internal wave processing

    Energy Technology Data Exchange (ETDEWEB)

    Candy, J.V.; Chambers, D.H.

    1995-06-09

    A model-based approach is proposed to solve the oceanic internal wave signal processing problem that is based on state-space representations of the normal-mode vertical velocity and plane wave horizontal velocity propagation models. It is shown that these representations can be utilized to spatially propagate the modal (depth) vertical velocity functions given the basic parameters (wave numbers, Brunt-Vaisala frequency profile etc.) developed from the solution of the associated boundary value problem as well as the horizontal velocity components. Based on this framework, investigations are made of model-based solutions to the signal enhancement problem for internal waves.

  3. Results of the ITER toroidal field model coil project

    International Nuclear Information System (INIS)

    Salpietro, E.; Maix, R.

    2001-01-01

    In the scope of the ITER EDA, one of the seven largest projects was devoted to the development, manufacture and testing of a Toroidal Field Model Coil (TFMC). The industry consortium AGAN manufactured the TFMC based on a conceptual design developed by the ITER EDA EU Home Team. The TFMC was completed and assembled in the test facility TOSKA of the Forschungszentrum Karlsruhe in the first half of 2001. The first testing phase started in June 2001 and lasted till October 2001. The first results have shown that the main goals of the project have been achieved

  4. Results of steel containment vessel model test

    International Nuclear Information System (INIS)

    Luk, V.K.; Ludwigsen, J.S.; Hessheimer, M.F.; Komine, Kuniaki; Matsumoto, Tomoyuki; Costello, J.F.

    1998-05-01

    A series of static overpressurization tests of scale models of nuclear containment structures is being conducted by Sandia National Laboratories for the Nuclear Power Engineering Corporation of Japan and the US Nuclear Regulatory Commission. Two tests are being conducted: (1) a test of a model of a steel containment vessel (SCV) and (2) a test of a model of a prestressed concrete containment vessel (PCCV). This paper summarizes the conduct of the high pressure pneumatic test of the SCV model and the results of that test. Results of this test are summarized and are compared with pretest predictions performed by the sponsoring organizations and others who participated in a blind pretest prediction effort. Questions raised by this comparison are identified and plans for posttest analysis are discussed

  5. Integration of Simulink Models with Component-based Software Models

    Directory of Open Access Journals (Sweden)

    MARIAN, N.

    2008-06-01

    Full Text Available Model based development aims to facilitate the development of embedded control systems by emphasizing the separation of the design level from the implementation level. Model based design involves the use of multiple models that represent different views of a system, having different semantics of abstract system descriptions. Usually, in mechatronic systems, design proceeds by iterating model construction, model analysis, and model transformation. When constructing a MATLAB/Simulink model, plant and controller behavior is simulated using graphical blocks to represent mathematical and logical constructs and process flow, and then software code is generated. A Simulink model is a representation of the design or implementation of a physical system that satisfies a set of requirements. A software component-based system aims to organize system architecture and behavior as a means of computation, communication and constraints, using computational blocks and aggregates for both discrete and continuous behavior, different interconnection and execution disciplines for event-based and time-based controllers, and so on, to meet the demands for more functionality, at ever lower prices, and under conflicting constraints. COMDES (Component-based Design of Software for Distributed Embedded Systems) is such a component-based system framework developed by the software engineering group of the Mads Clausen Institute for Product Innovation (MCI), University of Southern Denmark. Once specified, the software model has to be analyzed. One way of doing that is to integrate the model back into Simulink S-functions via wrapper files and use its extensive simulation features, thus allowing an early exploration of possible design choices across multiple disciplines. The paper describes a safe translation of a restricted set of MATLAB/Simulink blocks to COMDES software components, both for continuous and discrete behavior, and the transformation of the software system into the S

  6. Adaptive MPC based on MIMO ARX-Laguerre model.

    Science.gov (United States)

    Ben Abdelwahed, Imen; Mbarek, Abdelkader; Bouzrara, Kais

    2017-03-01

    This paper proposes a method for synthesizing an adaptive predictive controller using a reduced-complexity model. The latter is given by the projection of the ARX model onto Laguerre bases. The resulting model, termed MIMO ARX-Laguerre, is characterized by an easy recursive representation. The adaptive predictive control law is computed based on multi-step-ahead finite-element predictors, identified directly from experimental input/output data. The model is tuned at each iteration by an online identification algorithm for both the model parameters and the Laguerre poles. The proposed approach avoids the time-consuming numerical optimization algorithms associated with most common linear predictive control strategies, which makes it suitable for real-time implementation. The method is used to synthesize and test, in numerical simulations, adaptive predictive controllers for the CSTR process benchmark. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
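
    As a rough illustration of the Laguerre-basis idea, the sketch below (Python) filters an input through a cascade of discrete Laguerre filters and fits the expansion coefficients by ordinary least squares on synthetic data. It is a simplified single-input, input-only expansion with a fixed pole, not the paper's MIMO ARX-Laguerre model, which also expands past outputs and adapts the poles online; all signals and the pole value are illustrative.

```python
import numpy as np
from scipy.signal import lfilter

def laguerre_outputs(signal, pole, n_filters):
    """Filter a signal through a cascade of discrete Laguerre filters.

    First filter:  sqrt(1 - a^2) / (1 - a z^-1)
    Next filters:  previous filter cascaded with (z^-1 - a) / (1 - a z^-1)
    """
    a = pole
    outputs = []
    x = lfilter([np.sqrt(1 - a**2)], [1.0, -a], signal)
    outputs.append(x)
    for _ in range(1, n_filters):
        x = lfilter([-a, 1.0], [1.0, -a], x)
        outputs.append(x)
    return np.column_stack(outputs)

# Toy single-input/single-output identification (hypothetical data).
rng = np.random.default_rng(0)
u = rng.standard_normal(500)                      # input sequence
y = lfilter([0.2, 0.1], [1.0, -0.8], u)           # "true" plant output
Phi = laguerre_outputs(u, pole=0.7, n_filters=4)  # regressors on the Laguerre basis
theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # least-squares coefficients
y_hat = Phi @ theta
print("fit error:", np.mean((y - y_hat) ** 2))
```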

  7. Design Of Computer Based Test Using The Unified Modeling Language

    Science.gov (United States)

    Tedyyana, Agus; Danuri; Lidyawati

    2017-12-01

    The admission selection of Politeknik Negeri Bengkalis through interest and talent search (PMDK), the joint admission test for state polytechnics (SB-UMPN) and the independent route (UM-Polbeng) were conducted using a Paper-Based Test (PBT). The Paper-Based Test model has some weaknesses: it wastes too much paper, questions can leak to the public, and test results can be manipulated. This research aimed to create a Computer-Based Test (CBT) model by using the Unified Modeling Language (UML), which consists of use case diagrams, activity diagrams and sequence diagrams. During the design of the application, it is important to pay attention to how the test questions are protected before they are shown, through an encryption and decryption process; the RSA cryptography algorithm was used for this. The questions drawn from the question bank were then randomized using the Fisher-Yates shuffle method. The network architecture used in the Computer-Based Test application was a client-server model on a Local Area Network (LAN). The result of the design was the Computer-Based Test application for the admission selection of Politeknik Negeri Bengkalis.
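
    The randomization step can be illustrated with a standard Fisher-Yates shuffle. This is a generic Python sketch, not the application's actual code; the question bank shown is hypothetical.

```python
import random

def fisher_yates_shuffle(questions, seed=None):
    """Return an unbiased random ordering of the question list (Fisher-Yates)."""
    rng = random.Random(seed)
    items = list(questions)
    for i in range(len(items) - 1, 0, -1):
        j = rng.randint(0, i)          # pick from the not-yet-fixed prefix
        items[i], items[j] = items[j], items[i]
    return items

# Hypothetical question bank; each candidate gets an independent ordering.
bank = [f"Q{n}" for n in range(1, 11)]
print(fisher_yates_shuffle(bank))
```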

  8. Generalised Chou-Yang model and recent results

    International Nuclear Information System (INIS)

    Fazal-e-Aleem; Rashid, H.

    1995-09-01

    It is shown that the most recent results of the E710 and UA4/2 collaborations for the total cross section and ρ, together with earlier measurements, give good agreement with measurements of the differential cross section at 546 and 1800 GeV within the framework of the Generalised Chou-Yang model. These results are also compared with the predictions of other models. (author). 16 refs, 2 figs

  9. Generalised Chou-Yang model and recent results

    Energy Technology Data Exchange (ETDEWEB)

    Fazal-e-Aleem [International Centre for Theoretical Physics, Trieste (Italy); Rashid, H. [Punjab Univ., Lahore (Pakistan). Centre for High Energy Physics

    1996-12-31

    It is shown that the most recent results of the E710 and UA4/2 collaborations for the total cross section and ρ, together with earlier measurements, give good agreement with measurements of the differential cross section at 546 and 1800 GeV within the framework of the Generalised Chou-Yang model. These results are also compared with the predictions of other models. (author) 16 refs.

  10. Multi-Site Calibration of Linear Reservoir Based Geomorphologic Rainfall-Runoff Models

    Directory of Open Access Journals (Sweden)

    Bahram Saeidifarzad

    2014-09-01

    Full Text Available Multi-site optimization of two adapted event-based geomorphologic rainfall-runoff models was presented using the Non-dominated Sorting Genetic Algorithm (NSGA-II) method for the South Fork Eel River watershed, California. The first model was developed based on an Unequal Cascade of Reservoirs (UECR) and the second model was presented as a modified version of the Geomorphological Unit Hydrograph based on Nash's model (GUHN). Two calibration strategies, semi-lumped and semi-distributed, were considered for imposing (or not imposing) the geomorphology relations in the models. The results of the models were compared with Nash's model. Results obtained using the observed data of two stations in the multi-site optimization framework showed reasonable efficiency values in both the calibration and the verification steps. The outcomes also showed that semi-distributed calibration of the modified GUHN model slightly outperformed the other models in both upstream and downstream stations during calibration. Both calibration strategies for the developed UECR model showed slightly better performance in the downstream station during the verification phase, but in the upstream station the modified GUHN model with the semi-lumped strategy slightly outperformed the other models. The semi-lumped calibration strategy can lead to lag time parameters consistent with the basin geomorphology and may be more suitable for data-based statistical analyses of the rainfall-runoff process.

  11. A Model Based Approach to Increase the Part Accuracy in Robot Based Incremental Sheet Metal Forming

    International Nuclear Information System (INIS)

    Meier, Horst; Laurischkat, Roman; Zhu Junhong

    2011-01-01

    One main influence on the dimensional accuracy in robot based incremental sheet metal forming results from the compliance of the involved robot structures. Compared to conventional machine tools, the low stiffness of the robot's kinematic structure results in a significant deviation from the planned tool path and therefore in a shape of insufficient quality. To predict and compensate these deviations offline, a model based approach has been developed, consisting of a finite element approach to simulate the sheet forming and a multi body system modeling the compliant robot structure. This paper describes the implementation and experimental verification of the multi body system model and its included compensation method.

  12. Marker-based or model-based RSA for evaluation of hip resurfacing arthroplasty? A clinical validation and 5-year follow-up.

    Science.gov (United States)

    Lorenzen, Nina Dyrberg; Stilling, Maiken; Jakobsen, Stig Storgaard; Gustafson, Klas; Søballe, Kjeld; Baad-Hansen, Thomas

    2013-11-01

    The stability of implants is vital to ensure long-term survival. RSA determines micro-motions of implants as a predictor of early implant failure. RSA can be performed as a marker- or model-based analysis. So far, CAD and RE model-based RSA have not been validated for use in hip resurfacing arthroplasty (HRA). A phantom study determined the precision of marker-based and CAD and RE model-based RSA on a HRA implant. In a clinical study, 19 patients were followed with stereoradiographs until 5 years after surgery. Analysis of double-examination migration results determined the clinical precision of marker-based and CAD model-based RSA, and at the 5-year follow-up, results of the total translation (TT) and the total rotation (TR) for marker- and CAD model-based RSA were compared. The phantom study showed that comparison of the precision (SDdiff) in marker-based RSA analysis was more precise than model-based RSA analysis in TT (p CAD RSA analysis (p = 0.002), but showed no difference between the marker- and CAD model-based RSA analysis regarding the TR (p = 0.91). Comparing the mean signed values regarding the TT and the TR at the 5-year follow-up in 13 patients, the TT was lower (p = 0.03) and the TR higher (p = 0.04) in the marker-based RSA compared to CAD model-based RSA. The precision of marker-based RSA was significantly better than model-based RSA. However, problems with occluded markers led to the exclusion of many patients, which was not a problem with model-based RSA. HRA implants were stable at the 5-year follow-up. The detection limit was 0.2 mm TT and 1° TR for marker-based and 0.5 mm TT and 1° TR for CAD model-based RSA for HRA.

  13. Impact of Learning Model Based on Cognitive Conflict toward Student’s Conceptual Understanding

    Science.gov (United States)

    Mufit, F.; Festiyed, F.; Fauzan, A.; Lufri, L.

    2018-04-01

    The problems that often occur in the learning of physics are misconceptions and low conceptual understanding. Misconceptions happen not only to school students, but also to college students and teachers. Existing learning models have not had much impact on improving conceptual understanding or on remediating student misconceptions. This study aims to assess the impact of a cognitive conflict-based learning model in improving conceptual understanding and remediating student misconceptions. The research method used is design/development research. The product developed is a cognitive conflict-based learning model along with its components. This article reports on the product design results, validity tests, and practicality tests. The study resulted in the design of a cognitive conflict-based learning model with four learning syntax stages, namely (1) preconception activation, (2) presentation of cognitive conflict, (3) discovery of concepts and equations, and (4) reflection. The results of validity tests by several experts on aspects of content, didactics, appearance and language indicate very valid criteria. Product trial results also show that the product is very practical to use. Based on pretest and posttest results, the cognitive conflict-based learning model has a good impact on improving conceptual understanding and remediating misconceptions, especially in high-ability students.

  14. Particle-based model for skiing traffic.

    Science.gov (United States)

    Holleczek, Thomas; Tröster, Gerhard

    2012-05-01

    We develop and investigate a particle-based model for ski slope traffic. Skiers are modeled as particles with a mass that are exposed to social and physical forces, which define the riding behavior of skiers during their descents on ski slopes. We also report position and speed data of 21 skiers recorded with GPS-equipped cell phones on two ski slopes. A comparison of these data with the trajectories resulting from computer simulations of our model shows a good correspondence. A study of the relationship among the density, speed, and flow of skiers reveals that congestion does not occur even with arrival rates of skiers exceeding the maximum ski lift capacity. In a sensitivity analysis, we identify the kinetic friction coefficient of skis on snow, the skier mass, the range of repelling social forces, and the arrival rate of skiers as the crucial parameters influencing the simulation results. Our model allows for the prediction of speed zones and skier densities on ski slopes, which is important in the prevention of skiing accidents.
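
    A minimal sketch of a social-force-style particle update is given below, assuming a driving force toward a desired downhill speed, kinetic friction of skis on snow and an exponential repelling force between skiers. The force terms and all parameter values are illustrative and not the authors' calibrated model.

```python
import numpy as np

def step(pos, vel, desired_dir, dt=0.05, mass=80.0, desired_speed=10.0,
         tau=0.5, mu_k=0.03, g=9.81, repulsion=5.0, sigma=3.0):
    """Advance skier particles one time step under driving, friction and social forces."""
    n = len(pos)
    # Driving force: relax toward the desired velocity along the slope direction.
    f = mass * (desired_speed * desired_dir - vel) / tau
    # Kinetic friction of skis on snow, opposing motion.
    speed = np.linalg.norm(vel, axis=1, keepdims=True) + 1e-9
    f -= mu_k * mass * g * vel / speed
    # Repelling social force between skier pairs (exponential in distance).
    for i in range(n):
        d = pos[i] - pos            # vectors from every skier to skier i
        dist = np.linalg.norm(d, axis=1) + 1e-9
        push = repulsion * np.exp(-dist / sigma)[:, None] * (d / dist[:, None])
        push[i] = 0.0
        f[i] += push.sum(axis=0)
    vel = vel + dt * f / mass
    pos = pos + dt * vel
    return pos, vel

rng = np.random.default_rng(1)
pos = rng.uniform(0, 30, size=(20, 2))          # 20 skiers on a 30 m wide strip
vel = np.zeros((20, 2))
downhill = np.tile([0.0, -1.0], (20, 1))        # slope points in -y
for _ in range(100):
    pos, vel = step(pos, vel, downhill)
```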

  15. Recent shell-model results for exotic nuclei

    Directory of Open Access Journals (Sweden)

    Utsuno Yusuke

    2014-03-01

    Full Text Available We report on our recent advancement in the shell model and its applications to exotic nuclei, focusing on the shell evolution and large-scale calculations with the Monte Carlo shell model (MCSM). First, we test the validity of the monopole-based universal interaction (VMU) as a shell-model interaction by performing large-scale shell-model calculations in two different mass regions using effective interactions which partly comprise VMU. Those calculations are successful and provide a deeper insight into the shell evolution beyond the single-particle model, in particular showing that the evolution of the spin-orbit splitting due to the tensor force plays a decisive role in the structure of the neutron-rich N ∼ 28 region and antimony isotopes. Next, we give a brief overview of recent developments in MCSM, and show that it is applicable to exotic nuclei that involve many valence orbits. As an example of its applications to exotic nuclei, shape coexistence in 32Mg is examined.

  16. A stochastic HMM-based forecasting model for fuzzy time series.

    Science.gov (United States)

    Li, Sheng-Tun; Cheng, Yi-Chung

    2010-10-01

    Recently, fuzzy time series have attracted more academic attention than traditional time series due to their capability of dealing with the uncertainty and vagueness inherent in the data collected. The formulation of fuzzy relations is one of the key issues affecting forecasting results. Most of the present works adopt IF-THEN rules for relationship representation, which leads to higher computational overhead and rule redundancy. Sullivan and Woodall proposed a Markov-based formulation and a forecasting model to reduce computational overhead; however, its applicability is limited to handling one-factor problems. In this paper, we propose a novel forecasting model based on the hidden Markov model by enhancing Sullivan and Woodall's work to allow handling of two-factor forecasting problems. Moreover, in order to make the nature of conjecture and randomness of forecasting more realistic, the Monte Carlo method is adopted to estimate the outcome. To test the effectiveness of the resulting stochastic model, we conduct two experiments and compare the results with those from other models. The first experiment consists of forecasting the daily average temperature and cloud density in Taipei, Taiwan, and the second experiment is based on the Taiwan Weighted Stock Index by forecasting the exchange rate of the New Taiwan dollar against the U.S. dollar. In addition to improving forecasting accuracy, the proposed model adheres to the central limit theorem, and thus, the result statistically approximates to the real mean of the target value being forecast.
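
    As a simplified illustration of the Markov-chain-plus-Monte-Carlo idea, the sketch below forecasts a one-factor series by sampling state paths from a transition matrix; the fuzzy states, their representative values and the transition probabilities are illustrative, and the two-factor hidden-state structure of the proposed model is not reproduced.

```python
import numpy as np

# Hypothetical fuzzy states (e.g., temperature bins) and a transition matrix
# estimated from historical state sequences; values here are illustrative.
states = np.array([14.0, 16.0, 18.0, 20.0, 22.0])     # representative value per fuzzy set
P = np.array([[0.6, 0.3, 0.1, 0.0, 0.0],
              [0.2, 0.5, 0.2, 0.1, 0.0],
              [0.1, 0.2, 0.4, 0.2, 0.1],
              [0.0, 0.1, 0.3, 0.4, 0.2],
              [0.0, 0.0, 0.1, 0.3, 0.6]])

def monte_carlo_forecast(current_state, horizon, n_runs=10000, seed=0):
    """Estimate the expected value h steps ahead by sampling state paths."""
    rng = np.random.default_rng(seed)
    totals = np.zeros(horizon)
    for _ in range(n_runs):
        s = current_state
        for h in range(horizon):
            s = rng.choice(len(states), p=P[s])
            totals[h] += states[s]
    return totals / n_runs

print(monte_carlo_forecast(current_state=2, horizon=3))
```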

  17. Evaluation of temperature-based global solar radiation models in China

    DEFF Research Database (Denmark)

    Liu, Xiaoying; Mei, Xurong; Li, Yuzhong

    2009-01-01

    empirical equations to estimate these parameters. Two schemes in calculating ΔT were employed: ΔT1 (based on single day Tmin) used in the Harg and ΔT2 (based on 2-day average of Tmin) used in the B-C model. Results showed that the original B-C model performed similarly to the best performing modified Harg...

  18. Feature-Based and String-Based Models for Predicting RNA-Protein Interaction

    Directory of Open Access Journals (Sweden)

    Donald Adjeroh

    2018-03-01

    Full Text Available In this work, we study two approaches for the problem of RNA-Protein Interaction (RPI. In the first approach, we use a feature-based technique by combining extracted features from both sequences and secondary structures. The feature-based approach enhanced the prediction accuracy as it included much more available information about the RNA-protein pairs. In the second approach, we apply search algorithms and data structures to extract effective string patterns for prediction of RPI, using both sequence information (protein and RNA sequences, and structure information (protein and RNA secondary structures. This led to different string-based models for predicting interacting RNA-protein pairs. We show results that demonstrate the effectiveness of the proposed approaches, including comparative results against leading state-of-the-art methods.

  19. Model-based sensor-augmented pump therapy.

    Science.gov (United States)

    Grosman, Benyamin; Voskanyan, Gayane; Loutseiko, Mikhail; Roy, Anirban; Mehta, Aloke; Kurtz, Natalie; Parikh, Neha; Kaufman, Francine R; Mastrototaro, John J; Keenan, Barry

    2013-03-01

    In insulin pump therapy, optimization of bolus and basal insulin dose settings is a challenge. We introduce a new algorithm that provides individualized basal rates and new carbohydrate ratio and correction factor recommendations. The algorithm utilizes a mathematical model of blood glucose (BG) as a function of carbohydrate intake and delivered insulin, which includes individualized parameters derived from sensor BG and insulin delivery data downloaded from a patient's pump. A mathematical model of BG as a function of carbohydrate intake and delivered insulin was developed. The model includes fixed parameters and several individualized parameters derived from the subject's BG measurements and pump data. Performance of the new algorithm was assessed using n = 4 diabetic canine experiments over a 32 h duration. In addition, 10 in silico adults from the University of Virginia/Padova type 1 diabetes mellitus metabolic simulator were tested. The percentage of time in glucose range 80-180 mg/dl was 86%, 85%, 61%, and 30% using model-based therapy and [78%, 100%] (brackets denote multiple experiments conducted under the same therapy and animal model), [75%, 67%], 47%, and 86% for the control experiments for dogs 1 to 4, respectively. The BG measurements obtained in the simulation using our individualized algorithm were in 61-231 mg/dl min-max envelope, whereas use of the simulator's default treatment resulted in BG measurements 90-210 mg/dl min-max envelope. The study results demonstrate the potential of this method, which could serve as a platform for improving, facilitating, and standardizing insulin pump therapy based on a single download of data. © 2013 Diabetes Technology Society.

  20. Variance-based sensitivity analysis for wastewater treatment plant modelling.

    Science.gov (United States)

    Cosenza, Alida; Mannina, Giorgio; Vanrolleghem, Peter A; Neumann, Marc B

    2014-02-01

    Global sensitivity analysis (GSA) is a valuable tool to support the use of mathematical models that characterise technical or natural systems. In the field of wastewater modelling, most of the recent applications of GSA use either regression-based methods, which require close to linear relationships between the model outputs and model factors, or screening methods, which only yield qualitative results. However, due to the characteristics of membrane bioreactors (MBR) (non-linear kinetics, complexity, etc.) there is an interest to adequately quantify the effects of non-linearity and interactions. This can be achieved with variance-based sensitivity analysis methods. In this paper, the Extended Fourier Amplitude Sensitivity Testing (Extended-FAST) method is applied to an integrated activated sludge model (ASM2d) for an MBR system including microbial product formation and physical separation processes. Twenty-one model outputs located throughout the different sections of the bioreactor and 79 model factors are considered. Significant interactions among the model factors are found. Contrary to previous GSA studies for ASM models, we find the relationship between variables and factors to be non-linear and non-additive. By analysing the pattern of the variance decomposition along the plant, the model factors having the highest variance contributions were identified. This study demonstrates the usefulness of variance-based methods in membrane bioreactor modelling where, due to the presence of membranes and different operating conditions than those typically found in conventional activated sludge systems, several highly non-linear effects are present. Further, the obtained results highlight the relevant role played by the modelling approach for MBR taking into account simultaneously biological and physical processes. © 2013.
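
    The study applies Extended-FAST; as a simpler, self-contained illustration of variance-based sensitivity analysis, the sketch below estimates first-order Sobol indices with a Monte Carlo pick-and-freeze estimator on a toy function standing in for the wastewater model. The bounds, sample size and toy model are assumptions.

```python
import numpy as np

def first_order_sobol(model, bounds, n=4096, seed=0):
    """Monte Carlo estimate of first-order variance-based sensitivity indices.

    Uses the pick-and-freeze estimator: S_i = mean(f(B) * (f(A_Bi) - f(A))) / var(f(A)),
    where A_Bi is matrix A with column i taken from matrix B.
    """
    rng = np.random.default_rng(seed)
    k = len(bounds)
    lo, hi = np.array(bounds).T
    A = lo + (hi - lo) * rng.random((n, k))
    B = lo + (hi - lo) * rng.random((n, k))
    fA, fB = model(A), model(B)
    var = fA.var()
    S = np.empty(k)
    for i in range(k):
        ABi = A.copy()
        ABi[:, i] = B[:, i]
        S[i] = np.mean(fB * (model(ABi) - fA)) / var
    return S

# Toy model standing in for the ASM2d/MBR simulator output.
def toy(x):
    return x[:, 0] + 2 * x[:, 1] + x[:, 0] * x[:, 2]

print(first_order_sobol(toy, bounds=[(0, 1)] * 3))
```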

  1. Development of uncertainty-based work injury model using Bayesian structural equation modelling.

    Science.gov (United States)

    Chatterjee, Snehamoy

    2014-01-01

    This paper proposed a Bayesian method-based structural equation model (SEM) of miners' work injury for an underground coal mine in India. The environmental and behavioural variables for work injury were identified and causal relationships were developed. For Bayesian modelling, prior distributions of the SEM parameters are necessary to develop the model. In this paper, two approaches were adopted to obtain prior distributions for the factor loading parameters and structural parameters of the SEM. In the first approach, the prior distributions were considered as a fixed distribution function with specific parameter values, whereas, in the second approach, prior distributions of the parameters were generated from experts' opinions. The posterior distributions of these parameters were obtained by applying Bayes' rule. Markov Chain Monte Carlo sampling in the form of Gibbs sampling was applied for sampling from the posterior distribution. The results revealed that all coefficients of the structural and measurement model parameters are statistically significant with experts' opinion-based priors, whereas two coefficients are not statistically significant when fixed prior-based distributions are applied. The error statistics reveal that the Bayesian structural model provides a reasonably good fit of work injury, with a high coefficient of determination (0.91) and less mean squared error as compared to traditional SEM.

  2. Extending positive CLASS results across multiple instructors and multiple classes of Modeling Instruction

    Science.gov (United States)

    Brewe, Eric; Traxler, Adrienne; de la Garza, Jorge; Kramer, Laird H.

    2013-12-01

    We report on a multiyear study of student attitudes measured with the Colorado Learning Attitudes about Science Survey in calculus-based introductory physics taught with the Modeling Instruction curriculum. We find that five of six instructors and eight of nine sections using Modeling Instruction showed significantly improved attitudes from pre- to postcourse. Cohen’s d effect sizes range from 0.08 to 0.95 for individual instructors. The average effect was d=0.45, with a 95% confidence interval of (0.26-0.64). These results build on previously published results showing positive shifts in attitudes from Modeling Instruction classes. We interpret these data in light of other published positive attitudinal shifts and explore mechanistic explanations for similarities and differences with other published positive shifts.
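
    For readers unfamiliar with the effect-size measure quoted above, the snippet below computes Cohen's d from pre- and post-course scores using the pooled standard deviation; the score vectors are illustrative, not CLASS data.

```python
import numpy as np

def cohens_d(pre, post):
    """Cohen's d for pre/post attitude scores using the pooled standard deviation."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    n1, n2 = len(pre), len(post)
    pooled_sd = np.sqrt(((n1 - 1) * pre.var(ddof=1) + (n2 - 1) * post.var(ddof=1))
                        / (n1 + n2 - 2))
    return (post.mean() - pre.mean()) / pooled_sd

# Illustrative scores (fraction of favorable responses per student).
pre = [0.55, 0.60, 0.48, 0.62, 0.58]
post = [0.63, 0.66, 0.55, 0.70, 0.61]
print(round(cohens_d(pre, post), 2))
```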

  3. Artificial Neural Network Based Model of Photovoltaic Cell

    Directory of Open Access Journals (Sweden)

    Messaouda Azzouzi

    2017-03-01

    Full Text Available This work concerns the modeling of a photovoltaic system and the prediction of the sensitivity of electrical parameters (current, power) of six types of photovoltaic cells based on the voltage applied between terminals, using one of the best known artificial intelligence techniques, namely Artificial Neural Networks. The modeling and prediction results are presented as a function of the number of iterations, and different learning algorithms were compared to obtain the best results.

  4. Generalized model for Memristor-based Wien family oscillators

    KAUST Repository

    Talukdar, Abdul Hafiz Ibne

    2012-07-23

    In this paper, we report the unconventional characteristics of Memristor in Wien oscillators. Generalized mathematical models are developed to analyze four members of the Wien family using Memristors. Sustained oscillation is reported for all types though oscillating resistance and time dependent poles are present. We have also proposed an analytical model to estimate the desired amplitude of oscillation before the oscillation starts. These Memristor-based oscillation results, presented for the first time, are in good agreement with simulation results. © 2011 Elsevier Ltd.

  5. The Danish national passenger model – model specification and results

    DEFF Research Database (Denmark)

    Rich, Jeppe; Hansen, Christian Overgaard

    2016-01-01

    The paper describes the structure of the new Danish National Passenger model and provides on this basis a general discussion of large-scale model design, cost-damping and model validation. The paper aims at providing three main contributions to the existing literature. Firstly, at the general level......, the paper provides a description of a large-scale forecast model with a discussion of the linkage between population synthesis, demand and assignment. Secondly, the paper gives specific attention to model specification and in particular choice of functional form and cost-damping. Specifically we suggest...... a family of logarithmic spline functions and illustrate how it is applied in the model. Thirdly and finally, we evaluate model sensitivity and performance by evaluating the distance distribution and elasticities. In the paper we present results where the spline-function is compared with more traditional...

  6. Agent-based modeling of sustainable behaviors

    CERN Document Server

    Sánchez-Maroño, Noelia; Fontenla-Romero, Oscar; Polhill, J; Craig, Tony; Bajo, Javier; Corchado, Juan

    2017-01-01

    Using the O.D.D. (Overview, Design concepts, Detail) protocol, this title explores the role of agent-based modeling in predicting the feasibility of various approaches to sustainability. The chapters incorporated in this volume consist of real case studies to illustrate the utility of agent-based modeling and complexity theory in discovering a path to more efficient and sustainable lifestyles. The topics covered within include: households' attitudes toward recycling, designing decision trees for representing sustainable behaviors, negotiation-based parking allocation, auction-based traffic signal control, and others. This selection of papers will be of interest to social scientists who wish to learn more about agent-based modeling as well as experts in the field of agent-based modeling.

  7. Expediting model-based optoacoustic reconstructions with tomographic symmetries

    International Nuclear Information System (INIS)

    Lutzweiler, Christian; Deán-Ben, Xosé Luís; Razansky, Daniel

    2014-01-01

    Purpose: Image quantification in optoacoustic tomography implies the use of accurate forward models of excitation, propagation, and detection of optoacoustic signals while inversions with high spatial resolution usually involve very large matrices, leading to unreasonably long computation times. The development of fast and memory-efficient model-based approaches then represents an important challenge for advancing the quantitative and dynamic imaging capabilities of tomographic optoacoustic imaging. Methods: Herein, a method for simplification and acceleration of model-based inversions, relying on inherent symmetries present in common tomographic acquisition geometries, has been introduced. The method is showcased for the case of cylindrical symmetries by using polar image discretization of the time-domain optoacoustic forward model combined with efficient storage and inversion strategies. Results: The suggested methodology is shown to render fast and accurate model-based inversions in both numerical simulations and post mortem small animal experiments. In the case of a full-view detection scheme, the memory requirements are reduced by one order of magnitude while high-resolution reconstructions are achieved at video rate. Conclusions: By considering the rotational symmetry present in many tomographic optoacoustic imaging systems, the proposed methodology allows exploiting the advantages of model-based algorithms with feasible computational requirements and fast reconstruction times, so that its convenience and general applicability in optoacoustic imaging systems with tomographic symmetries is anticipated.

  8. Artificial emotional model based on finite state machine

    Institute of Scientific and Technical Information of China (English)

    MENG Qing-mei; WU Wei-guo

    2008-01-01

    According to basic emotional theory, an artificial emotional model based on the finite state machine (FSM) was presented. In the finite state machine model of emotion, the emotional space included the basic emotional space and the multiple emotional spaces. The emotion-switching diagram was defined and the transition function was developed using a Markov chain and a linear interpolation algorithm. The simulation model was built using the Stateflow and Simulink toolboxes on the Matlab platform. The model included three subsystems: the input subsystem, the emotion subsystem and the behavior subsystem. In the emotional subsystem, the responses of different personalities to external stimuli were described by defining a personal space. The model takes states from an emotional space and updates its state depending on its current state and the state of its input (also a state-emotion). The simulation model realizes the process of switching the emotion from the neutral state to the other basic emotions. The simulation result is shown to correspond to the emotion-switching law of human beings.
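
    A minimal sketch of such a probabilistic emotion-switching machine is given below; the states, transition probabilities and stimulus handling are illustrative and do not reproduce the personality spaces or interpolation scheme of the cited model.

```python
import random

# Hypothetical emotion states and Markov-style switching probabilities;
# values are illustrative, not those of the cited model.
TRANSITIONS = {
    "neutral": {"happy": 0.4, "sad": 0.2, "angry": 0.2, "neutral": 0.2},
    "happy":   {"neutral": 0.5, "happy": 0.5},
    "sad":     {"neutral": 0.6, "sad": 0.4},
    "angry":   {"neutral": 0.7, "angry": 0.3},
}

class EmotionFSM:
    """Finite state machine whose transitions are sampled from a Markov chain."""
    def __init__(self, state="neutral", seed=None):
        self.state = state
        self.rng = random.Random(seed)

    def step(self, stimulus_bias=None):
        probs = dict(TRANSITIONS[self.state])
        if stimulus_bias:                       # external stimulus nudges one emotion
            emotion, weight = stimulus_bias
            probs[emotion] = probs.get(emotion, 0.0) + weight
        total = sum(probs.values())
        r, acc = self.rng.uniform(0, total), 0.0
        for emotion, p in probs.items():
            acc += p
            if r <= acc:
                self.state = emotion
                break
        return self.state

fsm = EmotionFSM(seed=3)
print([fsm.step(("happy", 0.5)) for _ in range(5)])
```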

  9. Agent Based Modelling for Social Simulation

    OpenAIRE

    Smit, S.K.; Ubink, E.M.; Vecht, B. van der; Langley, D.J.

    2013-01-01

    This document is the result of an exploratory project looking into the status of, and opportunities for Agent Based Modelling (ABM) at TNO. The project focussed on ABM applications containing social interactions and human factors, which we termed ABM for social simulation (ABM4SS). During the course of this project two workshops were organized. At these workshops, a wide range of experts, both ABM experts and domain experts, worked on several potential applications of ABM. The results and ins...

  10. An Inter-Personal Information Sharing Model Based on Personalized Recommendations

    Science.gov (United States)

    Kamei, Koji; Funakoshi, Kaname; Akahani, Jun-Ichi; Satoh, Tetsuji

    In this paper, we propose an inter-personal information sharing model among individuals based on personalized recommendations. In the proposed model, we define an information resource as shared between people when both of them consider it important --- not merely when they both possess it. In other words, the model defines the importance of information resources based on personalized recommendations from identifiable acquaintances. The proposed method is based on a collaborative filtering system that focuses on evaluations from identifiable acquaintances. It utilizes both user evaluations for documents and their contents. In other words, each user profile is represented as a matrix of credibility to the other users' evaluations on each domain of interests. We extended the content-based collaborative filtering method to distinguish other users to whom the documents should be recommended. We also applied a concept-based vector space model to represent the domain of interests instead of the previous method which represented them by a term-based vector space model. We introduce a personalized concept-base compiled from each user's information repository to improve the information retrieval in the user's environment. Furthermore, the concept-spaces change from user to user since they reflect the personalities of the users. Because of different concept-spaces, the similarity between a document and a user's interest varies for each user. As a result, a user receives recommendations from other users who have different view points, achieving inter-personal information sharing based on personalized recommendations. This paper also describes an experimental simulation of our information sharing model. In our laboratory, five participants accumulated a personal repository of e-mails and web pages from which they built their own concept-base. Then we estimated the user profiles according to personalized concept-bases and sets of documents which others evaluated. We simulated

  11. Banking Crisis Early Warning Model based on a Bayesian Model Averaging Approach

    Directory of Open Access Journals (Sweden)

    Taha Zaghdoudi

    2016-08-01

    Full Text Available The succession of banking crises, most of which have resulted in huge economic and financial losses, has prompted several authors to study their determinants. These authors constructed early warning models to anticipate their occurrence. It is in this vein that our study takes its inspiration. In particular, we have developed an early warning model of banking crises based on a Bayesian model averaging approach. The results of this approach have allowed us to identify the involvement of declining bank profitability, deterioration of the competitiveness of traditional intermediation, banking concentration and higher real interest rates in triggering banking crises.

  12. Web-based reactive transport modeling using PFLOTRAN

    Science.gov (United States)

    Zhou, H.; Karra, S.; Lichtner, P. C.; Versteeg, R.; Zhang, Y.

    2017-12-01

    Actionable understanding of system behavior in the subsurface is required for a wide spectrum of societal and engineering needs by commercial firms, government entities, and academia. These needs include, for example, water resource management, precision agriculture, contaminant remediation, unconventional energy production, CO2 sequestration monitoring, and climate studies. Such understanding requires the ability to numerically model various coupled processes that occur across different temporal and spatial scales as well as multiple physical domains (reservoirs - overburden, surface-subsurface, groundwater-surface water, saturated-unsaturated zone). Currently, this ability is typically met through an in-house approach where computational resources, model expertise, and data for model parameterization are brought together to meet modeling needs. However, such an approach has multiple drawbacks which limit the application of high-end reactive transport codes such as the Department of Energy funded PFLOTRAN code. In addition, while many end users have a need for the capabilities provided by high-end reactive transport codes, they do not have the expertise - nor the time required to obtain the expertise - to effectively use these codes. We have developed and are actively enhancing a cloud-based software platform through which diverse users are able to easily configure, execute, visualize, share, and interpret PFLOTRAN models. This platform consists of a web application and available on-demand HPC computational infrastructure. The web application consists of (1) a browser-based graphical user interface which allows users to configure models and visualize results interactively, and (2) a central server with back-end relational databases which hold configuration, data, modeling results, and Python scripts for model configuration, and (3) an HPC environment for on-demand model execution. We will discuss lessons learned in the development of this platform, the

  13. Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model

    Science.gov (United States)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2014-02-01

    Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation
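
    A minimal sketch of the underlying idea, a parametric (Gaussian) synthetic likelihood built from repeated stochastic simulations and used inside a Metropolis-Hastings sampler, is given below. The simulator is a trivial stand-in for a forest model such as FORMIND, and the summary statistics, implicit flat prior and proposal scale are assumptions.

```python
import numpy as np

def simulate(theta, n=200, seed=None):
    """Stand-in stochastic simulator (the real case would be a forest model like FORMIND)."""
    rng = np.random.default_rng(seed)
    return rng.normal(theta[0], abs(theta[1]) + 1e-6, size=n)

def summaries(data):
    return np.array([data.mean(), data.std()])

def log_synthetic_likelihood(theta, obs_summary, n_reps=50):
    """Fit a Gaussian to simulated summary statistics and evaluate the observed summary."""
    sims = np.array([summaries(simulate(theta)) for _ in range(n_reps)])
    mu, cov = sims.mean(axis=0), np.cov(sims.T) + 1e-8 * np.eye(len(obs_summary))
    diff = obs_summary - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (diff @ np.linalg.solve(cov, diff) + logdet)

# Metropolis-Hastings over the parameters, using the synthetic likelihood.
rng = np.random.default_rng(42)
obs = summaries(simulate(np.array([1.0, 2.0]), seed=7))   # pretend field data
theta = np.array([0.0, 1.0])
chain, logl = [], log_synthetic_likelihood(theta, obs)
for _ in range(500):
    prop = theta + rng.normal(scale=0.2, size=2)
    logl_prop = log_synthetic_likelihood(prop, obs)
    if np.log(rng.random()) < logl_prop - logl:
        theta, logl = prop, logl_prop
    chain.append(theta.copy())
print(np.mean(chain[250:], axis=0))
```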

  14. Improvement of a Robotic Manipulator Model Based on Multivariate Residual Modeling

    Directory of Open Access Journals (Sweden)

    Serge Gale

    2017-07-01

    Full Text Available A new method is presented for extending a dynamic model of a six degrees of freedom robotic manipulator. A non-linear multivariate calibration of input–output training data from several typical motion trajectories is carried out with the aim of predicting the model's systematic output error at time (t + 1) from the known input reference up to and including time (t). A new partial least squares regression (PLSR) based method, nominal PLSR with interactions, was developed and used to handle unmodelled non-linearities. The performance of the new method is compared with least squares (LS). Different cross-validation schemes were compared in order to assess the sampling of the state space based on conventional trajectories. The method developed in the paper can be used as a fault monitoring mechanism and an early warning system for sensor failure. The results show that the suggested method improves the trajectory tracking performance of the robotic manipulator by extending the initial dynamic model of the manipulator.
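
    A rough sketch of the residual-modeling idea is shown below using scikit-learn's PLSRegression on synthetic data, with hand-built pairwise interaction terms so that a linear PLSR can absorb mild non-linearities. It is not the authors' nominal-PLSR-with-interactions method, and the data do not come from a robot.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Hypothetical training data: reference inputs up to time t and the measured
# trajectory error at t+1 (e.g., from several typical motion trajectories).
rng = np.random.default_rng(0)
X = rng.standard_normal((400, 6))                 # joint references / states at time t
true_coef = rng.standard_normal(6)
y = X @ true_coef + 0.3 * (X[:, 0] * X[:, 1]) + 0.05 * rng.standard_normal(400)

# Add pairwise interaction terms to let linear PLSR capture mild non-linearities.
inter = np.column_stack([X[:, i] * X[:, j]
                         for i in range(6) for j in range(i + 1, 6)])
X_aug = np.hstack([X, inter])

pls = PLSRegression(n_components=5)
pls.fit(X_aug, y)
error_pred = pls.predict(X_aug).ravel()           # predicted systematic error at t+1
print("residual RMS:", np.sqrt(np.mean((y - error_pred) ** 2)))
```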

  15. EFFECTS OF COOPERATIVE LEARNING MODEL TYPE STAD JUST-IN TIME BASED ON THE RESULTS OF LEARNING TEACHING PHYSICS COURSE IN PHYSICS SCHOOL IN PHYSICS PROGRAM FACULTY UNIMED

    Directory of Open Access Journals (Sweden)

    Teguh Febri Sudarma

    2013-06-01

    Full Text Available This research aimed to determine: (1) students' learning outcomes when taught with the just-in-time teaching based STAD cooperative learning method compared with the STAD cooperative learning method, and (2) students' outcomes on the physics subject for those with high learning activity compared with low learning activity. The research sample was selected randomly by raffling four classes to get two classes. The first class was taught with the just-in-time teaching based STAD cooperative learning method, while the second class was taught with the STAD cooperative learning method. The instrument used was a conceptual understanding test of 7 essay questions that had been validated. The average gain value of students' learning results with the just-in-time teaching based STAD cooperative learning method was 0.47, higher than the average gain value of students' learning results with the STAD cooperative learning method. High learning activity and low learning activity gave different learning results: in this case the average gain value of students' learning results with the just-in-time teaching based STAD cooperative learning method was 0.48, higher than the average gain value with the STAD cooperative learning method. There was an interaction between the learning model and learning activity on students' physics learning result test.

  16. Temporal validation for landsat-based volume estimation model

    Science.gov (United States)

    Renaldo J. Arroyo; Emily B. Schultz; Thomas G. Matney; David L. Evans; Zhaofei Fan

    2015-01-01

    Satellite imagery can potentially reduce the costs and time associated with ground-based forest inventories; however, for satellite imagery to provide reliable forest inventory data, it must produce consistent results from one time period to the next. The objective of this study was to temporally validate a Landsat-based volume estimation model in a four county study...

  17. Implicit Three-Dimensional Geo-Modelling Based on HRBF Surface

    Science.gov (United States)

    Gou, J.; Zhou, W.; Wu, L.

    2016-10-01

    Three-dimensional (3D) geological models are important representations of the results of regional geological surveys. However, the process of constructing 3D geological models from two-dimensional (2D) geological elements remains difficult and time-consuming. This paper proposes a method of migrating from 2D elements to 3D models. First, the geological interfaces were constructed using the Hermite Radial Basis Function (HRBF) to interpolate the boundaries and attitude data. Then, the subsurface geological bodies were extracted from the spatial map area using the Boolean method between the HRBF surface and the fundamental body. Finally, the top surfaces of the geological bodies were constructed by coupling the geological boundaries to digital elevation models. Based on this workflow, a prototype system was developed, and typical geological structures (e.g., folds, faults, and strata) were simulated. Geological models were constructed through this workflow based on realistic regional geological survey data. For extended applications in 3D modelling of other kinds of geo-objects, mining ore body models and urban geotechnical engineering stratum models were constructed by this method from drill-hole data. The model construction process was rapid, and the resulting models accorded with the constraints of the original data.

  18. 3D virtual human rapid modeling method based on top-down modeling mechanism

    Directory of Open Access Journals (Sweden)

    LI Taotao

    2017-01-01

    Full Text Available Aiming to satisfy the vast custom-made character demand of 3D virtual humans and the need for rapid modeling in the field of 3D virtual reality, a new virtual human top-down rapid modeling method is put forward in this paper based on a systematic analysis of the current situation and shortcomings of virtual human modeling technology. After the top-level realization of the virtual human hierarchical structure frame design, modular expression of the virtual human and parameter design for each module are achieved gradually, level by level downwards. While the relationships of connectors and mapping restraints among different modules are established, the definition of the size and texture parameters is also completed. A standardized process is meanwhile produced to support and adapt the virtual human top-down rapid modeling practice. Finally, a modeling application, which takes a Chinese captain character as an example, is carried out to validate the virtual human rapid modeling method based on the top-down modeling mechanism. The result demonstrates high modeling efficiency and provides one new concept for 3D virtual human geometric modeling and texture modeling.

  19. A knowledge representation meta-model for rule-based modelling of signalling networks

    Directory of Open Access Journals (Sweden)

    Adrien Basso-Blandin

    2016-03-01

    Full Text Available The study of cellular signalling pathways and their deregulation in disease states, such as cancer, is a large and extremely complex task. Indeed, these systems involve many parts and processes but are studied piecewise and their literatures and data are consequently fragmented, distributed and sometimes—at least apparently—inconsistent. This makes it extremely difficult to build significant explanatory models with the result that effects in these systems that are brought about by many interacting factors are poorly understood. The rule-based approach to modelling has shown some promise for the representation of the highly combinatorial systems typically found in signalling where many of the proteins are composed of multiple binding domains, capable of simultaneous interactions, and/or peptide motifs controlled by post-translational modifications. However, the rule-based approach requires highly detailed information about the precise conditions for each and every interaction which is rarely available from any one single source. Rather, these conditions must be painstakingly inferred and curated, by hand, from information contained in many papers—each of which contains only part of the story. In this paper, we introduce a graph-based meta-model, attuned to the representation of cellular signalling networks, which aims to ease this massive cognitive burden on the rule-based curation process. This meta-model is a generalization of that used by Kappa and BNGL which allows for the flexible representation of knowledge at various levels of granularity. In particular, it allows us to deal with information which has either too little, or too much, detail with respect to the strict rule-based meta-model. Our approach provides a basis for the gradual aggregation of fragmented biological knowledge extracted from the literature into an instance of the meta-model from which we can define an automated translation into executable Kappa programs.

  20. Medical applications of model-based dynamic thermography

    Science.gov (United States)

    Nowakowski, Antoni; Kaczmarek, Mariusz; Ruminski, Jacek; Hryciuk, Marcin; Renkielska, Alicja; Grudzinski, Jacek; Siebert, Janusz; Jagielak, Dariusz; Rogowski, Jan; Roszak, Krzysztof; Stojek, Wojciech

    2001-03-01

    The proposal to use active thermography in medical diagnostics is promising in some applications concerning investigation of directly accessible parts of the human body. The combination of dynamic thermograms with thermal models of the investigated structures gives an attractive possibility to reconstruct internal structure based on the different thermal properties of biological tissues. Measurements of temperature distribution synchronized with external light excitation allow registration of dynamic changes of local temperature dependent on heat exchange conditions. Preliminary results of active thermography applications in medicine are discussed. For skin and under-skin tissues an equivalent thermal model may be determined. For the assumed model its effective parameters may be reconstructed based on the results of transient thermal processes. For known thermal diffusivity and conductivity of specific tissues the local thickness of a two- or three-layer structure may be calculated. Results of some medical cases as well as reference data of an in vivo study on animals are presented. The method was also applied to evaluate the state of the human heart during open chest cardio-surgical interventions. Reference studies of evoked heart infarct in pigs are reported, too. We see the proposed technique, new in medical applications, as a promising diagnostic tool. It is a fully non-invasive, clean, handy, fast and affordable method giving not only a qualitative view of investigated surfaces but also an objective quantitative measurement result, accurate enough for many applications including fast screening of affected tissues.

  1. New results in the Dual Parton Model

    International Nuclear Information System (INIS)

    Van, J.T.T.; Capella, A.

    1984-01-01

    In this paper, the similarity between the x distributions for particle production and the fragmentation functions observed in e+e- collisions and in deep inelastic scattering is presented. Based on this observation, the authors develop a complete approach to multiparticle production which incorporates the most important features and concepts learned about high energy collisions: 1. Topological expansion: the dominant diagram at high energy corresponds to the simplest topology. 2. Unitarity: diagrams of various topology contribute to the cross sections in a way that unitarity is preserved. 3. Regge behaviour and duality. 4. Partonic structure of hadrons. These general theoretical ideas result from many joint experimental and theoretical efforts on the study of soft hadron physics. The dual parton model is able to explain all the experimental features from FNAL to SPS collider energies. It has all the properties of an S-matrix theory and provides a unified description of hadron-hadron, hadron-nucleus and nucleus-nucleus collisions.

  2. INTRAVAL test case 1b - modelling results

    International Nuclear Information System (INIS)

    Jakob, A.; Hadermann, J.

    1991-07-01

    This report presents results obtained within Phase I of the INTRAVAL study. Six different models are fitted to the results of four infiltration experiments with 233U tracer on small samples of crystalline bore cores originating from deep drillings in Northern Switzerland. Four of these are dual porosity media models taking into account advection and dispersion in water conducting zones (either tube-like veins or planar fractures), matrix diffusion out of these into pores of the solid phase, and either non-linear or linear sorption of the tracer onto inner surfaces. The remaining two are equivalent porous media models (excluding matrix diffusion) including either non-linear sorption onto surfaces of a single fissure family or linear sorption onto surfaces of several different fissure families. The fits to the experimental data have been carried out by a Marquardt-Levenberg procedure yielding error estimates of the parameters, correlation coefficients and also, as a measure for the goodness of the fits, the minimum values of the χ² merit function. The effects of different upstream boundary conditions are demonstrated and the penetration depth for matrix diffusion is discussed briefly for both alternative flow path scenarios. The calculations show that the dual porosity media models are significantly more appropriate to the experimental data than the single porosity media concepts. Moreover, it is matrix diffusion rather than the non-linearity of the sorption isotherm which is responsible for the tailing part of the break-through curves. The extracted parameter values for some models for both the linear and non-linear (Freundlich) sorption isotherms are consistent with the results of independent static batch sorption experiments. From the fits, it is generally not possible to discriminate between the two alternative flow path geometries. On the basis of the modelling results, some proposals for further experiments are presented. (author) 15 refs., 23 figs., 7 tabs
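
    As a generic illustration of the fitting procedure (not the dual-porosity transport models or bore-core data of the study), the sketch below fits a toy one-dimensional breakthrough curve to noisy synthetic data with SciPy's Levenberg-Marquardt least-squares solver and derives approximate parameter uncertainties from the Jacobian.

```python
import numpy as np
from scipy.optimize import least_squares

def breakthrough(t, params):
    """Toy advection-dispersion breakthrough curve (1D, pulse input).

    A stand-in for the dual-porosity transport models fitted in the study;
    c0, v, D and the observation distance are illustrative assumptions.
    """
    c0, v, D = params
    x = 0.1                                  # observation distance [m], assumed
    return c0 * (x / np.sqrt(4 * np.pi * D * t**3)) * np.exp(-(x - v * t)**2 / (4 * D * t))

# Synthetic "measured" data with noise standing in for the tracer break-through curves.
t = np.linspace(0.1, 50, 80)
true = np.array([1.0, 0.02, 1e-4])
rng = np.random.default_rng(0)
obs = breakthrough(t, true) * (1 + 0.05 * rng.standard_normal(t.size))

res = least_squares(lambda p: breakthrough(t, p) - obs,
                    x0=[0.5, 0.01, 5e-5],
                    method="lm")             # Levenberg-Marquardt, as in the study
J = res.jac
cov = np.linalg.inv(J.T @ J) * (res.fun @ res.fun) / (t.size - len(true))
print("fitted:", res.x, "1-sigma:", np.sqrt(np.diag(cov)))
```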

  3. A 2-D process-based model for suspended sediment dynamics: a first step towards ecological modeling

    Science.gov (United States)

    Achete, F. M.; van der Wegen, M.; Roelvink, D.; Jaffe, B.

    2015-06-01

    In estuaries suspended sediment concentration (SSC) is one of the most important contributors to turbidity, which influences habitat conditions and ecological functions of the system. Sediment dynamics differs depending on sediment supply and hydrodynamic forcing conditions that vary over space and over time. A robust sediment transport model is a first step in developing a chain of models enabling simulations of contaminants, phytoplankton and habitat conditions. This work aims to determine turbidity levels in the complex-geometry delta of the San Francisco estuary using a process-based approach (Delft3D Flexible Mesh software). Our approach includes a detailed calibration against measured SSC levels, a sensitivity analysis on model parameters and the determination of a yearly sediment budget as well as an assessment of model results in terms of turbidity levels for a single year, water year (WY) 2011. Model results show that our process-based approach is a valuable tool in assessing sediment dynamics and their related ecological parameters over a range of spatial and temporal scales. The model may act as the base model for a chain of ecological models assessing the impact of climate change and management scenarios. Here we present a modeling approach that, with limited data, produces reliable predictions and can be useful for estuaries without a large amount of processes data.

  4. A 2-D process-based model for suspended sediment dynamics: A first step towards ecological modeling

    Science.gov (United States)

    Achete, F. M.; van der Wegen, M.; Roelvink, D.; Jaffe, B.

    2015-01-01

    In estuaries suspended sediment concentration (SSC) is one of the most important contributors to turbidity, which influences habitat conditions and ecological functions of the system. Sediment dynamics differs depending on sediment supply and hydrodynamic forcing conditions that vary over space and over time. A robust sediment transport model is a first step in developing a chain of models enabling simulations of contaminants, phytoplankton and habitat conditions. This work aims to determine turbidity levels in the complex-geometry delta of the San Francisco estuary using a process-based approach (Delft3D Flexible Mesh software). Our approach includes a detailed calibration against measured SSC levels, a sensitivity analysis on model parameters and the determination of a yearly sediment budget as well as an assessment of model results in terms of turbidity levels for a single year, water year (WY) 2011. Model results show that our process-based approach is a valuable tool in assessing sediment dynamics and their related ecological parameters over a range of spatial and temporal scales. The model may act as the base model for a chain of ecological models assessing the impact of climate change and management scenarios. Here we present a modeling approach that, with limited data, produces reliable predictions and can be useful for estuaries without a large amount of processes data.

  5. A framework for modeling scenario-based barrier island storm impacts

    Science.gov (United States)

    Mickey, Rangley; Long, Joseph W.; Dalyander, P. Soupy; Plant, Nathaniel G.; Thompson, David M.

    2018-01-01

    Methods for investigating the vulnerability of existing or proposed coastal features to storm impacts often rely on simplified parametric models or one-dimensional process-based modeling studies that focus on changes to a profile across a dune or barrier island. These simple studies tend to neglect the impacts to curvilinear or alongshore varying island planforms, influence of non-uniform nearshore hydrodynamics and sediment transport, irregular morphology of the offshore bathymetry, and impacts from low magnitude wave events (e.g. cold fronts). Presented here is a framework for simulating regionally specific, low and high magnitude scenario-based storm impacts to assess the alongshore variable vulnerabilities of a coastal feature. Storm scenarios based on historic hydrodynamic conditions were derived and simulated using the process-based morphologic evolution model XBeach. Model results show that the scenarios predicted similar patterns of erosion and overwash when compared to observed qualitative morphologic changes from recent storm events that were not included in the dataset used to build the scenarios. The framework model simulations were capable of predicting specific areas of vulnerability in the existing feature and the results illustrate how this storm vulnerability simulation framework could be used as a tool to help inform the decision-making process for scientists, engineers, and stakeholders involved in coastal zone management or restoration projects.

  6. GIS-Based Population Model Applied to Nevada Transportation Routes

    International Nuclear Information System (INIS)

    Mills, G.S.; Neuhauser, K.S.

    1999-01-01

    Recently, a model based on geographic information system (GIS) processing of US Census Block data has made high-resolution population analysis for transportation risk analysis technically and economically feasible. Population density bordering each kilometer of a route may be tabulated with specific route sections falling into each of three categories (Rural, Suburban or Urban) identified for separate risk analysis. In addition to the improvement in resolution of Urban areas along a route, the model provides a statistically-based correction to population densities in Rural and Suburban areas where Census Block dimensions may greatly exceed the 800-meter scale of interest. A semi-automated application of the GIS model to a subset of routes in Nevada (related to the Yucca Mountain project) is presented, and the results are compared to previous models including a model based on published Census and other data. These comparisons demonstrate that meaningful improvement in accuracy and specificity of transportation risk analyses is dependent on correspondingly accurate and geographically-specific population density data

  7. Turbulence modeling with fractional derivatives: Derivation from first principles and initial results

    Science.gov (United States)

    Epps, Brenden; Cushman-Roisin, Benoit

    2017-11-01

    Fluid turbulence is an outstanding unsolved problem in classical physics, despite 120+ years of sustained effort. Given this history, we assert that a new mathematical framework is needed to make a transformative breakthrough. This talk offers one such framework, based upon kinetic theory tied to the statistics of turbulent transport. Starting from the Boltzmann equation and "Lévy α-stable distributions", we derive a turbulence model that expresses the turbulent stresses in the form of a fractional derivative, where the fractional order is tied to the transport behavior of the flow. Initial results are presented herein, for the cases of Couette-Poiseuille flow and 2D boundary layers. Among other results, our model is able to reproduce the logarithmic Law of the Wall in shear turbulence.
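
    The abstract does not give the closed form of the fractional-stress term; purely as an illustration, the sketch below shows how a fractional derivative of assumed order alpha can be evaluated numerically with a Grunwald-Letnikov sum. The grid spacing, the sampled function and the order are assumptions, not values from the talk.

        import numpy as np

        def gl_fractional_derivative(f, alpha, h):
            """Grunwald-Letnikov approximation of the order-alpha derivative of the
            sampled values f on a uniform grid with spacing h (illustrative only)."""
            n = len(f)
            # Build the binomial weights w_k = (-1)^k * C(alpha, k) by recurrence.
            w = np.empty(n)
            w[0] = 1.0
            for k in range(1, n):
                w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
            # D^alpha f(x_i) ~ h^(-alpha) * sum_k w_k * f(x_{i-k})
            d = np.array([np.dot(w[:i + 1], f[i::-1]) for i in range(n)])
            return d / h**alpha

        x = np.linspace(0.0, 1.0, 101)
        half_derivative = gl_fractional_derivative(x, alpha=0.5, h=x[1] - x[0])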

  8. The "proactive" model of learning: Integrative framework for model-free and model-based reinforcement learning utilizing the associative learning-based proactive brain concept.

    Science.gov (United States)

    Zsuga, Judit; Biro, Klara; Papp, Csaba; Tajti, Gabor; Gesztelyi, Rudolf

    2016-02-01

    Reinforcement learning (RL) is a powerful concept underlying forms of associative learning governed by the use of a scalar reward signal, with learning taking place if expectations are violated. RL may be assessed using model-based and model-free approaches. Model-based reinforcement learning involves the amygdala, the hippocampus, and the orbitofrontal cortex (OFC). The model-free system involves the pedunculopontine-tegmental nucleus (PPTgN), the ventral tegmental area (VTA) and the ventral striatum (VS). Based on the functional connectivity of the VS, the model-free and model-based RL systems center on the VS, which computes value by integrating model-free signals (received as reward prediction errors) with model-based, reward-related input. Using the concept of a reinforcement learning agent, we propose that the VS serves as the value function component of the RL agent. Regarding the model utilized for model-based computations, we turned to the proactive brain concept, which offers a ubiquitous function for the default network based on its great functional overlap with contextual associative areas. Hence, by means of the default network the brain continuously organizes its environment into context frames, enabling the formulation of analogy-based associations that are turned into predictions of what to expect. The OFC integrates reward-related information into context frames upon computing reward expectation by compiling stimulus-reward and context-reward information offered by the amygdala and hippocampus, respectively. Furthermore, we suggest that the integration of model-based expectations regarding reward into the value signal is further supported by the efferents of the OFC that reach structures canonical for model-free learning (e.g., the PPTgN, VTA, and VS). (c) 2016 APA, all rights reserved.
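
    To make the model-free/model-based distinction concrete, the following is a minimal, purely illustrative sketch (not taken from the article) of how a temporal-difference update driven by a reward prediction error can be combined with a value computed from a learned world model; the state space, learning rates and mixing weight are arbitrary assumptions.

        import numpy as np

        n_states, n_actions = 3, 2
        alpha, gamma, w = 0.1, 0.9, 0.5           # learning rate, discount, MB/MF mixing weight

        Q_mf = np.zeros((n_states, n_actions))            # model-free values
        T = np.ones((n_states, n_actions, n_states))      # transition pseudo-counts (the "model")
        R = np.zeros(n_states)                            # learned state rewards

        def update(s, a, r, s_next):
            # Model-free: temporal-difference update driven by the reward prediction error.
            rpe = r + gamma * Q_mf[s_next].max() - Q_mf[s, a]
            Q_mf[s, a] += alpha * rpe
            # Model-based: update the world model and the reward estimate.
            T[s, a, s_next] += 1
            R[s_next] += alpha * (r - R[s_next])

        def q_combined(s, a):
            # One-step lookahead through the learned model, mixed with the cached value.
            p = T[s, a] / T[s, a].sum()
            q_mb = np.dot(p, R + gamma * Q_mf.max(axis=1))
            return w * q_mb + (1 - w) * Q_mf[s, a]

        update(0, 1, 1.0, 2)
        value = q_combined(0, 1)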

  9. Going Multi-viral: Synthedemic Modelling of Internet-based Spreading Phenomena

    Directory of Open Access Journals (Sweden)

    Marily Nika

    2015-02-01

    Full Text Available Epidemics of a biological and technological nature pervade modern life. For centuries, scientific research focused on biological epidemics, with simple compartmental epidemiological models emerging as the dominant explanatory paradigm. Yet there has been limited translation of this effort to explain internet-based spreading phenomena. Indeed, single-epidemic models are inadequate to explain the multimodal nature of complex phenomena. In this paper we propose a novel paradigm for modelling internet-based spreading phenomena based on the composition of multiple compartmental epidemiological models. Our approach is inspired by Fourier analysis, but rather than trigonometric wave forms, our components are compartmental epidemiological models. We show results on simulated multiple epidemic data, swine flu data and BitTorrent downloads of a popular music artist. Our technique can characterise these multimodal data sets utilising a parsimonious number of subepidemic models.
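
    As a rough, hedged illustration of the multimodal-fitting idea, the sketch below fits a sum of two logistic waves to synthetic data with SciPy; logistic curves stand in for the compartmental subepidemic components actually used in the paper, and every parameter value is made up.

        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(t, k, r, t0):
            return k / (1.0 + np.exp(-r * (t - t0)))

        def two_wave(t, k1, r1, t01, k2, r2, t02):
            # Cumulative counts modelled as the superposition of two subepidemics.
            return logistic(t, k1, r1, t01) + logistic(t, k2, r2, t02)

        t = np.arange(100, dtype=float)
        observed = two_wave(t, 500, 0.3, 20, 800, 0.2, 60) \
                   + np.random.default_rng(1).normal(0, 5, t.size)
        params, _ = curve_fit(two_wave, t, observed, p0=[400, 0.1, 25, 700, 0.1, 55])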

  10. System Identification Based Proxy Model of a Reservoir under Water Injection

    Directory of Open Access Journals (Sweden)

    Berihun M. Negash

    2017-01-01

    Full Text Available Simulation of numerical reservoir models with thousands or millions of grid blocks may consume a significant amount of time and effort, even when high performance processors are used. In cases where simulation runs are required for sensitivity analysis, dynamic control, and optimization, the runs need to be repeated several times with continuously changing parameters, which makes the process even more time-consuming. Currently, proxy models based on response surfaces are used to lessen the time required for running simulations during sensitivity analysis and optimization. Proxy models are lighter mathematical models that run faster and perform in place of heavier models that require large computations. Nevertheless, to acquire data for modeling and validation and to develop the proxy model itself, hundreds of simulation runs are required. In this paper, a system identification based proxy model that requires only a single simulation run and a properly designed excitation signal is proposed and evaluated using a benchmark case study. The results show that, with proper design of the excitation signal and proper selection of the model structure, system identification based proxy models are practical and efficient alternatives for mimicking the performance of numerical reservoir models. The resulting proxy models have potential applications in dynamic well control and optimization.
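
    The article does not state which model structure it identifies, so the sketch below only illustrates the general single-run idea with an assumed ARX structure: excite a synthetic plant once with a pseudo-random binary signal and estimate the model coefficients by least squares.

        import numpy as np

        def fit_arx(y, u, na=2, nb=2):
            """Least-squares fit of an ARX model
            y[t] = sum_i a_i*y[t-i] + sum_j b_j*u[t-j] from one input/output record."""
            n = max(na, nb)
            rows = []
            for t in range(n, len(y)):
                rows.append(np.concatenate([y[t - na:t][::-1], u[t - nb:t][::-1]]))
            Phi = np.array(rows)
            theta, *_ = np.linalg.lstsq(Phi, y[n:], rcond=None)
            return theta[:na], theta[na:]

        # A single "simulation run": a pseudo-random binary excitation of a toy plant.
        rng = np.random.default_rng(0)
        u = np.where(rng.random(500) > 0.5, 1.0, -1.0)
        y = np.zeros(500)
        for t in range(2, 500):
            y[t] = 0.6 * y[t - 1] - 0.1 * y[t - 2] + 0.5 * u[t - 1] + 0.2 * u[t - 2]
        a, b = fit_arx(y, u)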

  11. Developing confidence in a coupled TH model based on the results of experiment by using engineering scale test facility, 'COUPLE'

    International Nuclear Information System (INIS)

    Fujisaki, Kiyoshi; Suzuki, Hideaki; Fujita, Tomoo

    2008-03-01

    It is necessary to understand quantitative changes of near-field conditions and processes over time and space for modeling the near-field evolution after emplacement of engineered barriers. However, the coupled phenomena in the near-field are complicated because thermal, hydraulic, mechanical and chemical processes interact with each other. The question is, therefore, whether the applied model represents the coupled behavior adequately or not. In order to develop confidence in the modeling, it is necessary to compare model results with those of coupled-behavior experiments in the laboratory or in situ. In this report, we evaluated the applicability of a coupled T-H model against the results of a laboratory coupled T-H experiment under simulated near-field conditions. As a result, it has been shown that the model fits the measured data reasonably well under these conditions. (author)

  12. Knowledge representation to support reasoning based on multiple models

    Science.gov (United States)

    Gillam, April; Seidel, Jorge P.; Parker, Alice C.

    1990-01-01

    Model Based Reasoning is a powerful tool used to design and analyze systems, which are often composed of numerous interactive, interrelated subsystems. Models of the subsystems are written independently and may be used together while they are still under development. Thus the models are not static. They evolve as information becomes obsolete, as improved artifact descriptions are developed, and as system capabilities change. Researchers are using three methods to support knowledge/data base growth, to track the model evolution, and to handle knowledge from diverse domains. First, the representation methodology is based on having pools, or types, of knowledge from which each model is constructed. Second, information is made explicit. This includes the interactions between components, the description of the artifact structure, and the constraints and limitations of the models. The third principle we have followed is the separation of the data and knowledge from the inferencing and equation solving mechanisms. This methodology is used in two distinct knowledge-based systems: one for the design of space systems and another for the synthesis of VLSI circuits. It has facilitated the growth and evolution of our models, made accountability of results explicit, and provided credibility for the user community. These capabilities have been implemented and are being used in actual design projects.

  13. Cognitive Modeling for Agent-Based Simulation of Child Maltreatment

    Science.gov (United States)

    Hu, Xiaolin; Puddy, Richard

    This paper extends previous work to develop cognitive modeling for agent-based simulation of child maltreatment (CM). The developed model is inspired by parental efficacy, parenting stress, and the theory of planned behavior. It provides an explanatory, process-oriented model of CM and incorporates causal relationships and feedback loops among different factors in the social ecology in order to simulate the dynamics of CM. We describe the model and present simulation results to demonstrate the features of this model.

  14. Hysteresis modeling based on saturation operator without constraints

    International Nuclear Information System (INIS)

    Park, Y.W.; Seok, Y.T.; Park, H.J.; Chung, J.Y.

    2007-01-01

    This paper proposes a simple way to model complex hysteresis in a magnetostrictive actuator by employing saturation operators without constraints. Having no constraints causes a singularity problem, i.e., the inverse matrix cannot be obtained when calculating the weights. To overcome this, a pseudoinverse concept is introduced. Simulation results are compared with experimental data from a Terfenol-D actuator. The proposed model is much closer to the experimental data than the modified PI model: the relative error is about 12% for the modified PI model and less than 1% for the proposed model.
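
    A minimal illustration of the pseudoinverse idea, using a made-up rank-deficient system rather than the actuator data:

        import numpy as np

        # Weights would normally be found from A w = y; with unconstrained operators
        # A can be rank-deficient, so A^-1 may not exist.  The Moore-Penrose
        # pseudoinverse returns the minimum-norm least-squares solution instead.
        A = np.array([[1.0, 2.0, 3.0],
                      [2.0, 4.0, 6.0],      # linearly dependent row -> singular system
                      [1.0, 0.0, 1.0]])
        y = np.array([1.0, 2.0, 0.5])
        w = np.linalg.pinv(A) @ y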

  15. The Use of Modeling-Based Text to Improve Students' Modeling Competencies

    Science.gov (United States)

    Jong, Jing-Ping; Chiu, Mei-Hung; Chung, Shiao-Lan

    2015-01-01

    This study investigated the effects of a modeling-based text on 10th graders' modeling competencies. Fifteen 10th graders read a researcher-developed modeling-based science text on the ideal gas law that included explicit descriptions and representations of modeling processes (i.e., model selection, model construction, model validation, model…

  16. A satellite-based global landslide model

    Directory of Open Access Journals (Sweden)

    A. Farahmand

    2013-05-01

    Full Text Available Landslides are devastating phenomena that cause huge damage around the world. This paper presents a quasi-global landslide model derived using satellite precipitation data, land-use land cover maps, and 250 m topography information. The suggested landslide model is based on Support Vector Machines (SVM), a machine learning algorithm. The National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC) landslide inventory data are used as observations and reference data. In all, 70% of the data are used for model development and training, whereas 30% are used for validation and verification. The results of 100 random subsamples of available landslide observations revealed that the suggested landslide model can predict historical landslides reliably. The average error of 100 iterations of landslide prediction is estimated to be approximately 7%, while false landslide events are observed in approximately 2% of cases.
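
    A hedged sketch of the SVM workflow described above; synthetic features stand in for the satellite precipitation, land-cover and topography predictors, and a single 70/30 split replaces the 100 random subsamples:

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        # X: one row per grid cell (e.g. rainfall, land cover class, slope);
        # y: 1 where a landslide was inventoried, 0 otherwise.  Random data stand in here.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 3))
        y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0.8).astype(int)

        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
        model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        model.fit(X_train, y_train)
        error_rate = 1.0 - model.score(X_test, y_test)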

  17. Application of regional physically-based landslide early warning model: tuning of the input parameters and validation of the results

    Science.gov (United States)

    D'Ambrosio, Michele; Tofani, Veronica; Rossi, Guglielmo; Salvatici, Teresa; Tacconi Stefanelli, Carlo; Rosi, Ascanio; Benedetta Masi, Elena; Pazzi, Veronica; Vannocci, Pietro; Catani, Filippo; Casagli, Nicola

    2017-04-01

    The Aosta Valley region is located in the North-West Alpine mountain chain. The geomorphology of the region is characterized by steep slopes and high climatic and altitudinal variability (ranging from 400 m a.s.l. on the Dora Baltea river floodplain to 4810 m a.s.l. on Mont Blanc). In the study area (zone B), located in the eastern part of Aosta Valley, heavy rainfall of about 800-900 mm per year is the main landslide trigger. These features lead to a high hydrogeological risk across the whole territory, as mass movements (mainly shallow rapid landslides and rock falls) affect 70% of the municipal areas. An in-depth study of the geotechnical and hydrological properties of the hillslopes controlling shallow landslide formation was conducted, with the aim of improving the reliability of a deterministic model named HIRESS (HIgh REsolution Stability Simulator). In particular, two campaigns of on-site measurements and laboratory experiments were performed. The data obtained have been studied in order to assess the relationships existing among the different parameters and the bedrock lithology. The soils analyzed at 12 survey points are mainly composed of sand and gravel, with highly variable contents of silt. The measured ranges of effective internal friction angle (from 25.6° to 34.3°) and effective cohesion (from 0 kPa to 9.3 kPa), together with the median ks value (10E-6 m/s), are consistent with the average grain sizes (gravelly sand). The data collected contribute to generating the input parameter maps for HIRESS (static data). Other static data are: volume weight, residual water content, porosity and grain size index. In order to improve the original formulation of the model, the contribution of root cohesion has also been taken into account, based on the vegetation map and literature values. HIRESS is a physically based distributed slope stability simulator for analyzing shallow landslide triggering conditions in real time and over large areas using parallel computational techniques. The software
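
    The HIRESS formulation itself is not given in the abstract; as a hedged illustration of how the measured friction angle, cohesion and root cohesion typically enter a physically based stability index, the sketch below evaluates the classical infinite-slope factor of safety (all numerical values are assumptions within the measured ranges):

        import numpy as np

        def infinite_slope_fs(c_eff, c_root, phi_deg, gamma, z, beta_deg, m, gamma_w=9.81):
            """Classical infinite-slope factor of safety (kPa, kN/m^3, m, degrees);
            m is the saturated fraction of the soil column.  Illustrative only."""
            beta = np.radians(beta_deg)
            phi = np.radians(phi_deg)
            resisting = c_eff + c_root + (gamma * z - gamma_w * m * z) * np.cos(beta) ** 2 * np.tan(phi)
            driving = gamma * z * np.sin(beta) * np.cos(beta)
            return resisting / driving

        # e.g. a gravelly sand with phi' = 30 deg, c' = 2 kPa and 1 kPa of root cohesion
        fs = infinite_slope_fs(c_eff=2.0, c_root=1.0, phi_deg=30.0, gamma=19.0,
                               z=1.5, beta_deg=35.0, m=0.5)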

  18. Extending positive CLASS results across multiple instructors and multiple classes of Modeling Instruction

    Directory of Open Access Journals (Sweden)

    Eric Brewe

    2013-10-01

    Full Text Available We report on a multiyear study of student attitudes measured with the Colorado Learning Attitudes about Science Survey in calculus-based introductory physics taught with the Modeling Instruction curriculum. We find that five of six instructors and eight of nine sections using Modeling Instruction showed significantly improved attitudes from pre- to postcourse. Cohen’s d effect sizes range from 0.08 to 0.95 for individual instructors. The average effect was d=0.45, with a 95% confidence interval of (0.26–0.64). These results build on previously published results showing positive shifts in attitudes from Modeling Instruction classes. We interpret these data in light of other published positive attitudinal shifts and explore mechanistic explanations for similarities and differences with other published positive shifts.

  19. Model-Based Power Plant Master Control

    Energy Technology Data Exchange (ETDEWEB)

    Boman, Katarina; Thomas, Jean; Funkquist, Jonas

    2010-08-15

    The main goal of the project has been to evaluate the potential of a coordinated master control for a solid fuel power plant in terms of tracking capability, stability and robustness. The control strategy has been model-based predictive control (MPC) and the plant used in the case study has been the Vattenfall power plant Idbaecken in Nykoeping. A dynamic plant model based on nonlinear physical models was used to imitate the true plant in MATLAB/SIMULINK simulations. The basis for this model was already developed in previous Vattenfall internal projects, along with a simulation model of the existing control implementation with traditional PID controllers. The existing PID control is used as a reference performance, and it has been thoroughly studied and tuned in these previous Vattenfall internal projects. A turbine model was developed with characteristics based on the results of steady-state simulations of the plant using the software EBSILON. Using the derived model as a representative for the actual process, an MPC control strategy was developed using linearization and gain-scheduling. The control signal constraints (rate of change) and constraints on outputs were implemented to comply with plant constraints. After tuning the MPC control parameters, a number of simulation scenarios were performed to compare the MPC strategy with the existing PID control structure. The simulation scenarios also included cases highlighting the robustness properties of the MPC strategy. From the study, the main conclusions are: - The proposed Master MPC controller shows excellent set-point tracking performance even though the plant has strong interactions and non-linearity, and the controls and their rate of change are bounded. - The proposed Master MPC controller is robust, stable in the presence of disturbances and parameter variations. Even though the current study only considered a very small number of the possible disturbances and modelling errors, the considered cases are

  20. Evaluation of template-based models in CASP8 with standard measures

    KAUST Repository

    Cozzetto, Domenico

    2009-01-01

    The strategy for evaluating template-based models submitted to CASP has continuously evolved from CASP1 to CASP5, leading to a standard procedure that has been used in all subsequent editions. The established approach includes methods for calculating the quality of each individual model, for assigning scores based on the distribution of the results for each target and for computing the statistical significance of the differences in scores between prediction methods. These data are made available to the assessor of the template-based modeling category, who uses them as a starting point for further evaluations and analyses. This article describes the detailed workflow of the procedure, provides justifications for a number of choices that are customarily made for CASP data evaluation, and reports the results of the analysis of template-based predictions at CASP8.

  1. INDIVIDUAL BASED MODELLING APPROACH TO THERMAL ...

    Science.gov (United States)

    Diadromous fish populations in the Pacific Northwest face challenges along their migratory routes from declining habitat quality, harvest, and barriers to longitudinal connectivity. Changes in river temperature regimes are producing an additional challenge for upstream migrating adult salmon and steelhead, species that are sensitive to absolute and cumulative thermal exposure. Adult salmon populations have been shown to utilize cold water patches along migration routes when mainstem river temperatures exceed thermal optimums. We are employing an individual based model (IBM) to explore the costs and benefits of spatially-distributed cold water refugia for adult migrating salmon. Our model, developed in the HexSim platform, is built around a mechanistic behavioral decision tree that drives individual interactions with their spatially explicit simulated environment. Population-scale responses to dynamic thermal regimes, coupled with other stressors such as disease and harvest, become emergent properties of the spatial IBM. Other model outputs include arrival times, species-specific survival rates, body energetic content, and reproductive fitness levels. Here, we discuss the challenges associated with parameterizing an individual based model of salmon and steelhead in a section of the Columbia River. Many rivers and streams in the Pacific Northwest are currently listed as impaired under the Clean Water Act as a result of high summer water temperatures. Adverse effec

  2. CAD-based automatic modeling method for Geant4 geometry model through MCAM

    International Nuclear Information System (INIS)

    Wang, D.; Nie, F.; Wang, G.; Long, P.; LV, Z.

    2013-01-01

    The full text of publication follows. Geant4 is a widely used Monte Carlo transport simulation package. Before calculating with Geant4, the calculation model needs to be established; it can be described using the Geometry Description Markup Language (GDML) or C++. However, it is time-consuming and error-prone to describe the models manually in GDML. Automatic modeling methods have been developed recently, but problems remain in most present modeling programs; in particular, some of them are not accurate or are adapted only to specific CAD formats. To convert CAD geometry models into GDML accurately, a Geant4 Computer Aided Design (CAD) based modeling method was developed for automatically converting complex CAD geometry models into GDML geometry models. The essence of this method is dealing with the CAD model, represented with boundary representation (B-REP), and the GDML model, represented with constructive solid geometry (CSG). First, the CAD model is decomposed into several simple solids, each having only one closed shell. Then each simple solid is decomposed into a set of convex shells. Corresponding GDML convex basic solids are then generated from the boundary surfaces obtained from the topological characteristics of each convex shell. After the generation of these solids, the GDML model is assembled with a series of Boolean operations. This method was adopted in the CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport (MCAM), and tested with several models including the examples in the Geant4 installation package. The results showed that this method converts standard CAD models accurately and can be used for Geant4 automatic modeling. (authors)

  3. Energy-based numerical models for assessment of soil liquefaction

    Directory of Open Access Journals (Sweden)

    Amir Hossein Alavi

    2012-07-01

    Full Text Available This study presents promising variants of genetic programming (GP), namely linear genetic programming (LGP) and multi expression programming (MEP), to evaluate the liquefaction resistance of sandy soils. Generalized LGP- and MEP-based relationships were developed between the strain energy density required to trigger liquefaction (capacity energy) and the factors affecting the liquefaction characteristics of sands. The correlations were established based on well-established and widely dispersed experimental results obtained from the literature. To verify the applicability of the derived models, they were employed to estimate the capacity energy values of parts of the test results that were not included in the analysis. The external validation of the models was verified using statistical criteria recommended by researchers. Sensitivity and parametric analyses were performed for further verification of the correlations. The results indicate that the proposed correlations are effectively capable of capturing the liquefaction resistance of a number of sandy soils. The developed correlations provide a significantly better prediction performance than the models found in the literature. Furthermore, the best LGP and MEP models perform better than the optimal traditional GP model. The verification phases confirm the efficiency of the derived correlations for their general application to the assessment of the strain energy at the onset of liquefaction.

  4. A Novel Modeling Method for Aircraft Engine Using Nonlinear Autoregressive Exogenous (NARX) Models Based on Wavelet Neural Networks

    Science.gov (United States)

    Yu, Bing; Shu, Wenjun; Cao, Can

    2018-05-01

    A novel modeling method for aircraft engines using nonlinear autoregressive exogenous (NARX) models based on wavelet neural networks is proposed. The identification principle and process based on wavelet neural networks are studied, and the modeling scheme based on NARX is proposed. Then, time series data sets from three types of aircraft engines are utilized to build the corresponding NARX models, and these NARX models are validated by simulation. The results show that all the best NARX models capture the original aircraft engine's dynamic characteristics well with high accuracy. For every type of engine, the relative identification errors of its best NARX model and of the component-level model are no more than 3.5% and most of them are within 1%.

  5. Towards a CPN-Based Modelling Approach for Reconciling Verification and Implementation of Protocol Models

    DEFF Research Database (Denmark)

    Simonsen, Kent Inge; Kristensen, Lars Michael

    2013-01-01

    Formal modelling of protocols is often aimed at one specific purpose such as verification or automatically generating an implementation. This leads to models that are useful for one purpose, but not for others. Being able to derive models for verification and implementation from a single model...... is beneficial both in terms of reduced total modelling effort and confidence that the verification results are valid also for the implementation model. In this paper we introduce the concept of a descriptive specification model and an approach based on refining a descriptive model to target both verification...... how this model can be refined to target both verification and implementation....

  6. Quantitative Analysis of Intra Urban Growth Modeling using socio economic agents by combining cellular automata model with agent based model

    Science.gov (United States)

    Singh, V. K.; Jha, A. K.; Gupta, K.; Srivastav, S. K.

    2017-12-01

    Recent studies indicate that modeling at finer spatial resolutions significantly improves the representation of urban land use dynamics. Geo-computational models such as cellular automata and agent-based models have provided clear evidence for the quantification of urban growth patterns within the urban boundary. In recent studies, socio-economic factors such as demography, education rate, household density, current-year parcel price, and distance to roads, schools, hospitals, commercial centers and police stations are considered the major factors influencing the Land Use Land Cover (LULC) pattern of a city. These factors act on the land use pattern in a unidirectional way, which makes it difficult to analyze the spatial aspects of model results both quantitatively and qualitatively. In this study, a cellular automata model is combined with an agent-based model to evaluate the impact of socio-economic factors on the land use pattern. For this purpose, Dehradun, an Indian city, is selected as a case study. Socio-economic factors were collected from a field survey, the Census of India, and the Directorate of Economic Census, Uttarakhand, India. A 3×3 simulation window is used to consider the impact on LULC. Cellular automata model results are examined to identify hot spot areas within the urban area, and the agent-based model uses a logistic regression approach to identify the correlation of each factor with LULC and to classify the available area into low-density, medium-density and high-density residential or commercial areas. In the modeling phase, transition rules, neighborhood effects and cell change factors are used to improve the representation of built-up classes. A significant improvement is observed in the built-up classes, from 84% to 89%; after incorporating the agent-based model with the cellular automata model, the accuracy improved further from 89% to 94% in three urban classes, i.e. low density, medium density and commercial.
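
    As a hedged illustration of the coupling idea rather than the authors' implementation, the sketch below applies a 3×3 neighbourhood cellular-automata transition in which an agent-derived suitability surface, standing in for the socio-economic factors, modulates cell change:

        import numpy as np
        from scipy.ndimage import uniform_filter

        # 0 = non-urban, 1 = urban.  "suitability" stands in for the combined
        # socio-economic drivers (parcel price, distances, household density, ...).
        rng = np.random.default_rng(0)
        state = (rng.random((100, 100)) > 0.9).astype(float)
        suitability = rng.random((100, 100))

        def step(state, suitability, threshold=0.6):
            # Share of urban cells in each 3x3 neighbourhood (the CA neighbourhood effect).
            neigh = uniform_filter(state, size=3, mode="constant")
            transition = (1 - state) * neigh * suitability     # pressure to urbanise
            return np.where(transition > threshold * transition.max(), 1.0, state)

        state = step(state, suitability)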

  7. Image Restoration Based on the Hybrid Total-Variation-Type Model

    Directory of Open Access Journals (Sweden)

    Baoli Shi

    2012-01-01

    Full Text Available We propose a hybrid total-variation-type model for the image restoration problem based on combining the advantages of the ROF model with the LLT model. Since the two L1-norm terms in the proposed model make it difficult to solve directly with classical numerical methods, we first employ the alternating direction method of multipliers (ADMM) to solve a general form of the proposed model. Then, based on the ADMM and the Moreau-Yosida decomposition theory, a more efficient method called the proximal point method (PPM) is proposed and the convergence of the proposed method is proved. Numerical results demonstrate the viability and efficiency of the proposed model and methods.

  8. Evolution of the DeNOC-based dynamic modelling for multibody systems

    Directory of Open Access Journals (Sweden)

    S. K. Saha

    2013-01-01

    Full Text Available Dynamic modelling of a multibody system plays an essential role in its analysis. As a result, several methods for dynamic modelling have evolved over the years that allow one to analyse multibody systems in a very efficient manner. One such method of dynamic modelling is based on the concept of the Decoupled Natural Orthogonal Complement (DeNOC) matrices. The DeNOC-based methodology for dynamics modelling, since its introduction in 1995, has been applied to a variety of multibody systems such as serial, parallel, general closed-loop, flexible, legged, cam-follower, and space robots. The methodology has also proven useful for modelling of proteins and hyper-degree-of-freedom systems like ropes, chains, etc. This paper captures the evolution of the DeNOC-based dynamic modelling applied to different types of systems, and its benefits over other existing methodologies. It is shown that the DeNOC-based modelling provides deeper understanding of the dynamics of a multibody system. The power of the DeNOC-based modelling has been illustrated using several numerical examples.

  9. Fuzzy Logic Based Set-Point Weighting Controller Tuning for an Internal Model Control Based PID Controller

    Directory of Open Access Journals (Sweden)

    Maruthai Suresh

    2009-10-01

    Full Text Available Controller tuning is the process of adjusting the parameters of the selected controller to achieve optimum response for the controlled process. For many control problems, satisfactory performance is obtained by using PID controllers. One of the main problems with mathematical models of physical systems is that the parameters used in the models cannot be determined with absolute accuracy. The values of the parameters may change with time or due to various effects. In these cases, conventional controller tuning methods struggle to produce the optimum response. In order to overcome these difficulties, a fuzzy logic based set-point weighting controller tuning method is proposed. The effectiveness of the proposed scheme is analyzed through computer simulation using SIMULINK software and the results are presented. The fuzzy logic based simulation results are compared with the responses of Cohen-Coon (CC), Ziegler-Nichols (ZN), Ziegler-Nichols with set-point weighting (ZN-SPW), Internal Model Control (IMC) and internal model based PID (IMC-PID) controllers. The effects of process modeling errors and the importance of controller tuning are brought out using the proposed control scheme.
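
    As a hedged illustration of the set-point weighting concept itself (the fuzzy tuning layer of the article is not reproduced), a discrete PID in which only a fraction b of the set point enters the proportional term might look like:

        def make_pid(kp, ki, kd, b=0.6, dt=0.1):
            """Discrete PID with set-point weighting: only a fraction b of the set point
            enters the proportional term; the derivative acts on the measurement only."""
            state = {"i": 0.0, "y_prev": None}

            def control(setpoint, y):
                e = setpoint - y
                state["i"] += ki * e * dt
                d = 0.0 if state["y_prev"] is None else -kd * (y - state["y_prev"]) / dt
                state["y_prev"] = y
                return kp * (b * setpoint - y) + state["i"] + d

            return control

        # The fuzzy rule base of the article would adjust b (and possibly kp, ki, kd)
        # on-line from the error and its rate of change; here b is simply fixed.
        pid = make_pid(kp=2.0, ki=1.0, kd=0.1, b=0.6)
        u = pid(setpoint=1.0, y=0.2)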

  10. Knowledge-Based Environmental Context Modeling

    Science.gov (United States)

    Pukite, P. R.; Challou, D. J.

    2017-12-01

    As we move from the oil-age to an energy infrastructure based on renewables, the need arises for new educational tools to support the analysis of geophysical phenomena and their behavior and properties. Our objective is to present models of these phenomena to make them amenable for incorporation into more comprehensive analysis contexts. Starting at the level of a college-level computer science course, the intent is to keep the models tractable and therefore practical for student use. Based on research performed via an open-source investigation managed by DARPA and funded by the Department of Interior [1], we have adapted a variety of physics-based environmental models for a computer-science curriculum. The original research described a semantic web architecture based on patterns and logical archetypal building-blocks (see figure) well suited for a comprehensive environmental modeling framework. The patterns span a range of features that cover specific land, atmospheric and aquatic domains intended for engineering modeling within a virtual environment. The modeling engine contained within the server relied on knowledge-based inferencing capable of supporting formal terminology (through NASA JPL's Semantic Web for Earth and Environmental Technology (SWEET) ontology and a domain-specific language) and levels of abstraction via integrated reasoning modules. One of the key goals of the research was to simplify models that were ordinarily computationally intensive to keep them lightweight enough for interactive or virtual environment contexts. The breadth of the elements incorporated is well-suited for learning as the trend toward ontologies and applying semantic information is vital for advancing an open knowledge infrastructure. As examples of modeling, we have covered such geophysics topics as fossil-fuel depletion, wind statistics, tidal analysis, and terrain modeling, among others. Techniques from the world of computer science will be necessary to promote efficient

  11. A novel cost based model for energy consumption in cloud computing.

    Science.gov (United States)

    Horri, A; Dastghaibyfard, Gh

    2015-01-01

    Cloud data centers consume enormous amounts of electrical energy. To support green cloud computing, providers need to minimize cloud infrastructure energy consumption while still meeting QoS requirements. In this study, an energy consumption model is proposed for the time-shared policy in the virtualization layer of cloud environments. The cost and energy usage of the time-shared policy were modeled in the CloudSim simulator based upon results obtained from a real system, and the proposed model was then evaluated in different scenarios. In the proposed model, the cache interference costs were considered; these costs depend on the size of the data. The proposed model was implemented in the CloudSim simulator, and the related simulation results indicate that the energy consumption may be considerable and that it can vary with different parameters such as the quantum parameter, data size, and the number of VMs on a host. Measured results validate the model and demonstrate that there is a tradeoff between energy consumption and QoS in the cloud environment.

  12. Model-based fault diagnosis in PEM fuel cell systems

    Energy Technology Data Exchange (ETDEWEB)

    Escobet, T; de Lira, S; Puig, V; Quevedo, J [Automatic Control Department (ESAII), Universitat Politecnica de Catalunya (UPC), Rambla Sant Nebridi 10, 08222 Terrassa (Spain); Feroldi, D; Riera, J; Serra, M [Institut de Robotica i Informatica Industrial (IRI), Consejo Superior de Investigaciones Cientificas (CSIC), Universitat Politecnica de Catalunya (UPC) Parc Tecnologic de Barcelona, Edifici U, Carrer Llorens i Artigas, 4-6, Planta 2, 08028 Barcelona (Spain)

    2009-07-01

    In this work, a model-based fault diagnosis methodology for PEM fuel cell systems is presented. The methodology is based on computing residuals, indicators that are obtained by comparing measured inputs and outputs with analytical relationships obtained through system modelling. The innovation of this methodology lies in the characterization of the relative residual fault sensitivity. To illustrate the results, a non-linear fuel cell simulator proposed in the literature is used, with modifications, to include a set of fault scenarios proposed in this work. Finally, the diagnosis results corresponding to these fault scenarios are presented. Remarkably, with this methodology it is possible to diagnose and isolate all the faults in the proposed set, in contrast with other well-known methodologies which use the binary signature matrix of analytical residuals and faults. (author)

  13. A comprehensive gaze stabilization controller based on cerebellar internal models

    DEFF Research Database (Denmark)

    Vannucci, Lorenzo; Falotico, Egidio; Tolu, Silvia

    2017-01-01

    The VOR works in conjunction with the opto-kinetic reflex (OKR), which is a visual feedback mechanism that allows the eye to move at the same speed as the observed scene. Together they keep the image stationary on the retina. In this work we implement on a humanoid robot a model of gaze stabilization... based on the coordination of the VCR, VOR and OKR. The model, inspired by neuroscientific cerebellar theories, is provided with learning and adaptation capabilities based on internal models. We present the results for the gaze stabilization model on three sets of experiments conducted on the SABIAN robot...

  14. Prediction of residential radon exposure of the whole Swiss population: comparison of model-based predictions with measurement-based predictions.

    Science.gov (United States)

    Hauri, D D; Huss, A; Zimmermann, F; Kuehni, C E; Röösli, M

    2013-10-01

    Radon plays an important role in human exposure to natural sources of ionizing radiation. The aim of this article is to compare two approaches to estimating mean radon exposure in the Swiss population: model-based predictions at the individual level and measurement-based predictions based on measurements aggregated at the municipality level. A nationwide model was used to predict radon levels in each household and for each individual based on the corresponding tectonic unit, building age, building type, soil texture, degree of urbanization, and floor. Measurement-based predictions were carried out within a health impact assessment on residential radon and lung cancer. Mean measured radon levels were corrected for the average floor distribution and weighted by the population size of each municipality. Model-based predictions yielded a mean radon exposure of the Swiss population of 84.1 Bq/m³. Measurement-based predictions yielded an average exposure of 78 Bq/m³. This study demonstrates that the model-based and the measurement-based predictions provide similar results. The advantage of the measurement-based approach is its simplicity, which is sufficient for assessing the exposure distribution in a population. The model-based approach allows predicting radon levels at specific sites, which is needed in an epidemiological study, and the results do not depend on how the measurement sites have been selected. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  15. Dopamine selectively remediates 'model-based' reward learning: a computational approach.

    Science.gov (United States)

    Sharp, Madeleine E; Foerde, Karin; Daw, Nathaniel D; Shohamy, Daphna

    2016-02-01

    Patients with loss of dopamine due to Parkinson's disease are impaired at learning from reward. However, it remains unknown precisely which aspect of learning is impaired. In particular, learning from reward, or reinforcement learning, can be driven by two distinct computational processes. One involves habitual stamping-in of stimulus-response associations, hypothesized to arise computationally from 'model-free' learning. The other, 'model-based' learning, involves learning a model of the world that is believed to support goal-directed behaviour. Much work has pointed to a role for dopamine in model-free learning. But recent work suggests model-based learning may also involve dopamine modulation, raising the possibility that model-based learning may contribute to the learning impairment in Parkinson's disease. To directly test this, we used a two-step reward-learning task which dissociates model-free versus model-based learning. We evaluated learning in patients with Parkinson's disease tested ON versus OFF their dopamine replacement medication and in healthy controls. Surprisingly, we found no effect of disease or medication on model-free learning. Instead, we found that patients tested OFF medication showed a marked impairment in model-based learning, and that this impairment was remediated by dopaminergic medication. Moreover, model-based learning was positively correlated with a separate measure of working memory performance, raising the possibility of common neural substrates. Our results suggest that some learning deficits in Parkinson's disease may be related to an inability to pursue reward based on complete representations of the environment. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  16. Mathematical Modeling of Column-Base Connections under Monotonic Loading

    Directory of Open Access Journals (Sweden)

    Gholamreza Abdollahzadeh

    2014-12-01

    Full Text Available Considerable damage to steel structures occurred during the Hyogo-ken Nanbu Earthquake. Among the damaged components, many exposed-type column bases failed in several consistent patterns, such as brittle base plate fracture, excessive bolt elongation, unexpected early bolt failure, and inferior construction work. The lessons from these failures led to the need for an improved understanding of column base behavior. Joint behavior must be modeled when analyzing semi-rigid frames, which requires a mathematical model of the moment–rotation curve. The most accurate models use continuous nonlinear functions. This article presents three areas of steel joint research: (1) analysis methods for semi-rigid joints; (2) prediction methods for the mechanical behavior of joints; (3) mathematical representations of the moment–rotation curve. In the current study, a new exponential model to depict the moment–rotation relationship of the column base connection is proposed. The proposed nonlinear model represents an approach to the prediction of M–θ curves, taking into account the possible failure modes and the deformation characteristics of the connection elements. The new model has three physical parameters, along with two curve-fitted factors. These physical parameters are generated from the dimensional details of the connection, as well as the material properties. The M–θ curves obtained by the model are compared with published connection tests and 3D FEM research. The proposed mathematical model adequately characterizes M–θ behavior through the full range of loading/rotations. As a result, modeling of column base connections using the proposed mathematical model can provide crucial information in advance and overcome the time and cost disadvantages of experimental studies.

  17. One-dimensional GIS-based model compared with a two-dimensional model in urban floods simulation.

    Science.gov (United States)

    Lhomme, J; Bouvier, C; Mignot, E; Paquier, A

    2006-01-01

    A GIS-based one-dimensional flood simulation model is presented and applied to the centre of the city of Nîmes (Gard, France), for mapping flow depths or velocities in the street network. The geometry of the one-dimensional elements is derived from the Digital Elevation Model (DEM). The flow is routed from one element to the next using the kinematic wave approximation. At the crossroads, the flows in the downstream branches are computed using a conceptual scheme. This scheme was previously designed to fit Y-shaped pipe junctions, and has been modified here to fit X-shaped crossroads. The results were compared with those of a two-dimensional hydrodynamic model based on the full shallow water equations. The comparison shows that good agreement can be found in the steepest streets of the study zone, but differences may be important in the other streets. Some reasons that can explain the differences between the two models are given and some research possibilities are proposed.

  18. A novel polar-based human face recognition computational model

    Directory of Open Access Journals (Sweden)

    Y. Zana

    2009-07-01

    Full Text Available Motivated by a recently proposed biologically inspired face recognition approach, we investigated the relation between human behavior and a computational model based on Fourier-Bessel (FB) spatial patterns. We measured human recognition performance of FB filtered face images using an 8-alternative forced-choice method. Test stimuli were generated by converting the images from the spatial to the FB domain, filtering the resulting coefficients with a band-pass filter, and finally taking the inverse FB transformation of the filtered coefficients. The performance of the computational models was tested using a simulation of the psychophysical experiment. In the FB model, face images were first filtered by simulated V1-type neurons and later analyzed globally for their content of FB components. In general, there was a higher human contrast sensitivity to radially than to angularly filtered images, but both functions peaked at the 11.3-16 frequency interval. The FB-based model presented similar behavior with regard to peak position and relative sensitivity, but had a wider frequency bandwidth and a narrower response range. The response patterns of two alternative models, based on local FB analysis and on raw luminance, strongly diverged from the human behavior patterns. These results suggest that human performance can be constrained by the type of information conveyed by polar patterns, and consequently that humans might use FB-like spatial patterns in face processing.

  19. Oil Spill Detection and Modelling: Preliminary Results for the Cercal Accident

    Science.gov (United States)

    da Costa, R. T.; Azevedo, A.; da Silva, J. C. B.; Oliveira, A.

    2013-03-01

    Oil spill research has increased significantly, mainly as a result of the severe consequences experienced from industry accidents. Oil spill models are currently able to simulate the processes that determine the fate of oil slicks, playing an important role in disaster prevention, control and mitigation, and generating valuable information for decision makers and the population in general. On the other hand, satellite Synthetic Aperture Radar (SAR) imagery has demonstrated significant potential in accidental oil spill detection, when slicks are accurately differentiated from look-alikes. The combination of both tools can lead to breakthroughs, particularly in the development of Early Warning Systems (EWS). This paper presents a hindcast simulation of the oil slick resulting from the Motor Tanker (MT) Cercal oil spill, listed by the Portuguese Navy as one of the major oil spills on the Portuguese Atlantic Coast. The accident took place near Leixões Harbour, north of the Douro River, Porto (Portugal), on 2 October 1994. The oil slick was segmented from available European Remote Sensing (ERS) satellite SAR images, using an algorithm based on a simplified version of the K-means clustering formulation. The image-acquired information, added to the initial conditions and forcings, provided the necessary inputs for the oil spill model. Simulations considered the three-dimensional hydrodynamics in a cross-scale domain, from the interior of the Douro River Estuary to the open ocean on the Iberian Atlantic shelf. Atmospheric forcings (from ECMWF - the European Centre for Medium-Range Weather Forecasts and NOAA - the National Oceanic and Atmospheric Administration), river forcings (from SNIRH - the Portuguese National Information System of the Hydric Resources) and tidal forcings (from LNEC - the National Laboratory for Civil Engineering), including baroclinic gradients (NOAA), were considered. The lack of data for validation purposes only allowed the use of the
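
    The paper's simplified K-means formulation is not reproduced here; as a hedged illustration of the segmentation step, the sketch below clusters SAR backscatter intensities into two classes with standard K-means and keeps the darker cluster as the slick candidate (the image is synthetic):

        import numpy as np
        from sklearn.cluster import KMeans

        # sar: calibrated SAR backscatter image (dark pixels = possible oil slick).
        rng = np.random.default_rng(0)
        sar = rng.gamma(shape=4.0, scale=1.0, size=(256, 256))
        sar[100:150, 60:200] *= 0.3          # synthetic low-backscatter "slick"

        pixels = sar.reshape(-1, 1)
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)
        labels = labels.reshape(sar.shape)
        # The cluster with the lower mean backscatter is taken as the slick candidate.
        slick_mask = labels == np.argmin([sar[labels == k].mean() for k in (0, 1)])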

  20. Evidence-based quantification of uncertainties induced via simulation-based modeling

    International Nuclear Information System (INIS)

    Riley, Matthew E.

    2015-01-01

    The quantification of uncertainties in simulation-based modeling traditionally focuses upon quantifying uncertainties in the parameters input into the model, referred to as parametric uncertainties. Often neglected in such an approach are the uncertainties induced by the modeling process itself. This deficiency is often due to a lack of information regarding the problem or the models considered, which could theoretically be reduced through the introduction of additional data. Because of the nature of this epistemic uncertainty, traditional probabilistic frameworks utilized for the quantification of uncertainties are not necessarily applicable to quantify the uncertainties induced in the modeling process itself. This work develops and utilizes a methodology – incorporating aspects of Dempster–Shafer Theory and Bayesian model averaging – to quantify uncertainties of all forms for simulation-based modeling problems. The approach expands upon classical parametric uncertainty approaches, allowing for the quantification of modeling-induced uncertainties as well, ultimately providing bounds on classical probability without the loss of epistemic generality. The approach is demonstrated on two different simulation-based modeling problems: the computation of the natural frequency of a simple two degree of freedom non-linear spring mass system and the calculation of the flutter velocity coefficient for the AGARD 445.6 wing given a subset of commercially available modeling choices. - Highlights: • Modeling-induced uncertainties are often mishandled or ignored in the literature. • Modeling-induced uncertainties are epistemic in nature. • Probabilistic representations of modeling-induced uncertainties are restrictive. • Evidence theory and Bayesian model averaging are integrated. • Developed approach is applicable for simulation-based modeling problems

  1. Modeling, Design, and Implementation of a Cloud Workflow Engine Based on Aneka

    OpenAIRE

    Zhou, Jiantao; Sun, Chaoxin; Fu, Weina; Liu, Jing; Jia, Lei; Tan, Hongyan

    2014-01-01

    This paper presents a Petri net-based model for cloud workflow which plays a key role in industry. Three kinds of parallelisms in cloud workflow are characterized and modeled. Based on the analysis of the modeling, a cloud workflow engine is designed and implemented in Aneka cloud environment. The experimental results validate the effectiveness of our approach of modeling, design, and implementation of cloud workflow.

  2. Boundary-layer transition prediction using a simplified correlation-based model

    Directory of Open Access Journals (Sweden)

    Xia Chenchao

    2016-02-01

    Full Text Available This paper describes a simplified transition model based on the recently developed correlation-based γ-Reθt transition model. The transport equation for the transition momentum thickness Reynolds number is eliminated for simplicity, and a new transition length function and critical Reynolds number correlation are proposed. The new model is implemented into an in-house computational fluid dynamics (CFD) code and validated for low- and high-speed flow cases, including a zero-pressure-gradient flat plate, airfoils, a hypersonic flat plate and a double wedge. Comparisons between the simulation results and experimental data show that boundary-layer transition phenomena can be reasonably represented by the new model, which gives rise to significant improvements over the fully laminar and fully turbulent results. Moreover, the new model has comparable accuracy and applicability when compared with the original γ-Reθt model. At the same time, the newly proposed model requires only one transport equation, for the intermittency factor, and fewer correlations, which simplifies the original model greatly. Further studies, especially on separation-induced transition flows, are required for the improvement of the new model.

  3. Uncertainties in model-based outcome predictions for treatment planning

    International Nuclear Information System (INIS)

    Deasy, Joseph O.; Chao, K.S. Clifford; Markman, Jerry

    2001-01-01

    Purpose: Model-based treatment-plan-specific outcome predictions (such as normal tissue complication probability [NTCP] or the relative reduction in salivary function) are typically presented without reference to underlying uncertainties. We provide a method to assess the reliability of treatment-plan-specific dose-volume outcome model predictions. Methods and Materials: A practical method is proposed for evaluating model prediction based on the original input data together with bootstrap-based estimates of parameter uncertainties. The general framework is applicable to continuous variable predictions (e.g., prediction of long-term salivary function) and dichotomous variable predictions (e.g., tumor control probability [TCP] or NTCP). Using bootstrap resampling, a histogram of the likelihood of alternative parameter values is generated. For a given patient and treatment plan we generate a histogram of alternative model results by computing the model predicted outcome for each parameter set in the bootstrap list. Residual uncertainty ('noise') is accounted for by adding a random component to the computed outcome values. The residual noise distribution is estimated from the original fit between model predictions and patient data. Results: The method is demonstrated using a continuous-endpoint model to predict long-term salivary function for head-and-neck cancer patients. Histograms represent the probabilities for the level of posttreatment salivary function based on the input clinical data, the salivary function model, and the three-dimensional dose distribution. For some patients there is significant uncertainty in the prediction of xerostomia, whereas for other patients the predictions are expected to be more reliable. In contrast, TCP and NTCP endpoints are dichotomous, and parameter uncertainties should be folded directly into the estimated probabilities, thereby improving the accuracy of the estimates. Using bootstrap parameter estimates, competing treatment
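
    A hedged sketch of the bootstrap procedure described above, with a made-up linear dose-response model standing in for the salivary-function model and a synthetic patient cohort:

        import numpy as np

        rng = np.random.default_rng(0)

        def fit(dose, outcome):
            # Placeholder outcome model: straight-line fit of salivary function vs. mean dose.
            return np.polyfit(dose, outcome, 1)

        dose = rng.uniform(10, 60, 40)
        outcome = 1.0 - 0.012 * dose + rng.normal(0, 0.08, 40)     # synthetic cohort
        residual_sd = 0.08

        # Bootstrap the cohort to obtain alternative parameter sets ...
        boot_params = []
        for _ in range(1000):
            idx = rng.integers(0, len(dose), len(dose))
            boot_params.append(fit(dose[idx], outcome[idx]))

        # ... then, for one new treatment plan, build the histogram of predicted outcomes,
        # adding residual noise to represent the unexplained ("noise") component.
        plan_dose = 35.0
        predictions = [np.polyval(p, plan_dose) + rng.normal(0, residual_sd) for p in boot_params]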

  4. PRESS-based EFOR algorithm for the dynamic parametrical modeling of nonlinear MDOF systems

    Science.gov (United States)

    Liu, Haopeng; Zhu, Yunpeng; Luo, Zhong; Han, Qingkai

    2017-09-01

    In response to the identification problem concerning multi-degree of freedom (MDOF) nonlinear systems, this study presents the extended forward orthogonal regression (EFOR) based on predicted residual sums of squares (PRESS) to construct a nonlinear dynamic parametrical model. The proposed parametrical model is based on the non-linear autoregressive with exogenous inputs (NARX) model and aims to explicitly reveal the physical design parameters of the system. The PRESS-based EFOR algorithm is proposed to identify such a model for MDOF systems. By using the algorithm, we built a common-structured model based on the fundamental concept of evaluating its generalization capability through cross-validation. The resulting model aims to prevent over-fitting with poor generalization performance caused by the average error reduction ratio (AERR)-based EFOR algorithm. Then, a functional relationship is established between the coefficients of the terms and the design parameters of the unified model. Moreover, a 5-DOF nonlinear system is taken as a case to illustrate the modeling of the proposed algorithm. Finally, a dynamic parametrical model of a cantilever beam is constructed from experimental data. Results indicate that the dynamic parametrical model of nonlinear systems, which depends on the PRESS-based EFOR, can accurately predict the output response, thus providing a theoretical basis for the optimal design of modeling methods for MDOF nonlinear systems.

  5. Cost Effective Community Based Dementia Screening: A Markov Model Simulation

    Directory of Open Access Journals (Sweden)

    Erin Saito

    2014-01-01

    Full Text Available Background. Given the dementia epidemic and the increasing cost of healthcare, there is a need to assess the economic benefit of community based dementia screening programs. Materials and Methods. Markov model simulations were generated using data obtained from a community based dementia screening program over a one-year period. The models simulated yearly costs of caring for patients based on clinical transitions beginning in predementia and extending for 10 years. Results. A total of 93 individuals (74 female, 19 male) were screened for dementia and 12 meeting clinical criteria for either mild cognitive impairment (n=7) or dementia (n=5) were identified. Assuming early therapeutic intervention beginning during the year of dementia detection, Markov model simulations demonstrated a 9.8% reduction in the cost of dementia care over a ten-year simulation period, primarily through increased duration in mild stages and reduced time in more costly moderate and severe stages. Discussion. Community based dementia screening can reduce healthcare costs associated with caring for demented individuals through earlier detection and treatment, resulting in proportionately reduced time in more costly advanced stages.
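
    A hedged sketch of a cohort-level Markov cost simulation of this kind; the stage names follow the abstract, but the transition probabilities and annual costs are purely illustrative assumptions, not the study's values:

        import numpy as np

        states = ["MCI", "mild", "moderate", "severe"]
        annual_cost = np.array([5_000.0, 15_000.0, 35_000.0, 60_000.0])   # illustrative only

        # Yearly transition probabilities (rows sum to 1); treatment is modelled here as
        # slowing progression out of the milder stages.
        P_untreated = np.array([[0.70, 0.30, 0.00, 0.00],
                                [0.00, 0.60, 0.40, 0.00],
                                [0.00, 0.00, 0.70, 0.30],
                                [0.00, 0.00, 0.00, 1.00]])
        P_treated = np.array([[0.70, 0.30, 0.00, 0.00],
                              [0.00, 0.75, 0.25, 0.00],
                              [0.00, 0.00, 0.75, 0.25],
                              [0.00, 0.00, 0.00, 1.00]])

        def ten_year_cost(P, start=0, years=10):
            occupancy = np.zeros(len(states))
            occupancy[start] = 1.0
            total = 0.0
            for _ in range(years):
                occupancy = occupancy @ P
                total += occupancy @ annual_cost
            return total

        saving = 1.0 - ten_year_cost(P_treated) / ten_year_cost(P_untreated)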

  6. Introducing Waqf Based Takaful Model in India

    Directory of Open Access Journals (Sweden)

    Syed Ahmed Salman

    2014-03-01

Full Text Available Objective – Waqf is a unique feature of the socioeconomic system of Islam in a multi-religious and developing country like India. India is a country rich in waqf assets, and the history of waqf in India can be traced back 800 years. Most researchers suggest how waqf can be used as a tool to mitigate the poverty of Muslims. India has the third highest Muslim population after Indonesia and Pakistan. However, the majority of Muslims belong to the low-income group and are in need of help. It is believed that waqf can be utilized for the betterment of the Indian Muslim community. Among the available uses of waqf assets, the main objective of this paper is to introduce a waqf-based takaful model in India and to highlight how this proposed model can be adopted there. Methods – Library research is applied, since this paper relies on secondary data obtained by thoroughly reviewing the most relevant literature. Result – India, as a country rich in waqf assets, should fully utilize these resources to help Muslims through takaful. Conclusion – In this study, we propose a waqf-based takaful model for India combining the concepts of mudarabah and wakalah. We recommend this model based on the background and situation of the country. Since we have not tested the viability of this model in India, future research should test it. Keywords: Waqf, Takaful, Poverty, India

  7. A Novel GMM-Based Behavioral Modeling Approach for Smartwatch-Based Driver Authentication.

    Science.gov (United States)

    Yang, Ching-Han; Chang, Chin-Chun; Liang, Deron

    2018-03-28

All drivers have their own distinct driving habits, and usually hold and operate the steering wheel differently in different driving scenarios. In this study, we propose a novel Gaussian mixture model (GMM)-based method that improves on the traditional GMM in modeling driving behavior. This new method can be applied to build a better driver authentication system based on the accelerometer and orientation sensor of a smartwatch. To demonstrate the feasibility of the proposed method, we created an experimental system that analyzes driving behavior using the built-in sensors of a smartwatch. The experimental results for driver authentication, an equal error rate (EER) of 4.62% in the simulated environment and an EER of 7.86% in the real-traffic environment, confirm the feasibility of this approach.
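
    A rough sketch of per-driver GMM enrolment and log-likelihood-based authentication, assuming scikit-learn is available; the feature vectors, the number of mixture components and the threshold rule are placeholders and not the paper's pipeline.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(2)

    # Hypothetical smartwatch features (e.g., per-window accelerometer/orientation statistics).
    genuine_train = rng.normal(loc=0.0, scale=1.0, size=(500, 6))
    genuine_test  = rng.normal(loc=0.0, scale=1.0, size=(200, 6))
    impostor_test = rng.normal(loc=1.5, scale=1.2, size=(200, 6))

    # Enrolment: fit a GMM on the legitimate driver's own driving windows only.
    gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
    gmm.fit(genuine_train)

    # Authentication: accept a window if its log-likelihood exceeds a threshold,
    # here the 5th percentile of the enrolment scores.
    threshold = np.percentile(gmm.score_samples(genuine_train), 5)
    far = np.mean(gmm.score_samples(impostor_test) >= threshold)   # false accept rate
    frr = np.mean(gmm.score_samples(genuine_test) < threshold)     # false reject rate
    print(f"FAR={far:.2%}  FRR={frr:.2%}")
    ```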

  8. Topic model-based mass spectrometric data analysis in cancer biomarker discovery studies.

    Science.gov (United States)

    Wang, Minkun; Tsai, Tsung-Heng; Di Poto, Cristina; Ferrarini, Alessia; Yu, Guoqiang; Ressom, Habtom W

    2016-08-18

A fundamental challenge in the quantitation of biomolecules for cancer biomarker discovery arises from the heterogeneous nature of human biospecimens. Although this issue has been a subject of discussion in cancer genomic studies, it has not yet been rigorously investigated in mass spectrometry based proteomic and metabolomic studies. Purification of mass spectrometric data is highly desired prior to subsequent analysis, e.g., quantitative comparison of the abundance of biomolecules in biological samples. We investigated topic models to computationally analyze mass spectrometric data considering both integrated peak intensities and scan-level features, i.e., extracted ion chromatograms (EICs). Probabilistic generative models enable flexible representation of the data structure and infer sample-specific pure resources. Scan-level modeling helps alleviate information loss during data preprocessing. We evaluated the capability of the proposed models in capturing mixture proportions of contaminants and cancer profiles on LC-MS based serum proteomic and GC-MS based tissue metabolomic datasets acquired from patients with hepatocellular carcinoma (HCC) and liver cirrhosis, as well as synthetic data we generated based on the serum proteomic data. The results obtained from the synthetic data demonstrated that both intensity-level and scan-level purification models can accurately infer the mixture proportions and the underlying true cancerous sources with small average error ratios. In the analysis of the real data, we found more proteins and metabolites with significant changes between HCC cases and cirrhotic controls. Candidate biomarkers selected after purification yielded biologically meaningful pathway analysis results and improved disease discrimination power in terms of the area under the ROC curve compared to the results found prior to purification. We investigated topic model-based inference methods to computationally address the heterogeneity issue in samples analyzed by LC/GC-MS. We observed
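
    As a loose analogy to the purification idea (not the authors' model), the sketch below uses a standard latent Dirichlet allocation implementation to recover per-sample mixture proportions from synthetic "peak count" data generated from two latent sources; all data and settings are invented.

    ```python
    import numpy as np
    from sklearn.decomposition import LatentDirichletAllocation

    rng = np.random.default_rng(3)

    # Synthetic count matrix: two latent sources (e.g., cancer profile vs. contaminant);
    # each sample is a mixture of the two.
    n_samples, n_peaks = 60, 300
    source_profiles = rng.dirichlet(np.ones(n_peaks), size=2)   # per-source peak distributions
    true_props = rng.dirichlet(np.ones(2), size=n_samples)      # per-sample mixture proportions
    counts = np.vstack([rng.multinomial(5000, true_props[i] @ source_profiles)
                        for i in range(n_samples)])

    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    lda.fit(counts)
    inferred = lda.transform(counts)     # per-sample source proportions (rows sum to 1)

    # Compare inferred vs. true proportions up to topic permutation.
    err = min(np.abs(inferred[:, 0] - true_props[:, 0]).mean(),
              np.abs(inferred[:, 0] - true_props[:, 1]).mean())
    print(f"mean absolute error in mixture proportion: {err:.3f}")
    ```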

  9. Physically-based modelling of polycrystalline semiconductor devices

    International Nuclear Information System (INIS)

    Lee, S.

    2000-01-01

Thin-film technology using polycrystalline semiconductors has been widely applied to active-matrix-addressed liquid crystal displays (AMLCDs) where thin-film transistors act as digital pixel switches. Research and development is in progress to integrate the driver circuits around the periphery of the display, resulting in a significant cost reduction of the connections between rows and columns and the peripheral circuitry. For this latter application, where for instance it is important to control the greyscale voltage level delivered to the pixel, an understanding of device behaviour is required so that models can be developed for analogue circuit simulation. For this purpose, various analytical models have been developed based on that of Seto, who considered the effect of monoenergetic trap states and grain boundaries in polycrystalline materials but not the contribution of the grains to the electrical properties. The principal aim of this thesis is to describe the use of a numerical device simulator (ATLAS) as a tool to investigate the physics of the trapping process involved in device operation, which additionally takes into account the effect of multienergetic trapping levels and the contribution of the grains to the modelling. A study of the conventional analytical models is presented, and an alternative approach is introduced which takes into account the grain regions to enhance the accuracy of the analytical modelling. A physically-based discrete-grain-boundary model and characterisation method are introduced to study the effects of the multienergetic trap states on the electrical characteristics of poly-TFTs, using CdSe devices as the experimental example, and electrical parameters such as the density distribution of the trapping states are extracted. The results show excellent agreement between the simulation and experimental data. The limitations of this proposed physical model are also studied and discussed. (author)

  10. Swarm Intelligence-Based Hybrid Models for Short-Term Power Load Prediction

    Directory of Open Access Journals (Sweden)

    Jianzhou Wang

    2014-01-01

Full Text Available Swarm intelligence (SI) is widely and successfully applied in the engineering field to solve practical optimization problems, and various hybrid models based on SI algorithms and statistical models have been developed to further improve predictive ability. In this paper, hybrid intelligent forecasting models based on cuckoo search (CS) as well as singular spectrum analysis (SSA), time series, and machine learning methods are proposed for short-term power load prediction. The forecasting performance of the proposed models is augmented by a rolling multistep strategy over the prediction horizon. The test results demonstrate the effectiveness of SSA and CS in tuning the seasonal autoregressive integrated moving average (SARIMA) and support vector regression (SVR) models for load forecasting, which indicates that both SSA-based data denoising and the SI-based intelligent optimization strategy can effectively improve a model's predictive performance. Additionally, the proposed CS-SSA-SARIMA and CS-SSA-SVR models provide very impressive forecasting results, demonstrating their strong robustness and universal forecasting capacity for short-term power load prediction 24 hours in advance.

  11. Model-based version management system framework

    International Nuclear Information System (INIS)

    Mehmood, W.

    2016-01-01

In this paper we present a model-based version management system. Version management systems (VMS), a branch of software configuration management (SCM), aim to provide a controlling mechanism for the evolution of software artifacts created during the software development process. Controlling the evolution requires many activities, such as construction and creation of versions, identification of differences between versions, conflict detection, and merging. Traditional VMS are file-based and consider software systems as a set of text files. File-based VMS are not adequate for performing software configuration management activities such as version control on software artifacts produced in earlier phases of the software life cycle. New challenges of model differencing, merging, and evolution control arise when models are used as the central artifact. The goal of this work is to present a generic framework for model-based VMS which can be used to overcome the problems of traditional file-based VMS and provide model versioning services. (author)

  12. Intelligent-based Structural Damage Detection Model

    International Nuclear Information System (INIS)

    Lee, Eric Wai Ming; Yu, K.F.

    2010-01-01

This paper presents the application of a novel Artificial Neural Network (ANN) model for the diagnosis of structural damage. The ANN model, denoted as the GRNNFA, is a hybrid model combining the General Regression Neural Network Model (GRNN) and the Fuzzy ART (FA) model. It not only retains the important features of the GRNN and FA models (i.e. fast and stable network training and incremental growth of network structure) but also facilitates the removal of the noise embedded in the training samples. Structural damage alters the stiffness distribution of the structure, thereby changing the natural frequencies and mode shapes of the system. The measured modal parameter changes due to a particular damage are treated as patterns for that damage. The proposed GRNNFA model was trained to learn those patterns in order to detect the possible damage location of the structure. Simulated data is employed to verify and illustrate the procedures of the proposed ANN-based damage diagnosis methodology. The results of this study have demonstrated the feasibility of applying the GRNNFA model to structural damage diagnosis even when the training samples were noise contaminated.

  13. Intelligent-based Structural Damage Detection Model

    Science.gov (United States)

    Lee, Eric Wai Ming; Yu, Kin Fung

    2010-05-01

This paper presents the application of a novel Artificial Neural Network (ANN) model for the diagnosis of structural damage. The ANN model, denoted as the GRNNFA, is a hybrid model combining the General Regression Neural Network Model (GRNN) and the Fuzzy ART (FA) model. It not only retains the important features of the GRNN and FA models (i.e. fast and stable network training and incremental growth of network structure) but also facilitates the removal of the noise embedded in the training samples. Structural damage alters the stiffness distribution of the structure, thereby changing the natural frequencies and mode shapes of the system. The measured modal parameter changes due to a particular damage are treated as patterns for that damage. The proposed GRNNFA model was trained to learn those patterns in order to detect the possible damage location of the structure. Simulated data is employed to verify and illustrate the procedures of the proposed ANN-based damage diagnosis methodology. The results of this study have demonstrated the feasibility of applying the GRNNFA model to structural damage diagnosis even when the training samples were noise contaminated.

  14. A burnout prediction model based around char morphology

    Energy Technology Data Exchange (ETDEWEB)

    Tao Wu; Edward Lester; Michael Cloke [University of Nottingham, Nottingham (United Kingdom). School of Chemical, Environmental and Mining Engineering

    2006-05-15

Several combustion models have been developed that can make predictions about coal burnout and burnout potential. Most of these kinetic models require standard parameters such as volatile content and particle size to make a burnout prediction. This article presents a new model called the char burnout (ChB) model, which also uses detailed information about char morphology in its prediction. The input data to the model is based on information derived from two different image analysis techniques. One technique generates characterization data from real char samples, and the other predicts char types based on characterization data from image analysis of coal particles. The pyrolyzed chars in this study were created in a drop tube furnace operating at 1300°C, 200 ms, and 1% oxygen. Modeling results were compared with a different carbon burnout kinetic model as well as the actual burnout data from refiring the same chars in a drop tube furnace operating at 1300°C, 5% oxygen, and residence times of 200, 400, and 600 ms. The good agreement between the ChB model and the experimental data indicates that the inclusion of char morphology in combustion models could well improve model predictions. 38 refs., 5 figs., 6 tabs.

  15. Agent-based modeling in ecological economics.

    Science.gov (United States)

    Heckbert, Scott; Baynes, Tim; Reeson, Andrew

    2010-01-01

    Interconnected social and environmental systems are the domain of ecological economics, and models can be used to explore feedbacks and adaptations inherent in these systems. Agent-based modeling (ABM) represents autonomous entities, each with dynamic behavior and heterogeneous characteristics. Agents interact with each other and their environment, resulting in emergent outcomes at the macroscale that can be used to quantitatively analyze complex systems. ABM is contributing to research questions in ecological economics in the areas of natural resource management and land-use change, urban systems modeling, market dynamics, changes in consumer attitudes, innovation, and diffusion of technology and management practices, commons dilemmas and self-governance, and psychological aspects to human decision making and behavior change. Frontiers for ABM research in ecological economics involve advancing the empirical calibration and validation of models through mixed methods, including surveys, interviews, participatory modeling, and, notably, experimental economics to test specific decision-making hypotheses. Linking ABM with other modeling techniques at the level of emergent properties will further advance efforts to understand dynamics of social-environmental systems.
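
    A minimal agent-based sketch of the kind of mechanism described above: heterogeneous agents follow a simple behavioural rule on a shared resource, and an aggregate (emergent) outcome is observed over time. The resource dynamics, behavioural rule and parameters are invented for illustration only.

    ```python
    import random

    random.seed(0)

    # Agents harvest from a common-pool resource; each agent has a heterogeneous
    # "greed" parameter, and the resource regrows after harvest.
    class Agent:
        def __init__(self):
            self.greed = random.uniform(0.1, 0.9)   # heterogeneous characteristic

        def harvest(self, stock, n_agents):
            fair_share = stock / (2 * n_agents)
            return fair_share * (1 + self.greed)     # simple behavioural rule

    agents = [Agent() for _ in range(50)]
    stock = 1000.0
    for year in range(20):
        harvested = sum(a.harvest(stock, len(agents)) for a in agents)
        stock = max(stock - harvested, 0.0)
        stock *= 1.3                                 # regrowth
        print(f"year {year:2d}: stock = {stock:8.1f}")   # emergent macro-scale trajectory
    ```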

  16. Hot news recommendation system from heterogeneous websites based on bayesian model.

    Science.gov (United States)

    Xia, Zhengyou; Xu, Shengwu; Liu, Ningzhong; Zhao, Zhengkang

    2014-01-01

Most current news recommendation methods are suited to news that comes from a single news website, not to news drawn from many heterogeneous news websites. Previous research has proposed news recommender systems based on different strategies to provide news personalization services for online news readers. However, little work has been reported on utilizing hundreds of heterogeneous news websites to provide top hot news services for group customers (e.g., government staff). In this paper, we propose a hot news recommendation model based on a Bayesian model that draws on hundreds of different news websites. In the model, we determine whether an item is hot news by calculating its joint probability. We evaluate and compare our proposed recommendation model with the judgments of human experts on real data sets. Experimental results demonstrate the reliability and effectiveness of our method. We also implemented this model in the hot news recommendation system of the Hangzhou city government in 2013, where it achieved very good results.
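
    A hedged sketch of joint-probability ("naive Bayes" style) scoring of a news item from features aggregated over many websites; the features, prior and likelihood tables are illustrative placeholders, not values or the exact formulation from the paper.

    ```python
    import math

    prior_hot = 0.05   # hypothetical prior probability that an item is hot news

    likelihood = {
        "carried_by_many_sites": {"hot": 0.8, "not": 0.1},
        "on_front_page":         {"hot": 0.7, "not": 0.2},
        "published_last_6h":     {"hot": 0.6, "not": 0.3},
    }

    def posterior_hot(features):
        """P(hot | features) assuming conditional independence of the features."""
        log_hot = math.log(prior_hot)
        log_not = math.log(1 - prior_hot)
        for name, present in features.items():
            p_hot = likelihood[name]["hot"] if present else 1 - likelihood[name]["hot"]
            p_not = likelihood[name]["not"] if present else 1 - likelihood[name]["not"]
            log_hot += math.log(p_hot)
            log_not += math.log(p_not)
        return 1 / (1 + math.exp(log_not - log_hot))

    item = {"carried_by_many_sites": True, "on_front_page": True, "published_last_6h": False}
    print(f"P(hot news) = {posterior_hot(item):.2f}")
    ```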

  17. The uncertainty analysis of model results a practical guide

    CERN Document Server

    Hofer, Eduard

    2018-01-01

    This book is a practical guide to the uncertainty analysis of computer model applications. Used in many areas, such as engineering, ecology and economics, computer models are subject to various uncertainties at the level of model formulations, parameter values and input data. Naturally, it would be advantageous to know the combined effect of these uncertainties on the model results as well as whether the state of knowledge should be improved in order to reduce the uncertainty of the results most effectively. The book supports decision-makers, model developers and users in their argumentation for an uncertainty analysis and assists them in the interpretation of the analysis results.

  18. Modelling rainfall erosion resulting from climate change

    Science.gov (United States)

    Kinnell, Peter

    2016-04-01

    It is well known that soil erosion leads to agricultural productivity decline and contributes to water quality decline. The current widely used models for determining soil erosion for management purposes in agriculture focus on long term (~20 years) average annual soil loss and are not well suited to determining variations that occur over short timespans and as a result of climate change. Soil loss resulting from rainfall erosion is directly dependent on the product of runoff and sediment concentration both of which are likely to be influenced by climate change. This presentation demonstrates the capacity of models like the USLE, USLE-M and WEPP to predict variations in runoff and erosion associated with rainfall events eroding bare fallow plots in the USA with a view to modelling rainfall erosion in areas subject to climate change.
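
    For reference, the USLE estimates average annual soil loss as the product of rainfall erosivity (R), soil erodibility (K), slope length and steepness (LS), cover-management (C) and support practice (P) factors. The sketch below shows how a climate-change scenario can be explored by rescaling the rainfall erosivity factor; all factor values are hypothetical.

    ```python
    # Illustrative use of the USLE structure: A = R * K * LS * C * P.
    def usle_soil_loss(R, K, LS, C, P):
        """Average annual soil loss A (e.g., tons/acre/yr in US customary units)."""
        return R * K * LS * C * P

    baseline = usle_soil_loss(R=170, K=0.32, LS=1.1, C=0.25, P=1.0)
    # A wetter-climate scenario explored by scaling the rainfall erosivity factor R.
    wetter = usle_soil_loss(R=170 * 1.15, K=0.32, LS=1.1, C=0.25, P=1.0)
    print(f"baseline: {baseline:.1f}, wetter scenario: {wetter:.1f}")
    ```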

  19. An Industrial Model Based Disturbance Feedback Control Scheme

    DEFF Research Database (Denmark)

    Kawai, Fukiko; Nakazawa, Chikashi; Vinther, Kasper

    2014-01-01

This paper presents a model based disturbance feedback control scheme. Industrial process systems have traditionally been controlled by using relay and PID controllers. However, these controllers are affected by disturbances and model errors, and these effects degrade control performance. The authors propose a new control method that can decrease the negative impact of disturbances and model errors. The control method is motivated by industrial practice at Fuji Electric. Simulation tests are examined with a conventional PID controller and the disturbance feedback control. The simulation results...

  20. Predictive sensor based x-ray calibration using a physical model

    International Nuclear Information System (INIS)

    Fuente, Matias de la; Lutz, Peter; Wirtz, Dieter C.; Radermacher, Klaus

    2007-01-01

Many computer assisted surgery systems are based on intraoperative x-ray images. To achieve reliable and accurate results, these images have to be calibrated for geometric distortions, which can be divided into constant distortions and distortions caused by magnetic fields. Instead of using an intraoperative calibration phantom that has to be visible within each image, resulting in overlaid markers, the presented approach directly takes advantage of the physical background of the distortions. Based on a computed physical model of an image intensifier and a magnetic field sensor, an online compensation of distortions can be achieved without the need for an intraoperative calibration phantom. The model has to be adapted once to each specific image intensifier through calibration, which is based on an optimization algorithm systematically altering the physical model parameters until a minimal error is reached. Once calibrated, the model is able to predict the distortions caused by the measured magnetic field vector and build an appropriate dewarping function. The time needed for model calibration is not yet optimized and takes up to 4 h on a 3 GHz CPU. In contrast, the time needed for distortion correction is less than 1 s and therefore absolutely acceptable for intraoperative use. First evaluations showed that by using the model based dewarping algorithm the distortions of an XRII with a 21 cm FOV could be significantly reduced. The model was able to predict and compensate distortions by approximately 80%, to a remaining error of 0.45 mm (maximum) and 0.19 mm (rms).

  1. Electrochemical model of the polyaniline based organic memristive device

    International Nuclear Information System (INIS)

    Demin, V. A.; Erokhin, V. V.; Kashkarov, P. K.; Kovalchuk, M. V.

    2014-01-01

The electrochemical organic memristive device with a polyaniline active layer is a stand-alone device designed and realized to reproduce some synapse properties in innovative electronic circuits, including neuromorphic networks capable of learning. In this work, a new theoretical model of the polyaniline memristive device is presented. The developed model of organic memristive functioning is based on a detailed consideration of the possible electrochemical processes occurring in the active zone of this device. The calculated results not only provide a qualitative explanation of the characteristics observed in the experiment but also reproduce the measured current values quantitatively. It is shown how the memristive device behaves at zero potential difference relative to the reference electrode. This improved model can establish a basis for the design and prediction of properties of more complicated circuits and systems (including stochastic ones) based on organic memristive devices.

  2. A New Method for a Virtue-Based Responsible Conduct of Research Curriculum: Pilot Test Results.

    Science.gov (United States)

    Berling, Eric; McLeskey, Chet; O'Rourke, Michael; Pennock, Robert T

    2018-02-03

    Drawing on Pennock's theory of scientific virtues, we are developing an alternative curriculum for training scientists in the responsible conduct of research (RCR) that emphasizes internal values rather than externally imposed rules. This approach focuses on the virtuous characteristics of scientists that lead to responsible and exemplary behavior. We have been pilot-testing one element of such a virtue-based approach to RCR training by conducting dialogue sessions, modeled upon the approach developed by Toolbox Dialogue Initiative, that focus on a specific virtue, e.g., curiosity and objectivity. During these structured discussions, small groups of scientists explore the roles they think the focus virtue plays and should play in the practice of science. Preliminary results have shown that participants strongly prefer this virtue-based model over traditional methods of RCR training. While we cannot yet definitively say that participation in these RCR sessions contributes to responsible conduct, these pilot results are encouraging and warrant continued development of this virtue-based approach to RCR training.

  3. Test-Driven, Model-Based Systems Engineering

    DEFF Research Database (Denmark)

    Munck, Allan

Hearing systems have evolved over many years from simple mechanical devices (horns) to electronic units consisting of microphones, amplifiers, analog filters, loudspeakers, batteries, etc. Digital signal processors replaced analog filters to provide better performance and new features. Central... This thesis concerns methods for identifying, selecting and implementing tools for various aspects of model-based systems engineering. A comprehensive method was proposed that includes several novel steps, such as techniques for analyzing the gap between requirements and tool capabilities. The method was verified with good results in two case studies for the selection of a traceability tool (single-tool scenario) and a set of modeling tools (multi-tool scenarios). Models must be subjected to testing to allow engineers to predict functionality and performance of systems. Test-first strategies are known...

  4. Culturicon model: A new model for cultural-based emoticon

    Science.gov (United States)

    Zukhi, Mohd Zhafri Bin Mohd; Hussain, Azham

    2017-10-01

Emoticons are popular among users of distributed collective interaction for expressing their emotions, gestures and actions. Emoticons have been shown to help avoid misunderstanding of a message, to save attention and to improve communication among speakers of different native languages. However, beside the benefits that emoticons provide, research on emoticons from a cultural perspective is still lacking. As emoticons are crucial in global communication, culture should be one of the extensively researched aspects of distributed collective interaction. Therefore, this study attempts to explore and develop a model for cultural-based emoticons. Three cultural models that have been used in Human-Computer Interaction were studied: the Hall Culture Model, the Trompenaars and Hampden Culture Model, and the Hofstede Culture Model. The dimensions from these three models will be used in developing the proposed cultural-based emoticon model.

  5. A Kriging Model Based Finite Element Model Updating Method for Damage Detection

    Directory of Open Access Journals (Sweden)

    Xiuming Yang

    2017-10-01

    Full Text Available Model updating is an effective means of damage identification and surrogate modeling has attracted considerable attention for saving computational cost in finite element (FE model updating, especially for large-scale structures. In this context, a surrogate model of frequency is normally constructed for damage identification, while the frequency response function (FRF is rarely used as it usually changes dramatically with updating parameters. This paper presents a new surrogate model based model updating method taking advantage of the measured FRFs. The Frequency Domain Assurance Criterion (FDAC is used to build the objective function, whose nonlinear response surface is constructed by the Kriging model. Then, the efficient global optimization (EGO algorithm is introduced to get the model updating results. The proposed method has good accuracy and robustness, which have been verified by a numerical simulation of a cantilever and experimental test data of a laboratory three-story structure.
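
    A simplified sketch of the surrogate-based updating loop, assuming scikit-learn and SciPy are available: a Gaussian-process (Kriging) surrogate is fitted to sampled objective values and then minimized from several starts. A full EGO implementation would instead maximize expected improvement and refit iteratively, and the objective here merely stands in for an FE run plus an FDAC-type discrepancy evaluation.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(4)

    # Hypothetical "expensive" objective: discrepancy between measured and predicted FRFs
    # as a function of two normalized stiffness parameters.
    def expensive_objective(p):
        return (p[0] - 0.3) ** 2 + 2.0 * (p[1] - 0.7) ** 2 + 0.01 * np.sin(20 * p[0])

    # Design of experiments: sample the parameter space and evaluate the objective.
    X = rng.uniform(0, 1, size=(40, 2))
    y = np.array([expensive_objective(p) for p in X])

    # Kriging (Gaussian process) surrogate of the objective.
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), normalize_y=True)
    gp.fit(X, y)

    # Minimize the surrogate prediction from several random starting points.
    best = min(
        (minimize(lambda p: gp.predict(p.reshape(1, -1))[0], x0, bounds=[(0, 1), (0, 1)])
         for x0 in rng.uniform(0, 1, size=(10, 2))),
        key=lambda r: r.fun,
    )
    print("identified parameters:", best.x)
    ```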

  6. Graph configuration model based evaluation of the education-occupation match.

    Science.gov (United States)

    Gadar, Laszlo; Abonyi, Janos

    2018-01-01

To study education-occupation matching, we developed a bipartite network model of the education-to-work transition and a graph configuration model based metric. We studied the career paths of 15 thousand Hungarian students based on the integrated database of the National Tax Administration, the National Health Insurance Fund, and the higher education information system of the Hungarian Government. A brief analysis of the gender pay gap and the spatial distribution of over-education is presented to demonstrate the background of the research and the resulting open dataset. We highlighted the hierarchical and clustered structure of the career paths based on a multi-resolution analysis of graph modularity. The results of the cluster analysis can support policymakers in fine-tuning the fragmented program structure of higher education.

  7. Numeric, Agent-based or System dynamics model? Which modeling approach is the best for vast population simulation?

    Science.gov (United States)

    Cimler, Richard; Tomaskova, Hana; Kuhnova, Jitka; Dolezal, Ondrej; Pscheidl, Pavel; Kuca, Kamil

    2018-02-01

Alzheimer's disease is one of the most common mental illnesses. It is posited that more than 25% of the population is affected by some mental disease during their lifetime. Treatment of each patient draws resources from the economy concerned. Therefore, it is important to quantify the potential economic impact. Agent-based, system dynamics and numerical approaches to dynamic modeling of the population of the European Union and its patients with Alzheimer's disease are presented in this article. Simulations, their characteristics, and the results from the different modeling tools are compared, and the results of these approaches are checked against EU population growth predictions from Eurostat, the statistical office of the EU. The methodology used to create the models is described and all three modeling approaches are compared. The suitability of each modeling approach for population modeling is discussed. In this case study, all three approaches gave results corresponding to the EU population prediction. Moreover, we were able to predict the number of patients with AD and, depending on the modeling method, we were also able to monitor different characteristics of the population. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  8. Least-squares model-based halftoning

    Science.gov (United States)

    Pappas, Thrasyvoulos N.; Neuhoff, David L.

    1992-08-01

A least-squares model-based approach to digital halftoning is proposed. It exploits both a printer model and a model for visual perception. It attempts to produce an 'optimal' halftoned reproduction by minimizing the squared error between the response of the cascade of the printer and visual models to the binary image and the response of the visual model to the original gray-scale image. Conventional methods, such as clustered ordered dither, use the properties of the eye only implicitly, and resist printer distortions at the expense of spatial and gray-scale resolution. In previous work we showed that our printer model can be used to modify error diffusion to account for printer distortions. The modified error diffusion algorithm has better spatial and gray-scale resolution than conventional techniques, but produces some well-known artifacts and asymmetries because it does not make use of an explicit eye model. Least-squares model-based halftoning uses explicit eye models and relies on printer models that predict distortions and exploit them to increase, rather than decrease, both spatial and gray-scale resolution. We have shown that the one-dimensional least-squares problem, in which each row or column of the image is halftoned independently, can be implemented with the Viterbi algorithm. Unfortunately, no closed form solution can be found in two dimensions. The two-dimensional least-squares solution is obtained by iterative techniques. Experiments show that least-squares model-based halftoning produces more gray levels and better spatial resolution than conventional techniques. We also show that the least-squares approach eliminates the problems associated with error diffusion. Model-based halftoning can be especially useful in transmission of high quality documents using high fidelity gray-scale image encoders. As we have shown, in such cases halftoning can be performed at the receiver, just before printing. Apart from coding efficiency, this approach

  9. Modeling of memristor-based chaotic systems using nonlinear Wiener adaptive filters based on backslash operator

    International Nuclear Information System (INIS)

    Zhao, Yibo; Jiang, Yi; Feng, Jiuchao; Wu, Lifu

    2016-01-01

Highlights: • Novel nonlinear Wiener adaptive filters based on the backslash operator are proposed. • An identification approach to memristor-based chaotic systems using the proposed adaptive filters. • The weight update algorithms and convergence characteristics of the proposed adaptive filters are derived. - Abstract: Memristor-based chaotic systems have complex dynamical behaviors, characterized by nonlinearity and hysteresis. Modeling and identification of their nonlinear model is an important prerequisite for analyzing the dynamical behavior of memristor-based chaotic systems. This paper presents a novel nonlinear Wiener adaptive filtering identification approach to memristor-based chaotic systems. The linear part of the Wiener model consists of linear transversal adaptive filters; the nonlinear part consists of nonlinear adaptive filters based on the backslash operator, which captures the hysteresis characteristics of the memristor. The weight update algorithms for the linear and nonlinear adaptive filters are derived. Computer simulation results show the effectiveness of the approach as well as its fast convergence characteristics. Compared with adaptive nonlinear polynomial filters, the proposed nonlinear adaptive filters have a smaller identification error.

  10. State-space modelling for the ejector-based refrigeration system driven by low grade energy

    International Nuclear Information System (INIS)

    Xue, Binqiang; Cai, Wenjian; Wang, Xinli

    2015-01-01

This paper presents a novel global state-space model to describe the ejector-based refrigeration system, which includes the dynamics of the two heat exchangers and the static properties of the ejector, compressor and expansion valve. Different from existing methods, the proposed method introduces some intermediate variables into the dynamic modelling when developing reduced order models of the heat exchangers (evaporator and condenser) based on the Number of Transfer Units (NTU) method. This global model with fewer dimensions is much simpler and more convenient for real-time control system design, compared with other dynamic models. Finally, the proposed state-space model has been validated by dynamic response experiments on the ejector-based refrigeration cycle with refrigerant R134a. The experimental results indicate that the proposed model predicts the dynamics of the ejector-based refrigeration system well. - Highlights: • A low-order state-space model of the ejector-based refrigeration system is presented. • Reduced-order models of the heat exchangers are developed based on the NTU method. • The variations of mass flow rates are introduced in multiple fluid phase regions. • Experimental results show the proposed model has a good performance.
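
    A minimal sketch of the NTU-style relation that underlies such reduced-order evaporator/condenser models: with one fluid changing phase at an approximately constant temperature, the effectiveness reduces to 1 - exp(-NTU). The numbers below are illustrative and not taken from the paper.

    ```python
    import math

    def phase_change_hx_duty(UA, m_dot, cp, T_in, T_sat):
        """Heat duty Q for a single-phase stream (m_dot, cp, inlet T_in) exchanging heat
        with a refrigerant at saturation temperature T_sat (phase-change side, C_r = 0)."""
        C = m_dot * cp                      # capacity rate of the single-phase stream
        NTU = UA / C
        eff = 1.0 - math.exp(-NTU)          # effectiveness for a constant-temperature side
        return eff * C * (T_in - T_sat)

    # Example: chilled-water evaporator with hypothetical parameter values.
    Q = phase_change_hx_duty(UA=1200.0, m_dot=0.5, cp=4180.0, T_in=12.0, T_sat=5.0)
    print(f"evaporator duty ~ {Q/1000:.1f} kW")
    ```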

  11. Process of Integrating Screening and Detailed Risk-based Modeling Analyses to Ensure Consistent and Scientifically Defensible Results

    International Nuclear Information System (INIS)

    Buck, John W.; McDonald, John P.; Taira, Randal Y.

    2002-01-01

    To support cleanup and closure of these tanks, modeling is performed to understand and predict potential impacts to human health and the environment. Pacific Northwest National Laboratory developed a screening tool for the United States Department of Energy, Office of River Protection that estimates the long-term human health risk, from a strategic planning perspective, posed by potential tank releases to the environment. This tool is being conditioned to more detailed model analyses to ensure consistency between studies and to provide scientific defensibility. Once the conditioning is complete, the system will be used to screen alternative cleanup and closure strategies. The integration of screening and detailed models provides consistent analyses, efficiencies in resources, and positive feedback between the various modeling groups. This approach of conditioning a screening methodology to more detailed analyses provides decision-makers with timely and defensible information and increases confidence in the results on the part of clients, regulators, and stakeholders

  12. Simulation-Based Dynamic Passenger Flow Assignment Modelling for a Schedule-Based Transit Network

    Directory of Open Access Journals (Sweden)

    Xiangming Yao

    2017-01-01

Full Text Available The online operation management and offline policy evaluation of complex transit networks require an effective dynamic traffic assignment (DTA) method that can capture the temporal-spatial nature of traffic flows. The objective of this work is to propose a simulation-based dynamic passenger assignment framework and models for such applications in the context of schedule-based rail transit systems. In the simulation framework, travellers are regarded as individual agents who are able to obtain complete information on current traffic conditions. A combined route selection model integrating pre-trip route selection and in-trip route switching is established for achieving the dynamic network flow equilibrium status. Train agents are operated strictly according to the timetable, and their capacity limitation is considered. A continuous time-driven simulator based on the proposed framework and models is developed, and its performance is illustrated on the large-scale network of the Beijing subway. The results indicate that more than 0.8 million individual passengers and thousands of trains can be simulated simultaneously at a speed ten times faster than real time. This study provides an efficient approach to analyzing the dynamic demand-supply relationship in large schedule-based transit networks.

  13. Comparison of results of experimental research with numerical calculations of a model one-sided seal

    Directory of Open Access Journals (Sweden)

    Joachimiak Damian

    2015-06-01

Full Text Available This paper presents the results of experimental and numerical research on a model segment of a labyrinth seal at different levels of wear. The analysis covers the extent of leakage and the distribution of static pressure in the seal chambers and in the planes upstream and downstream of the segment. The measurement data have been compared with the results of numerical calculations obtained using commercial software. Based on the flow conditions occurring in the area subjected to calculations, the size of the mesh defined by the parameter y+ has been analyzed and the selection of the turbulence model has been described. The numerical calculations were based on the measurable thermodynamic parameters in the seal segments of steam turbines. The work contains a comparison of the mass flow and the distribution of static pressure in the seal chambers obtained from the measurements and calculated numerically for a model seal segment at different levels of wear.

  14. Falling in the elderly: Do statistical models matter for performance criteria of fall prediction? Results from two large population-based studies.

    Science.gov (United States)

    Kabeshova, Anastasiia; Launay, Cyrille P; Gromov, Vasilii A; Fantino, Bruno; Levinoff, Elise J; Allali, Gilles; Beauchet, Olivier

    2016-01-01

To compare performance criteria (i.e., sensitivity, specificity, positive predictive value, negative predictive value, area under the receiver operating characteristic curve and accuracy) of linear and non-linear statistical models for fall risk in older community-dwellers. Participants were recruited in two large population-based studies, "Prévention des Chutes, Réseau 4" (PCR4, n=1760, cross-sectional design, retrospective collection of falls) and "Prévention des Chutes Personnes Agées" (PCPA, n=1765, cohort design, prospective collection of falls). Six linear statistical models (i.e., logistic regression, discriminant analysis, Bayes network algorithm, decision tree, random forest, boosted trees), three non-linear statistical models corresponding to artificial neural networks (multilayer perceptron, genetic algorithm and neuroevolution of augmenting topologies [NEAT]) and the adaptive neuro-fuzzy inference system (ANFIS) were used. Falls ≥1, characterizing fallers, and falls ≥2, characterizing recurrent fallers, were used as outcomes. Data from the two studies were analyzed separately and together. NEAT and ANFIS had better performance criteria compared to the other models. The highest performance criteria were reported with NEAT when using the PCR4 database and falls ≥1, and with both NEAT and ANFIS when pooling data together and using falls ≥2. However, sensitivity and specificity were unbalanced. Sensitivity was higher than specificity when identifying fallers, whereas the converse was found when predicting recurrent fallers. Our results showed that NEAT and ANFIS were the non-linear statistical models with the best performance criteria for the prediction of falls, but their sensitivity and specificity were unbalanced, underscoring that the models should be used respectively for the screening of fallers and the diagnosis of recurrent fallers. Copyright © 2015 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
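
    For reference, the performance criteria compared above can be computed from a confusion matrix and the prediction scores, as in this small sketch (hypothetical labels and scores; scikit-learn is assumed only for the AUC).

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    y_true = np.array([1, 0, 0, 1, 1, 0, 0, 0, 1, 0])                 # 1 = faller
    score  = np.array([0.9, 0.2, 0.4, 0.7, 0.3, 0.1, 0.6, 0.2, 0.8, 0.5])
    y_pred = (score >= 0.5).astype(int)

    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))

    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)                 # positive predictive value
    npv = tn / (tn + fn)                 # negative predictive value
    accuracy = (tp + tn) / len(y_true)
    auc = roc_auc_score(y_true, score)   # area under the ROC curve
    print(sensitivity, specificity, ppv, npv, accuracy, auc)
    ```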

  15. A Process-Based Transport-Distance Model of Aeolian Transport

    Science.gov (United States)

    Naylor, A. K.; Okin, G.; Wainwright, J.; Parsons, A. J.

    2017-12-01

We present a new approach to modeling aeolian transport based on transport distance. Particle fluxes are based on statistical probabilities of particle detachment and distributions of transport lengths, which are functions of particle size classes. A computational saltation model is used to simulate transport distances for a variety of particle sizes. These are fit to an exponential distribution, which has the advantages of computational economy, concordance with current field measurements, and a meaningful relationship to theoretical assumptions about mean and median particle transport distance. This novel approach includes particle-particle interactions, which are important for sustaining aeolian transport and dust emission. Results from this model are compared with results from both bulk and particle-size-specific transport equations as well as empirical wind tunnel studies. The transport-distance approach has been successfully used for hydraulic processes, and extending this methodology from hydraulic to aeolian transport opens up the possibility of modeling joint transport by wind and water using consistent physics. Particularly in nutrient-limited environments, modeling the joint action of aeolian and hydraulic transport is essential for understanding the spatial distribution of biomass across landscapes and how it responds to climatic variability and change.
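
    A rough sketch of the transport-distance idea, assuming exponential transport-length distributions whose means depend on particle size class; the detachment probabilities, mean distances and grid used here are hypothetical, not values from the model described above.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Per size class: detachment probability and mean transport length (hypothetical).
    size_classes = {"fine":   {"p_detach": 0.20, "mean_dist": 3.0},
                    "medium": {"p_detach": 0.10, "mean_dist": 1.0},
                    "coarse": {"p_detach": 0.03, "mean_dist": 0.3}}

    n_cells, cell_len, n_particles = 50, 1.0, 10_000
    for name, prm in size_classes.items():
        deposition = np.zeros(n_cells)
        start = rng.integers(0, n_cells, n_particles) * cell_len
        moves = rng.random(n_particles) < prm["p_detach"]            # detachment events
        dist = rng.exponential(prm["mean_dist"], n_particles)         # transport lengths
        landing = np.where(moves, start + dist, start)
        landing = np.clip((landing / cell_len).astype(int), 0, n_cells - 1)
        np.add.at(deposition, landing, 1)                             # downwind deposition
        print(name, "mean travel of moved particles:", round(dist[moves].mean(), 2), "m")
    ```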

  16. COMPARISON of FUZZY-BASED MODELS in LANDSLIDE HAZARD MAPPING

    Directory of Open Access Journals (Sweden)

    N. Mijani

    2017-09-01

Full Text Available Landslides are one of the main geomorphic processes affecting development prospects in mountainous areas and causing disastrous accidents. A landslide is an event governed by different uncertain criteria such as altitude, slope, aspect, land use, vegetation density, precipitation, distance from the river and distance from the road network. This research aims to compare and evaluate different fuzzy-based models, including the Fuzzy Analytic Hierarchy Process (Fuzzy-AHP), Fuzzy Gamma and Fuzzy-OR. The main contribution of this paper is the comprehensive treatment of the criteria causing landslide hazard, considering their uncertainties, and the comparison of different fuzzy-based models. The evaluation is quantified by the Density Ratio (DR) and Quality Sum (QS). The proposed methodology was implemented in Sari, an Iranian city which has faced multiple landslide accidents in recent years due to its particular environmental conditions. The accuracy assessment based on these quantifiers demonstrated that the Fuzzy-AHP model has higher accuracy in landslide hazard zonation than the other two models. The accuracy of the zoning obtained from the Fuzzy-AHP model is 0.92 and 0.45 based on the Precision (P) and QS indicators, respectively. Based on the obtained landslide hazard maps, Fuzzy-AHP, Fuzzy Gamma and Fuzzy-OR cover 13, 26 and 35 percent of the study area, respectively, with a very high risk level. Based on these findings, the Fuzzy-AHP model was selected as the most appropriate method for landslide zoning in the city of Sari, and the Fuzzy Gamma method ranks second with a minor difference.

  17. Model based process-product design and analysis

    DEFF Research Database (Denmark)

    Gani, Rafiqul

This paper gives a perspective on modelling and the important role it has within product-process design and analysis. Different modelling issues related to the development and application of systematic model-based solution approaches for product-process design are discussed, and the need for a hybrid model-based framework is highlighted. This framework should be able to manage knowledge-data, models, and associated methods and tools integrated with design work-flows and data-flows for specific product-process design problems. In particular, the framework needs to manage models of different types, forms and complexity, together with their associated parameters. An example of a model-based system for the design of chemicals-based formulated products is also given.

  18. An animal model of schizophrenia based on chronic LSD administration: old idea, new results.

    Science.gov (United States)

    Marona-Lewicka, Danuta; Nichols, Charles D; Nichols, David E

    2011-09-01

    Many people who take LSD experience a second temporal phase of LSD intoxication that is qualitatively different, and was described by Daniel Freedman as "clearly a paranoid state." We have previously shown that the discriminative stimulus effects of LSD in rats also occur in two temporal phases, with initial effects mediated by activation of 5-HT(2A) receptors (LSD30), and the later temporal phase mediated by dopamine D2-like receptors (LSD90). Surprisingly, we have now found that non-competitive NMDA antagonists produced full substitution in LSD90 rats, but only in older animals, whereas in LSD30, or in younger animals, these drugs did not mimic LSD. Chronic administration of low doses of LSD (>3 months, 0.16 mg/kg every other day) induces a behavioral state characterized by hyperactivity and hyperirritability, increased locomotor activity, anhedonia, and impairment in social interaction that persists at the same magnitude for at least three months after cessation of LSD treatment. These behaviors, which closely resemble those associated with psychosis in humans, are not induced by withdrawal from LSD; rather, they are the result of neuroadaptive changes occurring in the brain during the chronic administration of LSD. These persistent behaviors are transiently reversed by haloperidol and olanzapine, but are insensitive to MDL-100907. Gene expression analysis data show that chronic LSD treatment produced significant changes in multiple neurotransmitter system-related genes, including those for serotonin and dopamine. Thus, we propose that chronic treatment of rats with low doses of LSD can serve as a new animal model of psychosis that may mimic the development and progression of schizophrenia, as well as model the established disease better than current acute drug administration models utilizing amphetamine or NMDA antagonists such as PCP. Copyright © 2011 Elsevier Ltd. All rights reserved.

  19. AN ANIMAL MODEL OF SCHIZOPHRENIA BASED ON CHRONIC LSD ADMINISTRATION: OLD IDEA, NEW RESULTS

    Science.gov (United States)

    Marona-Lewicka, Danuta; Nichols, Charles D.; Nichols, David E.

    2011-01-01

    Many people who take LSD experience a second temporal phase of LSD intoxication that is qualitatively different, and was described by Daniel Freedman as “clearly a paranoid state.” We have previously shown that the discriminative stimulus effects of LSD in rats also occur in two temporal phases, with initial effects mediated by activation of 5-HT2A receptors (LSD30), and the later temporal phase mediated by dopamine D2-like receptors (LSD90). Surprisingly, we have now found that non-competitive NMDA antagonists produced full substitution in LSD90 rats, but only in older animals, whereas in LSD30, or in younger animals, these drugs did not mimic LSD. Chronic administration of low doses of LSD (>3 months, 0.16 mg/kg every other day) induces a behavioral state characterized by hyperactivity and hyperirritability, increased locomotor activity, anhedonia, and impairment in social interaction that persists at the same magnitude for at least three months after cessation of LSD treatment. These behaviors, which closely resemble those associated with psychosis in humans, are not induced by withdrawal from LSD; rather, they are the result of neuroadaptive changes occurring in the brain during the chronic administration of LSD. These persistent behaviors are transiently reversed by haloperidol and olanzapine, but are insensitive to MDL-100907. Gene expression analysis data show that chronic LSD treatment produced significant changes in multiple neurotransmitter system-related genes, including those for serotonin and dopamine. Thus, we propose that chronic treatment of rats with low doses of LSD can serve as a new animal model of psychosis that may mimic the development and progression of schizophrenia, as well as model the established disease better than current acute drug administration models utilizing amphetamine or NMDA antagonists such as PCP. PMID:21352832

  20. Modeling of Mixing Behavior in a Combined Blowing Steelmaking Converter with a Filter-Based Euler-Lagrange Model

    Science.gov (United States)

    Li, Mingming; Li, Lin; Li, Qiang; Zou, Zongshu

    2018-05-01

A filter-based Euler-Lagrange multiphase flow model is used to study the mixing behavior in a combined blowing steelmaking converter. The Euler-based volume of fluid approach is employed to simulate the top blowing, while the Lagrange-based discrete phase model, which embeds the local volume change of rising bubbles, is used for the bottom blowing. A filter-based turbulence method based on the local meshing resolution is proposed, aiming to improve the modeling of turbulent eddy viscosities. The model validity is verified through comparison with physical experiments in terms of mixing curves and mixing times. The effects of the bottom gas flow rate on bath flow and mixing behavior are investigated, and the inherent reasons for the mixing result are clarified in terms of the characteristics of the bottom-blowing plumes, the interaction between the plumes and the top-blowing jets, and the change of the bath flow structure.

  1. Research of Coal Resources Reserves Prediction Based on GM (1, 1) Model

    Science.gov (United States)

    Xiao, Jiancheng

    2018-01-01

To forecast China's coal reserves, this paper uses GM (1, 1) grey forecasting theory to establish a grey forecasting model of China's coal reserves based on data from 2002 to 2009, and obtains the trend of coal resource reserves under the current economic and social development situation. A residual test model is also established, making the prediction model more accurate. The results show that China's coal reserves can sustain production for at least 300 years. The results are similar to mainstream forecasts and in line with objective reality.
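
    A minimal sketch of the standard GM (1, 1) construction mentioned above: an accumulated generating operation (AGO), a least-squares estimate of the development coefficient and grey input, the time response of the whitened equation, and an inverse AGO. The reserve series used here is invented, not the paper's data.

    ```python
    import numpy as np

    def gm11_forecast(x0, horizon):
        """Grey GM(1,1) forecast: fit on the series x0 and predict `horizon` steps ahead."""
        x0 = np.asarray(x0, dtype=float)
        x1 = np.cumsum(x0)                                   # accumulated generating operation
        z1 = 0.5 * (x1[1:] + x1[:-1])                        # background values
        B = np.column_stack([-z1, np.ones_like(z1)])
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]     # development coefficient, grey input
        k = np.arange(len(x0) + horizon)
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a    # time response of the whitened equation
        x0_hat = np.concatenate(([x1_hat[0]], np.diff(x1_hat)))   # inverse AGO
        return x0_hat[:len(x0)], x0_hat[len(x0):]

    fitted, future = gm11_forecast([71.2, 73.0, 74.1, 75.9, 77.4, 78.8], horizon=3)
    print(np.round(fitted, 2), np.round(future, 2))
    ```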

  2. Metamodel-based inverse method for parameter identification: elastic-plastic damage model

    Science.gov (United States)

    Huang, Changwu; El Hami, Abdelkhalak; Radi, Bouchaïb

    2017-04-01

This article proposes a metamodel-based inverse method for material parameter identification and applies it to elastic-plastic damage model parameter identification. An elastic-plastic damage model is presented and implemented in numerical simulation. The metamodel-based inverse method is proposed in order to overcome the computational cost of the conventional inverse method. In the metamodel-based inverse method, a Kriging metamodel is constructed based on the experimental design in order to model the relationship between the material parameters and the objective function values of the inverse problem, and the optimization procedure is then executed using the metamodel. The application of the presented material model and the proposed parameter identification method to the standard A 2017-T4 tensile test proves that the presented elastic-plastic damage model is adequate to describe the material's mechanical behaviour, and that the proposed metamodel-based inverse method not only enhances the efficiency of parameter identification but also gives reliable results.

  3. Grid Transmission Expansion Planning Model Based on Grid Vulnerability

    Science.gov (United States)

    Tang, Quan; Wang, Xi; Li, Ting; Zhang, Quanming; Zhang, Hongli; Li, Huaqiang

    2018-03-01

Based on grid vulnerability and uniformity theory, a global network structure and state vulnerability factor model is proposed to measure different grid configurations, and a multi-objective power grid planning model is established that considers global network vulnerability, economy, and grid security constraints. An improved genetic algorithm with chaos crossover and mutation is used to search for the optimal plan. Because the objectives of the multi-objective optimization have non-uniform dimensions and the weights are not easily assigned, the principal component analysis (PCA) method is used to comprehensively assess the population in every generation, making the assessment results more objective and credible. The feasibility and effectiveness of the proposed model are validated by simulation results for the Garver 6-bus and Garver 18-bus systems.

  4. A probabilistic graphical model based stochastic input model construction

    International Nuclear Information System (INIS)

    Wan, Jiang; Zabaras, Nicholas

    2014-01-01

    Model reduction techniques have been widely used in modeling of high-dimensional stochastic input in uncertainty quantification tasks. However, the probabilistic modeling of random variables projected into reduced-order spaces presents a number of computational challenges. Due to the curse of dimensionality, the underlying dependence relationships between these random variables are difficult to capture. In this work, a probabilistic graphical model based approach is employed to learn the dependence by running a number of conditional independence tests using observation data. Thus a probabilistic model of the joint PDF is obtained and the PDF is factorized into a set of conditional distributions based on the dependence structure of the variables. The estimation of the joint PDF from data is then transformed to estimating conditional distributions under reduced dimensions. To improve the computational efficiency, a polynomial chaos expansion is further applied to represent the random field in terms of a set of standard random variables. This technique is combined with both linear and nonlinear model reduction methods. Numerical examples are presented to demonstrate the accuracy and efficiency of the probabilistic graphical model based stochastic input models. - Highlights: • Data-driven stochastic input models without the assumption of independence of the reduced random variables. • The problem is transformed to a Bayesian network structure learning problem. • Examples are given in flows in random media

  5. Results of the benchmark for blade structural models, part A

    DEFF Research Database (Denmark)

    Lekou, D.J.; Chortis, D.; Belen Fariñas, A.

    2013-01-01

A benchmark on structural design methods for blades was performed within the InnWind.Eu project under WP2 "Lightweight Rotor", Task 2.2 "Lightweight structural design". The present document describes the results of the comparison simulation runs that were performed by the partners involved in Task 2.2 of the InnWind.Eu project. The benchmark is based on the reference wind turbine and the reference blade provided by DTU [1]. "Structural Concept developers/modelers" of WP2 were provided with the necessary input for a comparison numerical simulation run, upon definition of the reference blade

  6. Short-Term Load Forecasting Model Based on Quantum Elman Neural Networks

    Directory of Open Access Journals (Sweden)

    Zhisheng Zhang

    2016-01-01

Full Text Available A short-term load forecasting model based on quantum Elman neural networks is constructed in this paper. Quantum computation and the Elman feedback mechanism are integrated into quantum Elman neural networks. Quantum computation can effectively improve the approximation capability and the information processing ability of the neural networks. Quantum Elman neural networks have not only a feedforward connection but also a feedback connection. The feedback connection between the hidden nodes and the context nodes constitutes a state feedback within the internal system, which gives the network a specific dynamic memory performance. Phase space reconstruction theory is the theoretical basis for constructing the forecasting model, and the training samples are formed by means of the K-nearest neighbor approach. Through an example simulation, the testing results show that the model based on quantum Elman neural networks is better than the models based on the quantum feedforward neural network, the conventional Elman neural network, and the conventional feedforward neural network. The proposed model can therefore effectively improve prediction accuracy. The research in this paper lays a theoretical foundation for the practical engineering application of the short-term load forecasting model based on quantum Elman neural networks.
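
    A small sketch of the sample-formation step described above: a scalar load series is embedded by time-delay (phase-space) reconstruction, and the K nearest reconstructed states are selected as training samples for the current state. The embedding parameters and the load series are illustrative, not the paper's data.

    ```python
    import numpy as np

    def delay_embed(series, dim, tau):
        """Phase-space reconstruction: rows are [x(t), x(t+tau), ..., x(t+(dim-1)*tau)]."""
        series = np.asarray(series, dtype=float)
        n = len(series) - (dim - 1) * tau
        return np.column_stack([series[i * tau: i * tau + n] for i in range(dim)])

    rng = np.random.default_rng(6)
    load = 100 + 10 * np.sin(np.linspace(0, 20 * np.pi, 1000)) + rng.normal(0, 1, 1000)

    X = delay_embed(load, dim=4, tau=3)          # reconstructed states
    targets = load[(4 - 1) * 3 + 1:]             # one-step-ahead load for each state
    X = X[:-1]                                   # align states with their targets

    # K-nearest-neighbour selection of training samples for the current state.
    query, k = X[-1], 20
    idx = np.argsort(np.linalg.norm(X[:-1] - query, axis=1))[:k]
    train_inputs, train_targets = X[idx], targets[idx]
    print("naive k-NN forecast of the next load value:", round(train_targets.mean(), 2))
    ```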

  7. An object-based visual attention model for robotic applications.

    Science.gov (United States)

    Yu, Yuanlong; Mann, George K I; Gosine, Raymond G

    2010-10-01

By extending the integrated competition hypothesis, this paper presents an object-based visual attention model, which selects one object of interest using low-dimensional features, so that visual perception starts from a fast attentional selection procedure. The proposed attention model involves seven modules: learning of object representations stored in a long-term memory (LTM), preattentive processing, top-down biasing, bottom-up competition, mediation between top-down and bottom-up ways, generation of saliency maps, and perceptual completion processing. It works in two phases: a learning phase and an attending phase. In the learning phase, the corresponding object representation is trained statistically when one object is attended. A dual-coding object representation consisting of local and global codings is proposed. Intensity, color, and orientation features are used to build the local coding, and a contour feature is employed to constitute the global coding. In the attending phase, the model first preattentively segments the visual field into discrete proto-objects using Gestalt rules. If a task-specific object is given, the model recalls the corresponding representation from LTM and deduces the task-relevant feature(s) to evaluate top-down biases. The mediation between automatic bottom-up competition and conscious top-down biasing is then performed to yield a location-based saliency map. By combining location-based saliency within each proto-object, the proto-object-based saliency is evaluated. The most salient proto-object is selected for attention, and it is finally put into the perceptual completion processing module to yield a complete object region. This model has been applied to distinct robotic tasks: detection of task-specific stationary and moving objects. Experimental results under different conditions are shown to validate this model.

  8. Web Based VRML Modelling

    NARCIS (Netherlands)

    Kiss, S.; Sarfraz, M.

    2004-01-01

    Presents a method to connect VRML (Virtual Reality Modeling Language) and Java components in a Web page using EAI (External Authoring Interface), which makes it possible to interactively generate and edit VRML meshes. The meshes used are based on regular grids, to provide an interaction and modeling

  9. Research on Model-Based Fault Diagnosis for a Gas Turbine Based on Transient Performance

    Directory of Open Access Journals (Sweden)

    Detang Zeng

    2018-01-01

Full Text Available It is essential to monitor and to diagnose faults in rotating machinery with a high thrust–weight ratio and complex structure for a variety of industrial applications, for which reliable signal measurements are required. However, the measured values consist of the true values of the parameters, the inertia of measurements, random errors and systematic errors. Such signals cannot reflect the true performance state and the health state of rotating machinery accurately. High-quality, steady-state measurements are necessary for most current diagnostic methods. Unfortunately, it is hard to obtain these kinds of measurements for most rotating machinery. Diagnosis based on transient performance is a useful tool that can potentially solve this problem. A model-based fault diagnosis method for gas turbines based on transient performance is proposed in this paper. The fault diagnosis consists of a dynamic simulation model, a diagnostic scheme, and an optimization algorithm. A high-accuracy, nonlinear, dynamic gas turbine model using a modular modeling method is presented that involves thermophysical properties, a component characteristic chart, and system inertia. The startup process is simulated using this model. The consistency between the simulation results and the field operation data shows the validity of the model and the advantages of transient accumulated deviation. In addition, a diagnostic scheme is designed to fulfill this process. Finally, cuckoo search is selected to solve the optimization problem in fault diagnosis. Comparative diagnostic results for a gas turbine before and after washing indicate the improved effectiveness and accuracy of the proposed method of using data from transient processes, compared with traditional methods using data from the steady state.
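
    Cuckoo search is named as the optimizer in the diagnostic scheme. The following is a generic, simplified sketch of the algorithm (Lévy-flight moves plus abandonment of a fraction of nests) minimizing a toy objective; the actual gas-turbine mismatch function, bounds and tuning used in the paper are not given in the record, so everything below is illustrative.

      # A generic, simplified cuckoo search sketch; objective, bounds and parameters
      # are placeholders, not the paper's settings.
      import numpy as np
      from math import gamma, sin, pi

      def levy_step(size, beta=1.5, rng=None):
          # Mantegna's algorithm for Levy-distributed step lengths
          rng = rng or np.random.default_rng()
          sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
                   (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
          u = rng.normal(0, sigma, size)
          v = rng.normal(0, 1, size)
          return u / np.abs(v) ** (1 / beta)

      def cuckoo_search(f, lb, ub, n_nests=15, pa=0.25, iters=200, seed=0):
          rng = np.random.default_rng(seed)
          dim = len(lb)
          nests = rng.uniform(lb, ub, (n_nests, dim))
          fit = np.array([f(x) for x in nests])
          best = nests[fit.argmin()].copy()
          for _ in range(iters):
              # Generate new solutions by Levy flights around the current nests
              step = 0.01 * levy_step((n_nests, dim), rng=rng) * (nests - best)
              trial = np.clip(nests + step, lb, ub)
              trial_fit = np.array([f(x) for x in trial])
              improved = trial_fit < fit
              nests[improved], fit[improved] = trial[improved], trial_fit[improved]
              # Abandon a fraction pa of nests and rebuild them randomly
              abandon = rng.random(n_nests) < pa
              nests[abandon] = rng.uniform(lb, ub, (abandon.sum(), dim))
              fit[abandon] = np.array([f(x) for x in nests[abandon]])
              best = nests[fit.argmin()].copy()
          return best, fit.min()

      # Toy quadratic objective standing in for the measurement-mismatch function
      best, val = cuckoo_search(lambda x: np.sum((x - 0.3) ** 2), lb=np.zeros(3), ub=np.ones(3))
      print(best, val)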

  10. Physics Based Modeling of Compressible Turbulence

    Science.gov (United States)

    2016-11-07

AFRL-AFOSR-VA-TR-2016-0345, Physics-Based Modeling of Compressible Turbulence, Parviz Moin, Leland Stanford Junior University, CA, Final Report 09/13/2016 ... on the AFOSR project (FA9550-11-1-0111) entitled: Physics based modeling of compressible turbulence. The period of performance was June 15, 2011 ...

  11. A cavitation model based on Eulerian stochastic fields

    Science.gov (United States)

    Magagnato, F.; Dumond, J.

    2013-12-01

    Non-linear phenomena can often be described using probability density functions (pdf) and pdf transport models. Traditionally the simulation of pdf transport requires Monte-Carlo codes based on Lagrangian "particles" or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic-field method solving pdf transport based on Eulerian fields has been proposed which eliminates the necessity to mix Eulerian and Lagrangian techniques or prescribed pdf assumptions. In the present work, for the first time the stochastic-field method is applied to multi-phase flow and in particular to cavitating flow. To validate the proposed stochastic-field cavitation model, two applications are considered. Firstly, sheet cavitation is simulated in a Venturi-type nozzle. The second application is an innovative fluidic diode which exhibits coolant flashing. Agreement with experimental results is obtained for both applications with a fixed set of model constants. The stochastic-field cavitation model captures the wide range of pdf shapes present at different locations.

  12. Automatic CT-based finite element model generation for temperature-based death time estimation: feasibility study and sensitivity analysis.

    Science.gov (United States)

    Schenkl, Sebastian; Muggenthaler, Holger; Hubig, Michael; Erdmann, Bodo; Weiser, Martin; Zachow, Stefan; Heinrich, Andreas; Güttler, Felix Victor; Teichgräber, Ulf; Mall, Gita

    2017-05-01

Temperature-based death time estimation is based either on simple phenomenological models of corpse cooling or on detailed physical heat transfer models. The latter are much more complex but allow a higher accuracy of death time estimation, as in principle, all relevant cooling mechanisms can be taken into account. Here, a complete workflow for finite element-based cooling simulation is presented. The following steps are demonstrated on a CT phantom: (1) computed tomography (CT) scan; (2) segmentation of the CT images for thermodynamically relevant features of individual geometries and compilation in a geometric computer-aided design (CAD) model; (3) conversion of the segmentation result into a finite element (FE) simulation model; (4) computation of the model cooling curve (MOD); (5) calculation of the cooling time (CTE). For the first time in FE-based cooling time estimation, the steps from the CT image over segmentation to FE model generation are performed semi-automatically. The cooling time calculation results are compared to cooling measurements performed on the phantoms under controlled conditions. In this context, the method is validated using a CT phantom. Some of the phantoms' thermodynamic material parameters had to be determined via independent experiments. Moreover, the impact of geometry and material parameter uncertainties on the estimated cooling time is investigated by a sensitivity analysis.

  13. Physics-Based Fragment Acceleration Modeling for Pressurized Tank Burst Risk Assessments

    Science.gov (United States)

    Manning, Ted A.; Lawrence, Scott L.

    2014-01-01

    As part of comprehensive efforts to develop physics-based risk assessment techniques for space systems at NASA, coupled computational fluid and rigid body dynamic simulations were carried out to investigate the flow mechanisms that accelerate tank fragments in bursting pressurized vessels. Simulations of several configurations were compared to analyses based on the industry-standard Baker explosion model, and were used to formulate an improved version of the model. The standard model, which neglects an external fluid, was found to agree best with simulation results only in configurations where the internal-to-external pressure ratio is very high and fragment curvature is small. The improved model introduces terms that accommodate an external fluid and better account for variations based on circumferential fragment count. Physics-based analysis was critical in increasing the model's range of applicability. The improved tank burst model can be used to produce more accurate risk assessments of space vehicle failure modes that involve high-speed debris, such as exploding propellant tanks and bursting rocket engines.

  14. Laos Organization Name Using Cascaded Model Based on SVM and CRF

    Directory of Open Access Journals (Sweden)

    Duan Shaopeng

    2017-01-01

Full Text Available According to the characteristics of Laos organization names, this paper proposes a two-layer model based on conditional random fields (CRF) and support vector machines (SVM) for Laos organization name recognition. The first layer uses CRF to recognize simple organization names, and its result is used to support the decision of the second layer. Based on this driving method, the second layer uses SVM and CRF to recognize complicated organization names. Finally, the results of the two layers are combined, and a subsequent treatment corrects low-confidence recognition results. Open tests on real linguistic data show that this approach based on SVM and CRF is efficient in recognizing organization names, with a recall rate of 80.83% and a precision rate of 82.75%.

  15. Steady Modeling for an Ammonia Synthesis Reactor Based on a Novel CDEAS-LS-SVM Model

    Directory of Open Access Journals (Sweden)

    Zhuoqian Liu

    2014-01-01

Full Text Available A steady-state mathematical model is built in order to represent plant behavior under stationary operating conditions. A novel modeling approach using LS-SVM based on Cultural Differential Evolution with Ant Search (CDEAS) is proposed. LS-SVM is adopted to establish the model of the net value of ammonia. The modeling method has fast convergence speed and good global adaptability for identification of the ammonia synthesis process. The LS-SVM model was established using the above-mentioned method. Simulation results verify the validity of the method.

  16. Electrochemistry-based Battery Modeling for Prognostics

    Science.gov (United States)

    Daigle, Matthew J.; Kulkarni, Chetan Shrikant

    2013-01-01

Batteries are used in a wide variety of applications. In recent years, they have become popular as a source of power for electric vehicles such as cars, unmanned aerial vehicles, and commercial passenger aircraft. In such application domains, it becomes crucial to both monitor battery health and performance and to predict end of discharge (EOD) and end of useful life (EOL) events. To implement such technologies, it is crucial to understand how batteries work and to capture that knowledge in the form of models that can be used by monitoring, diagnosis, and prognosis algorithms. In this work, we develop electrochemistry-based models of lithium-ion batteries that capture the significant electrochemical processes, are computationally efficient, capture the effects of aging, and are of suitable accuracy for reliable EOD prediction in a variety of usage profiles. This paper reports on the progress of such a model, with results demonstrating the model validity and accurate EOD predictions.

  17. Repulsion-based model for contact angle saturation in electrowetting.

    Science.gov (United States)

    Ali, Hassan Abdelmoumen Abdellah; Mohamed, Hany Ahmed; Abdelgawad, Mohamed

    2015-01-01

    We introduce a new model for contact angle saturation phenomenon in electrowetting on dielectric systems. This new model attributes contact angle saturation to repulsion between trapped charges on the cap and base surfaces of the droplet in the vicinity of the three-phase contact line, which prevents these surfaces from converging during contact angle reduction. This repulsion-based saturation is similar to repulsion between charges accumulated on the surfaces of conducting droplets which causes the well known Coulombic fission and Taylor cone formation phenomena. In our model, both the droplet and dielectric coating were treated as lossy dielectric media (i.e., having finite electrical conductivities and permittivities) contrary to the more common assumption of a perfectly conducting droplet and perfectly insulating dielectric. We used theoretical analysis and numerical simulations to find actual charge distribution on droplet surface, calculate repulsion energy, and minimize energy of the total system as a function of droplet contact angle. Resulting saturation curves were in good agreement with previously reported experimental results. We used this proposed model to predict effect of changing liquid properties, such as electrical conductivity, and system parameters, such as thickness of the dielectric layer, on the saturation angle, which also matched experimental results.

  18. The Martian Water Cycle Based on 3-D Modeling

    Science.gov (United States)

    Houben, H.; Haberle, R. M.; Joshi, M. M.

    1999-01-01

    Understanding the distribution of Martian water is a major goal of the Mars Surveyor program. However, until the bulk of the data from the nominal missions of TES, PMIRR, GRS, MVACS, and the DS2 probes are available, we are bound to be in a state where much of our knowledge of the seasonal behavior of water is based on theoretical modeling. We therefore summarize the results of this modeling at the present time. The most complete calculations come from a somewhat simplified treatment of the Martian climate system which is capable of simulating many decades of weather. More elaborate meteorological models are now being applied to study of the problem. The results show a high degree of consistency with observations of aspects of the Martian water cycle made by Viking MAWD, a large number of ground-based measurements of atmospheric column water vapor, studies of Martian frosts, and the widespread occurrence of water ice clouds. Additional information is contained in the original extended abstract.

  19. A Physiologically Based, Multi-Scale Model of Skeletal Muscle Structure and Function

    Science.gov (United States)

    Röhrle, O.; Davidson, J. B.; Pullan, A. J.

    2012-01-01

    Models of skeletal muscle can be classified as phenomenological or biophysical. Phenomenological models predict the muscle’s response to a specified input based on experimental measurements. Prominent phenomenological models are the Hill-type muscle models, which have been incorporated into rigid-body modeling frameworks, and three-dimensional continuum-mechanical models. Biophysically based models attempt to predict the muscle’s response as emerging from the underlying physiology of the system. In this contribution, the conventional biophysically based modeling methodology is extended to include several structural and functional characteristics of skeletal muscle. The result is a physiologically based, multi-scale skeletal muscle finite element model that is capable of representing detailed, geometrical descriptions of skeletal muscle fibers and their grouping. Together with a well-established model of motor-unit recruitment, the electro-physiological behavior of single muscle fibers within motor units is computed and linked to a continuum-mechanical constitutive law. The bridging between the cellular level and the organ level has been achieved via a multi-scale constitutive law and homogenization. The effect of homogenization has been investigated by varying the number of embedded skeletal muscle fibers and/or motor units and computing the resulting exerted muscle forces while applying the same excitatory input. All simulations were conducted using an anatomically realistic finite element model of the tibialis anterior muscle. Given the fact that the underlying electro-physiological cellular muscle model is capable of modeling metabolic fatigue effects such as potassium accumulation in the T-tubular space and inorganic phosphate build-up, the proposed framework provides a novel simulation-based way to investigate muscle behavior ranging from motor-unit recruitment to force generation and fatigue. PMID:22993509

  20. A physiologically based, multi-scale model of skeletal muscle structure and function

    Directory of Open Access Journals (Sweden)

    Oliver eRöhrle

    2012-09-01

    Full Text Available Models of skeletal muscle can be classified as phenomenological or biophysical. Phenomenological models predict the muscle's response to a specified input based on experimental measurements. Prominent phenomenological models are the Hill-type muscle models, which have been incorporated into rigid-body modelling frameworks, and three-dimensional continuum-mechanical models. Biophysically based models attempt to predict the muscle's response as emerging from the underlying physiology of the system. In this contribution, the conventional biophysically based modelling methodology is extended to include several structural and functional characteristics of skeletal muscle. The result is a physiologically based, multi-scale skeletal muscle finite element model that is capable of representing detailed, geometrical descriptions of skeletal muscle fibres and their grouping. Together with a well-established model of motor unit recruitment, the electro-physiological behaviour of single muscle fibres within motor units is computed and linked to a continuum-mechanical constitutive law. The bridging between the cellular level and the organ level has been achieved via a multi-scale constitutive law and homogenisation. The effect of homogenisation has been investigated by varying the number of embedded skeletal muscle fibres and/or motor units and computing the resulting exerted muscle forces while applying the same excitatory input. All simulations were conducted using an anatomically realistic finite element model of the Tibialis Anterior muscle. Given the fact that the underlying electro-physiological cellular muscle model is capable of modelling metabolic fatigue effects such as potassium accumulation in the T-tubular space and inorganic phosphate build-up, the proposed framework provides a novel simulation-based way to investigate muscle behaviour ranging from motor unit recruitment to force generation and fatigue.

  1. From sub-source to source: Interpreting results of biological trace investigations using probabilistic models

    NARCIS (Netherlands)

    Oosterman, W.T.; Kokshoorn, B.; Maaskant-van Wijk, P.A.; de Zoete, J.

    2015-01-01

    The current method of reporting a putative cell type is based on a non-probabilistic assessment of test results by the forensic practitioner. Additionally, the association between donor and cell type in mixed DNA profiles can be exceedingly complex. We present a probabilistic model for

  2. Nitrous Oxide Production in a Granule-based Partial Nitritation Reactor: A Model-based Evaluation.

    Science.gov (United States)

    Peng, Lai; Sun, Jing; Liu, Yiwen; Dai, Xiaohu; Ni, Bing-Jie

    2017-04-03

Sustainable wastewater treatment has been attracting increasing attention over the past decades. However, the production of nitrous oxide (N2O), a potent greenhouse gas, from energy-efficient granule-based autotrophic nitrogen removal is largely unknown. This study applied a previously established N2O model, which incorporated two N2O production pathways by ammonia-oxidizing bacteria (AOB): AOB denitrification and hydroxylamine (NH2OH) oxidation. The two-pathway model was used to describe N2O production from a granule-based partial nitritation (PN) reactor and provide insights into the N2O distribution inside granules. The model was evaluated by comparing simulation results with N2O monitoring profiles as well as isotopic measurement data from the PN reactor. The model demonstrated good predictive ability against N2O dynamics and, for the first time, provided useful information about the shift of N2O production pathways inside granules. The simulation results indicated that increases in oxygen concentration and granule size would significantly enhance N2O production. The results further revealed a linear relationship between N2O production and the ammonia oxidation rate (AOR) (R2 = 0.99) under conditions of varying oxygen levels and granule diameters, suggesting that bulk oxygen and granule size may exert an indirect effect on N2O production by causing a change in AOR.

  3. Smooth Adaptive Internal Model Control Based on U Model for Nonlinear Systems with Dynamic Uncertainties

    Directory of Open Access Journals (Sweden)

    Li Zhao

    2016-01-01

Full Text Available An improved smooth adaptive internal model control method based on the U-model is presented to simplify the modeling structure and parameter identification for a class of uncertain dynamic systems with unknown model parameters and bounded external disturbances. Differing from traditional adaptive methods, the proposed controller can simplify the identification of time-varying parameters in the presence of bounded external disturbances. Combining the small gain theorem and the virtual equivalent system theory, the learning rate of the smooth adaptive internal model controller has been analyzed and the closed-loop virtual equivalent system based on the discrete U-model has been constructed as well. The convergence of this virtual equivalent system is proved, which further shows the convergence of the complex closed-loop discrete U-model system. Finally, simulation and experimental results on a typical nonlinear dynamic system verified the feasibility of the proposed algorithm. The proposed method is shown to have a lighter identification burden and higher control accuracy than the traditional adaptive controller.

  4. Model-based reasoning and the control of process plants

    International Nuclear Information System (INIS)

    Vaelisuo, Heikki

    1993-02-01

In addition to feedback control, safe and economic operation of industrial process plants requires discrete-event type logic control like for example automatic control sequences, interlocks, etc. A lot of complex routine reasoning is involved in the design and verification and validation (V&V) of such automatics. Similar reasoning tasks are encountered during plant operation in action planning and fault diagnosis. The low-level part of the required problem solving is so straightforward that it could be accomplished by a computer if only there were plant models which allow versatile mechanised reasoning. Such plant models and corresponding inference algorithms are the main subject of this report. Deep knowledge and qualitative modelling play an essential role in this work. Deep knowledge refers to mechanised reasoning based on the first principles of the phenomena in the problem domain. Qualitative modelling refers to knowledge representation formalism and related reasoning methods which allow solving problems on an abstraction level higher than for example traditional simulation and optimisation. Prolog is a commonly used platform for artificial intelligence (AI) applications. Constraint logic languages like CLP(R) and Prolog-III extend the scope of logic programming to numeric problem solving. In addition they allow a programming style which often reduces the computational complexity significantly. An approach to model-based reasoning implemented in constraint logic programming language CLP(R) is presented. The approach is based on some of the principles of QSIM, an algorithm for qualitative simulation. It is discussed how model-based reasoning can be applied in the design and V&V of plant automatics and in action planning during plant operation. A prototype tool called ISIR is discussed and some initial results obtained during the development of the tool are presented. The results presented originate from preliminary test results of the prototype obtained

  5. An Individual-based Probabilistic Model for Fish Stock Simulation

    Directory of Open Access Journals (Sweden)

    Federico Buti

    2010-08-01

Full Text Available We define an individual-based probabilistic model of sole (Solea solea) behaviour. The individual model is given in terms of an Extended Probabilistic Discrete Timed Automaton (EPDTA), a new formalism that is introduced in the paper and that is shown to be interpretable as a Markov decision process. A given EPDTA model can be probabilistically model-checked by giving a suitable translation into syntax accepted by existing model-checkers. In order to simulate the dynamics of a given population of soles in different environmental scenarios, an agent-based simulation environment is defined in which each agent implements the behaviour of the given EPDTA model. By varying the probabilities and the characteristic functions embedded in the EPDTA model it is possible to represent different scenarios and to tune the model itself by comparing the results of the simulations with real data about the sole stock in the North Adriatic Sea, available from the recent project SoleMon. The simulator is presented and made available for its adaptation to other species.

  6. Model based risk assessment - the CORAS framework

    Energy Technology Data Exchange (ETDEWEB)

    Gran, Bjoern Axel; Fredriksen, Rune; Thunem, Atoosa P-J.

    2004-04-15

Traditional risk analysis and assessment is based on failure-oriented models of the system. In contrast to this, model-based risk assessment (MBRA) utilizes success-oriented models describing all intended system aspects, including functional, operational and organizational aspects of the target. The target models are then used as input sources for complementary risk analysis and assessment techniques, as well as a basis for the documentation of the assessment results. The EU-funded CORAS project developed a tool-supported methodology for the application of MBRA in security-critical systems. The methodology has been tested with successful outcome through a series of seven trials within the telemedicine and e-commerce areas. The CORAS project in general and the CORAS application of MBRA in particular have contributed positively to the visibility of model-based risk assessment and thus to the disclosure of several potentials for further exploitation of various aspects within this important research field. In that connection, the CORAS methodology's possibilities for further improvement towards utilization in more complex architectures and also in other application domains such as the nuclear field can be addressed. The latter calls for adapting the framework to address nuclear standards such as IEC 60880 and IEC 61513. For this development we recommend applying a trial driven approach within the nuclear field. The tool supported approach for combining risk analysis and system development also fits well with the HRP proposal for developing an Integrated Design Environment (IDE) providing efficient methods and tools to support control room systems design. (Author)

  7. Probabilistic reasoning for assembly-based 3D modeling

    KAUST Repository

    Chaudhuri, Siddhartha; Kalogerakis, Evangelos; Guibas, Leonidas; Koltun, Vladlen

    2011-01-01

    Assembly-based modeling is a promising approach to broadening the accessibility of 3D modeling. In assembly-based modeling, new models are assembled from shape components extracted from a database. A key challenge in assembly-based modeling

  8. Does Accrual Management Impair the Performance of Earnings-Based Valuation Models?

    OpenAIRE

    Lucie Courteau; Jennifer L. Kao; Yao Tian

    2013-01-01

    This study examines empirically how the presence of accrual management may affect firm valuation. We compare the performance of earnings-based and non-earnings-based valuation models, represented by Residual Income Model (RIM) and Discounted Cash Flow (DCF), respectively, based on the absolute percentage pricing and valuation errors for two subsets of US firms: “Suspect” firms that are likely to have engaged in accrual management and “Normal” firms matched on industry, year and size. Results ...

  9. Model Based Temporal Reasoning

    Science.gov (United States)

    Rabin, Marla J.; Spinrad, Paul R.; Fall, Thomas C.

    1988-03-01

Systems that assess the real world must cope with evidence that is uncertain, ambiguous, and spread over time. Typically, the most important function of an assessment system is to identify when activities are occurring that are unusual or unanticipated. Model based temporal reasoning addresses both of these requirements. The differences among temporal reasoning schemes lie in the methods used to avoid computational intractability. If we had n pieces of data and we wanted to examine how they were related, the worst case would be where we had to examine every subset of these points to see if that subset satisfied the relations. This would be 2^n subsets, which is intractable. Models compress this; if several data points are all compatible with a model, then that model represents all those data points. Data points are then considered related if they lie within the same model or if they lie in models that are related. Models thus address the intractability problem. They also address the problem of determining unusual activities: if the data do not agree with models that are indicated by earlier data, then something out of the norm is taking place. The models can summarize what we know up to that time, so when they are not predicting correctly, either something unusual is happening or we need to revise our models. The model based reasoner developed at Advanced Decision Systems is thus both intuitive and powerful. It is currently being used on one operational system and several prototype systems. It has enough power to be used in domains spanning the spectrum from manufacturing engineering and project management to low-intensity conflict and strategic assessment.

  10. Modeling river total bed material load discharge using artificial intelligence approaches (based on conceptual inputs)

    Science.gov (United States)

    Roushangar, Kiyoumars; Mehrabani, Fatemeh Vojoudi; Shiri, Jalal

    2014-06-01

This study presents Artificial Intelligence (AI)-based modeling of total bed material load aimed at improving the accuracy of the predictions of traditional models. Gene expression programming (GEP) and adaptive neuro-fuzzy inference system (ANFIS)-based models were developed and validated for the estimations. Sediment data from the Qotur River (northwestern Iran) were used for developing and validating the applied techniques. In order to assess the applied techniques against traditional models, stream power-based and shear stress-based physical models were also applied to the studied case. The obtained results reveal that the developed AI-based models, using a minimum number of dominant factors, give more accurate results than the other applied models. Nonetheless, it was revealed that the k-fold test is a practical but computationally costly technique for completely scanning the applied data and avoiding over-fitting.
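
    The closing remark refers to k-fold testing. As a small illustration (with made-up predictor and target arrays, and plain linear regression standing in for the GEP/ANFIS models), the sketch below shows how each record is held out for validation exactly once, which "scans" the whole data set and guards against over-fitting.

      # k-fold cross-validation sketch with synthetic data; the regressor is a
      # stand-in for the study's GEP/ANFIS models.
      import numpy as np
      from sklearn.model_selection import KFold
      from sklearn.linear_model import LinearRegression
      from sklearn.metrics import mean_squared_error

      rng = np.random.default_rng(0)
      X = rng.random((120, 3))                                             # stand-in hydraulic predictors
      y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(120)  # stand-in sediment load

      errors = []
      for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
          model = LinearRegression().fit(X[train_idx], y[train_idx])
          errors.append(mean_squared_error(y[test_idx], model.predict(X[test_idx])))
      print("fold MSEs:", np.round(errors, 4))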

  11. Model-based safety architecture framework for complex systems

    NARCIS (Netherlands)

    Schuitemaker, Katja; Rajabali Nejad, Mohammadreza; Braakhuis, J.G.; Podofillini, Luca; Sudret, Bruno; Stojadinovic, Bozidar; Zio, Enrico; Kröger, Wolfgang

    2015-01-01

    The shift to transparency and rising need of the general public for safety, together with the increasing complexity and interdisciplinarity of modern safety-critical Systems of Systems (SoS) have resulted in a Model-Based Safety Architecture Framework (MBSAF) for capturing and sharing architectural

  12. Employing Model-Based Reasoning in Interdisciplinary Research Teams: Evidence-Based Practices for Integrating Knowledge Across Systems

    Science.gov (United States)

    Pennington, D. D.; Vincent, S.

    2017-12-01

The NSF-funded project "Employing Model-Based Reasoning in Socio-Environmental Synthesis (EMBeRS)" has developed a generic model for exchanging knowledge across disciplines that is based on findings from the cognitive, learning, social, and organizational sciences addressing teamwork in complex problem solving situations. Two ten-day summer workshops for PhD students from large, NSF-funded interdisciplinary projects working on a variety of water issues were conducted in 2016 and 2017, testing the model by collecting a variety of data, including surveys, interviews, audio/video recordings, material artifacts and documents, and photographs. This presentation will introduce the EMBeRS model, the design of workshop activities based on the model, and results from surveys and interviews with the participating students. Findings suggest that this approach is very effective for developing a shared, integrated research vision across disciplines, compared with activities typically provided by most large research projects, and that students believe the skills developed in the EMBeRS workshops are unique and highly desirable.

  13. Agent-Based Computational Modeling of Cell Culture ...

    Science.gov (United States)

    Quantitative characterization of cellular dose in vitro is needed for alignment of doses in vitro and in vivo. We used the agent-based software, CompuCell3D (CC3D), to provide a stochastic description of cell growth in culture. The model was configured so that isolated cells assumed a “fried egg shape” but became increasingly cuboidal with increasing confluency. The surface area presented by each cell to the overlying medium varies from cell-to-cell and is a determinant of diffusional flux of toxicant from the medium into the cell. Thus, dose varies among cells for a given concentration of toxicant in the medium. Computer code describing diffusion of H2O2 from medium into each cell and clearance of H2O2 was calibrated against H2O2 time-course data (25, 50, or 75 uM H2O2 for 60 min) obtained with the Amplex Red assay for the medium and the H2O2-sensitive fluorescent reporter, HyPer, for cytosol. Cellular H2O2 concentrations peaked at about 5 min and were near baseline by 10 min. The model predicted a skewed distribution of surface areas, with between cell variation usually 2 fold or less. Predicted variability in cellular dose was in rough agreement with the variation in the HyPer data. These results are preliminary, as the model was not calibrated to the morphology of a specific cell type. Future work will involve morphology model calibration against human bronchial epithelial (BEAS-2B) cells. Our results show, however, the potential of agent-based modeling
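
    The record describes per-cell dosimetry in which the influx of H2O2 scales with the surface area a cell presents to the medium and intracellular H2O2 is removed by first-order clearance, so dose differs between cells. A toy ordinary-differential-equation sketch of that idea follows; the rate constants, areas and volume are illustrative values, not the study's calibrated parameters, and depletion of the medium is ignored.

      # Toy per-cell H2O2 dosimetry: influx proportional to exposed surface area,
      # first-order intracellular clearance. All parameter values are illustrative.
      import numpy as np
      from scipy.integrate import solve_ivp

      k_in = 1e-3      # membrane transfer coefficient (illustrative)
      k_clear = 0.5    # first-order intracellular clearance, 1/s (illustrative)
      C_medium = 25.0  # medium H2O2 concentration, uM (assumed constant here)

      def cell_h2o2(t, c, area, volume):
          # dc/dt = influx per unit volume - first-order clearance
          return k_in * area / volume * (C_medium - c) - k_clear * c

      rng = np.random.default_rng(3)
      areas = rng.normal(600.0, 150.0, 5).clip(200.0)   # exposed surface areas, um^2
      volume = 2000.0                                   # cell volume, um^3

      for area in areas:
          sol = solve_ivp(cell_h2o2, (0.0, 600.0), [0.0], args=(area, volume),
                          t_eval=[60, 300, 600])
          print(f"area {area:6.1f} um^2 -> cytosolic H2O2 at 1/5/10 min: {sol.y[0]}")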

  14. ATOP - The Advanced Taiwan Ocean Prediction System Based on the mpiPOM. Part 1: Model Descriptions, Analyses and Results

    Directory of Open Access Journals (Sweden)

    Leo Oey

    2013-01-01

Full Text Available A data-assimilated Taiwan Ocean Prediction (ATOP) system is being developed at the National Central University, Taiwan. The model simulates sea-surface height, three-dimensional currents, temperature and salinity, and turbulent mixing. The model has options for tracer and particle-tracking algorithms, as well as for wave-induced Stokes drift and wave-enhanced mixing and bottom drag. Two different forecast domains have been tested: a large-grid domain that encompasses the entire North Pacific Ocean at 0.1° × 0.1° horizontal resolution and 41 vertical sigma levels, and a smaller western North Pacific domain which at present also has the same horizontal resolution. In both domains, 25-year spin-up runs from 1988 - 2011 were first conducted, forced by six-hourly Cross-Calibrated Multi-Platform (CCMP) and NCEP reanalysis Global Forecast System (GFS) winds. The results are then used as initial conditions to conduct ocean analyses from January 2012 through February 2012, when updated hindcasts and real-time forecasts begin using the GFS winds. This paper describes the ATOP system and compares the forecast results against satellite altimetry data for assessing model skills. The model results are also shown to compare well with observations of (i) the Kuroshio intrusion in the northern South China Sea, and (ii) the subtropical counter current. A review and comparison with other models in the literature for (i) are also given.

  15. Agent Based Modeling on Organizational Dynamics of Terrorist Network

    Directory of Open Access Journals (Sweden)

    Bo Li

    2015-01-01

    Full Text Available Modeling organizational dynamics of terrorist network is a critical issue in computational analysis of terrorism research. The first step for effective counterterrorism and strategic intervention is to investigate how the terrorists operate with the relational network and what affects the performance. In this paper, we investigate the organizational dynamics by employing a computational experimentation methodology. The hierarchical cellular network model and the organizational dynamics model are developed for modeling the hybrid relational structure and complex operational processes, respectively. To intuitively elucidate this method, the agent based modeling is used to simulate the terrorist network and test the performance in diverse scenarios. Based on the experimental results, we show how the changes of operational environments affect the development of terrorist organization in terms of its recovery and capacity to perform future tasks. The potential strategies are also discussed, which can be used to restrain the activities of terrorists.

  16. Results-based Rewards - Leveraging Wage Increases?

    DEFF Research Database (Denmark)

    Bregn, Kirsten

    2005-01-01

A good seven years ago, as a part of a large-scale pay reform, the Danish public sector introduced results-based rewards (RBR), i.e. a pay component awarded for achieving or exceeding targets set in advance. RBR represent a possibility for combining wage-earners' interests in higher wages with a g...... limited use of RBR, illustrated with examples. The Danish experiences should give food for thought, given that pay systems used by the public sector are currently under transformation in practically all OECD countries....

  17. Structure and sensitivity analysis of individual-based predator–prey models

    International Nuclear Information System (INIS)

    Imron, Muhammad Ali; Gergs, Andre; Berger, Uta

    2012-01-01

The high computational cost of sensitivity analyses has hampered the use of these techniques for analysing individual-based models in ecology. A screening method with relatively low computational cost, the Morris method, was chosen to assess the relative effects of all parameters on the models' outputs and to gain insights into predator–prey systems. The structure and results of the sensitivity analysis of the Sumatran tiger model – the Panthera Population Persistence (PPP) – and of the Notonecta foraging model (NFM) were compared. Both models are based on a general predation cycle and designed to understand the mechanisms behind the predator–prey interaction being considered. However, the models differ significantly in their complexity and the details of the processes involved. In the sensitivity analysis, parameters that directly contribute to the number of prey items killed were found to be most influential. These were the growth rate of prey and the hunting radius of tigers in the PPP model as well as attack rate parameters and encounter distance of backswimmers in the NFM model. Analysis of distances in both of the models revealed further similarities in the sensitivity of the two individual-based models. The findings highlight the applicability and importance of sensitivity analyses in general, and screening design methods in particular, during early development of ecological individual-based models. Comparison of model structures and sensitivity analyses provides a first step for the derivation of general rules in the design of predator–prey models for both practical conservation and conceptual understanding. - Highlights: ► Structure of predation processes is similar in tiger and backswimmer model. ► The two individual-based models (IBM) differ in space formulations. ► In both models foraging distance is among the sensitive parameters. ► Morris method is applicable for the sensitivity analysis even of complex IBMs.
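
    The Morris screening method mentioned above is widely implemented; the sketch below uses the SALib package (an assumption, not necessarily the authors' implementation) on a toy two-parameter stand-in for a predation model and reports the mu* elementary-effects measure per parameter.

      # Morris elementary-effects screening with SALib on a toy "prey killed"
      # function; parameter names, bounds and the function are placeholders.
      import numpy as np
      from SALib.sample import morris as morris_sample
      from SALib.analyze import morris as morris_analyze

      problem = {
          "num_vars": 2,
          "names": ["prey_growth_rate", "hunting_radius"],
          "bounds": [[0.1, 1.0], [0.5, 5.0]],
      }

      X = morris_sample.sample(problem, N=100, num_levels=4)

      def toy_kills(params):
          growth, radius = params
          return growth * radius ** 2 / (1.0 + radius)   # stand-in for "prey killed"

      Y = np.array([toy_kills(x) for x in X])
      res = morris_analyze.analyze(problem, X, Y, num_levels=4)
      print(dict(zip(problem["names"], np.round(res["mu_star"], 3))))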

  18. Computational neural network regression model for Host based Intrusion Detection System

    Directory of Open Access Journals (Sweden)

    Sunil Kumar Gautam

    2016-09-01

Full Text Available The current scenario of information gathering and storing in secure systems is a challenging task due to increasing cyber-attacks. There exist computational neural network techniques designed for intrusion detection systems, which provide security to a single machine and to the machines of an entire network. In this paper, we have used two types of computational neural network models, namely, the Generalized Regression Neural Network (GRNN) model and the Multilayer Perceptron Neural Network (MPNN) model, for a Host based Intrusion Detection System using log files that are generated by a single personal computer. The simulation results show the correctly classified percentage of the normal and abnormal (intrusion) classes using a confusion matrix. On the basis of the results and discussion, we found that the Host based Intrusion Systems Model (HISM) significantly improved the detection accuracy while retaining a minimum false alarm rate.
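
    As a hedged illustration of the multilayer-perceptron side of the comparison (the GRNN is omitted), the sketch below trains a small MLP on synthetic log-derived feature vectors and reports a confusion matrix, as described in the abstract; the features, labels and network size are made up.

      # MLP-based intrusion classification sketch with synthetic features and a
      # confusion matrix for normal (0) vs. intrusion (1) classes.
      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import confusion_matrix

      rng = np.random.default_rng(42)
      normal = rng.normal(0.0, 1.0, (300, 6))        # feature vectors from normal log entries
      intrusion = rng.normal(2.5, 1.0, (60, 6))      # feature vectors from intrusion entries
      X = np.vstack([normal, intrusion])
      y = np.array([0] * 300 + [1] * 60)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
      clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0).fit(X_tr, y_tr)
      print(confusion_matrix(y_te, clf.predict(X_te)))   # rows: true class, cols: predicted class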

  19. An interactive web-based extranet system model for managing ...

    African Journals Online (AJOL)

    ... objectives for students, lecturers and parents to access and compute results ... The database will serve as repository of students' academic records over a ... Keywords: Extranet-Model, Interactive, Web-Based, Students, Academic, Records ...

  20. Grid-based modeling for land use planning and environmental resource mapping.

    Energy Technology Data Exchange (ETDEWEB)

    Kuiper, J. A.

    1999-08-04

    Geographic Information System (GIS) technology is used by land managers and natural resource planners for examining resource distribution and conducting project planning, often by visually interpreting spatial data representing environmental or regulatory variables. Frequently, many variables influence the decision-making process, and modeling can improve results with even a small investment of time and effort. Presented are several grid-based GIS modeling projects, including: (1) land use optimization under environmental and regulatory constraints; (2) identification of suitable wetland mitigation sites; and (3) predictive mapping of prehistoric cultural resource sites. As different as the applications are, each follows a similar process of problem conceptualization, implementation of a practical grid-based GIS model, and evaluation of results.
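
    As a schematic illustration of the grid-based modeling described above, the following sketch scores raster cells by a weighted overlay of environmental layers and masks out constrained cells; the layers, weights and exclusion rule are synthetic placeholders rather than the projects' data.

      # Grid-based suitability overlay: weighted sum of normalized raster layers,
      # with regulatory exclusions masked out. All layers are synthetic.
      import numpy as np

      rng = np.random.default_rng(7)
      shape = (50, 50)
      slope = rng.random(shape)             # normalized slope layer (0 = flat)
      wetness = rng.random(shape)           # normalized wetness index
      dist_to_stream = rng.random(shape)    # normalized distance to nearest stream
      protected = rng.random(shape) < 0.1   # regulatory exclusion mask

      # Weighted-sum suitability: flatter, wetter cells close to streams score higher.
      suitability = 0.4 * (1 - slope) + 0.4 * wetness + 0.2 * (1 - dist_to_stream)
      suitability[protected] = np.nan       # constrained cells are removed outright

      best = np.unravel_index(np.nanargmax(suitability), shape)
      print("best candidate cell (row, col):", best)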

  1. e-Government Maturity Model Based on Systematic Review and Meta-Ethnography Approach

    Directory of Open Access Journals (Sweden)

    Darmawan Napitupulu

    2016-11-01

Full Text Available Maturity models based on e-Government portals have been developed by a number of researchers, both individually and institutionally, but they remain scattered across various journal and conference articles and can be said to have different focuses, both in terms of stages and features. The aim of this research is to integrate a number of existing maturity models in order to build a generic maturity model based on e-Government portals. The method used in this study is Systematic Review with a meta-ethnography qualitative approach. Meta-ethnography, which is part of the Systematic Review method, is a technique for performing data integration to obtain theories and concepts with a new level of understanding that is deeper and more thorough. The result obtained is a maturity model based on e-Government portals that consists of 7 (seven) stages, namely web presence, interaction, transaction, vertical integration, horizontal integration, full integration, and open participation. These seven stages are synthesized from the 111 key concepts related to 25 studies of e-Government portal maturity models. The resulting maturity model is more comprehensive and generic because it is an integration of the models (best practices) that exist today.

  2. Econophysics of agent-based models

    CERN Document Server

    Aoyama, Hideaki; Chakrabarti, Bikas; Chakraborti, Anirban; Ghosh, Asim

    2014-01-01

    The primary goal of this book is to present the research findings and conclusions of physicists, economists, mathematicians and financial engineers working in the field of "Econophysics" who have undertaken agent-based modelling, comparison with empirical studies and related investigations. Most standard economic models assume the existence of the representative agent, who is “perfectly rational” and applies the utility maximization principle when taking action. One reason for this is the desire to keep models mathematically tractable: no tools are available to economists for solving non-linear models of heterogeneous adaptive agents without explicit optimization. In contrast, multi-agent models, which originated from statistical physics considerations, allow us to go beyond the prototype theories of traditional economics involving the representative agent. This book is based on the Econophys-Kolkata VII Workshop, at which many such modelling efforts were presented. In the book, leading researchers in the...

  3. Comparison of a Conceptual Groundwater Model and a Physically Based Groundwater Model

    Science.gov (United States)

    Yang, J.; Zammit, C.; Griffiths, J.; Moore, C.; Woods, R. A.

    2017-12-01

Groundwater is a vital resource for human activities including agricultural practice and urban water demand. Hydrologic modelling is an important way to study groundwater recharge, movement and discharge, and its response to both human activity and climate change. To understand groundwater hydrologic processes nationally in New Zealand, we have developed a conceptually based groundwater flow model, which is fully integrated into a national surface-water model (TopNet) and able to simulate groundwater recharge, movement, and interaction with surface water. To demonstrate the capability of this groundwater model (TopNet-GW), we applied the model to an irrigated area with water shortage and pollution problems in the upper Ruamahanga catchment in the Great Wellington Region, New Zealand, and compared its performance with that of a physically-based groundwater model (MODFLOW). The comparison includes river flow at flow gauging sites, and interaction between groundwater and river. Results showed that the TopNet-GW produced similar flow and groundwater interaction patterns to the MODFLOW model, but took less computation time. This shows the conceptually-based groundwater model has the potential to simulate national groundwater processes, and could be used as a surrogate for the more physically based model.

  4. New spatial clustering-based models for optimal urban facility location considering geographical obstacles

    Science.gov (United States)

    Javadi, Maryam; Shahrabi, Jamal

    2014-03-01

The problems of facility location and the allocation of demand points to facilities are crucial research issues in spatial data analysis and urban planning. It is very important for an organization or government to best locate its resources and facilities and efficiently manage resources to ensure that all demand points are covered and all the needs are met. Most of the recent studies, which focused on solving facility location problems by performing spatial clustering, have used the Euclidean distance between two points as the dissimilarity function. Natural obstacles, such as mountains and rivers, can have drastic impacts on the distance that needs to be traveled between two geographical locations. While calculating the distance between various supply chain entities (including facilities and demand points), it is necessary to take such obstacles into account to obtain better and more realistic results regarding location-allocation. In this article, new models are presented for locating urban facilities while taking geographical obstacles into account. In these models, three new distance functions were proposed. The first function, called the SPD function, was based on shortest-path analysis in a linear network. The other two functions, namely PD and P2D, were based on algorithms that deal with robot geometry and route-based robot navigation in the presence of obstacles. The models were implemented in ArcGIS Desktop 9.2 software using the Visual Basic programming language. These models were evaluated using synthetic and real data sets. The overall performance was evaluated based on the sum of distances from demand points to their corresponding facilities. Because the distances between demand points and facilities are more realistic under the proposed functions, the results indicate the desired quality of the proposed models in terms of allocating points to centers and of logistic cost. Obtained results show promising
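
    The SPD dissimilarity is based on shortest paths rather than straight-line distance. The sketch below conveys the idea on a toy networkx graph in which an obstacle forces a detour; it is not the article's ArcGIS/Visual Basic implementation, and the node names and edge costs are invented.

      # Allocation of demand points to facilities by shortest-path (network)
      # distance instead of Euclidean distance; toy graph with an obstacle detour.
      import networkx as nx

      G = nx.Graph()
      edges = [  # (node, node, travel cost); the "river" forces a detour via node D
          ("demand1", "A", 1.0), ("A", "facility1", 1.0),
          ("demand1", "D", 3.0), ("D", "facility2", 1.0),
          ("demand2", "D", 1.0), ("demand2", "A", 4.0),
      ]
      G.add_weighted_edges_from(edges)

      facilities = ["facility1", "facility2"]
      for demand in ["demand1", "demand2"]:
          dists = {f: nx.shortest_path_length(G, demand, f, weight="weight") for f in facilities}
          assigned = min(dists, key=dists.get)
          print(demand, "->", assigned, dists)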

  5. A model-based framework for design of intensified enzyme-based processes

    DEFF Research Database (Denmark)

    Román-Martinez, Alicia

    This thesis presents a generic and systematic model-based framework to design intensified enzyme-based processes. The development of the presented methodology was motivated by the needs of the bio-based industry for a more systematic approach to achieve intensification in its production plants...... in enzyme-based processes which have found significant application in the pharmaceutical, food, and renewable fuels sector. The framework uses model-based strategies for (bio)-chemical process design and optimization, including the use of a superstructure to generate all potential reaction......(s)-separation(s) options according to a desired performance criteria and a generic mathematical model represented by the superstructure to derive the specific models corresponding to a specific process option. In principle, three methods of intensification of bioprocess are considered in this thesis: 1. enzymatic one...

  6. Urban flood simulation based on the SWMM model

    Directory of Open Access Journals (Sweden)

    L. Jiang

    2015-05-01

Full Text Available China is the nation with the fastest urbanization of the past decades, which has caused serious urban flooding. Flood forecasting is regarded as one of the important flood mitigation methods; it is widely used in catchment flood mitigation but not yet widely used for urban flooding. This paper, employing the SWMM model, one of the widely used urban flood planning and management models, simulates the urban flooding of Dongguan City in rapidly urbanized southern China. SWMM is first set up based on the DEM, the digital map and the underground pipeline network; parameters are then derived from the properties of the subcatchments and the storm sewer conduits, and a parameter sensitivity analysis shows the parameter robustness. The simulated results show that with the 1-year return period precipitation, the studied area will have no flooding, but for the 2-, 5-, 10- and 20-year return period precipitation, the studied area will be inundated. The results show the SWMM model is promising for urban flood forecasting, but as it has no surface runoff routing, the urban flooding could not be forecast precisely.
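
    For readers who want to drive a SWMM model programmatically, the sketch below uses the pyswmm wrapper to step through a simulation and track the peak flooding rate at one junction; the input file name and node ID are hypothetical placeholders, and the original study may have run SWMM directly rather than through Python.

      # Stepping through a SWMM run with pyswmm; 'dongguan.inp' and 'J_example'
      # are hypothetical names, not the study's actual model files.
      from pyswmm import Simulation, Nodes

      peak_flood_rate = 0.0
      with Simulation("dongguan.inp") as sim:
          junction = Nodes(sim)["J_example"]     # a junction/manhole of interest
          for _ in sim:                          # advance the simulation step by step
              peak_flood_rate = max(peak_flood_rate, junction.flooding)
      print("peak flooding rate at J_example:", peak_flood_rate)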

  7. Hot News Recommendation System from Heterogeneous Websites Based on Bayesian Model

    Directory of Open Access Journals (Sweden)

    Zhengyou Xia

    2014-01-01

Full Text Available Most current news recommendation approaches are suitable for news that comes from a single news website, not for news from different heterogeneous news websites. Previous news recommender systems based on different strategies have been proposed to provide news personalization services for online news readers. However, little research work has been reported on utilizing hundreds of heterogeneous news websites to provide top hot news services for group customers (e.g., government staffs). In this paper, we propose a hot news recommendation model based on a Bayesian model, which draws on hundreds of different news websites. In the model, we determine whether the news is hot news by calculating the joint probability of the news. We evaluate and compare our proposed recommendation model with the results of human experts on real data sets. Experimental results demonstrate the reliability and effectiveness of our method. We also implemented this model in the hot news recommendation system of the Hangzhou city government in 2013, which achieved very good results.
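
    The record states that hotness is decided by calculating the joint probability of the news, without giving the exact formulation. Purely as an illustration of that style of reasoning, the sketch below trains a naive Bayes classifier on made-up headlines and scores new ones; it should not be read as the authors' model, which aggregates evidence across many websites.

      # Naive Bayes stand-in for a "joint probability of the news" score;
      # headlines and labels are made up.
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.naive_bayes import MultinomialNB

      train_headlines = [
          "city launches new metro line", "mayor opens flood relief center",
          "local bakery wins award", "library extends weekend hours",
      ]
      train_labels = [1, 1, 0, 0]          # 1 = hot, 0 = not hot (toy labels)

      vec = CountVectorizer()
      X = vec.fit_transform(train_headlines)
      clf = MultinomialNB().fit(X, train_labels)

      candidates = ["city metro flood update", "bakery weekend hours"]
      probs = clf.predict_proba(vec.transform(candidates))[:, 1]
      for headline, p in zip(candidates, probs):
          print(f"{headline!r}: hot-news probability {p:.2f}")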

  8. A model-based framework for incremental scale-up of wastewater treatment processes

    DEFF Research Database (Denmark)

    Mauricio Iglesias, Miguel; Sin, Gürkan

Scale-up is traditionally done following specific ratios or rules of thumb which do not lead to optimal results. We present a generic framework to assist in the scale-up of wastewater treatment processes based on multiscale modelling, multiobjective optimisation and a validation of the model at the new...... large scale. The framework is illustrated by the scale-up of a complete autotrophic nitrogen removal process. The model-based multiobjective scale-up offers a promising improvement compared to rule-of-thumb based empirical scale-up rules

  9. Integrating Biodiversity into Biosphere-Atmosphere Interactions Using Individual-Based Models (IBM)

    Science.gov (United States)

    Wang, B.; Shugart, H. H., Jr.; Lerdau, M.

    2017-12-01

A key component regulating complex, nonlinear, and dynamic biosphere-atmosphere interactions is the inherent diversity of biological systems. The model frameworks currently in wide use (i.e., Plant Functional Type models) do not even begin to capture the metabolic and taxonomic diversity found in many terrestrial systems. We propose that a transition from PFT-based to individual-based modeling approaches (hereafter referred to as IBM) is essential for integrating biodiversity into research on biosphere-atmosphere interactions. The proposal emerges from our studying the interactions of forests with atmospheric processes in the context of climate change using an individual-based forest volatile organic compounds model, UVAFME-VOC. This individual-based model can explicitly simulate VOC emissions based on an explicit modelling of forest dynamics by computing the growth, death, and regeneration of each individual tree of different species and their competition for light, moisture, and nutrients, from which system-level VOC emissions are simulated by explicitly computing and summing up each individual's emissions. We found that elevated O3 significantly altered the forest dynamics by favoring species that are O3-resistant, which, meanwhile, are producers of isoprene. Such compositional changes, on the one hand, resulted in unsuppressed forest productivity and carbon stock because of the compensation by O3-resistant species. On the other hand, with more isoprene produced by the increased number of producers, a possible positive feedback loop between tropospheric O3 and the forest emerged. We also found that climate warming will not always stimulate isoprene emissions because warming simultaneously reduces isoprene emissions by causing a decline in the abundance of isoprene-emitting species. These results suggest that species diversity is of great significance and that individual-based modelling strategies should be applied in studying biosphere-atmosphere interactions.

  10. Model validation and calibration based on component functions of model output

    International Nuclear Information System (INIS)

    Wu, Danqing; Lu, Zhenzhou; Wang, Yanping; Cheng, Lei

    2015-01-01

    The target of this work is to validate the component functions of model output between physical observation and computational model with the area metric. Based on the theory of high dimensional model representations (HDMR) of independent input variables, conditional expectations are component functions of model output, and these conditional expectations reflect partial information of the model output. Therefore, the model validation of conditional expectations tells the discrepancy between the partial information of the computational model output and that of the observations. Then a calibration of the conditional expectations is carried out to reduce the value of the model validation metric. After that, the model validation metric of the model output is recalculated with the calibrated model parameters, and the result shows that reducing the discrepancy in the conditional expectations helps decrease the difference in model output. At last, several examples are employed to demonstrate the rationality and necessity of the methodology in the cases of both a single validation site and multiple validation sites. - Highlights: • A validation metric of conditional expectations of model output is proposed. • HDMR explains the relationship of conditional expectations and model output. • An improved approach of parameter calibration updates the computational models. • Validation and calibration are applied at a single site and at multiple sites. • The validation and calibration process shows a superiority over existing methods
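    As an illustration of the idea, the sketch below (synthetic data; the binning estimator, function names and area-metric formulation are our own simplifications, not the paper's implementation) estimates a first-order component function, the conditional expectation E[Y|X], for both "observed" and "computed" outputs and compares them with an area metric between their empirical CDFs:

```python
import numpy as np

def conditional_expectation(x, y, bins=20):
    """First-order HDMR-style component: E[Y | X] estimated by binning x."""
    edges = np.linspace(x.min(), x.max(), bins + 1)
    idx = np.clip(np.digitize(x, edges) - 1, 0, bins - 1)
    return np.array([y[idx == b].mean() if np.any(idx == b) else np.nan
                     for b in range(bins)])

def area_metric(sample_a, sample_b):
    """Area between the empirical CDFs of two samples (a common validation metric)."""
    grid = np.sort(np.concatenate([sample_a, sample_b]))
    cdf = lambda s: np.searchsorted(np.sort(s), grid, side="right") / len(s)
    return np.trapz(np.abs(cdf(sample_a) - cdf(sample_b)), grid)

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 2000)                                          # shared input samples
y_obs = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, x.size)           # "physical observations"
y_mod = np.sin(2 * np.pi * x) + 0.15 + rng.normal(0, 0.1, x.size)    # biased "computational model"

ce_obs = conditional_expectation(x, y_obs)
ce_mod = conditional_expectation(x, y_mod)
keep = ~np.isnan(ce_obs) & ~np.isnan(ce_mod)
print("area metric on E[Y|X]:", round(float(area_metric(ce_obs[keep], ce_mod[keep])), 4))
```

    Calibration would then adjust the model parameters (here, the bias term) until this metric on the component functions, and subsequently on the model output, decreases.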

  11. Closed Loop Brain Model of Neocortical Information Based Exchange

    Directory of Open Access Journals (Sweden)

    James eKozloski

    2016-01-01

    Full Text Available Here we describe an 'information based exchange' model of brain function that ascribes to neocortex, basal ganglia, and thalamus distinct network functions. The model allows us to analyze whole brain system set point measures, such as the rate and heterogeneity of transitions in striatum and neocortex, in the context of neuromodulation and other perturbations. Our closed-loop model is grounded in neuroanatomical observations, proposing a novel Grand Loop through neocortex, and invokes different forms of plasticity at specific tissue interfaces and their principal cell synapses to achieve these transitions. By implementing a system for maximum information based exchange of action potentials between modeled neocortical areas, we observe changes to these measures in simulation. We hypothesize that similar dynamic set points and modulations exist in the brain's resting state activity, and that different modifications to information based exchange may shift the risk profile of different component tissues, resulting in different neurodegenerative diseases. This model is targeted for further development using IBM's Neural Tissue Simulator, which allows scalable elaboration of networks, tissues, and their neural and synaptic components towards ever greater complexity and biological realism.

  12. Using Model Replication to Improve the Reliability of Agent-Based Models

    Science.gov (United States)

    Zhong, Wei; Kim, Yushim

    The basic presupposition of model replication activities for a computational model such as an agent-based model (ABM) is that, as a robust and reliable tool, it must be replicable in other computing settings. This assumption has recently gained attention in the artificial society and simulation community due to the challenges of model verification and validation. Illustrating the replication in NetLogo, by a different author, of an ABM representing fraudulent behavior in a public service delivery system originally developed in the Java-based MASON toolkit, this paper exemplifies how model replication exercises provide unique opportunities for the model verification and validation process. At the same time, it helps accumulate best practices and patterns of model replication and contributes to the agenda of developing a standard methodological protocol for agent-based social simulation.

  13. Skull base tumor model.

    Science.gov (United States)

    Gragnaniello, Cristian; Nader, Remi; van Doormaal, Tristan; Kamel, Mahmoud; Voormolen, Eduard H J; Lasio, Giovanni; Aboud, Emad; Regli, Luca; Tulleken, Cornelius A F; Al-Mefty, Ossama

    2010-11-01

    Resident duty-hours restrictions have now been instituted in many countries worldwide. Shortened training times and increased public scrutiny of surgical competency have led to a move away from the traditional apprenticeship model of training. The development of educational models for brain anatomy is a fascinating innovation that allows neurosurgeons to train without the need to practice on real patients, and it may be a solution to achieving competency within a shortened training period. The authors describe the use of Stratathane resin ST-504 polymer (SRSP), which is inserted at different intracranial locations to closely mimic meningiomas and other pathological entities of the skull base, in a cadaveric model, for use in neurosurgical training. Silicone-injected and pressurized cadaveric heads were used for studying the SRSP model. The SRSP presents unique intrinsic metamorphic characteristics: liquid at first, it expands and foams when injected into the desired area of the brain, forming a solid tumorlike structure. The authors injected SRSP via different passages that did not influence routes used for the surgical approach for resection of the simulated lesion. For example, SRSP injection routes included endonasal transsphenoidal or transoral approaches if lesions were to be removed through a standard skull base approach, or, alternatively, SRSP was injected via a cranial approach if the removal was planned to be via the transsphenoidal or transoral route. The model was set in place in 3 countries (US, Italy, and The Netherlands), and a pool of 13 physicians from 4 different institutions (all surgeons and surgeons in training) participated in evaluating it and provided feedback. All 13 evaluating physicians had overall positive impressions of the model. The overall score on 9 components evaluated--including comparison between the tumor model and real tumor cases, perioperative requirements, general impression, and applicability--was 88% (100% being the best possible

  14. Electrochemical model of polyaniline-based memristor with mass transfer step

    International Nuclear Information System (INIS)

    Demin, V.A.; Erokhin, V.V.; Kashkarov, P.K.; Kovalchuk, M.V.

    2015-01-01

    The electrochemical organic memristor with a polyaniline active layer is a stand-alone device designed and realized for the reproduction of some synapse properties in innovative electronic circuits, such as new field-programmable gate arrays or neuromorphic networks capable of learning. In this work a new theoretical model of the polyaniline memristor is presented. The developed model of organic memristor functioning is based on a detailed consideration of the possible electrochemical processes occurring in the active zone of this device, including the mass transfer step of ionic reactants. Results of the calculation have demonstrated not only a qualitative explanation of the characteristics observed in the experiment, but also quantitative similarities of the resultant current values. This model can establish a basis for the design and prediction of properties of more complicated circuits and systems (including stochastic ones) based on organic memristive devices

  15. NoSQL Based 3D City Model Management System

    Science.gov (United States)

    Mao, B.; Harrie, L.; Cao, J.; Wu, Z.; Shen, J.

    2014-04-01

    To manage increasingly complicated 3D city models, a framework based on a NoSQL database is proposed in this paper. The framework supports import and export of 3D city models according to international standards such as CityGML, KML/COLLADA and X3D. We also suggest and implement 3D model analysis and visualization in the framework. For city model analysis, 3D geometry data and semantic information (such as name, height, area, price and so on) are stored and processed separately. We use a Map-Reduce method to deal with the 3D geometry data since it is more complex, while the semantic analysis is mainly based on database query operations. For visualization, a multiple 3D city representation structure, CityTree, is implemented within the framework to support dynamic LODs based on the user viewpoint. Also, the proposed framework is easily extensible and supports geoindexes to speed up querying. Our experimental results show that the proposed 3D city management system can efficiently fulfil the analysis and visualization requirements.

  16. Rate-Based Model Predictive Control of Turbofan Engine Clearance

    Science.gov (United States)

    DeCastro, Jonathan A.

    2006-01-01

    An innovative model predictive control strategy is developed for control of nonlinear aircraft propulsion systems and sub-systems. At the heart of the controller is a rate-based linear parameter-varying model that propagates the state derivatives across the prediction horizon, extending prediction fidelity to transient regimes where conventional models begin to lose validity. The new control law is applied to a demanding active clearance control application, where the objectives are to tightly regulate blade tip clearances and also anticipate and avoid detrimental blade-shroud rub occurrences by optimally maintaining a predefined minimum clearance. Simulation results verify that the rate-based controller is capable of satisfying the objectives during realistic flight scenarios where both a conventional Jacobian-based model predictive control law and an unconstrained linear-quadratic optimal controller are incapable of doing so. The controller is evaluated using a variety of different actuators, illustrating the efficacy and versatility of the control approach. It is concluded that the new strategy has promise for this and other nonlinear aerospace applications that place high importance on the attainment of control objectives during transient regimes.

  17. Controls/CFD Interdisciplinary Research Software Generates Low-Order Linear Models for Control Design From Steady-State CFD Results

    Science.gov (United States)

    Melcher, Kevin J.

    1997-01-01

    The NASA Lewis Research Center is developing analytical methods and software tools to create a bridge between the controls and computational fluid dynamics (CFD) disciplines. Traditionally, control design engineers have used coarse nonlinear simulations to generate information for the design of new propulsion system controls. However, such traditional methods are not adequate for modeling the propulsion systems of complex, high-speed vehicles like the High Speed Civil Transport. To properly model the relevant flow physics of high-speed propulsion systems, one must use simulations based on CFD methods. Such CFD simulations have become useful tools for engineers that are designing propulsion system components. The analysis techniques and software being developed as part of this effort are an attempt to evolve CFD into a useful tool for control design as well. One major aspect of this research is the generation of linear models from steady-state CFD results. CFD simulations, often used during the design of high-speed inlets, yield high resolution operating point data. Under a NASA grant, the University of Akron has developed analytical techniques and software tools that use these data to generate linear models for control design. The resulting linear models have the same number of states as the original CFD simulation, so they are still very large and computationally cumbersome. Model reduction techniques have been successfully applied to reduce these large linear models by several orders of magnitude without significantly changing the dynamic response. The result is an accurate, easy to use, low-order linear model that takes less time to generate than those generated by traditional means. The development of methods for generating low-order linear models from steady-state CFD is most complete at the one-dimensional level, where software is available to generate models with different kinds of input and output variables. One-dimensional methods have been extended

  18. Application of Z-Number Based Modeling in Psychological Research

    Directory of Open Access Journals (Sweden)

    Rafik Aliev

    2015-01-01

    Full Text Available Pilates exercises have been shown to have a beneficial impact on the physical, physiological, and mental characteristics of human beings. In this paper, a Z-number based fuzzy approach is applied for modeling the effect of Pilates exercises on motivation, attention, anxiety, and educational achievement. The measurement of psychological parameters is performed using internationally recognized instruments: the Academic Motivation Scale (AMS), the Test of Attention (D2 Test), and Spielberger's Anxiety Test completed by students. The GPA of students was used as the measure of educational achievement. Application of Z-information modeling allows us to increase the precision and reliability of data processing results in the presence of uncertainty in the input data created from the completed questionnaires. The basic steps of Z-number based modeling with numerical solutions are presented.

  19. Improving Language Models in Speech-Based Human-Machine Interaction

    Directory of Open Access Journals (Sweden)

    Raquel Justo

    2013-02-01

    Full Text Available This work focuses on speech-based human-machine interaction. Specifically, a Spoken Dialogue System (SDS) that could be integrated into a robot is considered. Since Automatic Speech Recognition is one of the most sensitive tasks that must be confronted in such systems, the goal of this work is to improve the results obtained by this specific module. In order to do so, a hierarchical Language Model (LM) is considered. Different series of experiments were carried out using the proposed models over different corpora and tasks. The results obtained show that these models provide greater accuracy in the recognition task. Additionally, the influence of the Acoustic Modelling (AM) on the improvement percentage of the Language Models has also been explored. Finally, the use of hierarchical Language Models in a language understanding task has been successfully demonstrated, as shown in an additional series of experiments.

  20. Identifiability Results for Several Classes of Linear Compartment Models.

    Science.gov (United States)

    Meshkat, Nicolette; Sullivant, Seth; Eisenberg, Marisa

    2015-08-01

    Identifiability concerns finding which unknown parameters of a model can be estimated, uniquely or otherwise, from given input-output data. If some subset of the parameters of a model cannot be determined given input-output data, then we say the model is unidentifiable. In this work, we study linear compartment models, which are a class of biological models commonly used in pharmacokinetics, physiology, and ecology. In past work, we used commutative algebra and graph theory to identify a class of linear compartment models that we call identifiable cycle models, which are unidentifiable but have the simplest possible identifiable functions (so-called monomial cycles). Here we show how to modify identifiable cycle models by adding inputs, adding outputs, or removing leaks, in such a way that we obtain an identifiable model. We also prove a constructive result on how to combine identifiable models, each corresponding to strongly connected graphs, into a larger identifiable model. We apply these theoretical results to several real-world biological models from physiology, cell biology, and ecology.

  1. PNN-based Rockburst Prediction Model and Its Applications

    Directory of Open Access Journals (Sweden)

    Yu Zhou

    2017-07-01

    Full Text Available Rock burst is one of the main engineering geological problems significantly threatening the safety of construction. Prediction of rock burst is always an important issue concerning the safety of workers and equipment in tunnels. In this paper, a novel PNN-based rock burst prediction model is proposed to determine whether rock burst will happen in underground rock projects and how intense it will be. The probabilistic neural network (PNN) is developed based on Bayesian criteria of multivariate pattern classification. Because PNN has the advantages of low training complexity, high stability, quick convergence, and simple construction, it can be well applied to the prediction of rock burst. Some main controlling factors, such as the rock's maximum tangential stress, the rock's uniaxial compressive strength, the rock's uniaxial tensile strength, and the elastic energy index of the rock, are chosen as the characteristic vector of the PNN. The PNN model is obtained by training on data sets of rock burst samples which come from underground rock projects at home and abroad. Other samples are tested with the model. The testing results agree with the practical records. At the same time, two real-world applications are used to verify the proposed method. The results of prediction are the same as the results of existing methods and match what actually happened on site, which verifies the effectiveness and applicability of our proposed work.
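    A minimal sketch of the core of such a classifier, a Parzen-window/PNN decision rule over a rock-burst feature vector, is shown below; the feature values, class labels and smoothing parameter are illustrative placeholders, not the paper's data:

```python
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.3):
    """Probabilistic neural network: Parzen-window class densities + Bayes decision rule."""
    # Scale features to [0, 1] so a single smoothing parameter fits all of them.
    lo, hi = X_train.min(axis=0), X_train.max(axis=0)
    scale = lambda X: (X - lo) / np.where(hi > lo, hi - lo, 1.0)
    Xtr, Xte = scale(X_train), scale(np.atleast_2d(X_test))
    classes = np.unique(y_train)
    preds = []
    for x in Xte:
        scores = [np.mean(np.exp(-np.sum((Xtr[y_train == c] - x) ** 2, axis=1)
                                 / (2 * sigma ** 2))) for c in classes]
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)

# Hypothetical samples of [max tangential stress / UCS, UCS / UTS, elastic energy index]
X = np.array([[0.20, 12.0, 2.0], [0.35, 18.0, 3.5], [0.55, 24.0, 5.0], [0.75, 30.0, 7.0]])
y = np.array([0, 1, 2, 3])   # none / light / moderate / strong rock burst (illustrative labels)
print("predicted intensity class:", pnn_predict(X, y, [[0.6, 25.0, 5.5]]))
```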

  2. Modified hyperbolic sine model for titanium dioxide-based memristive thin films

    Science.gov (United States)

    Abu Bakar, Raudah; Syahirah Kamarozaman, Nur; Fazlida Hanim Abdullah, Wan; Herman, Sukreen Hana

    2018-03-01

    Since the emergence of the memristor as the newest fundamental circuit element, studies on memristor modeling have evolved. To date, the developed models have been based on the linear model, the linear ionic drift model using different window functions, the tunnelling barrier model and the hyperbolic-sine function based model. Although the hyperbolic-sine function model could predict the memristor's electrical properties, it was not well fitted to the experimental data. In order to improve the performance of the hyperbolic-sine function model, the state variable equation was modified. Adding a window function alone could not provide an improved fit. Multiplying Yakopcic's state variable model into Chang's model, on the other hand, resulted in closer agreement with the TiO2 thin-film experimental data. The percentage error was approximately 2.15%.
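    A rough sketch of how a hyperbolic-sine memristor model of this kind can be simulated is given below; the current and state equations are generic placeholders in the spirit of Chang-style models, not the modified equations derived in the paper, and all parameter values are arbitrary:

```python
import numpy as np

def simulate_memristor(v, dt=1e-5, a=0.2, b=1.5, alpha=2.0, beta=1.0, x0=0.5):
    """Generic hyperbolic-sine memristor sketch: i = a*x*sinh(b*v), with a bounded state x
    whose drift is also driven by sinh(v). Placeholder equations and parameters only."""
    x = x0
    current, state = np.empty_like(v), np.empty_like(v)
    for k, vk in enumerate(v):
        current[k] = a * x * np.sinh(b * vk)               # device current
        dxdt = alpha * np.sinh(beta * vk) * x * (1.0 - x)  # window-like term keeps x in [0, 1]
        x = float(np.clip(x + dxdt * dt, 0.0, 1.0))
        state[k] = x
    return current, state

t = np.arange(0.0, 0.04, 1e-5)
v = np.sin(2 * np.pi * 50 * t)           # 50 Hz sinusoidal drive
i, x = simulate_memristor(v)
print("peak current:", round(float(np.abs(i).max()), 4), " final state:", round(float(x[-1]), 4))
```

    Fitting such a model to measured I-V curves then amounts to adjusting the parameters until the simulated and experimental loops agree, which is where the reported percentage error comes from.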

  3. An Adaptive Agent-Based Model of Homing Pigeons: A Genetic Algorithm Approach

    Directory of Open Access Journals (Sweden)

    Francis Oloo

    2017-01-01

    Full Text Available Conventionally, agent-based modelling approaches start from a conceptual model capturing the theoretical understanding of the systems of interest. Simulation outcomes are then used "at the end" to validate the conceptual understanding. In today's data-rich era, there are suggestions that models should be data-driven. Data-driven workflows are common in mathematical models. However, their application to agent-based models is still in its infancy. Integration of real-time sensor data into modelling workflows opens up the possibility of comparing simulations against real data during the model run. Calibration and validation procedures thus become automated processes that are iteratively executed during the simulation. We hypothesize that incorporation of real-time sensor data into agent-based models improves the predictive ability of such models, and in particular that such integration results in increasingly well calibrated model parameters and rule sets. In this contribution, we explore this question by implementing a flocking model that evolves in real-time. Specifically, we use a genetic algorithm approach to simulate representative parameters describing the flight routes of homing pigeons. The navigation parameters of the pigeons are simulated and dynamically evaluated against emulated GPS sensor data streams and optimised based on the fitness of candidate parameters. As a result, the model was able to accurately simulate the relative turn angles and step distance of homing pigeons. Further, the optimised parameters could replicate loops, which are common patterns in the flight tracks of homing pigeons. Finally, the use of genetic algorithms in this study allowed for a simultaneous data-driven optimization and sensitivity analysis.
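    A toy sketch of the data-driven calibration loop described above, with a genetic algorithm searching for navigation parameters that best match an emulated reference, follows; the two-parameter fitness function and all numbers are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
reference = np.array([12.0, 35.0])   # emulated GPS-derived mean turn angle (deg) and step distance (m)

def fitness(params):
    """Higher is better: negative distance between simulated and observed track statistics."""
    return -np.linalg.norm(params - reference)

def calibrate(pop_size=30, generations=60, sigma=1.0):
    pop = rng.uniform([0.0, 0.0], [90.0, 100.0], size=(pop_size, 2))   # initial candidates
    for _ in range(generations):
        scores = np.array([fitness(p) for p in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]             # keep the fitter half
        children = parents[rng.integers(0, len(parents), pop_size - len(parents))]
        children = children + rng.normal(0.0, sigma, children.shape)   # Gaussian mutation
        pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(p) for p in pop])]

print("calibrated [turn angle, step distance]:", np.round(calibrate(), 2))
```

    In the real workflow, the fitness would be recomputed each generation against the incoming sensor stream, so the calibrated parameters track the data as the simulation runs.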

  4. Coast-down model based on rated parameters of reactor coolant pump

    International Nuclear Information System (INIS)

    Jiang Maohua; Zou Zhichao; Wang Pengfei; Ruan Xiaodong

    2014-01-01

    For a sudden loss of power in a reactor coolant pump (RCP), a calculation model of the rotor speed and flow characteristics based on rated parameters was studied. The derived model was verified by comparison with the power-off experimental data of the 100D RCP. The results indicate that it can be used in preliminary design calculation and verification analysis. A design criterion for the RCP was then described based on the calculation model, and the moment of inertia of the AP1000 RCP was verified against this criterion. (authors)
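    For illustration, a commonly used idealized coast-down relation can be written directly from rated parameters, assuming the resisting torque scales with the square of speed and the flow follows the speed; this is a textbook-style sketch under those assumptions, not the model derived in the paper, and all numerical values are placeholders:

```python
import numpy as np

def coast_down(t, n_rated=1480.0, inertia=4000.0, p_rated=6.0e6, q_rated=6.0):
    """Idealized RCP coast-down after loss of power: with resisting torque ~ speed^2,
    n(t) = n_rated / (1 + t / t_half), where t_half = J * omega_rated / T_rated,
    and flow is taken proportional to speed. Torque law and numbers are assumptions."""
    omega_rated = 2.0 * np.pi * n_rated / 60.0     # rated angular speed, rad/s
    torque_rated = p_rated / omega_rated           # rated torque from rated power
    t_half = inertia * omega_rated / torque_rated  # time to fall to half speed, s
    n = n_rated / (1.0 + t / t_half)               # rotor speed, rpm
    q = q_rated * n / n_rated                      # flow rate follows speed (affinity law), m^3/s
    return n, q

t = np.array([0.0, 5.0, 10.0, 20.0, 40.0, 60.0])
n, q = coast_down(t)
print("speed (rpm):", np.round(n, 1), " flow (m^3/s):", np.round(q, 2))
```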

  5. Operational statistical analysis of the results of computer-based testing of students

    Directory of Open Access Journals (Sweden)

    Виктор Иванович Нардюжев

    2018-12-01

    Full Text Available The article is devoted to the statistical analysis of the results of computer-based testing for the evaluation of the educational achievements of students. The issues are relevant because computer-based testing in Russian universities has become an important method for evaluating the educational achievements of students and the quality of the modern educational process. Usage of modern methods and programs for statistical analysis of the results of computer-based testing and assessment of the quality of developed tests is a pressing problem for every university teacher. The article shows how the authors solve this problem using their own program "StatInfo". For several years the program has been successfully applied in a credit system of education at such technological stages as loading computer-based testing protocols into a database, formation of queries, and generation of reports, lists, and matrices of answers for statistical analysis of the quality of test items. The methodology, experience and some results of its usage by university teachers are described in the article. Related topics of test development, models, algorithms, technologies, and software for large-scale computer-based testing have been discussed by the authors in their previous publications, which are presented in the reference list.

  6. SAT-MAP-CLIMATE project results[SATellite base bio-geophysical parameter MAPping and aggregation modelling for CLIMATE models

    Energy Technology Data Exchange (ETDEWEB)

    Bay Hasager, C.; Woetmann Nielsen, N.; Soegaard, H.; Boegh, E.; Hesselbjerg Christensen, J.; Jensen, N.O.; Schultz Rasmussen, M.; Astrup, P.; Dellwik, E.

    2002-08-01

    Earth Observation (EO) data from imaging satellites are analysed with respect to albedo, land and sea surface temperatures, land cover types and vegetation parameters such as the Normalized Difference Vegetation Index (NDVI) and the leaf area index (LAI). The observed parameters are used in the DMI-HIRLAM-D05 weather prediction model in order to improve the forecasting. The effect of introducing actual sea surface temperatures from NOAA AVHRR, compared to climatological mean values, shows a more pronounced land-sea breeze effect which is also observable in field observations. The albedo maps from NOAA AVHRR are rather similar to the climatological mean values, so for the HIRLAM model this is insignificant, yet most likely of some importance in the HIRHAM regional climate model. Land cover type maps are assigned local roughness values determined from meteorological field observations. Only maps with a spatial resolution around 25 m can adequately map the roughness variations of the typical patch size distribution in Denmark. A roughness map covering Denmark is aggregated (i.e. area-averaged non-linearly) by a microscale aggregation model that takes the non-linear turbulent responses of each roughness step change between patches in an arbitrary pattern into account. The effective roughnesses are calculated on a 15 km by 15 km grid for the HIRLAM model. The effect of hedgerows is included as an added roughness effect as a function of hedge density mapped from a digital vector map. Introducing the new effective roughness maps into the HIRLAM model appears to remedy the seasonal wind speed bias over land and sea in spring. A new parameterisation of the effective roughness for scalar surface fluxes is developed and tested on synthetic data. Furthermore, a method for estimating the evapotranspiration from albedo, surface temperatures and NDVI is successfully compared to field observations. The HIRLAM predictions of water vapour at 12 GMT are used for atmospheric correction of

  7. Firm Based Trade Models and Turkish Economy

    Directory of Open Access Journals (Sweden)

    Nilüfer ARGIN

    2015-12-01

    Full Text Available Among all international trade models, only firm-based trade models explain firms' actions and behavior in world trade. Firm-based trade models focus on the trade behavior of the individual firms that actually carry out intra-industry trade, and they can truly explain the globalization process. These approaches also cover multinational corporations, supply chains and outsourcing. Our paper aims to explain and analyze Turkish exports in the context of firm-based trade models. We use UNCTAD data on exports by SITC Rev 3 categorization to examine total exports and 255 products and to calculate the intensive and extensive margins of Turkish firms.

  8. NASA Software Cost Estimation Model: An Analogy Based Estimation Model

    Science.gov (United States)

    Hihn, Jairus; Juster, Leora; Menzies, Tim; Mathew, George; Johnson, James

    2015-01-01

    The cost estimation of software development activities is increasingly critical for large-scale integrated projects such as those at DOD and NASA, especially as the software systems become larger and more complex. As an example, MSL (Mars Science Laboratory), developed at the Jet Propulsion Laboratory, launched with over 2 million lines of code, making it the largest robotic spacecraft ever flown (based on the size of the software). Software development activities are also notorious for their cost growth, with NASA flight software averaging over 50% cost growth. All across the agency, estimators and analysts are increasingly being tasked to develop reliable cost estimates in support of program planning and execution. While there has been extensive work on improving parametric methods, there is very little focus on the use of models based on analogy and clustering algorithms. In this paper we summarize our findings on effort/cost model estimation and model development based on ten years of software effort estimation research using data mining and machine learning methods to develop estimation models based on analogy and clustering. The NASA Software Cost Model performance is evaluated by comparing it to COCOMO II, linear regression, and K-nearest neighbor prediction model performance on the same data set.
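    A minimal sketch of an analogy-based (k-nearest-neighbour) effort estimator of the kind referred to above is shown below; the project features, effort values and choice of k are hypothetical:

```python
import numpy as np

def analogy_estimate(X_hist, effort_hist, x_new, k=3):
    """Analogy-based estimate: mean effort of the k most similar completed projects.
    Min-max feature scaling and the value of k are illustrative choices."""
    lo, hi = X_hist.min(axis=0), X_hist.max(axis=0)
    scale = lambda X: (X - lo) / np.where(hi > lo, hi - lo, 1.0)
    d = np.linalg.norm(scale(X_hist) - scale(np.asarray(x_new)), axis=1)
    nearest = np.argsort(d)[:k]
    return effort_hist[nearest].mean()

# Hypothetical history: [KSLOC, team size, reuse fraction] -> effort (person-months)
X = np.array([[20, 5, 0.2], [120, 15, 0.1], [60, 8, 0.4], [200, 25, 0.05], [45, 6, 0.3]])
effort = np.array([80.0, 600.0, 220.0, 1100.0, 150.0])
print("estimated effort (person-months):", analogy_estimate(X, effort, [70, 9, 0.25]))
```

    Clustering-based variants work the same way except that the new project is first assigned to a cluster of similar historical projects and estimated from that cluster's statistics.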

  9. Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms

    Science.gov (United States)

    Berhausen, Sebastian; Paszek, Stefan

    2016-01-01

    In recent years, system failures have occurred in many power systems all over the world, resulting in a loss of power supply to large numbers of customers. To minimize the risk of power failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. To conduct reliable simulations, an up-to-date base of parameters of the models of generating units, containing the models of synchronous generators, is necessary. The paper presents a method for parameter estimation of a nonlinear synchronous generator model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) in the generator voltage regulation channel. The parameter estimation was performed by minimizing an objective function defined as the mean square error of the deviations between the measured waveforms and the waveforms calculated from the generator's mathematical model. A hybrid algorithm was used for the minimization of the objective function. The paper also describes a filter system used for filtering the noisy measured waveforms. The calculation results for the model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology are also given. The presented estimation method can be successfully applied to parameter estimation of different models of high-power synchronous generators operating in a power system.
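    The estimation step can be illustrated with a much simpler stand-in: a least-squares fit of model parameters that minimizes the mean square deviation between a noisy "measured" waveform and the model response. The second-order response below replaces the full generator model, and scipy's generic least_squares solver replaces the hybrid algorithm used in the paper:

```python
import numpy as np
from scipy.optimize import least_squares

def response(params, t):
    """Simple damped second-order step response; a stand-in for the generator model."""
    gain, tau, zeta = params
    wn = 1.0 / tau
    wd = wn * np.sqrt(max(1.0 - zeta ** 2, 1e-9))
    return gain * (1.0 - np.exp(-zeta * wn * t) * np.cos(wd * t))

t = np.linspace(0.0, 5.0, 500)
true_params = np.array([1.2, 0.4, 0.3])
measured = response(true_params, t) + np.random.default_rng(2).normal(0.0, 0.02, t.size)

# Minimize the deviations between measured and modelled waveforms (sum of squares).
fit = least_squares(lambda p: response(p, t) - measured,
                    x0=[1.0, 1.0, 0.5], bounds=([0.1, 0.01, 0.01], [5.0, 5.0, 0.99]))
print("estimated [gain, tau, zeta]:", np.round(fit.x, 3))
```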

  10. Modeling Guru: Knowledge Base for NASA Modelers

    Science.gov (United States)

    Seablom, M. S.; Wojcik, G. S.; van Aartsen, B. H.

    2009-05-01

    Modeling Guru is an on-line knowledge-sharing resource for anyone involved with or interested in NASA's scientific models or High End Computing (HEC) systems. Developed and maintained by NASA's Software Integration and Visualization Office (SIVO) and the NASA Center for Computational Sciences (NCCS), Modeling Guru's combined forums and knowledge base for research and collaboration is becoming a repository for the accumulated expertise of NASA's scientific modeling and HEC communities. All NASA modelers and associates are encouraged to participate and provide knowledge about the models and systems so that other users may benefit from their experience. Modeling Guru is divided into a hierarchy of communities, each with its own set of forums and knowledge base documents. Current modeling communities include those for space science, land and atmospheric dynamics, atmospheric chemistry, and oceanography. In addition, there are communities focused on NCCS systems, HEC tools and libraries, and programming and scripting languages. Anyone may view most of the content on Modeling Guru (available at http://modelingguru.nasa.gov/), but you must log in to post messages and subscribe to community postings. The site offers a full range of "Web 2.0" features, including discussion forums, "wiki" document generation, document uploading, RSS feeds, search tools, blogs, email notification, and "breadcrumb" links. A discussion (a.k.a. forum "thread") is used to post comments, solicit feedback, or ask questions. If marked as a question, SIVO will monitor the thread, and normally respond within a day. Discussions can include embedded images, tables, and formatting through the use of the Rich Text Editor. Also, users can add "Tags" to their threads to facilitate later searches. The "knowledge base" is comprised of documents that are used to capture and share expertise with others. The default "wiki" document lets users edit within the browser so others can easily collaborate on the

  11. Based on user interest level of modeling scenarios and browse content

    Science.gov (United States)

    Zhao, Yang

    2017-08-01

    User interest modeling is the core of personalized services, and it must take into account the impact of situational information on user preferences as well as the user's browsing behavior. This paper proposes a method of user interest modeling based on scenario information: a set of scenarios approximating the user's current scene is obtained by calculating situational similarity, and the "user - interest items - scenarios" three-dimensional model is reduced in dimension using a situation pre-filtering method. The content the user is interested in is analyzed by topic, keywords for each topic of interest are extracted from the page content, and a user interest model is built on the basis of a vector space model. The experimental results show that the user interest model based on scenario information keeps the error of the user's interest prediction within 9%, which demonstrates its effectiveness.

  12. Agent-Based Modeling of Day-Ahead Real Time Pricing in a Pool-Based Electricity Market

    Directory of Open Access Journals (Sweden)

    Sh. Yousefi

    2011-09-01

    Full Text Available In this paper, an agent-based structure of the electricity retail market is presented, based on which day-ahead (DA) energy procurement for customers is modeled. Here, we focus on the operation of a single Retail Energy Provider (REP) agent who purchases energy from the DA pool-based wholesale market and offers DA real time tariffs to a group of its customers. As a model of customer response to the offered real time prices, an hourly acceptance function is proposed in order to represent the hourly changes in the customers' effective demand according to the prices. A Q-learning (QL) approach is applied to day-ahead real time pricing for the customers, enabling the REP agent to discover which price yields the most benefit through a trial-and-error search. Numerical studies are presented based on New England day-ahead market data, including a comparison of the results of RTP based on the QL approach with those of genetic-based pricing.
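    A stripped-down sketch of the trial-and-error pricing idea follows: a stateless (bandit-style) Q-learning update over a small set of candidate tariffs, with an invented acceptance function standing in for the customers' hourly response; all prices and demand figures are placeholders:

```python
import numpy as np

rng = np.random.default_rng(3)
prices = np.array([40.0, 60.0, 80.0, 100.0])   # candidate retail tariffs ($/MWh), illustrative
wholesale = 50.0                               # assumed day-ahead pool price

def profit(price):
    """Accepted demand falls as the offered price rises (toy acceptance function + noise)."""
    accepted_demand = max(0.0, 100.0 * (1.5 - price / 80.0))
    return (price - wholesale) * accepted_demand + rng.normal(0, 10)

Q = np.zeros(len(prices))                      # single-state Q-table over the price actions
alpha, epsilon = 0.1, 0.2
for episode in range(2000):
    a = rng.integers(len(prices)) if rng.random() < epsilon else int(np.argmax(Q))
    Q[a] += alpha * (profit(prices[a]) - Q[a])  # stateless Q-learning update
print("learned Q-values:", np.round(Q, 1), "-> best tariff:", prices[int(np.argmax(Q))])
```

    In the full model the state would also encode the hour and demand conditions, so the agent learns a price for each situation rather than a single best tariff.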

  13. Event-based soil loss models for construction sites

    Science.gov (United States)

    Trenouth, William R.; Gharabaghi, Bahram

    2015-05-01

    The elevated rates of soil erosion stemming from land clearing and grading activities during urban development can result in excessive amounts of eroded sediments entering waterways and causing harm to the biota living therein. However, construction site event-based soil loss simulations - required for reliable design of erosion and sediment controls - are one of the most uncertain types of hydrologic models. This study presents models with an improved degree of accuracy to advance the design of erosion and sediment controls for construction sites. The new models are developed using multiple linear regression (MLR) on event-based permutations of the Universal Soil Loss Equation (USLE) and artificial neural networks (ANN). These models were developed using surface runoff monitoring datasets obtained from three sites - Greensborough, Cookstown, and Alcona - in Ontario and datasets mined from the literature for three additional sites - Treynor, Iowa, Coshocton, Ohio and Cordoba, Spain. The predictive MLR and ANN models can serve as both diagnostic and design tools for the effective sizing of erosion and sediment controls on active construction sites, and can be used for dynamic scenario forecasting when considering rapidly changing land use conditions during various phases of construction.

  14. Model-reduced gradient-based history matching

    NARCIS (Netherlands)

    Kaleta, M.P.

    2011-01-01

    Since the world's energy demand increases every year, the oil & gas industry makes a continuous effort to improve fossil fuel recovery. Physics-based petroleum reservoir modeling and closed-loop model-based reservoir management concept can play an important role here. In this concept measured data

  15. Stimulating Scientific Reasoning with Drawing-Based Modeling

    Science.gov (United States)

    Heijnes, Dewi; van Joolingen, Wouter; Leenaars, Frank

    2018-01-01

    We investigate the way students' reasoning about evolution can be supported by drawing-based modeling. We modified the drawing-based modeling tool SimSketch to allow for modeling evolutionary processes. In three iterations of development and testing, students in lower secondary education worked on creating an evolutionary model. After each…

  16. Modeling of VSC-Based Power Systems in The Extended Harmonic Domain

    DEFF Research Database (Denmark)

    Esparza, Miguel; Segundo-Ramirez, Juan; Kwon, Jun Bum

    2017-01-01

    Averaged modeling is a commonly used approach to obtain mathematical representations of VSC-based systems. However, essential characteristics, mainly related to the modulation process and the harmonic distortion of the signals, are not able to be accurately captured and analyzed. The extended ...... on simulations and experimental case studies. The obtained results show that the resulting EHD models are accurate and reliable, while the memory and computation time are improved with the proposed model order reductions....

  17. Conceptual Framework for Agent-Based Modeling of Customer-Oriented Supply Networks

    OpenAIRE

    Solano-Vanegas , Clara ,; Carrillo-Ramos , Angela; Montoya-Torres , Jairo ,

    2015-01-01

    Part 3: Collaboration Frameworks; International audience; Supply Networks (SN) are complex systems involving the interaction of different actors, very often, with different objectives and goals. Among the different existing modeling approaches, agent-based systems can properly represent the autonomous behavior of SN links and, simultaneously, observe the general response of the system as a result of individual actions. Most of research using agent-based modeling in SN focuses on production is...

  18. Researches of fruit quality prediction model based on near infrared spectrum

    Science.gov (United States)

    Shen, Yulin; Li, Lian

    2018-04-01

    With the improvement in standards for food quality and safety, people pay more attention to the internal quality of fruits; therefore, the measurement of fruit internal quality is increasingly imperative. In general, nondestructive soluble solids content (SSC) and total acid content (TAC) analysis of fruits is vital and effective for quality measurement in global fresh produce markets, so in this paper we aim to establish a novel fruit internal quality prediction model based on SSC and TAC for near infrared spectra. Firstly, fruit quality prediction models based on PCA + BP neural network, PCA + GRNN network, PCA + BP adaboost strong classifier, PCA + ELM and PCA + LS_SVM classifiers are designed and implemented respectively; then, in the NSCT domain, the median filter and the Savitzky-Golay filter are used to preprocess the spectral signal, and the Kennard-Stone algorithm is used to automatically select the training samples and test samples; thirdly, we obtain the optimal models by comparing 15 kinds of prediction models on the basis of a multi-classifier competition mechanism; specifically, non-parametric estimation is introduced to measure the effectiveness of the proposed models, with the reliability and variance of the non-parametric estimation of each prediction model used to evaluate the prediction results, while the estimated value and confidence interval serve as a reference. The experimental results demonstrate that this approach can better achieve an optimal evaluation of the internal quality of fruit; finally, we employ cat swarm optimization to optimize the two optimal models obtained from the non-parametric estimation above, and empirical testing indicates that the proposed method can provide more accurate and effective results than other forecasting methods.
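    One concrete piece of this pipeline, the Kennard-Stone selection of calibration samples, can be sketched as follows; the random matrix stands in for preprocessed NIR spectra, and the rest of the pipeline (filtering, PCA and the competing classifiers) is not shown:

```python
import numpy as np

def kennard_stone(X, n_select):
    """Kennard-Stone sample selection: start from the two most distant samples, then
    repeatedly add the sample whose minimum distance to the selected set is largest."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    selected = list(np.unravel_index(np.argmax(d), d.shape))
    remaining = [i for i in range(len(X)) if i not in selected]
    while len(selected) < n_select:
        min_d = d[np.ix_(remaining, selected)].min(axis=1)   # distance to the selected set
        pick = remaining[int(np.argmax(min_d))]
        selected.append(pick)
        remaining.remove(pick)
    return selected

X = np.random.default_rng(4).normal(size=(30, 5))   # stand-in for preprocessed NIR spectra
train_idx = kennard_stone(X, 20)                    # calibration samples; the rest form the test set
print(sorted(train_idx))
```

    The selected rows would then be fed to the PCA step and each downstream regressor, so that every competing model sees the same, well-spread calibration set.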

  19. The human body metabolism process mathematical simulation based on Lotka-Volterra model

    Science.gov (United States)

    Oliynyk, Andriy; Oliynyk, Eugene; Pyptiuk, Olexandr; DzierŻak, RóŻa; Szatkowska, Małgorzata; Uvaysova, Svetlana; Kozbekova, Ainur

    2017-08-01

    A mathematical model of the metabolism process in the human organism based on the Lotka-Volterra model has been proposed, considering the healing regime, the nutrition system, and the features of the insulin and sugar breakdown process in the organism. A numerical algorithm for the model using the fourth-order Runge-Kutta method has been implemented. From the calculation results, conclusions have been drawn, recommendations on using the modeling results are given, and directions for further research are defined.
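    A minimal sketch of the numerical core described above follows: a two-component Lotka-Volterra system integrated with the classical fourth-order Runge-Kutta method; the coefficients and initial state are placeholders, not calibrated physiological values:

```python
import numpy as np

def lotka_volterra(state, a=1.1, b=0.4, c=0.4, d=0.1):
    """Two-component Lotka-Volterra system (e.g. a sugar-like and a regulator-like quantity);
    coefficients are illustrative, not calibrated physiological values."""
    x, y = state
    return np.array([a * x - b * x * y, -c * y + d * x * y])

def rk4_step(f, state, dt):
    """Classical 4th-order Runge-Kutta step."""
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

state, dt = np.array([10.0, 5.0]), 0.01
trajectory = [state]
for _ in range(5000):
    state = rk4_step(lotka_volterra, state, dt)
    trajectory.append(state)
print("final state:", np.round(trajectory[-1], 3))
```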

  20. Route Assessment for Unmanned Aerial Vehicle Based on Cloud Model

    Directory of Open Access Journals (Sweden)

    Xixia Sun

    2014-01-01

    Full Text Available An integrated route assessment approach based on the cloud model is proposed in this paper, in which various sources of uncertainty are preserved and modeled by cloud theory. Firstly, a systemic criteria framework incorporating models for scoring subcriteria is developed. Then, the cloud model is introduced to represent linguistic variables, and the survivability probability histogram of each route is converted into normal clouds by cloud transformation, enabling both randomness and fuzziness in the assessment environment to be managed simultaneously. Finally, a new way to measure the similarity between two normal clouds, satisfying reflexivity, symmetry, transitivity, and overlapping, is proposed. Experimental results demonstrate that the proposed route assessment approach outperforms the fuzzy logic based assessment approach with regard to feasibility, reliability, and consistency with human thinking.

  1. Missing Value Imputation Based on Gaussian Mixture Model for the Internet of Things

    Directory of Open Access Journals (Sweden)

    Xiaobo Yan

    2015-01-01

    Full Text Available This paper addresses missing value imputation for the Internet of Things (IoT). Nowadays, the IoT is used widely and commonly in a variety of domains, such as the transportation and logistics domain and the healthcare domain. However, missing values are very common in the IoT for a variety of reasons, which means that the experimental data are incomplete. As a result, some work related to IoT data cannot be carried out normally, and the accuracy and reliability of the data analysis results are reduced. Considering the characteristics of the data itself and the features of missing data in the IoT, this paper divides missing data into three types and defines three corresponding missing value imputation problems. Then, we propose three new models to solve the corresponding problems: a model of missing value imputation based on context and linear mean (MCL), a model of missing value imputation based on binary search (MBS), and a model of missing value imputation based on a Gaussian mixture model (MGI). Experimental results showed that the three models can improve the accuracy, reliability, and stability of missing value imputation greatly and effectively.
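    As a much-simplified illustration of the Gaussian-mixture idea behind MGI, the sketch below fits a two-component mixture to the observed readings of a single sensor and fills the gaps by sampling from it; the paper's three models (MCL, MBS, MGI) are more elaborate, and the data here are synthetic:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
readings = np.concatenate([rng.normal(20, 1, 300), rng.normal(35, 2, 200)])  # bimodal sensor data
mask = rng.random(readings.size) < 0.1                                        # ~10% missing
observed = readings[~mask]

# Fit the mixture only on observed readings, then impute missing entries by sampling from it
# (the mixture mean or the most likely component mean are equally valid simple choices).
gmm = GaussianMixture(n_components=2, random_state=0).fit(observed.reshape(-1, 1))
imputed = gmm.sample(int(mask.sum()))[0].ravel()
filled = readings.copy()
filled[mask] = imputed
print("imputation RMSE:", round(float(np.sqrt(np.mean((filled[mask] - readings[mask]) ** 2))), 3))
```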

  2. Grid based calibration of SWAT hydrological models

    Directory of Open Access Journals (Sweden)

    D. Gorgan

    2012-07-01

    Full Text Available The calibration and execution of large hydrological models, such as SWAT (Soil and Water Assessment Tool), developed for large areas, high resolution, and huge input data, need not only quite a long execution time but also high computation resources. The SWAT hydrological model supports studies and predictions of the impact of land management practices on water, sediment, and agricultural chemical yields in complex watersheds. The paper presents the gSWAT application as a practical web solution for environmental specialists to calibrate extensive hydrological models and to run scenarios, by hiding the complex control of processes and heterogeneous resources across the grid-based high computation infrastructure. The paper highlights the basic functionalities of the gSWAT platform and the features of the graphical user interface. The presentation is concerned with the development of working sessions, interactive control of calibration, direct and basic editing of parameters, process monitoring, and graphical and interactive visualization of the results. The experiments performed on different SWAT models and the obtained results demonstrate the benefits brought by the grid parallel and distributed environment as a solution for the processing platform. All the instances of SWAT models used in the reported experiments have been developed through the enviroGRIDS project, targeting the Black Sea catchment area.

  3. Agent-based modeling of the energy network for hybrid cars

    International Nuclear Information System (INIS)

    Gonzalez de Durana, José María; Barambones, Oscar; Kremers, Enrique; Varga, Liz

    2015-01-01

    Highlights: • An approach to represent and calculate multicarrier energy networks has been developed. • It provides a modeling method based on agents for multicarrier energy networks. • It allows the system representation on a single sheet. • Energy flows circulating in the system can be observed dynamically during simulation. • The method is technology independent. - Abstract: Studies in complex energy networks devoted to the modeling of electrical power grids were extended in previous work, where a computational multi-layered ontology, implemented using agent-based methods, was adopted. This structure is compatible with recently introduced multiplex networks which, using multi-linear algebra, generalize some classical results for single-layer networks to multilayer networks in steady state. Static results do not assist greatly in understanding dynamic networks in which the values of the variables in the nodes and edges can change suddenly, driven by events, and in which new nodes or edges may appear or disappear, also because of other events. To address this gap, a computational agent-based model is developed to extend the multi-layer and multiplex approaches. In order to demonstrate the benefits of a dynamical extension, a model of the energy network in a hybrid car is presented as a case study

  4. Model-based security analysis of the German health card architecture.

    Science.gov (United States)

    Jürjens, J; Rumm, R

    2008-01-01

    Health-care information systems are particularly security-critical. In order to make these applications secure, the security analysis has to be an integral part of the system design and IT management process for such systems. This work presents the experiences and results from the security analysis of the system architecture of the German Health Card, by making use of an approach to model-based security engineering that is based on the UML extension UMLsec. The focus lies on the security mechanisms and security policies of the smart-card-based architecture which were analyzed using the UMLsec method and tools. Main results of the paper include a report on the employment of the UMLsec method in an industrial health information systems context as well as indications of its benefits and limitations. In particular, two potential security weaknesses were detected and countermeasures discussed. The results indicate that it can be feasible to apply a model-based security analysis using UMLsec to an industrial health information system like the German Health Card architecture, and that doing so can have concrete benefits (such as discovering potential weaknesses, and an increased confidence that no further vulnerabilities of the kind that were considered are present).

  5. An Agent-Based Model for Studying Child Maltreatment and Child Maltreatment Prevention

    Science.gov (United States)

    Hu, Xiaolin; Puddy, Richard W.

    This paper presents an agent-based model that simulates the dynamics of child maltreatment and child maltreatment prevention. The developed model follows the principles of complex systems science and explicitly models a community and its families with multi-level factors and interconnections across the social ecology. This makes it possible to experiment with how different factors and prevention strategies can affect the rate of child maltreatment. We present the background of this work, give an overview of the agent-based model, and show some simulation results.

  6. The development of macros program-based cognitive evaluation model via e-learning course mathematics in senior high school based on curriculum 2013

    Directory of Open Access Journals (Sweden)

    Djoko Purnomo

    2017-02-01

    Full Text Available The specific purpose of this research is the implementation of a learning tool in the form of a macros-program-based cognitive evaluation model delivered via e-learning for senior high school grade X mathematics, from July to December, based on the 2013 curriculum. The method used in this research followed the research and development procedure of Borg and Gall [2]. In the second year, a population analysis was conducted at several universities in Semarang. The results show that the research and application development of the macros-program-based cognitive evaluation model is effective, which can be seen from (1) the students' learning results exceeding the KKM (minimum mastery criterion), (2) student independence affecting learning results positively, and (3) the learning results of students using the macros-program-based cognitive evaluation model being better than those of the control class. Based on the results above, the developed macros-program-based cognitive evaluation model that has been tested meets quality standards according to Akker (1999). Large-scale testing includes the operational phase of field testing and final product revision, i.e. trials in wider classes that include students majoring in mathematics education at several universities: the Universitas PGRI Semarang, Universitas Islam Sultan Agung and the Universitas Islam Negeri Walisongo Semarang. Positive responses were given by students at the Universitas PGRI Semarang, Universitas Islam Sultan Agung and the Universitas Islam Negeri Walisongo Semarang.

  7. Vibration Noise Modeling for Measurement While Drilling System Based on FOGs

    Directory of Open Access Journals (Sweden)

    Chunxi Zhang

    2017-10-01

    Full Text Available Aiming to improve the long-period survey accuracy of Measurement While Drilling (MWD) based on Fiber Optic Gyroscopes (FOGs), external aiding sources are fused into the inertial navigation by the Kalman filter (KF) method. The KF method needs to model the inertial sensors' noise as the system noise model. The system noise is conventionally modeled as white Gaussian noise. However, because of the vibration while drilling, the noise in the gyros is no longer white Gaussian noise. Moreover, an incorrect noise model will degrade the accuracy of the KF. This paper develops a new approach for noise modeling on the basis of the dynamic Allan variance (DAVAR). In contrast to conventional white noise models, the new noise model contains both the white noise and the colored noise. With this new noise model, the KF for the MWD was designed. Finally, two vibration experiments were performed. Experimental results showed that the proposed vibration noise modeling approach significantly improved the estimated accuracies of the inertial sensor drifts. Comparing the navigation results based on different noise models, with the DAVAR noise model the position error and the toolface angle error are reduced by more than 90%, the velocity error is reduced by more than 65%, and the azimuth error is reduced by more than 50%.
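    The dynamic Allan variance idea can be sketched as an Allan deviation recomputed over a sliding window, so that a vibration episode shows up as a temporarily elevated noise level; the window size, cluster size and simulated gyro signal below are illustrative only:

```python
import numpy as np

def allan_deviation(rate, dt, m):
    """Allan deviation of rate data at cluster size m samples (tau = m*dt)."""
    tau = m * dt
    theta = np.cumsum(rate) * dt                          # integrated angle
    d = theta[2 * m:] - 2 * theta[m:-m] + theta[:-2 * m]
    return np.sqrt(np.mean(d ** 2) / (2 * tau ** 2))

def dynamic_allan(rate, dt, window, m):
    """Dynamic Allan deviation: the same statistic recomputed over a sliding window,
    so non-stationary, vibration-induced noise appears as a time-varying level."""
    return np.array([allan_deviation(rate[i:i + window], dt, m)
                     for i in range(0, len(rate) - window, window // 2)])

dt = 0.01
gyro = np.random.default_rng(6).normal(0, 0.05, 20000)                  # baseline gyro noise
gyro[8000:12000] += np.random.default_rng(7).normal(0, 0.2, 4000)        # simulated vibration segment
print(np.round(dynamic_allan(gyro, dt, window=2000, m=10), 4))
```

    In the KF design, the elevated segments of this curve motivate a noise model that mixes white and colored components instead of a single white-noise level.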

  8. Vibration Noise Modeling for Measurement While Drilling System Based on FOGs.

    Science.gov (United States)

    Zhang, Chunxi; Wang, Lu; Gao, Shuang; Lin, Tie; Li, Xianmu

    2017-10-17

    Aiming to improve the long-period survey accuracy of Measurement While Drilling (MWD) based on Fiber Optic Gyroscopes (FOGs), external aiding sources are fused into the inertial navigation by the Kalman filter (KF) method. The KF method needs to model the inertial sensors' noise as the system noise model. The system noise is conventionally modeled as white Gaussian noise. However, because of the vibration while drilling, the noise in the gyros is no longer white Gaussian noise. Moreover, an incorrect noise model will degrade the accuracy of the KF. This paper developed a new approach for noise modeling on the basis of the dynamic Allan variance (DAVAR). In contrast to conventional white noise models, the new noise model contains both the white noise and the colored noise. With this new noise model, the KF for the MWD was designed. Finally, two vibration experiments were performed. Experimental results showed that the proposed vibration noise modeling approach significantly improved the estimated accuracies of the inertial sensor drifts. Comparing the navigation results based on different noise models, with the DAVAR noise model the position error and the toolface angle error are reduced by more than 90%, the velocity error is reduced by more than 65%, and the azimuth error is reduced by more than 50%.

  9. Estimation of the applicability domain of kernel-based machine learning models for virtual screening

    Directory of Open Access Journals (Sweden)

    Fechner Nikolas

    2010-03-01

    Full Text Available Abstract Background The virtual screening of large compound databases is an important application of structure-activity relationship models. Due to the high structural diversity of these data sets, it is impossible for machine learning based QSAR models, which rely on a specific training set, to give reliable results for all compounds. Thus, it is important to consider the subset of the chemical space in which the model is applicable. The approaches to this problem that have been published so far mostly use vectorial descriptor representations to define this domain of applicability of the model. Unfortunately, these cannot be extended easily to structured kernel-based machine learning models. For this reason, we propose three approaches to estimate the domain of applicability of a kernel-based QSAR model. Results We evaluated three kernel-based applicability domain estimations using three different structured kernels on three virtual screening tasks. Each experiment consisted of the training of a kernel-based QSAR model using support vector regression and the ranking of a disjoint screening data set according to the predicted activity. For each prediction, the applicability of the model for the respective compound is quantitatively described using a score obtained by an applicability domain formulation. The suitability of the applicability domain estimation is evaluated by comparing the model performance on the subsets of the screening data sets obtained by different thresholds for the applicability scores. This comparison indicates that it is possible to separate the part of the chemical space in which the model gives reliable predictions from the part consisting of structures too dissimilar to the training set to apply the model successfully. A closer inspection reveals that the virtual screening performance of the model is considerably improved if half of the molecules, those with the lowest applicability scores, are omitted from the screening
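    One plausible kernel-based applicability score of the kind evaluated here is the distance of a test compound to its nearest training compounds in the kernel-induced feature space, computed purely from kernel values; the RBF kernel, the choice of k and the synthetic descriptors below are assumptions for illustration, not the paper's formulations:

```python
import numpy as np

def applicability_scores(K_train, K_test_train, k_test_diag, k=5):
    """Mean squared feature-space distance of each test compound to its k nearest training
    compounds, from kernel values only: ||phi(x)-phi(z)||^2 = k(x,x) - 2 k(x,z) + k(z,z).
    Lower score = closer to the training set = safer to predict."""
    d2 = k_test_diag[:, None] - 2.0 * K_test_train + np.diag(K_train)[None, :]
    return np.sort(d2, axis=1)[:, :k].mean(axis=1)

rng = np.random.default_rng(8)
X_train = rng.normal(size=(50, 10))            # training descriptors (synthetic stand-ins)
X_test = rng.normal(size=(5, 10)) * 2.0        # test compounds, deliberately more diverse
rbf = lambda A, B: np.exp(-0.1 * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))
scores = applicability_scores(rbf(X_train, X_train), rbf(X_test, X_train), np.ones(len(X_test)))
print("applicability scores (lower = safer to predict):", np.round(scores, 3))
```

    Because only kernel evaluations are needed, the same score works unchanged for structured (graph) kernels where no explicit descriptor vector exists.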

  10. A Costing Analysis for Decision Making Grid Model in Failure-Based Maintenance

    Directory of Open Access Journals (Sweden)

    Burhanuddin M. A.

    2011-01-01

    Full Text Available Background. In the current economic downturn, industries have to keep tight control of production costs to maintain their profit margins. The maintenance department, as an imperative unit in industry, should gather all maintenance data, process the information instantaneously, and subsequently transform it into a useful decision, then act on the chosen alternative to reduce production cost. The Decision Making Grid model is used to identify strategies for maintenance decisions. However, the model has a limitation as it considers only two factors, that is, downtime and frequency of failures. In this study we consider a third factor, cost, for failure-based maintenance. The objective of this paper is to introduce formulae to estimate maintenance cost. Methods. Fishbone analysis conducted with the Ishikawa model and Decision Making Grid methods are used in this study to reveal some underlying risk factors that delay failure-based maintenance. The goal of the study is to estimate the risk factor, that is, repair cost, to fit into the Decision Making Grid model. The Decision Making Grid model considers two variables, frequency of failure and downtime, in the analysis. This paper introduces a third variable, repair cost, for the Decision Making Grid model. This approach gives better results for categorizing the machines, reducing cost, and boosting the earnings of the manufacturing plant. Results. We collected data from one of the food processing factories in Malaysia. From our empirical results, Machine C, Machine D, Machine F, and Machine I must be in the Decision Making Grid model even though their frequency of failures and downtime are less than those of Machine B and Machine N, based on the costing analysis. The case study and experimental results show that the cost analysis in the Decision Making Grid model gives more promising strategies in failure-based maintenance. Conclusions. The improvement of the Decision Making Grid model for decision analysis with costing analysis is our contribution in this paper for

  11. Benefits of using customized instrumentation in total knee arthroplasty: results from an activity-based costing model.

    Science.gov (United States)

    Tibesku, Carsten O; Hofer, Pamela; Portegies, Wesley; Ruys, C J M; Fennema, Peter

    2013-03-01

    The growing demand for total knee arthroplasty (TKA), combined with the efforts of advanced economies to contain healthcare expenditure, necessitates the use of economically effective technologies in TKA. The present analysis, based on an activity-based costing (ABC) model, was carried out to estimate the economic value of patient-matched instrumentation (PMI) compared to standard surgical instrumentation in TKA. The costs of the two approaches, PMI and standard instrumentation in TKA, were determined by the use of ABC, which measures the cost of a particular procedure by determining the activities involved and adding the cost of each activity. The improvement in productivity due to improved operating room (OR) turn-around times was determined, and the potential additional revenue to the hospital from efficient utilization of the gained OR time was estimated. Increased efficiency in the usage of the OR and in the utilization of surgical trays was noted with the patient-specific approach. Potential revenues to the hospital from the use of PMI were estimated through efficient utilization of the time saved in the OR. Additional revenues of 78,240 per year were estimated, considering utilization of the gained OR time to perform surgeries other than TKA. The analysis suggests that the use of PMI in TKA is economically effective when compared to standard instrumentation.
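
    The activity-based costing idea can be illustrated with a short calculation: per-procedure cost is the sum of the costs of its activities, and OR minutes saved by PMI are converted into potential extra cases and revenue. All rates and durations below are assumptions, not the study's figures.

```python
# Hedged sketch of activity-based costing for two instrumentation approaches.
# Every number here is an illustrative placeholder.
standard = {"setup_min": 35, "surgery_min": 90, "tray_sterilisation": 120.0}
pmi      = {"setup_min": 20, "surgery_min": 85, "tray_sterilisation": 60.0}

OR_COST_PER_MIN = 15.0           # assumed operating-room cost rate
REVENUE_PER_EXTRA_CASE = 1200.0  # assumed margin of one additional procedure
CASES_PER_YEAR = 250

def procedure_cost(activities):
    """Sum the cost of each activity making up the procedure."""
    return (activities["setup_min"] + activities["surgery_min"]) * OR_COST_PER_MIN \
           + activities["tray_sterilisation"]

minutes_saved = (standard["setup_min"] + standard["surgery_min"]) \
                - (pmi["setup_min"] + pmi["surgery_min"])
extra_cases = (minutes_saved * CASES_PER_YEAR) // (pmi["setup_min"] + pmi["surgery_min"])
print(procedure_cost(standard), procedure_cost(pmi), extra_cases * REVENUE_PER_EXTRA_CASE)
```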

  12. Ground-Based Telescope Parametric Cost Model

    Science.gov (United States)

    Stahl, H. Philip; Rowell, Ginger Holmes

    2004-01-01

    A parametric cost model for ground-based telescopes is developed using multi-variable statistical analysis. The model includes both engineering and performance parameters. While diameter continues to be the dominant cost driver, other significant factors include primary mirror radius of curvature and diffraction-limited wavelength. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e., multi-telescope phased-array systems). Additionally, single-variable models based on aperture diameter are derived. This analysis indicates that recent mirror technology advances have indeed reduced the historical telescope cost curve.
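
    A hedged sketch of what such a multi-variable parametric cost model can look like: cost is fitted as a power law in aperture diameter, primary mirror radius of curvature and diffraction-limited wavelength by least squares on log-transformed data, using synthetic telescope data rather than the paper's data set.

```python
# Hedged sketch of a power-law parametric cost model fitted by ordinary least
# squares on log-transformed variables.  The telescope data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
diameter = rng.uniform(1, 10, 40)             # m
radius_of_curvature = rng.uniform(2, 40, 40)  # m
wavelength = rng.uniform(0.4, 10, 40)         # um
cost = 5.0 * diameter**2.3 * radius_of_curvature**-0.3 * wavelength**-0.2 \
       * rng.lognormal(0, 0.2, 40)            # synthetic cost with noise

X = np.column_stack([np.ones(40), np.log(diameter),
                     np.log(radius_of_curvature), np.log(wavelength)])
coeffs, *_ = np.linalg.lstsq(X, np.log(cost), rcond=None)
print("fitted exponents (D, RoC, lambda):", coeffs[1:])
```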

  13. The EURAD model: Design and first results

    International Nuclear Information System (INIS)

    1989-01-01

    The contributions are abridged versions of lectures delivered at the presentation meeting of the EURAD project on 20 and 21 February 1989 in Cologne. EURAD stands for European Acid Deposition Model. The project takes one of the possible and necessary routes to scientific answers to the questions raised by anthropogenic modification of the atmosphere. One of the objectives is to develop a realistic numerical model of the long-range transport of harmful substances in the troposphere over Europe and to use this model both to investigate pollutant distribution and to support its experimental study. The EURAD model consists of two parts: a meteorological mesoscale model and a chemical transport model. In the first part of the presentation these components are introduced and questions concerning the implementation of the entire model on the CRAY X-MP/22 computer system are discussed. Results of the test calculations for the cases 'Chernobyl' and 'Alpex' are then reported. Thereafter, selected problems concerning the treatment of meteorological and air-chemistry processes as well as the parametrization of subscale processes within the model are discussed. The presentation concludes with two lectures on emission estimates and emission scenarios. (orig./KW) [de

  14. Geochemical controls on shale groundwaters: Results of reaction path modeling

    International Nuclear Information System (INIS)

    Von Damm, K.L.; VandenBrook, A.J.

    1989-03-01

    The EQ3NR/EQ6 geochemical modeling code was used to simulate the reaction of several shale mineralogies with different groundwater compositions in order to elucidate changes that may occur in both the groundwater compositions and the rock mineralogies and compositions under conditions that may be encountered in a high-level radioactive waste repository. Shales with primarily illitic or smectitic compositions were the focus of this study. The reactions were run at the ambient temperatures of the groundwaters and at temperatures as high as 250°C, the approximate temperature maximum expected in a repository. All modeling assumed that equilibrium was achieved and treated the rock and water assemblage as a closed system. Graphite was used as a proxy mineral for organic matter in the shales. The results show that the presence of even a very small amount of reducing mineral has a large influence on the redox state of the groundwaters, and that either pyrite or graphite provides essentially the same results, with slight differences in dissolved C, Fe and S concentrations. The thermodynamic database is at present inadequate to fully evaluate the speciation of dissolved carbon, due to the paucity of thermodynamic data for organic compounds. In the illitic cases the groundwaters resulting from interaction at elevated temperatures are acidic, while in the smectitic cases they remain alkaline, although the final equilibrium mineral assemblages are quite similar. 10 refs., 8 figs., 15 tabs

  15. Multiagent-Based Model For ESCM

    Directory of Open Access Journals (Sweden)

    Delia MARINCAS

    2011-01-01

    Full Text Available Web based applications for Supply Chain Management (SCM) are now a necessity for every company in order to meet the increasing customer demands, to face the global competition and to make profit. Multiagent-based approach is appropriate for eSCM because it shows many of the characteristics a SCM system should have. For this reason, we have proposed a multiagent-based eSCM model which configures a virtual SC, automates the SC activities: selling, purchasing, manufacturing, planning, inventory, etc. This model will allow a better coordination of the supply chain network and will increase the effectiveness of Web and intelligent technologies employed in eSCM software.

  16. SLS Navigation Model-Based Design Approach

    Science.gov (United States)

    Oliver, T. Emerson; Anzalone, Evan; Geohagan, Kevin; Bernard, Bill; Park, Thomas

    2018-01-01

    The SLS Program chose to implement a Model-based Design and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team has been responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for the navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1-B design, the additional GPS Receiver hardware is managed as a DMM at the vehicle design level. This paper provides a discussion of the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the Navigation components. These include composing system requirements, requirements verification, model development, model verification and validation, and modeling and analysis approaches. The Model-based Design and Requirements approach does not reduce the effort associated with the design process versus previous processes used at Marshall Space Flight Center. Instead, the approach takes advantage of overlap between the requirements development and management process, and the design and analysis process by efficiently combining the control (i.e. the requirement) and the design mechanisms. The design mechanism is the representation of the component behavior and performance in design and analysis tools. The focus in the early design process shifts from the development and

  17. An internet graph model based on trade-off optimization

    Science.gov (United States)

    Alvarez-Hamelin, J. I.; Schabanel, N.

    2004-03-01

    This paper presents a new model for the Internet graph (AS graph) based on the concept of heuristic trade-off optimization, introduced by Fabrikant, Koutsoupias and Papadimitriou in [CITE] to grow a random tree with a heavily tailed degree distribution. We propose here a generalization of this approach to generate a general graph, as a candidate for modeling the Internet. We present the results of our simulations and an analysis of the standard parameters measured in our model, compared with measurements from the physical Internet graph.
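
    The heuristic trade-off optimization of Fabrikant, Koutsoupias and Papadimitriou, which this model generalizes, can be sketched in a few lines: each newly arriving node attaches to the existing node that minimizes a weighted sum of geometric distance and hop distance to the root. The parameter values below are illustrative.

```python
# Hedged sketch of the FKP trade-off tree: node i attaches to the node j that
# minimises alpha * distance(i, j) + hops_to_root(j).  Parameters are illustrative.
import math, random

random.seed(0)
alpha, n = 10.0, 2000
pos = [(random.random(), random.random())]  # node 0 is the root
hops, degree = [0], [0]

for i in range(1, n):
    x, y = random.random(), random.random()
    best = min(range(i), key=lambda j: alpha * math.dist((x, y), pos[j]) + hops[j])
    pos.append((x, y))
    hops.append(hops[best] + 1)
    degree.append(1)
    degree[best] += 1

print("max degree:", max(degree))  # a few hubs emerge for intermediate alpha
```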

  18. Model-based thermal system design optimization for the James Webb Space Telescope

    Science.gov (United States)

    Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.

    2017-10-01

    Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.
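
    A minimal sketch of the automated model-to-test correlation idea: the parameters of a placeholder thermal model are adjusted so that predicted node temperatures match measured ones in a least-squares sense. The `thermal_model` function and all numbers are stand-ins, not the JWST models.

```python
# Hedged sketch: tune model parameters to minimise the discrepancy between
# predicted and measured temperatures.  The model and data are placeholders.
import numpy as np
from scipy.optimize import minimize

measured_temps = np.array([42.0, 37.5, 55.2, 48.9])   # synthetic test data

def thermal_model(params):
    """Stand-in for the actual thermal solver: maps parameters to node temperatures."""
    conductance, emissivity = params
    return np.array([40.0, 36.0, 54.0, 47.0]) + 2.0 * conductance - 5.0 * emissivity

def discrepancy(params):
    return np.sum((thermal_model(params) - measured_temps) ** 2)

result = minimize(discrepancy, x0=[1.0, 0.5], bounds=[(0.1, 5.0), (0.0, 1.0)])
print("optimal parameters:", result.x, "residual:", result.fun)
```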

  19. An acoustical model based monitoring network

    NARCIS (Netherlands)

    Wessels, P.W.; Basten, T.G.H.; Eerden, F.J.M. van der

    2010-01-01

    In this paper the approach for an acoustical model based monitoring network is demonstrated. This network is capable of reconstructing a noise map, based on the combination of measured sound levels and an acoustic model of the area. By pre-calculating the sound attenuation within the network the

  20. Drone Detection with Chirp‐Pulse Radar Based on Target Fluctuation Models

    Directory of Open Access Journals (Sweden)

    Byung‐Kwan Kim

    2018-04-01

    Full Text Available This paper presents a pulse radar system to detect drones based on a target fluctuation model, specifically the Swerling target model. Because drones are small atypical objects and are mainly composed of non‐conducting materials, their radar cross‐section value is low and fluctuating. Therefore, determining the target fluctuation model and applying a proper integration method are important. The proposed system is herein experimentally verified and the results are discussed. A prototype design of the pulse radar system is based on radar equations. It adopts three different pulse modes and a coherent pulse integration to ensure a high signal‐to‐noise ratio. Outdoor measurements are performed with a prototype radar system to detect Doppler frequencies from both the drone frame and blades. The results indicate that the drone frame and blades are detected within an instrumental maximum range. Additionally, the results show that the drone's frame and blades are close to the Swerling 3 and 4 target models, respectively. By the analysis of the Swerling target models, proper integration methods for detecting drones are verified and can thus contribute to increased detectability.

  1. Model-Based Security Testing

    Directory of Open Access Journals (Sweden)

    Ina Schieferdecker

    2012-02-01

    Full Text Available Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow for the specification of test cases at a higher level of abstraction, for enabling guidance on test identification and specification as well as for automated test generation. Model-based security testing (MBST) is a relatively new field and especially dedicated to the systematic and efficient specification and documentation of security test objectives, security test cases and test suites, as well as to their automated or semi-automated generation. In particular, the combination of security modelling and test generation approaches is still a challenge in research and of high interest for industrial applications. MBST includes e.g. security functional testing, model-based fuzzing, risk- and threat-oriented testing, and the usage of security test patterns. This paper provides a survey on MBST techniques and the related models as well as samples of new methods and tools that are under development in the European ITEA2-project DIAMONDS.

  2. A game theory-based trust measurement model for social networks.

    Science.gov (United States)

    Wang, Yingjie; Cai, Zhipeng; Yin, Guisheng; Gao, Yang; Tong, Xiangrong; Han, Qilong

    2016-01-01

    In social networks, trust is a complex social relationship. Participants in online social networks want to share information and experiences with as many reliable users as possible. However, the modeling of trust is complicated and application dependent. Modeling trust needs to consider interaction history, recommendation, user behaviors and so on. Therefore, modeling trust is an important focus for online social networks. We propose a game theory-based trust measurement model for social networks. The trust degree is calculated from three aspects (service reliability, feedback effectiveness, and recommendation credibility) to obtain a more accurate result. In addition, to alleviate the free-riding problem, we propose a game theory-based punishment mechanism for specific trust and global trust, respectively. We prove that the proposed trust measurement model is effective. The free-riding problem can be resolved effectively through adding the proposed punishment mechanism.
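
    A hedged sketch of the aggregation step described above: the trust degree is taken as a weighted combination of service reliability, feedback effectiveness and recommendation credibility, with a simple penalty standing in for the punishment mechanism. Weights and values are illustrative assumptions.

```python
# Hedged sketch of trust-degree aggregation with an illustrative free-riding
# penalty; the weights are not the paper's calibrated values.
def trust_degree(reliability, feedback, recommendation,
                 w=(0.5, 0.3, 0.2), free_riding_penalty=0.0):
    """Weighted combination of the three trust aspects, clipped at zero."""
    base = w[0] * reliability + w[1] * feedback + w[2] * recommendation
    return max(0.0, base - free_riding_penalty)

print(trust_degree(0.9, 0.7, 0.8))                           # cooperative user
print(trust_degree(0.9, 0.7, 0.8, free_riding_penalty=0.3))  # punished free-rider
```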

  3. Probability-based collaborative filtering model for predicting gene-disease associations.

    Science.gov (United States)

    Zeng, Xiangxiang; Ding, Ningxiang; Rodríguez-Patón, Alfonso; Zou, Quan

    2017-12-28

    Accurately predicting pathogenic human genes has been challenging in recent research. Considering extensive gene-disease data verified by biological experiments, we can apply computational methods to perform accurate predictions with reduced time and expenses. We propose a probability-based collaborative filtering model (PCFM) to predict pathogenic human genes. Several kinds of data sets, containing data of humans and data of other nonhuman species, are integrated in our model. Firstly, on the basis of a typical latent factorization model, we propose model I with an average heterogeneous regularization. Secondly, we develop modified model II with personal heterogeneous regularization to enhance the accuracy of the aforementioned models. In this model, vector space similarity or Pearson correlation coefficient metrics and data on related species are also used. We compared the results of PCFM with the results of four state-of-the-art approaches. The results show that PCFM performs better than the other advanced approaches. The PCFM model can be leveraged for predictions of disease genes, especially for new human genes or diseases with no known relationships.
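
    A minimal sketch of the latent-factorization core of such a collaborative filtering model, trained by stochastic gradient descent with regularization on a toy gene-disease association matrix; the heterogeneous regularization terms and cross-species data of PCFM are not reproduced.

```python
# Hedged sketch of a latent-factor model for gene-disease association scores.
# The tiny association matrix and hyper-parameters are placeholders.
import numpy as np

rng = np.random.default_rng(0)
R = (rng.random((30, 20)) < 0.1).astype(float)   # toy gene-disease associations
n_factors, lr, reg = 8, 0.05, 0.02
G = rng.normal(scale=0.1, size=(30, n_factors))  # gene factors
D = rng.normal(scale=0.1, size=(20, n_factors))  # disease factors

for _ in range(200):
    for g, d in zip(*np.nonzero(R >= 0)):        # iterate over all cells
        g_vec = G[g].copy()
        err = R[g, d] - g_vec @ D[d]
        G[g] += lr * (err * D[d] - reg * g_vec)
        D[d] += lr * (err * g_vec - reg * D[d])

scores = G @ D.T                                 # predicted association strengths
print("strongest predicted pair:", np.unravel_index(scores.argmax(), scores.shape))
```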

  4. Review of Current Standard Model Results in ATLAS

    CERN Document Server

    Brandt, Gerhard; The ATLAS collaboration

    2018-01-01

    This talk highlights results selected from the Standard Model research programme of the ATLAS Collaboration at the Large Hadron Collider. Results using data from $p-p$ collisions at $\sqrt{s}=7,8$ TeV in LHC Run-1 as well as results using data at $\sqrt{s}=13$ TeV in LHC Run-2 are covered. The status of cross section measurements from soft QCD processes and jet production as well as photon production are presented. The presentation extends to vector boson production with associated jets. Precision measurements of the production of $W$ and $Z$ bosons, including a first measurement of the mass of the $W$ boson, $m_W$, are discussed. The programme to measure electroweak processes with di-boson and tri-boson final states is outlined. All presented measurements are compatible with Standard Model descriptions and allow it to be further constrained. In addition, they allow probing of new physics that would manifest itself through extra gauge couplings or Standard Model gauge couplings deviating from their predicted values.

  5. Gender-related model for mobile-based learning

    Science.gov (United States)

    Simanjuntak, R. R.; Dewi, U. P.; Rifai, I.

    2018-03-01

    The study investigates the influence of gender on mobile-based learning. This case study of university students in Jakarta involved 235 students (128 male, 97 female). Results of this qualitative study showed a 96% preference for mobile-based learning, with 94% indicating a need for collaboration and 92% a need for authenticity. Hofstede's cultural dimensions were used to identify the gender aspects of MALL. A preference for Masculinity (65%) rather than Femininity (35%) was shown, even among the female respondents (70% of the population). Professions and professionalism received the strongest preference (70%), while Individuality and Collectivism had equal preference among students. Both female and male respondents requested Indulgence (84%) for mobile-based learning, with more male respondents opting for Indulgence. The study provides a model for gender-sensitive mobile-based learning. Implications of implementing mobile-based learning as an ideal alternative for well-accommodated education are also discussed.

  6. Increased drought impacts on temperate rainforests from southern South America: results of a process-based, dynamic forest model.

    Directory of Open Access Journals (Sweden)

    Alvaro G Gutiérrez

    Full Text Available Increased droughts due to regional shifts in temperature and rainfall regimes are likely to affect forests in temperate regions in the coming decades. To assess their consequences for forest dynamics, we need predictive tools that couple hydrologic processes, soil moisture dynamics and plant productivity. Here, we developed and tested a dynamic forest model that predicts the hydrologic balance of North Patagonian rainforests on Chiloé Island, in temperate South America (42°S). The model incorporates the dynamic linkages between changing rainfall regimes, soil moisture and individual tree growth. Declining rainfall, as predicted for the study area, should mean up to 50% less summer rain by year 2100. We analysed forest responses to increased drought using the model proposed focusing on changes in evapotranspiration, soil moisture and forest structure (above-ground biomass and basal area). We compared the responses of a young stand (YS, ca. 60 years-old) and an old-growth forest (OG, >500 years-old) in the same area. Based on detailed field measurements of water fluxes, the model provides a reliable account of the hydrologic balance of these evergreen, broad-leaved rainforests. We found higher evapotranspiration in OG than YS under current climate. Increasing drought predicted for this century can reduce evapotranspiration by 15% in the OG compared to current values. Drier climate will alter forest structure, leading to decreases in above ground biomass by 27% of the current value in OG. The model presented here can be used to assess the potential impacts of climate change on forest hydrology and other threats of global change on future forests such as fragmentation, introduction of exotic tree species, and changes in fire regimes. Our study expands the applicability of forest dynamics models in remote and hitherto overlooked regions of the world, such as southern temperate rainforests.

  7. Increased drought impacts on temperate rainforests from southern South America: results of a process-based, dynamic forest model.

    Science.gov (United States)

    Gutiérrez, Alvaro G; Armesto, Juan J; Díaz, M Francisca; Huth, Andreas

    2014-01-01

    Increased droughts due to regional shifts in temperature and rainfall regimes are likely to affect forests in temperate regions in the coming decades. To assess their consequences for forest dynamics, we need predictive tools that couple hydrologic processes, soil moisture dynamics and plant productivity. Here, we developed and tested a dynamic forest model that predicts the hydrologic balance of North Patagonian rainforests on Chiloé Island, in temperate South America (42°S). The model incorporates the dynamic linkages between changing rainfall regimes, soil moisture and individual tree growth. Declining rainfall, as predicted for the study area, should mean up to 50% less summer rain by year 2100. We analysed forest responses to increased drought using the model proposed focusing on changes in evapotranspiration, soil moisture and forest structure (above-ground biomass and basal area). We compared the responses of a young stand (YS, ca. 60 years-old) and an old-growth forest (OG, >500 years-old) in the same area. Based on detailed field measurements of water fluxes, the model provides a reliable account of the hydrologic balance of these evergreen, broad-leaved rainforests. We found higher evapotranspiration in OG than YS under current climate. Increasing drought predicted for this century can reduce evapotranspiration by 15% in the OG compared to current values. Drier climate will alter forest structure, leading to decreases in above ground biomass by 27% of the current value in OG. The model presented here can be used to assess the potential impacts of climate change on forest hydrology and other threats of global change on future forests such as fragmentation, introduction of exotic tree species, and changes in fire regimes. Our study expands the applicability of forest dynamics models in remote and hitherto overlooked regions of the world, such as southern temperate rainforests.

  8. Sensitivity analysis and calibration of a dynamic physically based slope stability model

    Science.gov (United States)

    Zieher, Thomas; Rutzinger, Martin; Schneider-Muntau, Barbara; Perzl, Frank; Leidinger, David; Formayer, Herbert; Geitner, Clemens

    2017-06-01

    Physically based modelling of slope stability on a catchment scale is still a challenging task. When applying a physically based model on such a scale (1 : 10 000 to 1 : 50 000), parameters with a high impact on the model result should be calibrated to account for (i) the spatial variability of parameter values, (ii) shortcomings of the selected model, (iii) uncertainties of laboratory tests and field measurements or (iv) parameters that cannot be derived experimentally or measured in the field (e.g. calibration constants). While systematic parameter calibration is a common task in hydrological modelling, this is rarely done using physically based slope stability models. In the present study a dynamic, physically based, coupled hydrological-geomechanical slope stability model is calibrated based on a limited number of laboratory tests and a detailed multitemporal shallow landslide inventory covering two landslide-triggering rainfall events in the Laternser valley, Vorarlberg (Austria). Sensitive parameters are identified based on a local one-at-a-time sensitivity analysis. These parameters (hydraulic conductivity, specific storage, angle of internal friction for effective stress, cohesion for effective stress) are systematically sampled and calibrated for a landslide-triggering rainfall event in August 2005. The identified model ensemble, including 25 behavioural model runs with the highest portion of correctly predicted landslides and non-landslides, is then validated with another landslide-triggering rainfall event in May 1999. The identified model ensemble correctly predicts the location and the supposed triggering timing of 73.0 % of the observed landslides triggered in August 2005 and 91.5 % of the observed landslides triggered in May 1999. Results of the model ensemble driven with raised precipitation input reveal a slight increase in areas potentially affected by slope failure. At the same time, the peak run-off increases more markedly, suggesting that
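
    A short sketch of a local one-at-a-time sensitivity analysis: each parameter is perturbed individually around its base value and the relative change in a placeholder factor-of-safety function is recorded. The parameter set mirrors the one listed above, but the model function and values are illustrative.

```python
# Hedged sketch of one-at-a-time sensitivity analysis around a base parameter set.
# `factor_of_safety` stands in for the coupled hydrological-geomechanical model.
import numpy as np

base = {"hydraulic_conductivity": 1e-5, "specific_storage": 1e-4,
        "friction_angle_deg": 30.0, "cohesion_kpa": 5.0}

def factor_of_safety(p):
    """Illustrative placeholder for the slope stability model."""
    return (np.tan(np.radians(p["friction_angle_deg"])) + 0.02 * p["cohesion_kpa"]) \
           / (1.0 + 1e4 * p["hydraulic_conductivity"] + 50.0 * p["specific_storage"])

fs0 = factor_of_safety(base)
for name in base:
    perturbed = dict(base, **{name: base[name] * 1.10})   # +10 % perturbation
    sensitivity = (factor_of_safety(perturbed) - fs0) / (0.10 * fs0)
    print(f"{name}: relative sensitivity {sensitivity:+.3f}")
```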

  9. A novel unified dislocation density-based model for hot deformation behavior of a nickel-based superalloy under dynamic recrystallization conditions

    International Nuclear Information System (INIS)

    Lin, Y.C.; Wen, Dong-Xu; Chen, Xiao-Min; Chen, Ming-Song

    2016-01-01

    In this study, a novel unified dislocation density-based model is presented for characterizing hot deformation behaviors in a nickel-based superalloy under dynamic recrystallization (DRX) conditions. In the Kocks-Mecking model, a new softening item is proposed to represent the impacts of DRX behavior on dislocation density evolution. The grain size evolution and DRX kinetics are incorporated into the developed model. Material parameters of the developed model are calibrated by a derivative-free method of MATLAB software. Comparisons between experimental and predicted results confirm that the developed unified dislocation density-based model can nicely reproduce hot deformation behavior, DRX kinetics, and grain size evolution over a wide range of initial grain size, strain rate, and deformation temperature. Moreover, the developed unified dislocation density-based model is well employed to analyze the time-variant forming processes of the studied superalloy. (orig.)
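
    A hedged sketch of a Kocks-Mecking type dislocation-density evolution with an additional softening term that switches on once dynamic recrystallization starts, as described above; all material constants are illustrative, not the calibrated values of the paper.

```python
# Hedged sketch: Kocks-Mecking storage/recovery terms plus a DRX softening term.
# Constants are illustrative placeholders.
import numpy as np

alpha, mu, b = 0.5, 4.5e4, 2.5e-10      # Taylor constant, shear modulus (MPa), Burgers vector (m)
k1, k2, k3 = 5e8, 8.0, 25.0             # storage, recovery, DRX softening coefficients
eps_c, n_avrami = 0.10, 2.0             # critical strain and Avrami exponent

d_eps = 1e-4
strain = np.arange(0.0, 0.8, d_eps)
rho, stress = 1e12, []
for eps in strain:
    x_drx = 1.0 - np.exp(-5.0 * max(eps - eps_c, 0.0) ** n_avrami)   # DRX fraction
    drho = k1 * np.sqrt(rho) - k2 * rho - k3 * x_drx * rho
    rho = max(rho + drho * d_eps, 1e10)
    stress.append(alpha * mu * b * np.sqrt(rho))                     # Taylor relation

# The stress list rises to a peak and then softens, as in DRX flow curves.
print("peak stress (MPa):", max(stress))
```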

  10. A novel unified dislocation density-based model for hot deformation behavior of a nickel-based superalloy under dynamic recrystallization conditions

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Y.C. [Central South University, School of Mechanical and Electrical Engineering, Changsha (China); Light Alloy Research Institute of Central South University, Changsha (China); State Key Laboratory of High Performance Complex Manufacturing, Changsha (China); Wen, Dong-Xu; Chen, Xiao-Min [Central South University, School of Mechanical and Electrical Engineering, Changsha (China); Chen, Ming-Song [Central South University, School of Mechanical and Electrical Engineering, Changsha (China); State Key Laboratory of High Performance Complex Manufacturing, Changsha (China)

    2016-09-15

    In this study, a novel unified dislocation density-based model is presented for characterizing hot deformation behaviors in a nickel-based superalloy under dynamic recrystallization (DRX) conditions. In the Kocks-Mecking model, a new softening item is proposed to represent the impacts of DRX behavior on dislocation density evolution. The grain size evolution and DRX kinetics are incorporated into the developed model. Material parameters of the developed model are calibrated by a derivative-free method of MATLAB software. Comparisons between experimental and predicted results confirm that the developed unified dislocation density-based model can nicely reproduce hot deformation behavior, DRX kinetics, and grain size evolution over a wide range of initial grain size, strain rate, and deformation temperature. Moreover, the developed unified dislocation density-based model is well employed to analyze the time-variant forming processes of the studied superalloy. (orig.)

  11. Modeling and simulation of tumor-influenced high resolution real-time physics-based breast models for model-guided robotic interventions

    Science.gov (United States)

    Neylon, John; Hasse, Katelyn; Sheng, Ke; Santhanam, Anand P.

    2016-03-01

    Breast radiation therapy is typically delivered to the patient in either supine or prone position. Each of these positioning systems has its limitations in terms of tumor localization, dose to the surrounding normal structures, and patient comfort. We envision developing a pneumatically controlled breast immobilization device that will enable the benefits of both supine and prone positioning. In this paper, we present a physics-based breast deformable model that aids in both the design of the breast immobilization device as well as a control module for the device during every day positioning. The model geometry is generated from a subject's CT scan acquired during the treatment planning stage. A GPU based deformable model is then generated for the breast. A mass-spring-damper approach is then employed for the deformable model, with the spring modeled to represent a hyperelastic tissue behavior. Each voxel of the CT scan is then associated with a mass element, which gives the model its high resolution nature. The subject specific elasticity is then estimated from a CT scan in prone position. Our results show that the model can deform at >60 deformations per second, which satisfies the real-time requirement for robotic positioning. The model interacts with a computer designed immobilization device to position the breast and tumor anatomy in a reproducible location. The design of the immobilization device was also systematically varied based on the breast geometry, tumor location, elasticity distribution and the reproducibility of the desired tumor location.
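
    The mass-spring-damper idea can be sketched with a one-dimensional chain of mass elements integrated explicitly; the full model attaches one mass element per CT voxel and runs on the GPU, whereas the constants and geometry below are purely illustrative.

```python
# Hedged sketch of a 1-D mass-spring-damper chain integrated with symplectic Euler.
# Material constants and geometry are illustrative, not patient-specific values.
import numpy as np

n, mass, k, c, dt = 20, 0.01, 50.0, 0.05, 1e-3   # nodes, kg, N/m, N*s/m, s
rest_len = 0.005                                  # m
pos = np.arange(n) * rest_len
vel = np.zeros(n)

for _ in range(2000):
    force = np.zeros(n)
    for i in range(n - 1):                        # spring-damper between neighbours
        stretch = (pos[i + 1] - pos[i]) - rest_len
        rel_vel = vel[i + 1] - vel[i]
        f = k * stretch + c * rel_vel
        force[i] += f
        force[i + 1] -= f
    force[-1] += 0.2                              # external actuation at the free end (N)
    vel[1:] += dt * force[1:] / mass              # node 0 is fixed
    pos[1:] += dt * vel[1:]

print("tip displacement (m):", pos[-1] - (n - 1) * rest_len)
```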

  12. Ecosystem health pattern analysis of urban clusters based on emergy synthesis: Results and implication for management

    International Nuclear Information System (INIS)

    Su, Meirong; Fath, Brian D.; Yang, Zhifeng; Chen, Bin; Liu, Gengyuan

    2013-01-01

    The evaluation of ecosystem health in urban clusters will help establish effective management that promotes sustainable regional development. To standardize the application of emergy synthesis and set pair analysis (EM–SPA) in ecosystem health assessment, a procedure for using EM–SPA models was established in this paper by combining the ability of emergy synthesis to reflect health status from a biophysical perspective with the ability of set pair analysis to describe extensive relationships among different variables. Based on the EM–SPA model, the relative health levels of selected urban clusters and their related ecosystem health patterns were characterized. The health states of three typical Chinese urban clusters – Jing-Jin-Tang, Yangtze River Delta, and Pearl River Delta – were investigated using the model. The results showed that the health status of the Pearl River Delta was relatively good; the health for the Yangtze River Delta was poor. As for the specific health characteristics, the Pearl River Delta and Yangtze River Delta urban clusters were relatively strong in Vigor, Resilience, and Urban ecosystem service function maintenance, while the Jing-Jin-Tang was relatively strong in organizational structure and environmental impact. Guidelines for managing these different urban clusters were put forward based on the analysis of the results of this study. - Highlights: • The use of integrated emergy synthesis and set pair analysis model was standardized. • The integrated model was applied on the scale of an urban cluster. • Health patterns of different urban clusters were compared. • Policy suggestions were provided based on the health pattern analysis

  13. A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data

    Science.gov (United States)

    Simon, Donald L.; Rinehart, Aidan Walker

    2015-01-01

    This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steady-state information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.
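
    A minimal sketch of residual-based anomaly detection: sensed values are compared with the output of a placeholder piecewise-linear engine model and flagged when the residual exceeds a threshold derived from nominal data. Signals, trim values and the injected fault are synthetic.

```python
# Hedged sketch of residual monitoring against a placeholder engine model.
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(0, 60, 0.1)
commanded = np.where(t < 30, 0.6, 0.8)                 # throttle setting

def model_predicted_egt(cmd):
    """Stand-in for the piecewise-linear engine model (trim point + gain)."""
    return 700.0 + 400.0 * cmd

sensed = model_predicted_egt(commanded) + rng.normal(0, 2.0, t.size)
sensed[t > 45] += 25.0                                 # injected fault after 45 s

residual = sensed - model_predicted_egt(commanded)
threshold = 4.0 * residual[t < 30].std()               # set from nominal segment
anomaly = np.abs(residual) > threshold
print("first anomaly at t =", t[anomaly][0] if anomaly.any() else None, "s")
```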

  14. Hybrid attacks on model-based social recommender systems

    Science.gov (United States)

    Yu, Junliang; Gao, Min; Rong, Wenge; Li, Wentao; Xiong, Qingyu; Wen, Junhao

    2017-10-01

    With the growing popularity of online social platforms, social network based approaches to recommendation have emerged. However, because of the open nature of rating systems and social networks, social recommender systems are susceptible to malicious attacks. In this paper, we present a novel attack, which inherits characteristics of the rating attack and the relation attack, and term it the hybrid attack. Further, we explore the impact of the hybrid attack on model-based social recommender systems in multiple aspects. The experimental results show that the hybrid attack is more destructive than the rating attack in most cases. In addition, users and items with fewer ratings will be influenced more when attacked. Last but not least, the findings suggest that spammers do not depend on the feedback links from normal users to become more powerful; the unilateral links can make the hybrid attack effective enough. Since unilateral links are much cheaper, the hybrid attack will be a great threat to model-based social recommender systems.

  15. Dynamics-Based Stranded-Crowd Model for Evacuation in Building Bottlenecks

    Directory of Open Access Journals (Sweden)

    Lidi Huang

    2013-01-01

    Full Text Available In high-density public buildings, evacuation is difficult. In this paper, we therefore propose a novel quantitative evacuation model to ensure people's safety and reduce the risk of crowding. We analyze the mechanism of arch-like clogging phenomena during evacuation and the influencing factors in emergency situations at bottleneck passages; then we design a model based on crowd dynamics and apply the model to a stadium example. The example is used to compare evacuation results of crowd density with different egress widths in stranded zones. The results show that the proposed model can identify safe and unsafe egress widths in performance design and can help evacuation routes to be selected and optimized.

  16. Model Validation in Ontology Based Transformations

    Directory of Open Access Journals (Sweden)

    Jesús M. Almendros-Jiménez

    2012-10-01

    Full Text Available Model Driven Engineering (MDE) is an emerging approach of software engineering. MDE emphasizes the construction of models from which the implementation should be derived by applying model transformations. The Ontology Definition Meta-model (ODM) has been proposed as a profile for UML models of the Web Ontology Language (OWL). In this context, transformations of UML models can be mapped into ODM/OWL transformations. On the other hand, model validation is a crucial task in model transformation. Meta-modeling gives a syntactic structure to source and target models. However, semantic requirements have to be imposed on source and target models. A given transformation will be sound when source and target models fulfill the syntactic and semantic requirements. In this paper, we present an approach for model validation in ODM based transformations. Adopting a logic programming based transformational approach, we show how it is possible to transform and validate models. Properties to be validated range from structural and semantic requirements of models (pre- and post-conditions) to properties of the transformation (invariants). The approach has been applied to a well-known example of model transformation: the Entity-Relationship (ER) to Relational Model (RM) transformation.

  17. Urban traffic noise assessment by combining measurement and model results

    NARCIS (Netherlands)

    Eerden, F.J.M. van der; Graafland, F.; Wessels, P.W.; Basten, T.G.H.

    2013-01-01

    A model based monitoring system is applied on a local scale in an urban area to obtain a better understanding of the traffic noise situation. The system consists of a scalable sensor network and an engineering model. A better understanding is needed to take appropriate and cost efficient measures,

  18. Survival modeling for the estimation of transition probabilities in model-based economic evaluations in the absence of individual patient data: a tutorial.

    Science.gov (United States)

    Diaby, Vakaramoko; Adunlin, Georges; Montero, Alberto J

    2014-02-01

    Survival modeling techniques are increasingly being used as part of decision modeling for health economic evaluations. As many models are available, it is imperative for interested readers to know about the steps in selecting and using the most suitable ones. The objective of this paper is to propose a tutorial for the application of appropriate survival modeling techniques to estimate transition probabilities, for use in model-based economic evaluations, in the absence of individual patient data (IPD). An illustration of the use of the tutorial is provided based on the final progression-free survival (PFS) analysis of the BOLERO-2 trial in metastatic breast cancer (mBC). An algorithm was adopted from Guyot and colleagues, and was then run in the statistical package R to reconstruct IPD, based on the final PFS analysis of the BOLERO-2 trial. It should be emphasized that the reconstructed IPD represent an approximation of the original data. Afterwards, we fitted parametric models to the reconstructed IPD in the statistical package Stata. Both statistical and graphical tests were conducted to verify the relative and absolute validity of the findings. Finally, the equations for transition probabilities were derived using the general equation for transition probabilities used in model-based economic evaluations, and the parameters were estimated from fitted distributions. The results of the application of the tutorial suggest that the log-logistic model best fits the reconstructed data from the latest published Kaplan-Meier (KM) curves of the BOLERO-2 trial. Results from the regression analyses were confirmed graphically. An equation for transition probabilities was obtained for each arm of the BOLERO-2 trial. In this paper, a tutorial was proposed and used to estimate the transition probabilities for model-based economic evaluation, based on the results of the final PFS analysis of the BOLERO-2 trial in mBC. The results of our study can serve as a basis for any model
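
    The key step of converting a fitted parametric survival curve into time-dependent transition probabilities can be sketched directly from the relation tp(t) = 1 - S(t)/S(t - u) for cycle length u; the log-logistic shape and scale values below are assumptions, not the BOLERO-2 estimates.

```python
# Hedged sketch: per-cycle transition probabilities from a log-logistic survivor
# function, using tp(t) = 1 - S(t)/S(t - u).  Parameter values are illustrative.
import numpy as np

shape, scale = 1.4, 9.0        # assumed log-logistic parameters (months)
cycle = 1.0                    # model cycle length in months

def survival(t):
    """Log-logistic survivor function S(t) = 1 / (1 + (t/scale)**shape)."""
    return 1.0 / (1.0 + (t / scale) ** shape)

times = np.arange(cycle, 36.0 + cycle, cycle)
transition_prob = 1.0 - survival(times) / survival(times - cycle)
print(np.round(transition_prob[:6], 4))   # time-dependent probabilities of progressing
```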

  19. Calculus domains modelled using an original bool algebra based on polygons

    Science.gov (United States)

    Oanta, E.; Panait, C.; Raicu, A.; Barhalescu, M.; Axinte, T.

    2016-08-01

    Analytical and numerical computer-based models require analytical definitions of the calculus domains. The paper presents a method to model a calculus domain based on a Boolean algebra which uses solid and hollow polygons. The general calculus relations for the geometrical characteristics widely used in mechanical engineering are tested on several shapes of the calculus domain in order to draw conclusions regarding the most effective methods to discretize the domain. The paper also examines the results of several commercial CAD software applications that are able to compute the geometrical characteristics, and interesting conclusions are drawn. The tests also targeted the accuracy of the results versus the number of nodes on the curved boundary of the cross section. The study required the development of an original software package consisting of more than 1700 lines of computer code. In comparison with other calculus methods, discretization using convex polygons is a simpler approach. Moreover, this method does not lead to very large numbers as the spline approximation did, which in that case required special software packages offering multiple, arbitrary precision. The knowledge resulting from this study may be used to develop complex computer-based models in engineering.
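
    The polygon-based approach to geometrical characteristics can be illustrated with the shoelace formula: each solid polygon contributes its area and centroid with positive weight and each hollow polygon with negative weight. The rectangular section below is an invented example.

```python
# Hedged sketch: area and centroid of a cross section built from a solid polygon
# minus a hollow polygon, via the shoelace formula.  Geometry is illustrative.
def polygon_area_centroid(vertices):
    """Signed area and centroid of a simple polygon given as (x, y) vertices."""
    a = cx = cy = 0.0
    for (x0, y0), (x1, y1) in zip(vertices, vertices[1:] + vertices[:1]):
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return a, (cx / (6 * a), cy / (6 * a))

outer = [(0, 0), (100, 0), (100, 60), (0, 60)]        # solid rectangle, mm
hole  = [(30, 20), (70, 20), (70, 40), (30, 40)]      # hollow rectangle, mm

(a1, c1), (a2, c2) = polygon_area_centroid(outer), polygon_area_centroid(hole)
area = a1 - a2
centroid = ((a1 * c1[0] - a2 * c2[0]) / area, (a1 * c1[1] - a2 * c2[1]) / area)
print(area, centroid)   # 5200.0 mm^2, (50.0, 30.0)
```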

  20. Development of a CANDU Moderator Analysis Model; Based on Coupled Solver

    International Nuclear Information System (INIS)

    Yoon, Churl; Park, Joo Hwan

    2006-01-01

    A CFD model for predicting the CANDU-6 moderator temperature, based on CFX-4, has been under development for several years at KAERI. This analytic model (CFX4-CAMO) has strengths in the modeling of hydraulic resistance in the core region and in the treatment of the heat source term in the energy equations. However, convergence difficulties and slow computing speed prove to be the limitations of this model, because the CFX-4 code adopts a segregated solver to solve the governing equations, which are strongly coupled. Compared to CFX-4 with its segregated solver, CFX-10 adopts a highly efficient and robust coupled solver. Before December 2005, when CFX-10 was distributed, the previous version (the CFX-5 series) also adopted a coupled solver but did not have the capability to apply porous media approaches correctly. In this study, the developed moderator analysis model based on CFX-4 (CFX4-CAMO) is transformed into a new moderator analysis model based on CFX-10. The new model is examined and the results are compared to the former

  1. A comparative study of independent particle model based ...

    Indian Academy of Sciences (India)

    We find that among these three independent-particle-model-based methods, the ss-VSCF method provides the most accurate thermal averages, followed by t-SCF, with the v-VSCF being the least accurate. However, the ss-VSCF is found to be computationally very expensive for large molecules. The t-SCF gives ...

  2. Circulation-based Modeling of Gravity Currents

    Science.gov (United States)

    Meiburg, E. H.; Borden, Z.

    2013-05-01

    Atmospheric and oceanic flows driven by predominantly horizontal density differences, such as sea breezes, thunderstorm outflows, powder snow avalanches, and turbidity currents, are frequently modeled as gravity currents. Efforts to develop simplified models of such currents date back to von Karman (1940), who considered a two-dimensional gravity current in an inviscid, irrotational and infinitely deep ambient. Benjamin (1968) presented an alternative model, focusing on the inviscid, irrotational flow past a gravity current in a finite-depth channel. More recently, Shin et al. (2004) proposed a model for gravity currents generated by partial-depth lock releases, considering a control volume that encompasses both fronts. All of the above models, in addition to the conservation of mass and horizontal momentum, invoke Bernoulli's law along some specific streamline in the flow field, in order to obtain a closed system of equations that can be solved for the front velocity as function of the current height. More recent computational investigations based on the Navier-Stokes equations, on the other hand, reproduce the dynamics of gravity currents based on the conservation of mass and momentum alone. We propose that it should therefore be possible to formulate a fundamental gravity current model without invoking Bernoulli's law. The talk will show that the front velocity of gravity currents can indeed be predicted as a function of their height from mass and momentum considerations alone, by considering the evolution of interfacial vorticity. This approach does not require information on the pressure field and therefore avoids the need for an energy closure argument such as those invoked by the earlier models. Predictions by the new theory are shown to be in close agreement with direct numerical simulation results. References Von Karman, T. 1940 The engineer grapples with nonlinear problems, Bull. Am. Math Soc. 46, 615-683. Benjamin, T.B. 1968 Gravity currents and related

  3. Nonlinear Prediction Model for Hydrologic Time Series Based on Wavelet Decomposition

    Science.gov (United States)

    Kwon, H.; Khalil, A.; Brown, C.; Lall, U.; Ahn, H.; Moon, Y.

    2005-12-01

    Traditionally, forecasting and characterization of hydrologic systems are performed using many techniques. Stochastic linear methods such as AR and ARIMA and nonlinear ones such as tools based on statistical learning theory have been extensively used. The common difficulty for all methods is determining the sufficient and necessary information and predictors for a successful prediction. Relationships between hydrologic variables are often highly nonlinear and interrelated across temporal scales. A new hybrid approach is proposed for the simulation of hydrologic time series combining the wavelet transform and a nonlinear model. The present model combines the merits of the wavelet transform and nonlinear time series models. The wavelet transform is adopted to decompose a hydrologic nonlinear process into a set of mono-component signals, which are simulated by the nonlinear model. The hybrid methodology is formulated in a manner to improve the accuracy of long-term forecasting. The proposed hybrid model yields much better results in terms of capturing and reproducing the time-frequency properties of the system at hand. Prediction results are promising when compared to traditional univariate time series models. An application demonstrating the plausibility of the proposed methodology is provided, and the results show that the wavelet-based time series model can simulate and forecast hydrologic variables reasonably well. This will ultimately serve the purpose of integrated water resources planning and management.
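
    A hedged sketch of the hybrid idea: the series is decomposed with a discrete wavelet transform into mono-component signals, each of which is then modelled separately (here a trivial lag-1 regression stands in for the nonlinear model). The wavelet, decomposition level and data are illustrative.

```python
# Hedged sketch: wavelet decomposition of a hydrologic series into components,
# each forecast separately and recombined.  Data and settings are illustrative.
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.arange(512)
flow = 10 + 3 * np.sin(2 * np.pi * t / 64) + rng.normal(0, 0.5, t.size)  # synthetic series

coeffs = pywt.wavedec(flow, "db4", level=3)
components = []
for i in range(len(coeffs)):
    kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    components.append(pywt.waverec(kept, "db4")[: flow.size])

# One-step-ahead forecast: model each component with a lag-1 fit and sum them.
forecast = 0.0
for comp in components:
    a, b = np.polyfit(comp[:-1], comp[1:], 1)   # comp[t+1] ~ a*comp[t] + b
    forecast += a * comp[-1] + b
print("one-step forecast:", forecast)
```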

  4. Problem solving based learning model with multiple representations to improve student's mental modelling ability on physics

    Science.gov (United States)

    Haili, Hasnawati; Maknun, Johar; Siahaan, Parsaoran

    2017-08-01

    Physics is a subject that relates to students' daily experience. Therefore, before studying formally in class, students already have a visualization of and prior knowledge about natural phenomena and can broaden it themselves. The learning process in class should aim to detect, process, construct, and use students' mental models, so that students' mental models agree with and are built on the right concepts. A previous study held at MAN 1 Muna showed that in the learning process the teacher did not pay attention to students' mental models. As a consequence, the learning process did not try to build students' mental modelling ability (MMA). The purpose of this study is to describe the improvement of students' MMA as an effect of a problem-solving-based learning model with a multiple representations approach. This study used a pre-experimental design with one group pre-post. It was conducted in class XI IPA of MAN 1 Muna in 2016/2017. Data collection used a problem-solving test on the kinetic theory of gases and interviews to assess students' MMA. The result of this study is a classification of students' MMA into three categories: High Mental Modelling Ability (H-MMA) for scores above 7, Medium Mental Modelling Ability (M-MMA) for scores between 3 and 7, and Low Mental Modelling Ability (L-MMA) for 0 ≤ x ≤ 3. The results show that a problem-solving-based learning model with a multiple representations approach can be an alternative for improving students' MMA.

  5. CEAI: CCM based Email Authorship Identification Model

    DEFF Research Database (Denmark)

    Nizamani, Sarwat; Memon, Nasrullah

    2013-01-01

    In this paper we present a model for email authorship identification (EAI) by employing a Cluster-based Classification (CCM) technique. Traditionally, stylometric features have been successfully employed in various authorship analysis tasks; we extend the traditional feature-set to include some...... more interesting and effective features for email authorship identification (e.g. the last punctuation mark used in an email, the tendency of an author to use capitalization at the start of an email, or the punctuation after a greeting or farewell). We also included Info Gain feature selection based...... reveal that the proposed CCM-based email authorship identification model, along with the proposed feature set, outperforms the state-of-the-art support vector machine (SVM)-based models, as well as the models proposed by Iqbal et al. [1, 2]. The proposed model attains an accuracy rate of 94% for 10...

  6. Agent-based modeling as a tool for program design and evaluation.

    Science.gov (United States)

    Lawlor, Jennifer A; McGirr, Sara

    2017-12-01

    Recently, systems thinking and systems science approaches have gained popularity in the field of evaluation; however, there has been relatively little exploration of how evaluators could use quantitative tools to assist in the implementation of systems approaches therein. The purpose of this paper is to explore potential uses of one such quantitative tool, agent-based modeling, in evaluation practice. To this end, we define agent-based modeling and offer potential uses for it in typical evaluation activities, including: engaging stakeholders, selecting an intervention, modeling program theory, setting performance targets, and interpreting evaluation results. We provide demonstrative examples from published agent-based modeling efforts both inside and outside the field of evaluation for each of the evaluative activities discussed. We further describe potential pitfalls of this tool and offer cautions for evaluators who may choose to implement it in their practice. Finally, the article concludes with a discussion of the future of agent-based modeling in evaluation practice and a call for more formal exploration of this tool as well as other approaches to simulation modeling in the field. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. A general model for membrane-based separation processes

    DEFF Research Database (Denmark)

    Soni, Vipasha; Abildskov, Jens; Jonsson, Gunnar Eigil

    2009-01-01

    behaviour will play an important role. In this paper, modelling of membrane-based processes for separation of gas and liquid mixtures are considered. Two general models, one for membrane-based liquid separation processes (with phase change) and another for membrane-based gas separation are presented....... The separation processes covered are: membrane-based gas separation processes, pervaporation and various types of membrane distillation processes. The specific model for each type of membrane-based process is generated from the two general models by applying the specific system descriptions and the corresponding...

  8. Spatial organization of mesenchymal stem cells in vitro--results from a new individual cell-based model with podia.

    Directory of Open Access Journals (Sweden)

    Martin Hoffmann

    Full Text Available Therapeutic application of mesenchymal stem cells (MSC) requires their extensive in vitro expansion. MSC in culture typically grow to confluence within a few weeks. They show spindle-shaped fibroblastoid morphology and align to each other in characteristic spatial patterns at high cell density. We present an individual cell-based model (IBM) that is able to quantitatively describe the spatio-temporal organization of MSC in culture. Our model substantially improves on previous models by explicitly representing cell podia and their dynamics. It employs podia-generated forces for cell movement and adjusts cell behavior in response to cell density. At the same time, it is simple enough to simulate thousands of cells with reasonable computational effort. Experimental sheep MSC cultures were monitored under standard conditions. Automated image analysis was used to determine the location and orientation of individual cells. Our simulations quantitatively reproduced the observed growth dynamics and cell-cell alignment assuming cell density-dependent proliferation, migration, and morphology. In addition to cell growth on plain substrates our model captured cell alignment on micro-structured surfaces. We propose a specific surface micro-structure that according to our simulations can substantially enlarge cell culture harvest. The 'tool box' of cell migratory behavior newly introduced in this study significantly enhances the bandwidth of IBM. Our approach is capable of accommodating individual cell behavior and collective cell dynamics of a variety of cell types and tissues in computational systems biology.

  9. A continuum mechanics-based musculo-mechanical model for esophageal transport

    Science.gov (United States)

    Kou, Wenjun; Griffith, Boyce E.; Pandolfino, John E.; Kahrilas, Peter J.; Patankar, Neelesh A.

    2017-11-01

    In this work, we extend our previous esophageal transport model, which used an immersed boundary (IB) method with a discrete fiber-based structural model, to one using a continuum mechanics-based model that is approximated with finite elements (IB-FE). To deal with the leakage of flow when the Lagrangian mesh becomes coarser than the fluid mesh, we employ adaptive interaction quadrature points for the Lagrangian-Eulerian interaction equations, based on previous work (Griffith and Luo [1]). In particular, we introduce a new anisotropic adaptive interaction quadrature rule. The new rule permits us to vary the interaction quadrature points not only at each time-step and element but also at different orientations per element. This helps to avoid the leakage issue without sacrificing computational efficiency and accuracy in dealing with the interaction equations. For the material model, we extend our previous fiber-based model to a continuum-based model. We present formulations for general fiber-reinforced material models in the IB-FE framework. The new material model can handle non-linear elasticity and fiber-matrix interactions, and thus permits us to consider more realistic material behavior of biological tissues. To validate our method, we first study a case in which a three-dimensional short tube is dilated. Results on the pressure-displacement relationship and the stress distribution match very well with those obtained from the implicit FE method. We remark that in our IB-FE case the three-dimensional tube undergoes a very large deformation and the Lagrangian mesh size becomes about six times the Eulerian mesh size in the circumferential orientation. To validate the performance of the method in handling fiber-matrix material models, we perform a second study on dilating a long fiber-reinforced tube. Errors are small when we compare numerical solutions with analytical solutions. The technique is then applied to the problem of esophageal transport. We use two

  10. Feedback structure based entropy approach for multiple-model estimation

    Institute of Scientific and Technical Information of China (English)

    Shen-tu Han; Xue Anke; Guo Yunfei

    2013-01-01

    The variable-structure multiple-model (VSMM) approach, one of the multiple-model (MM) methods, is a popular and effective approach for handling problems with mode uncertainties. Model sequence set adaptation (MSA) is the key to designing a better VSMM. However, MSA methods in the literature leave considerable room for improvement, both theoretically and practically. To this end, we propose a feedback-structure-based entropy approach that can find the model sequence sets with the smallest size under certain conditions. The filtered data are fed back in real time and can be used by the minimum entropy (ME) based VSMM algorithms, i.e., MEVSMM. Firstly, the full Markov chains are used to achieve optimal solutions. Secondly, the myopic method together with a particle filter (PF) and the challenge match algorithm are also used to achieve sub-optimal solutions, a trade-off between practicability and optimality. The numerical results show that the proposed algorithm provides not only refined model sets but also a good robustness margin and very high accuracy.

  11. Fiducial-based registration with a touchable region model.

    Science.gov (United States)

    Kim, Sungmin; Kazanzides, Peter

    2017-02-01

    Image-guided surgery requires registration between an image coordinate system and an intraoperative coordinate system that is typically referenced to a tracking device. In fiducial-based registration methods, this is achieved by localizing points (fiducials) in each coordinate system. Often, both localizations are performed manually, first by picking a fiducial point in the image and then by using a hand-held tracked pointer to physically touch the corresponding fiducial on the patient. These manual procedures introduce localization error that is user-dependent and can significantly decrease registration accuracy. Thus, there is a need for a registration method that is tolerant of imprecise fiducial localization in the preoperative and intraoperative phases. We propose the iterative closest touchable point (ICTP) registration framework, which uses model-based localization and a touchable region model. This method consists of three stages: (1) fiducial marker localization in image space, using a fiducial marker model, (2) initial registration with paired-point registration, and (3) fine registration based on the iterative closest point method. We perform phantom experiments with a fiducial marker design that is commonly used in neurosurgery. The results demonstrate that ICTP can provide accuracy improvements compared to the standard paired-point registration method that is widely used for surgical navigation and surgical robot systems, especially in cases where the surgeon introduces large localization errors. They also show that the proposed method can reduce the effect of the surgeon's localization performance on the accuracy of registration, thereby producing more consistent and less user-dependent registration outcomes.
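
    A minimal sketch of the two numerical building blocks named in the abstract, paired-point rigid registration (via SVD) followed by an iterative-closest-point refinement against a dense model point cloud, is shown below. The touchable-region modelling and fiducial-marker localization of ICTP are not reproduced; the synthetic data and the four "fiducials" are assumptions.

```python
# Paired-point rigid registration (Kabsch/SVD) plus an ICP refinement against a
# model point cloud. This illustrates stages (2) and (3) of the abstract only;
# the data below are synthetic assumptions.
import numpy as np
from scipy.spatial import cKDTree

def paired_point_registration(P, Q):
    """Rigid transform (R, t) minimising ||R @ p + t - q|| over paired points."""
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = q_mean - R @ p_mean
    return R, t

def icp_refine(source, model, R, t, n_iter=20):
    """Refine (R, t) by repeatedly matching source points to closest model points."""
    tree = cKDTree(model)
    for _ in range(n_iter):
        moved = source @ R.T + t
        _, idx = tree.query(moved)
        R, t = paired_point_registration(source, model[idx])
    return R, t

# Synthetic check: recover a known rotation and translation
rng = np.random.default_rng(1)
model = rng.uniform(-50.0, 50.0, size=(2000, 3))
angle = np.deg2rad(20.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -3.0, 8.0])
source = (model[:200] - t_true) @ R_true        # same points in the other frame

R0, t0 = paired_point_registration(source[:4], model[:4])   # four "fiducials"
R1, t1 = icp_refine(source, model, R0, t0)
print(np.allclose(R1 @ source[0] + t1, model[0], atol=1e-6))
```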

  12. A physically-based constitutive model for SA508-III steel: Modeling and experimental verification

    International Nuclear Information System (INIS)

    Dong, Dingqian; Chen, Fei; Cui, Zhenshan

    2015-01-01

    Due to its good toughness and high weldability, SA508-III steel has been widely used in the manufacturing of components for reactor pressure vessels (RPV) and steam generators (SG). In this study, the hot deformation behaviors of SA508-III steel are investigated by isothermal hot compression tests at forming temperatures of (950–1250)°C and strain rates of (0.001–0.1) s⁻¹, and the corresponding flow stress curves are obtained. Based on the experimental results, a quantitative analysis of work hardening and dynamic softening behaviors is presented. The critical stress and critical strain for the initiation of dynamic recrystallization are calculated by setting the second derivative of a third-order polynomial to zero. Based on the classical stress–dislocation relation and the kinetics of dynamic recrystallization, a two-stage constitutive model is developed to predict the flow stress of SA508-III steel. Comparisons between the predicted and measured flow stress indicate that the established physically-based constitutive model can accurately characterize the hot deformation of the steel. Furthermore, a successful numerical simulation of the industrial upsetting process is carried out by implementing the developed constitutive model into commercial software, which shows that the physically-based constitutive model is practical and promising for promoting the industrial forging process for nuclear components.

  13. A physically-based constitutive model for SA508-III steel: Modeling and experimental verification

    Energy Technology Data Exchange (ETDEWEB)

    Dong, Dingqian [National Die & Mold CAD Engineering Research Center, Shanghai Jiao Tong University, 1954 Huashan Rd., Shanghai 200030 (China); Chen, Fei, E-mail: feechn@gmail.com [National Die & Mold CAD Engineering Research Center, Shanghai Jiao Tong University, 1954 Huashan Rd., Shanghai 200030 (China); Department of Mechanical, Materials and Manufacturing Engineering, University of Nottingham, Nottingham NG7 2RD (United Kingdom); Cui, Zhenshan, E-mail: cuizs@sjtu.edu.cn [National Die & Mold CAD Engineering Research Center, Shanghai Jiao Tong University, 1954 Huashan Rd., Shanghai 200030 (China)

    2015-05-14

    Due to its good toughness and high weldability, SA508-III steel has been widely used in the manufacturing of components for reactor pressure vessels (RPV) and steam generators (SG). In this study, the hot deformation behaviors of SA508-III steel are investigated by isothermal hot compression tests at forming temperatures of (950–1250)°C and strain rates of (0.001–0.1) s⁻¹, and the corresponding flow stress curves are obtained. Based on the experimental results, a quantitative analysis of work hardening and dynamic softening behaviors is presented. The critical stress and critical strain for the initiation of dynamic recrystallization are calculated by setting the second derivative of a third-order polynomial to zero. Based on the classical stress–dislocation relation and the kinetics of dynamic recrystallization, a two-stage constitutive model is developed to predict the flow stress of SA508-III steel. Comparisons between the predicted and measured flow stress indicate that the established physically-based constitutive model can accurately characterize the hot deformation of the steel. Furthermore, a successful numerical simulation of the industrial upsetting process is carried out by implementing the developed constitutive model into commercial software, which shows that the physically-based constitutive model is practical and promising for promoting the industrial forging process for nuclear components.
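
    The construction referred to above, locating the critical stress for dynamic recrystallization from the inflection of a third-order polynomial fitted to the work-hardening rate versus stress, can be sketched as follows. The synthetic flow curve is an assumption used only to make the example runnable; it is not SA508-III data.

```python
# Locate the critical stress for dynamic recrystallization by fitting a cubic
# polynomial to the work-hardening rate theta = d(sigma)/d(epsilon) as a
# function of stress and setting its second derivative to zero. The flow curve
# below is synthetic (assumed), not SA508-III test data.
import numpy as np

# synthetic single-peak flow curve sigma(epsilon), MPa
eps = np.linspace(0.02, 0.6, 300)
sigma = 120.0 * (1.0 - np.exp(-8.0 * eps)) - 35.0 * eps**2

theta = np.gradient(sigma, eps)                  # work-hardening rate
mask = theta > 0.0                               # use the hardening branch only
coeff = np.polyfit(sigma[mask], theta[mask], 3)  # theta = a*s^3 + b*s^2 + c*s + d

a, b = coeff[0], coeff[1]
sigma_c = -b / (3.0 * a)                         # d^2(theta)/d(sigma)^2 = 0
eps_c = np.interp(sigma_c, sigma[mask], eps[mask])
print(f"critical stress ~ {sigma_c:.1f} MPa, critical strain ~ {eps_c:.3f}")
```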

  14. Two Monthly Continuous Dynamic Model Based on Nash Bargaining Theory for Conflict Resolution in Reservoir System.

    Science.gov (United States)

    Homayounfar, Mehran; Zomorodian, Mehdi; Martinez, Christopher J; Lai, Sai Hin

    2015-01-01

    So far many optimization models based on Nash Bargaining Theory associated with reservoir operation have been developed. Most of them have aimed to provide practical and efficient solutions for water allocation in order to alleviate conflicts among water users. These models can be discussed from two viewpoints: (i) having a discrete nature; and (ii) working on an annual basis. Although discrete dynamic game models provide appropriate reservoir operating policies, their discretization of variables increases the run time and causes dimensionality problems. In this study, two monthly based non-discrete optimization models based on the Nash Bargaining Solution are developed for a reservoir system. In the first model, based on constrained state formulation, the first and second moments (mean and variance) of the state variable (water level in the reservoir) are calculated. Using moment equations as the constraint, the long-term utilities of the reservoir manager and water users are optimized. The second model is a dynamic approach structured based on continuous state Markov decision models. The corresponding solution based on the collocation method is structured for a reservoir system. In this model, the reward function is defined based on the Nash Bargaining Solution. Indeed, it is used to yield equilibrium in every proper sub-game, thereby satisfying the Markov perfect equilibrium. Both approaches are applicable for water allocation in arid and semi-arid regions. A case study was carried out at the Zayandeh-Rud river basin located in central Iran to identify the effectiveness of the presented methods. The results are compared with the results of an annual form of dynamic game, a classical stochastic dynamic programming model (e.g. Bayesian Stochastic Dynamic Programming model, BSDP), and a discrete stochastic dynamic game model (PSDNG). By comparing the results of alternative methods, it is shown that both models are capable of tackling conflict issues in water allocation

  15. Two Monthly Continuous Dynamic Model Based on Nash Bargaining Theory for Conflict Resolution in Reservoir System.

    Directory of Open Access Journals (Sweden)

    Mehran Homayounfar

    Full Text Available So far many optimization models based on Nash Bargaining Theory associated with reservoir operation have been developed. Most of them have aimed to provide practical and efficient solutions for water allocation in order to alleviate conflicts among water users. These models can be discussed from two viewpoints: (i) having a discrete nature; and (ii) working on an annual basis. Although discrete dynamic game models provide appropriate reservoir operating policies, their discretization of variables increases the run time and causes dimensionality problems. In this study, two monthly based non-discrete optimization models based on the Nash Bargaining Solution are developed for a reservoir system. In the first model, based on constrained state formulation, the first and second moments (mean and variance) of the state variable (water level in the reservoir) are calculated. Using moment equations as the constraint, the long-term utilities of the reservoir manager and water users are optimized. The second model is a dynamic approach structured based on continuous state Markov decision models. The corresponding solution based on the collocation method is structured for a reservoir system. In this model, the reward function is defined based on the Nash Bargaining Solution. Indeed, it is used to yield equilibrium in every proper sub-game, thereby satisfying the Markov perfect equilibrium. Both approaches are applicable for water allocation in arid and semi-arid regions. A case study was carried out at the Zayandeh-Rud river basin located in central Iran to identify the effectiveness of the presented methods. The results are compared with the results of an annual form of dynamic game, a classical stochastic dynamic programming model (e.g. Bayesian Stochastic Dynamic Programming model, BSDP), and a discrete stochastic dynamic game model (PSDNG). By comparing the results of alternative methods, it is shown that both models are capable of tackling conflict issues in
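
    A minimal sketch of a Nash Bargaining Solution for a single allocation decision is given below: the product of utility gains over the disagreement points is maximised subject to the water-availability constraint. The utility functions, disagreement points, and available volume are hypothetical, and the paper's moment-equation and Markov-decision formulations are not reproduced.

```python
# Toy Nash Bargaining Solution for one allocation decision: maximise the product
# of utility gains over the disagreement points subject to the availability
# constraint. Utilities, disagreement points and volumes are hypothetical.
import numpy as np
from scipy.optimize import minimize

AVAILABLE = 100.0                            # volume to allocate (assumed units)
demand = np.array([60.0, 50.0, 30.0])        # users' demands (assumed)
disagreement = np.array([10.0, 10.0, 10.0])  # utility if bargaining fails (assumed)

def utility(x):
    # concave satisfaction of demand, 0-100 per user (assumed functional form)
    return 100.0 * np.sqrt(np.minimum(x / demand, 1.0))

def neg_log_nash_product(x):
    gain = utility(x) - disagreement
    if np.any(gain <= 0.0):
        return 1e9                           # outside the bargaining set (guard)
    return -np.sum(np.log(gain))

cons = ({"type": "ineq", "fun": lambda x: AVAILABLE - np.sum(x)},)
bounds = [(0.05 * d, d) for d in demand]     # keeps every gain strictly positive
x0 = demand * AVAILABLE / demand.sum()       # feasible proportional start
res = minimize(neg_log_nash_product, x0, bounds=bounds, constraints=cons)
print("allocation:", np.round(res.x, 1), "total:", round(float(res.x.sum()), 1))
```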

  16. The Effect of Learning Based on Technology Model and Assessment Technique toward Thermodynamic Learning Achievement

    Science.gov (United States)

    Makahinda, T.

    2018-02-01

    The purpose of this research is to determine the effect of a technology-based learning model and assessment technique on thermodynamics achievement while controlling for student intelligence. This is an experimental study. The sample was taken through cluster random sampling, with a total of 80 student respondents. The results show that, after controlling for student intelligence, the thermodynamics achievement of students taught with the environmental-utilization learning model is higher than that of students taught with animated simulations. There is also an interaction effect between the technology-based learning model and the assessment technique on students' thermodynamics achievement, after controlling for student intelligence. Based on these findings, thermodynamics lectures should use the environmental-utilization learning model together with the project assessment technique.

  17. A Systematic Review of Agent-Based Modelling and Simulation Applications in the Higher Education Domain

    Science.gov (United States)

    Gu, X.; Blackmore, K. L.

    2015-01-01

    This paper presents the results of a systematic review of agent-based modelling and simulation (ABMS) applications in the higher education (HE) domain. Agent-based modelling is a "bottom-up" modelling paradigm in which system-level behaviour (macro) is modelled through the behaviour of individual local-level agent interactions (micro).…

  18. Results of an interactively coupled atmospheric chemistry - general circulation model. Comparison with observations

    Energy Technology Data Exchange (ETDEWEB)

    Hein, R.; Dameris, M.; Schnadt, C. [and others]

    2000-01-01

    An interactively coupled climate-chemistry model which enables a simultaneous treatment of meteorology and atmospheric chemistry and their feedbacks is presented. This is the first model that interactively combines a general circulation model based on primitive equations with a rather complex model of stratospheric and tropospheric chemistry, and that is computationally efficient enough to allow long-term integrations with currently available computer resources. The applied model version extends from the Earth's surface up to 10 hPa with a relatively high number (39) of vertical levels. We present the results of a present-day (1990) simulation and compare them with available observations. We focus on stratospheric dynamics and chemistry relevant for describing the stratospheric ozone layer. The current model version ECHAM4.L39(DLR)/CHEM can realistically reproduce stratospheric dynamics in the Arctic vortex region, including stratospheric warming events. This constitutes a major improvement compared to formerly applied model versions. However, apparent shortcomings in Antarctic circulation and temperatures persist. The seasonal and interannual variability of the ozone layer is simulated in accordance with observations. Activation and deactivation of chlorine in the polar stratospheric vortices and their interhemispheric differences are reproduced. The consideration of the chemistry feedback on dynamics results in an improved representation of the spatial distribution of stratospheric water vapor concentrations, i.e., the simulated meridional water vapor gradient in the stratosphere is realistic. The present model version constitutes a powerful tool to investigate, for instance, the combined direct and indirect effects of anthropogenic trace gas emissions, and the future evolution of the ozone layer. (orig.)

  19. Means-End based Functional Modeling for Intelligent Control: Modeling and Experiments with an Industrial Heat Pump System

    DEFF Research Database (Denmark)

    Saleem, Arshad

    2007-01-01

    The purpose of this paper is to present a Multilevel Flow Model (MFM) of an industrial heat pump system and its use for diagnostic reasoning. MFM is a functional modeling language supporting an explicit means-ends intelligent control strategy for large industrial process plants. The model is used in several diagnostic experiments analyzing different fault scenarios. The model and the results of the experiments are explained, and it is shown how MFM-based intelligent modeling and automated reasoning can significantly improve the fault diagnosis process.

  20. A variable capacitance based modeling and power capability predicting method for ultracapacitor

    Science.gov (United States)

    Liu, Chang; Wang, Yujie; Chen, Zonghai; Ling, Qiang

    2018-01-01

    Accurate modeling and power capability prediction methods for ultracapacitors are of great significance in the management and application of lithium-ion battery/ultracapacitor hybrid energy storage systems. To overcome the simulation error arising from a constant-capacitance model, an improved ultracapacitor model based on variable capacitance is proposed, in which the main capacitance varies with voltage according to a piecewise linear function. A novel state-of-charge calculation approach is developed accordingly. After that, a multi-constraint power capability prediction is developed for the ultracapacitor, in which a Kalman-filter-based state observer is designed for tracking the ultracapacitor's real-time behavior. Finally, experimental results verify the proposed methods. The accuracy of the proposed model is verified by terminal voltage simulation results at different temperatures, and the effectiveness of the designed observer is proved under various test conditions. Additionally, the power capability prediction results of different time scales and temperatures are compared to study their effects on the ultracapacitor's power capability.
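
    A minimal sketch of the variable-capacitance idea follows: the main capacitance is a piecewise linear function of terminal voltage, and state of charge is obtained from the accumulated charge relative to the charge stored between the voltage limits. Breakpoints, capacitance values, and the current profile are illustrative assumptions, not the parameters identified in the paper.

```python
# Toy variable-capacitance ultracapacitor model: piecewise-linear C(v) and a
# charge-based state of charge. Breakpoints, capacitances and the discharge
# profile are illustrative assumptions, not identified parameters.
import numpy as np

V_BREAK = np.array([1.0, 1.8, 2.7])        # breakpoint voltages, V (assumed)
C_BREAK = np.array([280.0, 320.0, 360.0])  # capacitance at breakpoints, F (assumed)
V_MIN, V_MAX = V_BREAK[0], V_BREAK[-1]

def capacitance(v):
    """Piecewise-linear main capacitance C(v)."""
    return np.interp(v, V_BREAK, C_BREAK)

def charge_between(v_lo, v_hi, n=1000):
    """Charge stored between two voltages: the integral of C(v) dv (trapezoidal)."""
    v = np.linspace(v_lo, v_hi, n)
    c = capacitance(v)
    return float(np.sum(0.5 * (c[1:] + c[:-1]) * np.diff(v)))

Q_TOTAL = charge_between(V_MIN, V_MAX)

def simulate(current, dt, v0=V_MAX):
    """Integrate dv/dt = -i / C(v) for a discharge current series (A, positive out)."""
    v, soc = v0, []
    for i in current:
        v -= i * dt / capacitance(v)
        soc.append(charge_between(V_MIN, v) / Q_TOTAL)
    return np.array(soc)

soc = simulate(np.full(120, 2.0), dt=1.0)   # 2 A constant discharge for 2 minutes
print(f"SOC after 2 minutes: {soc[-1]:.2f}")
```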

  1. A polynomial based model for cell fate prediction in human diseases.

    Science.gov (United States)

    Ma, Lichun; Zheng, Jie

    2017-12-21

    Cell fate regulation directly affects tissue homeostasis and human health. Research on cell fate decisions sheds light on key regulators, facilitates understanding of the mechanisms, and suggests novel strategies to treat human diseases that are related to abnormal cell development. In this study, we propose a polynomial based model to predict cell fate. The model is derived from a Taylor series. As a case study, gene expression data of pancreatic cells were adopted to test and verify the model. As numerous features (genes) are available, we employed two kinds of feature selection methods, i.e., correlation based and apoptosis-pathway based. Then polynomials of different degrees were used to refine the cell fate prediction function. 10-fold cross-validation was carried out to evaluate the performance of our model. In addition, we analyzed the stability of the resulting cell fate prediction model by evaluating the ranges of the parameters, as well as assessing the variances of the predicted values at randomly selected points. The results show that, for both of the considered gene selection methods, the prediction accuracies of polynomials of different degrees show little difference. Interestingly, the linear polynomial (degree 1 polynomial) is more stable than the others. When comparing the linear polynomials based on the two gene selection methods, although the accuracy of the linear polynomial that uses the correlation analysis outcomes is a little higher (86.62%), the one based on the apoptosis-pathway genes is much more stable. Considering both the prediction accuracy and the stability of polynomial models of different degrees, the linear model is the preferred choice for cell fate prediction with gene expression data of pancreatic cells. The presented cell fate prediction model can be extended to other cells, which may be important for basic research as well as clinical studies of cell development related diseases.
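
    The model-selection step, comparing polynomials of different degrees with 10-fold cross-validation, can be sketched as follows. The data are synthetic stand-ins for a few selected gene-expression features, and a logistic link is used for the binary fate label; the paper's correlation- and pathway-based gene selection is not reproduced.

```python
# Compare polynomial models of degree 1-3 with 10-fold cross-validation on a
# synthetic stand-in for selected gene-expression features (assumed data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                       # 5 selected "genes" (synthetic)
logit = 1.5 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2]
y = (logit + rng.normal(scale=0.5, size=200) > 0).astype(int)   # cell fate label

for degree in (1, 2, 3):
    model = make_pipeline(PolynomialFeatures(degree, include_bias=False),
                          StandardScaler(),
                          LogisticRegression(max_iter=2000))
    acc = cross_val_score(model, X, y, cv=10, scoring="accuracy")
    print(f"degree {degree}: accuracy {acc.mean():.3f} +/- {acc.std():.3f}")
```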

  2. Image/video understanding systems based on network-symbolic models

    Science.gov (United States)

    Kuvich, Gary

    2004-03-01

    Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. Computer simulation models are built on the basis of graphs/networks, and the human brain has been found to emulate similar graph/network models. Symbols, predicates and grammars naturally emerge in such networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type relational structure created via multilevel hierarchical compression of visual information. Primary areas provide active fusion of image features on a spatial grid-like structure, where nodes are cortical columns; spatial logic and topology are naturally present in such structures. Mid-level vision processes, like perceptual grouping and separation of figure from ground, are special kinds of network transformations. They convert the primary image structure into a set of more abstract ones, which represent objects and the visual scene, making them easy to analyze with higher-level knowledge structures. Higher-level vision phenomena are results of such analysis. The composition of network-symbolic models combines learning, classification, and analogy with higher-level model-based reasoning into a single framework, and it works similarly to frames and agents. Computational intelligence methods transform images into a model-based knowledge representation. Based on such principles, an Image/Video Understanding system can convert images into knowledge models and resolve uncertainty and ambiguity. This allows the creation of intelligent computer vision systems for design and manufacturing.

  3. Deployment-based lifetime optimization model for homogeneous Wireless Sensor Network under retransmission.

    Science.gov (United States)

    Li, Ruiying; Liu, Xiaoxi; Xie, Wei; Huang, Ning

    2014-12-10

    Sensor-deployment-based lifetime optimization is one of the most effective methods used to prolong the lifetime of a Wireless Sensor Network (WSN) by reducing the distance-sensitive energy consumption. In this paper, data retransmission, a major consumption factor that is usually neglected in previous work, is considered. For a homogeneous WSN monitoring a circular target area with a centered base station, a sensor deployment model based on regular hexagonal grids is analyzed. To maximize the WSN lifetime, optimization models for both uniform and non-uniform deployment schemes are proposed with constraints on coverage, connectivity, and transmission success rate. Based on the data transmission analysis in a data gathering cycle, the WSN lifetime in the model can be obtained by quantifying the energy consumption at each sensor location. The results of case studies show that it is meaningful to consider data retransmission in the lifetime optimization. In particular, our investigations indicate that, with the same lifetime requirement, the number of sensors needed in a non-uniform topology is much less than that in a uniform one. Finally, compared with a random scheme, simulation results further verify the advantage of our deployment model.
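
    The way retransmission enters a lifetime estimate can be sketched with a first-order radio energy model: with per-hop success probability p, the expected number of transmissions per packet is 1/p, which inflates the energy drawn per data-gathering cycle at the bottleneck nodes. All parameter values below are common textbook assumptions, not the exact model of the paper.

```python
# First-order radio energy model with retransmission: expected transmissions
# per packet is 1/p_success. All constants are textbook-style assumptions.
E_ELEC = 50e-9        # J/bit, electronics energy (assumed)
EPS_AMP = 100e-12     # J/bit/m^2, amplifier energy (assumed)
PACKET = 4000         # bits per packet (assumed)
BATTERY = 2.0         # J initial energy per node (assumed)

def tx_energy(d):
    return (E_ELEC + EPS_AMP * d**2) * PACKET

def rx_energy():
    return E_ELEC * PACKET

def cycle_energy(n_relayed, d_hop, p_success):
    """Energy a node spends per cycle relaying n_relayed packets plus its own."""
    expected_tx = (n_relayed + 1) / p_success        # retransmissions included
    return expected_tx * tx_energy(d_hop) + n_relayed * rx_energy()

def lifetime_cycles(n_relayed, d_hop, p_success):
    return BATTERY / cycle_energy(n_relayed, d_hop, p_success)

# bottleneck node next to the base station relaying 30 packets per cycle
for p in (1.0, 0.9, 0.7):
    print(p, round(lifetime_cycles(30, d_hop=40.0, p_success=p)))
```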

  4. Computer Based Modelling and Simulation

    Indian Academy of Sciences (India)

    Computer Based Modelling and Simulation – Modelling Deterministic Systems. N K Srinivasan. General Article, Resonance – Journal of Science Education, Volume 6, Issue 3, March 2001, pp. 46–54.

  5. New borehole-derived results on temperatures at the base of the Fennoscandian ice sheet

    Science.gov (United States)

    Rath, Volker; Vogt, Christian; Mottaghy, Darius; Kukkonen, Ilmo; Tarasov, Lev

    2014-05-01

    During the last few years, a database of deep boreholes (>1000 m) in the area of the Fennoscandian ice sheet has been collected, including boreholes from Russia, Poland, Finland, Sweden and Norway. All of these are supposed to have recorded local basal ice conditions during the last glacial cycle. However, at each of these sites we are confronted with particular problems of interpretation. Here, we concentrate on two very deep boreholes, namely the Outokumpu ICDP borehole (OKU, ≈2500 m) and a set of boreholes of intermediate depth (up to 1300 m) in the immediate neighborhood of the Kola superdeep borehole SG3. In the first case, OKU, we have developed a strategy combining a traditional variational inversion of the Tikhonov type with an MCMC approach for exploring the associated uncertainty. A wide distribution around the result of the variational approach was chosen, with a time-dependent temporal correlation length reflecting the loss of resolution back in time. The results fit very well with region-independent results from different proxies, multi-proxy reconstructions, and instrumental data. They are also consistent with surface temperatures derived from recent calibrated ice sheet models. The SAT-GST offset independently derived from shallow borehole observations in the area was a crucial step in obtaining these results. The second case, SG3, has been studied for a long time, and no final result has been obtained regarding the question whether the observed heat flow density profile is caused by paleoclimate, fluid flow, or both. Earlier studies, as well as forward modelling using the results of the aforementioned ice sheet model, indicate that paleoclimate alone cannot explain the observations. We tested the model derived from the set of shallow boreholes against the temperature log from the main superdeep borehole SG3, which, in contrast to these, transects the main high-permeability zone. The comparison led to favorable results, and is also

  6. Developing a Behavioral Model for Mobile Phone-Based Diabetes Interventions

    Science.gov (United States)

    Nundy, Shantanu; Dick, Jonathan J.; Solomon, Marla C.; Peek, Monica E.

    2013-01-01

    Objectives Behavioral models for mobile phone-based diabetes interventions are lacking. This study explores the potential mechanisms by which a text message-based diabetes program affected self-management among African-Americans. Methods We conducted in-depth, individual interviews among 18 African-American patients with type 2 diabetes who completed a 4-week text message-based diabetes program. Each interview was audio-taped, transcribed verbatim, and imported into Atlas.ti software. Coding was done iteratively. Emergent themes were mapped onto existing behavioral constructs and then used to develop a novel behavioral model for mobile phone-based diabetes self-management programs. Results The effects of the text message-based program went beyond automated reminders. The constant, daily communications reduced denial of diabetes and reinforced the importance of self-management (Rosenstock Health Belief Model). Responding positively to questions about self-management increased mastery experience (Bandura Self-Efficacy). Most surprisingly, participants perceived the automated program as a “friend” and “support group” that monitored and supported their self-management behaviors (Barrera Social Support). Conclusions A mobile phone-based diabetes program affected self-management through multiple behavioral constructs including health beliefs, self-efficacy, and social support. Practice implications: Disease management programs that utilize mobile technologies should be designed to leverage existing models of behavior change and can address barriers to self-management associated with health disparities. PMID:23063349

  7. SCS-CN based time-distributed sediment yield model

    Science.gov (United States)

    Tyagi, J. V.; Mishra, S. K.; Singh, Ranvir; Singh, V. P.

    2008-05-01

    Summary: A sediment yield model is developed to estimate the temporal rates of sediment yield from rainfall events on natural watersheds. The model utilizes the SCS-CN based infiltration model for computation of the rainfall-excess rate, and the SCS-CN-inspired proportionality concept for computation of sediment-excess. For computation of sedimentographs, the sediment-excess is routed to the watershed outlet using a single linear reservoir technique. Analytical development of the model shows that the ratio of the potential maximum erosion (A) to the potential maximum retention (S) of the SCS-CN method is constant for a watershed. The model is calibrated and validated on a number of events using data from seven watersheds in India and the USA. Representative values of the A/S ratio computed for the watersheds from calibration are used for the validation of the model. The encouraging results of the proposed simple four-parameter model demonstrate its potential for field application.
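
    A minimal sketch of the two computational elements named in the summary, SCS-CN rainfall-excess and routing through a single linear reservoir, is given below. The sediment-excess step is represented only by a simple proportionality using an assumed A/S ratio, which is an illustrative simplification rather than the paper's exact formulation; CN, A/S, the storage coefficient, and the storm hyetograph are assumed values.

```python
# SCS-CN rainfall excess followed by single-linear-reservoir routing. The
# sediment-excess proportionality, CN, A/S, K and the storm are assumptions.
import numpy as np

CN = 75.0
S = 25400.0 / CN - 254.0          # potential maximum retention, mm
IA = 0.2 * S                      # initial abstraction, mm
A_OVER_S = 0.3                    # assumed potential-erosion / retention ratio
K = 2.0                           # linear-reservoir storage coefficient, h
DT = 0.5                          # time step, h

def scs_cn_excess(p_cum):
    """Cumulative rainfall excess (mm) from cumulative rainfall (mm)."""
    pe = p_cum - IA
    return np.where(pe > 0.0, pe**2 / (pe + S), 0.0)

def route_linear_reservoir(inflow, k=K, dt=DT):
    """Route an excess-rate series through a single linear reservoir."""
    c = dt / (k + 0.5 * dt)
    out = np.zeros_like(inflow)
    for t in range(1, len(inflow)):
        out[t] = out[t - 1] + c * (0.5 * (inflow[t] + inflow[t - 1]) - out[t - 1])
    return out

rain = np.array([0, 5, 12, 20, 15, 8, 3, 0, 0, 0], dtype=float)   # mm per step
q_cum = scs_cn_excess(np.cumsum(rain))
q_rate = np.diff(q_cum, prepend=0.0)              # incremental rainfall excess
sed_rate = A_OVER_S * q_rate                      # assumed sediment-excess analogue
sedimentograph = route_linear_reservoir(sed_rate)
print(np.round(sedimentograph, 2))
```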

  8. Turning green: Agent-based modeling of the adoption of dynamic electricity tariffs

    International Nuclear Information System (INIS)

    Kowalska-Pyzalska, Anna; Maciejowska, Katarzyna; Suszczyński, Karol; Sznajd-Weron, Katarzyna; Weron, Rafał

    2014-01-01

    Using an agent-based modeling approach we study the temporal dynamics of consumer opinions regarding switching to dynamic electricity tariffs and the actual decisions to switch. We assume that the decision to switch is based on the unanimity of τ past opinions. The resulting model offers a hypothetical, yet plausible explanation of why there is such a big discrepancy between consumer opinions, as measured by market surveys, and the actual participation in pilot programs and the adoption of dynamic tariffs. We argue that due to the high indifference level in today's retail electricity markets, customer opinions are very unstable and change frequently. The conducted simulation study shows that reducing the indifference level can result in narrowing the intention–behavior gap. A similar effect can be achieved by decreasing the time a consumer takes to reach a decision. - Highlights: • We propose an agent-based model to study the adoption of dynamic electricity tariffs. • The decision to change the tariff is based on the unanimity of τ past opinions. • The model explains why the empirically observed intention–behavior gap exists. • The adoption of dynamic tariffs is impossible due to the high level of indifference in today's societies. • Reducing the indifference level or decreasing the decision time can result in narrowing the gap
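
    The switching rule from the highlights, an agent adopts the tariff only if its last τ opinions were all positive, can be sketched as follows. The opinion-update rule (copy a random other agent, with independence noise standing in for indifference) is a simplified stand-in rather than necessarily the update rule used in the paper; τ, the noise level, and the population size are assumptions.

```python
# Agents switch to the dynamic tariff only after tau consecutive positive
# opinions. Update rule, tau, noise level and population size are assumptions.
import numpy as np

rng = np.random.default_rng(3)
N, TAU, P_INDIFF, STEPS = 500, 4, 0.3, 200

opinion = rng.integers(0, 2, N)            # 1 = positive about dynamic tariffs
history = np.zeros((N, TAU), dtype=int)    # last TAU opinions per agent
adopted = np.zeros(N, dtype=bool)

for _ in range(STEPS):
    for i in range(N):
        if rng.random() < P_INDIFF:
            opinion[i] = rng.integers(0, 2)           # indifferent: random opinion
        else:
            opinion[i] = opinion[rng.integers(0, N)]  # copy a random other agent
    history = np.roll(history, 1, axis=1)
    history[:, 0] = opinion
    adopted |= history.sum(axis=1) == TAU             # unanimity of last TAU opinions

print("share with positive opinion:", opinion.mean())
print("share that actually switched:", adopted.mean())
```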

  9. Traffic Flow Prediction Model for Large-Scale Road Network Based on Cloud Computing

    Directory of Open Access Journals (Sweden)

    Zhaosheng Yang

    2014-01-01

    Full Text Available To increase the efficiency and precision of large-scale road network traffic flow prediction, a genetic algorithm-support vector machine (GA-SVM) model based on cloud computing is proposed in this paper, which is based on the analysis of the characteristics and defects of the genetic algorithm and support vector machine. In a cloud computing environment, firstly, the SVM parameters are optimized by the parallel genetic algorithm, and then this optimized parallel SVM model is used to predict traffic flow. On the basis of the traffic flow data of Haizhu District in Guangzhou City, the proposed model was verified and compared with the serial GA-SVM model and a parallel GA-SVM model based on MPI (message passing interface). The results demonstrate that the parallel GA-SVM model based on cloud computing has higher prediction accuracy, shorter running time, and higher speedup.
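
    A serial sketch of the GA-SVM idea is given below: a small genetic algorithm searches over the SVR hyper-parameters (C, gamma), with fitness given by the validation error on a synthetic traffic-flow-like series. The paper's parallel, cloud-based implementation and the MPI variant are not reproduced; the population size, parameter ranges, and data are assumptions.

```python
# Serial toy version of GA-SVM hyper-parameter search. Population size, search
# ranges and the synthetic traffic series are assumptions; the cloud/MPI
# parallelisation of the paper is not reproduced.
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# synthetic periodic traffic flow; predict each value from the 4 previous ones
t = np.arange(600)
flow = 300.0 + 120.0 * np.sin(2.0 * np.pi * t / 96.0) + rng.normal(0.0, 15.0, len(t))
lags = 4
X = np.column_stack([flow[i:len(flow) - lags + i] for i in range(lags)])
y = flow[lags:]
X_tr, X_va, y_tr, y_va = X[:400], X[400:], y[:400], y[400:]

def fitness(log_c, log_gamma):
    model = SVR(C=10.0**log_c, gamma=10.0**log_gamma).fit(X_tr, y_tr)
    return -mean_squared_error(y_va, model.predict(X_va))

pop = rng.uniform([-1.0, -5.0], [3.0, 0.0], size=(20, 2))    # (log10 C, log10 gamma)
for _ in range(15):
    scores = np.array([fitness(*ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]                  # keep the best half
    children = parents[rng.integers(0, 10, (10, 2)), [0, 1]] # uniform crossover
    children += rng.normal(0.0, 0.2, children.shape)         # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(*ind) for ind in pop])]
print("best (log10 C, log10 gamma):", np.round(best, 2))
```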

  10. Modeling the radiation transfer of discontinuous canopies: results for gap probability and single-scattering contribution

    Science.gov (United States)

    Zhao, Feng; Zou, Kai; Shang, Hong; Ji, Zheng; Zhao, Huijie; Huang, Wenjiang; Li, Cunjun

    2010-10-01

    In this paper we present an analytical model for the computation of the radiation transfer of discontinuous vegetation canopies. Some initial results on the gap probability and bidirectional gap probability of discontinuous vegetation canopies, which are important parameters determining the radiative environment of the canopies, are given and compared with a 3-D computer simulation model. In the model, negative exponential attenuation of light within individual plant canopies is assumed. The computation of gap probability is then resolved by determining the entry and exit points of the ray with the individual plants via their equations in space. For the bidirectional gap probability, which determines the single-scattering contribution of the canopy, a gap statistical analysis based model was adopted to correct the dependence of gap probabilities for both the solar and viewing directions. The model incorporates structural characteristics such as plant size, leaf size, row spacing, foliage density, planting density, and leaf inclination distribution. Available experimental data are inadequate for a complete validation of the model, so it was evaluated with a three-dimensional computer simulation model for 3D vegetative scenes, which shows good agreement between the two models' results. This model should be useful for the quantification of light interception and the modeling of bidirectional reflectance distributions of discontinuous canopies.
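
    The gap-probability calculation for a discontinuous canopy can be sketched as follows: a ray is attenuated only along the path segments where it passes inside individual plant crowns (spheres here), with negative exponential attenuation inside each crown. Crown positions, the extinction coefficient, and the foliage density are illustrative assumptions; the paper's row structure and bidirectional-gap correction are not reproduced.

```python
# Gap probability through discrete spherical crowns with negative exponential
# attenuation inside each crown: P_gap = exp(-k * u * total in-crown path).
# Crown layout and extinction parameters are illustrative assumptions.
import numpy as np

K = 0.5          # extinction coefficient (assumed)
U = 1.2          # foliage area volume density, m^2/m^3 (assumed)

def path_length_in_sphere(origin, direction, centre, radius):
    """Length of the segment of a ray (origin + s*direction, s >= 0) inside a sphere."""
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)
    oc = np.asarray(origin, float) - np.asarray(centre, float)
    b = np.dot(oc, d)
    disc = b * b - (np.dot(oc, oc) - radius**2)
    if disc <= 0.0:
        return 0.0
    s1, s2 = -b - np.sqrt(disc), -b + np.sqrt(disc)
    return max(0.0, s2 - max(s1, 0.0))

def gap_probability(origin, direction, crowns):
    """crowns: list of (centre, radius) tuples."""
    total = sum(path_length_in_sphere(origin, direction, c, r) for c, r in crowns)
    return np.exp(-K * U * total)

# one ray from the ground towards the sun through a small stand of crowns
crowns = [((x, y, 2.0), 0.8) for x in (0.0, 2.0, 4.0) for y in (0.0, 2.0)]
sun_dir = (0.3, 0.1, 1.0)
print(round(gap_probability((0.5, 0.2, 0.0), sun_dir, crowns), 3))
```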

  11. Long-term safety assessment of trench-type surface repository at Chernobyl, Ukraine - computer model and comparison with results from simplified models

    International Nuclear Information System (INIS)

    Haverkamp, B.; Krone, J.; Shybetskyi, I.

    2013-01-01

    The Radioactive Waste Disposal Facility (RWDF) Buryakovka was constructed in 1986 as part of the intervention measures after the accident at Chernobyl NPP (ChNPP). Today, the surface repository for solid low and intermediate level waste (LILW) is still being operated, but its maximum capacity is nearly reached. Long-existing plans for increasing the capacity of the facility shall be implemented in the framework of the European Commission INSC Programme (Instrument for Nuclear Safety Co-operation). Within the first phase of this project, DBE Technology GmbH prepared a safety analysis report of the facility in its current state (SAR) and a preliminary safety analysis report (PSAR) for a future extended facility based on the planned enlargement. In addition to a detailed mathematical model, simplified models have also been developed to verify the results of the former and enhance confidence in the results. Comparison of the results shows that - depending on the boundary conditions - simplifications like modeling the multi-trench repository as one generic trench might have very limited influence on the overall results compared to the general uncertainties associated with such long-term calculations. In addition to their value in regard to the verification of more complex models, which is important to increase confidence in the overall results, such simplified models also offer the possibility to carry out time-consuming calculations, like probabilistic calculations or detailed sensitivity analyses, in an economic manner. (authors)

  12. Coupled incompressible Smoothed Particle Hydrodynamics model for continuum-based modelling sediment transport

    Science.gov (United States)

    Pahar, Gourabananda; Dhar, Anirban

    2017-04-01

    A coupled solenoidal Incompressible Smoothed Particle Hydrodynamics (ISPH) model is presented for the simulation of sediment displacement in an erodible bed. The coupled framework consists of two separate incompressible modules: (a) a granular module, and (b) a fluid module. The granular module considers a friction based rheology model to calculate deviatoric stress components from pressure. The module is validated against the Bagnold flow profile and two standardized test cases of sediment avalanching. The fluid module resolves fluid flow inside and outside the porous domain. An interaction force pair containing fluid pressure, viscous, and drag force terms acts as a bridge between the two different flow modules. The coupled model is validated against three dam-break flow cases with different initial conditions of the movable bed. The simulated results are in good agreement with experimental data. A demonstrative case considering the effect of granular column failure under full/partial submergence highlights the capability of the coupled model for application in generalized scenarios.

  13. Numerical analysis of splashing fluid using hybrid method of mesh-based and particle-based modelings

    International Nuclear Information System (INIS)

    Tanaka, Nobuatsu; Ogawara, Takuya; Kaneda, Takeshi; Maseguchi, Ryo

    2009-01-01

    In order to simulate splashing and scattering fluid behaviors, we developed a hybrid method combining a mesh-based model for the large-scale continuum fluid with a particle-based model for small-scale discrete fluid particles. As the solver for the continuum fluid, we adopt the CIVA RefIned Multiphase SimulatiON (CRIMSON) code to evaluate two-phase flow behaviors based on recent computational fluid dynamics (CFD) techniques. The phase field model has been introduced into CRIMSON in order to solve the problem of losing phase interface sharpness in long-term calculations. As the solver for the discrete fluid droplets, we applied the idea of the Smoothed Particle Hydrodynamics (SPH) method. The continuum fluid and discrete fluid interact with each other through a drag interaction force. We verified our method by applying it to the popular benchmark problem of the collapse of a water column, focusing especially on the splashing and scattering fluid behaviors after the column collided with the wall. We confirmed that the gross splashing and scattering behaviors were well reproduced by the introduction of the particle model, while the detailed behaviors of the particles were slightly different from the experimental results. (author)

  14. Modeling oil production based on symbolic regression

    International Nuclear Information System (INIS)

    Yang, Guangfei; Li, Xianneng; Wang, Jianliang; Lian, Lian; Ma, Tieju

    2015-01-01

    Numerous models have been proposed to forecast the future trends of oil production and almost all of them are based on some predefined assumptions with various uncertainties. In this study, we propose a novel data-driven approach that uses symbolic regression to model oil production. We validate our approach on both synthetic and real data, and the results prove that symbolic regression could effectively identify the true models beneath the oil production data and also make reliable predictions. Symbolic regression indicates that world oil production will peak in 2021, which broadly agrees with other techniques used by researchers. Our results also show that the rate of decline after the peak is almost half the rate of increase before the peak, and it takes nearly 12 years to drop 4% from the peak. These predictions are more optimistic than those in several other reports, and the smoother decline will provide the world, especially the developing countries, with more time to orchestrate mitigation plans. -- Highlights: •A data-driven approach has been shown to be effective at modeling the oil production. •The Hubbert model could be discovered automatically from data. •The peak of world oil production is predicted to appear in 2021. •The decline rate after peak is half of the increase rate before peak. •Oil production projected to decline 4% post-peak
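
    Related to the finding that the data support a Hubbert-type model, the sketch below simply fits a Hubbert (logistic-derivative) curve to a synthetic production series by least squares and reads off the peak year. The symbolic-regression machinery itself is not reproduced, and the data are synthetic rather than the study's.

```python
# Fit a Hubbert (logistic-derivative) curve to a synthetic production series
# and read off the peak year. Data and initial guesses are assumptions; this is
# not the study's symbolic-regression procedure or data.
import numpy as np
from scipy.optimize import curve_fit

def hubbert(t, q_max, t_peak, width):
    z = np.exp(-(t - t_peak) / width)
    return 4.0 * q_max * z / (1.0 + z) ** 2   # peaks at q_max when t = t_peak

years = np.arange(1950, 2015)
rng = np.random.default_rng(0)
true = hubbert(years, q_max=30.0, t_peak=2020.0, width=18.0)
obs = true * (1.0 + rng.normal(0.0, 0.03, len(years)))     # noisy "production"

popt, _ = curve_fit(hubbert, years, obs, p0=(25.0, 2010.0, 15.0))
print(f"fitted peak year: {popt[1]:.1f}, peak production: {popt[0]:.1f}")
```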

  15. A CN-Based Ensembled Hydrological Model for Enhanced Watershed Runoff Prediction

    Directory of Open Access Journals (Sweden)

    Muhammad Ajmal

    2016-01-01

    Full Text Available A major structural inconsistency of the traditional curve number (CN) model is its dependence on an unstable fixed initial abstraction, which normally results in sudden jumps in runoff estimation. Likewise, the lack of a pre-storm soil moisture accounting (PSMA) procedure is another inherent limitation of the model. To circumvent those problems, we used a variable initial abstraction after ensembling the traditional CN model and a French four-parameter (GR4J) model to better quantify direct runoff from ungauged watersheds. To mimic the natural rainfall-runoff transformation at the watershed scale, our new parameterization designates intrinsic parameters and uses a simple structure. It exhibited more accurate and consistent results than earlier methods in evaluating data from 39 forest-dominated watersheds, both for small and large watersheds. In addition, based on different performance evaluation indicators, the runoff reproduction results show that the proposed model produced more consistent results for dry, normal, and wet watershed conditions than the other models used in this study.

  16. Results from the IAEA benchmark of spallation models

    International Nuclear Information System (INIS)

    Leray, S.; David, J.C.; Khandaker, M.; Mank, G.; Mengoni, A.; Otsuka, N.; Filges, D.; Gallmeier, F.; Konobeyev, A.; Michel, R.

    2011-01-01

    Spallation reactions play an important role in a wide domain of applications. In the simulation codes used in this field, the nuclear interaction cross-sections and characteristics are computed by spallation models. The International Atomic Energy Agency (IAEA) has recently organised a benchmark of the spallation models used, or that could be used in the future, in high-energy transport codes. The objectives were, first, to assess the prediction capabilities of the different spallation models for the different mass and energy regions and the different exit channels and, second, to understand the reasons for the success or deficiency of the models. Results of the benchmark concerning both the analysis of the prediction capabilities of the models and the first conclusions on the physics of spallation models are presented. (authors)

  17. Exploratory modeling and simulation to support development of motesanib in Asian patients with non-small cell lung cancer based on MONET1 study results.

    Science.gov (United States)

    Claret, L; Bruno, R; Lu, J-F; Sun, Y-N; Hsu, C-P

    2014-04-01

    The motesanib phase III MONET1 study failed to show improvement in overall survival (OS) in non-small cell lung cancer, but a subpopulation of Asian patients had a favorable outcome. We performed exploratory modeling and simulations based on MONET1 data to support further development of motesanib in Asian patients. A model-based estimate of time to tumor growth was the best of the tested tumor size response metrics in a multivariate OS model. Simulations indicated that a phase III study in 500 Asian patients would exceed 80% power to confirm superior efficacy of motesanib combination therapy (expected HR: 0.74), suggesting that motesanib combination therapy may benefit Asian patients.
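
    A back-of-the-envelope check of this kind of power statement can be made with Schoenfeld's approximation for the log-rank test: with a hazard ratio of 0.74 and 1:1 allocation, power depends on the number of events observed. The event counts below are assumptions; the paper's enrolment, drop-out, and tumour-growth modelling are not reproduced.

```python
# Approximate log-rank power via Schoenfeld's formula. Event counts are
# assumptions used only to illustrate the dependence of power on events.
import numpy as np
from scipy.stats import norm

def logrank_power(n_events, hr, alpha=0.05, alloc=0.5):
    """Approximate power of a two-sided log-rank test (Schoenfeld approximation)."""
    z_alpha = norm.ppf(1.0 - alpha / 2.0)
    z = np.sqrt(n_events * alloc * (1.0 - alloc)) * abs(np.log(hr)) - z_alpha
    return norm.cdf(z)

for events in (200, 300, 400):
    print(events, "events -> power", round(logrank_power(events, hr=0.74), 2))
```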

  18. A Probabilistic Short-Term Water Demand Forecasting Model Based on the Markov Chain

    Directory of Open Access Journals (Sweden)

    Francesca Gagliardi

    2017-07-01

    Full Text Available This paper proposes a short-term water demand forecasting method based on the use of the Markov chain. This method provides estimates of future demands by calculating probabilities that the future demand value will fall within pre-assigned intervals covering the expected total variability. More specifically, two models based on homogeneous and non-homogeneous Markov chains were developed and presented. These models, together with two benchmark models (based on artificial neural network and naïve methods), were applied to three real-life case studies for the purpose of forecasting the respective water demands from 1 to 24 h ahead. The results obtained show that the model based on a homogeneous Markov chain provides more accurate short-term forecasts than the one based on a non-homogeneous Markov chain, which is in line with the artificial neural network model. Both Markov chain models enable probabilistic information regarding the stochastic demand forecast to be easily obtained.
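
    The homogeneous-Markov-chain forecaster can be sketched as follows: past demands are binned into intervals, a transition matrix is estimated from interval-to-interval counts, and the forecast is the probability of the next demand falling in each interval given the current one. The synthetic demand series, the number of bins, and the one-step horizon are illustrative assumptions.

```python
# Homogeneous Markov-chain forecast of interval probabilities for the next
# demand value. The demand series, bin count and horizon are assumptions.
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(24 * 60)
demand = 10.0 + 4.0 * np.sin(2.0 * np.pi * hours / 24.0) + rng.normal(0.0, 1.0, len(hours))

n_bins = 8
edges = np.quantile(demand, np.linspace(0.0, 1.0, n_bins + 1))
edges[0], edges[-1] = -np.inf, np.inf
states = np.digitize(demand, edges[1:-1])           # state index 0..n_bins-1

# transition matrix with a small Laplace prior to avoid zero rows
counts = np.ones((n_bins, n_bins))
for s, s_next in zip(states[:-1], states[1:]):
    counts[s, s_next] += 1.0
P = counts / counts.sum(axis=1, keepdims=True)

current = states[-1]
print("current interval:", current)
print("next-hour interval probabilities:", np.round(P[current], 2))
```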

  19. A Day-to-Day Route Choice Model Based on Reinforcement Learning

    Directory of Open Access Journals (Sweden)

    Fangfang Wei

    2014-01-01

    Full Text Available Day-to-day traffic dynamics are generated by individual travelers' route choice and route adjustment behaviors, which are appropriately studied using agent-based models and learning theory. In this paper, we propose a day-to-day route choice model based on reinforcement learning and multiagent simulation. Travelers' memory, learning rate, and experience cognition are taken into account. The model is then verified and analyzed. Results show that the network flow can converge to user equilibrium (UE) if travelers can remember all the travel times they have experienced, but this is not necessarily the case under limited memory; the learning rate can strengthen the flow fluctuation, whereas memory has the opposite effect; moreover, a high learning rate results in cyclical oscillation during the process of flow evolution. Finally, both the scenarios of link capacity degradation and random link capacity are used to illustrate the model's applications. Analyses and applications of our model demonstrate that the model is reasonable and useful for studying day-to-day traffic dynamics.
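
    A minimal sketch in the spirit of the abstract follows: each traveller keeps an estimate of the travel time on two parallel routes, updates it with a learning rate after each day's experience, and chooses the route with the lower estimate (with a little exploration). The BPR-style cost function, learning rate, and network are assumptions, not the paper's calibrated model.

```python
# Day-to-day route choice with a reinforcement-style update of perceived travel
# times on two parallel routes. Cost function and parameters are assumptions.
import numpy as np

rng = np.random.default_rng(1)
N, DAYS, ALPHA, EPS = 1000, 100, 0.3, 0.05
FREE_FLOW = np.array([20.0, 25.0])       # free-flow times, min (assumed)
CAPACITY = np.array([600.0, 800.0])      # route capacities, vehicles (assumed)

def travel_time(flows):
    return FREE_FLOW * (1.0 + 0.15 * (flows / CAPACITY) ** 4)   # BPR-like cost

estimate = np.tile(FREE_FLOW, (N, 1))    # each traveller's perceived times
share = []
for _ in range(DAYS):
    explore = rng.random(N) < EPS
    choice = np.where(explore, rng.integers(0, 2, N), estimate.argmin(axis=1))
    flows = np.bincount(choice, minlength=2).astype(float)
    tt = travel_time(flows)
    # update only the estimate of the route actually experienced
    estimate[np.arange(N), choice] += ALPHA * (tt[choice] - estimate[np.arange(N), choice])
    share.append(flows[0] / N)

print("route-0 share over the last 10 days:", np.round(share[-10:], 2))
```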

  20. Graphene-based THz modulator analyzed by equivalent circuit model

    DEFF Research Database (Denmark)

    Xiao, Binggang; Chen, Jing; Xie, Zhiyi

    2016-01-01

    A terahertz (THz) modulator based on graphene is proposed and analysed by use of an equivalent transmission line of a homogeneous medium and the local anisotropic model of the graphene conductivity. The result calculated by the equivalent circuit is consistent with that obtained by Fresnel transfer
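
    A sketch of the thin-sheet (equivalent-circuit) view is given below: graphene is treated as an infinitesimally thin conductive sheet with a Drude intraband conductivity, and the power transmission through the air/graphene/substrate interface follows the standard thin-film (Tinkham) expression. The Fermi levels, scattering time, and substrate index are assumed values; the paper's anisotropic conductivity model and full transmission-line analysis are not reproduced.

```python
# Thin-sheet transmission of a graphene layer with Drude intraband conductivity.
# Fermi levels, scattering time and substrate index are assumed values.
import numpy as np

E_CHARGE = 1.602e-19     # C
HBAR = 1.055e-34         # J*s
Z0 = 376.73              # free-space impedance, ohm
N_SUB = 1.96             # substrate refractive index (assumed, quartz-like)
TAU = 30e-15             # carrier scattering time, s (assumed)

def sheet_conductivity(freq_thz, fermi_ev):
    """Drude intraband sheet conductivity of graphene (S)."""
    omega = 2.0 * np.pi * freq_thz * 1e12
    sigma_dc = E_CHARGE**2 * (fermi_ev * E_CHARGE) * TAU / (np.pi * HBAR**2)
    return sigma_dc / (1.0 - 1j * omega * TAU)

def power_transmission(freq_thz, fermi_ev):
    """Tinkham thin-sheet transmission, air -> graphene -> substrate."""
    t = 2.0 / (1.0 + N_SUB + Z0 * sheet_conductivity(freq_thz, fermi_ev))
    return N_SUB * np.abs(t) ** 2

for ef in (0.05, 0.2, 0.4):   # gate-tuned Fermi levels, eV (assumed)
    print(f"E_F = {ef:.2f} eV -> T(1 THz) = {power_transmission(1.0, ef):.2f}")
```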