WorldWideScience

Sample records for modeling approach results

  1. A chemical energy approach of avascular tumor growth: multiscale modeling and qualitative results.

    Science.gov (United States)

    Ampatzoglou, Pantelis; Dassios, George; Hadjinicolaou, Maria; Kourea, Helen P; Vrahatis, Michael N

    2015-01-01

    In the present manuscript we propose a lattice-free multiscale model for avascular tumor growth that takes into account the biochemical environment, mitosis, necrosis, cellular signaling and cellular mechanics. This model extends analogous approaches by assuming a function that incorporates the biochemical energy level of the tumor cells and a mechanism that simulates the behavior of cancer stem cells. Numerical simulations of the model are used to investigate the morphology of the tumor in the avascular phase. The obtained results show characteristics similar to those observed in clinical data for ductal carcinoma in situ (DCIS) of the breast.

  2. Action versus result-oriented schemes in a grassland agroecosystem: a dynamic modelling approach.

    Science.gov (United States)

    Sabatier, Rodolphe; Doyen, Luc; Tichit, Muriel

    2012-01-01

    Effects of agri-environment schemes (AES) on biodiversity remain controversial. While most AES are action-oriented, result-oriented and habitat-oriented schemes have recently been proposed as a solution to improve AES efficiency. The objective of this study was to compare action-oriented, habitat-oriented and result-oriented schemes in terms of ecological and productive performance, as well as in terms of management flexibility. We developed a dynamic modelling approach based on the viable control framework to carry out a long-term assessment of the three schemes in a grassland agroecosystem. The model explicitly links grazed grassland dynamics to bird population dynamics. It is applied to lapwing conservation in wet grasslands in France. We ran the model to assess the three AES scenarios. The model revealed the grazing strategies respecting the ecological and productive constraints specific to each scheme. Grazing strategies were assessed by both their ecological and productive performance. The viable control approach made it possible to obtain the whole set of viable grazing strategies and therefore to quantify the management flexibility of the grassland agroecosystem. Our results showed that the habitat- and result-oriented scenarios led to much higher ecological performance than the action-oriented one. Differences in both ecological and productive performance between the habitat- and result-oriented scenarios were limited. Flexibility of the grassland agroecosystem in the result-oriented scenario was much higher than in the habitat-oriented scenario. Our model confirms the higher flexibility as well as the better ecological and productive performance of result-oriented schemes. A larger use of result-oriented schemes in conservation may also allow farmers to adapt their management to local conditions and to climatic variations.

  3. Action versus result-oriented schemes in a grassland agroecosystem: a dynamic modelling approach.

    Directory of Open Access Journals (Sweden)

    Rodolphe Sabatier

    Effects of agri-environment schemes (AES) on biodiversity remain controversial. While most AES are action-oriented, result-oriented and habitat-oriented schemes have recently been proposed as a solution to improve AES efficiency. The objective of this study was to compare action-oriented, habitat-oriented and result-oriented schemes in terms of ecological and productive performance, as well as in terms of management flexibility. We developed a dynamic modelling approach based on the viable control framework to carry out a long-term assessment of the three schemes in a grassland agroecosystem. The model explicitly links grazed grassland dynamics to bird population dynamics. It is applied to lapwing conservation in wet grasslands in France. We ran the model to assess the three AES scenarios. The model revealed the grazing strategies respecting the ecological and productive constraints specific to each scheme. Grazing strategies were assessed by both their ecological and productive performance. The viable control approach made it possible to obtain the whole set of viable grazing strategies and therefore to quantify the management flexibility of the grassland agroecosystem. Our results showed that the habitat- and result-oriented scenarios led to much higher ecological performance than the action-oriented one. Differences in both ecological and productive performance between the habitat- and result-oriented scenarios were limited. Flexibility of the grassland agroecosystem in the result-oriented scenario was much higher than in the habitat-oriented scenario. Our model confirms the higher flexibility as well as the better ecological and productive performance of result-oriented schemes. A larger use of result-oriented schemes in conservation may also allow farmers to adapt their management to local conditions and to climatic variations.

  4. Application of a computationally efficient method to approximate gap model results with a probabilistic approach

    Science.gov (United States)

    Scherstjanoi, M.; Kaplan, J. O.; Lischke, H.

    2014-07-01

    To be able to simulate climate change effects on forest dynamics over the whole of Switzerland, we adapted the second-generation DGVM (dynamic global vegetation model) LPJ-GUESS (Lund-Potsdam-Jena General Ecosystem Simulator) to the Alpine environment. We modified model functions, tuned model parameters, and implemented new tree species to represent the potential natural vegetation of Alpine landscapes. Furthermore, we increased the computational efficiency of the model to enable area-covering simulations at a fine resolution (1 km), sufficient for the complex topography of the Alps, which resulted in more than 32 000 simulation grid cells. To this aim, we applied the recently developed method GAPPARD (approximating GAP model results with a Probabilistic Approach to account for stand Replacing Disturbances) (Scherstjanoi et al., 2013) to LPJ-GUESS. GAPPARD derives mean output values from a combination of simulation runs without disturbances and a patch-age distribution defined by the disturbance frequency. With this computationally efficient method, which increased the model's speed by approximately a factor of 8, we were able to detect shortcomings of LPJ-GUESS functions and parameters more quickly. We used the adapted LPJ-GUESS together with GAPPARD to assess the influence of one climate change scenario on the dynamics of tree species composition and biomass throughout the 21st century in Switzerland. To allow for comparison with the original model, we additionally simulated forest dynamics along a north-south transect through Switzerland. The results from this transect confirmed the high value of the GAPPARD method, despite some limitations with respect to extreme climatic events. It allowed us, for the first time, to obtain area-wide, detailed, high-resolution LPJ-GUESS simulation results for a large part of the Alpine region.
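
    As a rough illustration of the averaging step attributed to GAPPARD above, the sketch below combines a disturbance-free output curve with a patch-age distribution implied by a constant disturbance frequency. The function names, the geometric age distribution and all numbers are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def gappard_mean(output_by_age, disturbance_freq):
    """Expected landscape-mean output under stand-replacing disturbances.

    output_by_age    : 1-D array, output (e.g. biomass) of an undisturbed
                       run, indexed by patch age in years
    disturbance_freq : probability per year of a stand-replacing event
    """
    ages = np.arange(len(output_by_age))
    # Patch-age probabilities for a constant disturbance rate
    # (geometric age distribution), renormalised over the simulated range.
    p_age = disturbance_freq * (1.0 - disturbance_freq) ** ages
    p_age /= p_age.sum()
    return np.sum(p_age * output_by_age)

# Example: biomass saturating with stand age, disturbances every ~100 yr
biomass = 200.0 * (1.0 - np.exp(-np.arange(400) / 80.0))
print(gappard_mean(biomass, disturbance_freq=0.01))
```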

  5. Investigation of sonar transponders for offshore wind farms: modeling approach, experimental setup, and results.

    Science.gov (United States)

    Fricke, Moritz B; Rolfes, Raimund

    2013-11-01

    The installation of offshore wind farms in the German Exclusive Economic Zone requires the deployment of sonar transponders to prevent collisions with submarines. The general requirements for these systems have been previously worked out by the Research Department for Underwater Acoustics and Marine Geophysics of the Bundeswehr. In this article, the major results of the research project "Investigation of Sonar Transponders for Offshore Wind Farms" are presented. For theoretical investigations a hybrid approach was implemented, using the boundary element method to calculate the source directivity and a three-dimensional ray-tracing algorithm to estimate the transmission loss. The angle-dependence of the sound field as well as the weather-dependence of the transmission loss are compared to experimental results gathered at the offshore wind farm alpha ventus, located 45 km north of the island of Borkum. While theoretical and experimental results are in general agreement, the implemented model slightly underestimates scattering at the rough sea surface. It is found that a source level of 200 dB re 1 μPa at 1 m is adequate to ensure the detectability of the warning sequence at distances up to 2 NM (≈3.7 km) within a horizontal sector of ±60°, if realistic assumptions about signal processing and noise are made. An arrangement to enlarge the angular coverage is discussed.
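
    The detectability claim can be sanity-checked with the passive sonar equation. The sketch below is a back-of-envelope calculation only; the spreading law, absorption coefficient, noise level and detection threshold are invented placeholder values, not figures from the study.

```python
import math

SL = 200.0     # source level, dB re 1 uPa @ 1 m (from the abstract)
NL = 100.0     # assumed ambient noise level, dB (placeholder)
DT = 10.0      # assumed detection threshold, dB (placeholder)
alpha = 1.0    # assumed absorption, dB/km (frequency dependent)

r_m = 3700.0   # 2 NM in metres
# Spherical spreading plus linear absorption (illustrative assumption)
TL = 20.0 * math.log10(r_m) + alpha * r_m / 1000.0
signal_excess = SL - TL - NL - DT
print(f"TL = {TL:.1f} dB, signal excess = {signal_excess:.1f} dB")
# A positive signal excess indicates the warning sequence is detectable.
```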

  6. Spatialised fate factors for nitrate in catchments: modelling approach and implication for LCA results.

    Science.gov (United States)

    Basset-Mens, Claudine; Anibar, Lamiaa; Durand, Patrick; van der Werf, Hayo M G

    2006-08-15

    The challenge for environmental assessment tools, such as Life Cycle Assessment (LCA), is to provide a holistic picture of the environmental impacts of a given system, while being relevant both at a global scale, i.e., for global impact categories such as climate change, and at a smaller scale, i.e., for regional impact categories such as aquatic eutrophication. To this end, the environmental mechanisms between emission and impact should be taken into account. For eutrophication in particular, which is one of the main impacts of farming systems, the fate factor of eutrophying pollutants in catchments, and particularly of nitrate, reflects one of these important and complex environmental mechanisms. We define this fate factor as the ratio of the amount of nitrate at the outlet of the catchment to the nitrate emitted from the catchment's soils. In LCA, this fate factor is most often assumed equal to 1, while the observed fate factor is generally less than 1. A generic approach for estimating the range of variation of nitrate fate factors in a region of intensive agriculture was proposed. This approach was based on the analysis of different catchment scenarios combining different catchment types and different effective rainfalls. The evolution over time of the nitrate fate factor, as well as the steady-state fate factor for each catchment scenario, was obtained using the INCA simulation model. In line with the general LCA model, the implications of the steady-state fate factors for nitrate were investigated for the eutrophication impact result in the framework of an LCA of pig production. A sensitivity analysis to the fraction of nitrate lost as N2O was presented for the climate change impact category. This study highlighted the difference between the observed fate factor at a given time, which aggregates both storage and transformation processes, and a "steady-state fate factor", specific to the system considered. The range of steady-state fate factors obtained for …
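
    To make the role of the fate factor concrete, the sketch below shows how it scales a nitrate emission before characterisation in an LCA eutrophication score. The emission amounts, the characterisation factor and the fate-factor values are invented for illustration only.

```python
# Hypothetical nitrate emissions from catchment soils, kg NO3
emissions_kg_no3 = {"field_A": 120.0, "field_B": 80.0}

def eutrophication_impact(emissions, fate_factor, cf_no3=0.1):
    # cf_no3: generic characterisation factor, kg PO4-eq per kg NO3
    # (illustrative value, not taken from the paper)
    return sum(e * fate_factor * cf_no3 for e in emissions.values())

# Default LCA practice: fate factor = 1 (all emitted nitrate reaches water)
print(eutrophication_impact(emissions_kg_no3, fate_factor=1.0))
# With a catchment-specific steady-state fate factor < 1
print(eutrophication_impact(emissions_kg_no3, fate_factor=0.6))
```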

  7. How to interpret the results of medical time series data analysis: Classical statistical approaches versus dynamic Bayesian network modeling

    Science.gov (United States)

    Onisko, Agnieszka; Druzdzel, Marek J.; Austin, R. Marshall

    2016-01-01

    Background: Classical statistics is a well-established approach in the analysis of medical data. While the medical community seems to be familiar with the concept of a statistical analysis and its interpretation, the Bayesian approach, argued by many of its proponents to be superior to the classical frequentist approach, is still not well-recognized in the analysis of medical data. Aim: The goal of this study is to encourage data analysts to use the Bayesian approach, such as modeling with graphical probabilistic networks, as an insightful alternative to classical statistical analysis of medical data. Materials and Methods: This paper offers a comparison of two approaches to analysis of medical time series data: (1) classical statistical approaches, namely the Kaplan–Meier estimator and the Cox proportional hazards regression model, and (2) dynamic Bayesian network modeling. Our comparison is based on time series cervical cancer screening data collected at Magee-Womens Hospital, University of Pittsburgh Medical Center over 10 years. Results: The main outcomes of our comparison are cervical cancer risk assessments produced by the three models. Our analysis also discusses several aspects of the comparison, such as modeling assumptions, model building, dealing with incomplete data, individualized risk assessment, results interpretation, and model validation. Conclusion: Our study shows that the Bayesian approach (1) is much more flexible in terms of modeling effort, and (2) offers an individualized risk assessment, which is more cumbersome to obtain with classical statistical approaches. PMID:28163973
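
    Of the classical tools named above, the Kaplan–Meier estimator is compact enough to sketch directly. The following is a minimal hand-rolled version on invented follow-up data, not the study's implementation.

```python
import numpy as np

def kaplan_meier(times, events):
    """times: follow-up times; events: 1 = event observed, 0 = censored."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    survival = 1.0
    curve = []
    for t in np.unique(times[events == 1]):       # each distinct event time
        at_risk = np.sum(times >= t)              # subjects still at risk
        deaths = np.sum((times == t) & (events == 1))
        survival *= 1.0 - deaths / at_risk        # product-limit update
        curve.append((t, survival))
    return curve

times  = [5, 8, 12, 12, 15, 20, 22, 30]           # invented data
events = [1, 0, 1, 1, 0, 1, 0, 1]
for t, s in kaplan_meier(times, events):
    print(f"t={t:>4.0f}  S(t)={s:.3f}")
```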

  8. THE SYSTEM APPROACH TO MEASURING CHANNEL MODELLING AS THE MECHANISM OF MAINTENANCE OF TRUST TO RESULTS OF MEASUREMENTS

    Directory of Open Access Journals (Sweden)

    P. S. Serenkov

    2012-01-01

    The necessity of developing a system approach to measurement modelling, for the purpose of ensuring the required level of trust in measurement results, is demonstrated. The solution of a measurement problem for a given aim is treated as the creation of a sequence of models: a model of the measurement process and a model of the complex measuring channel. As a demonstrable basis for ensuring trust in measurement results, a set of criteria for completeness and non-redundancy is formulated.

  9. Lattice Hamiltonian approach to the Schwinger model. Further results from the strong coupling expansion

    Energy Technology Data Exchange (ETDEWEB)

    Szyniszewski, Marcin [Lancaster Univ. (United Kingdom). Dept. of Physics; Manchester Univ. (United Kingdom). NoWNano DTC; Cichy, Krzysztof [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Poznan Univ. (Poland). Faculty of Physics; Kujawa-Cichy, Agnieszka [Frankfurt Univ., Frankfurt am Main (Germany). Inst. fuer Theortische Physik

    2014-10-15

    We apply exact diagonalization with a strong coupling expansion to the massless and massive Schwinger model. New results are presented for the ground state energy and scalar mass gap in the massless model, which improve the precision to nearly 10^-9 %. We also investigate the chiral condensate and compare our calculations to previous results available in the literature. Oscillations of the chiral condensate, which are present while increasing the expansion order, are also studied and are shown to be directly linked to the presence of flux loops in the system.

  10. Improvement of Solar and Wind forecasting in southern Italy through a multi-model approach: preliminary results

    Science.gov (United States)

    Avolio, Elenio; Torcasio, Rosa Claudia; Lo Feudo, Teresa; Calidonna, Claudia Roberta; Contini, Daniele; Federico, Stefano

    2016-04-01

    The improvement of solar and wind short-term forecasting represents a critical goal for the weather prediction community and is of great importance for a better estimation of power production from solar and wind farms. In this work we analyze the performance of two deterministic models operational at ISAC-CNR for the prediction of short-wave irradiance and wind speed at two experimental sites in southern Italy. A post-processing technique, i.e., the multi-model, is adopted to improve the performance of the two mesoscale models. The results show that the multi-model approach produces a significant error reduction with respect to the forecast of each model; the error is reduced by up to 20% relative to the individual model errors, depending on the parameter and forecasting time.
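
    The abstract does not spell out the combination scheme, so the sketch below shows one common form of multi-model post-processing: a bias term plus least-squares weights for the member forecasts, fitted against past observations. Treat it as an illustrative stand-in on synthetic data, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)
obs = rng.uniform(0, 800, 200)                   # e.g. irradiance, W/m2
model1 = obs + rng.normal(20, 60, obs.size)      # biased, noisy member
model2 = obs + rng.normal(-30, 80, obs.size)     # second member

# Fit [bias, w1, w2] on a training record by least squares
X = np.column_stack([np.ones_like(obs), model1, model2])
w, *_ = np.linalg.lstsq(X, obs, rcond=None)

combined = X @ w
for name, f in [("model1", model1), ("model2", model2), ("multi", combined)]:
    rmse = np.sqrt(np.mean((f - obs) ** 2))
    print(f"{name:7s} RMSE = {rmse:6.1f}")
```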

  11. Super-Droplet Approach to Simulate Precipitating Trade-Wind Cumuli - Comparison of Model Results with RICO Aircraft Observations

    CERN Document Server

    Arabas, Sylwester

    2012-01-01

    In this study we present a series of LES simulations employing the Super-Droplet Method (SDM) for representing aerosol, cloud and rain microphysics. SDM is a particle-based and probabilistic approach in which a Monte-Carlo type algorithm is used for solving the particle collisions and coalescence process. The model does not differentiate between aerosol particles, cloud droplets, drizzle or rain drops. Consequently, it covers representation of such cloud-microphysical processes as: CCN activation, drizzle formation by autoconversion, accretion of cloud droplets, self-collection of raindrops and precipitation including aerosol wet deposition. Among the salient features of the SDM, there are: (i) the robustness of the model formulation (i.e. employment of basic principles rather than parametrisations) and (ii) the ease of comparison of the model results with experimental data obtained with particle-counting instruments. The model set-up used in the study is based on observations from the Rain In Cumulus over Oc...
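
    As a rough sketch of the Monte-Carlo coalescence idea behind SDM, the code below performs one collision step over random candidate pairs of super-droplets. The constant collection kernel, all parameter values, and the handling of edge cases (e.g. equal multiplicities) are simplifying assumptions, not the published algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1024
xi = np.full(n, 1.0e8)            # multiplicities (real droplets per super-droplet)
r = rng.gamma(2.0, 5e-6, n)       # radii [m], invented distribution

def coalescence_step(xi, r, kernel_const=1.5e-9, dt=1.0, dV=1.0e6):
    idx = rng.permutation(len(xi))
    pairs = idx[: len(idx) // 2 * 2].reshape(-1, 2)   # non-overlapping pairs
    # each of n/2 sampled pairs stands in for all n(n-1)/2 possible pairs
    scale = len(xi) * (len(xi) - 1) / 2.0 / len(pairs)
    for i, j in pairs:
        p = kernel_const * max(xi[i], xi[j]) * scale * dt / dV
        if rng.random() < p:
            a, b = (i, j) if xi[i] >= xi[j] else (j, i)
            xi[a] -= xi[b]                              # donor keeps remainder
            r[b] = (r[b] ** 3 + r[a] ** 3) ** (1.0 / 3.0)  # merge by mass
    return xi, r

xi, r = coalescence_step(xi, r)
print(f"mean radius after one step: {r.mean() * 1e6:.2f} um")
```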

  12. Effects of land use changes on ecohydrological results in a mesoscale Chinese catchment using an integrated modelling approach

    Science.gov (United States)

    Schmalz, Britta; Kuemmerlen, Mathias; Jähnig, Sonja; Fohrer, Nicola

    2013-04-01

    Land use and climate change affect water resources worldwide. Driven by rapid economic development and high population pressure, land use changes occur particularly fast in China, causing environmental impacts at various spatial and temporal scales. An integrated modelling approach for depicting the effect of environmental changes on aquatic ecosystems has been developed and tested, closely linking catchment properties and the presence of aquatic organisms. The Changjiang catchment in the Poyang lake area in China was selected as a test area. Two measuring and sampling campaigns were jointly planned and carried out by hydrologists and hydrobiologists in October 2010 and February/March 2011. At 50 sampling points, benthic macroinvertebrates were collected using the multi-habitat sampling method. The water and sediment balance of the entire catchment area was modelled with the ecohydrological model SWAT (Soil and Water Assessment Tool). The SWAT results, i.e. discharge and sediment time series at each of the 50 sampling points, were transferred to the species distribution model BIOMOD. BIOMOD linked the occurrence of a taxon (benthic macroinvertebrates) with environmental variables at the sampling points and calculated extrapolated occurrence probabilities for the study area. The results show species distributions of benthic macroinvertebrates as a function of various hydrological, climatic and topographic variables. Variables connected to the hydrology determine a high proportion of the modelled occurrence of the selected taxa. This approach can also be used with changing hydrological conditions to depict the impact of environmental change on aquatic ecosystems. Various land use scenarios were developed for the Chinese study area: on the one hand, intensification of agriculture was assumed; on the other hand, an afforestation of agricultural land was calculated. The distributions of benthic macroinvertebrates resulting from the hydrological …

  13. A quantitative comparison between the flow factor approach model and the molecular dynamics simulation results for the flow of a confined molecularly thin fluid film

    Science.gov (United States)

    Zhang, Yongbin

    2015-06-01

    Quantitative comparisons were made between the flow factor approach model and molecular dynamics simulation (MDS) results, both of which describe the flow of a molecularly thin fluid film confined between two solid walls. Although the two approaches calculate the flow of a confined molecularly thin fluid film in different ways, very good agreement was found between them when the Couette and Poiseuille flows calculated from each were compared. This strongly indicates the validity of the flow factor approach model in modeling the flow of a confined molecularly thin fluid film.
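
    For orientation, the classical continuum Couette and Poiseuille profiles that such comparisons are benchmarked against are standard fluid mechanics, not expressions taken from the paper. For a film of thickness h, wall speed U, viscosity mu and pressure gradient dp/dx:

$$ u_{C}(y) = U\,\frac{y}{h}, \qquad q_{C} = \frac{U h}{2}, $$

$$ u_{P}(y) = \frac{1}{2\mu}\,\frac{dp}{dx}\left(y^{2} - h y\right), \qquad q_{P} = -\frac{h^{3}}{12\mu}\,\frac{dp}{dx}, $$

    where q denotes the flow rate per unit width. Flow-factor approaches multiply these continuum flow rates by correction factors that capture the departure of the molecularly thin film from continuum behaviour.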

  14. Evaluating the improvements of the BOLAM meteorological model operational at ISPRA: A case study approach - preliminary results

    Science.gov (United States)

    Mariani, S.; Casaioli, M.; Lastoria, B.; Accadia, C.; Flavoni, S.

    2009-04-01

    … Fritsch. A fully updated serial version of the BOLAM code has recently been acquired. Code improvements include a more precise advection scheme (Weighted Average Flux), explicit advection of five hydrometeors, and state-of-the-art parameterization schemes for radiation, convection, boundary-layer turbulence and soil processes (with a possible choice among different available schemes). The operational implementation of the new code into the SIMM model chain, which requires the development of a parallel version, will be achieved during 2009. In view of this goal, comparative verification of the skill of the different model versions represents a fundamental task. For this purpose, it was decided to evaluate the performance improvement of the new BOLAM code (in the available serial version, hereinafter BOLAM 2007) with respect to the version with the Kain-Fritsch scheme (hereinafter KF version) and to the older one employing the Kuo scheme (hereinafter Kuo version). In the present work, verification of precipitation forecasts from the three BOLAM versions is carried out in a case-study approach. The intense rainfall episode that occurred over Italy on 10-17 December 2008 has been considered; this event indeed produced severe damage in Rome and its surrounding areas. Objective and subjective verification methods have been employed in order to evaluate model performance against an observational dataset including rain gauge observations and satellite imagery. Subjective comparison of observed and forecast precipitation fields is suitable to give an overall description of the forecast quality. Spatial errors (e.g., shifting and pattern errors) and rainfall volume errors can be assessed quantitatively by means of object-oriented methods. By comparing satellite images with model forecast fields, it is possible to investigate the differences between the evolution of the observed weather system and the predicted ones, and its sensitivity to the improvements in the model code …

  15. Model-theoretic Optimization Approach to Triathlon Performance Under Comparative Static Conditions – Results Based on The Olympic Games 2012

    Directory of Open Access Journals (Sweden)

    Michael Fröhlich

    2013-10-01

    In Olympic-distance triathlon, time minimization is the goal in all three disciplines and the two transitions. Running is the key to winning, whereas swimming and cycling performance are less significantly associated with overall competition time. A comparative static simulation calculation based on the individual times of each discipline was performed. Furthermore, analysis of each discipline's share of the total time showed that increasing the scope of running training yields additional performance development. Looking at the current development in triathlon, and taking the Olympic Games in London 2012 as an initial basis for model-theoretic simulations of performance development, the first fact that attracts attention is that running is becoming more and more the crucial variable in terms of winning a triathlon. Run times below 29:00 minutes in Olympic-distance triathlon will be decisive for winning. Currently, cycle training time is definitely overrepresented. The share of swimming is considered optimal.
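
    A comparative static calculation of this kind is simple arithmetic: vary one discipline's time while holding the others fixed and compare totals. The split times below are invented, loosely on an Olympic-distance scale, and are not the athletes' data from the study.

```python
# Invented Olympic-distance split times in seconds
base = {"swim": 17 * 60, "T1": 40, "bike": 58 * 60, "T2": 30, "run": 30 * 60}

def total(splits):
    s = sum(splits.values())
    return f"{s // 3600:d}:{(s % 3600) // 60:02d}:{s % 60:02d}"

print("baseline:", total(base))
# The same relative improvement saves more absolute time in the longer,
# more decisive discipline, which is the comparative-static point above.
for leg in ("swim", "run"):
    scenario = dict(base)
    scenario[leg] = int(base[leg] * 0.97)      # 3 % faster in one leg
    print(f"3% faster {leg}:", total(scenario))
```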

  16. A cellular automaton based model simulating HVAC fluid and heat transport in a building. Modeling approach and comparison with experimental results

    Energy Technology Data Exchange (ETDEWEB)

    Saiz, A. [Department of Applied Mathematics, Polytechnic University of Valencia, ETSGE School, Camino de Vera s/n, 46022 Valencia (Spain); Urchueguia, J.F. [Department of Applied Physics, Polytechnic University of Valencia, ETSII School, Camino de Vera s/n, 46022 Valencia (Spain); Martos, J. [Superior Technical School of Engineering, Department of Electronic Engineering, University of Valencia, Vicente Andres Estelles s/n, Burjassot 46100, Valencia (Spain)

    2010-09-15

    A discrete model characterizing heat and fluid flow in connection with thermal fluxes in a building is described and tested against experiment in this contribution. The model, based on a cellular automaton approach, relies on a small set of quite simple rules and parameters in order to simulate the dynamic evolution of temperatures and energy flows in any water- or brine-based thermal energy distribution network in a building or system. Using an easy-to-record input, such as the instantaneous electrical power demand of the heating or cooling system, our model predicts time-varying temperatures at characteristic spots and the related enthalpy flows, whose simulation usually requires heavy computational tools and detailed knowledge of the network elements. As a particular example, we have applied our model to simulate an existing fan-coil based hydronic heating system driven by a geothermal heat pump. When compared to the experimental temperature and thermal energy records, the outcome of the model agrees closely.
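
    To give a feel for the cellular-automaton style of rule described above, the toy sketch below discretises a pipe into cells and applies two local rules per step: fluid advances one cell, and each cell exchanges heat with the room. The rules and parameters are invented for illustration and are not those of the published model.

```python
import numpy as np

n_cells, steps = 50, 200
temp = np.full(n_cells, 20.0)          # water temperature per cell, degC
t_supply, t_room = 45.0, 20.0          # invented boundary conditions
k_loss = 0.02                          # heat-loss coefficient per step

for _ in range(steps):
    temp[1:] = temp[:-1].copy()        # rule 1: fluid advances one cell
    temp[0] = t_supply                 # inlet cell fed by the heat source
    temp += k_loss * (t_room - temp)   # rule 2: exchange with the room

print(f"outlet temperature after {steps} steps: {temp[-1]:.1f} degC")
```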

  17. Reducing the item number to obtain the same-length self-assessment scales: a systematic approach using results of graphical loglinear Rasch models

    DEFF Research Database (Denmark)

    Nielsen, Tine; Kreiner, Svend

    2011-01-01

    The Revised Danish Learning Styles Inventory (R-D-LSI) (Nielsen, 2005), which is an adaptation of the Sternberg-Wagner Thinking Styles Inventory (Sternberg, 1997), comprises 14 subscales, each measuring a separate learning style. Of these 14 subscales, 9 are eight items long and 5 are seven items long. A systematic approach to item reduction based on results of graphical loglinear Rasch modeling (GLLRM) was designed. This approach was then used to reduce the number of items in the subscales of the R-D-LSI which had an item length of more than seven items, thereby obtaining the Danish Self-Assessment Learning Styles …

  18. Assimilated Tidal Results of Tide Gauge and TOPEX/POSEIDON Data over the China Seas Using a Variational Adjoint Approach with a Nonlinear Numerical Model

    Institute of Scientific and Technical Information of China (English)

    HAN Guijun; LI Wei; HE Zhongjie; LIU Kexiu; MA Jirui

    2006-01-01

    In order to obtain an accurate tide description in the China Seas, the 2-dimensional nonlinear numerical Princeton Ocean Model (POM) is employed to incorporate in situ tidal measurements, both from tide gauges and from TOPEX/POSEIDON (T/P) derived datasets, by means of the variational adjoint approach, in such a way that unknown internal model parameters, for example bottom topography, friction coefficients and open boundary conditions, are adjusted during the process. The numerical model is used as a forward model. After the along-track T/P data are processed, two classical methods, i.e., harmonic and response analysis, are implemented to estimate the tide from such datasets over a domain covering the model area, which extends from 0° to 41°N in latitude and from 99°E to 142°E in longitude, and the results of the two methods are compared and interpreted. The numerical simulation is performed for 16 major constituents. In the data assimilation experiments, three types of unknown parameters (water depth, bottom friction and tidal open boundary conditions in the model equations) are chosen as control variables. Among the various types of data assimilation experiments, the calibration of water depth brings the most promising results. By comparing the results with selected tide gauge data, the average absolute errors are decreased from 7.9 cm to 6.8 cm for amplitude and from 13.0° to 9.0° for phase with respect to the semidiurnal M2 constituent, which is the largest tidal constituent in the model area. After the data assimilation experiment is performed, the comparison between model results and tide gauge observations for water levels shows that the RMS errors decrease by 9 cm for a total of 14 stations, mostly selected along the coast of Mainland China, when a one-month period is considered, and the correlation coefficients improve for most of these tidal stations.
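
    Harmonic analysis, one of the two classical methods named above, amounts to a least-squares fit of sinusoids at known constituent frequencies. The sketch below recovers the amplitude and phase of M2 and S2 from a synthetic sea-level record; the M2 and S2 frequencies are standard, while the record itself stands in for T/P or tide-gauge data.

```python
import numpy as np

freqs = {"M2": 1.9322736, "S2": 2.0}          # cycles per day (standard)
t = np.arange(0, 30, 1 / 24.0)                # 30 days of hourly samples
eta = (0.8 * np.cos(2 * np.pi * freqs["M2"] * t - 1.2)
       + 0.3 * np.cos(2 * np.pi * freqs["S2"] * t - 0.4)
       + np.random.default_rng(1).normal(0, 0.05, t.size))  # synthetic record

# Design matrix: mean plus cos/sin pair per constituent
cols = [np.ones_like(t)]
for f in freqs.values():
    cols += [np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)]
A = np.column_stack(cols)
x, *_ = np.linalg.lstsq(A, eta, rcond=None)

for i, name in enumerate(freqs):
    a, b = x[1 + 2 * i], x[2 + 2 * i]
    amp, phase = np.hypot(a, b), np.degrees(np.arctan2(b, a))
    print(f"{name}: amplitude {amp:.3f} m, phase {phase:.1f} deg")
```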

  19. Testing new approaches to carbonate system simulation at the reef scale: the ReefSam model first results, application to a question in reef morphology and future challenges.

    Science.gov (United States)

    Barrett, Samuel; Webster, Jody

    2016-04-01

    Numerical simulation of the stratigraphy and sedimentology of carbonate systems (carbonate forward stratigraphic modelling - CFSM) provides significant insight into the understanding of both the physical nature of these systems and the processes which control their development. It also provides the opportunity to quantitatively test conceptual models concerning stratigraphy, sedimentology or geomorphology, and allows us to extend our knowledge either spatially (e.g. between bore holes) or temporally (forwards or backwards in time). The latter is especially important in determining the likely future development of carbonate systems, particularly regarding the effects of climate change. This application, by its nature, requires successful simulation of carbonate systems on short time scales and at high spatial resolutions. Previous modelling attempts have typically focused on scales of kilometers and kilo-years or greater (the scale of entire carbonate platforms), rather than on the scale of centuries or decades and tens to hundreds of meters (the scale of individual reefs). Previous work has identified limitations in common approaches to simulating important reef processes. We present a new CFSM, the Reef Sedimentary Accretion Model (ReefSAM), which is designed to test new approaches to simulating reef-scale processes, with the aim of being able to better simulate the past and future development of coral reefs. Four major features have been tested: 1. A simulation of wave-based hydrodynamic energy with multiple simultaneous directions and intensities, including wave refraction, interaction, and lateral sheltering. 2. Sediment transport simulated as sediment being moved from cell to cell in an iterative fashion until complete deposition. 3. A coral growth model including consideration of local wave energy and composition of the basement substrate (as well as depth). 4. A highly quantitative model-testing approach where dozens of output parameters describing the reef …

  20. A coupled model approach to reduce nonpoint-source pollution resulting from predicted urban growth: A case study in the Ambos Nogales watershed

    Science.gov (United States)

    Norman, L.M.; Guertin, D.P.; Feller, M.

    2008-01-01

    The development of new approaches for understanding processes of urban development and their environmental effects, as well as strategies for sustainable management, is essential in expanding metropolitan areas. This study illustrates the potential of linking urban growth and watershed models to identify problem areas and support long-term watershed planning. Sediment is a primary source of nonpoint-source pollution in surface waters; in urban areas, sediment is intermingled with other surface debris in transport. In an effort to forecast the effects of development on surface-water quality, changes predicted in urban areas by the SLEUTH urban growth model were applied in the context of erosion-sedimentation models (the Universal Soil Loss Equation and Spatially Explicit Delivery Models). The models are used to simulate the effect of excluding hot-spot areas of erosion and sedimentation from future urban growth and to predict the impacts of alternative erosion-control scenarios. Ambos Nogales, meaning 'both Nogaleses,' is a name commonly used for the twin border cities of Nogales, Arizona and Nogales, Sonora, Mexico. The Ambos Nogales watershed has experienced a decrease in water quality as a result of urban development in the twin-city area. Population growth rates in Ambos Nogales are high, and the resources set in place to accommodate the rapid population influx will soon become overburdened. Because of its remote location and binational governance, monitoring and planning across the border are compromised. One scenario described in this research portrays an improvement in water quality through the identification of high-risk areas using models that simulate their protection from development and replanting with native grasses, while permitting the predicted and inevitable growth elsewhere. This is meant to add to the body of knowledge about forecasting the impact potential of urbanization on sediment delivery to streams for sustainable development, which can be …
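
    The Universal Soil Loss Equation named above is a simple multiplicative model, A = R * K * LS * C * P, so a worked instance fits in a few lines. All factor values below are invented for illustration, not data from the Ambos Nogales study.

```python
def usle(R, K, LS, C, P):
    """Universal Soil Loss Equation.

    A : soil loss (t/ha/yr); R : rainfall erosivity; K : soil erodibility;
    LS: slope length/steepness; C : cover management; P : support practice.
    """
    return R * K * LS * C * P

# Invented factor values for one hillslope cell
bare_slope = usle(R=350.0, K=0.32, LS=2.1, C=0.45, P=1.0)
replanted  = usle(R=350.0, K=0.32, LS=2.1, C=0.05, P=1.0)  # native grass cover
print(f"predicted soil loss: {bare_slope:.1f} vs {replanted:.1f} t/ha/yr")
# Lowering the C factor by revegetation is what an erosion-control
# scenario of the kind described above effectively simulates.
```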

  1. Complementing data-driven and physically-based approaches for predictive morphologic modeling: Results and implication from the Red River Basin, Vietnam

    Science.gov (United States)

    Schmitt, R. J.; Bernardi, D.; Bizzi, S.; Castelletti, A.; Soncini-Sessa, R.

    2013-12-01

    … sediment balance over an extended time-horizon (>15 yr), upstream impoundments induce a much more rapid adaptation (1-5 yr). The applicability of the ANN as a predictive model was evaluated by comparing its results with a traditional 1-D bed evolution model. The next decade's morphologic evolution under an ensemble of scenarios, considering uncertainties in climatic change, socio-economic development and upstream reservoir release policies, was derived from both models. The ANN greatly outperforms the 1-D model in computational requirements and presents a powerful tool for effective assessment of scenario ensembles and quantification of uncertainties in river hydro-morphology. In contrast, the process-based model provides detailed, spatio-temporally distributed outputs and validation of the ANN's results for selected scenarios. We conclude that the application of both approaches constitutes a mutually enriching strategy for modern, quantitative catchment management. We argue that physically based modeling can have specific spatial and temporal constraints (e.g., in terms of identifying key drivers and associated temporal and spatial domains) and that linking physically based with data-driven approaches greatly increases the potential for including hydro-morphology in basin-scale water resource management.

  2. A model-based approach to adjust microwave observations for operational applications: results of a campaign at Munich Airport in winter 2011/2012

    Directory of Open Access Journals (Sweden)

    J. Güldner

    2013-10-01

    … of model-based regression operators in order to provide unbiased vertical profiles during the campaign at Munich Airport. The results of this algorithm and the retrievals of a neural network, specially developed for the site, are compared with radiosondes from Oberschleißheim, located about 10 km from the MWRP site. Notable deviations for the lowest levels, between 50 and 100 m, are discussed. Analogously to the airport experiment, a model-based regression operator was calculated for Lindenberg and compared with both radiosondes and operational results of observation-based methods. The bias of the retrievals could be considerably reduced, and the accuracy, which was assessed for the airport site, is quite similar to that of the operational radiometer site at Lindenberg above 1 km height. Additional investigations were made to determine the length of the training period necessary for generating best estimates; three months proved to be adequate. The results of the study show that, on the basis of numerical weather prediction (NWP) model data, available everywhere at any time, the model-based regression method is capable of providing comparable results at a multitude of sites. Furthermore, the approach offers auspicious conditions for automation and continuous updating.

  3. Employment Effects of Renewable Energy Expansion on a Regional Level—First Results of a Model-Based Approach for Germany

    Directory of Open Access Journals (Sweden)

    Ulrike Lehr

    2012-02-01

    National studies have shown that both the gross and net effects of the expansion of energy from renewable sources on employment are positive for Germany. These modelling approaches also revealed that this holds true for both present and future perspectives, under certain assumptions on the development of exports, fossil fuel prices and national politics. Yet how are employment effects distributed within Germany? What components contribute to growth impacts on a regional level? To answer these questions, new methods of regionalization were explored and developed for the example of onshore wind energy in Germany's federal states. The main goal was to develop a methodology which is applicable to all renewable energy technologies in future research. For the quantification and projection, it was necessary to distinguish between jobs generated by domestic investments and exports on the one hand, and jobs for the operation and maintenance of existing plants on the other hand. Further, direct and indirect employment is analyzed. The results show that gross employment is particularly high in the northwestern regions of Germany. However, the indirect effects especially are spread out over the whole country. Regions in the south not only profit from the delivery of specific components, but also from other industry and service inputs.

  4. Interdisciplinary approach for improved esthetic results

    Directory of Open Access Journals (Sweden)

    G Sriram

    2014-01-01

    This clinical report describes an interdisciplinary (orthodontics, prosthodontics and operative dentistry) approach for the coordinated treatment of an adult patient diagnosed with severely mutilated dentition secondary to carious lesions, warranting restorative procedures facilitated by orthodontic treatment. The patient's specific esthetic expectations for the anterior teeth and an improved smile were successfully met through planned treatment, including orthodontic tooth movement, restoration and porcelain conversion crowns. Such coordinated interdisciplinary evaluation and treatment are necessary for improved esthetics.

  5. Assimilation of Geosat Altimetric Data in a Nonlinear Shallow-Water Model of the Indian Ocean by Adjoint Approach. Part II: Some Validation and Interpretation of the Assimilated Results

    Science.gov (United States)

    Greiner, Eric; Perigaud, Claire

    1996-01-01

    This paper examines the results of assimilating Geosat sea level variations, relative to the November 1986-November 1988 mean reference, in a nonlinear reduced-gravity model of the Indian Ocean. Data were assimilated during one year starting in November 1986, with the objective of optimizing the initial conditions and the yearly averaged reference surface. The thermocline slope simulated by the model with or without assimilation is validated by comparison with the signal that can be derived from expendable bathythermograph measurements performed in the Indian Ocean at that time. The topography simulated with assimilation in November 1986 is in very good agreement with the hydrographic data. The slopes corresponding to the South Equatorial Current and to the South Equatorial Countercurrent are better reproduced with assimilation than without during the first nine months. The whole circulation of the cyclonic gyre south of the equator is then strongly intensified by assimilation. Another assimilation experiment is run over the following year, starting in November 1987. The difference between the two yearly mean surfaces simulated with assimilation is in excellent agreement with Geosat. In the southeastern Indian Ocean, the correction to the yearly mean dynamic topography due to assimilation over the second year is negatively correlated to that of the year before. This correction is also in agreement with hydrographic data. It is likely that the signal corrected by assimilation is not only due to wind error, because simulations driven by various wind forcings present the same features over the two years. Model simulations run with a prescribed throughflow transport anomaly indicate that assimilation mainly corrects the interior of the model domain for inadequate boundary conditions with the Pacific.

  6. Multiple Model Approaches to Modelling and Control,

    DEFF Research Database (Denmark)

    Why Multiple Models? This book presents a variety of approaches which produce complex models or controllers by piecing together a number of simpler subsystems. This divide-and-conquer strategy is a long-standing and general way of coping with complexity in engineering systems, nature and human probl …

  7. Modeling Malaysia's Energy System: Some Preliminary Results

    Directory of Open Access Journals (Sweden)

    Ahmad M. Yusof

    2011-01-01

    Problem statement: The current dynamic and fragile world energy environment necessitates the development of a new energy model that caters solely to analyzing Malaysia's energy scenarios. Approach: The model is a network flow model that traces the flow of energy carriers from its sources (import and mining) through some conversion and transformation processes for the production of energy products to final destinations (energy demand sectors). The integration with the economic sectors is done exogenously by specifying the annual sectoral energy demand levels. The model in turn optimizes the energy variables for a specified objective function to meet those demands. Results: By minimizing the inter-temporal petroleum product imports for the crude oil system, the annual extraction level of Tapis blend is projected at 579 600 barrels per day. The aggregate demand for petroleum products is projected to grow at 2.1% per year, while motor gasoline and diesel constitute 42 and 38% of the petroleum products demand mix, respectively, over the 5-year planning period. Petroleum products import is expected to grow at 6.0% per year. Conclusion: The preliminary results indicate that the model performs as expected. Thus other types of energy carriers, such as natural gas, coal and biomass, will be added to the energy system for the overall development of the Malaysia energy model.

  8. Spiritual and Non-spiritual Needs Among German Soldiers and Their Relation to Stress Perception, PTSD Symptoms, and Life Satisfaction: Results from a Structural Equation Modeling Approach.

    Science.gov (United States)

    Büssing, Arndt; Recchia, Daniela R

    2016-06-01

    In an anonymous cross-sectional survey (using standardized questionnaires) among 1092 German soldiers, we found that 21% regard their faith as a "strong hold in difficult times." Only a few had specific religious needs. Rather, a consistent theme from the participants was the need to communicate their own fears and worries and the desire to attain states of inner peace. Soldiers' stress perception and posttraumatic stress disorder symptoms were associated particularly with existential and Inner Peace Needs. Structural equation modeling indicated that stress perception has a negative influence on soldiers' life satisfaction, which in turn gives rise to specific unmet spiritual needs. These specific needs may indicate psycho-emotional problems which could be addressed early to prevent health impairments and service failure.

  9. Multiple Model Approaches to Modelling and Control,

    DEFF Research Database (Denmark)

    … on the ease with which prior knowledge can be incorporated. It is interesting to note that researchers in Control Theory, Neural Networks, Statistics, Artificial Intelligence and Fuzzy Logic have more or less independently developed very similar modelling methods, calling them Local Model Networks, Operating … of introduction of existing knowledge, as well as the ease of model interpretation. This book attempts to outline much of the common ground between the various approaches, encouraging the transfer of ideas. Recent progress in algorithms and analysis is presented, with constructive algorithms for automated model …

  10. Model Construct Based Enterprise Model Architecture and Its Modeling Approach

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    In order to support enterprise integration, a model-construct-based enterprise model architecture and its modeling approach are studied in this paper. First, the structural makeup and internal relationships of the enterprise model architecture are discussed. Then, the concept of the reusable model construct (MC), which belongs to the control view and can help to derive other views, is proposed. The modeling approach based on model constructs consists of three steps: reference model architecture synthesis, enterprise model customization, and system design and implementation. According to the MC-based modeling approach, a case study with the background of one-kind-product machinery manufacturing enterprises is illustrated. It is shown that the proposed model-construct-based enterprise model architecture and modeling approach are practical and efficient.

  11. Model Oriented Approach for Industrial Software Development

    Directory of Open Access Journals (Sweden)

    P. D. Drobintsev

    2015-01-01

    The article considers the specifics of a model-oriented approach to software development based on the usage of Model Driven Architecture (MDA), Model Driven Software Development (MDSD) and Model Driven Development (MDD) technologies. The benefits of using this approach in the software development industry are described. The main emphasis is put on system design, automated code generation for large systems, verification, proof of system properties and reduction of bug density. Drawbacks of the approach are also considered. The approach proposed in the article is specific to industrial software systems development. These systems are characterized by the different levels of abstraction used in the modeling and code development phases. The approach allows the model to be detailed to the level of the system code while preserving the verified model semantics and providing checking of the whole detailed model. Steps for translating abstract data structures (including transactions, signals and their parameters) into the data structures used in the detailed system implementation are presented. The grammar of a language for specifying rules that transform abstract model data structures into the real system's detailed data structures is also described. The results of applying the proposed method in an industrial technology are shown. The article is published in the authors' wording.

  12. Learning Action Models: Qualitative Approach

    NARCIS (Netherlands)

    Bolander, T.; Gierasimczuk, N.; van der Hoek, W.; Holliday, W.H.; Wang, W.-F.

    2015-01-01

    In dynamic epistemic logic, actions are described using action models. In this paper we introduce a framework for studying learnability of action models from observations. We present first results concerning propositional action models. First we check two basic learnability criteria: finite identifiability (conclusively inferring the appropriate action model in finite time) and identifiability in the limit (inconclusive convergence to the right action model).

  13. Hydraulic Modeling of Lock Approaches

    Science.gov (United States)

    2016-08-01

    …cation was that the guidewall design changed from a solid wall to one on pilings in which water was allowed to flow through and/or under the wall. … magnitudes and directions at lock approaches for open river conditions. The meshes were developed using the Surface-water Modeling System. The two …

  14. LP Approach to Statistical Modeling

    OpenAIRE

    Mukhopadhyay, Subhadeep; Parzen, Emanuel

    2014-01-01

    We present an approach to statistical data modeling and exploratory data analysis called 'LP Statistical Data Science.' It aims to generalize and unify traditional and novel statistical measures, methods, and exploratory tools. This article outlines fundamental concepts along with real-data examples to illustrate how the 'LP Statistical Algorithm' can systematically tackle different varieties of data types, data patterns, and data structures under a coherent theoretical framework. A fundament…

  15. One approach for Management by Objectives and Results in Scandinavia?

    DEFF Research Database (Denmark)

    Kristiansen, Mads Bøge

    2016-01-01

    Viewed from abroad, Denmark, Norway and Sweden look very similar. In the literature on public management reforms and performance management, these countries are frequently regarded as one, and the literature often refers to a specific Nordic or Scandinavian model. The aim of this paper is to empirically test the argument concerning the existence of one Nordic perspective on performance management. The paper presents a comparative study of Management by Objectives and Results (MBOR) in Prison and Probation Services, Food Safety, and Meteorology in Denmark, Norway and Sweden. The paper examines differences and similarities in the design and use of MBOR across the countries (within each of the different tasks), and within each of the three countries (across the three tasks). The paper finds that it is difficult to identify one Scandinavian approach to MBOR, as variations in MBOR are observed across …

  16. Approaches to Modeling of Recrystallization

    Directory of Open Access Journals (Sweden)

    Håkan Hallberg

    2011-10-01

    Full Text Available Control of the material microstructure in terms of the grain size is a key component in tailoring material properties of metals and alloys and in creating functionally graded materials. To exert this control, reliable and efficient modeling and simulation of the recrystallization process whereby the grain size evolves is vital. The present contribution is a review paper, summarizing the current status of various approaches to modeling grain refinement due to recrystallization. The underlying mechanisms of recrystallization are briefly recollected and different simulation methods are discussed. Analytical and empirical models, continuum mechanical models and discrete methods as well as phase field, vertex and level set models of recrystallization will be considered. Such numerical methods have been reviewed previously, but with the present focus on recrystallization modeling and with a rapidly increasing amount of related publications, an updated review is called for. Advantages and disadvantages of the different methods are discussed in terms of applicability, underlying assumptions, physical relevance, implementation issues and computational efficiency.

  17. Modeling Malaysia's Energy System: Some Preliminary Results

    OpenAIRE

    Ahmad M. Yusof

    2011-01-01

    Problem statement: The current dynamic and fragile world energy environment necessitates the development of a new energy model that caters solely to analyzing Malaysia's energy scenarios. Approach: The model is a network flow model that traces the flow of energy carriers from its sources (import and mining) through some conversion and transformation processes for the production of energy products to final destinations (energy demand sectors). The integration with the economic sectors is done exogene...

  18. A Multiple Model Approach to Modeling Based on LPF Algorithm

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Input-output data fitting methods are often used for modeling nonlinear systems of unknown structure. Based on model-on-demand tactics, a multiple model approach to modeling nonlinear systems is presented. The basic idea is to find, from vast historical system input-output data sets, the data sets matching the current working point, and then to develop a local model using the Local Polynomial Fitting (LPF) algorithm. As the working point changes, multiple local models are built, which together realize exact modeling of the global system. Compared with other methods, simulation results show good performance: the estimation is simple, effective and reliable.
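
    The sketch below illustrates the model-on-demand idea in its simplest form: for each query point, pull the nearest historical input-output pairs and fit a local polynomial (here linear) by kernel-weighted least squares. The tricube kernel, the neighbourhood size and the synthetic system are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

def lpf_predict(x_query, x_hist, y_hist, k=30):
    """Local polynomial (linear) prediction at one working point."""
    d = np.abs(x_hist - x_query)
    idx = np.argsort(d)[:k]                        # k nearest working points
    w = (1 - (d[idx] / d[idx].max()) ** 3) ** 3    # tricube weights
    A = np.column_stack([np.ones(k), x_hist[idx] - x_query])
    W = np.diag(w)
    beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y_hist[idx])
    return beta[0]                                 # local model value at query

rng = np.random.default_rng(2)
x = rng.uniform(-3, 3, 500)                        # historical inputs
y = np.sin(x) + 0.1 * rng.normal(size=x.size)      # unknown nonlinear system
print(lpf_predict(1.5, x, y), "vs true", np.sin(1.5))
```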

  19. Learning Action Models: Qualitative Approach

    DEFF Research Database (Denmark)

    Bolander, Thomas; Gierasimczuk, Nina

    2015-01-01

    In dynamic epistemic logic, actions are described using action models. In this paper we introduce a framework for studying learnability of action models from observations. We present first results concerning propositional action models. First we check two basic learnability criteria: finite identifiability (conclusively inferring the appropriate action model in finite time) and identifiability in the limit (inconclusive convergence to the right action model). We show that deterministic actions are finitely identifiable, while non-deterministic actions require more learning power: they are identifiable in the limit. We then move on to a particular learning method, which proceeds via restriction of a space of events within a learning-specific action model. This way of learning closely resembles the well-known update method from dynamic epistemic logic. We introduce several different learning …

  20. Building Water Models, A Different Approach

    CERN Document Server

    Izadi, Saeed; Onufriev, Alexey V

    2014-01-01

    Simplified, classical models of water are an integral part of atomistic molecular simulations, especially in biology and chemistry where hydration effects are critical. Yet, despite several decades of effort, these models are still far from perfect. Presented here is an alternative approach to constructing point-charge water models, currently the most commonly used type. In contrast to the conventional approach, we do not impose any geometry constraints on the model other than symmetry. Instead, we optimize the distribution of point charges to best describe the "electrostatics" of the water molecule, which is key to many unusual properties of liquid water. The search for the optimal charge distribution is performed in a 2D parameter space of the key lowest multipole moments of the model, to find the best fit to a small set of bulk water properties at room temperature. A virtually exhaustive search is enabled via analytical equations that relate the charge distribution to the multipole moments. The resulting "optimal"...
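
    The central quantities here, the lowest multipole moments of a point-charge distribution, are straightforward to compute analytically. The sketch below evaluates the dipole and traceless quadrupole of a generic 3-site water geometry with TIP3P-like charges; these values are used only for illustration and are not the "optimal" model of the paper.

```python
import numpy as np

q = np.array([-0.834, 0.417, 0.417])        # charges in e (O, H, H), TIP3P-like
# Positions in Angstrom: O at origin, O-H = 0.9572 A, H-O-H = 104.52 deg
half = np.radians(104.52) / 2.0
r = np.array([[0.0, 0.0, 0.0],
              [0.9572 * np.sin(half), 0.0, 0.9572 * np.cos(half)],
              [-0.9572 * np.sin(half), 0.0, 0.9572 * np.cos(half)]])

# Dipole moment: sum of q_i * r_i (1 e*Angstrom = 4.803 Debye)
dipole = (q[:, None] * r).sum(axis=0)
print("dipole moment:", np.linalg.norm(dipole) * 4.803, "Debye")

# Traceless quadrupole tensor, e*Angstrom^2
quad = np.zeros((3, 3))
for qi, ri in zip(q, r):
    quad += qi * (3.0 * np.outer(ri, ri) - np.dot(ri, ri) * np.eye(3)) / 2.0
print("quadrupole tensor:\n", quad)
```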

  1. Towards new approaches in phenological modelling

    Science.gov (United States)

    Chmielewski, Frank-M.; Götz, Klaus-P.; Rawel, Harshard M.; Homann, Thomas

    2014-05-01

    Modelling of phenological stages has been based on temperature sums for many decades, describing both the chilling and the forcing requirement of woody plants until the beginning of leafing or flowering. Parts of this approach go back to Reaumur (1735), who originally proposed the concept of growing degree-days. Now, there is a growing body of opinion that asks for new methods in phenological modelling and more in-depth studies on dormancy release of woody plants. This requirement is easily understandable if we consider the wide application of phenological models, which can even affect the results of climate models. To this day, a number of parameters in phenological models still need to be optimised on observations, although some basic physiological knowledge of the chilling and forcing requirement of plants is already considered in these approaches (semi-mechanistic models). Limiting for a fundamental improvement of these models is the lack of knowledge about the course of dormancy in woody plants, which cannot be directly observed and which is also insufficiently described in the literature. Modern metabolomic methods provide a solution for this problem and allow both the validation of currently used phenological models and the development of mechanistic approaches. In order to develop this kind of model, changes in metabolites (concentration, temporal course) must be set in relation to the variability of environmental (steering) parameters (weather, day length, etc.). This necessarily requires multi-year (3-5 yr) and high-resolution (weekly probes between autumn and spring) data. The feasibility of this approach has already been tested in a 3-year pilot study on sweet cherries. The suggested methodology is not limited to the flowering of fruit trees; it can also be applied to tree species of the natural vegetation, where even greater deficits in phenological modelling exist.
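
    For readers unfamiliar with the temperature-sum concept the text builds on, the sketch below accumulates growing degree-days until a forcing threshold is met. The base temperature, the critical forcing sum and the temperature series are invented illustration values, not calibrated model parameters.

```python
def day_of_forcing_fulfilment(daily_mean_temps, t_base=5.0, f_crit=150.0):
    """Return the day index on which accumulated forcing reaches F*."""
    forcing = 0.0
    for day, t in enumerate(daily_mean_temps):
        forcing += max(0.0, t - t_base)   # degree-days above the base temp
        if forcing >= f_crit:
            return day
    return None                           # threshold not reached

# Example: a synthetic late-winter/spring warming course
temps = [2 + 0.15 * d for d in range(200)]
print("predicted onset day:", day_of_forcing_fulfilment(temps))
```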

  2. Appraisal of geodynamic inversion results: a data mining approach

    Science.gov (United States)

    Baumann, T. S.

    2016-11-01

    Bayesian sampling-based inversions require many thousands or even millions of forward models, depending on how nonlinear or non-unique the inverse problem is, and how many unknowns are involved. The result of such a probabilistic inversion is not a single `best-fit' model, but rather a probability distribution that is represented by the entire model ensemble. Often, a geophysical inverse problem is non-unique, and the corresponding posterior distribution is multimodal, meaning that the distribution consists of clusters with similar models that represent the observations equally well. In these cases, we would like to visualize the characteristic model properties within each of these clusters of models. However, even for a moderate number of inversion parameters, a manual appraisal for a large number of models is not feasible. This poses the question of whether it is possible to extract end-member models that represent each of the best-fit regions including their uncertainties. Here, I show how a machine learning tool can be used to characterize end-member models, including their uncertainties, from a complete model ensemble that represents a posterior probability distribution. The model ensemble used here results from a nonlinear geodynamic inverse problem, where rheological properties of the lithosphere are constrained from multiple geophysical observations. It is demonstrated that by taking vertical cross-sections through the effective viscosity structure of each of the models, the entire model ensemble can be classified into four end-member model categories that have a similar effective viscosity structure. These classification results are helpful to explore the non-uniqueness of the inverse problem and can be used to compute representative data fits for each of the end-member models. Conversely, these insights also reveal how new observational constraints could reduce the non-uniqueness. The method is not limited to geodynamic applications and a generalized MATLAB
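
    The appraisal step described above can be sketched with off-the-shelf clustering (assuming scikit-learn and a synthetic stand-in ensemble; the paper's actual tool and data differ): each ensemble member is reduced to a vertical log10-viscosity profile, and the profiles are grouped into end-member categories.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    n_models, n_depths = 5000, 60
    # Stand-in ensemble: one log10-viscosity profile per accepted model.
    profiles = rng.normal(21.0, 1.5, size=(n_models, n_depths))

    kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(profiles)
    for k in range(4):
        members = profiles[kmeans.labels_ == k]
        print(f"end-member {k}: {len(members)} models, "
              f"mean log10(eta) = {members.mean():.2f} +/- {members.std():.2f}")
    ```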

  3. New results in the quantum statistical approach to parton distributions

    CERN Document Server

    Soffer, Jacques; Bourrely, Claude

    2014-01-01

    We describe the quantum statistical approach to parton distributions, which allows one to obtain simultaneously the unpolarized distributions and the helicity distributions. We present some recent results, in particular related to the nucleon spin structure in QCD. Future measurements will provide challenging checks of the validity of this novel physical framework.

  4. A Bayesian Shrinkage Approach for AMMI Models.

    Science.gov (United States)

    da Silva, Carlos Pereira; de Oliveira, Luciano Antonio; Nuvunga, Joel Jorge; Pamplona, Andrezza Kéllen Alves; Balestre, Marcio

    2015-01-01

    Linear-bilinear models, especially the additive main effects and multiplicative interaction (AMMI) model, are widely applicable to genotype-by-environment interaction (GEI) studies in plant breeding programs. These models allow a parsimonious modeling of GE interactions, retaining a small number of principal components in the analysis. However, one aspect of the AMMI model that is still debated is the selection criteria for determining the number of multiplicative terms required to describe the GE interaction pattern. Shrinkage estimators have been proposed as selection criteria for the GE interaction components. In this study, a Bayesian approach was combined with the AMMI model with shrinkage estimators for the principal components. A total of 55 maize genotypes were evaluated in nine different environments using a complete blocks design with three replicates. The results show that the traditional Bayesian AMMI model produces low shrinkage of singular values but avoids the usual pitfalls in determining the credible intervals in the biplot. On the other hand, Bayesian shrinkage AMMI models have difficulty with the credible interval for model parameters, but produce stronger shrinkage of the principal components, converging to GE matrices that have more shrinkage than those obtained using mixed models. This characteristic allowed more parsimonious models to be chosen, and resulted in models being selected that were similar to those obtained by the Cornelius F-test (α = 0.05) in traditional AMMI models and cross validation based on leave-one-out. This characteristic allowed more parsimonious models to be chosen and more GEI pattern retained on the first two components. The resulting model chosen by posterior distribution of singular value was also similar to those produced by the cross-validation approach in traditional AMMI models. Our method enables the estimation of credible interval for AMMI biplot plus the choice of AMMI model based on direct posterior
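
    For orientation, the deterministic core on which the Bayesian shrinkage acts can be sketched as follows (synthetic data; the study's 55 x 9 genotype-by-environment layout is reused only as an array shape): main effects are removed and the interaction residual is decomposed by SVD, with shrinkage applied to the singular values.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    Y = rng.normal(5.0, 1.0, size=(55, 9))        # genotype x environment means

    grand = Y.mean()
    gen = Y.mean(axis=1, keepdims=True) - grand   # genotype main effects
    env = Y.mean(axis=0, keepdims=True) - grand   # environment main effects
    GE = Y - grand - gen - env                    # interaction residuals

    U, s, Vt = np.linalg.svd(GE, full_matrices=False)
    k = 2                                         # multiplicative terms retained
    GE_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
    print("interaction variance captured by 2 terms:",
          (s[:k]**2).sum() / (s**2).sum())
    ```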

  5. A Bayesian Shrinkage Approach for AMMI Models.

    Directory of Open Access Journals (Sweden)

    Carlos Pereira da Silva

    Full Text Available Linear-bilinear models, especially the additive main effects and multiplicative interaction (AMMI) model, are widely applicable to genotype-by-environment interaction (GEI) studies in plant breeding programs. These models allow a parsimonious modeling of GE interactions, retaining a small number of principal components in the analysis. However, one aspect of the AMMI model that is still debated is the selection criteria for determining the number of multiplicative terms required to describe the GE interaction pattern. Shrinkage estimators have been proposed as selection criteria for the GE interaction components. In this study, a Bayesian approach was combined with the AMMI model with shrinkage estimators for the principal components. A total of 55 maize genotypes were evaluated in nine different environments using a complete blocks design with three replicates. The results show that the traditional Bayesian AMMI model produces low shrinkage of singular values but avoids the usual pitfalls in determining the credible intervals in the biplot. On the other hand, Bayesian shrinkage AMMI models have difficulty with the credible interval for model parameters, but produce stronger shrinkage of the principal components, converging to GE matrices that have more shrinkage than those obtained using mixed models. This characteristic allowed more parsimonious models to be chosen, and resulted in models being selected that were similar to those obtained by the Cornelius F-test (α = 0.05) in traditional AMMI models and cross validation based on leave-one-out. This characteristic allowed more parsimonious models to be chosen and more GEI pattern retained on the first two components. The resulting model chosen by posterior distribution of singular value was also similar to those produced by the cross-validation approach in traditional AMMI models. Our method enables the estimation of credible interval for AMMI biplot plus the choice of AMMI model based on direct

  6. Interpreting Results from the Multinomial Logit Model

    DEFF Research Database (Denmark)

    Wulff, Jesper

    2015-01-01

    This article provides guidelines and illustrates practical steps necessary for an analysis of results from the multinomial logit model (MLM). The MLM is a popular model in the strategy literature because it allows researchers to examine strategic choices with multiple outcomes. However, there see...
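
    One of the practical steps the article points toward is reporting predicted probabilities rather than raw coefficients. A minimal sketch with made-up coefficients for three strategic choices (outcome A as the base category):

    ```python
    import numpy as np

    beta = {"A": np.array([0.0, 0.0]),     # base outcome (coefficients fixed at 0)
            "B": np.array([-0.4, 0.8]),
            "C": np.array([0.3, -0.5])}

    def choice_probabilities(x):
        """MLM predicted probabilities at covariate profile x (softmax form)."""
        scores = {k: np.exp(b @ x) for k, b in beta.items()}
        total = sum(scores.values())
        return {k: v / total for k, v in scores.items()}

    print(choice_probabilities(np.array([1.0, 0.5])))
    ```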

  7. Engineering Glass Passivation Layers -Model Results

    Energy Technology Data Exchange (ETDEWEB)

    Skorski, Daniel C.; Ryan, Joseph V.; Strachan, Denis M.; Lepry, William C.

    2011-08-08

    The immobilization of radioactive waste into glass waste forms is a baseline process of nuclear waste management not only in the United States, but worldwide. The rate of radionuclide release from these glasses is a critical measure of the quality of the waste form. Over long-term tests and using extrapolations of ancient analogues, it has been shown that well designed glasses exhibit a dissolution rate that quickly decreases to a slow residual rate for the lifetime of the glass. The mechanistic cause of this decreased corrosion rate is a subject of debate, with one of the major theories suggesting that the decrease is caused by the formation of corrosion products in such a manner as to present a diffusion barrier on the surface of the glass. Although there is much evidence of this type of mechanism, there has been no attempt to engineer the effect to maximize the passivating qualities of the corrosion products. This study represents the first attempt to engineer the creation of passivating phases on the surface of glasses. Our approach utilizes interactions between the dissolving glass and elements from the disposal environment to create impermeable capping layers. By drawing from other corrosion studies in areas where passivation layers have been successfully engineered to protect the bulk material, we present here a report on mineral phases that are likely to have a morphological tendency to encrust the surface of the glass. Our modeling has focused on using the AFCI glass system in a carbonate, sulfate, and phosphate rich environment. We evaluate the minerals predicted to form to determine the likelihood of the formation of a protective layer on the surface of the glass. We have also modeled individual ions in solutions vs. pH and the addition of aluminum and silicon. These results allow us to understand the pH and ion concentration dependence of mineral formation. We have determined that iron minerals are likely to form a complete incrustation layer and we plan

  8. Validation of Modeling Flow Approaching Navigation Locks

    Science.gov (United States)

    2013-08-01

    [List-of-figures residue from the source report; recoverable captions: Figure 9, Tools and instrumentation, bracket attached to rail; Figure 10, Tools and instrumentation, direction vernier; Figure 11, Plan A lock approach, upstream approach. Numerical model...]

  9. Model Mapping Approach Based on Ontology Semantics

    Directory of Open Access Journals (Sweden)

    Jinkui Hou

    2013-09-01

    Full Text Available The mapping relations between different models are the foundation for model transformation in model-driven software development. On the basis of ontology semantics, model mappings between different levels are classified by using the structural semantics of modeling languages. The general definition process for mapping relations is explored, and the principles of structure mapping are proposed subsequently. The approach is further illustrated by the mapping relations from the class model of an object-oriented modeling language to C programming code. The application research shows that the approach provides theoretical guidance for the realization of model mapping, and thus can effectively support model-driven software development.

  10. Scientific Theories, Models and the Semantic Approach

    Directory of Open Access Journals (Sweden)

    Décio Krause

    2007-12-01

    Full Text Available According to the semantic view, a theory is characterized by a class of models. In this paper, we examine critically some of the assumptions that underlie this approach. First, we recall that models are models of something. Thus we cannot leave completely aside the axiomatization of the theories under consideration, nor can we ignore the metamathematics used to elaborate these models, for changes in the metamathematics often impose restrictions on the resulting models. Second, based on a parallel between van Fraassen’s modal interpretation of quantum mechanics and Skolem’s relativism regarding set-theoretic concepts, we introduce a distinction between relative and absolute concepts in the context of the models of a scientific theory. And we discuss the significance of that distinction. Finally, by focusing on contemporary particle physics, we raise the question: since there is no generally accepted unification of the parts of the standard model (namely, QED and QCD), we have no theory, in the usual sense of the term. This poses a difficulty: if there is no theory, how can we speak of its models? What are the latter models of? We conclude by noting that it is unclear that the semantic view can be applied to contemporary physical theories.

  11. Merging Digital Surface Models Implementing Bayesian Approaches

    Science.gov (United States)

    Sadeq, H.; Drummond, J.; Li, Z.

    2016-06-01

    In this research different DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian Approach. The implemented data have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). It is deemed preferable to use a Bayesian Approach when the data obtained from the sensors are limited and it is difficult or very costly to obtain many measurements; the problem of the lack of data can thus be solved by introducing a priori estimations of data. To infer the prior data, it is assumed that the roofs of the buildings are specified as smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimations, GNSS RTK measurements have been collected in the field, which are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West-End of Glasgow containing different kinds of buildings, such as flat roofed and hipped roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results have shown that the model was successfully able to improve the quality of the DSMs and to improve some characteristics, such as the roof surfaces, which consequently led to better representations. In addition to that, the developed model has been compared with the well established Maximum Likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied on DSMs that were derived from satellite imagery, it can be applied to any other sourced DSMs.
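
    The probabilistic core of such a merge can be sketched as per-cell Gaussian fusion, with each DSM height weighted by its precision (the variances below are assumptions; the paper additionally derives priors from local entropy over roof smoothness).

    ```python
    import numpy as np

    def merge_dsm(z1, var1, z2, var2):
        """Precision-weighted (Gaussian/Bayesian) fusion of two height grids."""
        w1, w2 = 1.0 / var1, 1.0 / var2
        z = (w1 * z1 + w2 * z2) / (w1 + w2)
        return z, 1.0 / (w1 + w2)          # fused heights and posterior variance

    z_a = np.array([[10.2, 10.4], [11.0, 11.3]])   # e.g. WorldView-1 heights (m)
    z_b = np.array([[10.0, 10.5], [10.8, 11.6]])   # e.g. Pleiades heights (m)
    merged, var = merge_dsm(z_a, 0.25, z_b, 0.16)
    print(merged, var)
    ```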

  12. MERGING DIGITAL SURFACE MODELS IMPLEMENTING BAYESIAN APPROACHES

    Directory of Open Access Journals (Sweden)

    H. Sadeq

    2016-06-01

    Full Text Available In this research different DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian Approach. The implemented data have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). It is deemed preferable to use a Bayesian Approach when the data obtained from the sensors are limited and it is difficult or very costly to obtain many measurements; the problem of the lack of data can thus be solved by introducing a priori estimations of data. To infer the prior data, it is assumed that the roofs of the buildings are specified as smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimations, GNSS RTK measurements have been collected in the field, which are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West-End of Glasgow containing different kinds of buildings, such as flat roofed and hipped roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results have shown that the model was successfully able to improve the quality of the DSMs and to improve some characteristics, such as the roof surfaces, which consequently led to better representations. In addition to that, the developed model has been compared with the well established Maximum Likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied on DSMs that were derived from satellite imagery, it can be applied to any other sourced DSMs.

  13. Hydraulic fracture model comparison study: Complete results

    Energy Technology Data Exchange (ETDEWEB)

    Warpinski, N.R. [Sandia National Labs., Albuquerque, NM (United States); Abou-Sayed, I.S. [Mobil Exploration and Production Services (United States); Moschovidis, Z. [Amoco Production Co. (US); Parker, C. [CONOCO (US)

    1993-02-01

    Large quantities of natural gas exist in low permeability reservoirs throughout the US. Characteristics of these reservoirs, however, make production difficult and often uneconomic, so stimulation is required. Because of the diversity of applications, hydraulic fracture design models must be able to account for widely varying rock properties, reservoir properties, in situ stresses, fracturing fluids, and proppant loads. As a result, fracture simulation has emerged as a highly complex endeavor that must be able to describe many different physical processes. The objective of this study was to develop a comparative study of hydraulic-fracture simulators in order to provide stimulation engineers with the necessary information to make rational decisions on the type of models most suited for their needs. This report compares the fracture modeling results of twelve different simulators, some of them run in different modes, for eight separate design cases. Comparisons of length, width, height, net pressure, maximum width at the wellbore, average width at the wellbore, and average width in the fracture have been made, both for the final geometry and as a function of time. For the models in this study, differences in fracture length, height and width are often greater than a factor of two. In addition, several comparisons of the same model with different options show a large variability in model output depending upon the options chosen. Two comparisons were made of the same model run by different companies; in both cases the agreement was good. 41 refs., 54 figs., 83 tabs.

  14. Nuclear level density: Shell-model approach

    Science.gov (United States)

    Sen'kov, Roman; Zelevinsky, Vladimir

    2016-06-01

    Knowledge of the nuclear level density is necessary for understanding various reactions, including those in the stellar environment. Usually the combinatorics of a Fermi gas plus pairing is used for finding the level density. Recently a practical algorithm avoiding diagonalization of huge matrices was developed for calculating the density of many-body nuclear energy levels with certain quantum numbers for a full shell-model Hamiltonian. The underlying physics is that of quantum chaos and intrinsic thermalization in a closed system of interacting particles. We briefly explain this algorithm and, when possible, demonstrate the agreement of the results with those derived from exact diagonalization. The resulting level density is much smoother than that coming from conventional mean-field combinatorics. We study the role of various components of residual interactions in the process of thermalization, stressing the influence of incoherent collision-like processes. The shell-model results for the traditionally used parameters are also compared with standard phenomenological approaches.

  15. Performance results of HESP physical model

    Science.gov (United States)

    Chanumolu, Anantha; Thirupathi, Sivarani; Jones, Damien; Giridhar, Sunetra; Grobler, Deon; Jakobsson, Robert

    2017-02-01

    As a continuation of the published work on the model-based calibration technique with HESP (Hanle Echelle Spectrograph) as a case study, in this paper we present the performance results of the technique. We also describe how the open parameters were chosen in the model for optimization, the accuracy of the glass data, and the handling of discrepancies. It is observed through simulations that discrepancies in the glass data can be identified but not quantified, so accurate glass data, which can be obtained from the glass manufacturers, are important. The model's performance in various aspects is presented using the ThAr calibration frames from HESP during its pre-shipment tests. We discuss the accuracy of the model predictions, the comparison of its wavelength calibration with conventional empirical fitting, the behaviour of the open parameters during optimization, the model's ability to track instrumental drifts in the spectrum, and the performance of the double fibres. It is observed that the optimized model is able to predict to a high accuracy the drifts in the spectrum caused by environmental fluctuations. It is also observed that the pattern of spectral drifts across the 2D spectrum, which varies from image to image, is predictable with the optimized model. We also discuss the possible science cases where the model can contribute.

  16. Experimental Results on Statistical Approaches to Page Replacement Policies

    Energy Technology Data Exchange (ETDEWEB)

    LEUNG,VITUS J.; IRANI,SANDY

    2000-12-08

    This paper investigates the question of what statistical information about a memory request sequence is useful in making page replacement decisions. Our starting point is the Markov Request Model for page request sequences. Although the utility of modeling page request sequences with the Markov model has recently been put into doubt, we find that two previously suggested algorithms (Maximum Hitting Time and Dominating Distribution), which are based on the Markov model, work well on the trace data used in this study. Interestingly, both of these algorithms perform equally well despite the fact that the theoretical results for the two algorithms differ dramatically. We then develop succinct characteristics of memory access patterns in an attempt to approximate the simpler of the two algorithms. Finally, we investigate how to collect these characteristics in an online manner in order to have a purely online algorithm.
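
    As a rough illustration of how a Markov request model can drive eviction decisions (a simplified stand-in for the Maximum Hitting Time policy, not the paper's implementation), one can evict the resident page least likely to be requested within a short horizon of the current page:

    ```python
    import numpy as np

    def choose_victim(P, current, resident, horizon=8):
        """Evict the resident page with the smallest chance of an early request."""
        state = np.zeros(P.shape[0])
        state[current] = 1.0
        reach = np.zeros(P.shape[0])
        for _ in range(horizon):
            state = state @ P                  # distribution after one more request
            reach = np.maximum(reach, state)   # crude proxy for hitting probability
        candidates = [p for p in resident if p != current]
        return min(candidates, key=lambda p: reach[p])

    P = np.array([[0.1, 0.6, 0.3],     # page-to-page request transition matrix
                  [0.5, 0.1, 0.4],
                  [0.3, 0.3, 0.4]])
    print(choose_victim(P, current=0, resident=[0, 1, 2]))
    ```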

  17. Geometrical approach to fluid models

    NARCIS (Netherlands)

    Kuvshinov, B. N.; Schep, T. J.

    1997-01-01

    Differential geometry based upon the Cartan calculus of differential forms is applied to investigate invariant properties of equations that describe the motion of continuous media. The main feature of this approach is that physical quantities are treated as geometrical objects. The geometrical

  19. Model based feature fusion approach

    NARCIS (Netherlands)

    Schwering, P.B.W.

    2001-01-01

    In recent years different sensor data fusion approaches have been analyzed and evaluated in the field of mine detection. In various studies comparisons have been made between different techniques. Although claims can be made for advantages for using certain techniques, until now there has been no si

  20. A physiological production model for cacao : results of model simulations

    NARCIS (Netherlands)

    Zuidema, P.A.; Leffelaar, P.A.

    2002-01-01

    CASE2 is a physiological model for cocoa (Theobroma cacao L.) growth and yield. This report introduces the CAcao Simulation Engine for water-limited production in a non-technical way and presents simulation results obtained with the model.

  2. Modelling rainfall erosion resulting from climate change

    Science.gov (United States)

    Kinnell, Peter

    2016-04-01

    It is well known that soil erosion leads to agricultural productivity decline and contributes to water quality decline. The current widely used models for determining soil erosion for management purposes in agriculture focus on long term (~20 years) average annual soil loss and are not well suited to determining variations that occur over short timespans and as a result of climate change. Soil loss resulting from rainfall erosion is directly dependent on the product of runoff and sediment concentration both of which are likely to be influenced by climate change. This presentation demonstrates the capacity of models like the USLE, USLE-M and WEPP to predict variations in runoff and erosion associated with rainfall events eroding bare fallow plots in the USA with a view to modelling rainfall erosion in areas subject to climate change.
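
    For reference, the USLE family referred to above predicts long-term average annual soil loss as a simple product of factors; the values below are illustrative, not site data.

    ```python
    def usle(R, K, LS, C, P):
        """A = R*K*LS*C*P: average annual soil loss (t/ha/yr in metric units)."""
        return R * K * LS * C * P

    # Rainfall erosivity, soil erodibility, slope length-steepness,
    # cover-management, and support-practice factors (illustrative).
    print(usle(R=1200.0, K=0.030, LS=1.4, C=0.25, P=1.0))
    ```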

  3. Global energy modeling - A biophysical approach

    Energy Technology Data Exchange (ETDEWEB)

    Dale, Michael

    2010-09-15

    This paper contrasts the standard economic approach to energy modelling with energy models using a biophysical approach. Neither of these approaches includes changing energy-returns-on-investment (EROI) due to declining resource quality or the capital intensive nature of renewable energy sources. Both of these factors will become increasingly important in the future. An extension to the biophysical approach is outlined which encompasses a dynamic EROI function that explicitly incorporates technological learning. The model is used to explore several scenarios of long-term future energy supply especially concerning the global transition to renewable energy sources in the quest for a sustainable energy system.
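
    A minimal sketch of what a dynamic EROI function with technological learning might look like (the functional form and constants are assumptions, not the paper's model): returns improve with cumulative production through a learning curve and decline as resource quality falls.

    ```python
    import numpy as np

    def eroi(Q, eroi0=30.0, learning_rate=0.10, depletion_rate=0.02):
        """EROI after cumulative output Q (arbitrary energy units)."""
        gain = (1.0 + Q) ** (-np.log2(1.0 - learning_rate))  # learning-by-doing
        loss = np.exp(-depletion_rate * Q)                   # declining quality
        return eroi0 * gain * loss

    for Q in (0.0, 10.0, 50.0, 100.0):
        print(f"Q={Q:5.1f}  EROI={eroi(Q):6.2f}")
    ```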

  4. Simulation Modeling of Radio Direction Finding Results

    Directory of Open Access Journals (Sweden)

    K. Pelikan

    1994-12-01

    Full Text Available It is sometimes difficult to determine analytically the error probabilities of direction finding results when evaluating algorithms of practical interest. Probabilistic simulation models are described in this paper that can be used to study the error performance of new direction finding systems or of geographical modifications to existing configurations.

  5. A POMDP approach to Affective Dialogue Modeling

    NARCIS (Netherlands)

    Bui Huu Trung, B.H.T.; Poel, Mannes; Nijholt, Antinus; Zwiers, Jakob; Keller, E.; Marinaro, M.; Bratanic, M.

    2007-01-01

    We propose a novel approach to developing a dialogue model that is able to take into account some aspects of the user's affective state and to act appropriately. Our dialogue model uses a Partially Observable Markov Decision Process approach with observations composed of the observed user's

  6. The chronic diseases modelling approach

    NARCIS (Netherlands)

    Hoogenveen RT; Hollander AEM de; Genugten MLL van; CCM

    1998-01-01

    A mathematical model structure is described that can be used to simulate the changes of the Dutch public health state over time. The model is based on the concept of demographic and epidemiologic processes (events) and is mathematically based on the lifetable method. The population is divided over s

  7. Gradual approach to refinement of the nasal tip: surgical results

    Directory of Open Access Journals (Sweden)

    Thiago Bittencourt Ottoni de Carvalho

    2015-02-01

    Full Text Available Introduction: The complexity of the nasal tip structures and the impact of surgical maneuvers make the prediction of the final outcome very difficult. Therefore, no single technique is enough to correct the several anatomical presentations, and adequate preoperative planning represents the basis of rhinoplasty. Objective: To present the results of rhinoplasty using a gradual surgical approach to nasal tip definition based on anatomical features, and to evaluate the degree of patient satisfaction after the surgical procedure. Methods: A longitudinal retrospective cohort study of the medical charts of 533 patients of both genders who underwent rhinoplasty from January 2005 to January 2012 was performed. Cases were allocated into seven groups: (1) no surgery on the nasal tip; (2) interdomal breakup; (3) cephalic trim; (4) domal suture; (5) shield-shaped graft; (6) vertical dome division; (7) replacement of the lower lateral cartilages. Results: Group 4 was the most prevalent. The satisfaction rate was 96% and revision surgery occurred in 4% of cases. Conclusion: The protocol used allowed the implementation of a gradual surgical approach to nasal tip definition in line with the nasal anatomical characteristics, with a high rate of patient satisfaction with the surgical outcome and a low rate of revision.

  8. Learning Actions Models: Qualitative Approach

    DEFF Research Database (Denmark)

    Bolander, Thomas; Gierasimczuk, Nina

    2015-01-01

    First we check two basic learnability criteria: finite identifiability (conclusively inferring the appropriate action model in finite time) and identifiability in the limit (inconclusive convergence to the right action model). We show that deterministic actions are finitely identifiable, while non-deterministic actions require more learning power: they are identifiable in the limit. We then move on to a particular learning method, which proceeds via restriction of a space of events within a learning-specific action model. This way of learning closely resembles the well-known update method from dynamic epistemic logic. We introduce several different learning...

  9. A Unified Approach to Modeling and Programming

    DEFF Research Database (Denmark)

    Madsen, Ole Lehrmann; Møller-Pedersen, Birger

    2010-01-01

    SIMULA was a language for modeling and programming and provided a unified approach to modeling and programming, in contrast to methodologies based on structured analysis and design. The current development seems to be going in the direction of separation of modeling and programming. The goal of this paper is to go back to the future and get inspiration from SIMULA and propose a unified approach. In addition to reintroducing the contributions of SIMULA and the Scandinavian approach to object-oriented programming, we do this by discussing a number of issues in modeling and programming and argue why we...

  10. Relaxation dynamics of Sierpinski hexagon fractal polymer: Exact analytical results in the Rouse-type approach and numerical results in the Zimm-type approach

    Science.gov (United States)

    Jurjiu, Aurel; Galiceanu, Mircea; Farcasanu, Alexandru; Chiriac, Liviu; Turcu, Flaviu

    2016-12-01

    In this paper, we focus on the relaxation dynamics of Sierpinski hexagon fractal polymer. The relaxation dynamics of this fractal polymer is investigated in the framework of the generalized Gaussian structure model using both Rouse and Zimm approaches. In the Rouse-type approach, by performing real-space renormalization transformations, we determine analytically the complete eigenvalue spectrum of the connectivity matrix. Based on the eigenvalues obtained through iterative algebraic relations we calculate the averaged monomer displacement and the mechanical relaxation moduli (storage modulus and loss modulus). The evaluation of the dynamical properties in the Rouse-type approach reveals that they obey scaling in the intermediate time/frequency domain. In the Zimm-type approach, which includes the hydrodynamic interactions, the relaxation quantities do not show scaling. The theoretical findings with respect to scaling in the intermediate domain of the relaxation quantities are well supported by experimental results.
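
    The GGS recipe mentioned above reduces to linear algebra: once the eigenvalues of the connectivity matrix are known, the mechanical moduli follow directly. A sketch using a simple chain Laplacian as a stand-in for the Sierpinski-hexagon spectrum (prefactors and units omitted):

    ```python
    import numpy as np

    # Connectivity (Laplacian) matrix of a linear chain with free ends.
    N = 64
    L = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
    L[0, 0] = L[-1, -1] = 1.0

    lam = np.linalg.eigvalsh(L)[1:]        # drop the zero (translation) mode

    for w in np.logspace(-3, 3, 7):        # reduced frequency sweep
        x = w / (2.0 * lam)
        G1 = np.mean(x**2 / (1.0 + x**2))  # reduced storage modulus G'
        G2 = np.mean(x / (1.0 + x**2))     # reduced loss modulus G''
        print(f"w={w:9.3f}  G'={G1:.4f}  G''={G2:.4f}")
    ```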

  11. A New Approach for Magneto-Static Hysteresis Behavioral Modeling

    DEFF Research Database (Denmark)

    Astorino, Antonio; Swaminathan, Madhavan; Antonini, Giulio

    2016-01-01

    In this paper, a new behavioral modeling approach for magneto-static hysteresis is presented. Many accurate models are currently available, but none of them seems to be able to correctly reproduce all the possible B-H paths with low computational cost. By contrast, the approach proposed... Good agreement is achieved when comparing the measured and simulated results.

  12. Siberia Integrated Regional Study megaproject: approaches, first results and challenges

    Science.gov (United States)

    Gordov, E. P.; Vaganov, E. A.

    2010-12-01

    Siberia Integrated Regional Study (SIRS, http://sirs.scert.ru/en/) is a NEESPI megaproject coordinating national and international activity in the region in line with the Earth System Science Program approach, whose overall objectives are to understand the impact of global change on on-going regional climate and ecosystem dynamics; to study potential future changes in both; and to estimate the possible influence of those processes on the dynamics of the whole Earth System. The need for SIRS arises from the accelerated warming occurring in Siberia, the complexity of on-going and potential land-surface processes sharpened by the inherent hydrology pattern and the presence of permafrost, and the lack of reliable high-resolution meteorological and climatic modeling data. The SIRS approaches include coordination of national and international projects of different scales, capacity building targeted at thematic education and training of early-career researchers, and development of the distributed information-computational infrastructure required to support multidisciplinary teams of specialists performing cooperative work with tools for sharing data, models and knowledge. Coordination within SIRS projects is devoted to the major regional and global risks rising with regional environmental changes and is currently concentrated on three interrelated problems, whose solution has strong regional environmental and socio-economic impacts and is very important for understanding potential change of the whole Earth System dynamics: permafrost border shift, which seriously threatens the oil and gas transport infrastructure and leads to additional carbon release; desert, steppe, forest and tundra ecosystem changes, which might vary the region's input into the global carbon cycle as well as provoke serious socio-economic consequences for the local population; and temperature/precipitation/hydrology regime changes, which might increase the risk of forest and peat fires, thus causing significant carbon release from the region under study. Some

  13. Segmentation Based Approach to Dynamic Page Construction from Search Engine Results

    CERN Document Server

    Kuppusamy, K S

    2012-01-01

    The results rendered by the search engines are mostly a linear snippet list. With the prolific increase in the dynamism of web pages there is a need for enhanced result lists from search engines in order to cope with the expectations of the users. This paper proposes a model for dynamic construction of a resultant page from various results fetched by the search engine, based on the web page segmentation approach. With the incorporation of personalization through the user profile during candidate segment selection, the enriched resultant page is constructed. The benefits of this approach include instant, one-shot navigation to relevant portions from various result items, in contrast to a linear page-by-page visit approach. The experiments conducted on the prototype model with various levels of users quantify the improvements in terms of the amount of relevant information fetched.

  14. Szekeres models: a covariant approach

    CERN Document Server

    Apostolopoulos, Pantelis S

    2016-01-01

    We exploit the 1+1+2 formalism to covariantly describe the inhomogeneous and anisotropic Szekeres models. It is shown that an average scale length can be defined covariantly, satisfying a 2d equation of motion driven by the effective gravitational mass (EGM) contained in the dust cloud. The contributions to the EGM are encoded in the energy density of the dust fluid and the free gravitational field $E_{ab}$. In addition, the notions of the Apparent and Absolute Apparent Horizons are briefly discussed and we give an alternative gauge-invariant form to define them in terms of the kinematical variables of the spacelike congruences. We argue that the proposed program can be used to express the Sachs optical equations in a covariant form and to analyze the confrontation of a spatially inhomogeneous irrotational overdense fluid model with the observational data.

  15. Matrix Model Approach to Cosmology

    CERN Document Server

    Chaney, A; Stern, A

    2015-01-01

    We perform a systematic search for rotationally invariant cosmological solutions to matrix models, or more specifically the bosonic sector of Lorentzian IKKT-type matrix models, in dimensions $d$ less than ten, specifically $d=3$ and $d=5$. After taking a continuum (or commutative) limit they yield $d-1$ dimensional space-time surfaces, with an attached Poisson structure, which can be associated with closed, open or static cosmologies. For $d=3$, we obtain recursion relations from which it is possible to generate rotationally invariant matrix solutions which yield open universes in the continuum limit. Specific examples of matrix solutions have also been found which are associated with closed and static two-dimensional space-times in the continuum limit. The solutions provide for a matrix resolution of cosmological singularities. The commutative limit reveals other desirable features, such as a solution describing a smooth transition from an initial inflation to a noninflationary era. Many of the $d=3$ soluti...

  16. A new approach to adaptive data models

    Directory of Open Access Journals (Sweden)

    Ion LUNGU

    2016-12-01

    Full Text Available Over the last decade, there has been a substantial increase in the volume and complexity of data we collect, store and process. We are now aware of the increasing demand for real time data processing in every continuous business process that evolves within the organization. We witness a shift from a traditional static data approach to a more adaptive model approach. This article aims to extend understanding in the field of data models used in information systems by examining how an adaptive data model approach for managing business processes can help organizations adapt on the fly and build dynamic capabilities to react in a dynamic environment.

  17. Modeling software behavior a craftsman's approach

    CERN Document Server

    Jorgensen, Paul C

    2009-01-01

    A common problem with most texts on requirements specifications is that they emphasize structural models to the near exclusion of behavioral models, focusing on what the software is rather than what it does. If they do cover behavioral models, the coverage is brief and usually focused on a single model. Modeling Software Behavior: A Craftsman's Approach provides detailed treatment of various models of software behavior that support early analysis, comprehension, and model-based testing. Based on the popular and continually evolving course on requirements specification models taught by the auth

  18. Multicomponent Equilibrium Models for Testing Geothermometry Approaches

    Energy Technology Data Exchange (ETDEWEB)

    Carl D. Palmer; Robert W. Smith; Travis L. McLing

    2013-02-01

    Geothermometry is an important tool for estimating deep reservoir temperature from the geochemical composition of shallower and cooler waters. The underlying assumption of geothermometry is that the waters collected from shallow wells and seeps maintain a chemical signature that reflects equilibrium in the deeper reservoir. Many of the geothermometers used in practice are based on correlations between water temperature and composition, or on thermodynamic calculations using a subset (typically silica, cations or cation ratios) of the dissolved constituents. An alternative approach is to use complete water compositions and equilibrium geochemical modeling to calculate the degree of disequilibrium (saturation index) for a large number of potential reservoir minerals as a function of temperature. We have constructed several “forward” geochemical models using The Geochemist’s Workbench to simulate the change in chemical composition of reservoir fluids as they migrate toward the surface. These models explicitly account for the formation (mass and composition) of a steam phase and equilibrium partitioning of volatile components (e.g., CO2, H2S, and H2) into the steam as a result of pressure decreases associated with upward fluid migration from depth. We use the synthetic data generated from these simulations to determine the advantages and limitations of various geothermometry and optimization approaches for estimating the likely conditions (e.g., temperature, pCO2) to which the water was exposed in the deep subsurface. We demonstrate the magnitude of errors that can result from boiling, loss of volatiles, and analytical error from sampling and instrumental analysis. The estimated reservoir temperatures for these scenarios are also compared to conventional geothermometers. These results can help improve estimation of geothermal resource temperature during exploration and early development.

  19. Current approaches to gene regulatory network modelling

    Directory of Open Access Journals (Sweden)

    Brazma Alvis

    2007-09-01

    Full Text Available Many different approaches have been developed to model and simulate gene regulatory networks. We propose the following categories for gene regulatory network models: network parts lists, network topology models, network control logic models, and dynamic models. Here we describe some examples for each of these categories. We study the topology of gene regulatory networks in yeast in more detail, comparing a direct network derived from transcription factor binding data and an indirect network derived from genome-wide expression data in mutants. Regarding network dynamics, we briefly describe discrete and continuous approaches to network modelling, then describe a hybrid model called the Finite State Linear Model and demonstrate that some simple network dynamics can be simulated in this model.

  20. Distributed simulation a model driven engineering approach

    CERN Document Server

    Topçu, Okan; Oğuztüzün, Halit; Yilmaz, Levent

    2016-01-01

    Backed by substantive case studies, the novel approach to software engineering for distributed simulation outlined in this text demonstrates the potent synergies between model-driven techniques, simulation, intelligent agents, and computer systems development.

  1. The Danish national passenger modelModel specification and results

    DEFF Research Database (Denmark)

    Rich, Jeppe; Hansen, Christian Overgaard

    2016-01-01

    The paper describes the structure of the new Danish National Passenger model and provides on this basis a general discussion of large-scale model design, cost-damping and model validation. The paper aims at providing three main contributions to the existing literature. Firstly, at the general level, the paper provides a description of a large-scale forecast model with a discussion of the linkage between population synthesis, demand and assignment. Secondly, the paper gives specific attention to model specification and in particular choice of functional form and cost-damping. Specifically we suggest a family of logarithmic spline functions and illustrate how it is applied in the model. Thirdly and finally, we evaluate model sensitivity and performance by evaluating the distance distribution and elasticities. In the paper we present results where the spline-function is compared with more traditional...
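
    As an illustration of cost damping with a logarithmic spline (a hypothetical one-knot form; the paper's actual spline family and estimated parameters are not reproduced here), cost sensitivity can be linear for short trips and damp to logarithmic beyond a breakpoint, continuous at the knot:

    ```python
    import numpy as np

    def damped_cost_utility(c, c0=50.0, beta_lin=-0.02, beta_log=-0.8):
        """Piecewise utility-of-cost: linear below knot c0, logarithmic above."""
        c = np.asarray(c, dtype=float)
        below = beta_lin * c
        above = beta_lin * c0 + beta_log * np.log(c / c0)  # continuous at c0
        return np.where(c <= c0, below, above)

    print(damped_cost_utility([10.0, 50.0, 200.0]))
    ```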

  2. One approach for Management by Objectives and Results in Scandinavia?

    DEFF Research Database (Denmark)

    Kristiansen, Mads Bøge

    2016-01-01

    Viewed from abroad, Denmark, Norway and Sweden look very similar. In the literature on public management reforms and performance management, these countries are frequently regarded as one, and the literature often refers to a specific Nordic or Scandinavian model. The aim of this paper is to empirically test the argument concerning the existence of one Nordic perspective on performance management. The paper presents a comparative study of Management by Objectives and Results (MBOR) in Prison and Probation Services, Food Safety, and Meteorology in Denmark, Norway and Sweden. The paper examines... in which MBOR is implemented. An important implication therefore is that it is unlikely that there is ‘one best way’ of managing or steering an agency, and MBOR will appear and function differently in different contexts.

  3. CMS standard model Higgs boson results

    Directory of Open Access Journals (Sweden)

    Garcia-Abia Pablo

    2013-11-01

    Full Text Available In July 2012 CMS announced the discovery of a new boson with properties resembling those of the long-sought Higgs boson. The analysis of the proton-proton collision data recorded by the CMS detector at the LHC, corresponding to integrated luminosities of 5.1 fb−1 at √s = 7 TeV and 19.6 fb−1 at √s = 8 TeV, confirms the Higgs-like nature of the new boson, with a signal strength associated with vector bosons and fermions consistent with the expectations for a standard model (SM) Higgs boson, and spin-parity clearly favouring the scalar nature of the new boson. In this note I review the updated results of the CMS experiment.

  4. Comparison of topic extraction approaches and their results

    NARCIS (Netherlands)

    Velden, Theresa; Boyack, Kevin W.; Gläser, Jochen; Koopman, Rob; Scharnhorst, Andrea; Wang, Shenghui

    2017-01-01

    This is the last paper in the Synthesis section of this special issue on ‘Same Data, Different Results’. We first provide a framework of how to describe and distinguish approaches to topic extraction

  5. A semiparametric approach to physiological flow models.

    Science.gov (United States)

    Verotta, D; Sheiner, L B; Ebling, W F; Stanski, D R

    1989-08-01

    By regarding sampled tissues in a physiological model as linear subsystems, the usual advantages of flow models are preserved while mitigating two of their disadvantages: (i) the need for assumptions regarding intratissue kinetics, and (ii) the need to simultaneously fit data from several tissues. To apply the linear systems approach, both arterial blood and (interesting) tissue drug concentrations must be measured. The body is modeled as having an arterial compartment (A) distributing drug to different linear subsystems (tissues), connected in a specific way by blood flow. The response (CA, with dimensions of concentration) of A is measured. Tissues receive input from A (and optionally from other tissues), and send output to the outside or to other parts of the body. The response (CT, the total amount of drug in the tissue (T) divided by the volume of T) from the T-th of such tissues, for example, is also observed. From linear systems theory, CT can be expressed as the convolution of CA with a disposition function, F(t) (with dimensions 1/time). The function F(t) depends on the (unknown) structure of T, but has certain other constant properties: the integral ∫0^∞ F(t) dt is the steady-state ratio of CT to CA, and the point F(0) is the clearance rate of drug from A to T divided by the volume of T. A formula for the clearance rate of drug from T to outside T can be derived. To estimate F(t) empirically, and thus mitigate disadvantage (i), we suggest that, first, a nonparametric (or parametric) function be fitted to CA data, yielding predicted values CA, and, second, the convolution integral of CA with F(t) be fitted to CT data using a deconvolution method. By so doing, each tissue's data are analyzed separately, thus mitigating disadvantage (ii). A method for system simulation is also proposed. The results of applying the approach to simulated data and to real thiopental data are reported.
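
    A numerical sketch of the central identity CT = CA ∗ F on a uniform time grid (the biexponential F and the arterial input below are illustrative choices, not the thiopental fits):

    ```python
    import numpy as np

    dt = 0.1
    t = np.arange(0.0, 30.0, dt)
    C_A = 5.0 * np.exp(-0.5 * t)                          # arterial input (mg/L)
    F = 0.8 * np.exp(-0.3 * t) + 0.2 * np.exp(-0.05 * t)  # disposition (1/min)

    # Discrete convolution approximates the integral of C_A(tau) * F(t - tau).
    C_T = np.convolve(C_A, F)[:len(t)] * dt               # tissue concentration

    print("F(0) = clearance into T / V_T:", F[0])
    print("integral of F = steady-state C_T/C_A:", F.sum() * dt)
    ```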

  6. A Set Theoretical Approach to Maturity Models

    DEFF Research Database (Denmark)

    Lasrado, Lester; Vatrapu, Ravi; Andersen, Kim Normann

    2016-01-01

    Maturity Model research in IS has been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. To address these criticisms, this paper proposes a novel set-theoretical approach to maturity models ch...

  7. Segmentation Based Approach to Dynamic Page Construction from Search Engine Results

    Directory of Open Access Journals (Sweden)

    K.S. Kuppusamy,

    2011-03-01

    Full Text Available The results rendered by the search engines are mostly a linear snippet list. With the prolific increase in the dynamism of web pages there is a need for enhanced result lists from search engines in order to cope with the expectations of the users. This paper proposes a model for dynamic construction of a resultant page from various results fetched by the search engine, based on the web page segmentation approach. With the incorporation of personalization through the user profile during candidate segment selection, the enriched resultant page is constructed. The benefits of this approach include instant, one-shot navigation to relevant portions from various result items, in contrast to a linear page-by-page visit approach. The experiments conducted on the prototype model with various levels of users quantify the improvements in terms of the amount of relevant information fetched.

  8. Modeling diffuse pollution with a distributed approach.

    Science.gov (United States)

    León, L F; Soulis, E D; Kouwen, N; Farquhar, G J

    2002-01-01

    The transferability of parameters for non-point source pollution models to other watersheds, especially those in remote areas without enough data for calibration, is a major problem in diffuse pollution modeling. A water quality component was developed for WATFLOOD (a flood forecast hydrological model) to deal with sediment and nutrient transport. The model uses a distributed group response unit approach for water quantity and quality modeling. Runoff, sediment yield and soluble nutrient concentrations are calculated separately for each land cover class, weighted by area and then routed downstream. The distributed approach for the water quality model for diffuse pollution in agricultural watersheds is described in this paper. Integrating the model with data extracted using GIS technology (Geographical Information Systems) for a local watershed, the model is calibrated for the hydrologic response and validated for the water quality component. With the connection to GIS and the group response unit approach used in this paper, model portability increases substantially, which will improve non-point source modeling at the watershed scale level.

  9. An integrated approach to permeability modeling using micro-models

    Energy Technology Data Exchange (ETDEWEB)

    Hosseini, A.H.; Leuangthong, O.; Deutsch, C.V. [Society of Petroleum Engineers, Canadian Section, Calgary, AB (Canada)]|[Alberta Univ., Edmonton, AB (Canada)

    2008-10-15

    An important factor in predicting the performance of steam assisted gravity drainage (SAGD) well pairs is the spatial distribution of permeability. Complications that make the inference of a reliable porosity-permeability relationship impossible include the presence of short-scale variability in sand/shale sequences; preferential sampling of core data; and uncertainty in upscaling parameters. Micro-modelling is a simple and effective method for overcoming these complications. This paper proposed a micro-modeling approach to account for sampling bias, small laminated features with high permeability contrast, and uncertainty in upscaling parameters. The paper described the steps and challenges of micro-modeling and discussed the construction of binary mixture geo-blocks; flow simulation and upscaling; extended power law formalism (EPLF); and the application of micro-modeling and EPLF. An extended power-law formalism to account for changes in clean sand permeability as a function of macroscopic shale content was also proposed and tested against flow simulation results. There was close agreement between the model and simulation results. The proposed methodology was also applied to build the porosity-permeability relationship for laminated and brecciated facies of McMurray oil sands. The model predictions were in good agreement with the experimental data. 8 refs., 17 figs.

  10. MODULAR APPROACH WITH ROUGH DECISION MODELS

    Directory of Open Access Journals (Sweden)

    Ahmed T. Shawky

    2012-09-01

    Full Text Available Decision models which adopt rough set theory have been used effectively in many real world applications. However, rough decision models suffer from high computational complexity when dealing with datasets of huge size. In this research we propose a new rough decision model that allows making decisions based on a modularity mechanism. According to the proposed approach, large-size datasets can be divided into arbitrary moderate-size datasets, then a group of rough decision models can be built as separate decision modules. The overall model decision is computed as the consensus decision of all decision modules through some aggregation technique. This approach provides a flexible and quick way of extracting decision rules from large-size information tables using rough decision models.

  12. Modeling approach suitable for energy system

    Energy Technology Data Exchange (ETDEWEB)

    Goetschel, D. V.

    1979-01-01

    Recently, increased attention has been placed on optimization problems related to the determination and analysis of operating strategies for energy systems. Presented in this paper is a nonlinear model that can be used in the formulation of certain energy-conversion-system modeling problems. The model lends itself nicely to solution approaches based on nonlinear-programming algorithms and, in particular, to those methods falling into the class of variable metric algorithms for nonlinearly constrained optimization.

  13. A moving approach for the Vector Hysteron Model

    Energy Technology Data Exchange (ETDEWEB)

    Cardelli, E. [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy); Faba, A., E-mail: antonio.faba@unipg.it [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy); Laudani, A. [Department of Engineering, Roma Tre University, Via V. Volterra 62, 00146 Rome (Italy); Quondam Antonio, S. [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy); Riganti Fulginei, F.; Salvini, A. [Department of Engineering, Roma Tre University, Via V. Volterra 62, 00146 Rome (Italy)

    2016-04-01

    A moving approach for the VHM (Vector Hysteron Model) is described here, to reconstruct both scalar and rotational magnetization of electrical steels with weak anisotropy, such as non-oriented grain silicon steel. The hysteron distribution is postulated to be a function of the magnetization state of the material, in order to overcome the practical limitation of the congruency property of the standard VHM approach. By using this formulation and a suitable accommodation procedure, the results obtained indicate that the model is accurate, in particular in reproducing the experimental behavior approaching the saturation region, allowing a real improvement with respect to the previous approach.

  14. A Bayesian Model Committee Approach to Forecasting Global Solar Radiation

    CERN Document Server

    Lauret, Philippe; Muselli, Marc; David, Mathieu; Diagne, Hadja; Voyant, Cyril

    2012-01-01

    This paper proposes a rather new modelling approach in the realm of solar radiation forecasting. In this work, two forecasting models, an Autoregressive Moving Average (ARMA) and a Neural Network (NN) model, are combined to form a model committee. Bayesian inference is used to assign a probability to each model in the committee; hence, each model's predictions are weighted by its respective probability. The models are fitted to one year of hourly Global Horizontal Irradiance (GHI) measurements. Another year (the test set) is used for making genuine one-hour-ahead (h+1) out-of-sample forecast comparisons. The proposed approach is benchmarked against the persistence model. First results show an improvement brought by this approach.
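
    The committee idea lends itself to a compact sketch. The snippet below is a hedged reading of such a scheme: member forecasts are weighted by posterior-like probabilities derived from their Gaussian likelihoods on held-out data. The likelihood form, equal priors and the toy numbers are assumptions, not the paper's exact Bayesian treatment.

```python
# Probability-weighted model committee (illustrative, equal priors assumed).
import numpy as np

def committee_weights(y_val, preds_val, sigma=1.0):
    """Posterior-like weight per member from its Gaussian likelihood."""
    logliks = np.array([-0.5 * np.sum((y_val - p) ** 2) / sigma ** 2
                        for p in preds_val])
    w = np.exp(logliks - logliks.max())   # subtract max for stability
    return w / w.sum()

def committee_forecast(weights, preds_test):
    """Committee forecast = probability-weighted sum of member forecasts."""
    return np.tensordot(weights, np.stack(preds_test), axes=1)

# Two hypothetical members standing in for the ARMA and NN forecasts:
y_val = np.array([1.0, 2.0, 3.0])
preds_val = [np.array([1.1, 2.1, 2.9]), np.array([0.5, 2.5, 3.5])]
w = committee_weights(y_val, preds_val)
print(w, committee_forecast(w, [np.array([4.0]), np.array([4.4])]))
```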

  15. Stormwater infiltration trenches: a conceptual modelling approach.

    Science.gov (United States)

    Freni, Gabriele; Mannina, Giorgio; Viviani, Gaspare

    2009-01-01

    In recent years, limitations of traditional urban drainage schemes have been pointed out, and new approaches are being developed that introduce more natural methods for retaining and/or disposing of stormwater. These mitigation measures are generally called Best Management Practices or Sustainable Urban Drainage Systems, and they include practices such as infiltration and storage tanks that reduce the peak flow and retain part of the polluting components. The introduction of such practices in urban drainage systems entails an upgrade of existing modelling frameworks in order to evaluate their efficiency in mitigating the impact of urban drainage systems on receiving water bodies. While storage tank modelling approaches are quite well documented in the literature, some gaps remain for infiltration facilities, mainly owing to the complexity of the physical processes involved. In this study, a simplified conceptual modelling approach for the simulation of infiltration trenches is presented. The model makes it possible to assess the performance of infiltration trenches. The main goal is to develop a model that can be employed for the assessment of the mitigation efficiency of infiltration trenches in an integrated urban drainage context. Particular care was given to the simulation of infiltration structures, considering the performance reduction due to clogging phenomena. The proposed model has been compared with other simplified modelling approaches and with a physically based model adopted as a benchmark. The model performed better than the other approaches, considering both unclogged facilities and the effect of clogging. On the basis of a long-term simulation of six years of rain data, the performance and the effectiveness of an infiltration trench measure are assessed. The study confirmed the important role played by the clogging phenomenon in such infiltration structures.

  16. Challenges in structural approaches to cell modeling.

    Science.gov (United States)

    Im, Wonpil; Liang, Jie; Olson, Arthur; Zhou, Huan-Xiang; Vajda, Sandor; Vakser, Ilya A

    2016-07-31

    Computational modeling is essential for structural characterization of biomolecular mechanisms across the broad spectrum of scales. Adequate understanding of biomolecular mechanisms inherently involves our ability to model them. Structural modeling of individual biomolecules and their interactions has been rapidly progressing. However, in terms of the broader picture, the focus is shifting toward larger systems, up to the level of a cell. Such modeling involves a more dynamic and realistic representation of the interactomes in vivo, in a crowded cellular environment, as well as membranes and membrane proteins, and other cellular components. Structural modeling of a cell complements computational approaches to cellular mechanisms based on differential equations, graph models, and other techniques to model biological networks, imaging data, etc. Structural modeling along with other computational and experimental approaches will provide a fundamental understanding of life at the molecular level and lead to important applications to biology and medicine. A cross section of diverse approaches presented in this review illustrates the developing shift from the structural modeling of individual molecules to that of cell biology. Studies in several related areas are covered: biological networks; automated construction of three-dimensional cell models using experimental data; modeling of protein complexes; prediction of non-specific and transient protein interactions; thermodynamic and kinetic effects of crowding; cellular membrane modeling; and modeling of chromosomes. The review presents an expert opinion on the current state-of-the-art in these various aspects of structural modeling in cellular biology, and the prospects of future developments in this emerging field. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Revisiting Runoff Model Calibration: Airborne Snow Observatory Results Allow Improved Modeling Results

    Science.gov (United States)

    McGurk, B. J.; Painter, T. H.

    2014-12-01

    Deterministic snow accumulation and ablation simulation models are widely used by runoff managers throughout the world to predict runoff quantities and timing. Model fitting is typically based on matching modeled runoff volumes and timing with observed flow time series at a few points in the basin. In recent decades, sparse networks of point measurements of the mountain snowpacks have been available to compare with modeled snowpack, but the comparability of results from a snow sensor or course to model polygons of 5 to 50 sq. km is suspect. However, snowpack extent, depth, and derived snow water equivalent have been produced by the NASA/JPL Airborne Snow Observatory (ASO) mission for spring of 2013 and 2014 in the Tuolumne River basin above Hetch Hetchy Reservoir. These high-resolution snowpack data have exposed the weakness of a model calibration based on runoff alone. The U.S. Geological Survey's Precipitation Runoff Modeling System (PRMS) calibration that was based on 30 years of inflow to Hetch Hetchy produces reasonable inflow results, but modeled spatial snowpack location and water quantity diverged significantly from the weekly measurements made by ASO during the two ablation seasons. The reason is that the PRMS model has many flow paths, storages, and water transfer equations, and a calibrated outflow time series can be right for many wrong reasons. The addition of detailed knowledge of snow extent and water content constrains the model so that it is a better representation of the actual watershed hydrology. The mechanics of recalibrating PRMS to the ASO measurements will be described, and comparisons of observed versus modeled flow for both a small subbasin and the entire Hetch Hetchy basin will be shown. The recalibrated model provided a better fit to the snowmelt recession, a key factor for water managers as they balance declining inflows with demand for power generation and ecosystem releases during the final months of snowmelt runoff.

  18. A Joint Approach to the Study of S-Type and P-Type Habitable Zones in Binary Systems: New Results in the View of 3-D Planetary Climate Models

    Science.gov (United States)

    Cuntz, Manfred

    2015-01-01

    In two previous papers, given by Cuntz (2014a,b) [ApJ 780, A14 (19 pages); arXiv:1409.3796], a comprehensive approach has been provided for the study of S-type and P-type habitable zones in stellar binary systems. P-type orbits occur when the planet orbits both binary components, whereas in the case of S-type orbits the planet orbits only one of the binary components, with the second component considered a perturbator. The selected approach considers a variety of aspects, including (1) the consideration of a joint constraint including orbital stability and a habitable region for a possible system planet through the stellar radiative energy fluxes; (2) the treatment of conservative (CHZ), general (GHZ) and extended zones of habitability (EHZ) [see Paper I for definitions] for the systems, as previously defined for the Solar System; (3) the provision of a combined formalism for the assessment of both S-type and P-type habitability, with mathematical criteria devised for determining for which kind of system S-type or P-type habitability is realized; and (4) the application of the theoretical approach to systems with the stars in different kinds of orbits, including elliptical orbits (the most expected case). In particular, an algebraic formalism for the assessment of both S-type and P-type habitability is given based on a higher-order polynomial expression. Thus, an a priori specification of the presence or absence of S-type or P-type radiative habitable zones is, from a mathematical point of view, neither necessary nor possible, as those are determined by the adopted formalism. Previously, numerous applications of the method have been given, encompassing theoretical star-planet systems and observations. Most recently, this method has been upgraded to include recent studies of 3-D planetary climate models. Originally, this type of work affects the extent and position of habitable zones around single stars; however, it also has profound consequences for the habitable

  19. A New Detection Approach Based on the Maximum Entropy Model

    Institute of Scientific and Technical Information of China (English)

    DONG Xiaomei; XIANG Guang; YU Ge; LI Xiaohua

    2006-01-01

    The maximum entropy model was introduced, and a new intrusion detection approach based on the maximum entropy model was proposed. The vector space model was adopted for data presentation. The minimal entropy partitioning method was utilized for attribute discretization. Experiments on the KDD CUP 1999 standard data set were designed and the experimental results were shown. The receiver operating characteristic (ROC) curve analysis approach was utilized to analyze the experimental results. The analysis shows that the proposed approach is comparable to those based on support vector machines (SVM) and outperforms those based on C4.5 and Naive Bayes classifiers. According to the overall evaluation result, the proposed approach is a little better than those based on SVM.
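
    The ROC evaluation step mentioned above is standard and easy to reproduce. The snippet below shows only that step on made-up labels and detector scores; it does not reimplement the maximum entropy detector itself.

```python
# ROC curve analysis of detector scores (toy data, 1 = intrusion).
import numpy as np
from sklearn.metrics import roc_curve, auc

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.3, 0.7])

fpr, tpr, _ = roc_curve(y_true, scores)
print("AUC =", auc(fpr, tpr))  # area under the ROC curve
```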

  20. Quantum Machine and SR Approach: a Unified Model

    CERN Document Server

    Garola, C; Sozzo, S; Garola, Claudio; Pykacz, Jaroslav; Sozzo, Sandro

    2005-01-01

    The Geneva-Brussels approach to quantum mechanics (QM) and the semantic realism (SR) nonstandard interpretation of QM exhibit some common features and some deep conceptual differences. We discuss in this paper two elementary models provided in the two approaches as intuitive support for general reasoning and as proofs of consistency of general assumptions, and show that Aerts' quantum machine can be embodied into a macroscopic version of the microscopic SR model, overcoming the seeming incompatibility between the two models. This result provides some hints for the construction of a unified perspective in which the two approaches can be properly placed.

  1. Helicopter vibration isolation: Design approach and test results

    Science.gov (United States)

    Lee, C.-M.; Goverdovskiy, V. N.; Sotenko, A. V.

    2016-03-01

    This paper presents a strategy for helicopter vibration isolation based on designing mountable mechanisms, with springs of adjustable sign-changing stiffness, that are inserted into the isolation system for stiffness control. A procedure to extend the effective area of stiffness control is presented, and a set of parameters for sensitivity analysis and practical mechanism design is formulated. The validity and flexibility of the approach are illustrated by application to crewmen seat suspensions and vibration isolators for equipment protection containers. The strategy provides minimization of vibrations, especially in the infra-low frequency range, which is the most important for crew efficiency and equipment safety; this also prevents performance degradation of some operating systems. The effectiveness is demonstrated through measured data obtained from development and parallel flight tests of new and operating systems.

  2. Functional state modelling approach validation for yeast and bacteria cultivations

    Science.gov (United States)

    Roeva, Olympia; Pencheva, Tania

    2014-01-01

    In this paper, the functional state modelling approach is validated for modelling the cultivation of two different microorganisms: yeast (Saccharomyces cerevisiae) and bacteria (Escherichia coli). Based on the available experimental data for these fed-batch cultivation processes, three different functional states are distinguished, namely the primary product synthesis state, the mixed oxidative state and the secondary product synthesis state. Parameter identification procedures for the different local models are performed using genetic algorithms. The simulation results show a high degree of adequacy of the models describing these functional states for both S. cerevisiae and E. coli cultivations. Thus, the local models are validated for the cultivation of both microorganisms. This is a strong structural verification of functional state modelling theory, not only for a set of yeast cultivations, but also for a bacteria cultivation. As such, the obtained results demonstrate the efficiency and efficacy of the functional state modelling approach. PMID:26740778

  3. Functional state modelling approach validation for yeast and bacteria cultivations.

    Science.gov (United States)

    Roeva, Olympia; Pencheva, Tania

    2014-09-03

    In this paper, the functional state modelling approach is validated for modelling the cultivation of two different microorganisms: yeast (Saccharomyces cerevisiae) and bacteria (Escherichia coli). Based on the available experimental data for these fed-batch cultivation processes, three different functional states are distinguished, namely the primary product synthesis state, the mixed oxidative state and the secondary product synthesis state. Parameter identification procedures for the different local models are performed using genetic algorithms. The simulation results show a high degree of adequacy of the models describing these functional states for both S. cerevisiae and E. coli cultivations. Thus, the local models are validated for the cultivation of both microorganisms. This is a strong structural verification of functional state modelling theory, not only for a set of yeast cultivations, but also for a bacteria cultivation. As such, the obtained results demonstrate the efficiency and efficacy of the functional state modelling approach.

  4. Quantitative magnetospheric models: results and perspectives.

    Science.gov (United States)

    Kuznetsova, M.; Hesse, M.; Gombosi, T.; Csem Team

    Global magnetospheric models are an indispensable tool that allows multi-point measurements to be put into a global context. Significant progress has been achieved in global MHD modeling of magnetosphere structure and dynamics. Medium-resolution simulations confirm the general topological picture suggested by Dungey. State-of-the-art global models with adaptive grids allow performing simulations with a highly resolved magnetopause and magnetotail current sheet. Advanced high-resolution models are capable of reproducing transient phenomena, such as FTEs associated with the formation of flux ropes or plasma bubbles embedded in the magnetopause, and demonstrate the generation of vortices at the magnetospheric flanks. On the other hand, there is still controversy about the global state of the magnetosphere predicted by MHD models, to the point of questioning the length of the magnetotail and the location of the reconnection sites within it. For example, for steady southward IMF driving conditions, resistive MHD simulations produce a steady configuration with an almost stationary near-Earth neutral line, while there is plenty of observational evidence of a periodic loading-unloading cycle during long periods of southward IMF. Successes and challenges in global modeling of magnetospheric dynamics will be addressed. One of the major challenges is to quantify the interaction between large-scale global magnetospheric dynamics and microphysical processes in diffusion regions near reconnection sites. Possible solutions to these controversies will be discussed.

  5. Modelling Coagulation Systems: A Stochastic Approach

    CERN Document Server

    Ryazanov, V V

    2011-01-01

    A general stochastic approach to the description of coagulating aerosol systems is developed. As the object of description one can consider arbitrary mesoscopic quantities (the number of aerosol clusters, their size, etc.). The birth-and-death formalism for the number of clusters can be regarded as a special case of the generalized storage model. An application of the storage model to the number of monomers in a cluster is discussed.

  6. Towards a Multiscale Approach to Cybersecurity Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Hogan, Emilie A.; Hui, Peter SY; Choudhury, Sutanay; Halappanavar, Mahantesh; Oler, Kiri J.; Joslyn, Cliff A.

    2013-11-12

    We propose a multiscale approach to modeling cyber networks, with the goal of capturing a view of the network and overall situational awareness with respect to a few key properties (connectivity, distance, and centrality) for a system under an active attack. We focus on theoretical and algorithmic foundations of multiscale graphs, coming from an algorithmic perspective, with the goal of modeling cyber system defense as a specific use case scenario. We first define a notion of multiscale graphs, in contrast with their well-studied single-scale counterparts. We develop multiscale analogs of paths and distance metrics. As a simple, motivating example of a common metric, we present a multiscale analog of the all-pairs shortest-path problem, along with a multiscale analog of a well-known algorithm which solves it. From a cyber defense perspective, this metric might be used to model the distance from an attacker's position in the network to a sensitive machine. In addition, we investigate probabilistic models of connectivity. These models exploit the hierarchy to quantify the likelihood that sensitive targets might be reachable from compromised nodes. We believe that our novel multiscale approach to modeling cyber-physical systems will advance several aspects of cyber defense, specifically allowing for a more efficient and agile approach to defending these systems.

  7. Development of a computationally efficient urban modeling approach

    DEFF Research Database (Denmark)

    Wolfs, Vincent; Murla, Damian; Ntegeka, Victor;

    2016-01-01

    This paper presents a parsimonious and data-driven modelling approach to simulate urban floods. Flood levels simulated by detailed 1D-2D hydrodynamic models can be emulated using the presented conceptual modelling approach with a very short calculation time. In addition, the model detail can be adjusted, allowing the modeller to focus on flood-prone locations. This results in efficiently parameterized models that can be tailored to applications. The simulated flood levels are transformed into flood extent maps using a high-resolution (0.5-meter) digital terrain model in GIS. To illustrate the developed methodology, a case study for the city of Ghent in Belgium is elaborated. The configured conceptual model mimics the flood levels of a detailed 1D-2D hydrodynamic InfoWorks ICM model accurately, while the calculation time is of the order of 10^6 times shorter than that of the original highly detailed model.

  8. Post-16 Biology--Some Model Approaches?

    Science.gov (United States)

    Lock, Roger

    1997-01-01

    Outlines alternative approaches to the teaching of difficult concepts in A-level biology which may help student learning by making abstract ideas more concrete and accessible. Examples include models, posters, and poems for illustrating meiosis, mitosis, genetic mutations, and protein synthesis. (DDR)

  9. Modelling road accidents: An approach using structural time series

    Science.gov (United States)

    Junus, Noor Wahida Md; Ismail, Mohd Tahir

    2014-09-01

    In this paper, the trend of road accidents in Malaysia for the years 2001 to 2012 was modelled using a structural time series approach. The structural time series model was identified using a stepwise method, and the residuals for each model were tested. The best-fitted model was chosen based on the smallest Akaike Information Criterion (AIC) and prediction error variance. In order to check the quality of the model, a data validation procedure was performed by predicting the monthly number of road accidents for the year 2012. Results indicate that the best specification of the structural time series model to represent road accidents is the local level with a seasonal model.
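
    The model selection procedure described above can be sketched with standard state-space tooling. The snippet below fits a local level model with and without a monthly seasonal component to synthetic counts and compares AIC values; the synthetic series and the use of statsmodels are assumptions, not the study's data or software.

```python
# Structural time series candidates compared by AIC (synthetic monthly data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
t = np.arange(144)
y = 500 + 20 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 10, size=144)

candidates = {
    "local level": dict(level="llevel"),
    "local level + seasonal": dict(level="llevel", seasonal=12),
}
for name, spec in candidates.items():
    res = sm.tsa.UnobservedComponents(y, **spec).fit(disp=False)
    print(f"{name}: AIC = {res.aic:.1f}")
```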

  10. Decomposition approach to model smart suspension struts

    Science.gov (United States)

    Song, Xubin

    2008-10-01

    Modeling and simulation studies are the starting point for engineering design and development, especially for developing vehicle control systems. This paper presents a methodology for building models for the application of smart struts to vehicle suspension control development. The modeling approach is based on decomposition of the testing data. Per the strut functions, the data is dissected according to both control and physical variables. Then the data sets are characterized to represent different aspects of the strut working behaviors. Next, different mathematical equations can be built and optimized to best fit the corresponding data sets, respectively. In this way, the model optimization is facilitated in comparison to the traditional approach of finding a globally optimal set of model parameters for a complicated nonlinear model from a series of testing data. Finally, two struts are introduced as examples for this modeling study: magneto-rheological (MR) dampers and compressible fluid (CF) based struts. The model validation shows that this methodology can truly capture the macro-behaviors of these struts.
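
    The decomposition idea can be illustrated briefly: dissect the test data by a control variable and fit a separate low-order equation to each subset. Everything below (the tanh-shaped synthetic damper data, the control levels, the cubic fit) is an assumed illustration, not the authors' strut equations.

```python
# Fit one force(velocity) polynomial per control level (e.g., coil current).
import numpy as np

def fit_per_subset(current, velocity, force, levels):
    """One cubic force-velocity fit for each control level."""
    return {lvl: np.polyfit(velocity[np.isclose(current, lvl)],
                            force[np.isclose(current, lvl)], deg=3)
            for lvl in levels}

rng = np.random.default_rng(2)
v = rng.uniform(-1, 1, 400)                           # velocity samples
i = np.repeat([0.5, 1.0], 200)                        # two control levels
f = 200 * i * np.tanh(5 * v) + rng.normal(0, 5, 400)  # synthetic force
fits = fit_per_subset(i, v, f, levels=[0.5, 1.0])
print({k: np.round(p, 1) for k, p in fits.items()})
```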

  11. An Adaptive Approach to Schema Classification for Data Warehouse Modeling

    Institute of Scientific and Technical Information of China (English)

    Hong-Ding Wang; Yun-Hai Tong; Shao-Hua Tan; Shi-Wei Tang; Dong-Qing Yang; Guo-Hui Sun

    2007-01-01

    Data warehouse (DW) modeling is a complicated task, involving both knowledge of business processes and familiarity with operational information systems structure and behavior. Existing DW modeling techniques suffer from the following major drawbacks: the data-driven approach requires high levels of expertise and neglects the requirements of end users, while the demand-driven approach lacks enterprise-wide vision and disregards existing models of the underlying operational systems. In order to make up for those shortcomings, a method of classification of schema elements for DW modeling is proposed in this paper. We first put forward the vector space models for subjects and schema elements, then present an adaptive approach with self-tuning theory to construct context vectors of subjects, and finally classify the source schema elements into the different subjects of the DW automatically. Benefiting from the result of the schema element classification, designers can model and construct a DW more easily.

  12. Heat transfer modeling an inductive approach

    CERN Document Server

    Sidebotham, George

    2015-01-01

    This innovative text emphasizes a "less-is-more" approach to modeling complicated systems such as heat transfer by treating them first as "1-node lumped models" that yield simple closed-form solutions. The author develops numerical techniques for students to obtain more detail, but also trains them to use the techniques only when simpler approaches fail. Covering all essential methods offered in traditional texts, but in a different order, Professor Sidebotham stresses inductive thinking and problem solving as well as a constructive understanding of modern, computer-based practice. Readers learn to develop their own code in the context of the material, rather than just how to use packaged software, offering a deeper, intrinsic grasp behind models of heat transfer. Developed from over twenty-five years of lecture notes to teach students of mechanical and chemical engineering at The Cooper Union for the Advancement of Science and Art, the book is ideal for students and practitioners across engineering disciplines.

  13. Relationship Marketing results: proposition of a cognitive mapping model

    Directory of Open Access Journals (Sweden)

    Iná Futino Barreto

    2015-12-01

    Full Text Available Objective - This research sought to develop a cognitive model that expresses how marketing professionals understand the relationships between the constructs that define relationship marketing (RM). It also tried to understand, using the obtained model, how objectives in this field are achieved. Design/methodology/approach - Through cognitive mapping, we traced 35 individual mental maps, highlighting how each respondent understands the interactions between RM elements. Based on the views of these individuals, we established an aggregate mental map. Theoretical foundation - The topic is based on a literature review that explores the RM concept and its main elements. Based on this review, we listed eleven main constructs. Findings - We established an aggregate mental map that represents the RM structural model. Model analysis identified that customer lifetime value (CLV) is understood as the final result of RM. We also observed that the impact of most of the RM elements on CLV is mediated by loyalty. Personalization and quality, on the other hand, proved to be process input elements, and are the ones that most strongly impact the others. Finally, we highlight that elements that punish customers are much less effective than elements that benefit them. Contributions - The model was able to incorporate core elements of RM that are absent from most formal models: CLV and customization. The analysis allowed us to understand the interactions between the RM elements and how the end result of RM (CLV) is formed. This understanding improves knowledge on the subject and helps guide, assess and correct actions.

  14. Modeling clicks beyond the first result page

    NARCIS (Netherlands)

    Chuklin, A.; Serdyukov, P.; de Rijke, M.

    2013-01-01

    Most modern web search engines yield a list of documents of a fixed length (usually 10) in response to a user query. The next ten search results are usually available in one click. These documents either replace the current result page or are appended to the end. Hence, in order to examine more documents ...

  16. A unified approach to several results involving integrals of multifunctions

    OpenAIRE

    Balder, E.J.

    2001-01-01

    A well-known equivalence-of-randomization result of Wald and Wolfowitz states that any Young measure can be regarded as a probability measure on the set of all measurable functions. Here we give a sufficient condition for the Young measure to be equivalent to a probability measure on the set of all integrable selectors of a given multifunction. In this way, Aumann's identity for integrals of multifunctions can be interpreted in a novel fashion. By additionally applying a fundamental result from ...

  17. Segmentation Based Approach to Dynamic Page Construction from Search Engine Results

    OpenAIRE

    Kuppusamy, K.S.; Aghila, G.

    2012-01-01

    The results rendered by search engines are mostly a linear snippet list. With the prolific increase in the dynamism of web pages, there is a need for enhanced result lists from search engines in order to cope with the expectations of users. This paper proposes a model for the dynamic construction of a resultant page from the various results fetched by the search engine, based on the web page segmentation approach. With the incorporation of personalization through the user profile during the can...

  18. Comparison of approaches for parameter estimation on stochastic models: Generic least squares versus specialized approaches.

    Science.gov (United States)

    Zimmer, Christoph; Sahle, Sven

    2016-04-01

    Parameter estimation for models with intrinsic stochasticity poses specific challenges that do not exist for deterministic models. Therefore, specialized numerical methods for parameter estimation in stochastic models have been developed. Here, we study whether dedicated algorithms for stochastic models are indeed superior to the naive approach of applying the readily available least squares algorithm designed for deterministic models. We compare the performance of the recently developed multiple shooting for stochastic systems (MSS) method designed for parameter estimation in stochastic models, a stochastic differential equation based Bayesian approach and a chemical master equation based technique with the least squares approach for parameter estimation in models of ordinary differential equations (ODE). As test data, 1000 realizations of the stochastic models are simulated. For each realization an estimation is performed with each method, resulting in 1000 estimates for each approach. These are compared with respect to their deviation from the true parameter and, for the genetic toggle switch, also their ability to reproduce the symmetry of the switching behavior. Results are shown for different sets of parameter values of a genetic toggle switch leading to symmetric and asymmetric switching behavior, as well as an immigration-death and a susceptible-infected-recovered model. This comparison shows that it is important to choose a parameter estimation technique that can treat intrinsic stochasticity, and that the specific choice of this algorithm shows only minor performance differences.
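
    The naive baseline discussed above is easy to make concrete: simulate one realization of a stochastic model and fit the corresponding ODE mean by ordinary least squares. The sketch below does this for the immigration-death model (X -> X+1 at rate k1, X -> X-1 at rate k2*X, with ODE mean x(t) = (k1/k2)(1 - e^(-k2 t))); the rate values and the Gillespie/least-squares pairing are illustrative choices, not the paper's exact setup.

```python
# Least squares fit of the ODE mean to one stochastic realization.
import numpy as np
from scipy.optimize import least_squares

def gillespie_imm_death(k1, k2, t_max, rng):
    """Exact stochastic simulation of the immigration-death process."""
    t, x, ts, xs = 0.0, 0, [0.0], [0]
    while t < t_max:
        rates = np.array([k1, k2 * x])
        total = rates.sum()
        t += rng.exponential(1 / total)
        x += 1 if rng.random() < rates[0] / total else -1
        ts.append(t); xs.append(x)
    return np.array(ts), np.array(xs)

def ode_mean(params, t):
    k1, k2 = params
    return (k1 / k2) * (1 - np.exp(-k2 * t))

rng = np.random.default_rng(3)
ts, xs = gillespie_imm_death(k1=10.0, k2=0.1, t_max=50.0, rng=rng)
fit = least_squares(lambda p: ode_mean(p, ts) - xs, x0=[5.0, 0.5],
                    bounds=([1e-6, 1e-6], [np.inf, np.inf]))
print("estimated k1, k2:", fit.x)  # true values: 10.0, 0.1
```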

  19. Circumcision - A new approach for a different cosmetic result.

    Science.gov (United States)

    Tsikopoulos, G; Asimakidou, M; Smaropoulos, E; Farmakis, K; Klokkaris, A

    2014-04-01

    To assess the difference in aesthetic result after a non-religious circumcision with the classic Johnston's technique and a new proposed technique. A total of 76 children were circumcised (not for religious purposes) over a period of 6 years using the classic Johnston's technique (50 patients) and a new proposed technique (26 patients). Parents of circumcised children were interviewed three months after the operation. The aesthetic result was scored by both the parents and the patients as bad, acceptable, good or very good. Scores between the two groups were compared. No major complications were encountered. The aesthetic result score between the two groups had a statistically significant difference (Mann-Whitney U test, p < 0.05) in the assessment of the result three months after the operation. In communities in which religious circumcisions are performed relatively rarely, the aesthetic result of a classic method may seem awkward to the patient and his family. Therefore, circumcision being performed for non-religious reasons necessitates an acceptable aesthetic result. Our technique fulfills this prerequisite. Hippokratia 2014; 18 (2):116-119.

  20. Engineering model development and test results

    Science.gov (United States)

    Wellman, John A.

    1993-08-01

    The correctability of the primary mirror spherical error in the Wide Field/Planetary Camera (WF/PC) is sensitive to the precise alignment of the incoming aberrated beam onto the corrective elements. Articulating fold mirrors that provide +/- 1 milliradian of tilt in 2 axes are required to allow for alignment corrections in orbit as part of the fix for the Hubble Space Telescope. An engineering study was made by Itek Optical Systems and the Jet Propulsion Laboratory (JPL) to investigate replacement of fixed fold mirrors within the existing WF/PC optical bench with articulating mirrors. The study contract developed the baseline requirements, established the suitability of lead magnesium niobate (PMN) actuators and evaluated several tilt mechanism concepts. Two engineering model articulating mirrors were produced to demonstrate the function of the tilt mechanism providing +/- 1 milliradian of tilt, packaging within the space constraints, and manufacturing techniques including the machining of the invar tilt mechanism and lightweight glass mirrors. The success of the engineering models led to the follow-on design and fabrication of 3 flight mirrors that have been incorporated into the WF/PC to be placed into the Hubble Space Telescope as part of the servicing mission scheduled for late 1993.

  1. A unified approach to several results involving integrals of multifunctions

    NARCIS (Netherlands)

    Balder, E.J.

    2001-01-01

    A well-known equivalence-of-randomization result of Wald and Wolfowitz states that any Young measure can be regarded as a probability measure on the set of all measurable functions. Here we give a sufficient condition for the Young measure to be equivalent to a probability measure on the set of all integrable selectors of a given multifunction.

  2. Microplasticity of MMC. Experimental results and modelling

    Energy Technology Data Exchange (ETDEWEB)

    Maire, E. (Groupe d' Etude de Metallurgie Physique et de Physique des Materiaux, INSA, 69 Villeurbanne (France)); Lormand, G. (Groupe d' Etude de Metallurgie Physique et de Physique des Materiaux, INSA, 69 Villeurbanne (France)); Gobin, P.F. (Groupe d' Etude de Metallurgie Physique et de Physique des Materiaux, INSA, 69 Villeurbanne (France)); Fougeres, R. (Groupe d' Etude de Metallurgie Physique et de Physique des Materiaux, INSA, 69 Villeurbanne (France))

    1993-11-01

    The microplastic behavior of several MMCs is investigated by means of tension and compression tests. This behavior is asymmetric: the proportional limit is higher in tension than in compression, but the work hardening rate is higher in compression. These differences are analysed in terms of the maximum of the Tresca shear stress at the interface (proportional limit) and of the emission of dislocation loops during cooling (work hardening rate). On the other hand, a model is proposed to calculate the value of the yield stress, describing the composite as a material composed of three phases: the inclusion, the unaffected matrix, and the matrix surrounding the inclusion, which has a gradient in the density of the thermally induced dislocations. (orig.)

  3. Generalizing Merton's approach of pricing risky debt: some closed-form results

    Science.gov (United States)

    Wang, D. F.

    In this work, I generalize Merton's approach to pricing risky debt to the case where the interest rate risk is modeled by the CIR term structure. A closed-form result for pricing the debt is given for the case where the firm value has non-zero correlation with the interest rate. This extends the previous closed-form pricing formula for the zero-correlation case to the generic one of non-zero correlation between the firm value and the interest rate.
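
    For orientation, the constant-rate Merton benchmark that the paper generalizes can be stated in a line (standard textbook form; the paper's CIR-rate, correlated result, which replaces the deterministic discount factor by a CIR bond price, is not reproduced here):

```latex
% Merton value of risky debt with face value F, firm value V_0,
% volatility \sigma_V, constant rate r, maturity T, N = standard normal cdf:
D_0 = F e^{-rT} N(d_2) + V_0 N(-d_1), \qquad
d_{1,2} = \frac{\ln(V_0/F) + (r \pm \sigma_V^2/2)\,T}{\sigma_V \sqrt{T}} .
```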

  4. New Results From Hinode: A Systems Science Approach To Heliophysics

    Science.gov (United States)

    Cirtain, Jonathan W.

    2011-01-01

    Recent results from the analysis of Hinode data have been used to determine the origins of the fast and slow solar wind, possible heating sources for the solar corona, and onset locations for CMEs and polar X-ray jets. Using this information, and data collected by other observatories, major advances in the understanding of Heliophysics are now possible. These Hinode observations, and the techniques for analysis of the Hinode data, will be discussed.

  5. Structural dialectical approach in psychology: problems and research results

    Directory of Open Access Journals (Sweden)

    Veraksa, Nikolay E.

    2013-06-01

    Full Text Available In this article dialectical thinking is regarded as one of the central cognitive processes. This cognitive function allows us to analyze the development of processes and objects. It also determines the possibilities for the creative transformation of content and for solving problems. The article presents a description and the results of experimental studies. This evidence shows that dialectical thinking is a specific line of cognitive development in children and adults. This line can degrade during the school years if the educational program follows formal logical principles, or it can become significantly stronger if the pedagogy is based on dialectical methodology.

  6. Multiscale Model Approach for Magnetization Dynamics Simulations

    CERN Document Server

    De Lucia, Andrea; Tretiakov, Oleg A; Kläui, Mathias

    2016-01-01

    Simulations of magnetization dynamics in a multiscale environment enable rapid evaluation of the Landau-Lifshitz-Gilbert equation in a mesoscopic sample with nanoscopic accuracy in areas where such accuracy is required. We have developed a multiscale magnetization dynamics simulation approach that can be applied to large systems with spin structures that vary locally on small length scales. To implement this, the conventional micromagnetic simulation framework has been expanded to include a multiscale solving routine. The software selectively simulates different regions of a ferromagnetic sample according to the spin structures located within them, in order to employ a suitable discretization and use either a micromagnetic or an atomistic model. To demonstrate the validity of the multiscale approach, we simulate spin wave transmission across regions simulated with the two different models and different discretizations. We find that the interface between the regions is fully transparent for spin waves with f...

  7. AR-aging as a new approach for enhanced results

    Science.gov (United States)

    Hays, C.; Stemmler, R. P.

    1998-12-01

    MAR-aging steels have earned a niche in the metal market arena, especially where aerospace and outer-space applications are concerned. MAR-aging steels owe their high strength, excellent fracture toughness, and good ductility to a precipitation-hardening (aging) mechanism that has been debated by scientists for several years. Because of today's trend toward more demanding design requirements and a continuing need to better understand the MAR-aging family of materials, six different alloys (C-200, C-250, C-300, C-350, T-250, and T-300) were selected for study using a single processing treatment: a hot-wall zone-gradient furnace. These alloys were evaluated for the effects of a specific thermal gradient (°C/cm) from 1231 °C (2250 °F) at the hot-wall limit to about 260 °C (500 °F) at the opposite end, the cold wall. All six alloys were evaluated in terms of their microstructure, microhardness, composition, and associated properties resulting from this specific thermal processing method. In this paper, detailed observations on the C-350 alloy are presented, and the results are interpreted in terms of a new heat treatment cycle called AR-aging.

  8. Continuum modeling an approach through practical examples

    CERN Document Server

    Muntean, Adrian

    2015-01-01

    This book develops continuum modeling skills and approaches the topic from three sides: (1) derivation of global integral laws together with the associated local differential equations, (2) design of constitutive laws and (3) modeling boundary processes. The focus of this presentation lies on many practical examples covering aspects such as coupled flow, diffusion and reaction in porous media or microwave heating of a pizza, as well as traffic issues in bacterial colonies and energy harvesting from geothermal wells. The target audience comprises primarily graduate students in pure and applied mathematics as well as working practitioners in engineering who are faced with nonstandard rheological topics like those typically arising in the food industry.

  9. A Multivariate Approach to Functional Neuro Modeling

    DEFF Research Database (Denmark)

    Mørch, Niels J.S.

    1998-01-01

    This Ph.D. thesis, A Multivariate Approach to Functional Neuro Modeling, deals with the analysis and modeling of data from functional neuroimaging experiments. A multivariate dataset description is provided which facilitates efficient representation of typical datasets and, more importantly, ... and overall conditions governing the functional experiment, via associated micro- and macroscopic variables. The description facilitates an efficient microscopic re-representation, as well as a handle on the link between brain and behavior; the latter is achieved by hypothesizing variations in the micro... a generalization theoretical framework centered around measures of model generalization error. Only a few, if any, examples of the application of generalization theory to functional neuro modeling currently exist in the literature. Exemplification of the proposed generalization theoretical framework...

  10. Interfacial Fluid Mechanics A Mathematical Modeling Approach

    CERN Document Server

    Ajaev, Vladimir S

    2012-01-01

    Interfacial Fluid Mechanics: A Mathematical Modeling Approach provides an introduction to mathematical models of viscous flow used in the rapidly developing fields of microfluidics and microscale heat transfer. The basic physical effects are first introduced in the context of simple configurations, and their relative importance in typical microscale applications is discussed. Then, several configurations of importance to microfluidics, most notably thin films/droplets on substrates and confined bubbles, are discussed in detail. Topics from current research on electrokinetic phenomena, liquid flow near structured solid surfaces, evaporation/condensation, and surfactant phenomena are discussed in the later chapters. This book also: discusses mathematical models in the context of actual applications such as electrowetting; includes unique material on fluid flow near structured surfaces and phase change phenomena; and shows readers how to solve modeling problems related to microscale multiphase flows.

  11. Application of the Interface Approach in Quantum Ising Models

    OpenAIRE

    Sen, Parongama

    1997-01-01

    We investigate phase transitions in the Ising model and the ANNNI model in a transverse field using the interface approach. The exact result for the Ising chain in a transverse field is reproduced. We find that, apart from the interfacial energy, there are two other response functions which show simple scaling behaviour. For the ANNNI model in a transverse field, the phase diagram can be fully studied in the region where a ferromagnetic to paramagnetic phase transition occurs. The other region ca...

  12. Systematic approach to MIS model creation

    Directory of Open Access Journals (Sweden)

    Macura Perica

    2004-01-01

    Full Text Available In this paper, by applying the basic principles of the general theory of systems (the systematic approach), we formulate a model of a marketing information system. The bases for the research were the basic characteristics of the systematic approach and of the marketing system. The informational base for the management of the marketing system, i.e., the marketing instruments, is presented so that the most important information for decision making is listed per individual marketing mix instrument. In the projected model of the marketing information system, information listed in this way creates a base for establishing databases, i.e., bases of information (databases of: product, price, distribution, promotion). This paper gives the basic preconditions for the formulation and functioning of the model. The model is presented by explication of the elements of its structure (environment, database operators, information system analysts, decision makers - managers), i.e., input, process, output, feedback, and the relations between these elements which are necessary for its optimal functioning. Besides that, basic elements for implementation of the model into a business system are given, as well as conditions for its efficient functioning and development.

  13. Hybrid continuum-atomistic approach to model electrokinetics in nanofluidics

    Energy Technology Data Exchange (ETDEWEB)

    Amani, Ehsan, E-mail: eamani@aut.ac.ir; Movahed, Saeid, E-mail: smovahed@aut.ac.ir

    2016-06-07

    In this study, for the first time, a hybrid continuum-atomistic model is proposed for electrokinetics, electroosmosis and electrophoresis, through nanochannels. Although continuum based methods are accurate enough to model fluid flow and electric potential in nanofluidics (in dimensions larger than 4 nm), the ionic concentration in nanochannels is too low for the continuum assumption to be valid. On the other hand, non-continuum based approaches are too time-consuming and are therefore limited to simple geometries in practice. Here, to obtain an efficient hybrid continuum-atomistic method for modelling electrokinetics in nanochannels, the fluid flow and electric potential are computed based on the continuum hypothesis, coupled with an atomistic Lagrangian approach for the ionic transport. The results of the model are compared to and validated by the results of the molecular dynamics technique for a couple of case studies. Then, the influences of bulk ionic concentration, external electric field, size of nanochannel, and surface electric charge on the electrokinetic flow and ionic mass transfer are investigated carefully. The hybrid continuum-atomistic method is a promising approach to model more complicated geometries and investigate more details of the electrokinetics in nanofluidics. - Highlights: • A hybrid continuum-atomistic model is proposed for electrokinetics in nanochannels. • The model is validated by molecular dynamics. • This is a promising approach to model more complicated geometries and physics.

  14. Asteroid fragmentation approaches for modeling atmospheric energy deposition

    Science.gov (United States)

    Register, Paul J.; Mathias, Donovan L.; Wheeler, Lorien F.

    2017-03-01

    During asteroid entry, energy is deposited in the atmosphere through thermal ablation and momentum-loss due to aerodynamic drag. Analytic models of asteroid entry and breakup physics are used to compute the energy deposition, which can then be compared against measured light curves and used to estimate ground damage due to airburst events. This work assesses and compares energy deposition results from four existing approaches to asteroid breakup modeling, and presents a new model that combines key elements of those approaches. The existing approaches considered include a liquid drop or "pancake" model where the object is treated as a single deforming body, and a set of discrete fragment models where the object breaks progressively into individual fragments. The new model incorporates both independent fragments and aggregate debris clouds to represent a broader range of fragmentation behaviors and reproduce more detailed light curve features. All five models are used to estimate the energy deposition rate versus altitude for the Chelyabinsk meteor impact, and results are compared with an observationally derived energy deposition curve. Comparisons show that four of the five approaches are able to match the overall observed energy deposition profile, but the features of the combined model are needed to better replicate both the primary and secondary peaks of the Chelyabinsk curve.
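
    The single-body entry physics that all of these breakup models build on can be sketched compactly. The snippet below integrates textbook drag and ablation equations for a non-fragmenting body in an exponential atmosphere and reports the energy deposited per unit altitude; the coefficients, material density and Chelyabinsk-like initial conditions are assumptions, and none of the five fragmentation models above is reproduced here.

```python
# Energy deposition of a single (non-fragmenting) entering body.
import numpy as np

CD, CH, Q = 1.0, 0.1, 8e6        # drag coeff, heat-transfer coeff, J/kg (assumed)
RHO0, HSCALE = 1.225, 7160.0     # sea-level air density (kg/m^3), scale height (m)

def entry_energy_deposition(m, v, h, theta=np.radians(45), dt=0.01):
    """Return altitudes (km) and dE/dh (J/m) along a straight trajectory."""
    rho_m, alts, dedh = 3300.0, [], []   # stony bulk density assumed
    while h > 0 and m > 1.0:
        A = np.pi * (3 * m / (4 * np.pi * rho_m)) ** (2 / 3)  # cross-section
        ra = RHO0 * np.exp(-h / HSCALE)                       # local air density
        dv = -CD * A * ra * v ** 2 / (2 * m) * dt             # drag deceleration
        dm = -CH * A * ra * v ** 3 / (2 * Q) * dt             # ablation mass loss
        dE = -(m * v * dv + 0.5 * v ** 2 * dm)                # kinetic energy released
        dh = -v * np.sin(theta) * dt
        alts.append(h / 1e3); dedh.append(dE / abs(dh))
        v += dv; m += dm; h += dh
    return np.array(alts), np.array(dedh)

alt, dE = entry_energy_deposition(m=1.2e7, v=19e3, h=60e3)
print("peak deposition near %.1f km altitude" % alt[dE.argmax()])
```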

  15. Dynamic Metabolic Model Building Based on the Ensemble Modeling Approach

    Energy Technology Data Exchange (ETDEWEB)

    Liao, James C. [Univ. of California, Los Angeles, CA (United States)

    2016-10-01

    Ensemble modeling of kinetic systems addresses the challenges of kinetic model construction with respect to parameter value selection, while still allowing the rich insights possible from kinetic models. This project aimed to show that constructing, implementing, and analyzing such models is a useful addition to the metabolic engineering toolkit, and that it can yield actionable insights. Key concepts are developed, and deliverable publications and results are presented.

  16. Error statistics of hidden Markov model and hidden Boltzmann model results

    Directory of Open Access Journals (Sweden)

    Newberg Lee A

    2009-07-01

    Full Text Available Abstract Background: Hidden Markov models and hidden Boltzmann models are employed in computational biology and a variety of other scientific fields for a variety of analyses of sequential data. Whether the associated algorithms are used to compute an actual probability or, more generally, an odds ratio or some other score, a frequent requirement is that the error statistics of a given score be known. What is the chance that random data would achieve that score or better? What is the chance that a real signal would achieve a given score threshold? Results: Here we present a novel general approach to estimating these false positive and true positive rates that is significantly more efficient than existing general approaches. We validate the technique via an implementation within the HMMER 3.0 package, which scans DNA or protein sequence databases for patterns of interest, using a profile-HMM. Conclusion: The new approach is faster than general naïve sampling approaches, and more general than other current approaches. It provides an efficient mechanism by which to estimate error statistics for hidden Markov model and hidden Boltzmann model results.

  17. Error statistics of hidden Markov model and hidden Boltzmann model results

    Science.gov (United States)

    Newberg, Lee A

    2009-01-01

    Background: Hidden Markov models and hidden Boltzmann models are employed in computational biology and a variety of other scientific fields for a variety of analyses of sequential data. Whether the associated algorithms are used to compute an actual probability or, more generally, an odds ratio or some other score, a frequent requirement is that the error statistics of a given score be known. What is the chance that random data would achieve that score or better? What is the chance that a real signal would achieve a given score threshold? Results: Here we present a novel general approach to estimating these false positive and true positive rates that is significantly more efficient than existing general approaches. We validate the technique via an implementation within the HMMER 3.0 package, which scans DNA or protein sequence databases for patterns of interest, using a profile-HMM. Conclusion: The new approach is faster than general naïve sampling approaches, and more general than other current approaches. It provides an efficient mechanism by which to estimate error statistics for hidden Markov model and hidden Boltzmann model results. PMID:19589158

  18. Regularization of turbulence - a comprehensive modeling approach

    Science.gov (United States)

    Geurts, B. J.

    2011-12-01

    Turbulence readily arises in numerous flows in nature and technology. The large number of degrees of freedom of turbulence poses serious challenges to numerical approaches aimed at simulating and controlling such flows. While the Navier-Stokes equations are commonly accepted to precisely describe fluid turbulence, alternative coarsened descriptions need to be developed to cope with the wide range of length and time scales. These coarsened descriptions are known as large-eddy simulations, in which one aims to capture only the primary features of a flow, at considerably reduced computational effort. Such coarsening introduces a closure problem that requires additional phenomenological modeling. A systematic approach to the closure problem, known as regularization modeling, will be reviewed. Its application to multiphase turbulence will be illustrated, in which a basic regularization principle is enforced to approximate momentum and scalar transport in a physically consistent way. Examples of Leray and LANS-alpha regularization are discussed in some detail, as are compatible numerical strategies. We illustrate regularization modeling for turbulence under the influence of rotation and buoyancy and investigate the accuracy with which particle-laden flow can be represented. A discussion of the numerical and modeling errors incurred will be given on the basis of homogeneous isotropic turbulence.
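
    As a concrete instance of the regularization principle, the Leray model smooths only the advecting velocity (standard form from the literature, quoted here for orientation rather than taken from the abstract itself):

```latex
% Leray-regularized Navier-Stokes: u is advected by a filtered field \bar{u}
\partial_t u + (\bar{u} \cdot \nabla) u + \nabla p = \nu \nabla^2 u ,
\qquad \nabla \cdot u = 0 ,
% with \bar{u} = L * u for a spatial filter L; the LANS-\alpha model employs
% the Helmholtz filter \bar{u} = (1 - \alpha^2 \nabla^2)^{-1} u
% (with additional terms in the momentum equation).
```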

  19. LEXICAL APPROACH IN TEACHING TURKISH: A COLLOCATIONAL STUDY MODEL

    Directory of Open Access Journals (Sweden)

    Eser ÖRDEM

    2013-06-01

    Full Text Available Abstract This study proposes the Lexical Approach (Lewis, 1998, 2002; Harwood, 2002) and a model for teaching Turkish as a foreign language, so that this model can be used in classroom settings. This model was created by the researcher as a result of studies carried out in applied linguistics (Hill, 2000) and memory (Murphy, 2004). Since one of the main problems of foreign language learners is retrieving what they have learnt, Lewis (1998) and Wray (2008) assume that the lexical approach is an alternative explanation for solving this problem. Unlike the grammar translation method, this approach supports the idea that language is not composed of general grammar but of strings of words and word combinations. In addition, the lexical approach posits that each word has its own grammatical properties, and therefore each dictionary is a potential grammar book. Foreign language learners can learn to use collocations, a basic principle of the lexical approach. Thus, learners can increase their level of retention. The concept of the retrieval cue (Murphy, 2004) is considered the main element in this collocational study model, because the main purpose of the model is to boost fluency and help learners gain native-like accuracy while producing the target language. Keywords: Foreign language teaching, lexical approach, collocations, retrieval cue

  20. Metamodelling Approach and Software Tools for Physical Modelling and Simulation

    Directory of Open Access Journals (Sweden)

    Vitaliy Mezhuyev

    2015-02-01

    Full Text Available In computer science, the metamodelling approach is becoming more and more popular for the purpose of software systems development. In this paper, we discuss the applicability of the metamodelling approach to the development of software tools for physical modelling and simulation. To define a metamodel for physical modelling, an analysis of physical models is carried out. The results of this analysis reveal the invariant physical structures, which we propose to use as the basic abstractions of the physical metamodel. This is a system of geometrical objects, allowing one to build the spatial structure of physical models and to set a distribution of physical properties. To such a geometry of distributed physical properties, different mathematical methods can be applied. To validate the proposed metamodelling approach, we consider the developed prototypes of software tools.

  1. Likelihood Approach to the First Dark Matter Results from XENON100

    CERN Document Server

    Aprile, E; Arneodo, F; Askin, A; Baudis, L; Behrens, A; Bokeloh, K; Brown, E; Bruch, T; Cardoso, J M R; Choi, B; Cline, D; Duchovni, E; Fattori, S; Ferella, A D; Giboni, K -L; Gross, E; Kish, A; Lam, C W; Lamblin, J; Lang, R F; Lim, K E; Lindemann, S; Lindner, M; Lopes, J A M; Undagoitia, T Marrodán; Mei, Y; Fernandez, A J Melgarejo; Ni, K; Oberlack, U; Orrigo, S E A; Pantic, E; Plante, G; Ribeiro, A C C; Santorelli, R; Santos, J M F dos; Schumann, M; Shagin, P; Teymourian, A; Thers, D; Tziaferi, E; Vitells, O; Wang, H; Weber, M; Weinheimer, C

    2011-01-01

    Many experiments that aim at the direct detection of Dark Matter are able to distinguish a dominant background from the expected feeble signals, based on some measured discrimination parameter. We develop a statistical model for such experiments using the Profile Likelihood ratio as a test statistic in a frequentist approach. We take data from calibrations as control measurements for signal and background, and the method allows the inclusion of data from Monte Carlo simulations. Systematic detector uncertainties, such as uncertainties in the energy scale, as well as astrophysical uncertainties, are included in the model. The statistical model can be used either to set an exclusion limit or to make a discovery claim, and the results are derived with a proper treatment of statistical and systematic uncertainties. We apply the model to the first data release of the XENON100 experiment, which allows additional information to be extracted from the data, and place stronger limits on the spin-independent elastic WIMP-nucleon cross section.
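
    The test statistic at the core of this method has a standard form, reproduced here for reference (σ denotes the parameter of interest, such as the WIMP-nucleon cross section, and θ the nuisance parameters such as the energy scale; double hats denote conditional maximization):

```latex
\lambda(\sigma) =
  \frac{\mathcal{L}\bigl(\sigma, \hat{\hat{\theta}}(\sigma)\bigr)}
       {\mathcal{L}(\hat{\sigma}, \hat{\theta})} ,
\qquad q_\sigma = -2 \ln \lambda(\sigma) ,
```

    with larger values of the test statistic indicating greater incompatibility between the data and the hypothesized σ.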

  2. A new approach for Bayesian model averaging

    Institute of Scientific and Technical Information of China (English)

    TIAN XiangJun; XIE ZhengHui; WANG AiHui; YANG XiaoChun

    2012-01-01

    Bayesian model averaging (BMA) is a recently proposed statistical method for calibrating forecast ensembles from numerical weather models. However, successful implementation of BMA requires accurate estimates of the weights and variances of the individual competing models in the ensemble. Two methods, namely the Expectation-Maximization (EM) and the Markov Chain Monte Carlo (MCMC) algorithms, are widely used for BMA model training. Both methods have their own respective strengths and weaknesses. In this paper, we first modify the BMA log-likelihood function with the aim of removing the additional limitation that the BMA weights must add to one, and then use a limited-memory quasi-Newton algorithm to solve the nonlinear optimization problem, thereby formulating a new approach for BMA (referred to as BMA-BFGS). Several groups of multi-model soil moisture simulation experiments with three land surface models show that the performance of BMA-BFGS is similar to the MCMC method in terms of simulation accuracy, and that both are superior to the EM algorithm. On the other hand, the computational cost of the BMA-BFGS algorithm is substantially less than for MCMC and is almost equivalent to that for EM.
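
    A minimal sketch of the unconstrained-weights idea, assuming Gaussian member densities and a shared spread: weights are kept non-negative by squaring free parameters rather than by an explicit sum-to-one constraint, and the negative log-likelihood is minimized with SciPy's L-BFGS-B optimizer. The synthetic three-member ensemble is invented for illustration, and the paper's exact likelihood modification may differ.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def neg_log_likelihood(params, forecasts, obs):
        """BMA negative log-likelihood with unconstrained weight parameters."""
        k = forecasts.shape[1]
        w = params[:k] ** 2            # squaring keeps weights non-negative
        sigma = np.exp(params[k])      # shared predictive spread, kept positive
        dens = norm.pdf(obs[:, None], loc=forecasts, scale=sigma)
        return -np.sum(np.log(dens @ w + 1e-300))

    # Synthetic ensemble: 3 members with different error levels over 200 times.
    rng = np.random.default_rng(0)
    truth = rng.normal(size=200)
    forecasts = truth[:, None] + rng.normal(scale=[0.5, 1.0, 2.0], size=(200, 3))

    x0 = np.concatenate([np.full(3, 0.5), [0.0]])
    res = minimize(neg_log_likelihood, x0, args=(forecasts, truth), method="L-BFGS-B")
    w = res.x[:3] ** 2
    print("fitted weights (normalized):", w / w.sum())
    ```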

  3. Subsea Permafrost Climate Modeling - Challenges and First Results

    Science.gov (United States)

    Rodehacke, C. B.; Stendel, M.; Marchenko, S. S.; Christensen, J. H.; Romanovsky, V. E.; Nicolsky, D.

    2015-12-01

    Recent observations indicate that the East Siberian Arctic Shelf (ESAS) releases methane, which stems from shallow hydrate seabed reservoirs. The total amount of carbon within the ESAS is so large that the release of only a small fraction, for example via taliks (columns of unfrozen sediment within the permafrost), could distinctly impact the global climate. It is therefore crucial to simulate the future fate of the ESAS subsea permafrost with regard to changing atmospheric and oceanic conditions. However, only very few attempts to address the vulnerability of subsea permafrost have been made; instead, most studies have focused on the evolution of permafrost since the Late Pleistocene ocean transgression, approximately 14000 years ago. In contrast to land permafrost modeling, any attempt to model the future fate of subsea permafrost needs to consider several additional factors, in particular the dependence of the freezing temperature on water depth and salt content and the differences in ground heat flux depending on the seabed properties. The amount of unfrozen water in the sediment also needs to be taken into account. Using a system of coupled ocean, atmosphere and permafrost models will allow us to capture the complexity of the different parts of the system and evaluate the relative importance of different processes. Here we present the first results of a novel approach by means of dedicated permafrost model simulations, driven by conditions of the Laptev Sea region in East Siberia. By exploiting the ensemble approach, we will show how uncertainties in boundary conditions and applied forcing scenarios control the future fate of the subsea permafrost.

  4. Numerical modelling approach for mine backfill

    Indian Academy of Sciences (India)

    MUHAMMAD ZAKA EMAD

    2017-09-01

    Numerical modelling is broadly used for assessing complex scenarios in underground mines, including mining sequence and blast-induced vibrations from production blasting. Sublevel stoping mining methods with delayed backfill are extensively used by Canadian hard-rock metal mines to exploit steeply dipping ore bodies. Mine backfill is an important constituent of the mining process. Numerical modelling of mine backfill material needs special attention, as the numerical model must behave realistically and in accordance with the site conditions. This paper discusses a numerical modelling strategy for mine backfill material. The modelling strategy is studied using a case study mine from the Canadian mining industry. Finally, the results of a numerical model parametric study are shown and discussed.

  5. Development and Results of a First Generation Least Expensive Approach to Fission: Module Tests and Results

    Science.gov (United States)

    Houts, Mike; Godfroy, Tom; Pederson, Kevin; Sena, J. Tom; VanDyke, Melissa; Dickens, Ricky; Reid, Bob J.; Martin, Jim

    2000-01-01

    The use of resistance heaters to simulate heat from fission allows extensive development of fission systems to be performed in non-nuclear test facilities, saving time and money. Resistance-heated tests on the Module Unfueled Thermal-hydraulic Test (MUTT) article have been performed at the Marshall Space Flight Center. This paper discusses the results of these experiments and identifies future tests to be performed.

  6. AN AUTOMATIC APPROACH TO BOX & JENKINS MODELLING

    OpenAIRE

    MARCELO KRIEGER

    1983-01-01

    In spite of the general recognition of the good forecasting ability of ARIMA models in predicting univariate time series, this approach is not widely used because of the lack of automatic, computerized procedures. In this work this problem is discussed and an algorithm is proposed.

  7. Modeling in transport phenomena a conceptual approach

    CERN Document Server

    Tosun, Ismail

    2007-01-01

    Modeling in Transport Phenomena, Second Edition presents and clearly explains, with example problems, the basic concepts and their applications to fluid flow, heat transfer, mass transfer, chemical reaction engineering and thermodynamics. A balanced approach between analysis and synthesis is presented, so that students understand how to use solutions in engineering analysis. Systematic derivations of the equations and the physical significance of each term are given in detail, so that students can easily understand and follow the material. There is a strong incentive in science and engineering to

  8. Modeling for fairness: A Rawlsian approach.

    Science.gov (United States)

    Diekmann, Sven; Zwart, Sjoerd D

    2014-06-01

    In this paper we introduce the overlapping design consensus for the construction of models in design and the related value judgments. The overlapping design consensus is inspired by Rawls' overlapping consensus: a well-informed, mutual agreement among all stakeholders based on fairness. Fairness is respected if all stakeholders' interests are given due and equal attention. For reaching such a fair agreement, we apply Rawls' original position and reflective equilibrium to modeling. We argue that by striving for the original position, stakeholders expel invalid arguments, hierarchies, unwarranted beliefs, and bargaining effects from influencing the consensus. The reflective equilibrium requires that stakeholders' beliefs cohere with the final agreement and its justification. Therefore, the overlapping design consensus is not only an agreement to decisions, as in most other stakeholder approaches; it is also an agreement to their justification, and this justification is consistent with each stakeholder's beliefs. In support of fairness, we argue that fairness qualifies as a maxim in modeling. We furthermore distinguish values embedded in a model from values that are implied by its context of application. Finally, we conclude that for reaching an overlapping design consensus, communication about the properties of and values related to a model is required.

  9. Pedagogic process modeling: Humanistic-integrative approach

    Directory of Open Access Journals (Sweden)

    Boritko Nikolaj M.

    2007-01-01

    Full Text Available The paper deals with some current problems of modeling the dynamics of the development of the individual's subject features. The term "process" is considered in the context of the humanistic-integrative approach, in which the principles of self-education are regarded as criteria for efficient pedagogic activity. Four basic characteristics of the pedagogic process are pointed out: intentionality reflects the logicality and regularity of the development of the process; discreteness (stageability) indicates the qualitative stages through which the pedagogic phenomenon passes; nonlinearity explains the crisis character of pedagogic processes and reveals inner factors of self-development; situationality requires a selection of pedagogic conditions in accordance with the inner factors, which would enable steering the pedagogic process. Two steps for singling out a particular stage, and the algorithm for developing an integrative model for it, are offered. The suggested conclusions might be of use for further theoretical research, analyses of educational practices and for realistic prediction of pedagogical phenomena.

  10. Modeling Social Annotation: a Bayesian Approach

    CERN Document Server

    Plangprasopchok, Anon

    2008-01-01

    Collaborative tagging systems, such as del.icio.us, CiteULike, and others, allow users to annotate objects, e.g., Web pages or scientific papers, with descriptive labels called tags. The social annotations, contributed by thousands of users, can potentially be used to infer categorical knowledge, classify documents or recommend new relevant information. Traditional text inference methods do not make the best use of socially-generated data, since they do not take into account variations in individual users' perspectives and vocabulary. In previous work, we introduced a simple probabilistic model that takes the interests of individual annotators into account in order to find hidden topics of annotated objects. Unfortunately, our proposed approach had a number of shortcomings, including overfitting, local maxima and the requirement to specify values for some parameters. In this paper we address these shortcomings in two ways. First, we extend the model to a fully Bayesian framework. Second, we describe an infinite ver...

  11. DISTRIBUTED APPROACH to WEB PAGE CATEGORIZATION USING MAPREDUCE PROGRAMMING MODEL

    Directory of Open Access Journals (Sweden)

    P.Malarvizhi

    2011-12-01

    Full Text Available The web is a large repository of information, and to facilitate the search and retrieval of pages from it, categorization of web documents is essential. An effective means to handle the complexity of information retrieval from the internet is automatic classification of web pages. Although many automatic classification algorithms and systems have been presented, most of the existing approaches are computationally challenging. In order to overcome this challenge, we propose a parallel approach based on the MapReduce programming model to automatically categorize web pages. This approach incorporates three components: a web crawler, the MapReduce programming model, and the proposed web page categorization method. Initially, we use the web crawler to mine the World Wide Web, and the crawled web pages are then given directly as input to the MapReduce programming model. Here the MapReduce programming model, adapted to our proposed web page categorization method, finds the appropriate category of each web page according to its content. The experimental results show that our proposed parallel web page categorization approach achieves satisfactory results in finding the right category for any given web page.
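
    The map/reduce flow can be conveyed with a tiny in-process sketch. The category names, keyword lists, and URLs below are hypothetical, and a real deployment would run the two phases on a cluster framework rather than in a single Python process.

    ```python
    from collections import defaultdict

    # Hypothetical keyword lists; a real categorizer would learn richer features.
    CATEGORIES = {
        "sports": {"match", "league", "score"},
        "finance": {"stock", "market", "bank"},
    }

    def map_phase(url, text):
        """Map step: emit (category, url) pairs for every keyword hit."""
        tokens = set(text.lower().split())
        for category, keywords in CATEGORIES.items():
            if tokens & keywords:
                yield category, url

    def reduce_phase(pairs):
        """Reduce step: group URLs under each emitted category."""
        grouped = defaultdict(list)
        for category, url in pairs:
            grouped[category].append(url)
        return dict(grouped)

    pages = {
        "http://a.example": "the stock market rallied today",
        "http://b.example": "the league match ended with a record score",
    }
    mapped = (pair for url, text in pages.items() for pair in map_phase(url, text))
    print(reduce_phase(mapped))
    ```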

  12. Intelligent Transportation and Evacuation Planning A Modeling-Based Approach

    CERN Document Server

    Naser, Arab

    2012-01-01

    Intelligent Transportation and Evacuation Planning: A Modeling-Based Approach provides a new paradigm for evacuation planning strategies and techniques. Recently, evacuation planning and modeling have increasingly attracted interest among researchers as well as government officials. This interest stems from the recent catastrophic hurricanes and weather-related events that occurred in the southeastern United States (Hurricanes Katrina and Rita). The evacuation methods that were in place before and during the hurricanes did not work well and resulted in thousands of deaths. This book offers insights into the methods and techniques that allow for implementing mathematical-based, simulation-based, and integrated optimization and simulation-based engineering approaches for evacuation planning. This book also: comprehensively discusses the application of mathematical models for evacuation and intelligent transportation modeling; covers advanced methodologies in evacuation modeling and planning; discusses principles a...

  13. Benchmarking novel approaches for modelling species range dynamics.

    Science.gov (United States)

    Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H; Moore, Kara A; Zimmermann, Niklaus E

    2016-08-01

    Increasing biodiversity loss due to climate change is one of the most vital challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species' range dynamics and extinction risks. Recently, several new approaches to model range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development. Indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range (dynamic) models of varying complexity including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet, all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have great impact on model accuracy, but prior system knowledge on important processes can reduce these uncertainties considerably. Our results reaffirm the clear merit of using dynamic approaches for modelling species' response to climate change but also emphasize several needs for further model and data improvement. We propose and discuss perspectives for improving range projections through the combination of multiple models and for making these approaches

  14. MULTI MODEL DATA MINING APPROACH FOR HEART FAILURE PREDICTION

    Directory of Open Access Journals (Sweden)

    Priyanka H U

    2016-09-01

    Full Text Available Developing predictive modelling solutions for risk estimation is extremely challenging in health-care informatics. Risk estimation involves the integration of heterogeneous clinical sources having different representations from different health-care providers, making the task increasingly complex. Such sources are typically voluminous and diverse, and change significantly over time. Therefore, distributed and parallel computing tools, collectively termed big data tools, are needed, which can synthesize data and assist the physician in making the right clinical decisions. In this work we propose a multi-model predictive architecture, a novel approach for combining the predictive ability of multiple models for better prediction accuracy. We demonstrate the effectiveness and efficiency of the proposed work on data from the Framingham Heart Study. Results show that the proposed multi-model predictive architecture is able to provide better accuracy than the best single-model approach. By modelling the errors of the predictive models we are able to choose a subset of models that yields accurate results. More information was modelled into the system by multi-level mining, which has resulted in enhanced predictive accuracy.
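
    One simple concrete form of a multi-model architecture is a stacking ensemble, in which a meta-learner combines several base classifiers. The scikit-learn sketch below uses synthetic data as a stand-in for clinical features; the architecture in the paper, including its error-modelling and model-selection steps, is richer than this.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier, StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # Synthetic stand-in for a clinical feature matrix and binary outcome.
    X, y = make_classification(n_samples=500, n_features=15, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # A logistic-regression meta-learner weights two base models' predictions.
    multi_model = StackingClassifier(
        estimators=[("rf", RandomForestClassifier(random_state=0)),
                    ("svm", SVC(probability=True, random_state=0))],
        final_estimator=LogisticRegression(),
    )
    multi_model.fit(X_tr, y_tr)
    print("held-out accuracy:", multi_model.score(X_te, y_te))
    ```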

  15. A new approach of high speed cutting modelling: SPH method

    OpenAIRE

    LIMIDO, Jérôme; Espinosa, Christine; Salaün, Michel; Lacome, Jean-Luc

    2006-01-01

    The purpose of this study is to introduce a new approach to high speed cutting numerical modelling. A lagrangian Smoothed Particle Hydrodynamics (SPH) based model is carried out using the Ls-Dyna software. SPH is a meshless method, thus large material distortions that occur in the cutting problem are easily managed, and SPH contact control permits a "natural" workpiece/chip separation. Estimated chip morphology and cutting forces are compared to machining-dedicated code results and experimenta...

  16. Infectious disease modeling a hybrid system approach

    CERN Document Server

    Liu, Xinzhi

    2017-01-01

    This volume presents infectious diseases modeled mathematically, taking seasonality and changes in population behavior into account, using a switched and hybrid systems framework. The scope of coverage includes background on mathematical epidemiology, including classical formulations and results; a motivation for seasonal effects and changes in population behavior; an investigation into term-time forced epidemic models with switching parameters; and a detailed account of several different control strategies. The main goal is to study these models theoretically and to establish conditions under which eradication or persistence of the disease is guaranteed. In doing so, the long-term behavior of the models is determined through mathematical techniques from switched systems theory. Numerical simulations are also given to augment and illustrate the theoretical results and to help study the efficacy of the control schemes.
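
    The flavor of a switched epidemic model can be sketched as an SIR system whose contact rate switches between seasonal regimes. The seasonal rates, regime length, and forward-Euler integration below are illustrative assumptions, not taken from the book.

    ```python
    import numpy as np

    def switched_sir(betas, gamma=0.1, days=365, dt=0.1):
        """Integrate an SIR model whose contact rate switches with the season."""
        s, i, r = 0.99, 0.01, 0.0
        trajectory = []
        for step in range(int(days / dt)):
            day = step * dt
            beta = betas[int(day // 91.25) % 4]  # four seasonal regimes per year
            new_inf = beta * s * i               # incidence
            new_rec = gamma * i                  # recoveries
            s, i, r = s - new_inf * dt, i + (new_inf - new_rec) * dt, r + new_rec * dt
            trajectory.append((day, s, i, r))
        return np.array(trajectory)

    # Hypothetical contact rates for winter, spring, summer, autumn.
    traj = switched_sir(betas=[0.40, 0.25, 0.15, 0.30])
    print(f"peak infected fraction: {traj[:, 2].max():.3f}")
    ```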

  17. Joint Modeling of Multiple Crimes: A Bayesian Spatial Approach

    Directory of Open Access Journals (Sweden)

    Hongqiang Liu

    2017-01-01

    Full Text Available A multivariate Bayesian spatial modeling approach was used to jointly model the counts of two types of crime, i.e., burglary and non-motor vehicle theft, and explore the geographic pattern of crime risks and relevant risk factors. In contrast to the univariate model, which assumes independence across outcomes, the multivariate approach takes into account potential correlations between crimes. Six independent variables are included in the model as potential risk factors. In order to fully present this method, both the multivariate model and its univariate counterpart are examined. We fitted the two models to the data and assessed them using the deviance information criterion. A comparison of the results from the two models indicates that the multivariate model was superior to the univariate model. Our results show that population density and bar density are clearly associated with both burglary and non-motor vehicle theft risks and indicate a close relationship between these two types of crime. The posterior means and 2.5% percentile of type-specific crime risks estimated by the multivariate model were mapped to uncover the geographic patterns. The implications, limitations and future work of the study are discussed in the concluding section.

  18. Seeking for the rational basis of the Median Model: the optimal combination of multi-model ensemble results

    Directory of Open Access Journals (Sweden)

    A. Riccio

    2007-12-01

    Full Text Available In this paper we present an approach for the statistical analysis of multi-model ensemble results. The models considered here are operational long-range transport and dispersion models, also used for the real-time simulation of pollutant dispersion or the accidental release of radioactive nuclides.

    We first introduce the theoretical basis (with its roots sinking into the Bayes theorem) and then apply this approach to the analysis of model results obtained during the ETEX-1 exercise. We recover some interesting results, supporting the heuristic approach called the "median model", originally introduced in Galmarini et al. (2004a, b).

    This approach also provides a way to systematically reduce (and quantify) model uncertainties, thus supporting the decision-making process and/or regulatory-purpose activities in a very effective manner.

  20. Towards a whole-cell modeling approach for synthetic biology

    Science.gov (United States)

    Purcell, Oliver; Jain, Bonny; Karr, Jonathan R.; Covert, Markus W.; Lu, Timothy K.

    2013-06-01

    Despite rapid advances over the last decade, synthetic biology lacks the predictive tools needed to enable rational design. Unlike established engineering disciplines, the engineering of synthetic gene circuits still relies heavily on experimental trial-and-error, a time-consuming and inefficient process that slows down the biological design cycle. This reliance on experimental tuning is because current modeling approaches are unable to make reliable predictions about the in vivo behavior of synthetic circuits. A major reason for this lack of predictability is that current models view circuits in isolation, ignoring the vast number of complex cellular processes that impinge on the dynamics of the synthetic circuit and vice versa. To address this problem, we present a modeling approach for the design of synthetic circuits in the context of cellular networks. Using the recently published whole-cell model of Mycoplasma genitalium, we examined the effect of adding genes into the host genome. We also investigated how codon usage correlates with gene expression and found agreement with existing experimental results. Finally, we successfully implemented a synthetic Goodwin oscillator in the whole-cell model. We provide an updated software framework for the whole-cell model that lays the foundation for the integration of whole-cell models with synthetic gene circuit models. This software framework is made freely available to the community to enable future extensions. We envision that this approach will be critical to transforming the field of synthetic biology into a rational and predictive engineering discipline.

  1. Flipped models in Trinification: A Comprehensive Approach

    CERN Document Server

    Rodríguez, Oscar; Ponce, William A; Rojas, Eduardo

    2016-01-01

    By considering the 3-3-1 and the left-right symmetric models as low energy effective theories of the trinification group, alternative versions of these models are found. The new neutral gauge bosons in the universal 3-3-1 model and its flipped versions are considered, as are the left-right symmetric model and its two flipped variants. For these models, the couplings of the $Z'$ bosons to the standard model fermions are reported. The explicit form of the null space of the vector boson mass matrix for an arbitrary Higgs tensor and gauge group is also presented. In the general framework of the trinification gauge group, and by using the LHC experimental results and EW precision data, limits on the $Z'$ mass and the mixing angle between $Z$ and the new gauge bosons $Z'$ are imposed. The general results call for very small mixing angles, of the order of $10^{-3}$ radians, and $M_{Z'} > 2.5$ TeV.

  2. SMART empirical approaches for predicting field performance of PV modules from results of reliability tests

    Science.gov (United States)

    Hardikar, Kedar Y.; Liu, Bill J. J.; Bheemreddy, Venkata

    2016-09-01

    Gaining an understanding of degradation mechanisms and characterizing them are critical in developing relevant accelerated tests to ensure PV module performance warranty over a typical lifetime of 25 years. As newer technologies are adapted for PV, including new PV cell technologies, new packaging materials, and newer product designs, the availability of field data over extended periods of time for product performance assessment cannot be expected within the typical timeframe for business decisions. In this work, to enable product design decisions and product performance assessment for PV modules utilizing newer technologies, the Simulation and Mechanism based Accelerated Reliability Testing (SMART) methodology and empirical approaches to predict field performance from accelerated test results are presented. The method is demonstrated for field life assessment of flexible PV modules based on degradation mechanisms observed in two accelerated tests, namely Damp Heat and Thermal Cycling. The method is based on the design of an accelerated testing scheme with the intent to develop relevant acceleration factor models. The acceleration factor model is validated by extensive reliability testing under different conditions going beyond the established certification standards. Once the acceleration factor model is validated for the test matrix, a modeling scheme is developed to predict field performance from the results of accelerated testing for particular failure modes of interest. Further refinement of the model can continue as more field data become available. While the demonstration of the method in this work is for thin film flexible PV modules, the framework and methodology can be adapted to other PV products.
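
    Temperature-driven acceleration factor models are often given an Arrhenius form; the abstract does not state which form the authors use, so the sketch below, with a hypothetical activation energy and chamber/field temperatures, is only a generic illustration of how accelerated-test hours are scaled to field exposure.

    ```python
    import math

    def arrhenius_af(ea_ev, t_field_c, t_test_c):
        """Arrhenius acceleration factor between field and test-chamber temperatures."""
        k_b = 8.617e-5  # Boltzmann constant in eV/K
        t_field, t_test = t_field_c + 273.15, t_test_c + 273.15
        return math.exp(ea_ev / k_b * (1.0 / t_field - 1.0 / t_test))

    # Hypothetical damp-heat mechanism: Ea = 0.7 eV, 45 C field, 85 C chamber.
    af = arrhenius_af(ea_ev=0.7, t_field_c=45.0, t_test_c=85.0)
    print(f"acceleration factor: {af:.1f}")
    print(f"1000 chamber hours ~ {1000 * af:.0f} field-equivalent hours")
    ```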

  3. Convergence results for a coarsening model using global linearization

    CERN Document Server

    Gallay, T; Gallay, Th.

    2002-01-01

    We study a coarsening model describing the dynamics of interfaces in the one-dimensional Allen-Cahn equation. Given a partition of the real line into intervals of length greater than one, the model consists in constantly eliminating the shortest interval of the partition by merging it with its two neighbors. We show that the mean-field equation for the time-dependent distribution of interval lengths can be explicitly solved using a global linearization transformation. This allows us to derive rigorous results on the long-time asymptotics of the solutions. If the average length of the intervals is finite, we prove that all distributions approach a uniquely determined self-similar solution. We also obtain global stability results for the family of self-similar profiles which correspond to distributions with infinite expectation.

  4. RESULTS OF INTERBANK EXCHANGE RATES FORECASTING USING STATE SPACE MODEL

    Directory of Open Access Journals (Sweden)

    Muhammad Kashif

    2008-07-01

    Full Text Available This study evaluates the performance of three alternative models for forecasting the daily interbank exchange rate of the U.S. dollar measured in Pak rupees. Simple ARIMA models and complex models such as GARCH-type models and a state space model are discussed and compared. Four different measures are used to evaluate forecasting accuracy. The main result is that the state space model provides the best performance among all the models.
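
    A state space model of the kind compared in the study can be fitted in a few lines with statsmodels. The sketch fits a local-level model, estimated by maximum likelihood via the Kalman filter, to a synthetic random-walk series standing in for the daily rate; the study's actual specification is not given in the abstract.

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Synthetic random-walk series standing in for daily PKR/USD rates.
    rng = np.random.default_rng(1)
    rates = 60.0 + np.cumsum(rng.normal(scale=0.05, size=500))

    # Local-level state space model; parameters estimated by the Kalman filter.
    model = sm.tsa.UnobservedComponents(rates, level="local level")
    fit = model.fit(disp=False)
    print(fit.forecast(steps=5))  # five-step-ahead forecasts
    ```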

  5. A database approach to information retrieval: The remarkable relationship between language models and region models

    CERN Document Server

    Hiemstra, Djoerd

    2010-01-01

    In this report, we unify two quite distinct approaches to information retrieval: region models and language models. Region models were developed for structured document retrieval. They provide a well-defined behaviour as well as a simple query language that allows application developers to rapidly develop applications. Language models are particularly useful to reason about the ranking of search results, and for developing new ranking approaches. The unified model allows application developers to define complex language modeling approaches as logical queries on a textual database. We show a remarkable one-to-one relationship between region queries and the language models they represent for a wide variety of applications: simple ad-hoc search, cross-language retrieval, video retrieval, and web search.
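
    To make the language-modelling side concrete, the sketch below ranks documents by smoothed query likelihood, one classic instance of the approach. The two-document corpus and the Jelinek-Mercer mixing weight are toy assumptions; the report's unified model expresses such scoring as region queries on a textual database.

    ```python
    from collections import Counter

    docs = {
        "d1": "xml element retrieval with region algebras",
        "d2": "language models rank search results by query likelihood",
    }

    def score(query, doc_tokens, collection, lam=0.5):
        """Query likelihood with Jelinek-Mercer smoothing against the collection."""
        tf, cf = Counter(doc_tokens), Counter(collection)
        s = 1.0
        for term in query.split():
            p_doc = tf[term] / len(doc_tokens)
            p_col = cf[term] / len(collection)
            s *= lam * p_doc + (1.0 - lam) * p_col  # smoothed term probability
        return s

    collection = [t for text in docs.values() for t in text.split()]
    ranking = sorted(docs, key=lambda d: -score("language models", docs[d].split(), collection))
    print(ranking)  # ['d2', 'd1']
    ```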

  6. Evaluating face trustworthiness: a model based approach.

    Science.gov (United States)

    Todorov, Alexander; Baron, Sean G; Oosterhof, Nikolaas N

    2008-06-01

    Judgments of trustworthiness from faces determine basic approach/avoidance responses and approximate the valence evaluation of faces that runs across multiple person judgments. Here, based on trustworthiness judgments and using a computer model for face representation, we built a model for representing face trustworthiness (study 1). Using this model, we generated novel faces with an increased range of trustworthiness and used these faces as stimuli in a functional Magnetic Resonance Imaging study (study 2). Although participants did not engage in explicit evaluation of the faces, the amygdala response changed as a function of face trustworthiness. An area in the right amygdala showed a negative linear response: as the untrustworthiness of faces increased, so did the amygdala response. Areas in the left and right putamen, the latter extending into the anterior insula, showed a similar negative linear response. The response in the left amygdala was quadratic, strongest for faces at both extremes of the trustworthiness dimension. The medial prefrontal cortex and precuneus also showed a quadratic response, but their response was strongest to faces in the middle range of the trustworthiness dimension.

  7. Random matrix model approach to chiral symmetry

    CERN Document Server

    Verbaarschot, J J M

    1996-01-01

    We review the application of random matrix theory (RMT) to chiral symmetry in QCD. Starting from the general philosophy of RMT we introduce a chiral random matrix model with the global symmetries of QCD. Exact results are obtained for universal properties of the Dirac spectrum: i) finite volume corrections to the valence quark mass dependence of the chiral condensate, and ii) microscopic fluctuations of Dirac spectra. Comparisons with lattice QCD simulations are made. Most notably, the variance of the number of levels in an interval containing $n$ levels on average is suppressed by a factor $(\log n)/(\pi^2 n)$. An extension of the random matrix model to nonzero temperatures and chemical potential provides us with a schematic model of the chiral phase transition. In particular, this elucidates the nature of the quenched approximation at nonzero chemical potential.

  8. Machine Learning Approaches for Modeling Spammer Behavior

    CERN Document Server

    Islam, Md Saiful; Islam, Md Rafiqul

    2010-01-01

    Spam is commonly known as unsolicited or unwanted email messages on the Internet, posing a potential threat to Internet security. Users spend a valuable amount of time deleting spam emails. More importantly, ever-increasing spam emails occupy server storage space and consume network bandwidth. Keyword-based spam email filtering strategies will eventually be less successful at modeling spammer behavior, as spammers constantly change their tricks to circumvent these filters. The evasive tactics that spammers use are patterns, and these patterns can be modeled to combat spam. This paper investigates the possibilities of modeling spammer behavioral patterns with well-known classification algorithms such as the Na\"ive Bayesian classifier (Na\"ive Bayes), Decision Tree Induction (DTI) and Support Vector Machines (SVMs). Preliminary experimental results demonstrate a promising detection rate of around 92%, which is a considerable performance enhancement compared to similar spammer behavior modeling research.
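
    One of the investigated classifiers, Naive Bayes, can be reproduced in miniature with scikit-learn. The four-message corpus below is obviously a toy; the paper's experiments train on real email corpora and also evaluate decision trees and SVMs.

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Toy corpus: 1 = spam, 0 = ham.
    emails = ["win money now", "meeting at noon", "cheap pills win money", "project report attached"]
    labels = [1, 0, 1, 0]

    # Bag-of-words features feeding a multinomial Naive Bayes classifier.
    classifier = make_pipeline(CountVectorizer(), MultinomialNB())
    classifier.fit(emails, labels)
    print(classifier.predict(["win cheap money", "see the attached report"]))
    ```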

  9. VNIR spectral modeling of Mars analogue rocks: first results

    Science.gov (United States)

    Pompilio, L.; Roush, T.; Pedrazzi, G.; Sgavetti, M.

    Knowledge regarding the surface composition of Mars and other bodies of the inner solar system is fundamental to understanding their origin, evolution, and internal structures. Technological improvements of remote sensors and associated implications for planetary studies have encouraged increased laboratory and field spectroscopy research to model the spectral behavior of terrestrial analogues for planetary surfaces. This approach has proven useful during Martian surface and orbital missions, and in petrologic studies of Martian SNC meteorites. Thermal emission data have been used to suggest two lithologies occurring on the Martian surface: basalt with abundant plagioclase and clinopyroxene, and andesite dominated by plagioclase and volcanic glass [1,2]. Weathered basalt has been suggested as an alternative to the andesite interpretation [3,4]. Orbital VNIR spectral imaging data also suggest the crust is dominantly basaltic, chiefly feldspar and pyroxene [5,6]. A few outcrops of ancient crust have higher concentrations of olivine and low-Ca pyroxene and have been interpreted as cumulates [6]. Based upon these orbital observations, future lander/rover missions can be expected to encounter particulate soils, rocks, and rock outcrops. Approaches to the qualitative and quantitative analysis of remotely-acquired spectra have been successfully used to infer the presence and abundance of minerals and to discover compositionally associated spectral trends [7-9]. Both empirical [10] and mathematical [e.g. 11-13] methods have been applied, typically with full compositional knowledge, chiefly to particulate samples, and as a result cannot be considered objective techniques for predicting compositional information, especially for understanding the spectral behavior of rocks. Extending the compositional modeling efforts to include more rocks and developing objective criteria in the modeling are the next required steps. This is the focus of the present investigation. We present results of

  10. A Computationally Efficient State Space Approach to Estimating Multilevel Regression Models and Multilevel Confirmatory Factor Models.

    Science.gov (United States)

    Gu, Fei; Preacher, Kristopher J; Wu, Wei; Yung, Yiu-Fai

    2014-01-01

    Although the state space approach for estimating multilevel regression models has been well established for decades in the time series literature, it has received little attention from educational and psychological researchers. In this article, we (a) introduce the state space approach for estimating multilevel regression models and (b) extend the state space approach for estimating multilevel factor models. A brief outline of the state space formulation is provided, and state space forms for univariate and multivariate multilevel regression models, and a multilevel confirmatory factor model, are illustrated. The utility of the state space approach is demonstrated with either a simulated or real example for each multilevel model. It is concluded that the results from the state space approach are essentially identical to those from specialized multilevel regression modeling and structural equation modeling software. More importantly, the state space approach offers researchers a computationally more efficient alternative to fit multilevel regression models with a large number of Level 1 units within each Level 2 unit or a large number of observations on each subject in a longitudinal study.

  11. and Models: A Self-Similar Approach

    Directory of Open Access Journals (Sweden)

    José Antonio Belinchón

    2013-01-01

    equations (FEs) admit self-similar solutions. The methods employed allow us to obtain general results that are valid not only for the FRW metric, but also for all the Bianchi types as well as for the Kantowski-Sachs model (under the self-similarity hypothesis and the power-law hypothesis for the scale factors).

  12. Nonperturbative approach to the modified statistical model

    Energy Technology Data Exchange (ETDEWEB)

    Magdy, M.A.; Bekmezci, A.; Sever, R. [Middle East Technical Univ., Ankara (Turkey)

    1993-12-01

    The modified form of the statistical model is used without making any perturbation. The mass spectra of the lowest S, P and D levels of the $(Q\bar{Q})$ and the non-self-conjugate $(Q\bar{q})$ mesons are studied with the Song-Lin potential. The authors' results are in good agreement with the experimental and theoretical findings.

  13. Real-space renormalization group approach to the Anderson model

    Science.gov (United States)

    Campbell, Eamonn

    Many of the most interesting electronic behaviours currently being studied are associated with strong correlations. In addition, many of these materials are disordered, either intrinsically or due to doping. Solving interacting systems exactly is extremely computationally expensive, and approximate techniques developed for strongly correlated systems are not easily adapted to include disorder. Since the Anderson model is non-interacting but disordered, it makes sense to consider it as a first step in developing an approximate method of solution for the interacting and disordered Anderson-Hubbard model. Our renormalization group (RG) approach is modeled on that proposed by Johri and Bhatt [23]. We found an error in their work, which we have corrected in our procedure. After testing the execution of the RG, we benchmarked the density of states and inverse participation ratio results against exact diagonalization. Our approach is significantly faster than exact diagonalization and is most accurate in the limit of strong disorder.
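
    The benchmark quantities mentioned are easy to reproduce for small chains by exact diagonalization. The sketch below builds a 1-D Anderson Hamiltonian with uniform on-site disorder and computes the mean inverse participation ratio (IPR), which grows as states localize; the chain length, disorder strengths, and open boundary conditions are illustrative choices, not those of the thesis.

    ```python
    import numpy as np

    def mean_ipr(n_sites=200, disorder=2.0, hopping=1.0, seed=0):
        """Exact diagonalization of a 1-D Anderson chain; returns the mean IPR."""
        rng = np.random.default_rng(seed)
        onsite = rng.uniform(-disorder / 2, disorder / 2, n_sites)
        h = np.diag(onsite)
        h += np.diag(np.full(n_sites - 1, -hopping), 1)   # nearest-neighbour hopping
        h += np.diag(np.full(n_sites - 1, -hopping), -1)
        _, states = np.linalg.eigh(h)
        return np.sum(np.abs(states) ** 4, axis=0).mean()  # larger IPR = more localized

    for w in (0.5, 2.0, 8.0):
        print(f"disorder W = {w}: mean IPR = {mean_ipr(disorder=w):.4f}")
    ```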

  14. Similarity transformation approach to identifiability analysis of nonlinear compartmental models.

    Science.gov (United States)

    Vajda, S; Godfrey, K R; Rabitz, H

    1989-04-01

    Through use of the local state isomorphism theorem instead of the algebraic equivalence theorem of linear systems theory, the similarity transformation approach is extended to nonlinear models, resulting in finitely verifiable sufficient and necessary conditions for global and local identifiability. The approach requires testing of certain controllability and observability conditions, but in many practical examples these conditions prove very easy to verify. In principle the method also involves nonlinear state variable transformations, but in all of the examples presented in the paper the transformations turn out to be linear. The method is applied to an unidentifiable nonlinear model and a locally identifiable nonlinear model, and these are the first nonlinear models other than bilinear models where the reason for lack of global identifiability is nontrivial. The method is also applied to two models with Michaelis-Menten elimination kinetics, both of considerable importance in pharmacokinetics, and for both of which the complicated nature of the algebraic equations arising from the Taylor series approach has hitherto defeated attempts to establish identifiability results for specific input functions.

  15. Approaches and models of intercultural education

    Directory of Open Access Journals (Sweden)

    Iván Manuel Sánchez Fontalvo

    2013-10-01

    Full Text Available To build an intercultural society, awareness must be fostered in all social spheres, and education plays a central role in this task. Its role is transcendental, since it must create educational spaces that form people with the virtues and capacities that allow them to live together in multicultural and socially diverse (sometimes unequal) contexts in an increasingly globalized and interconnected world, and it must foster the development of shared feelings of civic belonging to the neighborhood, city, region and country, giving people concern and critical judgement toward marginalization, poverty, misery and the inequitable distribution of wealth, the causes of structural violence, while at the same time motivating them to work for the welfare and transformation of these scenarios. From these premises, it is important to know the approaches and models of intercultural education that have been developed so far, analysing their impact on the educational contexts where they are applied.

  16. Modeling Water Shortage Management Using an Object-Oriented Approach

    Science.gov (United States)

    Wang, J.; Senarath, S.; Brion, L.; Niedzialek, J.; Novoa, R.; Obeysekera, J.

    2007-12-01

    As a result of the increasing global population and the resulting urbanization, water shortage issues have received increased attention throughout the world. Water supply has not been able to keep up with increased demand for water, especially during times of drought. The use of an object-oriented (OO) approach coupled with efficient mathematical models is an effective tool for addressing discrepancies between water supply and demand. Object-oriented modeling has been proven powerful and efficient in simulating natural behavior. This research presents a way to model water shortage management using the OO approach. Three groups of conceptual components using the OO approach are designed for the management model. The first group encompasses the evaluation of natural behaviors and possible related management options; this evaluation includes assessing any discrepancy that might exist between water demand and supply. The second group is for decision making, which includes the determination of the water use cutback amount and duration using established criteria. The third group is for the implementation of the management options, which are restrictions of water usage at a local or regional scale. The loop is closed through a feedback mechanism whereby continuity in the time domain is established. As in many other regions, drought management is very important in south Florida. The Regional Simulation Model (RSM) is a finite volume, fully integrated hydrologic model used by the South Florida Water Management District to evaluate regional responses to various planning alternatives, including drought management. A trigger module was developed for RSM that encapsulates the OO approach to water shortage management. Rigorous testing of the module was performed using historical south Florida conditions. Keywords: Object-oriented, modeling, water shortage management, trigger module, Regional Simulation Model
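
    The three conceptual groups map naturally onto objects. The Python sketch below is a deliberately simplified caricature of the design: the class names, cutback cap, and single supply/demand pair are hypothetical and are not the RSM trigger module's actual API.

    ```python
    class Evaluation:
        """Group 1: assess the discrepancy between demand and supply."""
        def assess(self, supply, demand):
            return max(0.0, demand - supply)

    class Decision:
        """Group 2: choose a cutback fraction from established criteria."""
        def cutback(self, shortage, demand):
            return min(0.45, shortage / demand)  # hypothetical 45% cap

    class Implementation:
        """Group 3: apply the restriction; feedback closes the loop in time."""
        def apply(self, demand, cutback):
            return demand * (1.0 - cutback)

    supply, demand = 80.0, 100.0
    evaluate, decide, act = Evaluation(), Decision(), Implementation()
    for step in range(3):
        shortage = evaluate.assess(supply, demand)
        demand = act.apply(demand, decide.cutback(shortage, demand))
        print(f"step {step}: restricted demand = {demand:.1f}")
    ```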

  17. A simplified GIS approach to modeling global leaf water isoscapes.

    Directory of Open Access Journals (Sweden)

    Jason B West

    Full Text Available The stable hydrogen (δ2H) and oxygen (δ18O) isotope ratios of organic and inorganic materials record biological and physical processes through the effects of substrate isotopic composition and fractionations that occur as reactions proceed. At large scales, these processes can exhibit spatial predictability because of the effects of coherent climatic patterns over the Earth's surface. Attempts to model spatial variation in the stable isotope ratios of water have been made for decades. Leaf water has a particular importance for some applications, including plant organic materials that record spatial and temporal climate variability and that may be a source of food for migrating animals. It is also an important source of the variability in the isotopic composition of atmospheric gases. Although efforts to model global-scale leaf water isotope ratio spatial variation have been made (especially of δ18O), significant uncertainty remains in the models and their execution across spatial domains. We introduce here a Geographic Information System (GIS) approach to the generation of global, spatially-explicit isotope landscapes (= isoscapes) of "climate normal" leaf water isotope ratios. We evaluate the approach and the resulting products by comparison with simulation model outputs and point measurements, where obtainable, over the Earth's surface. The isoscapes were generated using biophysical models of isotope fractionation and spatially continuous precipitation isotope and climate layers as input model drivers. The leaf water δ18O isoscapes produced here generally agreed with latitudinal averages from GCM/biophysical model products, as well as mean values from point measurements. These results show global-scale spatial coherence in leaf water isotope ratios, similar to that observed for precipitation, and validate the GIS approach to modeling leaf water isotopes. These results demonstrate that relatively simple models of leaf water enrichment

  18. Botswana water and surface energy balance research program. Part 1: Integrated approach and field campaign results

    Science.gov (United States)

    Vandegriend, A. A.; Owe, M.; Vugts, H. F.; Ramothwa, G. K.

    1992-01-01

    The Botswana water and surface energy balance research program was developed to study and evaluate the integrated use of multispectral satellite remote sensing for monitoring the hydrological status of the Earth's surface. Results of the first part of the program (Botswana 1), which ran from 1 Jan. 1988 to 31 Dec. 1990, are summarized. Botswana 1 consisted of two major, mutually related components: a surface energy balance modeling component, built around an extensive field campaign, and a passive microwave research component, which consisted of a retrospective study of large-scale moisture conditions and Nimbus scanning multichannel microwave radiometer microwave signatures. The integrated approach of both components is described, the activities performed during the surface energy modeling component, including the extensive field campaign, are summarized, and the results of the passive microwave component are summarized. The key feature of the field campaign was a multilevel approach, whereby measurements by various similar sensors were made at several altitudes and resolutions. Data collection was performed at two adjacent sites of contrasting surface character. The following measurements were made: micrometeorological measurements, surface temperatures, soil temperatures, soil moisture, vegetation (leaf area index and biomass), satellite data, aircraft data, atmospheric soundings, stomatal resistance, and surface emissivity.

  19. Kinetic equations modelling wealth redistribution: a comparison of approaches.

    Science.gov (United States)

    Düring, Bertram; Matthes, Daniel; Toscani, Giuseppe

    2008-11-01

    Kinetic equations modelling the redistribution of wealth in simple market economies are one of the major topics in the field of econophysics. We present a unifying approach to the qualitative study of a large variety of such models, which is based on a moment analysis in the related homogeneous Boltzmann equation, and on the use of suitable metrics for probability measures. In consequence, we are able to classify the most important feature of the steady wealth distribution, namely the fatness of the Pareto tail, and the dynamical stability of the latter in terms of the model parameters. Our results apply, e.g., to the market model with risky investments [S. Cordier, L. Pareschi, and G. Toscani, J. Stat. Phys. 120, 253 (2005)], and to the model with quenched saving propensities [A. Chatterjee, B. K. Chakrabarti, and S. S. Manna, Physica A 335, 155 (2004)]. We also present results from numerical experiments that confirm the theoretical predictions.

  20. A relaxation-based approach to damage modeling

    Science.gov (United States)

    Junker, Philipp; Schwarz, Stephan; Makowski, Jerzy; Hackl, Klaus

    2017-01-01

    Material models, including softening effects due to, for example, damage and localizations, share the problem of ill-posed boundary value problems that yield mesh-dependent finite element results. It is thus necessary to apply regularization techniques that couple local behavior described, for example, by internal variables, at a spatial level. This can take account of the gradient of the internal variable to yield mesh-independent finite element results. In this paper, we present a new approach to damage modeling that does not use common field functions, inclusion of gradients or complex integration techniques: Appropriate modifications of the relaxed (condensed) energy hold the same advantage as other methods, but with much less numerical effort. We start with the theoretical derivation and then discuss the numerical treatment. Finally, we present finite element results that prove empirically how the new approach works.

  1. A consortium approach to glass furnace modeling.

    Energy Technology Data Exchange (ETDEWEB)

    Chang, S.-L.; Golchert, B.; Petrick, M.

    1999-04-20

    Using computational fluid dynamics to model a glass furnace is a difficult task for any one glass company, laboratory, or university to accomplish. The task of building a computational model of the furnace requires knowledge and experience in modeling two dissimilar regimes (the combustion space and the liquid glass bath), along with the skill necessary to couple these two regimes. Also, a detailed set of experimental data is needed in order to evaluate the output of the code to ensure that the code is providing proper results. Since all these diverse skills are not present in any one research institution, a consortium was formed between Argonne National Laboratory, Purdue University, Mississippi State University, and five glass companies in order to marshal these skills into one three-year program. The objective of this program is to develop a fully coupled, validated simulation of a glass melting furnace that may be used by industry to optimize the performance of existing furnaces.

  3. Approach for workflow modeling using π-calculus

    Institute of Scientific and Technical Information of China (English)

    杨东; 张申生

    2003-01-01

    As a variant of process algebra, π-calculus can describe the interactions between evolving processes. By modeling activity as a process interacting with other processes through ports, this paper presents a new approach: representing workflow models using π-calculus. As a result, the model can characterize the dynamic behaviors of the workflow process in terms of the LTS (Labeled Transition Semantics) semantics of π-calculus. The main advantage of the workflow model's formal semantic is that it allows for verification of the model's properties, such as deadlock-free and normal termination. Moreover, the equivalence of workflow models can be checked through weak bisimulation theorem in the π-calculus, thus facilitating the optimization of business processes.

  4. A contemporary approach to the problem of determining physical parameters according to the results of measurements

    Science.gov (United States)

    Elyasberg, P. Y.

    1979-01-01

    The shortcomings of the classical approach are set forth, and the newer methods resulting from these shortcomings are explained. The problem is approached both under the assumption that the probabilities of error are known and without knowledge of their distribution. The advantages of the newer approach are discussed.

  5. A Bayesian modeling approach for generalized semiparametric structural equation models.

    Science.gov (United States)

    Song, Xin-Yuan; Lu, Zhao-Hua; Cai, Jing-Heng; Ip, Edward Hak-Sing

    2013-10-01

    In behavioral, biomedical, and psychological studies, structural equation models (SEMs) have been widely used for assessing relationships between latent variables. Regression-type structural models based on parametric functions are often used for such purposes. In many applications, however, parametric SEMs are not adequate to capture subtle patterns in the functions over the entire range of the predictor variable. A different but equally important limitation of traditional parametric SEMs is that they are not designed to handle mixed data types: continuous, count, ordered, and unordered categorical. This paper develops a generalized semiparametric SEM that is able to handle mixed data types and to simultaneously model different functional relationships among latent variables. A structural equation of the proposed SEM is formulated using a series of unspecified smooth functions. The Bayesian P-splines approach and Markov chain Monte Carlo methods are developed to estimate the smooth functions and the unknown parameters. Moreover, we examine the relative benefits of semiparametric modeling over parametric modeling using a Bayesian model-comparison statistic, called the complete deviance information criterion (DIC). The performance of the developed methodology is evaluated using a simulation study. To illustrate the method, we used a data set derived from the National Longitudinal Survey of Youth.

  6. Comparative flood damage model assessment: towards a European approach

    Directory of Open Access Journals (Sweden)

    B. Jongman

    2012-12-01

    Full Text Available There is a wide variety of flood damage models in use internationally, differing substantially in their approaches and economic estimates. Since these models are being used more and more as a basis for investment and planning decisions on an increasingly large scale, there is a need to reduce the uncertainties involved and develop a harmonised European approach, in particular with respect to the EU Flood Risks Directive. In this paper we present a qualitative and quantitative assessment of seven flood damage models, using two case studies of past flood events in Germany and the United Kingdom. The qualitative analysis shows that modelling approaches vary strongly, and that current methodologies for estimating infrastructural damage are not as well developed as methodologies for the estimation of damage to buildings. The quantitative results show that the model outcomes are very sensitive to uncertainty in both vulnerability (i.e. depth–damage functions) and exposure (i.e. asset values), whereby the former has a larger effect than the latter. We conclude that care needs to be taken when using aggregated land use data for flood risk assessment, and that it is essential to adjust asset values to the regional economic situation and property characteristics. We call for the development of a flexible but consistent European framework that applies best practice from existing models while providing room for including necessary regional adjustments.

  7. Ionization coefficient approach to modeling breakdown in nonuniform geometries.

    Energy Technology Data Exchange (ETDEWEB)

    Warne, Larry Kevin; Jorgenson, Roy Eberhardt; Nicolaysen, Scott D.

    2003-11-01

    This report summarizes the work on breakdown modeling in nonuniform geometries by the ionization coefficient approach. Included are: (1) fits to primary and secondary ionization coefficients used in the modeling; (2) analytical test cases for sphere-to-sphere, wire-to-wire, corner, coaxial, and rod-to-plane geometries; (3) a compilation of experimental data with source references; and (4) comparisons between code results, test case results, and experimental data. A simple criterion is proposed to differentiate between corona and spark. The effect of a dielectric surface on avalanche growth is examined by means of Monte Carlo simulations. The presence of a clean dry surface does not appear to enhance growth.

  8. Impact Flash Physics: Modeling and Comparisons With Experimental Results

    Science.gov (United States)

    Rainey, E.; Stickle, A. M.; Ernst, C. M.; Schultz, P. H.; Mehta, N. L.; Brown, R. C.; Swaminathan, P. K.; Michaelis, C. H.; Erlandson, R. E.

    2015-12-01

    Hypervelocity impacts frequently generate an observable "flash" of light with two components: a short-duration spike due to emissions from vaporized material, and a long-duration peak due to thermal emissions from expanding hot debris. The intensity and duration of these peaks depend on the impact velocity, angle, and the target and projectile mass and composition. Thus remote sensing measurements of planetary impact flashes have the potential to constrain the properties of impacting meteors and improve our understanding of impact flux and cratering processes. Interpreting impact flash measurements requires a thorough understanding of how flash characteristics correlate with impact conditions. Because planetary-scale impacts cannot be replicated in the laboratory, numerical simulations are needed to provide this insight for the solar system. Computational hydrocodes can produce detailed simulations of the impact process, but they lack the radiation physics required to model the optical flash. The Johns Hopkins University Applied Physics Laboratory (APL) developed a model to calculate the optical signature from the hot debris cloud produced by an impact. While the phenomenology of the optical signature is understood, the details required to accurately model it are complicated by uncertainties in material and optical properties and the simplifications required to numerically model radiation from large-scale impacts. Comparisons with laboratory impact experiments allow us to validate our approach and to draw insight regarding processes that occur at all scales in impact events, such as melt generation. We used Sandia National Lab's CTH shock physics hydrocode along with the optical signature model developed at APL to compare with a series of laboratory experiments conducted at the NASA Ames Vertical Gun Range. The experiments used Pyrex projectiles to impact pumice powder targets with velocities ranging from 1 to 6 km/s at angles of 30 and 90 degrees with respect to

  9. On a Markovian approach for modeling passive solar devices

    Energy Technology Data Exchange (ETDEWEB)

    Bottazzi, F.; Liebling, T.M. (Chaire de Recherche Operationelle, Ecole Polytechnique Federale de Lausanne (Switzerland)); Scartezzini, J.L.; Nygaard-Ferguson, M. (Lab. d'Energie Solaire et de Physique du Batiment, Ecole Polytechnique Federale de Lausanne (Switzerland))

    1991-01-01

    Stochastic models for the analysis of the energy and thermal comfort performances of passive solar devices have been increasingly studied for over a decade. A new approach to thermal building modeling, based on Markov chains, is proposed here to combine both the accuracy of traditional dynamic simulation with the practical advantages of simplified methods. A main difficulty of the Markovian approach is the discretization of the system variables. Efficient procedures have been developed to carry out this discretization and several numerical experiments have been performed to analyze the possibilities and limitations of the Markovian model. Despite its restrictive assumptions, it will be shown that accurate results are indeed obtained by this method. However, due to discretization, computer memory requirements are more than inversely proportional to accuracy.
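
    The sketch below shows the bare Markovian mechanics the abstract refers to: a discretized temperature state space and a transition matrix propagated in time. The state grid and matrix entries are hand-made for illustration; in the actual method they would be derived from the building dynamics and weather statistics.

```python
# Minimal Markov-chain view of a thermal building model: discretize indoor
# temperature into states and propagate the state distribution.
import numpy as np

temps = np.array([16.0, 18.0, 20.0, 22.0, 24.0])   # state grid (deg C)
P = np.array([                                     # row-stochastic transitions
    [0.6, 0.4, 0.0, 0.0, 0.0],
    [0.2, 0.5, 0.3, 0.0, 0.0],
    [0.0, 0.2, 0.5, 0.3, 0.0],
    [0.0, 0.0, 0.3, 0.5, 0.2],
    [0.0, 0.0, 0.0, 0.4, 0.6],
])

pi = np.array([0.0, 0.0, 1.0, 0.0, 0.0])           # start at 20 deg C
for hour in range(24):
    pi = pi @ P                                    # one time step
print("expected temperature after 24 h:", temps @ pi)

# Long-run (stationary) distribution for comfort statistics: the eigenvector
# of P-transpose with eigenvalue 1.
w, v = np.linalg.eig(P.T)
stat = np.real(v[:, np.argmax(np.real(w))])
print("stationary distribution:", stat / stat.sum())
```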

  10. The Generalised Ecosystem Modelling Approach in Radiological Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Klos, Richard

    2008-03-15

    An independent modelling capability is required by SSI in order to evaluate dose assessments carried out in Sweden by, amongst others, SKB. The main focus is the evaluation of the long-term radiological safety of radioactive waste repositories for both spent fuel and low-level radioactive waste. To meet the requirement for an independent modelling tool for use in biosphere dose assessments, SSI through its modelling team CLIMB commissioned the development of a new model in 2004, a project to produce an integrated model of radionuclides in the landscape. The generalised ecosystem modelling approach (GEMA) is the result. GEMA is a modular system of compartments representing the surface environment. It can be configured, through water and solid material fluxes, to represent local details in the range of ecosystem types found in the past, present and future Swedish landscapes. The approach is generic but fine-tuning can be carried out using local details of the surface drainage system. The modular nature of the modelling approach means that GEMA modules can be linked to represent large scale surface drainage features over an extended domain in the landscape. System change can also be managed in GEMA, allowing a flexible and comprehensive model of the evolving landscape to be constructed. Environmental concentrations of radionuclides can be calculated and the GEMA dose pathway model provides a means of evaluating the radiological impact of radionuclide release to the surface environment. This document sets out the philosophy and details of GEMA and illustrates the functioning of the model with a range of examples featuring the recent CLIMB review of SKB's SR-Can assessment.
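
    A GEMA-like calculation in miniature: the sketch below assumes a made-up three-compartment surface system (soil, stream water, sediment) with invented transfer and decay rates, and only illustrates the compartments-and-fluxes structure described above.

```python
# Linear compartment model of radionuclide transfer between surface boxes,
# driven by a constant release.  All compartments and rates are invented.
import numpy as np
from scipy.integrate import solve_ivp

# Compartments: 0 = soil, 1 = stream water, 2 = lake sediment
k_soil_water = 0.05    # 1/yr, leaching
k_water_sed  = 0.50    # 1/yr, settling
lam          = 0.025   # 1/yr, radioactive decay
release      = 1.0     # Bq/yr released into soil

def rhs(t, c):
    soil, water, sed = c
    return [release - (k_soil_water + lam) * soil,
            k_soil_water * soil - (k_water_sed + lam) * water,
            k_water_sed * water - lam * sed]

sol = solve_ivp(rhs, (0.0, 200.0), [0.0, 0.0, 0.0])
print("compartment inventories at t = 200 yr (Bq):", sol.y[:, -1])
```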

  11. Integration models: multicultural and liberal approaches confronted

    Science.gov (United States)

    Janicki, Wojciech

    2012-01-01

    European societies have been shaped by their Christian past, the upsurge of international migration, democratic rule and a liberal tradition rooted in religious tolerance. Accelerating globalization processes impose new challenges on European societies striving to protect their diversity. This struggle is especially clearly visible in the case of minorities trying to resist melting into the mainstream culture. European countries' legal systems and cultural policies respond to these efforts in many ways. Respecting identity-politics-driven group rights seems to be the most common approach, resulting in the creation of a multicultural society. However, the outcome of respecting group rights may be remarkably contradictory both to individual rights growing out of the liberal tradition, and to the reinforced concept of integration of immigrants into host societies. This paper discusses the upturn of identity politics in the context of both individual rights and the integration of European societies.

  12. ON SOME APPROACHES TO ECONOMIC-MATHEMATICAL MODELING OF SMALL BUSINESS

    Directory of Open Access Journals (Sweden)

    Orlov A. I.

    2015-04-01

    Full Text Available Small business is an important part of the modern Russian economy. We present a broad panorama of possible approaches to constructing economic-mathematical models that may be useful for describing the dynamics of small businesses, as well as for their management. Since a variety of types of economic-mathematical and econometric models can be used to describe particular problems of small business, we found it useful to consider a fairly wide range of such models, which results in rather short descriptions of the individual models. Each description is developed to the level at which an experienced professional in economic-mathematical modeling could, if necessary, carry the specific model through to design formulas and numerical results. Particular attention is paid to statistical methods for non-numeric data, currently the most pressing. The problems of economic-mathematical modeling in small business marketing are also considered. We have accumulated experience in applying this methodology to practical problems in small business marketing, in particular in the fields of consumer and industrial goods and educational services, as well as in the analysis and modeling of inflation, taxation and other areas. In marketing models of decision-making theory we apply rankings and ratings, and we consider the problem of comparing averages. We present some models of the life cycle of small businesses: a project flow model, a niche-capture model, and a niche-selection model. We conclude by discussing the development of research on economic-mathematical modeling of small businesses.

  13. Development of a computationally efficient urban modeling approach

    DEFF Research Database (Denmark)

    Wolfs, Vincent; Murla, Damian; Ntegeka, Victor

    2016-01-01

    This paper presents a parsimonious and data-driven modelling approach to simulate urban floods. Flood levels simulated by detailed 1D-2D hydrodynamic models can be emulated using the presented conceptual modelling approach with a very short calculation time. In addition, the model detail can be a...

  14. Implementing a stepped-care approach in primary care: results of a qualitative study

    Directory of Open Access Journals (Sweden)

    Franx Gerdien

    2012-01-01

    Full Text Available Abstract Background Since 2004, 'stepped-care models' have been adopted in several international evidence-based clinical guidelines to guide clinicians in the organisation of depression care. To enhance the adoption of this new treatment approach, a Quality Improvement Collaborative (QIC) was initiated in the Netherlands. Methods Alongside the QIC, an intervention study using a controlled before-and-after design was performed. Part of the study was a process evaluation, utilizing semi-structured group interviews, to provide insight into the perceptions of the participating clinicians on the implementation of stepped care for depression into their daily routines. Participants were primary care clinicians, specialist clinicians, and other healthcare staff from eight regions in the Netherlands. Analysis was supported by the Normalisation Process Theory (NPT). Results The introduction of a stepped-care model for depression to primary care teams within the context of a depression QIC was generally well received by participating clinicians. All three elements of the proposed stepped-care model (patient differentiation, stepped-care treatment, and outcome monitoring) were translated and introduced locally. Clinicians reported changes in terms of learning how to differentiate between patient groups and different levels of care, changing antidepressant prescribing routines as a consequence of having a broader treatment package to offer to their patients, and better working relationships with patients and colleagues. A complex range of factors influenced the implementation process. Facilitating factors were the stepped-care model itself, the structured team meetings (part of the QIC method), and the positive reaction from patients to stepped care. The differing views of depression and depression care within multidisciplinary health teams, lack of resources, and poor information systems hindered the rapid introduction of the stepped-care model. The NPT

  15. Numerical Results of 3-D Modeling of Moon Accumulation

    Science.gov (United States)

    Khachay, Yurie; Anfilogov, Vsevolod; Antipin, Alexandr

    2014-05-01

    For a long time the preferred model of the Moon's formation has been the mega-impact model, in which the forming of the Earth and its satellite was the consequence of the Earth's collision with a body of Mercury's mass. But all dynamical models of the Earth's accumulation, and the estimations based on the Pb-Pb system, lead to the conclusion that the duration of the planet's accumulation was about 1 billion years, whereas isotopic results from the W-Hf system testify to a very early (5-10 million years) division of the geochemical reservoirs of the core and mantle. In [1,2] it is shown that accounting for the energy dissipated by the decay of short-lived radioactive elements, first of all Al26, is sufficient for heating even small bodies with dimensions of about (50-100) km up to the iron melting temperature, so that a principally new differentiation mechanism can be realized. The inner parts of the melted preplanets, which are mainly of iron content, can join together, while the cold silicate fragments return to the supply zone and additionally shift the composition of the Moon-forming material towards silicates. Only after the increase of the Earth's gravitational radius can the growing area of the future Earth's core also retain the silicate envelope fragments [3]. For understanding the further evolution of the Earth-Moon system it is significant to trace the origin and evolution of the heterogeneities which occur during its accumulation stage. In this paper we model the changes of temperature, pressure and matter flow velocity in a block of a 3D spherical body with a growing radius. The boundary problem is solved by the finite-difference method for the system of equations describing the accumulation process: the Safronov equation, the impulse balance (Navier-Stokes) equation, and the equations for the above-lithostatic pressure and heat conductivity, in velocity-pressure variables using the Boussinesq approximation. The numerical algorithm of the problem solution in velocity

  16. A participatory modelling approach to developing a numerical sediment dynamics model

    Science.gov (United States)

    Jones, Nicholas; McEwen, Lindsey; Parker, Chris; Staddon, Chad

    2016-04-01

    Fluvial geomorphology is recognised as an important consideration in policy and legislation in the management of river catchments. Despite this recognition, limited knowledge exchange occurs between scientific researchers and river management practitioners. An example of this can be found within the limited uptake of numerical models of sediment dynamics by river management practitioners in the United Kingdom. The uptake of these models amongst the applied community is important as they have the potential to articulate how, at the catchment-scale, the impacts of management strategies of land-use change affect sediment dynamics and resulting channel quality. This paper describes and evaluates a new approach which involves river management stakeholders in an iterative and reflexive participatory modelling process. The aim of this approach was to create an environment for knowledge exchange between the stakeholders and the research team in the process of co-constructing a model. This process adopted a multiple case study approach, involving four groups of river catchment stakeholders in the United Kingdom. These stakeholder groups were involved in several stages of the participatory modelling process including: requirements analysis, model design, model development, and model evaluation. Stakeholders have provided input into a number of aspects of the modelling process, such as: data requirements, user interface, modelled processes, model assumptions, model applications, and model outputs. This paper will reflect on this process, in particular: the innovative methods used, data generated, and lessons learnt.

  17. Value Delivery Architecture Modeling – A New Approach for Business Modeling

    Directory of Open Access Journals (Sweden)

    Joachim Metzger

    2015-08-01

    Full Text Available Complexity and uncertainty have evolved as important challenges for entrepreneurship in many industries. Value Delivery Architecture Modeling (VDAM) is a proposal for a new approach to business modeling to meet these challenges. In addition to the creation of transparency and clarity, our approach supports the operationalization of business model ideas. VDAM is based on the combination of a new business modeling language called VDML, ontology building, and the implementation of a level of cross-company abstraction. The application of our new approach in the area of electric mobility in Germany, an industry sector with high levels of uncertainty and a lack of common understanding, shows several promising results: VDAM enables the development of an unambiguous and unbiased view on value creation. Additionally, it allows for several applications leading to a more informed decision towards the implementation of new business models.

  18. Polynomial Chaos Expansion Approach to Interest Rate Models

    Directory of Open Access Journals (Sweden)

    Luca Di Persio

    2015-01-01

    Full Text Available The Polynomial Chaos Expansion (PCE) technique allows us to recover a finite second-order random variable exploiting suitable linear combinations of orthogonal polynomials which are functions of a given stochastic quantity ξ, hence acting as a kind of random basis. The PCE methodology has been developed as a mathematically rigorous Uncertainty Quantification (UQ) method which aims at providing reliable numerical estimates for some uncertain physical quantities defining the dynamics of certain engineering models and their related simulations. In the present paper, we use the PCE approach in order to analyze some equity and interest rate models. In particular, we take into consideration those models which are based on, for example, the Geometric Brownian Motion, the Vasicek model, and the CIR model. We present theoretical as well as related concrete numerical approximation results considering, without loss of generality, the one-dimensional case. We also provide both an efficiency study and an accuracy study of our approach by comparing its outputs with the ones obtained adopting the Monte Carlo approach, both in its standard and its enhanced version.
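
    The following sketch applies a Hermite PCE, in the spirit described above, to the terminal value of a geometric Brownian motion and cross-checks the mean and variance against plain Monte Carlo; the expansion order and market parameters are arbitrary illustrative choices, not the paper's.

```python
# Hermite polynomial chaos for a lognormal quantity (GBM terminal value):
# coefficients c_n = E[f(xi) He_n(xi)] / n! via Gauss-Hermite quadrature.
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss, hermeval

S0, mu, sigma, T = 1.0, 0.05, 0.2, 1.0
def f(xi):
    return S0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * xi)

nodes, weights = hermegauss(40)      # quadrature for weight exp(-x^2/2)
norm = np.sqrt(2.0 * np.pi)          # so E[g] = (weights @ g(nodes)) / norm

N = 6                                # truncation order
coeffs = [(weights @ (f(nodes) * hermeval(nodes, np.eye(N + 1)[n])))
          / (norm * factorial(n)) for n in range(N + 1)]

pce_mean = coeffs[0]
pce_var = sum(factorial(n) * coeffs[n]**2 for n in range(1, N + 1))
xi = np.random.default_rng(1).standard_normal(200_000)
print("mean: PCE", pce_mean, "| MC", f(xi).mean(), "| exact", S0 * np.exp(mu * T))
print("var : PCE", pce_var, "| MC", f(xi).var())
```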

  19. Popularity Modeling for Mobile Apps: A Sequential Approach.

    Science.gov (United States)

    Zhu, Hengshu; Liu, Chuanren; Ge, Yong; Xiong, Hui; Chen, Enhong

    2015-07-01

    The popularity information in App stores, such as chart rankings, user ratings, and user reviews, provides an unprecedented opportunity to understand user experiences with mobile Apps, learn the process of adoption of mobile Apps, and thus enables better mobile App services. While the importance of popularity information is well recognized in the literature, the use of the popularity information for mobile App services is still fragmented and under-explored. To this end, in this paper, we propose a sequential approach based on hidden Markov model (HMM) for modeling the popularity information of mobile Apps toward mobile App services. Specifically, we first propose a popularity based HMM (PHMM) to model the sequences of the heterogeneous popularity observations of mobile Apps. Then, we introduce a bipartite based method to precluster the popularity observations. This can help to learn the parameters and initial values of the PHMM efficiently. Furthermore, we demonstrate that the PHMM is a general model and can be applicable for various mobile App services, such as trend based App recommendation, rating and review spam detection, and ranking fraud detection. Finally, we validate our approach on two real-world data sets collected from the Apple Appstore. Experimental results clearly validate both the effectiveness and efficiency of the proposed popularity modeling approach.
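
    Underlying any such HMM-based model is the forward pass that scores an observation sequence; the sketch below implements it for a toy three-state chain over a coarse popularity alphabet. The matrices are invented stand-ins, not the paper's PHMM.

```python
# Scaled forward algorithm: log-likelihood of a discrete observation
# sequence under a hidden Markov model.
import numpy as np

A = np.array([[0.80, 0.15, 0.05],    # state transitions (e.g. rising /
              [0.10, 0.80, 0.10],    # stable / falling popularity regimes)
              [0.05, 0.15, 0.80]])
B = np.array([[0.7, 0.2, 0.1],       # P(symbol | state); symbols could be
              [0.2, 0.6, 0.2],       # coarse observations such as chart
              [0.1, 0.2, 0.7]])      # rank up / flat / down
pi = np.array([1/3, 1/3, 1/3])

obs = [0, 0, 1, 1, 2, 2, 2, 1]       # toy observation sequence

alpha = pi * B[:, obs[0]]
c = alpha.sum(); alpha /= c          # scale to avoid underflow
loglik = np.log(c)
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]    # predict, then weight by emission
    c = alpha.sum(); alpha /= c
    loglik += np.log(c)
print("log-likelihood of sequence:", loglik)
```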

  20. A Conditional Approach to Panel Data Models with Common Shocks

    Directory of Open Access Journals (Sweden)

    Giovanni Forchini

    2016-01-01

    Full Text Available This paper studies the effects of common shocks on the OLS estimators of the slope parameters in linear panel data models. The shocks are assumed to affect both the errors and some of the explanatory variables. In contrast to existing approaches, which rely on using results on martingale difference sequences, our method relies on conditional strong laws of large numbers and conditional central limit theorems for conditionally-heterogeneous random variables.

  1. A systemic approach for modeling biological evolution using Parallel DEVS.

    Science.gov (United States)

    Heredia, Daniel; Sanz, Victorino; Urquia, Alfonso; Sandín, Máximo

    2015-08-01

    A new model for studying the evolution of living organisms is proposed in this manuscript. The proposed model is based on a non-neodarwinian systemic approach. The model is focused on considering several controversies and open discussions about modern evolutionary biology. Additionally, a simplification of the proposed model, named EvoDEVS, has been mathematically described using the Parallel DEVS formalism and implemented as a computer program using the DEVSLib Modelica library. EvoDEVS serves as an experimental platform to study different conditions and scenarios by means of computer simulations. Two preliminary case studies are presented to illustrate the behavior of the model and validate its results. EvoDEVS is freely available at http://www.euclides.dia.uned.es.

  2. A validated approach for modeling collapse of steel structures

    Science.gov (United States)

    Saykin, Vitaliy Victorovich

    A civil engineering structure is faced with many hazardous conditions such as blasts, earthquakes, hurricanes, tornadoes, floods, and fires during its lifetime. Even though structures are designed for credible events that can happen during a lifetime of the structure, extreme events do happen and cause catastrophic failures. Understanding the causes and effects of structural collapse is now at the core of critical areas of national need. One factor that makes studying structural collapse difficult is the lack of full-scale structural collapse experimental test results against which researchers could validate their proposed collapse modeling approaches. The goal of this work is the creation of an element deletion strategy based on fracture models for use in validated prediction of collapse of steel structures. The current work reviews the state of the art of finite element deletion strategies for use in collapse modeling of structures. It is shown that current approaches to element deletion in collapse modeling do not take into account stress triaxiality in vulnerable areas of the structure, which is important for proper fracture and element deletion modeling. The report then reviews triaxiality and its role in fracture prediction. It is shown that fracture in ductile materials is a function of triaxiality. It is also shown that, depending on the triaxiality range, different fracture mechanisms are active and should be accounted for. An approach using semi-empirical fracture models expressed as functions of triaxiality is employed. The models to determine fracture initiation, softening and subsequent finite element deletion are outlined. This procedure allows for stress-displacement softening at an integration point of a finite element in order to subsequently remove the element. This approach avoids abrupt changes in the stress that would create dynamic instabilities, thus making the results more reliable and accurate. The calibration and validation of these models are
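
    The bookkeeping behind triaxiality-aware element deletion can be sketched as below; the exponential fracture locus (a generic Johnson-Cook-like form) and its constants are placeholders chosen for illustration, not the dissertation's calibrated models.

```python
# Triaxiality-driven damage accumulation at one integration point, with
# element deletion once the damage indicator reaches 1.
import numpy as np

d1, d2, d3 = 0.05, 0.8, 1.5          # illustrative fracture-locus constants

def fracture_strain(eta):
    """Fracture strain as a decreasing function of stress triaxiality."""
    return d1 + d2 * np.exp(-d3 * eta)

# Loading history: (triaxiality, plastic strain increment) per step --
# uniaxial-tension-like loading followed by a higher-triaxiality phase.
history = [(0.33, 0.02)] * 10 + [(0.8, 0.02)] * 20

D = 0.0
for step, (eta, dep) in enumerate(history, 1):
    D += dep / fracture_strain(eta)   # linear damage accumulation
    if D >= 1.0:
        print(f"element deleted at step {step} (D = {D:.2f})")
        break
else:
    print(f"element survives the history, D = {D:.2f}")
```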

  4. Geospatial Modelling Approach for 3d Urban Densification Developments

    Science.gov (United States)

    Koziatek, O.; Dragićević, S.; Li, S.

    2016-06-01

    With growing populations, economic pressures, and the need for sustainable practices, many urban regions are rapidly densifying developments in the vertical built dimension with mid- and high-rise buildings. The location of these buildings can be projected based on key factors that are attractive to urban planners, developers, and potential buyers. Current research in this area includes various modelling approaches, such as cellular automata and agent-based modelling, but the results are mostly linked to raster grids as the smallest spatial units that operate in two spatial dimensions. Therefore, the objective of this research is to develop a geospatial model that operates on irregular spatial tessellations to model mid- and high-rise buildings in three spatial dimensions (3D). The proposed model is based on the integration of GIS, fuzzy multi-criteria evaluation (MCE), and 3D GIS-based procedural modelling. Part of the City of Surrey, within the Metro Vancouver Region, Canada, has been used to present the simulations of the generated 3D building objects. The proposed 3D modelling approach was developed using ESRI's CityEngine software and the Computer Generated Architecture (CGA) language.
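
    The fuzzy MCE step at the heart of such a model can be illustrated as follows; the criteria, break points, and weights are invented for the sketch and are not those used for the Surrey case study.

```python
# Core of a fuzzy multi-criteria evaluation (MCE): rescale criteria to
# fuzzy [0, 1] memberships and combine them with weights into a per-cell
# suitability score for mid-/high-rise development.
import numpy as np

rng = np.random.default_rng(0)
n_cells = 5
dist_transit = rng.uniform(0, 2000, n_cells)     # m to nearest transit stop
land_value   = rng.uniform(100, 900, n_cells)    # $/m^2
slope        = rng.uniform(0, 15, n_cells)       # terrain slope (degrees)

def linear_membership(x, lo, hi, decreasing=False):
    """Fuzzy membership rising (or falling) linearly between lo and hi."""
    m = np.clip((x - lo) / (hi - lo), 0.0, 1.0)
    return 1.0 - m if decreasing else m

scores = (0.5 * linear_membership(dist_transit, 0, 1500, decreasing=True)
          + 0.3 * linear_membership(land_value, 100, 900)
          + 0.2 * linear_membership(slope, 0, 10, decreasing=True))
print("suitability per cell:", np.round(scores, 2))
```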

  5. Results-Based Guidance: A Systems Approach to Student Support Programs.

    Science.gov (United States)

    Johnson, Sharon; Johnson, Clarence D.

    2003-01-01

    Describes results-based guidance, a systems approach to student support programs. Outlines factors that have contributed to the changes in the duties and responsibilities of guidance counselors and the basis of the competency-based approach in guidance counseling. Details elements of the results-based guidance program. (GCP)

  6. Connectivity of channelized reservoirs: a modelling approach

    Energy Technology Data Exchange (ETDEWEB)

    Larue, David K. [ChevronTexaco, Bakersfield, CA (United States)]; Hovadik, Joseph [ChevronTexaco, San Ramon, CA (United States)]

    2006-07-01

    Connectivity represents one of the fundamental properties of a reservoir that directly affects recovery. If a portion of the reservoir is not connected to a well, it cannot be drained. Geobody or sandbody connectivity is defined as the percentage of the reservoir that is connected, and reservoir connectivity is defined as the percentage of the reservoir that is connected to wells. Previous studies have mostly considered mathematical, physical and engineering aspects of connectivity. In the current study, the stratigraphy of connectivity is characterized using simple, 3D geostatistical models. Based on these modelling studies, stratigraphic connectivity is good, usually greater than 90%, if the net:gross ratio, or sand fraction, is greater than about 30%. At net:gross values less than 30%, there is a rapid diminishment of connectivity as a function of net:gross. This behaviour between net:gross and connectivity defines a characteristic 'S-curve', in which the connectivity is high for net:gross values above 30%, then diminishes rapidly and approaches 0. Well configuration factors that can influence reservoir connectivity are well density, well orientation (vertical or horizontal; horizontal parallel to channels or perpendicular) and length of completion zones. Reservoir connectivity as a function of net:gross can be improved by several factors: presence of overbank sandy facies, deposition of channels in a channel belt, deposition of channels with high width/thickness ratios, and deposition of channels during variable floodplain aggradation rates. Connectivity can be reduced substantially in two-dimensional reservoirs, in map view or in cross-section, by volume support effects and by stratigraphic heterogeneities. It is well known that in two dimensions, the cascade zone for the 'S-curve' of net:gross plotted against connectivity occurs at about 60% net:gross. Generalizing this knowledge, any time that a reservoir can be regarded as
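
    The 'S-curve' behaviour is easy to reproduce numerically. The sketch below labels connected geobodies in uncorrelated random 3D grids (a crude stand-in for the paper's geostatistical channel models) and tracks the largest geobody's share of the sand as net:gross increases; grid size and net:gross values are arbitrary.

```python
# Connectivity vs. net:gross: label face-connected sand geobodies in random
# binary grids and report the largest geobody's fraction of the total sand.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(42)
shape = (40, 40, 40)

for ng in [0.15, 0.25, 0.30, 0.40, 0.60]:
    sand = rng.random(shape) < ng              # binary sand/shale grid
    labels, nbodies = ndimage.label(sand)      # face-connected components
    sizes = np.bincount(labels.ravel())[1:]    # skip background label 0
    frac = sizes.max() / sand.sum() if nbodies else 0.0
    print(f"net:gross {ng:.2f} -> largest geobody holds {frac:5.1%} of sand")
```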

  7. Drifting model approach to modeling based on weighted support vector machines

    Institute of Scientific and Technical Information of China (English)

    冯瑞; 宋春林; 邵惠鹤

    2004-01-01

    This paper proposes a novel drifting modeling (DM) method. Briefly, we first employ an improved SVMs algorithm named weighted support vector machines (W_SVMs), which is suitable for local learning, and then the DM method using this algorithm is proposed. By applying the proposed modeling method to a Fluidized Catalytic Cracking Unit (FCCU), the simulation results show that the performance of the proposed approach is superior to that of a global modeling method based on standard SVMs.

  8. ISM Approach to Model Offshore Outsourcing Risks

    Directory of Open Access Journals (Sweden)

    Sunand Kumar

    2014-07-01

    Full Text Available In an effort to achieve a competitive advantage via cost reductions and improved market responsiveness, organizations are increasingly employing offshore outsourcing as a major component of their supply chain strategies. But as is evident from the literature, a number of risks, such as political risk, risk due to cultural differences, compliance and regulatory risk, opportunistic risk and organizational structural risk, adversely affect the performance of offshore outsourcing in a supply chain network. This also leads to dissatisfaction among different stakeholders. The main objective of this paper is to identify and understand the mutual interaction among the various risks which affect the performance of offshore outsourcing. To this effect, the authors have identified various risks through an extant review of the literature. From this information, an integrated model using interpretive structural modelling (ISM) for risks affecting offshore outsourcing is developed and the structural relationships between these risks are modeled. Further, MICMAC analysis is done to analyze the driving power and dependence of the risks, which shall be helpful to managers to identify and classify important criteria and to reveal the direct and indirect effects of each criterion on offshore outsourcing. Results show that political risk and risk due to cultural differences act as strong drivers.
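
    The ISM/MICMAC mechanics described above reduce to a transitive closure plus row and column sums; the 5x5 influence relation below is invented solely to show the computation, not elicited from the paper's experts.

```python
# ISM core: reachability matrix by transitive closure of elicited influence
# relations; MICMAC reads driving power (row sums) and dependence (col sums).
import numpy as np

risks = ["political", "cultural", "compliance", "opportunistic", "structural"]
A = np.array([[1, 1, 1, 1, 1],     # A[i, j] = 1 if risk i influences risk j
              [0, 1, 0, 1, 1],     # (diagonal set to 1 by ISM convention)
              [0, 0, 1, 1, 0],
              [0, 0, 0, 1, 0],
              [0, 0, 0, 1, 1]], dtype=bool)

# Warshall transitive closure -> final reachability matrix
R = A.copy()
for k in range(len(risks)):
    R |= np.outer(R[:, k], R[k, :])   # R[i,j] |= R[i,k] & R[k,j]

driving = R.sum(axis=1)               # how many risks each one reaches
dependence = R.sum(axis=0)            # by how many risks each one is reached
for name, d, p in zip(risks, driving, dependence):
    print(f"{name:13s} driving power {d}, dependence {p}")
```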

  9. Genetic Algorithm Approaches to Prebiotic Chemistry Modeling

    Science.gov (United States)

    Lohn, Jason; Colombano, Silvano

    1997-01-01

    We model an artificial chemistry comprised of interacting polymers by specifying two initial conditions: a distribution of polymers and a fixed set of reversible catalytic reactions. A genetic algorithm is used to find a set of reactions that exhibit a desired dynamical behavior. Such a technique is useful because it allows an investigator to determine whether a specific pattern of dynamics can be produced, and if it can, the reaction network found can be then analyzed. We present our results in the context of studying simplified chemical dynamics in theorized protocells - hypothesized precursors of the first living organisms. Our results show that given a small sample of plausible protocell reaction dynamics, catalytic reaction sets can be found. We present cases where this is not possible and also analyze the evolved reaction sets.

  10. A global sensitivity analysis approach for morphogenesis models

    KAUST Repository

    Boas, Sonja E. M.

    2015-11-21

    Background Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such ‘black-box’ models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. Results To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. Conclusions We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operative mechanisms in morphogenesis. The workflow is applicable to all ‘black-box’ models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
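
    A bare-bones version of the variance-based step is shown below: a Saltelli-style pick-freeze estimator of first-order Sobol indices, applied here to the Ishigami test function as a cheap stand-in for a morphogenesis model. The estimator itself is generic and works for any 'black-box' function.

```python
# First-order Sobol indices via the Saltelli pick-freeze scheme.
import numpy as np

def ishigami(X, a=7.0, b=0.1):
    x1, x2, x3 = X.T
    return np.sin(x1) + a * np.sin(x2)**2 + b * x3**4 * np.sin(x1)

rng = np.random.default_rng(0)
n, k = 50_000, 3
A = rng.uniform(-np.pi, np.pi, (n, k))   # two independent sample blocks
B = rng.uniform(-np.pi, np.pi, (n, k))

fA, fB = ishigami(A), ishigami(B)
var = np.var(np.r_[fA, fB])
for i in range(k):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                  # swap in column i ("pick-freeze")
    Si = np.mean(fB * (ishigami(ABi) - fA)) / var
    print(f"S_{i+1} ~ {Si:.3f}")         # analytic: 0.314, 0.442, 0.000
```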

  11. Some results regarding the comparison of the Earth's atmospheric models

    Directory of Open Access Journals (Sweden)

    Šegan S.

    2005-01-01

    Full Text Available In this paper we examine air densities derived from our realization of aeronomic atmosphere models based on accelerometer measurements from satellites in low Earth orbit (LEO). Using the adapted algorithms we derive comparison parameters. The first results concerning the adjustment of the aeronomic models to the total-density model are given.

  12. Noether symmetry approach in f(R)-tachyon model

    Energy Technology Data Exchange (ETDEWEB)

    Jamil, Mubasher, E-mail: mjamil@camp.nust.edu.pk [Center for Advanced Mathematics and Physics (CAMP), National University of Sciences and Technology (NUST), H-12, Islamabad (Pakistan)]; Mahomed, F.M., E-mail: Fazal.Mahomed@wits.ac.za [Centre for Differential Equations, Continuum Mechanics and Applications, School of Computational and Applied Mathematics, University of the Witwatersrand, Wits 2050 (South Africa)]; Momeni, D., E-mail: d.momeni@yahoo.com [Department of Physics, Faculty of Sciences, Tarbiat Moa'llem University, Tehran (Iran, Islamic Republic of)]

    2011-08-26

    In this Letter, by utilizing the Noether symmetry approach in cosmology, we attempt to find the tachyon potential via the application of this kind of symmetry to a flat Friedmann-Robertson-Walker (FRW) metric. We reduce the system of equations to simpler ones and obtain the general class of the tachyon's potential function and f(R) functions. We have found that the Noether symmetric model results in a power-law f(R) and an inverse fourth-power potential for the tachyonic field. Further, we investigate numerically the cosmological evolution of our model and show explicitly the behavior of the equation of state crossing the cosmological constant boundary.

  13. Complete Complimentary Results Report of the MARF's NLP Approach to the DEFT 2010 Competition

    CERN Document Server

    Mokhov, Serguei A

    2010-01-01

    This paper complements the main DEFT'10 article describing the MARF approach to the DEFT'10 NLP competition. It presents the complete result sets of all the conducted experiments and their settings in tables, highlighting the approach and the best results, but also showing the worse and worst results together with their analysis. This is the first iteration of the initial release of the results.

  14. Social Network Analyses and Nutritional Behavior: An Integrated Modeling Approach

    Directory of Open Access Journals (Sweden)

    Alistair McNair Senior

    2016-01-01

    Full Text Available Animals have evolved complex foraging strategies to obtain a nutritionally balanced diet and associated fitness benefits. Recent advances in nutrition research, combining state-space models of nutritional geometry with agent-based models of systems biology, show how nutrient targeted foraging behavior can also influence animal social interactions, ultimately affecting collective dynamics and group structures. Here we demonstrate how social network analyses can be integrated into such a modeling framework and provide a tangible and practical analytical tool to compare experimental results with theory. We illustrate our approach by examining the case of nutritionally mediated dominance hierarchies. First we show how nutritionally explicit agent-based models that simulate the emergence of dominance hierarchies can be used to generate social networks. Importantly the structural properties of our simulated networks bear similarities to dominance networks of real animals (where conflicts are not always directly related to nutrition). Finally, we demonstrate how metrics from social network analyses can be used to predict the fitness of agents in these simulated competitive environments. Our results highlight the potential importance of nutritional mechanisms in shaping dominance interactions in a wide range of social and ecological contexts. Nutrition likely influences social interaction in many species, and yet a theoretical framework for exploring these effects is currently lacking. Combining social network analyses with computational models from nutritional ecology may bridge this divide, representing a pragmatic approach for generating theoretical predictions for nutritional experiments.

  15. The Effect of Bathymetric Filtering on Nearshore Process Model Results

    Science.gov (United States)

    2009-01-01

    Plant, Nathaniel G.; Edwards, Kacey L.; Kaihatu, James M.; Veeramony, Jayaram; Hsu, Yuan-Huang L.; Holland, K. Todd. Published by Elsevier B.V. [Only fragments of this record's report documentation page were recovered; the abstract notes that nearshore process models are capable of predicting nearshore conditions and of supporting data assimilation efforts that require this information.]

  16. Innovation Networks: New Approaches in Modelling and Analyzing

    CERN Document Server

    Pyka, Andreas

    2009-01-01

    The science of graphs and networks has become by now a well-established tool for modelling and analyzing a variety of systems with a large number of interacting components. Starting from the physical sciences, applications have spread rapidly to the natural and social sciences, as well as to economics, and are now further extended, in this volume, to the concept of innovations, viewed broadly. In an abstract, systems-theoretical approach, innovation can be understood as a critical event which destabilizes the current state of the system, and results in a new process of self-organization leading to a new stable state. The contributions to this anthology address different aspects of the relationship between innovation and networks. The various chapters incorporate approaches in evolutionary economics, agent-based modeling, social network analysis and econophysics and explore the epistemic tension between insights into economics and society-related processes, and the insights into new forms of complex dynamics.

  17. A modal approach to modeling spatially distributed vibration energy dissipation.

    Energy Technology Data Exchange (ETDEWEB)

    Segalman, Daniel Joseph

    2010-08-01

    The nonlinear behavior of mechanical joints is a confounding element in modeling the dynamic response of structures. Though there has been some progress in recent years in modeling individual joints, modeling the full structure with myriad frictional interfaces has remained an obstinate challenge. A strategy is suggested for structural dynamics modeling that can account for the combined effect of interface friction distributed spatially about the structure. This approach accommodates the following observations: (1) At small to modest amplitudes, the nonlinearity of jointed structures is manifest primarily in the energy dissipation - visible as vibration damping; (2) Correspondingly, measured vibration modes do not change significantly with amplitude; and (3) Significant coupling among the modes does not appear to result at modest amplitudes. The mathematical approach presented here postulates the preservation of linear modes and invests all the nonlinearity in the evolution of the modal coordinates. The constitutive form selected is one that works well in modeling spatially discrete joints. When compared against a mathematical truth model, the distributed dissipation approximation performs well.

  18. Validation of models with constant bias: an applied approach

    Directory of Open Access Journals (Sweden)

    Salvador Medina-Peralta

    2014-06-01

    Full Text Available Objective. This paper presents extensions to the statistical validation method based on the procedure of Freese for the case where a model shows constant bias (CB) in its predictions, and illustrates the method with data from a new mechanistic model that predicts weight gain in cattle. Materials and methods. The extensions were the hypothesis tests and maximum anticipated error for the alternative approach, and the confidence interval for a quantile of the distribution of errors. Results. The model evaluated showed CB; once the CB is removed, with a confidence level of 95% the magnitude of the error does not exceed 0.575 kg. Therefore, the validated model can be used to predict the daily weight gain of cattle, although it will require an adjustment in its structure based on the presence of CB to increase the accuracy of its forecasts. Conclusions. The confidence interval for the 1-α quantile of the distribution of errors after correcting the constant bias allows determining the upper limit for the magnitude of the prediction error and using it to evaluate the evolution of the model in forecasting the system. The confidence interval approach to validating a model is more informative than hypothesis tests for the same purpose.
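
    A sketch of the constant-bias correction idea follows, under a normal-theory simplification and with simulated data; it mirrors the structure of the procedure (estimate CB, remove it, bound the remaining error at 95%) rather than Freese's exact formulas.

```python
# Constant-bias (CB) correction in model validation: remove the mean error,
# then bound the magnitude of the remaining error at a chosen confidence
# level.  Data are simulated; the 0.15 kg bias is an invented example.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
observed = rng.normal(1.0, 0.25, 60)                    # daily gain (kg)
predicted = observed + 0.15 + rng.normal(0, 0.2, 60)    # model with CB

errors = predicted - observed
cb = errors.mean()
centered = errors - cb                                  # CB-corrected errors

alpha = 0.05
s = centered.std(ddof=1)
bound = stats.norm.ppf(1 - alpha / 2) * s               # bound on |error|
print(f"estimated CB = {cb:.3f} kg; after correction |error| <= {bound:.3f} kg "
      f"with ~{100 * (1 - alpha):.0f}% confidence")
```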

  19. Approaches to learning as predictors of academic achievement: Results from a large scale, multi-level analysis

    DEFF Research Database (Denmark)

    Herrmann, Kim Jesper

    2017-01-01

    The relationships between university students’ academic achievement and their approaches to learning and studying continuously attract scholarly attention. We report the results of an analysis in which multilevel linear modelling was used to analyse data from 3,626 Danish university students. Controlling for the effects of age, gender, and progression, we found that the students’ end-of-semester grade point averages were related negatively to a surface approach and positively to organised effort. Interestingly, the effect of the surface approach on academic achievement varied across programmes.

  20. Parameter identification and global sensitivity analysis of Xinanjiang model using meta-modeling approach

    Directory of Open Access Journals (Sweden)

    Xiao-meng SONG

    2013-01-01

    Full Text Available Parameter identification, model calibration, and uncertainty quantification are important steps in the model-building process, and are necessary for obtaining credible results and valuable information. Sensitivity analysis of a hydrological model is a key step in model uncertainty quantification, which can identify the dominant parameters, reduce the model calibration uncertainty, and enhance the model optimization efficiency. There are, however, some shortcomings in classical approaches, including the long run times and high computational cost required to quantitatively assess the sensitivity of a multiple-parameter hydrological model. For this reason, a two-step statistical evaluation framework using global techniques is presented. It is based on (1) a screening method (Morris) for qualitative ranking of parameters, and (2) a variance-based method integrated with a meta-model for quantitative sensitivity analysis, i.e., the Sobol method integrated with the response surface model (RSMSobol). First, the Morris screening method was used to qualitatively identify the parameters’ sensitivity, and then ten parameters were selected to quantify the sensitivity indices. Subsequently, the RSMSobol method was used to quantify the sensitivity, i.e., the first-order and total sensitivity indices based on the response surface model (RSM) were calculated. The RSMSobol method can not only quantify the sensitivity, but also reduce the computational cost, with good accuracy compared to the classical approaches. This approach will be effective and reliable in the global sensitivity analysis of a complex large-scale distributed hydrological model.
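
    The qualitative screening step can be sketched with a radial, one-at-a-time variant of Morris elementary effects; the five-input test function below is an invented stand-in for the Xinanjiang model, and the trajectory count and step size are arbitrary.

```python
# Morris-style elementary-effects screening on the unit hypercube:
# mu* ranks overall influence, sigma flags nonlinearity/interactions.
import numpy as np

def model(x):                        # cheap stand-in for a hydrological model
    return x[0]**2 + 2 * x[1] + 0.5 * x[2] * x[3] + 0.01 * x[4]

k, r, delta = 5, 30, 0.25            # inputs, replicates, step size
rng = np.random.default_rng(3)
EE = np.zeros((r, k))
for t in range(r):
    x = rng.uniform(0, 1 - delta, k)         # base point with room to step
    fx = model(x)
    for i in rng.permutation(k):             # perturb one input at a time
        xp = x.copy(); xp[i] += delta
        EE[t, i] = (model(xp) - fx) / delta  # elementary effect of input i

mu_star = np.abs(EE).mean(axis=0)
sigma = EE.std(axis=0)
for i in range(k):
    print(f"x{i+1}: mu* = {mu_star[i]:.3f}, sigma = {sigma[i]:.3f}")
```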

  1. Development of a decision support system for residential construction using panellised walls: approach and preliminary results.

    Science.gov (United States)

    Nussbaum, Maury A; Shewchuk, John P; Kim, Sunwook; Seol, Hyang; Guo, Cheng

    2009-01-01

    There is a high prevalence of work-related musculoskeletal disorders (WMSDs) among residential construction workers, yet control in this industry can be difficult for a number of reasons. A decision support system (DSS) is described here to allow early assessment of both ergonomic and productivity concerns, specifically by designers. Construction using prefabricated walls (panels) is the focus of current DSS development and is based conceptually on an existing 'Safety in Construction Design' model. A stepwise description of the development process is provided, including input from end users, taxonomy development and task analysis, construction worker input, detailed laboratory-based simulations and modelling/solution approaches and implementation. Preliminary results are presented for several steps. These results suggest that construction activities using panels can be efficiently represented, that some of these activities involve exposure to high levels of WMSD risk and that several assumptions are required to allow for ease of mathematical and computational implementation of the DSS. Successful development of such tools, which allow for proactive control of exposures, is argued as having substantial potential benefit.

  2. Agribusiness model approach to territorial food development

    Directory of Open Access Journals (Sweden)

    Murcia Hector Horacio

    2011-04-01

    Full Text Available

    Several research efforts within the academic program of Agricultural Business Management at the University De La Salle (Bogota D.C.) have been coordinated towards the design and implementation of a sustainable agribusiness model applied to food development, with territorial projection. Rural development is considered as a process that aims to improve the current capacity and potential of the inhabitants of the sector, which refers not only to production levels and productivity of agricultural items. It takes into account the guidelines of the United Nations “Millennium Development Goals” and the concept of sustainable food and agriculture development, including food security and nutrition in an integrated interdisciplinary context, with a holistic and systemic dimension. The analysis is specified by a model with an emphasis on sustainable agribusiness production chains related to agricultural food items in a specific region. This model was correlated with farm (technical objectives), family (social purposes) and community (collective orientations) projects. Within this dimension, food development concepts and the methodologies of Participatory Action Research (PAR) are considered. Finally, it addresses the need to link the results to low-income communities, within the concepts of the “new rurality”.

  3. Systematic approach to verification and validation: High explosive burn models

    Energy Technology Data Exchange (ETDEWEB)

    Menikoff, Ralph [Los Alamos National Laboratory]; Scovel, Christina A. [Los Alamos National Laboratory]

    2012-04-16

    Most material models used in numerical simulations are based on heuristics and empirically calibrated to experimental data. For a specific model, key questions are determining its domain of applicability and assessing its relative merits compared to other models. Answering these questions should be a part of model verification and validation (V and V). Here, we focus on V and V of high explosive models. Typically, model developers implemented their model in their own hydro code and use different sets of experiments to calibrate model parameters. Rarely can one find in the literature simulation results for different models of the same experiment. Consequently, it is difficult to assess objectively the relative merits of different models. This situation results in part from the fact that experimental data is scattered through the literature (articles in journals and conference proceedings) and that the printed literature does not allow the reader to obtain data from a figure in electronic form needed to make detailed comparisons among experiments and simulations. In addition, it is very time consuming to set up and run simulations to compare different models over sufficiently many experiments to cover the range of phenomena of interest. The first difficulty could be overcome if the research community were to support an online web based database. The second difficulty can be greatly reduced by automating procedures to set up and run simulations of similar types of experiments. Moreover, automated testing would be greatly facilitated if the data files obtained from a database were in a standard format that contained key experimental parameters as meta-data in a header to the data file. To illustrate our approach to V and V, we have developed a high explosive database (HED) at LANL. It now contains a large number of shock initiation experiments. Utilizing the header information in a data file from HED, we have written scripts to generate an input file for a hydro code

  4. A reservoir simulation approach for modeling of naturally fractured reservoirs

    Directory of Open Access Journals (Sweden)

    H. Mohammadi

    2012-12-01

    Full Text Available In this investigation, the Warren and Root model proposed for the simulation of naturally fractured reservoirs was improved. A reservoir simulation approach was used to develop a 2D model of a synthetic oil reservoir. Main rock properties of each gridblock were defined for two different types of gridblocks called matrix and fracture gridblocks. These two gridblock types differed in porosity and permeability values, which were higher for fracture gridblocks compared to the matrix gridblocks. This model was solved using the implicit finite difference method. Results showed an improvement in the Warren and Root model especially in region 2 of the semilog plot of pressure drop versus time, which indicated a linear transition zone with no inflection point as predicted by other investigators. Effects of fracture spacing, fracture permeability, fracture porosity, matrix permeability and matrix porosity on the behavior of a typical naturally fractured reservoir were also presented.

  5. A Model Independent Approach to (p)Reheating

    CERN Document Server

    Özsoy, Ogan; Sinha, Kuver; Watson, Scott

    2015-01-01

    In this note we propose a model independent framework for inflationary (p)reheating. Our approach is analogous to the Effective Field Theory of Inflation; however, here the inflaton oscillations provide an additional source of (discrete) symmetry breaking. Using the Goldstone field that non-linearly realizes time diffeomorphism invariance we construct a model independent action for both the inflaton and reheating sectors. Utilizing the hierarchy of scales present during the reheating process we are able to recover known results in the literature in a simpler fashion, including the presence of oscillations in the primordial power spectrum. We also construct a class of models where the shift symmetry of the inflaton is preserved during reheating, which helps alleviate past criticisms of (p)reheating in models of Natural Inflation. Extensions of our framework suggest the possibility of analytically investigating non-linear effects (such as rescattering and back-reaction) during thermalization without resorting t...

  6. Oscillation threshold of a clarinet model: a numerical continuation approach

    CERN Document Server

    Karkar, Sami; Cochelin, Bruno; 10.1121/1.3651231

    2012-01-01

    This paper focuses on the oscillation threshold of single reed instruments. Several characteristics such as blowing pressure at threshold, regime selection, and playing frequency are known to change radically when taking into account the reed dynamics and the flow induced by the reed motion. Previous works have shown interesting tendencies, using analytical expressions with simplified models. In the present study, a more elaborated physical model is considered. The influence of several parameters, depending on the reed properties, the design of the instrument or the control operated by the player, are studied. Previous results on the influence of the reed resonance frequency are confirmed. New results concerning the simultaneous influence of two model parameters on oscillation threshold, regime selection and playing frequency are presented and discussed. The authors use a numerical continuation approach. Numerical continuation consists in following a given solution of a set of equations when a parameter varie...

  7. An integrated modelling approach to estimate urban traffic emissions

    Science.gov (United States)

    Misra, Aarshabh; Roorda, Matthew J.; MacLean, Heather L.

    2013-07-01

    An integrated modelling approach is adopted to estimate microscale urban traffic emissions. The modelling framework consists of a traffic microsimulation model developed in PARAMICS, a microscopic emissions model (Comprehensive Modal Emissions Model), and two dispersion models, AERMOD and the Quick Urban and Industrial Complex (QUIC). This framework is applied to a traffic network in downtown Toronto, Canada to evaluate summertime morning peak traffic emissions of carbon monoxide (CO) and nitrogen oxides (NOx) during five weekdays at a traffic intersection. The model-predicted results are validated against sensor observations with 100% of the AERMOD modelled CO concentrations and 97.5% of the QUIC modelled NOx concentrations within a factor of two of the corresponding observed concentrations. Availability of local estimates of ambient concentration is useful for accurate comparisons of predicted concentrations with observed concentrations. Predicted and sensor measured concentrations are significantly lower than the hourly threshold Maximum Acceptable Levels for CO (31 ppm, ~90 times lower) and NO2 (0.4 mg/m3, ~12 times lower), within the National Ambient Air Quality Objectives established by Environment Canada.
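
    For flavor, here is the dispersion step reduced to a single steady-state Gaussian plume; AERMOD and QUIC are far more sophisticated, and every constant below (emission rate, wind speed, sigma growth rates) is an invented placeholder.

```python
# Steady-state Gaussian plume with ground reflection: a minimal stand-in
# for the dispersion-model step of such a framework.
import numpy as np

Q = 0.5        # emission rate (g/s), e.g. CO from a road segment
u = 3.0        # wind speed (m/s)
H = 1.0        # effective source height (m)

def concentration(x, y, z, sy_coef=0.08, sz_coef=0.06):
    """Concentration (g/m^3); sigma_y and sigma_z grow linearly with x."""
    sy, sz = sy_coef * x, sz_coef * x
    return (Q / (2 * np.pi * u * sy * sz)
            * np.exp(-y**2 / (2 * sy**2))
            * (np.exp(-(z - H)**2 / (2 * sz**2))
               + np.exp(-(z + H)**2 / (2 * sz**2))))   # ground reflection

x = np.array([50.0, 100.0, 200.0])      # receptors downwind (m)
print("concentration at receptors (g/m^3):", concentration(x, y=0.0, z=1.5))
```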

  8. A Bayesian Approach for Structural Learning with Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Cen Li

    2002-01-01

    Full Text Available Hidden Markov Models (HMM) have proved to be a successful modeling paradigm for dynamic and spatial processes in many domains, such as speech recognition, genomics, and general sequence alignment. Typically, in these applications, the model structures are predefined by domain experts. Therefore, the HMM learning problem focuses on the learning of the parameter values of the model to fit the given data sequences. However, when one considers other domains, such as economics and physiology, a model structure capturing the system dynamic behavior is not available. In order to successfully apply the HMM methodology in these domains, it is important that a mechanism is available for automatically deriving the model structure from the data. This paper presents a HMM learning procedure that simultaneously learns the model structure and the maximum likelihood parameter values of a HMM from data. The HMM model structures are derived based on the Bayesian model selection methodology. In addition, we introduce a new initialization procedure for HMM parameter value estimation based on the K-means clustering method. Experimental results with artificially generated data show the effectiveness of the approach.
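
    The K-means initialization idea mentioned above can be sketched as follows, using scikit-learn's KMeans on synthetic one-dimensional observations; seeding the start and transition probabilities from the cluster-label sequence is one reasonable variant, not necessarily the paper's exact recipe.

```python
# K-means initialization for HMM parameter estimation: cluster centers seed
# the emission means; label frequencies and bigrams seed pi and A.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
seq = np.r_[rng.normal(0, 1, 100), rng.normal(5, 1, 100), rng.normal(0, 1, 50)]
X = seq.reshape(-1, 1)

K = 2
km = KMeans(n_clusters=K, n_init=10, random_state=0).fit(X)
labels = km.labels_

means0 = km.cluster_centers_.ravel()               # initial emission means
pi0 = np.bincount(labels, minlength=K) / len(labels)
trans0 = np.zeros((K, K))
for a, b in zip(labels[:-1], labels[1:]):          # label bigram counts
    trans0[a, b] += 1
trans0 /= trans0.sum(axis=1, keepdims=True)
print("initial means:", means0)
print("initial start probabilities:", pi0)
print("initial transition matrix:\n", trans0.round(3))
```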

  9. Steel Containment Vessel Model Test: Results and Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Costello, J.F.; Hashimoto, T.; Hessheimer, M.F.; Luk, V.K.

    1999-03-01

    A high pressure test of the steel containment vessel (SCV) model was conducted on December 11-12, 1996 at Sandia National Laboratories, Albuquerque, NM, USA. The test model is a mixed-scaled model (1:10 in geometry and 1:4 in shell thickness) of an improved Mark II boiling water reactor (BWR) containment. A concentric steel contact structure (CS), installed over the SCV model and separated at a nominally uniform distance from it, provided a simplified representation of a reactor shield building in the actual plant. The SCV model and contact structure were instrumented with strain gages and displacement transducers to record the deformation behavior of the SCV model during the high pressure test. This paper summarizes the conduct and the results of the high pressure test and discusses the posttest metallurgical evaluation results on specimens removed from the SCV model.

  10. Comparing large-scale computational approaches to epidemic modeling: Agent-based versus structured metapopulation models

    Directory of Open Access Journals (Sweden)

    Merler Stefano

    2010-06-01

    Full Text Available Abstract Background In recent years large-scale computational models for the realistic simulation of epidemic outbreaks have been used with increased frequency. Methodologies adapt to the scale of interest and range from very detailed agent-based models to spatially-structured metapopulation models. One major issue thus concerns to what extent the geotemporal spreading pattern found by different modeling approaches may differ and depend on the different approximations and assumptions used. Methods We provide for the first time a side-by-side comparison of the results obtained with a stochastic agent-based model and a structured metapopulation stochastic model for the progression of a baseline pandemic event in Italy, a large and geographically heterogeneous European country. The agent-based model is based on the explicit representation of the Italian population through highly detailed data on the socio-demographic structure. The metapopulation simulations use the GLobal Epidemic and Mobility (GLEaM) model, based on high-resolution census data worldwide, and integrating airline travel flow data with short-range human mobility patterns at the global scale. The model also considers age structure data for Italy. GLEaM and the agent-based models are synchronized in their initial conditions by using the same disease parameterization, and by defining the same importation of infected cases from international travels. Results The results obtained show that both models provide epidemic patterns that are in very good agreement at the granularity levels accessible by both approaches, with differences in peak timing on the order of a few days. The relative difference of the epidemic size depends on the basic reproductive ratio, R0, and on the fact that the metapopulation model consistently yields a larger incidence than the agent-based model, as expected due to the differences in the structure of the intra-population contact pattern of the approaches. The age

  11. Well test analysis results interpretation: Combined type curve and pressure derivative approach

    Energy Technology Data Exchange (ETDEWEB)

    Fabbri, P.; Matteotti, G. (Padua Univ. (Italy), Dip. di Geologia, Paleontologia e Geofisica; Padua Univ. (Italy), Ist. di Idraulica)

    In reviewing the theoretical concepts forming the basis for the interpretation of well test analyses, this paper focuses on the 'theoretical model' for the determination of the parameters and variables. It then applies this theory to the combined type curve and pressure derivative interpretation approaches. Finally, the paper illustrates the combined type curve and pressure derivative method for homogeneous and isotropic conditions in a thermal aquifer, in the presence of the skin effect and wellbore storage.

  12. Building Energy Modeling: A Data-Driven Approach

    Science.gov (United States)

    Cui, Can

    Buildings consume nearly 50% of the total energy in the United States, which drives the need to develop high-fidelity models for building energy systems. Extensive methods and techniques have been developed, studied, and applied to building energy simulation and forecasting, while most work has focused on developing dedicated modeling approaches for generic buildings. In this study, an integrated, computationally efficient and high-fidelity building energy modeling framework is proposed, with a focus on developing a generalized modeling approach for various types of buildings. First, a number of data-driven simulation models are reviewed and assessed on various types of computationally expensive simulation problems. Motivated by the conclusion that no model outperforms the others when amortized over diverse problems, a meta-learning based recommendation system for data-driven simulation modeling is proposed. To test the feasibility of the proposed framework on the building energy system, an extended application of the recommendation system for short-term building energy forecasting is deployed on various buildings. Finally, a Kalman filter-based data fusion technique is incorporated into the building recommendation system for on-line energy forecasting. Data fusion enables model calibration to update the state estimation in real time, which filters out the noise and renders more accurate energy forecasts. The framework is composed of two modules: an off-line model recommendation module and an on-line model calibration module. Specifically, the off-line model recommendation module includes 6 widely used data-driven simulation models, which are ranked by the meta-learning recommendation system for off-line energy modeling on a given building scenario. Only a selected set of building physical and operational characteristic features is needed to complete the recommendation task. The on-line calibration module effectively addresses system uncertainties, where data fusion on
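
    The on-line calibration module described above rests on the standard Kalman filter update. A minimal scalar version of that fusion step, assuming (for illustration) an identity observation model:

```python
def kalman_update(x_prior, P_prior, z, R):
    """Fuse a model forecast x_prior (variance P_prior) with a sensor
    reading z (variance R); returns the corrected estimate and variance."""
    K = P_prior / (P_prior + R)            # Kalman gain
    x_post = x_prior + K * (z - x_prior)   # measurement-corrected forecast
    P_post = (1.0 - K) * P_prior           # reduced uncertainty
    return x_post, P_post
```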

  13. Strengthening the Focus on Business Results: The Need for Systems Approaches in Organizational Behavior Management

    Science.gov (United States)

    Hyten, Cloyd

    2009-01-01

    Current Organizational Behavior Management (OBM) research and practice may be characterized as either behavior focused or results focused. These two approaches stem from different origins and have different characteristics. The behavior-focused approach stems from applied behavior analysis (ABA) methods and emphasizes direct observation of and…

  14. The Bayesian approximation error approach for electrical impedance tomography—experimental results

    Science.gov (United States)

    Nissinen, A.; Heikkinen, L. M.; Kaipio, J. P.

    2008-01-01

    Inverse problems can be characterized as problems that tolerate measurement and modelling errors poorly. While the measurement error issue has been widely considered as a solved problem, the modelling errors have remained largely untreated. The approximation and modelling errors can, however, be argued to dominate the measurement errors in most applications. There are several applications in which the temporal and memory requirements dictate that the computational complexity of the forward solver be radically reduced. For example, in process tomography the reconstructions have to be carried out typically in a few tens of milliseconds. Recently, a Bayesian approach for the treatment of approximation and modelling errors for inverse problems has been proposed. This approach has proven to work well in several classes of problems, but the approach has not been verified in any problem with real data. In this paper, we study two different types of modelling errors in the case of electrical impedance tomography: one related to model reduction and one concerning partially unknown geometry. We show that the approach is also feasible in practice and may facilitate the reduction of the computational complexity of the nonlinear EIT problem at least by an order of magnitude.
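
    At its core, the approximation error approach treats the discrepancy between an accurate and a reduced forward model as extra Gaussian noise whose statistics are estimated by sampling the prior. A schematic sketch (function names are placeholders, not the authors' code):

```python
import numpy as np

def approximation_error_stats(fine_model, coarse_model, prior_samples):
    """Estimate the mean and covariance of the modelling error
    e = A_fine(x) - A_coarse(x) over draws from the prior."""
    errors = np.array([fine_model(x) - coarse_model(x) for x in prior_samples])
    return errors.mean(axis=0), np.cov(errors, rowvar=False)

# In the inversion the measurement model becomes
#   y = A_coarse(x) + e + n,   e ~ N(e_mean, e_cov),   n ~ N(0, noise_cov),
# so the cheap model is used with effective noise covariance noise_cov + e_cov.
```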

  15. Results from a new Cocks-Ashby style porosity model

    Science.gov (United States)

    Barton, Nathan

    2017-01-01

    A new porosity evolution model is described, along with preliminary results. The formulation makes use of a Cocks-Ashby style treatment of porosity kinetics that includes rate dependent flow in the mechanics of porosity growth. The porosity model is implemented in a framework that allows for a variety of strength models to be used for the matrix material, including ones with significant changes in rate sensitivity as a function of strain rate. Results of the effect of changing strain rate sensitivity on porosity evolution are shown. The overall constitutive model update involves the coupled solution of a system of nonlinear equations.

  16. CFD Modeling of Wall Steam Condensation: Two-Phase Flow Approach versus Homogeneous Flow Approach

    Directory of Open Access Journals (Sweden)

    S. Mimouni

    2011-01-01

    Full Text Available The present work is focused on the condensation heat transfer that plays a dominant role in many accident scenarios postulated to occur in the containment of nuclear reactors. The study compares a general multiphase approach implemented in NEPTUNE_CFD with a homogeneous model, of widespread use for engineering studies, implemented in Code_Saturne. The model implemented in NEPTUNE_CFD assumes that liquid droplets form along the wall within nucleation sites. Vapor condensation on the droplets makes them grow. Once the droplet diameter reaches a critical value, gravitational forces compensate the surface tension force, and the droplets then slide over the wall and form a liquid film. This approach makes it possible to account simultaneously for the mechanical drift between the droplets and the gas, the heat and mass transfer on droplets in the core of the flow, and the condensation/evaporation phenomena on the walls. As concerns the homogeneous approach, the motion of the liquid film due to gravitational forces is neglected, as is the volume occupied by the liquid. Both condensation models and compressible procedures are validated and compared to experimental data provided by the TOSQAN ISP47 experiment (IRSN, Saclay). Computational results compare favorably with experimental data, particularly for the helium and steam volume fractions.

  17. Results of the Marine Ice Sheet Model Intercomparison Project, MISMIP

    Directory of Open Access Journals (Sweden)

    F. Pattyn

    2012-05-01

    Full Text Available Predictions of marine ice-sheet behaviour require models that are able to robustly simulate grounding line migration. We present results of an intercomparison exercise for marine ice-sheet models. Verification is effected by comparison with approximate analytical solutions for flux across the grounding line using simplified geometrical configurations (no lateral variations, no effects of lateral buttressing). Unique steady state grounding line positions exist for ice sheets on a downward sloping bed, while hysteresis occurs across an overdeepened bed, and stable steady state grounding line positions only occur on the downward-sloping sections. Models based on the shallow ice approximation, which does not resolve extensional stresses, do not reproduce the approximate analytical results unless appropriate parameterizations for ice flux are imposed at the grounding line. For extensional-stress resolving "shelfy stream" models, differences between model results were mainly due to the choice of spatial discretization. Moving grid methods were found to be the most accurate at capturing grounding line evolution, since they track the grounding line explicitly. Adaptive mesh refinement can further improve accuracy, including for fixed grid models, which generally perform poorly at coarse resolution. Fixed grid models with nested grid representations of the grounding line are able to generate accurate steady state positions, but can be inaccurate over transients. Only one full-Stokes model was included in the intercomparison, and consequently the accuracy of shelfy stream models as approximations of full-Stokes models remains to be determined in detail, especially during transients.

  18. Social Network Analysis and Nutritional Behavior: An Integrated Modeling Approach.

    Science.gov (United States)

    Senior, Alistair M; Lihoreau, Mathieu; Buhl, Jerome; Raubenheimer, David; Simpson, Stephen J

    2016-01-01

    Animals have evolved complex foraging strategies to obtain a nutritionally balanced diet and associated fitness benefits. Recent research combining state-space models of nutritional geometry with agent-based models (ABMs) shows how nutrient-targeted foraging behavior can also influence animal social interactions, ultimately affecting collective dynamics and group structures. Here we demonstrate how social network analyses can be integrated into such a modeling framework and provide a practical analytical tool to compare experimental results with theory. We illustrate our approach by examining the case of nutritionally mediated dominance hierarchies. First, we show how nutritionally explicit ABMs that simulate the emergence of dominance hierarchies can be used to generate social networks. Importantly, the structural properties of our simulated networks bear similarities to dominance networks of real animals (where conflicts are not always directly related to nutrition). Finally, we demonstrate how metrics from social network analyses can be used to predict the fitness of agents in these simulated competitive environments. Our results highlight the potential importance of nutritional mechanisms in shaping dominance interactions in a wide range of social and ecological contexts. Nutrition likely influences social interactions in many species, and yet a theoretical framework for exploring these effects is currently lacking. Combining social network analyses with computational models from nutritional ecology may bridge this divide, representing a pragmatic approach for generating theoretical predictions for nutritional experiments.
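
    To make the integration step concrete, the sketch below builds a directed dominance network from simulated contest outcomes and scores the agents with networkx; the (winner, loser) data format is an assumption for illustration:

```python
import networkx as nx

def dominance_network(contests):
    """Build a weighted, directed dominance network from (winner, loser)
    pairs and return a simple dominance score for each agent."""
    g = nx.DiGraph()
    for winner, loser in contests:
        if g.has_edge(winner, loser):
            g[winner][loser]["weight"] += 1    # repeated wins add weight
        else:
            g.add_edge(winner, loser, weight=1)
    return g, nx.out_degree_centrality(g)      # centrality as dominance proxy
```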

  19. Uncertainty in biology a computational modeling approach

    CERN Document Server

    Gomez-Cabrero, David

    2016-01-01

    Computational modeling of biomedical processes is gaining more and more weight in current research into the etiology of biomedical problems and potential treatment strategies. Computational modeling makes it possible to reduce, refine and replace animal experimentation, as well as to translate findings obtained in these experiments to the human background. However, these biomedical problems are inherently complex, with a myriad of influencing factors, which strongly complicates the model building and validation process. This book addresses four main issues related to the building and validation of computational models of biomedical processes: model establishment under uncertainty; model selection and parameter fitting; sensitivity analysis and model adaptation; and model predictions under uncertainty. In each of the abovementioned areas, the book discusses a number of key techniques by means of a general theoretical description followed by one or more practical examples. This book is intended for graduate students...

  20. THE MODEL OF EXTERNSHIP ORGANIZATION FOR FUTURE TEACHERS: QUALIMETRIC APPROACH

    Directory of Open Access Journals (Sweden)

    Taisiya A. Isaeva

    2015-01-01

    Full Text Available The aim of the paper is to present the author's model of externship organization for bachelors – future teachers of vocational training. The model has been worked out from the standpoint of the qualimetric approach and provides pedagogical training. Methods. The work is based on an analysis of the literature on externship organization for students in higher education and includes SWOT-analysis techniques in pedagogical training. The method of group expert evaluation is the main method of pedagogical qualimetry. Structural components of the professional pedagogical competency of students – future teachers are defined, which makes it possible to determine development levels and assessment criteria for mastering the programme «Vocational training (branch-wise)». Results. The article interprets the concept of «pedagogical training» and states its basic organization principles during students' practice. The methods of expert group formation are presented: self-assessment and personal data. Scientific novelty. An externship organization model for future teachers is developed, based on pedagogical training, the qualimetric approach and SWOT-analysis techniques. The proposed criterion-based assessment procedures make it possible to determine the levels of professional and pedagogical competency. Practical significance. The model has been introduced into the pedagogical training within the educational process of Kalashnikov Izhevsk State Technical University, and can be used in other similar educational establishments.

  1. Approaching the other: Investigation of a descriptive belief revision model

    Directory of Open Access Journals (Sweden)

    Spyridon Stelios

    2016-12-01

    Full Text Available When an individual—a hearer—is confronted with an opinion expressed by another individual—a speaker—differing from her only in terms of a degree of belief, how will she react? In trying to answer that question, this paper reintroduces and investigates a descriptive belief revision model designed to measure approaches. Parameters of the model are the hearer's credibility account of the speaker, the initial difference between the hearer's and speaker's degrees of belief, and the hearer's resistance to change. Within an interdisciplinary framework, two empirical studies were conducted. A comparison was carried out between empirically recorded revisions and revisions according to the model. Results showed that the theoretical model is strongly confirmed. An interesting finding is the measurement of an "unexplainable behaviour" that is not classified either as repulsion or as approach. At a second level of analysis, the model is compared to the Bayesian framework of inference. Structural differences were highlighted, along with evidence that the former is descriptively more adequate.
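
    The abstract does not reproduce the model's equation, but a revision rule built from its three parameters could plausibly take the following shape; the functional form is an assumption for illustration, not the authors' formula:

```python
def revise_belief(hearer_belief, speaker_belief, credibility, resistance):
    """Move the hearer's degree of belief toward the speaker's in
    proportion to the speaker's credibility, damped by the hearer's
    resistance to change (beliefs in [0, 1], resistance >= 0).
    Hypothetical functional form, for illustration only."""
    difference = speaker_belief - hearer_belief
    return hearer_belief + credibility * difference / (1.0 + resistance)
```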

  2. Numerical results for near surface time domain electromagnetic exploration: a full waveform approach

    Science.gov (United States)

    Sun, H.; Li, K.; Li, X., Sr.; Liu, Y., Sr.; Wen, J., Sr.

    2015-12-01

    Time domain or transient electromagnetic (TEM) surveys, including airborne, semi-airborne and ground types, play important roles in applications such as geological surveys, groundwater/aquifer assessment [Meju et al., 2000; Cox et al., 2010], metal ore exploration [Yang and Oldenburg, 2012], prediction of water-bearing structures in tunnels [Xue et al., 2007; Sun et al., 2012], UXO exploration [Pasion et al., 2007; Gasperikova et al., 2009], etc. The common practice is to introduce a current into a transmitting (Tx) loop and acquire the induced electromagnetic field after the current is cut off [Zhdanov and Keller, 1994]. The current waveform differs between instruments. A rectangular waveform is the most widely used excitation current source, especially in ground TEM. Triangular and half-sine waveforms are commonly used in airborne and semi-airborne TEM investigations. In most instruments, only the off-time responses are acquired and used in later analysis and data inversion. Very few airborne instruments acquire the on-time and off-time responses together, and even those that acquire on-time data usually do not use them in the interpretation. This abstract presents a novel full waveform time domain electromagnetic method and our recent modeling results. The benefits come from our new algorithm for modeling full waveform time domain electromagnetic problems. We introduce the current density into Maxwell's equations as the transmitting source. This approach allows arbitrary waveforms, such as triangle, half-sine or trapezoidal waves, or waveforms recorded from equipment, to be used in modeling. Here, we simulate the establishment and subsequent diffusion of the electromagnetic field in the earth. The traditional time domain electromagnetic response with pure secondary fields can also be extracted from our modeling results. The real-time responses excited by a loop source can be calculated using the algorithm. We analyze the full time gate responses of homogeneous half space and two

  3. ALREST High Fidelity Modeling Program Approach

    Science.gov (United States)

    2011-05-18

    Briefing slides covering: gases and mixtures of Redlich-Kwong and Peng-Robinson fluids; an assumed-PDF model based on the k-ε-g model in the NASA/LaRC Vulcan code; a level set model; the potential attractiveness of liquid hydrocarbon engines for boost applications; the propensity of hydrocarbon engines for combustion instability; air...

  4. Updated Results for the Wake Vortex Inverse Model

    Science.gov (United States)

    Robins, Robert E.; Lai, David Y.; Delisi, Donald P.; Mellman, George R.

    2008-01-01

    NorthWest Research Associates (NWRA) has developed an Inverse Model for inverting aircraft wake vortex data. The objective of the inverse modeling is to obtain estimates of the vortex circulation decay and crosswind vertical profiles, using time history measurements of the lateral and vertical position of aircraft vortices. The Inverse Model performs iterative forward model runs using estimates of vortex parameters, vertical crosswind profiles, and vortex circulation as a function of wake age. Iterations are performed until a user-defined criterion is satisfied. Outputs from an Inverse Model run are the best estimates of the time history of the vortex circulation derived from the observed data, the vertical crosswind profile, and several vortex parameters. The forward model, named SHRAPA, used in this inverse modeling is a modified version of the Shear-APA model, and it is described in Section 2 of this document. Details of the Inverse Model are presented in Section 3. The Inverse Model was applied to lidar-observed vortex data at three airports: FAA acquired data from San Francisco International Airport (SFO) and Denver International Airport (DEN), and NASA acquired data from Memphis International Airport (MEM). The results are compared with observed data. This Inverse Model validation is documented in Section 4. A summary is given in Section 5. A user's guide for the inverse wake vortex model is presented in a separate NorthWest Research Associates technical report (Lai and Delisi, 2007a).
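
    The iterate-forward-model-runs-until-criterion loop described above is, in essence, a nonlinear least-squares fit. A generic sketch using SciPy, with `forward_model` standing in for SHRAPA (hypothetical signature):

```python
import numpy as np
from scipy.optimize import least_squares

def invert_wake_data(forward_model, t_obs, y_obs, theta0):
    """Estimate vortex parameters theta by minimizing the misfit between
    forward-model trajectories and lidar-observed vortex positions."""
    def residuals(theta):
        y_pred = forward_model(theta, t_obs)   # predicted lateral/vertical positions
        return (y_pred - y_obs).ravel()
    fit = least_squares(residuals, theta0)     # iterative forward-model runs
    return fit.x                               # best-estimate parameters
```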

  5. LEXICAL APPROACH IN TEACHING TURKISH: A COLLOCATIONAL STUDY MODEL

    National Research Council Canada - National Science Library

    Eser ÖRDEM

    2013-01-01

    Abstract This study intends to propose the Lexical Approach (Lewis, 1998, 2002; Harwood, 2002) and a model for teaching Turkish as a foreign language, so that this model can be used in classroom settings...

  6. Virtuous organization: A structural equation modeling approach

    Directory of Open Access Journals (Sweden)

    Majid Zamahani

    2013-02-01

    Full Text Available For years, the idea of virtue was viewed unfavorably among researchers: virtues were traditionally considered culture-specific and relativistic, and they were supposed to be associated with social conservatism, religious or moral dogmatism, and scientific irrelevance. Virtue and virtuousness have only recently been taken seriously by organizational researchers. The study examines the relationships between leadership, organizational culture, human resources, structure and processes, care for community, and the virtuous organization. Structural equation modeling is employed to investigate the effects of each variable on the other components. The data used in this study consist of questionnaire responses from employees in Payam e Noor University in Yazd province. A total of 250 questionnaires were sent out and 211 valid responses were received. Our results reveal that all five variables have positive and significant impacts on the virtuous organization. Among the five variables, organizational culture has the largest direct impact (0.80) and human resources the largest total impact (0.844) on the virtuous organization.

  7. A model-based multisensor data fusion knowledge management approach

    Science.gov (United States)

    Straub, Jeremy

    2014-06-01

    A variety of approaches exist for combining data from multiple sensors. The model-based approach combines data based on its support for or refutation of elements of the model, which in turn can be used to evaluate an experimental thesis. This paper presents a collection of algorithms for mapping various types of sensor data onto a thesis-based model and evaluating the truth or falsity of the thesis, based on the model. The use of this approach for autonomously arriving at findings and for prioritizing data is considered. Techniques for updating the model (instead of arriving at a true/false assertion) are also discussed.

  8. Wave-current interactions: model development and preliminary results

    Science.gov (United States)

    Mayet, Clement; Lyard, Florent; Ardhuin, Fabrice

    2013-04-01

    The coastal area concentrates many uses that require integrated management based on diagnostic and predictive tools, to understand and anticipate the fate of pollution from land or sea and to learn more about natural hazards at sea or activity on the coast. Realistic modelling of coastal hydrodynamics needs to take into account various interacting processes, including tides, surges, and sea state (Wolf [2008]). These processes act at different spatial scales. Unstructured-grid models have shown the ability to satisfy these needs, provided that a good mesh resolution criterion is used. We worked on adding a sea-state forcing to a hydrodynamic circulation model. The sea-state model is the unstructured version of WAVEWATCH III (Tolman [2008]), which is developed at IFREMER, Brest (Ardhuin et al. [2010]), and the hydrodynamic model is the 2D barotropic module of the unstructured-grid finite element model T-UGOm (Le Bars et al. [2010]). We chose to use the radiation stress approach (Longuet-Higgins and Stewart [1964]) to represent the effect of surface waves (wind waves and swell) in the barotropic model, as previously done by Mastenbroek et al. [1993] and others. We present here some validation of the model against academic cases: a 2D plane beach (Haas and Warner [2009]) and a simple bathymetric step with an analytic solution for waves (Ardhuin et al. [2008]). In a second part, we present a realistic application in the Ushant Sea during an extreme event. References: Ardhuin, F., N. Rascle, and K. Belibassakis, Explicit wave-averaged primitive equations using a generalized Lagrangian mean, Ocean Modelling, 20 (1), 35-60, doi:10.1016/j.ocemod.2007.07.001, 2008. Ardhuin, F., et al., Semiempirical Dissipation Source Functions for Ocean Waves. Part I: Definition, Calibration, and Validation, J. Phys. Oceanogr., 40 (9), 1917-1941, doi:10.1175/2010JPO4324.1, 2010. Haas, K. A., and J. C. Warner, Comparing a quasi-3D to a full 3D nearshore circulation model: SHORECIRC and

  9. Comparison of two novel approaches to model fibre reinforced concrete

    NARCIS (Netherlands)

    Radtke, F.K.F.; Simone, A.; Sluys, L.J.

    2009-01-01

    We present two approaches to model fibre reinforced concrete. In both approaches, discrete fibre distributions and the behaviour of the fibre-matrix interface are explicitly considered. One approach employs the reaction forces from fibre to matrix while the other is based on the partition of unity f

  10. Modelling the World Wool Market: A Hybrid Approach

    OpenAIRE

    2007-01-01

    We present a model of the world wool market that merges two modelling traditions: the partial-equilibrium commodity-specific approach and the computable general-equilibrium approach. The model captures the multistage nature of the wool production system, and the heterogeneous nature of raw wool, processed wool and wool garments. It also captures the important wool producing and consuming regions of the world. We illustrate the utility of the model by estimating the effects of tariff barriers o...

  11. Generalised Chou-Yang model and recent results

    Energy Technology Data Exchange (ETDEWEB)

    Fazal-e-Aleem [International Centre for Theoretical Physics, Trieste (Italy); Rashid, H. [Punjab Univ., Lahore (Pakistan). Centre for High Energy Physics

    1996-12-31

    It is shown that the most recent results of the E710 and UA4/2 collaborations for the total cross section and ρ, together with earlier measurements, give good agreement with measurements of the differential cross section at 546 and 1800 GeV within the framework of the Generalised Chou-Yang model. These results are also compared with the predictions of other models. (author) 16 refs.

  12. An algebraic approach to the Hubbard model

    CERN Document Server

    de Leeuw, Marius

    2015-01-01

    We study the algebraic structure of an integrable Hubbard-Shastry type lattice model associated with the centrally extended su(2|2) superalgebra. This superalgebra underlies Beisert's AdS/CFT worldsheet R-matrix and Shastry's R-matrix. The considered model specializes to the one-dimensional Hubbard model in a certain limit. We demonstrate that Yangian symmetries of the R-matrix specialize to the Yangian symmetry of the Hubbard model found by Korepin and Uglov. Moreover, we show that the Hubbard model Hamiltonian has an algebraic interpretation as the so-called secret symmetry. We also discuss Yangian symmetries of the A and B models introduced by Frolov and Quinn.

  13. Regularization of turbulence - a comprehensive modeling approach

    NARCIS (Netherlands)

    Geurts, Bernard J.

    2011-01-01

    Turbulence readily arises in numerous flows in nature and technology. The large number of degrees of freedom of turbulence poses serious challenges to numerical approaches aimed at simulating and controlling such flows. While the Navier-Stokes equations are commonly accepted to precisely describe fl

  14. Measuring equilibrium models: a multivariate approach

    Directory of Open Access Journals (Sweden)

    Nadji RAHMANIA

    2011-04-01

    Full Text Available This paper presents a multivariate methodology for obtaining measures of unobserved macroeconomic variables. The procedure used is the multivariate Hodrick-Prescott filter, which depends on smoothing parameters. The choice of these parameters is crucial. Our approach is based on consistent estimators of these parameters, depending only on the observed data.
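
    For reference, the univariate Hodrick-Prescott filter underlying the multivariate procedure has a closed-form solution as a sparse linear system: the trend solves (I + λ D'D) τ = y, where D is the second-difference operator. A compact sketch:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def hp_trend(y, lam=1600.0):
    """Hodrick-Prescott trend: minimize sum (y - tau)^2 + lam * sum (d2 tau)^2,
    whose minimizer is tau = (I + lam * D'D)^{-1} y."""
    n = len(y)
    D = sparse.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(n - 2, n))  # 2nd differences
    A = sparse.eye(n) + lam * (D.T @ D)
    return spsolve(A.tocsc(), np.asarray(y, dtype=float))
```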

  15. A graphical approach to analogue behavioural modelling

    OpenAIRE

    Moser, Vincent; Nussbaum, Pascal; Amann, Hans-Peter; Astier, Luc; Pellandini, Fausto

    2007-01-01

    In order to master the growing complexity of analogue electronic systems, modelling and simulation of analogue hardware at various levels is absolutely necessary. This paper presents an original modelling method based on the graphical description of analogue electronic functional blocks. This method is intended to be automated and integrated into a design framework: specialists create behavioural models of existing functional blocks, which can then be used through high-level selection and spec...

  16. A geometrical approach to structural change modeling

    OpenAIRE

    Stijepic, Denis

    2013-01-01

    We propose a model for studying the dynamics of economic structures. The model is based on qualitative information regarding structural dynamics, in particular, (a) the information on the geometrical properties of trajectories (and their domains) which are studied in structural change theory and (b) the empirical information from stylized facts of structural change. We show that structural change is path-dependent in this model and use this fact to restrict the number of future structural cha...

  17. Forecasting wind-driven wildfires using an inverse modelling approach

    Directory of Open Access Journals (Sweden)

    O. Rios

    2013-12-01

    Full Text Available A technology able to rapidly forecast wildfire dynamics would lead to a paradigm shift in the response to emergencies, providing the Fire Service with essential information about the on-going fire. The article at hand presents and explores a novel methodology to forecast wildfire dynamics in wind-driven conditions, using real-time data assimilation and inverse modelling. The forecasting algorithm combines Rothermel's rate of spread theory with a perimeter expansion model based on Huygens' principle, and solves the optimisation problem with a tangent linear approach and forward automatic differentiation. Its potential is investigated using synthetic data and evaluated in different wildfire scenarios. The results show the high capacity of the method to quickly predict the location of the fire front with a positive lead time (ahead of the event). This work opens the door to further advances in the framework and to more sophisticated models, while keeping the computational time suitable for operational use.

  18. Consumer preference models: fuzzy theory approach

    Science.gov (United States)

    Turksen, I. B.; Wilson, I. A.

    1993-12-01

    Consumer preference models are widely used in new product design, marketing management, pricing and market segmentation. The purpose of this article is to develop and test a fuzzy set preference model which can represent linguistic variables in individual-level models implemented in parallel with existing conjoint models. The potential improvements in market share prediction and predictive validity can substantially improve management decisions about what to make (product design), for whom to make it (market segmentation) and how much to make (market share prediction).

  19. Life cycle Prognostic Model Development and Initial Application Results

    Energy Technology Data Exchange (ETDEWEB)

    Jeffries, Brien; Hines, Wesley; Nam, Alan; Sharp, Michael; Upadhyaya, Belle [The University of Tennessee, Knoxville (United States)

    2014-08-15

    In order to obtain more accurate Remaining Useful Life (RUL) estimates based on empirical modeling, a Lifecycle Prognostics algorithm was developed that integrates various prognostic models. These models can be categorized into three types based on the type of data they process. The application of multiple models takes advantage of the most useful information available as the system or component operates through its lifecycle. The Lifecycle Prognostics is applied to an impeller test bed, and the initial results serve as a proof of concept.

  20. MODEL-BASED PERFORMANCE EVALUATION APPROACH FOR MOBILE AGENT SYSTEMS

    Institute of Scientific and Technical Information of China (English)

    Li Xin; Mi Zhengkun; Meng Xudong

    2004-01-01

    Claimed as the next-generation programming paradigm, mobile agent technology has attracted extensive interest in recent years. However, up to now, limited research effort has been devoted to the performance study of mobile agent systems, and most of this research focuses on agent behavior analysis, with the result that the models are hard to apply to mobile agent systems. To bridge the gap, a new performance evaluation model derived from the operation mechanisms of mobile agent platforms is proposed. Details are discussed for the design of companion simulation software, which can provide system performance measures such as the platform's response time to a mobile agent. The determination of model parameters is then investigated. Finally, a comparison is made between the model-based simulation results and the measurement-based real performance of mobile agent systems. The results show that the proposed model and the designed software are effective in evaluating the performance characteristics of mobile agent systems. The proposed approach can also be considered as the basis of performance analysis for large systems composed of multiple mobile agent platforms.

  1. Modeling drug- and chemical- induced hepatotoxicity with systems biology approaches

    Directory of Open Access Journals (Sweden)

    Sudin eBhattacharya

    2012-12-01

    Full Text Available We provide an overview of computational systems biology approaches as applied to the study of chemical- and drug-induced toxicity. The concept of 'toxicity pathways' is described in the context of the 2007 US National Academies of Science report, Toxicity Testing in the 21st Century: A Vision and A Strategy. Pathway mapping and modeling based on network biology concepts are a key component of the vision laid out in this report for a more biologically-based analysis of dose-response behavior and the safety of chemicals and drugs. We focus on toxicity of the liver (hepatotoxicity), a complex phenotypic response with contributions from a number of different cell types and biological processes. We describe three case studies of complementary multi-scale computational modeling approaches to understand perturbation of toxicity pathways in the human liver as a result of exposure to environmental contaminants and specific drugs. One approach involves development of a spatial, multicellular virtual tissue model of the liver lobule that combines molecular circuits in individual hepatocytes with cell-cell interactions and blood-mediated transport of toxicants through hepatic sinusoids, to enable quantitative, mechanistic prediction of hepatic dose-response for activation of the AhR toxicity pathway. Simultaneously, methods are being developed to extract quantitative maps of intracellular signaling and transcriptional regulatory networks perturbed by environmental contaminants, using a combination of gene expression and genome-wide protein-DNA interaction data. A predictive physiological model (DILIsym™) to understand drug-induced liver injury (DILI), the most common adverse event leading to termination of clinical development programs and regulatory actions on drugs, is also described. The model initially focuses on reactive metabolite-induced DILI in response to administration of acetaminophen, and spans multiple biological scales.

  2. Bayesian network approach for modeling local failure in lung cancer

    Science.gov (United States)

    Oh, Jung Hun; Craft, Jeffrey; Al-Lozi, Rawan; Vaidya, Manushka; Meng, Yifan; Deasy, Joseph O; Bradley, Jeffrey D; Naqa, Issam El

    2011-01-01

    Locally advanced non-small cell lung cancer (NSCLC) patients suffer from a high local failure rate following radiotherapy. Despite many efforts to develop new dose-volume models for early detection of tumor local failure, no significant improvement has been reported in their prospective application. Based on recent studies of the role of biomarker proteins in hypoxia and inflammation in predicting tumor response to radiotherapy, we hypothesize that combining physical and biological factors within a suitable framework could improve the overall prediction. To test this hypothesis, we propose a graphical Bayesian network framework for predicting local failure in lung cancer. The proposed approach was tested using two different datasets of locally advanced NSCLC patients treated with radiotherapy. The first dataset, collected retrospectively, comprises clinical and dosimetric variables only. The second dataset was collected prospectively; in addition to clinical and dosimetric information, blood was drawn from the patients at various time points to extract candidate biomarkers. Our preliminary results show that the proposed method can be used as an efficient way to develop predictive models of local failure in these patients and to interpret relationships among the different variables in the models. We also demonstrate the potential use of heterogeneous physical and biological variables to improve the model prediction. With the first dataset, we achieved better performance compared with competing Bayesian-based classifiers. With the second dataset, the combined model had a slightly higher performance compared to the individual physical and biological models, with the biological variables making the largest contribution. Our preliminary results highlight the potential of the proposed integrated approach for predicting post-radiotherapy local failure in NSCLC patients. PMID:21335651

  3. Implementing a stepped-care approach in primary care: results of a qualitative study.

    NARCIS (Netherlands)

    Franx, G.C.; Oud, M.; Lange, J.; Wensing, M.J.; Grol, R.P.

    2012-01-01

    BACKGROUND: Since 2004, 'stepped-care models' have been adopted in several international evidence-based clinical guidelines to guide clinicians in the organisation of depression care. To enhance the adoption of this new treatment approach, a Quality Improvement Collaborative (QIC) was initiated in

  4. Probabilistic modelling in urban drainage – two approaches that explicitly account for temporal variation of model errors

    DEFF Research Database (Denmark)

    Löwe, Roland; Del Giudice, Dario; Mikkelsen, Peter Steen

    to observations. After a brief discussion of the assumptions made for likelihood-based parameter inference, we illustrated the basic principles of both approaches using the example of sewer flow modelling with a conceptual rainfall-runoff model. The results from a real-world case study suggested that both approaches...

  5. The GBVP approach for vertical datum unification: recent results in North America

    Science.gov (United States)

    Amjadiparvar, B.; Rangelova, E.; Sideris, M. G.

    2016-01-01

    Two levelling-based vertical datums have been used in North America, namely CGVD28 in Canada and NAVD88 in the USA and Mexico. Although the two datums will be replaced by a common and continent-wide vertical datum in a few years, their connection and unification are of great interest to the scientific and user communities. In this paper, the geodetic boundary value problem (GBVP) approach is studied as a rigorous method for connecting two or more vertical datums through computed datum offsets from a global equipotential surface defined by a GOCE-based geoid. The so-called indirect bias term, the effect of the GOCE geoid omission error, the effect of the systematic levelling datum errors and distortions, and the effect of the geodetic data errors on the datum unification are four important factors affecting the practical implementation of this approach. These factors are investigated numerically using the GNSS-levelling and tide gauge stations in Canada, the USA, Alaska, and Mexico. The results show that the indirect bias term can be omitted if a GOCE-based global geopotential model is used in gravimetric geoid computations. The omission of the indirect bias term simplifies the linear system of equations for the estimation of the datum offset(s). Because of the existing systematic levelling errors and distortions in the Canadian and US levelling networks, the datum offsets are investigated in eight smaller regions along the Canadian and US coastal areas. Using GNSS-levelling stations in the US coastal regions, the mean datum offset can be estimated with a 1 cm standard deviation if the GOCE geoid omission error is taken into account by means of the local gravity and topographic information. In the Canadian Atlantic and Pacific regions, the datum offsets can be estimated with 2.3 and 3.5 cm standard deviation, respectively, using GNSS-levelling stations. However, due to the low number of tide gauge stations, the standard deviation of the CGVD28 and NAVD88 datum
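
    The offset estimate at co-located GNSS-levelling stations follows from the standard relation h = H + N + offset between ellipsoidal height h, levelled height H and geoid height N. A minimal sketch of the per-region computation (the omission-error and systematic-error corrections discussed above are assumed applied beforehand):

```python
import numpy as np

def datum_offset(h_gnss, H_levelled, N_geoid):
    """Mean vertical datum offset and its standard error from
    co-located GNSS-levelling stations."""
    offsets = (np.asarray(h_gnss) - np.asarray(H_levelled) - np.asarray(N_geoid))
    return offsets.mean(), offsets.std(ddof=1) / np.sqrt(offsets.size)
```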

  6. Nucleon Spin Content in a Relativistic Quark Potential Model Approach

    Institute of Scientific and Technical Information of China (English)

    DONG YuBing; FENG QingGuo

    2002-01-01

    Based on a relativistic quark model approach with an effective potential U(r) = (ac/2)(1 + γ^0) r^2, the spin content of the nucleon is investigated. Pseudo-scalar interaction between quarks and Goldstone bosons is employed to calculate the couplings between the Goldstone bosons and the nucleon. Different approaches to deal with the center of mass correction in the relativistic quark potential model approach are discussed.

  7. Quantitative versus qualitative modeling: a complementary approach in ecosystem study.

    Science.gov (United States)

    Bondavalli, C; Favilla, S; Bodini, A

    2009-02-01

    Natural disturbance or human perturbation act upon ecosystems by changing some dynamical parameters of one or more species. Foreseeing these modifications is necessary before embarking on an intervention: predictions may help to assess management options and define hypotheses for interventions. Models become valuable tools for studying and making predictions only when they capture the types of interactions and their magnitudes. Quantitative models are more precise and specific about a system, but require a large effort in model construction. Because of this, ecological systems very often remain only partially specified, and one possible approach to their description and analysis comes from qualitative modelling. Qualitative models yield predictions as directions of change in species abundance, but in complex systems these predictions are often ambiguous, being the result of opposite actions exerted on the same species by way of multiple pathways of interactions. Again, to avoid such ambiguities one needs to know the intensity of all links in the system. One way to make link magnitude explicit in a form that can be used in qualitative analysis is described in this paper and takes advantage of another type of ecosystem representation: ecological flow networks. These flow diagrams contain the structure, the relative position and the connections between the components of a system, and the quantity of matter flowing along every connection. In this paper it is shown how these ecological flow networks can be used to produce a quantitative model similar to the qualitative counterpart. Analyzed through the apparatus of loop analysis, this quantitative model yields predictions that are by no means ambiguous, solving in an elegant way the basic problem of qualitative analysis. The approach adopted in this work is still preliminary and we must be careful in its application.
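
    In quantitative loop analysis, press-perturbation predictions come from the negative inverse of the community matrix; it is precisely the quantification of link magnitudes (here, from flow networks) that removes the sign ambiguities mentioned above. A minimal sketch, assuming a stable (invertible) community matrix:

```python
import numpy as np

def press_predictions(community_matrix):
    """Predicted responses to a sustained (press) perturbation:
    entry (i, j) is the response of species i to a positive input
    to species j. The signs give the qualitative predictions."""
    A = np.asarray(community_matrix, dtype=float)
    sensitivity = -np.linalg.inv(A)
    return np.sign(sensitivity), sensitivity
```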

  8. Object-Oriented Approach to Modeling Units of Pneumatic Systems

    Directory of Open Access Journals (Sweden)

    Yu. V. Kyurdzhiev

    2014-01-01

    Full Text Available The article shows the relevance of object-oriented programming approaches when modeling pneumatic units (PU). Based on an analysis of the calculation schemes of pneumatic system aggregates, two basic objects, namely a flow cavity and a material point, are highlighted. The basic interactions of the objects are defined. Cavity-cavity interaction: exchange of matter and energy through mass flows. Cavity-point interaction: force interaction and exchange of energy in the form of work. Point-point interaction: force interaction, elastic interaction, inelastic interaction, and intervals of displacement. The authors have developed mathematical models of the basic objects and interactions, implemented with object-oriented programming. Mathematical models of the elements of the PU design scheme are implemented in classes derived from the base classes. These classes implement the models of a flow cavity, piston, diaphragm, short channel, a diaphragm opened according to a given law, spring, bellows, elastic collision, inelastic collision, friction, PU stages with limited movement, etc. Numerical integration of the differential equations of the mathematical models of the PU design scheme elements is based on the fourth-order Runge-Kutta method. On request, each class performs one integration tact, i.e. the calculation of the method's coefficients. The paper presents an integration algorithm for the system of differential equations: all objects of the PU design scheme are placed in a unidirectional list, an iterator loop initiates the integration tact of all objects in the list, and every fourth iteration advances to the next integration step. The calculation process stops when any object raises a shutdown flag. The proposed approach was tested in the calculation of a number of PU designs. Compared with traditional approaches to modeling, the method proposed by the authors features easy enhancement, code reuse, high reliability
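
    A schematic of the object-list integration scheme described above, assuming each design-scheme element exposes a state and a `derivative` method (the class layout is illustrative, not the authors' code, which is not reproduced in the record):

```python
def rk4_step(units, t, dt):
    """One fourth-order Runge-Kutta step over every object in the list:
    four derivative 'tacts' per step, with all objects advanced together
    so that cavity/point interactions stay consistent within each stage."""
    for u in units:
        u._y0 = u.state
        u._k = [u.derivative(t)]                 # stage 1 at the old state
    for c in (0.5, 0.5, 1.0):                    # stages 2-4
        for u in units:                          # move all states first...
            u.state = u._y0 + c * dt * u._k[-1]
        for u in units:                          # ...then evaluate derivatives
            u._k.append(u.derivative(t + c * dt))
    for u in units:
        k1, k2, k3, k4 = u._k
        u.state = u._y0 + dt * (k1 + 2*k2 + 2*k3 + k4) / 6.0
```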

  9. Comparison of Joint Modeling Approaches Including Eulerian Sliding Interfaces

    Energy Technology Data Exchange (ETDEWEB)

    Lomov, I; Antoun, T; Vorobiev, O

    2009-12-16

    Accurate representation of discontinuities such as joints and faults is a key ingredient for high-fidelity modeling of shock propagation in geologic media. The following study was done to improve the treatment of discontinuities (joints) in the Eulerian hydrocode GEODYN (Lomov and Liu 2005). Lagrangian methods with conforming meshes and explicit inclusion of joints in the geologic model are well suited for such an analysis. Unfortunately, current meshing tools are unable to automatically generate adequate hexahedral meshes for large numbers of irregular polyhedra. Another concern is that joint stiffness in such explicit computations requires significantly reduced time steps, with negative implications for both the efficiency and quality of the numerical solution. An alternative approach is to use non-conforming meshes and embed joint information into regular computational elements. However, once slip displacement on the joints becomes comparable to the zone size, Lagrangian (even non-conforming) meshes can suffer from tangling and decreased time step problems. The use of non-conforming meshes in an Eulerian solver may alleviate these difficulties and provide a viable numerical approach for modeling the effects of faults on the dynamic response of geologic materials. We studied shock propagation in jointed/faulted media using a Lagrangian and two Eulerian approaches. To investigate the accuracy of this joint treatment, the GEODYN calculations have been compared with results from the Lagrangian code GEODYN-L, which uses an explicit treatment of joints via common-plane contact. We explore two approaches to joint treatment in the code, one for joints with finite thickness and the other for tight joints. In all cases the sliding interfaces are tracked explicitly without homogenization or blending the joint and block response into an average response. In general, rock joints will introduce an increase in normal compliance in addition to a reduction in shear strength. In the

  10. A simple approach to modeling ductile failure.

    Energy Technology Data Exchange (ETDEWEB)

    Wellman, Gerald William

    2012-06-01

    Sandia National Laboratories has the need to predict the behavior of structures after the occurrence of an initial failure. In some cases determining the extent of failure, beyond initiation, is required, while in a few cases the initial failure is a design feature used to tailor the subsequent load paths. In either case, the ability to numerically simulate the initiation and propagation of failures is a highly desired capability. This document describes one approach to the simulation of failure initiation and propagation.

  11. An approach for activity-based DEVS model specification

    DEFF Research Database (Denmark)

    Alshareef, Abdurrahman; Sarjoughian, Hessam S.; Zarrin, Bahram

    2016-01-01

    activity-based behavior modeling of parallel DEVS atomic models. We consider UML activities and actions as fundamental units of behavior modeling, especially in the presence of recent advances in the UML 2.5 specifications. We describe in detail how to approach activity modeling with a set of elemental...

  12. Advanced language modeling approaches, case study: Expert search

    NARCIS (Netherlands)

    Hiemstra, Djoerd

    2008-01-01

    This tutorial gives a clear and detailed overview of advanced language modeling approaches and tools, including the use of document priors, translation models, relevance models, parsimonious models and expectation maximization training. Expert search will be used as a case study to explain the

  13. Multiscale approach to modeling intrinsic dissipation in solids

    Science.gov (United States)

    Kunal, K.; Aluru, N. R.

    2016-08-01

    In this paper, we develop a multiscale approach to model intrinsic dissipation under high-frequency vibrations in solids. For vibrations with a timescale comparable to the phonon relaxation time, the local phonon distribution deviates from the equilibrium distribution. We extend the quasiharmonic (QHM) method to describe the dynamics under such a condition. The local deviation from the equilibrium state is characterized using a nonequilibrium stress tensor. A constitutive relation for the time evolution of the stress component is obtained. We then parametrize the evolution equation using the QHM method and a stochastic sampling approach. The stress relaxation dynamics is obtained using mode Langevin dynamics. Methods to obtain the input variables for the Langevin dynamics are discussed. The proposed methodology is used to obtain the dissipation rate Edissip for different cases. Frequency and size effects on Edissip are studied. The results are compared with those obtained using nonequilibrium molecular dynamics (MD).

  14. Model predictive control approach for a CPAP-device

    Directory of Open Access Journals (Sweden)

    Scheel Mathias

    2017-09-01

    Full Text Available The obstructive sleep apnoea syndrome (OSAS) is characterized by a collapse of the upper respiratory tract, resulting in a reduction of the blood oxygen concentration and an increase of the carbon dioxide (CO2) concentration, which causes repeated sleep disruptions. The gold standard for treating OSAS is continuous positive airway pressure (CPAP) therapy. The continuous pressure keeps the upper airway open and prevents the collapse of the upper respiratory tract and the pharynx. Most of the available CPAP devices cannot maintain the pressure reference [1]. In this work, a model predictive control approach is provided. This control approach makes it possible to include the patient's breathing effort in the calculation of the control variable, so that a patient-individualized control strategy can be developed.
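
    A minimal receding-horizon sketch of such a controller, using cvxpy and a hypothetical first-order pressure model; the dynamics, limits and weights below are illustrative assumptions, not a validated CPAP model:

```python
import cvxpy as cp

def mpc_blower_command(p0, p_ref, effort, a=0.9, b=0.1, horizon=10, u_max=1.0):
    """Track the pressure reference p_ref given a predicted breathing-effort
    disturbance, with assumed dynamics p[k+1] = a*p[k] + b*u[k] + effort[k]."""
    u = cp.Variable(horizon)
    p = cp.Variable(horizon + 1)
    constraints = [p[0] == p0, cp.abs(u) <= u_max]
    for k in range(horizon):
        constraints.append(p[k + 1] == a * p[k] + b * u[k] + effort[k])
    cost = cp.sum_squares(p[1:] - p_ref) + 1e-3 * cp.sum_squares(u)
    cp.Problem(cp.Minimize(cost), constraints).solve()
    return u.value[0]   # apply only the first command, then re-solve
```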

  15. Anomalous superconductivity in the tJ model; moment approach

    DEFF Research Database (Denmark)

    Sørensen, Mads Peter; Rodriguez-Nunez, J.J.

    1997-01-01

    By extending the moment approach of Nolting (Z. Phys. 225 (1972) 25) in the superconducting phase, we have constructed the one-particle spectral functions (diagonal and off-diagonal) for the tJ model in any dimension. We propose that both the diagonal and the off-diagonal spectral functions...... Hartree shift, which ultimately enlarges the bandwidth of the free carriers, allowing us to take relatively high values of J/t and allowing superconductivity to live in the Tc-ρ phase diagram, in agreement with numerical calculations in a cluster. We have calculated the static spin susceptibility......, χ(T), and the specific heat, Cv(T), within the moment approach. We find that all the relevant physical quantities show the signature of superconductivity at Tc in the form of kinks (anomalous behavior) or jumps, for low density, in agreement with recently published literature, showing a generic...

  16. Endoscopic endonasal skull base approach for parasellar lesions: Initial experiences, results, efficacy, and complications

    Directory of Open Access Journals (Sweden)

    Shigetoshi Yano

    2014-01-01

    Full Text Available Background: Endoscopic surgery is suitable for the transsphenoidal approach; it is minimally invasive and provides a well-lit operative field. The endoscopic skull base approach through a large opening of the sphenoid sinus via both nostrils has extended the surgical indication for various skull base lesions. In this study, we describe the efficacy of and complications associated with the endoscopic skull base approach for extra- or intradural parasellar lesions, based on our experience. Methods: Seventy-four cases were treated by an endoscopic skull base approach. The indications for these procedures included 55 anterior extended approaches, 10 clival approaches, and 9 cavernous approaches. The operations were performed through both nostrils using a rigid endoscope. After tumor removal, the skull base was reconstructed by a multilayered method using a polyglactin acid (PGA) sheet. Results: Gross total resection was achieved in 82% of pituitary adenomas, 68.8% of meningiomas, and 60% of craniopharyngiomas with the anterior extended approach, and in 83.3% of chordomas with the clival approach, but in only 50% of the tumors with the cavernous approach. Tumor consistency, adhesion, and/or extension were significant limitations. Visual function improvements were achieved in 37 of 41 (90.2%) cases. Cerebrospinal fluid (CSF) leakage (9.5%), infections (5.4%), neural injuries (4.1%), and vascular injuries (2.7%) were the major complications. Conclusions: Our experience shows that the endoscopic skull base approach is a safe and effective procedure for various parasellar lesions. Selection of patients who are unlikely to develop complications seems to be an important factor for procedure efficacy and good outcome.

  17. Challenges and opportunities for integrating lake ecosystem modelling approaches

    Science.gov (United States)

    Mooij, Wolf M.; Trolle, Dennis; Jeppesen, Erik; Arhonditsis, George; Belolipetsky, Pavel V.; Chitamwebwa, Deonatus B.R.; Degermendzhy, Andrey G.; DeAngelis, Donald L.; Domis, Lisette N. De Senerpont; Downing, Andrea S.; Elliott, J. Alex; Ruberto, Carlos Ruberto; Gaedke, Ursula; Genova, Svetlana N.; Gulati, Ramesh D.; Hakanson, Lars; Hamilton, David P.; Hipsey, Matthew R.; Hoen, Jochem 't; Hulsmann, Stephan; Los, F. Hans; Makler-Pick, Vardit; Petzoldt, Thomas; Prokopkin, Igor G.; Rinke, Karsten; Schep, Sebastiaan A.; Tominaga, Koji; Van Dam, Anne A.; Van Nes, Egbert H.; Wells, Scott A.; Janse, Jan H.

    2010-01-01

    A large number and wide variety of lake ecosystem models have been developed and published during the past four decades. We identify two challenges for making further progress in this field. One such challenge is to avoid developing more models largely following the concept of others ('reinventing the wheel'). The other challenge is to avoid focusing on only one type of model, while ignoring new and diverse approaches that have become available ('having tunnel vision'). In this paper, we aim at improving the awareness of existing models and knowledge of concurrent approaches in lake ecosystem modelling, without covering all possible model tools and avenues. First, we present a broad variety of modelling approaches. To illustrate these approaches, we give brief descriptions of rather arbitrarily selected sets of specific models. We deal with static models (steady state and regression models), complex dynamic models (CAEDYM, CE-QUAL-W2, Delft 3D-ECO, LakeMab, LakeWeb, MyLake, PCLake, PROTECH, SALMO), structurally dynamic models and minimal dynamic models. We also discuss a group of approaches that could all be classified as individual based: super-individual models (Piscator, Charisma), physiologically structured models, stage-structured models and trait-based models. We briefly mention genetic algorithms, neural networks, Kalman filters and fuzzy logic. Thereafter, we zoom in, as an in-depth example, on the multi-decadal development and application of the lake ecosystem model PCLake and related models (PCLake Metamodel, Lake Shira Model, IPH-TRIM3D-PCLake). In the discussion, we argue that while the historical development of each approach and model is understandable given its 'leading principle', there are many opportunities for combining approaches. We take the point of view that a single 'right' approach does not exist and should not be strived for. Instead, multiple modelling approaches, applied concurrently to a given problem, can help develop an integrative

  18. GPA: a statistical approach to prioritizing GWAS results by integrating pleiotropy and annotation.

    Science.gov (United States)

    Chung, Dongjun; Yang, Can; Li, Cong; Gelernter, Joel; Zhao, Hongyu

    2014-11-01

    Results from Genome-Wide Association Studies (GWAS) have shown that complex diseases are often affected by many genetic variants with small or moderate effects. Identification of these risk variants remains a very challenging problem. There is a need to develop more powerful statistical methods to leverage available information to improve upon traditional approaches that focus on a single GWAS dataset without incorporating additional data. In this paper, we propose a novel statistical approach, GPA (Genetic analysis incorporating Pleiotropy and Annotation), to increase statistical power to identify risk variants through joint analysis of multiple GWAS datasets and annotation information, because: (1) accumulating evidence suggests that different complex diseases share common risk bases, i.e., pleiotropy; and (2) functionally annotated variants have been consistently demonstrated to be enriched among GWAS hits. GPA can integrate multiple GWAS datasets and functional annotations to seek association signals, and it can also perform hypothesis testing to test the presence of pleiotropy and enrichment of functional annotation. Statistical inference of the model parameters and SNP ranking is achieved through an EM algorithm that can handle genome-wide markers efficiently. When we applied GPA to jointly analyze five psychiatric disorders with annotation information, not only did GPA identify many weak signals missed by the traditional single-phenotype analysis, but it also revealed relationships in the genetic architecture of these disorders. Using our hypothesis testing framework, statistically significant pleiotropic effects were detected among these psychiatric disorders, and the markers annotated in the central nervous system genes and eQTLs from the Genotype-Tissue Expression (GTEx) database were significantly enriched. We also applied GPA to a bladder cancer GWAS dataset with the ENCODE DNase-seq data from 125 cell lines. GPA was able to detect cell lines that are
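
    The core of such an approach can be sketched as a two-group mixture fitted by EM; the sketch below assumes, as in the GPA framework, a uniform density for null p-values and a Beta(alpha, 1) density for associated ones (single study only; the pleiotropy and annotation layers are omitted):

        # Two-group mixture for GWAS p-values, fitted by EM:
        # p ~ pi0 * Uniform(0,1) + (1 - pi0) * Beta(alpha, 1), alpha < 1.
        import numpy as np

        def em_two_group(p, n_iter=200):
            pi0, alpha = 0.9, 0.5                        # initial guesses
            for _ in range(n_iter):
                f1 = alpha * p ** (alpha - 1.0)          # Beta(alpha, 1) density
                z = (1 - pi0) * f1 / ((1 - pi0) * f1 + pi0)  # E-step: P(non-null)
                pi0 = 1.0 - z.mean()                     # M-step updates
                alpha = -z.sum() / (z * np.log(p)).sum()
            return pi0, alpha, z                         # z ranks the SNPs

        rng = np.random.default_rng(0)
        p = np.concatenate([rng.uniform(size=9000),            # null SNPs
                            rng.beta(0.3, 1.0, size=1000)])    # associated SNPs
        pi0_hat, alpha_hat, _ = em_two_group(p)
        print(f"pi0 = {pi0_hat:.3f}, alpha = {alpha_hat:.3f}")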

  19. Second Quantization Approach to Stochastic Epidemic Models

    CERN Document Server

    Mondaini, Leonardo

    2015-01-01

    We show how the standard field theoretical language based on creation and annihilation operators may be used for a straightforward derivation of closed master equations describing the population dynamics of multivariate stochastic epidemic models. In order to do that, we introduce an SIR-inspired stochastic model for hepatitis C virus epidemic, from which we obtain the time evolution of the mean number of susceptible, infected, recovered and chronically infected individuals in a population whose total size is allowed to change.
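
    Schematically (a generic sketch of the Doi-type formalism the paper builds on, written here for the plain SIR reactions; the paper's hepatitis C model adds a chronically infected class), the state vector and master equation read

        $$ |\psi(t)\rangle = \sum_{\mathbf{n}} P(\mathbf{n},t)\,|\mathbf{n}\rangle, \qquad \frac{d}{dt}|\psi(t)\rangle = \hat{L}\,|\psi(t)\rangle, $$

    where, for infection $S+I \xrightarrow{\beta} 2I$ and recovery $I \xrightarrow{\gamma} R$,

        $$ \hat{L} = \beta\left[(\hat{a}_I^{\dagger})^2 - \hat{a}_S^{\dagger}\hat{a}_I^{\dagger}\right]\hat{a}_S\,\hat{a}_I + \gamma\left(\hat{a}_R^{\dagger} - \hat{a}_I^{\dagger}\right)\hat{a}_I, $$

    with $\hat{a}^{\dagger}$, $\hat{a}$ the creation and annihilation operators of the respective compartments; mean numbers such as $\langle n_I(t)\rangle$ then follow from the resulting closed master equations.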

  20. A model comparison approach shows stronger support for economic models of fertility decline.

    Science.gov (United States)

    Shenk, Mary K; Towner, Mary C; Kress, Howard C; Alam, Nurul

    2013-05-14

    The demographic transition is an ongoing global phenomenon in which high fertility and mortality rates are replaced by low fertility and mortality. Despite intense interest in the causes of the transition, especially with respect to decreasing fertility rates, the underlying mechanisms motivating it are still subject to much debate. The literature is crowded with competing theories, including causal models that emphasize (i) mortality and extrinsic risk, (ii) the economic costs and benefits of investing in self and children, and (iii) the cultural transmission of low-fertility social norms. Distinguishing between models, however, requires more comprehensive, better-controlled studies than have been published to date. We use detailed demographic data from recent fieldwork to determine which models produce the most robust explanation of the rapid, recent demographic transition in rural Bangladesh. To rigorously compare models, we use an evidence-based statistical approach using model selection techniques derived from likelihood theory. This approach allows us to quantify the relative evidence the data give to alternative models, even when model predictions are not mutually exclusive. Results indicate that fertility, measured as either total fertility or surviving children, is best explained by models emphasizing economic factors and related motivations for parental investment. Our results also suggest important synergies between models, implicating multiple causal pathways in the rapidity and degree of recent demographic transitions.
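
    A minimal sketch of the kind of likelihood-based model comparison described above, using Akaike weights (the fitted log-likelihoods and parameter counts below are made up for illustration):

        # Compare non-exclusive causal models by AIC and Akaike weights.
        import numpy as np

        def aic(loglik, k):
            return 2 * k - 2 * loglik

        # Hypothetical fits: (name, maximised log-likelihood, n. of parameters).
        models = [("mortality", -1204.3, 4), ("economic", -1187.9, 6),
                  ("cultural", -1201.5, 5)]
        scores = {name: aic(ll, k) for name, ll, k in models}
        best = min(scores.values())
        w = {m: np.exp(-0.5 * (s - best)) for m, s in scores.items()}
        total = sum(w.values())
        for m, v in w.items():    # relative evidence for each model
            print(f"{m}: Akaike weight = {v / total:.3f}")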

  1. "Dispersion modeling approaches for near road | Science ...

    Science.gov (United States)

    Roadway design and roadside barriers can have significant effects on the dispersion of traffic-generated pollutants, especially in the near-road environment. Dispersion models that can accurately simulate these effects are needed to fully assess these impacts for a variety of applications. For example, such models can be useful for evaluating the mitigation potential of roadside barriers in reducing near-road exposures and their associated adverse health effects. Two databases, a tracer field study and a wind tunnel study, provide measurements used in the development and/or validation of algorithms to simulate dispersion in the presence of noise barriers. The tracer field study was performed in Idaho Falls, ID, USA with a 6-m noise barrier and a finite line source in a variety of atmospheric conditions. The second study was performed in the meteorological wind tunnel at the US EPA and simulated line sources at different distances from a model noise barrier to capture the effect on emissions from individual lanes of traffic. In both cases, velocity and concentration measurements characterized the effect of the barrier on dispersion. This paper presents comparisons with the two datasets of the barrier algorithms implemented in two different dispersion models: US EPA’s R-LINE (a research dispersion modelling tool under development by the US EPA’s Office of Research and Development) and CERC’s ADMS model (ADMS-Urban). In R-LINE the physical features reveal
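
    For reference, steady-state line-source models of this kind build on the classical Gaussian plume solution for a point source at height $h$ with ground reflection (this is the textbook kernel, not the barrier algorithm itself),

        $$ C(x,y,z) = \frac{Q}{2\pi u\,\sigma_y\sigma_z}\exp\!\left(-\frac{y^2}{2\sigma_y^2}\right)\left[\exp\!\left(-\frac{(z-h)^2}{2\sigma_z^2}\right)+\exp\!\left(-\frac{(z+h)^2}{2\sigma_z^2}\right)\right], $$

    where $Q$ is the emission rate, $u$ the wind speed and $\sigma_y(x)$, $\sigma_z(x)$ the dispersion coefficients; a road is represented by integrating such kernels along the line source, and barrier algorithms modify the effective source height and dispersion coefficients.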

  2. Modeling healthcare authorization and claim submissions using the openEHR dual-model approach

    Directory of Open Access Journals (Sweden)

    Freire Sergio M

    2011-10-01

    Full Text Available Abstract Background The TISS standard is a set of mandatory forms and electronic messages for healthcare authorization and claim submissions among healthcare plans and providers in Brazil. It is not based on formal models as the new generation of health informatics standards suggests. The objective of this paper is to model the TISS in terms of the openEHR archetype-based approach and integrate it into a patient-centered EHR architecture. Methods Three approaches were adopted to model TISS. In the first approach, a set of archetypes was designed using ENTRY subclasses. In the second one, a set of archetypes was designed using exclusively ADMIN_ENTRY and CLUSTERs as their root classes. In the third approach, the openEHR ADMIN_ENTRY is extended with classes designed for authorization and claim submissions, and an ISM_TRANSITION attribute is added to the COMPOSITION class. Another set of archetypes was designed based on this model. For all three approaches, templates were designed to represent the TISS forms. Results The archetypes based on the openEHR RM (Reference Model) can represent all TISS data structures. The extended model adds subclasses and an attribute to the COMPOSITION class to represent information on authorization and claim submissions. The archetypes based on all three approaches have similar structures, although rooted in different classes. The extended openEHR RM model is more semantically aligned with the concepts involved in a claim submission, but may disrupt interoperability with other systems and the current tools must be adapted to deal with it. Conclusions Modeling the TISS standard by means of the openEHR approach makes it aligned with ISO recommendations and provides a solid foundation on which the TISS can evolve. Although there are few administrative archetypes available, the openEHR RM is expressive enough to represent the TISS standard. This paper focuses on the TISS but its results may be extended to other billing

  3. Lightweight approach to model traceability in a CASE tool

    Science.gov (United States)

    Vileiniskis, Tomas; Skersys, Tomas; Pavalkis, Saulius; Butleris, Rimantas; Butkiene, Rita

    2017-07-01

    A term "model-driven" is not at all a new buzzword within the ranks of system development community. Nevertheless, the ever increasing complexity of model-driven approaches keeps fueling all kinds of discussions around this paradigm and pushes researchers forward to research and develop new and more effective ways to system development. With the increasing complexity, model traceability, and model management as a whole, becomes indispensable activities of model-driven system development process. The main goal of this paper is to present a conceptual design and implementation of a practical lightweight approach to model traceability in a CASE tool.

  4. Approaching models of nursing from a postmodernist perspective.

    Science.gov (United States)

    Lister, P

    1991-02-01

    This paper explores some questions about the use of models of nursing. These questions make various assumptions about the nature of models of nursing, in general and in particular. Underlying these assumptions are various philosophical positions which are explored through an introduction to postmodernist approaches in philosophical criticism. To illustrate these approaches, a critique of the Roper et al. model is developed, and more general attitudes towards models of nursing are examined. It is suggested that postmodernism offers a challenge to many of the assumptions implicit in models of nursing, and that a greater awareness of these assumptions should lead to nursing care being better informed where such models are in use.

  5. DRUG ADDICTION SOCIAL COST IN RUSSIA REGIONS: METHODICAL APPROACH AND ESTIMATION RESULTS

    Directory of Open Access Journals (Sweden)

    A.V. Kalina

    2007-06-01

    Full Text Available A methodical approach to estimating the social cost of drug addiction in Russian regions is suggested in the article. It is based on cost estimation of the socio-economic consequences of the spread of drug addiction in a territory. The main approaches to estimating the latency characteristics of the drug addiction situation are shown. The results of estimating the social cost of drug addiction and its separate components are given for federal regions and subjects of the Russian Federation for the period 2001-2005.

  6. A Bayesian hierarchical modeling approach for analyzing observational data from marine ecological studies.

    Science.gov (United States)

    Qian, Song S; Craig, J Kevin; Baustian, Melissa M; Rabalais, Nancy N

    2009-12-01

    We introduce the Bayesian hierarchical modeling approach for analyzing observational data from marine ecological studies using a data set intended for inference on the effects of bottom-water hypoxia on macrobenthic communities in the northern Gulf of Mexico off the coast of Louisiana, USA. We illustrate (1) the process of developing a model, (2) the use of the hierarchical model results for statistical inference through innovative graphical presentation, and (3) a comparison to the conventional linear modeling approach (ANOVA). Our results indicate that the Bayesian hierarchical approach is better able to detect a "treatment" effect than classical ANOVA while avoiding several arbitrary assumptions necessary for linear models, and is also more easily interpreted when presented graphically. These results suggest that the hierarchical modeling approach is a better alternative than conventional linear models and should be considered for the analysis of observational field data from marine systems.
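
    In generic form (a sketch of the model class, not the authors' exact specification), a two-level normal hierarchy for observations $y_{ij}$ in groups $j$ reads

        $$ y_{ij} \sim N(\theta_j, \sigma^2), \qquad \theta_j \sim N(\mu, \tau^2), $$

    with hyperpriors on $\mu$, $\sigma$ and $\tau$; the partial pooling of the group means $\theta_j$ toward $\mu$ is what distinguishes the hierarchical fit, and its graphical posterior summaries, from a classical fixed-effects ANOVA.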

  7. Manufacturing Excellence Approach to Business Performance Model

    Directory of Open Access Journals (Sweden)

    Jesus Cruz Alvarez

    2015-03-01

    Full Text Available Six Sigma, lean manufacturing, total quality management, quality control, and quality function deployment are the fundamental set of tools to enhance productivity in organizations. Some research outlines the benefit of each tool in the particular context of a firm's productivity, but not in the broader context of a firm's competitiveness, which is achieved through business performance. The aim of this theoretical research paper is to contribute to this end and propose a manufacturing excellence approach that links productivity tools to the broader context of business performance.

  8. Effective Model Approach to the Dense State of QCD Matter

    CERN Document Server

    Fukushima, Kenji

    2010-01-01

    The first-principle approach to the dense state of QCD matter, i.e. the lattice-QCD simulation at finite baryon density, is not under theoretical control for the moment. The effective model study based on QCD symmetries is a practical alternative. However the model parameters that are fixed by hadronic properties in the vacuum may have unknown dependence on the baryon chemical potential. We propose a new prescription to constrain the effective model parameters by the matching condition with the thermal Statistical Model. In the transitional region where thermal quantities blow up in the Statistical Model, deconfined quarks and gluons should smoothly take over the relevant degrees of freedom from hadrons and resonances. We use the Polyakov-loop coupled Nambu--Jona-Lasinio (PNJL) model as an effective description in the quark side and show how the matching condition is satisfied by a simple ansatz on the Polyakov loop potential. Our results favor a phase diagram with the chiral phase transition located at sligh...

  9. Meteorological Uncertainty of atmospheric Dispersion model results (MUD)

    DEFF Research Database (Denmark)

    Havskov Sørensen, Jens; Amstrup, Bjarne; Feddersen, Henrik

    The MUD project addresses assessment of uncertainties of atmospheric dispersion model predictions, as well as optimum presentation to decision makers. Previously, it has not been possible to estimate such uncertainties quantitatively, but merely to calculate the 'most likely' dispersion scenario...... of the meteorological model results. These uncertainties stem from e.g. limits in meteorological observations used to initialise meteorological forecast series. By perturbing the initial state of an NWP model run in agreement with the available observational data, an ensemble of meteorological forecasts is produced.... However, recent developments in numerical weather prediction (NWP) include probabilistic forecasting techniques, which can be utilised also for atmospheric dispersion models. The ensemble statistical methods developed and applied to NWP models aim at describing the inherent uncertainties...

  10. Meteorological Uncertainty of atmospheric Dispersion model results (MUD)

    DEFF Research Database (Denmark)

    Havskov Sørensen, Jens; Amstrup, Bjarne; Feddersen, Henrik

    The MUD project addresses assessment of uncertainties of atmospheric dispersion model predictions, as well as possibilities for optimum presentation to decision makers. Previously, it has not been possible to estimate such uncertainties quantitatively, but merely to calculate the ‘most likely...... uncertainties of the meteorological model results. These uncertainties stem from e.g. limits in meteorological observations used to initialise meteorological forecast series. By perturbing e.g. the initial state of an NWP model run in agreement with the available observational data, an ensemble......’ dispersion scenario. However, recent developments in numerical weather prediction (NWP) include probabilistic forecasting techniques, which can be utilised also for long-range atmospheric dispersion models. The ensemble statistical methods developed and applied to NWP models aim at describing the inherent...
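
    A toy illustration of the ensemble idea (the dynamics and perturbation size below are invented stand-ins for an NWP/dispersion model and its observation uncertainty):

        # Perturb the initial state within observational uncertainty, run the
        # model once per member, and report the ensemble spread.
        import numpy as np

        rng = np.random.default_rng(1)

        def model(x0, steps=24):
            """Stand-in 'forecast model': a toy nonlinear scalar map."""
            x = x0
            for _ in range(steps):
                x = x + 0.1 * np.sin(3.0 * x)
            return x

        x_analysis = 1.0                           # best initial estimate
        members = x_analysis + 0.05 * rng.standard_normal(50)
        forecast = np.array([model(m) for m in members])
        print(f"forecast: {forecast.mean():.3f} +/- {forecast.std():.3f}")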

  11. Mathematical Existence Results for the Doi-Edwards Polymer Model

    Science.gov (United States)

    Chupin, Laurent

    2017-01-01

    In this paper, we present some mathematical results on the Doi-Edwards model describing the dynamics of flexible polymers in melts and concentrated solutions. This model, developed in the late 1970s, has been used and extensively tested in the modeling and simulation of polymer flows. From a mathematical point of view, the Doi-Edwards model consists of a strong coupling between the Navier-Stokes equations and a highly nonlinear constitutive law. The aim of this article is to provide a rigorous proof of the well-posedness of the Doi-Edwards model, namely that it has a unique regular solution. We also prove, which is generally much more difficult for flows of viscoelastic type, that the solution is global in time in the two-dimensional case, without any restriction on the smallness of the data.

  12. New Approaches in Reusable Booster System Life Cycle Cost Modeling

    Science.gov (United States)

    Zapata, Edgar

    2013-01-01

    This paper presents the results of a 2012 life cycle cost (LCC) study of hybrid Reusable Booster Systems (RBS) conducted by NASA Kennedy Space Center (KSC) and the Air Force Research Laboratory (AFRL). The work included the creation of a new cost estimating model and an LCC analysis, building on past work where applicable, but emphasizing the integration of new approaches in life cycle cost estimation. Specifically, the inclusion of industry processes/practices and indirect costs was a new and significant part of the analysis. The focus of LCC estimation has traditionally been from the perspective of technology, design characteristics, and related factors such as reliability. Technology has informed the cost related support to decision makers interested in risk and budget insight. This traditional emphasis on technology occurs even though it is well established that complex aerospace systems costs are mostly about indirect costs, with likely only partial influence in these indirect costs being due to the more visible technology products. Organizational considerations, processes/practices, and indirect costs are traditionally derived ("wrapped") only by relationship to tangible product characteristics. This traditional approach works well as long as it is understood that no significant changes, and by relation no significant improvements, are being pursued in the area of either the government acquisition or industry's indirect costs. In this sense then, most launch systems cost models ignore most costs. The alternative was implemented in this LCC study, whereby the approach considered technology and process/practices in balance, with as much detail for one as the other. This RBS LCC study has avoided point-designs, for now, instead emphasizing exploring the trade-space of potential technology advances joined with potential process/practice advances. Given the range of decisions, and all their combinations, it was necessary to create a model of the original model

  13. New Approaches in Reusable Booster System Life Cycle Cost Modeling

    Science.gov (United States)

    Zapata, Edgar

    2013-01-01

    This paper presents the results of a 2012 life cycle cost (LCC) study of hybrid Reusable Booster Systems (RBS) conducted by NASA Kennedy Space Center (KSC) and the Air Force Research Laboratory (AFRL). The work included the creation of a new cost estimating model and an LCC analysis, building on past work where applicable, but emphasizing the integration of new approaches in life cycle cost estimation. Specifically, the inclusion of industry processes/practices and indirect costs was a new and significant part of the analysis. The focus of LCC estimation has traditionally been from the perspective of technology, design characteristics, and related factors such as reliability. Technology has informed the cost related support to decision makers interested in risk and budget insight. This traditional emphasis on technology occurs even though it is well established that complex aerospace systems costs are mostly about indirect costs, with likely only partial influence in these indirect costs being due to the more visible technology products. Organizational considerations, processes/practices, and indirect costs are traditionally derived ("wrapped") only by relationship to tangible product characteristics. This traditional approach works well as long as it is understood that no significant changes, and by relation no significant improvements, are being pursued in the area of either the government acquisition or industry's indirect costs. In this sense then, most launch systems cost models ignore most costs. The alternative was implemented in this LCC study, whereby the approach considered technology and process/practices in balance, with as much detail for one as the other. This RBS LCC study has avoided point-designs, for now, instead emphasizing exploring the trade-space of potential technology advances joined with potential process/practice advances. Given the range of decisions, and all their combinations, it was necessary to create a model of the original model

  14. MDA based-approach for UML Models Complete Comparison

    CERN Document Server

    Chaouni, Samia Benabdellah; Mouline, Salma

    2011-01-01

    If a modeling task is distributed, it will frequently be necessary to integrate models developed by different team members. Problems occur in the model integration step and, particularly, in its comparison phase. This issue has been discussed in several domains and for various kinds of models. However, previous approaches have not correctly handled the semantic comparison. In the current paper, we provide an MDA-based approach for model comparison which aims at comparing UML models. We develop a hybrid approach which takes into account syntactic, semantic and structural comparison aspects. For this purpose, we use the domain ontology as well as other resources such as dictionaries. We propose a decision support system which permits the user to validate (or not) the correspondences extracted in the comparison phase. For implementation, we propose an extension of the generic correspondence metamodel AMW in order to transform UML models to the correspondence model.

  15. The CONRAD approach to biokinetic modeling of DTPA decorporation therapy.

    Science.gov (United States)

    Breustedt, Bastian; Blanchardon, Eric; Bérard, Philippe; Fritsch, Paul; Giussani, Augusto; Lopez, Maria Antonia; Luciani, Andrea; Nosske, Dietmar; Piechowski, Jean; Schimmelpfeng, Jutta; Sérandour, Anne-Laure

    2010-10-01

    Diethylene Triamine Pentaacetic Acid (DTPA) is used for decorporation of plutonium because it is known to be able to enhance its urinary excretion for several days after treatment by forming stable Pu-DTPA complexes. The decorporation prevents accumulation in organs and results in a dosimetric benefit, which is difficult to quantify from bioassay data using existing models. The development of a biokinetic model describing the mechanisms of actinide decorporation by administration of DTPA was initiated as a task in the European COordinated Network on RAdiation Dosimetry (CONRAD). The systemic biokinetic model from Leggett et al. and the biokinetic model for DTPA compounds of International Commission on Radiological Protection Publication 53 were the starting points. A new model for biokinetics of administered DTPA based on physiological interpretation of 14C-labeled DTPA studies from literature was proposed by the group. Plutonium and DTPA biokinetics were modeled separately. The systems were connected by means of a second order kinetics process describing the chelation process of plutonium atoms and DTPA molecules to Pu-DTPA complexes. It was assumed that chelation only occurs in the blood and in systemic compartment ST0 (representing rapid turnover soft tissues), and that Pu-DTPA complexes and administered forms of DTPA share the same biokinetic behavior. First applications of the CONRAD approach showed that the enhancement of plutonium urinary excretion after administration of DTPA was strongly influenced by the chelation rate constant. Setting it to a high value resulted in a good fit to the observed data. However, the model was not yet satisfactory since the effects of repeated DTPA administration in a short time period cannot be predicted in a realistic way. In order to introduce more physiological knowledge into the model several questions still have to be answered. Further detailed studies of human contamination cases and experimental data will be needed in
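
    In its simplest form, the second-order chelation step described above can be written as

        $$ \frac{d[\mathrm{Pu\,DTPA}]}{dt} = k\,[\mathrm{Pu}]\,[\mathrm{DTPA}], $$

    applied in blood and in compartment ST0, with plutonium and DTPA otherwise following their own first-order compartmental transfer equations; the chelation rate constant $k$ is the parameter that, per the study, dominates the predicted enhancement of plutonium urinary excretion.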

  16. Modeling Results for the ITER Cryogenic Fore Pump

    Science.gov (United States)

    Zhang, Dongsheng

    The work presented here is the analysis and modeling of the ITER Cryogenic Fore Pump (CFP), also called the Cryogenic Viscous Compressor (CVC). Unlike common cryopumps, which are usually used to create and maintain vacuum, the cryogenic fore pump is designed for ITER to collect and compress hydrogen isotopes during the regeneration process of the torus cryopumps. Moreover, the ITER-CFP works in the viscous flow regime. As a result, both adsorption boundary conditions and transport phenomena contribute unique features to the pump performance. In this report, the physical mechanisms of cryopumping are studied, especially the diffusion-adsorption process; these mechanisms are coupled with the standard equations of species, momentum and energy balance, as well as the equation of state. Numerical models are developed, which include highly coupled non-linear conservation equations of species, momentum, and energy and the equation of state. Thermal and kinetic properties are treated as functions of temperature, pressure, and composition of the gas mixture. To solve such a set of equations, a novel numerical technique, identified as the Group-Member numerical technique, is proposed. This document presents three numerical models: a transient model, a steady state model, and a hemisphere (or molecular flow) model. The first two models are developed based on analysis of the raw experimental data, while the third model is developed as a preliminary study. The modeling results are compared with available experimental data for verification. The models can be used for cryopump design, and can also be applied to problems such as loss of vacuum in a cryomodule or cryogenic desublimation. The scientific and engineering investigation being done here builds connections between Mechanical Engineering and other disciplines, such as Chemical Engineering, Physics, and Chemistry.

  17. An approach to model validation and model-based prediction -- polyurethane foam case study.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistical literatures. In this report we have provided a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the "philosophy" behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical

  18. Comparison of NASCAP modelling results with lumped circuit analysis

    Science.gov (United States)

    Stang, D. B.; Purvis, C. K.

    1980-01-01

    Engineering design tools that can be used to predict the development of absolute and differential potentials by realistic spacecraft under geomagnetic substorm conditions are described. Two types of analyses are in use: (1) the NASCAP code, which computes quasistatic charging of geometrically complex objects with multiple surface materials in three dimensions; (2) lumped-element equivalent circuit models that are used for analyses of particular spacecraft. The equivalent circuit models require very little computation time; however, they cannot account for effects, such as the formation of potential barriers, that are inherently multidimensional. Steady-state potentials of structure and insulation are compared with those resulting from the equivalent circuit model.
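
    A minimal lumped-element sketch of the kind of analysis compared here (one surface capacitance driven by environment currents; all parameter values are hypothetical, not NASCAP inputs or published circuit parameters):

        # One surface capacitance charged by environment currents; explicit
        # Euler integration to the equilibrium (floating) potential.
        import numpy as np

        C = 1e-9        # surface-to-structure capacitance [F]
        I_e0 = 1e-6     # ambient electron current at 0 V [A]
        I_i = 2e-7      # ion plus photoemission return current [A]
        kTe = 1.0e4     # electron temperature [V-equivalent]

        def net_current(V):
            # The repelled species (electrons) is attenuated exponentially.
            return I_i - I_e0 * np.exp(V / kTe)

        V, dt = 0.0, 5e-3
        for _ in range(100000):
            V += dt * net_current(V) / C
        print(f"equilibrium surface potential ~ {V:.0f} V")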

  19. The East model: recent results and new progresses

    CERN Document Server

    Faggionato, Alessandra; Roberto, Cyril; Toninelli, Cristina

    2012-01-01

    The East model is a particular one dimensional interacting particle system in which certain transitions are forbidden according to some constraints depending on the configuration of the system. As such it has received particular attention in the physics literature as a special case of a more general class of systems referred to as kinetically constrained models, which play a key role in explaining some features of the dynamics of glasses. In this paper we give an extensive overview of recent rigorous results concerning the equilibrium and non-equilibrium dynamics of the East model together with some new improvements.

  20. Constraining hybrid inflation models with WMAP three-year results

    CERN Document Server

    Cardoso, A

    2006-01-01

    We reconsider the original model of quadratic hybrid inflation in light of the WMAP three-year results and study the possibility of obtaining a spectral index of primordial density perturbations, $n_s$, smaller than one from this model. The original hybrid inflation model naturally predicts $n_s \geq 1$ in the false-vacuum-dominated regime, but it is also possible to have $n_s < 1$ when the quadratic term dominates. We therefore investigate whether there is also an intermediate regime compatible with the latest constraints, where the scalar field value during the last 50 e-folds of inflation is less than the Planck scale.
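
    For orientation, the potential of the original (Linde) hybrid inflation model referred to above is

        $$ V(\phi,\sigma) = \frac{1}{4\lambda}\left(M^2 - \lambda\sigma^2\right)^2 + \frac{1}{2}m^2\phi^2 + \frac{1}{2}g^2\phi^2\sigma^2, $$

    with inflaton $\phi$ and waterfall field $\sigma$: when the false-vacuum term $M^4/4\lambda$ dominates one obtains $n_s \geq 1$, while domination of the quadratic term $m^2\phi^2/2$ allows $n_s < 1$.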

  1. Mixture modeling approach to flow cytometry data.

    Science.gov (United States)

    Boedigheimer, Michael J; Ferbas, John

    2008-05-01

    Flow Cytometry has become a mainstay technique for measuring fluorescent and physical attributes of single cells in a suspended mixture. These data are reduced during analysis using a manual or semiautomated process of gating. Despite the need to gate data for traditional analyses, it is well recognized that analyst-to-analyst variability can impact the dataset. Moreover, cells of interest can be inadvertently excluded from the gate, and relationships between collected variables may go unappreciated because they were not included in the original analysis plan. A multivariate non-gating technique was developed and implemented that accomplished the same goal as traditional gating while eliminating many weaknesses. The procedure was validated against traditional gating for analysis of circulating B cells in normal donors (n = 20) and persons with Systemic Lupus Erythematosus (n = 42). The method recapitulated relationships in the dataset while providing for an automated and objective assessment of the data. Flow cytometry analyses are amenable to automated analytical techniques that are not predicated on discrete operator-generated gates. Such alternative approaches can remove subjectivity in data analysis, improve efficiency and may ultimately enable construction of large bioinformatics data systems for more sophisticated approaches to hypothesis testing.
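
    A minimal sketch of this kind of non-gating analysis on synthetic two-channel data (not the paper's algorithm or data), using a standard Gaussian mixture fit:

        # Fit a multivariate Gaussian mixture to two-channel "cytometry" data
        # and read off soft population assignments instead of manual gates.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        cells = np.vstack([rng.normal([2.0, 1.0], 0.3, size=(500, 2)),
                           rng.normal([4.0, 3.5], 0.4, size=(200, 2))])

        gmm = GaussianMixture(n_components=2, covariance_type="full",
                              random_state=0).fit(cells)
        resp = gmm.predict_proba(cells)    # per-cell membership probabilities
        print("population fractions:", gmm.weights_.round(3))
        print("first cell membership:", resp[0].round(3))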

  2. Recent MEG Results and Predictive SO(10) Models

    CERN Document Server

    Fukuyama, Takeshi

    2011-01-01

    Recent MEG results of a search for the lepton-flavor-violating (LFV) muon decay, $\mu \to e \gamma$, show 3 events as the best value for the number of signals in the maximum likelihood fit. Although this result is still far from evidence or discovery from a statistical point of view, it might be a sign of new physics beyond the Standard Model. As is well known, supersymmetric (SUSY) models can generate a $\mu \to e \gamma$ decay rate within the search reach of the MEG experiment. A certain class of SUSY grand unified theory (GUT) models, such as the minimal SUSY SO(10) model (we call this class of models "predictive SO(10) models"), can unambiguously determine the fermion Yukawa coupling matrices, in particular the neutrino Dirac Yukawa matrix. Based on the universal boundary conditions for soft SUSY breaking parameters at the GUT scale, we calculate the rate of the $\mu \to e \gamma$ process by using the completely determined Dirac Yukawa matrix in two examples of predictive SO(10) models. If we ...

  3. Evaluation of approaches focused on modelling of organic carbon stocks using the RothC model

    Science.gov (United States)

    Koco, Štefan; Skalský, Rastislav; Makovníková, Jarmila; Tarasovičová, Zuzana; Barančíková, Gabriela

    2014-05-01

    The aim of current efforts in the European area is the protection of soil organic matter, which is included in all relevant documents related to the protection of soil. Modelling organic carbon stocks under anticipated climate change and land management can significantly help in short- and long-term forecasting of the state of soil organic matter. The RothC model can be applied on time scales of several years to centuries and has been tested in long-term experiments within a large range of soil types and climatic conditions in Europe. For the initialization of the RothC model, knowledge of the carbon pool sizes is essential. Pool size characterization can be obtained from equilibrium model runs, but this approach is time consuming and tedious, especially for larger scale simulations. Due to this complexity we searched for new ways to simplify and accelerate this process. The paper presents a comparison of two approaches to SOC stock modelling in the same area. The modelling has been carried out on the basis of individual land use, management and soil data inputs for each simulation unit. We modelled 1617 simulation units on a 1x1 km grid on the territory of the agroclimatic region Žitný ostrov in the southwest of Slovakia. The first approach represents the creation of groups of simulation units based on the evaluation of results for simulation units with similar input values. The groups were created after testing and validating the modelling results for individual simulation units against the results of modelling the average values of inputs for the whole group. Tests of the equilibrium model for intervals in the range of 5 t.ha-1 from the initial SOC stock showed minimal differences in results compared with the result for the average value of the whole interval. Management inputs of plant residues and farmyard manure for modelling of carbon turnover were also the same for several simulation units. Combining these groups (intervals of initial
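
    A minimal sketch of RothC-style first-order pool turnover (the pool names follow RothC, but the rates, the input split and the climate modifier below are illustrative, and the routing of decay products to the BIO and HUM pools is omitted):

        # First-order decay of carbon pools with a climate/cover rate modifier.
        # Decay products are simply lost here; full RothC re-routes part of
        # them to the BIO and HUM pools.
        import numpy as np

        k = {"DPM": 10.0, "RPM": 0.3, "BIO": 0.66, "HUM": 0.02}    # 1/yr
        stock = {"DPM": 0.5, "RPM": 6.0, "BIO": 1.5, "HUM": 30.0}  # t C/ha
        iom = 2.0                            # inert organic matter, no decay

        def step(stock, plant_input=3.0, modifier=0.8, dt=1.0 / 12):
            new = dict(stock)
            for pool, rate in k.items():
                new[pool] -= stock[pool] * (1 - np.exp(-rate * modifier * dt))
            new["DPM"] += 0.6 * plant_input * dt   # illustrative DPM/RPM split
            new["RPM"] += 0.4 * plant_input * dt
            return new

        for _ in range(12 * 50):             # 50 years in monthly steps
            stock = step(stock)
        print("SOC stock (t C/ha):", round(sum(stock.values()) + iom, 1))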

  4. Predicting future glacial lakes in Austria using different modelling approaches

    Science.gov (United States)

    Otto, Jan-Christoph; Helfricht, Kay; Prasicek, Günther; Buckel, Johannes; Keuschnig, Markus

    2017-04-01

    Glacier retreat is one of the most apparent consequences of temperature rise in the 20th and 21st centuries in the European Alps. In Austria, more than 240 new lakes have formed in glacier forefields since the Little Ice Age. A similar signal is reported from many mountain areas worldwide. Glacial lakes can have important environmental and socio-economic impacts on high mountain systems, including water resource management, sediment delivery, natural hazards, energy production and tourism. Their development significantly modifies the landscape configuration and visual appearance of high mountain areas. Knowledge of the location, number and extent of these future lakes can be used to assess potential impacts on high mountain geo-ecosystems and upland-lowland interactions. Information on new lakes is critical to appraise emerging threats and potentials for society. The recent development of regional ice thickness models and their combination with high resolution glacier surface data allows predicting the topography below current glaciers by subtracting ice thickness from the glacier surface. Analyzing these modelled glacier bed surfaces reveals overdeepenings that represent potential locations for future lakes. In order to predict the location of future glacial lakes below recent glaciers in the Austrian Alps we apply different ice thickness models using high resolution terrain data and glacier outlines. The results are compared and validated with ice thickness data from geophysical surveys. Additionally, we run the models on three different glacier extents provided by the Austrian Glacier Inventories from 1969, 1998 and 2006. Results of this historical glacier extent modelling are compared to existing glacier lakes and discussed focusing on geomorphological impacts on lake evolution. We discuss model performance and observed differences in the results in order to assess the approach for a realistic prediction of future lake locations. The presentation delivers
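
    The core geometric operation is simple enough to sketch (toy arrays below; a real analysis works on DEMs and glacier outlines and uses proper sink-filling rather than a local-minimum filter):

        # Subtract modelled ice thickness from the surface DEM and flag local
        # depressions in the modelled bed as candidate future lake sites.
        import numpy as np
        from scipy.ndimage import minimum_filter

        surface = np.random.default_rng(2).random((100, 100)) * 50 + 2500
        thickness = np.zeros_like(surface)
        thickness[30:70, 30:70] = 80.0     # hypothetical modelled ice body

        bed = surface - thickness          # modelled subglacial topography
        sinks = (bed == minimum_filter(bed, size=3)) & (thickness > 0)
        print(f"{sinks.sum()} candidate overdeepening cells")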

  5. BUSINESS MODEL IN ELECTRICITY INDUSTRY USING BUSINESS MODEL CANVAS APPROACH; THE CASE OF PT. XYZ

    National Research Council Canada - National Science Library

    Wicaksono, Achmad Arief; Syarief, Rizal; Suparno, Ono

    2017-01-01

    .... This study aims to identify company's business model using Business Model Canvas approach, formulate business development strategy alternatives, and determine the prioritized business development...

  6. Summary of FY15 results of benchmark modeling activities

    Energy Technology Data Exchange (ETDEWEB)

    Arguello, J. Guadalupe [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-08-01

    Sandia is participating in the third phase of a U.S.-German "Joint Project" entitled "Comparison of current constitutive models and simulation procedures on the basis of model calculations of the thermo-mechanical behavior and healing of rock salt." The first goal of the project is to check the ability of numerical modeling tools to correctly describe the relevant deformation phenomena in rock salt under various influences. Achieving this goal will lead to increased confidence in the results of numerical simulations related to the secure storage of radioactive wastes in rock salt, thereby enhancing the acceptance of the results. These results may ultimately be used to make various assertions regarding both the stability analysis of an underground repository in salt during the operating phase, and the long-term integrity of the geological barrier against the release of harmful substances into the biosphere in the post-operating phase.

  7. A model-based approach to selection of tag SNPs

    Directory of Open Access Journals (Sweden)

    Sun Fengzhu

    2006-06-01

    Full Text Available Abstract Background Single Nucleotide Polymorphisms (SNPs) are the most common type of polymorphisms found in the human genome. Effective genetic association studies require the identification of sets of tag SNPs that capture as much haplotype information as possible. Tag SNP selection is analogous to the problem of data compression in information theory. According to Shannon's framework, the optimal tag set maximizes the entropy of the tag SNPs subject to constraints on the number of SNPs. This approach requires an appropriate probabilistic model. Compared to simple measures of Linkage Disequilibrium (LD), a good model of haplotype sequences can more accurately account for LD structure. It also provides machinery for the prediction of tagged SNPs and thereby for assessing the performance of tag sets through their ability to predict larger SNP sets. Results Here, we compute the description code-lengths of SNP data for an array of models and we develop tag SNP selection methods based on these models and the strategy of entropy maximization. Using data sets from the HapMap and ENCODE projects, we show that the hidden Markov model introduced by Li and Stephens outperforms the other models in several aspects: description code-length of SNP data, information content of tag sets, and prediction of tagged SNPs. This is the first use of this model in the context of tag SNP selection. Conclusion Our study provides strong evidence that the tag sets selected by our best method, based on the Li and Stephens model, outperform those chosen by several existing methods. The results also suggest that information content evaluated with a good model is more sensitive for assessing the quality of a tagging set than the correct prediction rate of tagged SNPs. Besides, we show that haplotype phase uncertainty has an almost negligible impact on the ability of good tag sets to predict tagged SNPs. This justifies the selection of tag SNPs on the basis of haplotype
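
    The entropy criterion itself is easy to sketch; the snippet below scores candidate tag sets by the Shannon entropy of the haplotype distribution they induce (empirical frequencies on toy phased data, whereas the paper evaluates entropy under a probabilistic model such as Li and Stephens):

        # Score candidate tag sets by the Shannon entropy of the haplotype
        # distribution they induce (toy phased haplotypes, rows = chromosomes).
        import numpy as np
        from collections import Counter
        from itertools import combinations

        H = np.array([[0, 0, 1, 0, 1], [0, 0, 1, 0, 1], [1, 1, 0, 0, 1],
                      [1, 1, 0, 1, 0], [1, 0, 0, 1, 0], [0, 0, 1, 0, 1]])

        def entropy(cols):
            counts = Counter(map(tuple, H[:, cols]))
            freq = np.array(list(counts.values())) / H.shape[0]
            return -(freq * np.log2(freq)).sum()

        best = max(combinations(range(H.shape[1]), 2), key=entropy)
        print("best 2-SNP tag set:", best, "entropy:", round(entropy(best), 3))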

  8. Standard Model physics results from ATLAS and CMS

    CERN Document Server

    Dordevic, Milos

    2015-01-01

    The most recent results of Standard Model physics studies in proton-proton collisions at 7 TeV and 8 TeV centre-of-mass energy, based on data recorded by the ATLAS and CMS detectors during LHC Run I, are reviewed. This overview includes studies of vector boson production cross sections and properties, results on V+jets production with light and heavy flavours, the latest VBS and VBF results, measurements of diboson production with an emphasis on ATGC and QTGC searches, as well as results on inclusive jet cross sections with a strong coupling constant measurement and PDF constraints. The outlined results are compared to the predictions of the Standard Model.

  9. "Dispersion modeling approaches for near road

    Science.gov (United States)

    Roadway design and roadside barriers can have significant effects on the dispersion of traffic-generated pollutants, especially in the near-road environment. Dispersion models that can accurately simulate these effects are needed to fully assess these impacts for a variety of app...

  10. System Behavior Models: A Survey of Approaches

    Science.gov (United States)

    2016-06-01


  11. A new procedure to build a model covariance matrix: first results

    Science.gov (United States)

    Barzaghi, R.; Marotta, A. M.; Splendore, R.; Borghi, A.

    2012-04-01

    In order to validate the results of geophysical models a common procedure is to compare model predictions with observations by means of statistical tests. A limit of this approach is the lack of a covariance matrix associated with the model results, which may prevent a confident assessment of the statistical significance of the results. Trying to overcome this limit, we have implemented a new procedure to build a model covariance matrix that could allow a more reliable statistical analysis. This procedure has been developed in the frame of the thermo-mechanical model described in Splendore et al. (2010), which predicts the present-day crustal velocity field in the Tyrrhenian due to Africa-Eurasia convergence and to lateral rheological heterogeneities of the lithosphere. The modelled tectonic velocity field has been compared to the available surface velocity field based on GPS observations, determining the best-fit model and the degree of fitting through the use of a χ2 test. Once we had identified the key model parameters and defined their appropriate ranges of variability, we ran 100 different models for 100 sets of parameter values randomly drawn from the corresponding intervals, obtaining a stack of 100 velocity fields. We then calculated the variance and empirical covariance for the stack of results, taking into account also cross-correlation, obtaining a positive-definite diagonal matrix that represents the covariance matrix of the model. This empirical approach allows us to perform a more robust statistical analysis with respect to the classic approach. Reference: Splendore, Marotta, Barzaghi, Borghi and Cannizzaro, 2010. Block model versus thermomechanical model: new insights on the present-day regional deformation in the surroundings of the Calabrian Arc. In: Spalla, Marotta and Gosso (Eds) Advances in Interpretation of Geological Processes: Refinement of Multi scale Data and Integration in Numerical Modelling. Geological Society, London, Special
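
    The procedure reduces to a few lines once the model runs are available (everything below is a toy stand-in for the thermo-mechanical model and its parameter ranges):

        # Run the model for N random parameter draws, stack the predicted
        # velocity fields, and take the empirical covariance matrix.
        import numpy as np

        rng = np.random.default_rng(3)
        n_runs, n_sites = 100, 25

        def run_model(theta):
            """Stand-in for one thermo-mechanical run -> site velocities."""
            base = np.linspace(1.0, 5.0, n_sites)    # mm/yr, toy signal
            return base * theta[0] + theta[1]

        thetas = np.column_stack([rng.uniform(0.8, 1.2, n_runs),
                                  rng.uniform(-0.5, 0.5, n_runs)])
        stack = np.array([run_model(t) for t in thetas])   # (n_runs, n_sites)
        C_model = np.cov(stack, rowvar=False)              # model covariance
        print("covariance matrix shape:", C_model.shape)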

  12. Biogas Production Modelling: A Control System Engineering Approach

    Science.gov (United States)

    Stollenwerk, D.; Rieke, C.; Dahmen, M.; Pieper, M.

    2016-03-01

    Under the German Renewable Energy Act, the share of renewable energy carriers is planned to increase to 60%. One of the main problems is the fluctuating supply of wind and solar energy. Here biogas plants provide a solution, because a demand-driven supply is possible. Before running such a plant, it is necessary to simulate and optimize the feeding strategy of the process. Current simulation models are either very detailed, like the ADM 1, which leads to very long optimization runtimes, or not accurate enough to capture the biogas production kinetics. Therefore this paper provides a new model of a biogas plant, which is easy to parametrize but also has the needed accuracy for the output prediction. It is based on the control system approach of system identification and validated with laboratory results of a real biogas production testing facility.
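
    The system-identification idea can be sketched with a first-order discrete-time model fitted by least squares (synthetic plant data below; the paper's model structure and parameters are not reproduced):

        # Fit gas[k+1] = a*gas[k] + b*feed[k] to plant data by least squares.
        import numpy as np

        rng = np.random.default_rng(4)
        feed = rng.uniform(0.5, 1.5, 200)      # substrate feed (arbitrary units)
        gas = np.zeros(201)
        for k in range(200):                   # synthetic "true" plant
            gas[k + 1] = 0.95 * gas[k] + 0.4 * feed[k] + 0.01 * rng.standard_normal()

        X = np.column_stack([gas[:-1], feed])  # regressors
        a_hat, b_hat = np.linalg.lstsq(X, gas[1:], rcond=None)[0]
        print(f"identified model: a = {a_hat:.3f}, b = {b_hat:.3f}")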

  13. Static models, recursive estimators and the zero-variance approach

    KAUST Repository

    Rubino, Gerardo

    2016-01-07

    When evaluating dependability aspects of complex systems, most models belong to the static world, where time is not an explicit variable. These models suffer from the same problems as dynamic ones (stochastic processes), such as the frequent combinatorial explosion of the state spaces. In the Monte Carlo domain, one of the most significant difficulties is the rare-event situation. In this talk, we describe this context and a recent technique that appears to be at the top performance level in the area, in which ideas that lead to very fast estimation procedures are combined with another approach called zero-variance approximation. Together they produce a very efficient method that has the right theoretical property concerning robustness, namely Bounded Relative Error. Some examples illustrate the results.
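
    The flavour of the zero-variance idea can be shown on a toy rare event, estimating $P(X > t)$ for $X \sim \mathrm{Exp}(1)$; for this example the shifted sampler below is exactly the zero-variance change of measure that practical methods try to approximate:

        # Crude Monte Carlo vs. a zero-variance importance sampler for a toy
        # rare event; the likelihood ratio is constant, so the IS estimator
        # has zero variance (Bounded Relative Error trivially holds).
        import numpy as np

        rng = np.random.default_rng(5)
        t, n = 30.0, 10000                   # exact p = exp(-30) ~ 9.4e-14

        crude = (rng.exponential(1.0, n) > t).mean()   # almost surely 0.0
        x = t + rng.exponential(1.0, n)                # sample from g
        weights = np.exp(-x) / np.exp(-(x - t))        # f(x)/g(x) = exp(-t)
        print(f"crude: {crude:.1e}, IS: {weights.mean():.3e}, "
              f"exact: {np.exp(-t):.3e}")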

  14. Ordered LOGIT Model approach for the determination of financial distress.

    Science.gov (United States)

    Kinay, B

    2010-01-01

    Nowadays, as a result of global competition, numerous companies face financial distress. Predicting such problems and taking proactive measures against them is quite important. Thus, the prediction of crisis and financial distress is essential for revealing the financial condition of companies. In this study, financial ratios relating to 156 industrial firms quoted on the Istanbul Stock Exchange are used, and probabilities of financial distress are predicted by means of an ordered logit regression model. The dependent variable is constructed by scaling the level of risk by means of Altman's Z-score. Thus, a model that can serve as an early warning system and predict financial distress is proposed.
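
    For reference, the ordered (proportional-odds) logit model used here specifies cumulative probabilities

        $$ P(Y \le j \mid x) = \frac{1}{1 + e^{-(\alpha_j - x^{\top}\beta)}}, \qquad j = 1,\dots,J-1, $$

    with ordered cut-points $\alpha_1 < \dots < \alpha_{J-1}$; in this application $Y$ is the risk class obtained by scaling Altman's Z-score and $x$ collects the financial ratios.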

  15. New modeling approach for bounding flight in birds.

    Science.gov (United States)

    Sachs, Gottfried; Lenz, Jakob

    2011-12-01

    A new modeling approach is presented which accounts for the unsteady motion features and dynamics characteristics of bounding flight. For this purpose, a realistic mathematical model is developed to describe the flight dynamics of a bird with regard to a motion which comprises flapping and bound phases involving acceleration and deceleration as well as, simultaneously, pull-up and push-down maneuvers. Furthermore, a mathematical optimization method is used for determining the bounding flight mode which yields the minimum energy expenditure per range. Thus, it can be shown to what extent bounding flight is aerodynamically superior to continuous flapping flight, yielding a reduction in the energy expenditure at speeds practically above the maximum-range speed. Moreover, the role of body lift in the efficiency of bounding flight is identified and quantified. Introducing an appropriate non-dimensionalization of the relations describing the bird's flight dynamics, generally valid results are derived for the items addressed.

  16. Right approach to 3D modeling using CAD tools

    Science.gov (United States)

    Baddam, Mounica Reddy

    The thesis provides a step-by-step methodology to enable an instructor dealing with CAD tools to optimally guide his/her students through an understandable 3D modeling approach which will not only enhance their knowledge of the tool's usage but also enable them to achieve their desired result in comparatively less time. In practice, very little information is available on applying CAD skills to formal beginners' training sessions. Additionally, the advent of new software in the 3D domain makes keeping up to date a more difficult task. Keeping up with the industry's advanced requirements emphasizes the importance of more skilled hands in the field of CAD development, rather than just prioritizing manufacturing in terms of complex software features. The thesis analyses different 3D modeling approaches specific to the varieties of CAD tools currently available in the market. Utilizing performance-time databases, learning curves have been generated to measure performance time, feature count, etc. Based on the results, improvement parameters have also been provided (Asperl, 2005).

  17. A Modeling Approach for Plastic-Metal Laser Direct Joining

    Science.gov (United States)

    Lutey, Adrian H. A.; Fortunato, Alessandro; Ascari, Alessandro; Romoli, Luca

    2017-09-01

    Laser processing has been identified as a feasible approach to direct joining of metal and plastic components without the need for adhesives or mechanical fasteners. The present work develops a modeling approach for conduction and transmission laser direct joining of these materials based on multi-layer optical propagation theory and numerical heat flow simulation. The aim of this methodology is to predict process outcomes based on the calculated joint interface and upper surface temperatures. Three representative cases are considered for model verification, including conduction joining of PBT and aluminum alloy, transmission joining of optically transparent PET and stainless steel, and transmission joining of semi-transparent PA 66 and stainless steel. Conduction direct laser joining experiments are performed on black PBT and 6082 anticorodal aluminum alloy, achieving shear loads of over 2000 N with specimens of 2 mm thickness and 25 mm width. Comparison with simulation results shows that consistently high strength is achieved where the peak interface temperature is above the plastic degradation temperature. Comparison of transmission joining simulations and published experimental results confirms these findings and highlights the influence of plastic layer optical absorption on process feasibility.

  18. A new approach for estimating the efficiencies of the nucleotide substitution models.

    Science.gov (United States)

    Som, Anup

    2007-04-01

    In this article, a new approach is presented for estimating the efficiencies of the nucleotide substitution models in a four-taxon case and then this approach is used to estimate the relative efficiencies of six substitution models under a wide variety of conditions. In this approach, efficiencies of the models are estimated by using a simple probability distribution theory. To assess the accuracy of the new approach, efficiencies of the models are also estimated by using the direct estimation method. Simulation results from the direct estimation method confirmed that the new approach is highly accurate. The success of the new approach opens a unique opportunity to develop analytical methods for estimating the relative efficiencies of the substitution models in a straightforward way.

  19. Modelling Based Approach for Reconstructing Evidence of VOIP Malicious Attacks

    Directory of Open Access Journals (Sweden)

    Mohammed Ibrahim

    2015-05-01

    Voice over Internet Protocol (VoIP) is a communication technology that uses the internet protocol to provide phone services. VoIP offers various benefits, such as a low monthly fee and cheaper rates for long-distance and international calls. However, VoIP is accompanied by novel security threats. Criminals often take advantage of such threats and commit illicit activities, which require digital forensic experts to acquire, analyse, reconstruct, and provide digital evidence. While various methodologies and models have been proposed for detecting, analysing, and providing digital evidence in VoIP forensics, at the time of writing there is no formalized model for the reconstruction of VoIP malicious attacks. Reconstruction of the attack scenario is an important technique for exposing unknown criminal acts, and this paper addresses that gap. We propose a model for reconstructing VoIP malicious attacks. To achieve this, a formal logic approach called Secure Temporal Logic of Actions (S-TLA+) was adopted to rebuild the attack scenario. The expected result of this model is the generation of additional related evidence, whose consistency with the existing evidence can be determined by means of the S-TLA+ model checker.

  20. Value of the distant future: Model-independent results

    Science.gov (United States)

    Katz, Yuri A.

    2017-01-01

    This paper shows that the model-independent account of correlations in an interest rate process or a log-consumption growth process leads to declining long-term tails of discount curves. Under the assumption of an exponentially decaying memory in fluctuations of risk-free real interest rates, I derive an analytical expression for the long-run discount factor and provide a detailed comparison of the obtained result with the outcome of the benchmark risk-free interest rate models. Utilizing the standard consumption-based model with an isoelastic power utility of the representative economic agent, I derive the non-Markovian generalization of the Ramsey discounting formula. The obtained analytical results, which allow simple calibration, may augment the rigorous cost-benefit and regulatory impact analysis of long-term environmental and infrastructure projects.
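
    As a numerical illustration of the declining-tail effect described above (a sketch, not the author's derivation), the following assumes risk-free rates follow a stationary AR(1) process, whose autocorrelation decays exponentially as the abstract posits, and estimates the certainty-equivalent term discount rate R(t) = -ln E[exp(-sum of r_s)] / t by Monte Carlo. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative AR(1) parameters (not calibrated to any dataset):
# r_{t+1} = mu + phi * (r_t - mu) + sigma * eps, eps ~ N(0, 1)
mu, phi, sigma = 0.03, 0.9, 0.005
T, n_paths = 200, 20000

r = np.full(n_paths, mu)
cum = np.zeros(n_paths)          # cumulative short rate along each path
eff_rate = np.empty(T)           # certainty-equivalent (term) discount rate

for t in range(T):
    cum += r
    # P(t) = E[exp(-sum of rates)]; R(t) = -ln P(t) / t
    eff_rate[t] = -np.log(np.exp(-cum).mean()) / (t + 1)
    r = mu + phi * (r - mu) + sigma * rng.standard_normal(n_paths)

print(f"effective rate at t=1:   {eff_rate[0]:.4f}")
print(f"effective rate at t=200: {eff_rate[-1]:.4f}  (declines toward the long run)")
```

    With persistent rate fluctuations, the certainty-equivalent rate falls with horizon even though the mean short rate is constant, which is the declining-tail behavior the abstract describes.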

  1. Marginal production in the Gulf of Mexico - II. Model results

    Energy Technology Data Exchange (ETDEWEB)

    Kaiser, Mark J.; Yu, Yunke [Center for Energy Studies, Louisiana State University, Baton Rouge, LA 70803 (United States)

    2010-08-15

    In the second part of this two-part article on marginal production in the Gulf of Mexico, we estimate the number of committed assets in water depth less than 1000 ft that are expected to be marginal over a 60-year time horizon. We compute the expected quantity and value of the production and gross revenue streams of the Gulf's committed asset inventory circa January 2007 using a probabilistic model framework. Cumulative hydrocarbon production from the producing inventory is estimated to be 1056 MMbbl oil and 13.3 Tcf gas. Marginal production from the committed asset inventory is expected to contribute 4.1% of total oil production and 5.4% of gas production. A meta-evaluation procedure is adapted to present the results of sensitivity analysis. Model results are discussed along with a description of the model framework and limitations of the analysis. (author)

  2. A modular approach to numerical human body modeling

    NARCIS (Netherlands)

    Forbes, P.A.; Griotto, G.; Rooij, L. van

    2007-01-01

    The choice of a human body model for a simulated automotive impact scenario must take into account both accurate model response and computational efficiency as key factors. This study presents a "modular numerical human body modeling" approach which allows the creation of a customized human body model.

  3. A BEHAVIORAL-APPROACH TO LINEAR EXACT MODELING

    NARCIS (Netherlands)

    ANTOULAS, AC; WILLEMS, JC

    1993-01-01

    The behavioral approach to system theory provides a parameter-free framework for the study of the general problem of linear exact modeling and recursive modeling. The main contribution of this paper is the solution of the (continuous-time) polynomial-exponential time series modeling problem. Both re

  5. A market model for stochastic smile: a conditional density approach

    NARCIS (Netherlands)

    Zilber, A.

    2005-01-01

    The purpose of this paper is to introduce a new approach that allows the construction of no-arbitrage market models for implied volatility surfaces (in other words, stochastic smile models). That is to say, the idea presented here allows us to model prices of liquidly traded vanilla options as separate

  6. Exact results for car accidents in a traffic model

    Science.gov (United States)

    Huang, Ding-wei

    1998-07-01

    Within the framework of a recent model for car accidents on single-lane highway traffic, we study analytically the probability of the occurrence of car accidents. Exact results are obtained. Various scaling behaviours are observed. The linear dependence of the occurrence of car accidents on density is understood as the dominance of a single velocity in the distribution.

  7. Thermoplasmonics modeling: A Green's function approach

    Science.gov (United States)

    Baffou, Guillaume; Quidant, Romain; Girard, Christian

    2010-10-01

    We extend the discrete dipole approximation (DDA) and the Green’s dyadic tensor (GDT) methods—previously dedicated to all-optical simulations—to investigate the thermodynamics of illuminated plasmonic nanostructures. This extension is based on the use of the thermal Green’s function and an original algorithm that we named Laplace matrix inversion. It allows for the computation of the steady-state temperature distribution throughout plasmonic systems. This hybrid photothermal numerical method is suited to investigate arbitrarily complex structures. It can take into account the presence of a dielectric planar substrate and is simple to implement in any DDA or GDT code. Using this numerical framework, different applications are discussed such as thermal collective effects in nanoparticle assemblies, the influence of a substrate on the temperature distribution, and the heat generation in a plasmonic nanoantenna. This numerical approach appears particularly suited for new applications in physics, chemistry, and biology such as plasmon-induced nanochemistry and catalysis, nanofluidics, photothermal cancer therapy, or phase-transition control at the nanoscale.

  8. Coupling approaches used in atmospheric entry models

    Science.gov (United States)

    Gritsevich, M. I.

    2012-09-01

    While a planet orbits the Sun, it is subject to impact by smaller objects, ranging from tiny dust particles and space debris to much larger asteroids and comets. Such collisions have taken place frequently over geological time and played an important role in the evolution of planets and the development of life on the Earth. Though the search for near-Earth objects addresses one of the main points of the Asteroid and Comet Hazard, one should not underestimate the useful information to be gleaned from smaller atmospheric encounters, known as meteors or fireballs. Not only do these events help determine the linkages between meteorites and their parent bodies; due to their relative regularity they provide a good statistical basis for analysis. For successful cases with found meteorites, the detailed atmospheric path record is an excellent tool to test and improve existing entry models, assuring the robustness of their implementation. There are many more important scientific questions meteoroids help us to answer, among them: Where do these objects come from, what are their origins, physical properties and chemical composition? What are the shapes and bulk densities of the space objects which fully ablate in an atmosphere and do not reach the planetary surface? Which values are directly measured and which are initially assumed as input to various models? How to couple both fragmentation and ablation effects in the model, taking the real size distribution of fragments into account? How to specify and speed up the recovery of recently fallen meteorites, without letting weathering affect the samples too much? How big is the pre-atmospheric projectile to terminal body ratio in terms of their mass/volume? Which exact parameters beside initial mass define this ratio? More generally, how does an entering object affect the Earth's atmosphere and (if applicable) the Earth's surface? How to predict these impact consequences based on atmospheric trajectory data? How to describe atmospheric entry…

  9. Regression mixture models: Does modeling the covariance between independent variables and latent classes improve the results?

    NARCIS (Netherlands)

    Lamont, A.E.; Vermunt, J.K.; Van Horn, M.L.

    2016-01-01

    Regression mixture models are increasingly used as an exploratory approach to identify heterogeneity in the effects of a predictor on an outcome. In this simulation study, we tested the effects of violating an implicit assumption often made in these models; that is, independent variables in the

  10. Applied Regression Modeling: A Business Approach

    CERN Document Server

    Pardoe, Iain

    2012-01-01

    An applied and concise treatment of statistical regression techniques for business students and professionals who have little or no background in calculus. Regression analysis is an invaluable statistical methodology in business settings and is vital to model the relationship between a response variable and one or more predictor variables, as well as the prediction of a response value given values of the predictors. In view of the inherent uncertainty of business processes, such as the volatility of consumer spending and the presence of market uncertainty, business professionals use regression analysis…

  11. Bayesian Approach to Neuro-Rough Models for Modelling HIV

    CERN Document Server

    Marwala, Tshilidzi

    2007-01-01

    This paper proposes a new neuro-rough model for modelling the risk of HIV from demographic data. The model is formulated using a Bayesian framework and trained using the Markov Chain Monte Carlo method and the Metropolis criterion. When the model was tested to estimate the risk of HIV infection given the demographic data, it was found to give an accuracy of 62%, as opposed to 58% obtained from a Bayesian formulated rough set model trained using the Markov chain Monte Carlo method, and 62% obtained from a Bayesian formulated multi-layered perceptron (MLP) model trained using hybrid Monte Carlo. The proposed model is able to combine the accuracy of the Bayesian MLP model and the transparency of the Bayesian rough set model.

  12. Implicit moral evaluations: A multinomial modeling approach.

    Science.gov (United States)

    Cameron, C Daryl; Payne, B Keith; Sinnott-Armstrong, Walter; Scheffer, Julian A; Inzlicht, Michael

    2017-01-01

    Implicit moral evaluations-i.e., immediate, unintentional assessments of the wrongness of actions or persons-play a central role in supporting moral behavior in everyday life. Yet little research has employed methods that rigorously measure individual differences in implicit moral evaluations. In five experiments, we develop a new sequential priming measure-the Moral Categorization Task-and a multinomial model that decomposes judgment on this task into multiple component processes. These include implicit moral evaluations of moral transgression primes (Unintentional Judgment), accurate moral judgments about target actions (Intentional Judgment), and a directional tendency to judge actions as morally wrong (Response Bias). Speeded response deadlines reduced Intentional Judgment but not Unintentional Judgment (Experiment 1). Unintentional Judgment was stronger toward moral transgression primes than non-moral negative primes (Experiments 2-4). Intentional Judgment was associated with increased error-related negativity, a neurophysiological indicator of behavioral control (Experiment 4). Finally, people who voted for an anti-gay marriage amendment had stronger Unintentional Judgment toward gay marriage primes (Experiment 5). Across Experiments 1-4, implicit moral evaluations converged with moral personality: Unintentional Judgment about wrong primes, but not negative primes, was negatively associated with psychopathic tendencies and positively associated with moral identity and guilt proneness. Theoretical and practical applications of formal modeling for moral psychology are discussed.

  13. Continuous Molecular Fields Approach Applied to Structure-Activity Modeling

    CERN Document Server

    Baskin, Igor I

    2013-01-01

    The Method of Continuous Molecular Fields is a universal approach to predict various properties of chemical compounds, in which molecules are represented by means of continuous fields (such as electrostatic, steric, electron density functions, etc). The essence of the proposed approach consists in performing statistical analysis of functional molecular data by means of joint application of kernel machine learning methods and special kernels which compare molecules by computing overlap integrals of their molecular fields. This approach is an alternative to traditional methods of building 3D structure-activity and structure-property models based on the use of fixed sets of molecular descriptors. The methodology of the approach is described in this chapter, followed by its application to building regression 3D-QSAR models and conducting virtual screening based on one-class classification models. The main directions of the further development of this approach are outlined at the end of the chapter.

  14. Long-term results of endosurgical and open surgical approach for Zenker diverticulum

    Institute of Scientific and Technical Information of China (English)

    Luigi Bonavina; Davide Bona; Medhanie Abraham; Greta Saino; Emmanuele Abate

    2007-01-01

    AIM: To assess the effectiveness of minimally invasive versus traditional open surgical approach in the treatment of Zenker diverticulum. METHODS: Between 1976 and 2006, 297 patients underwent transoral stapling (n = 181) or stapled diverticulectomy and cricopharyngeal myotomy (n = 116). Subjective and objective evaluations of the outcome of the two procedures were made at 1 and 6 mo after operation, and then every year. Long-term follow-up data were available for a subgroup of patients at a minimum of 5 and 10 years. RESULTS: The operative time and hospital stay were markedly reduced in patients undergoing the endosurgical approach. Overall, 92% of patients undergoing the endosurgical approach and 94% of those undergoing the open approach were symptom-free or were significantly improved after a median follow-up of 27 and 48 mo, respectively. At a minimum follow-up of 5 and 10 years, most patients were asymptomatic after both procedures, except for those individuals undergoing an endosurgical procedure for a small diverticulum (< 3 cm). CONCLUSION: Both operations relieve the outflow obstruction at the pharyngoesophageal junction, indicating that cricopharyngeal myotomy has an important therapeutic role in this disease independent of the resection of the pouch and of the surgical approach. Diverticula smaller than 3 cm represent a formal contraindication to the endosurgical approach because the common wall is too short to accommodate one cartridge of staples and to allow complete division of the sphincter.

  15. Modeling Results For the ITER Cryogenic Fore Pump. Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Pfotenhauer, John M. [University of Wisconsin, Madison, WI (United States); Zhang, Dongsheng [University of Wisconsin, Madison, WI (United States)

    2014-03-31

    A numerical model characterizing the operation of a cryogenic fore-pump (CFP) for ITER has been developed at the University of Wisconsin – Madison during the period from March 15, 2011 through June 30, 2014. The purpose of the ITER-CFP is to separate hydrogen isotopes from helium gas, both making up the exhaust components from the ITER reactor. The model explicitly determines the amount of hydrogen that is captured by the supercritical-helium-cooled pump as a function of the inlet temperature of the supercritical helium, its flow rate, and the inlet conditions of the hydrogen gas flow. Furthermore, the model computes the location and amount of hydrogen captured in the pump as a function of time. Throughout the model’s development, and as a calibration check for its results, it has been extensively compared with the measurements of a CFP prototype tested at Oak Ridge National Lab. The results of the model demonstrate that the quantity of captured hydrogen is very sensitive to the inlet temperature of the helium coolant on the outside of the cryopump. Furthermore, the model can be utilized to refine those tests, and suggests methods that could be incorporated in the testing to enhance the usefulness of the measured data.

  16. Assessment of Galileo modal test results for mathematical model verification

    Science.gov (United States)

    Trubert, M.

    1984-01-01

    The modal test program for the Galileo Spacecraft was completed at the Jet Propulsion Laboratory in the summer of 1983. The multiple sine dwell method was used for the baseline test. The Galileo Spacecraft is a rather complex 2433 kg structure made of a central core on which seven major appendages representing 30 percent of the total mass are attached, resulting in a high modal density structure. The test revealed a strong nonlinearity in several major modes. This nonlinearity discovered in the course of the test necessitated running additional tests at the unusually high response levels of up to about 21 g. The high levels of response were required to obtain a model verification valid at the level of loads for which the spacecraft was designed. Because of the high modal density and the nonlinearity, correlation between the dynamic mathematical model and the test results becomes a difficult task. Significant changes in the pre-test analytical model are necessary to establish confidence in the upgraded analytical model used for the final load verification. This verification, using a test verified model, is required by NASA to fly the Galileo Spacecraft on the Shuttle/Centaur launch vehicle in 1986.

  17. A forward modeling approach for interpreting impeller flow logs.

    Science.gov (United States)

    Parker, Alison H; West, L Jared; Odling, Noelle E; Bown, Richard T

    2010-01-01

    A rigorous and practical approach for interpretation of impeller flow log data to determine vertical variations in hydraulic conductivity is presented and applied to two well logs from a Chalk aquifer in England. Impeller flow logging involves measuring vertical flow speed in a pumped well and using changes in flow with depth to infer the locations and magnitudes of inflows into the well. However, the measured flow logs are typically noisy, which leads to spurious hydraulic conductivity values where simplistic interpretation approaches are applied. In this study, a new method for interpretation is presented, which first defines a series of physical models for hydraulic conductivity variation with depth and then fits the models to the data, using a regression technique. Some of the models will be rejected as they are physically unrealistic. The best model is then selected from the remaining models using a maximum likelihood approach. This balances model complexity against fit, for example, using Akaike's Information Criterion.
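
    A minimal sketch of the selection step described above, under invented data: candidate piecewise-constant profiles are fit to a noisy synthetic flow log by least squares and compared with Akaike's Information Criterion. The candidate family and the synthetic log are hypothetical, not the authors' physical models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic (hypothetical) flow log: flow decreases with depth in two steps.
depth = np.linspace(0, 100, 101)
true_flow = np.where(depth < 40, 1.0, np.where(depth < 70, 0.6, 0.0))
flow = true_flow + 0.05 * rng.standard_normal(depth.size)

def fit_steps(n_segments):
    """Least-squares fit of a piecewise-constant model with equal-width
    segments; returns the residual sum of squares and parameter count."""
    edges = np.linspace(0, 100, n_segments + 1)
    pred = np.zeros_like(flow)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (depth >= lo) & (depth <= hi)
        pred[mask] = flow[mask].mean()
    return np.sum((flow - pred) ** 2), n_segments

def aic(rss, k, n):
    # Gaussian-likelihood AIC: n * ln(RSS / n) + 2k
    return n * np.log(rss / n) + 2 * k

n = flow.size
for segments in (1, 2, 3, 5, 10):
    rss, k = fit_steps(segments)
    print(f"{segments:2d} segments: AIC = {aic(rss, k, n):8.1f}")
# The lowest AIC balances fit against complexity, mirroring the
# model-selection step of the interpretation method described above.
```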

  18. Extended endoscopic endonasal transsphenoidal approach for retrochiasmatic craniopharyngioma: Surgical technique and results

    Directory of Open Access Journals (Sweden)

    Suresh K Sankhla

    2015-01-01

    Objective: Surgical treatment of retrochiasmatic craniopharyngioma still remains a challenge. While complete removal of the tumor with preservation of the vital neurovascular structures is often the goal of the treatment, there is no optimal surgical approach available to achieve this goal. Transcranial and transsphenoidal microsurgical approaches, commonly used in the past, have considerable technical limitations. The extended endonasal endoscopic surgical route, obtained by removal of the tuberculum sellae and planum sphenoidale, offers direct midline access to the retrochiasmatic space and provides excellent visualization of the undersurface of the optic chiasm. In this report, we describe the technical details of the extended endoscopic approach and review our results using this approach in the surgical management of retrochiasmatic craniopharyngiomas. Methods: Fifteen children, including 9 girls and 6 boys, aged 8 to 15 years, underwent surgery using the extended endoscopic transsphenoidal approach between 2008 and 2014. Nine patients had a surgical procedure done previously and presented with recurrence of symptoms and regrowth of their residual tumors. Results: A gross total or near total excision was achieved in 10 (66.7%) patients, subtotal resection in 4 (26.7%), and partial removal in 1 (6.7%) patient. Postoperatively, headache improved in 93.3%, vision recovered in 77.3%, and the hormonal levels stabilised in 66.6%. Three patients (20%) developed postoperative CSF leaks, which were managed conservatively. Three (20%) patients with diabetes insipidus and 2 (13.3%) with panhypopituitarism required long-term hormonal replacement therapy. Conclusions: Our early experience suggests that the extended endonasal endoscopic approach is a reasonable option for removal of retrochiasmatic craniopharyngiomas. Compared to other surgical approaches, it provides better opportunities for greater tumor removal and visual improvement without any increase in risks.

  19. Effect of surgical approaches on functional results of total hip arthroplasty in the early postoperative period

    Directory of Open Access Journals (Sweden)

    D. V. Andreyev

    2013-01-01

    Minimally invasive approaches imply less soft-tissue damage and, therefore, more rapid recovery of the patient in the early postoperative period. The present study compares minimally invasive and standard approaches using biomechanical analysis of standing and walking before and after total hip arthroplasty, together with an analysis of clinical outcomes in the early postoperative period. Fifty patients undergoing primary total hip arthroplasty using minimally invasive or conventional techniques were divided into three groups. The first group consisted of patients operated on using the MIS AL approach (modified minimally invasive Watson-Jones approach; n = 17), the second the MDM approach (minimally invasive modified Mueller approach; n = 16), and the third the conventional transgluteal approach of Harding (n = 17). Biomechanical parameters were estimated in statics and dynamics before surgery and at 8-10 days after surgery. Clinical outcomes were also assessed with a visual analogue scale (VAS) and the Harris scale on day 10, at 6 and 12 weeks, and at 1 year. In the stabilometry comparison of the three groups, the best results were observed in the minimally invasive groups (MIS AL and MDM). Significantly better gait parameters (a moderate increase in the step duration and roll-over of the contralateral limb, and a slight increase in the step duration of the operated limb owing to a longer roll-over) were also identified in the MIS AL and MDM groups. Harris scores in the early postoperative period were likewise higher in the minimally invasive groups. One year after the operation, functional results became similar in all groups.

  20. A Networks Approach to Modeling Enzymatic Reactions.

    Science.gov (United States)

    Imhof, P

    2016-01-01

    Modeling enzymatic reactions is a demanding task due to the complexity of the system, the many degrees of freedom involved, and the complex chemical and conformational transitions associated with the reaction. Consequently, enzymatic reactions are not determined by precisely one reaction pathway. Hence, it is beneficial to obtain a comprehensive picture of possible reaction paths and competing mechanisms. By combining individually generated intermediate states and chemical transition steps, a network of such pathways can be constructed. Transition networks are a discretized representation of a potential energy landscape consisting of a multitude of reaction pathways connecting the end states of the reaction. The graph structure of the network allows an easy identification of the energetically most favorable pathways as well as a number of alternative routes.

  1. Modeling Approaches for Describing Microbial Population Heterogeneity

    DEFF Research Database (Denmark)

    Lencastre Fernandes, Rita

    Although microbial populations are typically described by averaged properties, individual cells present a certain degree of variability. Indeed, initially clonal microbial populations develop into heterogeneous populations, even when growing in a homogeneous environment. A heterogeneous microbial … to predict distributions of certain population properties including particle size, mass or volume, and molecular weight. Similarly, PBM allow for a mathematical description of distributed cell properties within microbial populations. Cell total protein content distributions (a measure of cell mass) have been … ethanol and biomass throughout the reactor. This work has proven that the integration of CFD and population balance models, for describing the growth of a microbial population in a spatially heterogeneous reactor, is feasible, and that valuable insight on the interplay between flow and the dynamics …

  2. Hamiltonian approach to hybrid plasma models

    CERN Document Server

    Tronci, Cesare

    2010-01-01

    The Hamiltonian structures of several hybrid kinetic-fluid models are identified explicitly, upon considering collisionless Vlasov dynamics for the hot particles interacting with a bulk fluid. After presenting different pressure-coupling schemes for an ordinary fluid interacting with a hot gas, the paper extends the treatment to account for a fluid plasma interacting with an energetic ion species. Both current-coupling and pressure-coupling MHD schemes are treated extensively. In particular, pressure-coupling schemes are shown to require a transport-like term in the Vlasov kinetic equation, in order for the Hamiltonian structure to be preserved. The last part of the paper is devoted to studying the more general case of an energetic ion species interacting with a neutralizing electron background (hybrid Hall-MHD). Circulation laws and Casimir functionals are presented explicitly in each case.

  3. Modeling vertical loads in pools resulting from fluid injection. [BWR

    Energy Technology Data Exchange (ETDEWEB)

    Lai, W.; McCauley, E.W.

    1978-06-15

    Table-top model experiments were performed to investigate pressure suppression pool dynamics effects due to a postulated loss-of-coolant accident (LOCA) for the Peachbottom Mark I boiling water reactor containment system. The results guided subsequent conduct of experiments in the 1/5-scale facility and provided new insight into the vertical load function (VLF). Model experiments show an oscillatory VLF with the download typically double-spiked followed by a more gradual sinusoidal upload. The load function contains a high frequency oscillation superimposed on a low frequency one; evidence from measurements indicates that the oscillations are initiated by fluid dynamics phenomena.

  4. Modelling approach for gravity dam break analysis

    Directory of Open Access Journals (Sweden)

    Boussekine Mourad

    2016-09-01

    The construction of dams in rivers can provide considerable benefits, such as the supply of drinking and irrigation water; however, the consequences which would result in the event of their failure could be catastrophic. These vary dramatically depending on the extent of the inundation area and the size of the population at risk.

  5. A Dynamic Linear Modeling Approach to Public Policy Change

    DEFF Research Database (Denmark)

    Loftis, Matthew; Mortensen, Peter Bjerre

    2017-01-01

    Theories of public policy change, despite their differences, converge on one point of strong agreement. The relationship between policy and its causes can and does change over time. This consensus yields numerous empirical implications, but our standard analytical tools are inadequate for testing them. As a result, the dynamic and transformative relationships predicted by policy theories have been left largely unexplored in time-series analysis of public policy. This paper introduces dynamic linear modeling (DLM) as a useful statistical tool for exploring time-varying relationships in public policy. The paper offers a detailed exposition of the DLM approach and illustrates its usefulness with a time series analysis of U.S. defense policy from 1957-2010. The results point the way for a new attention to dynamics in the policy process and the paper concludes with a discussion of how…
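
    As a sketch of the DLM machinery (a textbook local-level model with assumed variances, not the paper's defense-policy specification), the code below filters a noisy series with a Kalman recursion to recover a time-varying level.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical series: a level that drifts over time, observed with noise.
T = 120
true_level = np.cumsum(0.2 * rng.standard_normal(T)) + 10.0
y = true_level + rng.standard_normal(T)

# Local-level DLM:  y_t = mu_t + v_t,  mu_t = mu_{t-1} + w_t
obs_var, state_var = 1.0, 0.04      # assumed (not estimated) variances
mu, P = y[0], 1.0                   # initial state mean and variance
filtered = np.empty(T)

for t in range(T):
    # Predict step: the state random walk inflates the variance.
    P = P + state_var
    # Update step: Kalman gain weights the new observation.
    K = P / (P + obs_var)
    mu = mu + K * (y[t] - mu)
    P = (1 - K) * P
    filtered[t] = mu

print("last observed value:", round(y[-1], 2))
print("last filtered level:", round(filtered[-1], 2))
```

    The same recursion generalizes to time-varying regression coefficients, which is the sense in which DLMs let the policy-to-cause relationship itself evolve over time.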

  6. Modeling of phase equilibria with CPA using the homomorph approach

    DEFF Research Database (Denmark)

    Breil, Martin Peter; Tsivintzelis, Ioannis; Kontogeorgis, Georgios

    2011-01-01

    For association models, like CPA and SAFT, a classical approach is often used for estimating pure-compound and mixture parameters. According to this approach, the pure-compound parameters are estimated from vapor pressure and liquid density data. Then, the binary interaction parameters, kij, are ...

  7. A Constructive Neural-Network Approach to Modeling Psychological Development

    Science.gov (United States)

    Shultz, Thomas R.

    2012-01-01

    This article reviews a particular computational modeling approach to the study of psychological development--that of constructive neural networks. This approach is applied to a variety of developmental domains and issues, including Piagetian tasks, shift learning, language acquisition, number comparison, habituation of visual attention, concept…

  9. Modular Modelling and Simulation Approach - Applied to Refrigeration Systems

    DEFF Research Database (Denmark)

    Sørensen, Kresten Kjær; Stoustrup, Jakob

    2008-01-01

    This paper presents an approach to modelling and simulation of the thermal dynamics of a refrigeration system, specifically a reefer container. A modular approach is used and the objective is to increase the speed and flexibility of the developed simulation environment. The refrigeration system...

  10. Pattern-based approach for logical traffic isolation forensic modelling

    CSIR Research Space (South Africa)

    Dlamini, I

    2009-08-01

    The use of design patterns usually changes the approach of software design and makes software development relatively easy. This paper extends work on a forensic model for Logical Traffic Isolation (LTI) based on Differentiated Services (DiffServ)…

  11. A semantic-web approach for modeling computing infrastructures

    NARCIS (Netherlands)

    M. Ghijsen; J. van der Ham; P. Grosso; C. Dumitru; H. Zhu; Z. Zhao; C. de Laat

    2013-01-01

    This paper describes our approach to modeling computing infrastructures. Our main contribution is the Infrastructure and Network Description Language (INDL) ontology. The aim of INDL is to provide technology-independent descriptions of computing infrastructures, including the physical resources as well…

  12. Bayesian approach to decompression sickness model parameter estimation.

    Science.gov (United States)

    Howle, L E; Weber, P W; Nichols, J M

    2017-03-01

    We examine both maximum likelihood and Bayesian approaches for estimating probabilistic decompression sickness model parameters. Maximum likelihood estimation treats parameters as fixed values and determines the best estimate through repeated trials, whereas the Bayesian approach treats parameters as random variables and determines the parameter probability distributions. We would ultimately like to know the probability that a parameter lies in a certain range rather than simply make statements about the repeatability of our estimator. Although both represent powerful methods of inference, for models with complex or multi-peaked likelihoods, maximum likelihood parameter estimates can prove more difficult to interpret than the estimates of the parameter distributions provided by the Bayesian approach. For models of decompression sickness, we show that while these two estimation methods are complementary, the credible intervals generated by the Bayesian approach are more naturally suited to quantifying uncertainty in the model parameters.
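
    To make the contrast concrete, the sketch below (an illustrative toy, not the authors' decompression sickness model) estimates a single per-dive risk parameter from simulated binary outcomes, reporting the maximum likelihood point estimate alongside a Bayesian credible interval computed from a grid-approximated posterior under a flat prior.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical dive outcomes: 1 = decompression sickness, 0 = none,
# generated from a true per-dive risk of 0.1.
outcomes = rng.random(200) < 0.1
k, n = outcomes.sum(), outcomes.size

# Maximum likelihood: a single best-fit value for the risk parameter p.
p_mle = k / n

# Bayesian: posterior distribution over p (flat prior, grid approximation).
p_grid = np.linspace(1e-4, 1 - 1e-4, 2000)
log_like = k * np.log(p_grid) + (n - k) * np.log(1 - p_grid)
post = np.exp(log_like - log_like.max())
post /= post.sum()

cdf = np.cumsum(post)
lo = p_grid[np.searchsorted(cdf, 0.025)]
hi = p_grid[np.searchsorted(cdf, 0.975)]

print(f"MLE point estimate:    p = {p_mle:.3f}")
print(f"95% credible interval: [{lo:.3f}, {hi:.3f}]")
# The credible interval directly answers "what is the probability that p
# lies in this range?", which is the advantage noted in the abstract.
```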

  13. Gearbox Reliability Collaborative Phase 1 and 2: Testing and Modeling Results; Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Keller, J.; Guo, Y.; LaCava, W.; Link, H.; McNiff, B.

    2012-05-01

    The Gearbox Reliability Collaborative (GRC) investigates root causes of premature wind turbine gearbox failures and validates design assumptions that affect gearbox reliability using a combined testing and modeling approach. Knowledge gained from the testing and modeling of the GRC gearboxes builds an understanding of how the selected loads and events translate into internal responses of three-point mounted gearboxes. This paper presents some testing and modeling results of the GRC research during Phases 1 and 2. Non-torque loads from the rotor, including shaft bending and thrust, traditionally assumed to be uncoupled from the gearbox, affect gear and bearing loads and the resulting gearbox responses. Bearing clearance increases bearing loads and causes cyclic loading, which could contribute to a reduced bearing life. Including flexibilities of key drivetrain subcomponents is important in order to reproduce the measured gearbox response during the tests using modeling approaches.

  14. A Mixed Approach for Modeling Blood Flow in Brain Microcirculation

    Science.gov (United States)

    Peyrounette, M.; Sylvie, L.; Davit, Y.; Quintard, M.

    2014-12-01

    We have previously demonstrated [1] that the vascular system of the healthy human brain cortex is a superposition of two structural components, each corresponding to a different spatial scale. At small scale, the vascular network has a capillary structure, which is homogeneous and space-filling over a cut-off length. At larger scale, veins and arteries conform to a quasi-fractal branched structure. This structural duality is consistent with the functional duality of the vasculature, i.e. distribution and exchange. From a modeling perspective, this can be viewed as the superposition of: (a) a continuum model describing slow transport in the small-scale capillary network, characterized by a representative elementary volume and effective properties; and (b) a discrete network approach [2] describing fast transport in the arterial and venous network, which cannot be homogenized because of its fractal nature. This problem is analogous to modeling problems encountered in geological media, e.g., in petroleum engineering, where fast conducting channels (wells or fractures) are embedded in a porous medium (reservoir rock). An efficient method to reduce the computational cost of fracture/continuum simulations is to use relatively large grid blocks for the continuum model. However, this also makes it difficult to accurately couple both structural components. In this work, we solve this issue by adapting the "well model" concept used in petroleum engineering [3] to brain-specific 3-D situations. We obtain a unique linear system of equations describing the discrete network, the continuum and the well model coupling. Results are presented for realistic geometries and compared with a non-homogenized small-scale network model of an idealized periodic capillary network of known permeability. [1] Lorthois & Cassot, J. Theor. Biol. 262, 614-633, 2010. [2] Lorthois et al., Neuroimage 54: 1031-1042, 2011. [3] Peaceman, SPE J. 18, 183-194, 1978.

  15. Thin inclusion approach for modelling of heterogeneous conducting materials

    Energy Technology Data Exchange (ETDEWEB)

    Lavrov, Nikolay [Davenport University, 4801 Oakman Boulevard, Dearborn, MI 48126 (United States); Smirnova, Alevtina; Gorgun, Haluk; Sammes, Nigel [University of Connecticut, Department of Materials Science and Engineering, Connecticut Global Fuel Center, 44 Weaver Road, Unit 5233, Storrs, CT 06269 (United States)

    2006-04-21

    Experimental data show that the heterogeneous nanostructure of solid oxide and polymer electrolyte fuel cells can be approximated as an infinite set of fiber-like or penny-shaped inclusions in a continuous medium. Inclusions can be arranged in a cluster mode and in regular or random order. In the newly proposed theoretical model of nanostructured material, most attention is paid to the small aspect ratio of structural elements as well as to some model problems of electrostatics. The proposed integral equation for the electric potential caused by the charge distributed over a single circular or elliptic cylindrical conductor of finite length, as a single unit of a nanostructured material, has been asymptotically simplified for the small aspect ratio and solved numerically. The result demonstrates that surface density changes slightly in the middle part of the thin domain and has boundary layers localized near the edges. It is anticipated that the contribution of the boundary layer solution to the surface density is significant and cannot be governed by the classic equation for a smooth linear charge. The role of the cross-section shape is also investigated. The proposed approach is sufficiently simple and robust, and allows extension to either regular or irregular systems of various inclusions. This approach can be used for the development of the system of conducting inclusions, which are commonly present in nanostructured materials used for solid oxide and polymer electrolyte fuel cell (PEMFC) materials. (author)

  16. An optimization approach to kinetic model reduction for combustion chemistry

    CERN Document Server

    Lebiedz, Dirk

    2013-01-01

    Model reduction methods are relevant when the computation time of a full convection-diffusion-reaction simulation based on detailed chemical reaction mechanisms is too large. In this article, we review a model reduction approach based on optimization of trajectories and show its applicability to realistic combustion models. As most model reduction methods, it identifies points on a slow invariant manifold based on time scale separation in the dynamics of the reaction system. The numerical approximation of points on the manifold is achieved by solving a semi-infinite optimization problem, where the dynamics enter the problem as constraints. The proof of existence of a solution for an arbitrarily chosen dimension of the reduced model (slow manifold) is extended to the case of realistic combustion models including thermochemistry by considering the properties of proper maps. The model reduction approach is finally applied to three models based on realistic reaction mechanisms: 1. ozone decomposition as a small t...

  17. Hybrid Modelling Approach to Prairie hydrology: Fusing Data-driven and Process-based Hydrological Models

    Science.gov (United States)

    Mekonnen, B.; Nazemi, A.; Elshorbagy, A.; Mazurek, K.; Putz, G.

    2012-04-01

    Modeling the hydrological response in prairie regions, characterized by flat and undulating terrain, and thus, large non-contributing areas, is a known challenge. The hydrological response (runoff) is the combination of the traditional runoff from the hydrologically contributing area and the occasional overflow from the non-contributing area. This study provides a unique opportunity to analyze the issue of fusing the Soil and Water Assessment Tool (SWAT) and Artificial Neural Networks (ANNs) in a hybrid structure to model the hydrological response in prairie regions. A hybrid SWAT-ANN model is proposed, where the SWAT component and the ANN module deal with the effective (contributing) area and the non-contributing area, respectively. The hybrid model is applied to the case study of the Moose Jaw watershed, located in southern Saskatchewan, Canada. As an initial exploration, a comparison between ANN and SWAT models is established based on daily runoff (streamflow) prediction accuracy using multiple error measures. This is done to identify the merits and drawbacks of each modeling approach. It was found that the SWAT model performs better during low flow periods but with degraded efficiency during periods of high flows. The case is different for the ANN model, as ANNs exhibit improved simulation during high flow periods but with biased estimates during low flow periods. The modelling results show that the new hybrid SWAT-ANN model is capable of exploiting the strengths of both SWAT and ANN models in an integrated framework. The new hybrid SWAT-ANN model simulates daily runoff quite satisfactorily, with NSE measures of 0.80 and 0.83 during calibration and validation periods, respectively. Furthermore, an experimental assessment was performed to identify the effects of the ANN training method on the performance of the hybrid model as well as the parametric identifiability. Overall, the results obtained in this study suggest that the fusion
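
    The Nash-Sutcliffe efficiency (NSE) quoted above is a standard skill score. The sketch below computes it for hypothetical observed and simulated runoff series and mimics, very loosely, how a hybrid prediction combines a process-model component with a data-driven correction; it is illustrative only, not the authors' SWAT-ANN code.

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model is
    no better than predicting the observed mean."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1.0 - np.sum((observed - simulated) ** 2) / \
                 np.sum((observed - observed.mean()) ** 2)

rng = np.random.default_rng(4)
obs = 5.0 + np.sin(np.linspace(0, 12, 365)) + 0.3 * rng.standard_normal(365)

# Hypothetical process-model output (systematically biased) and a
# data-driven correction for the part the process model misses.
process = obs + 0.5                                  # stands in for SWAT
correction = -0.5 + 0.1 * rng.standard_normal(365)   # stands in for the ANN
hybrid = process + correction

print(f"NSE, process model alone: {nse(obs, process):.2f}")
print(f"NSE, hybrid prediction:   {nse(obs, hybrid):.2f}")
```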

  18. Molecular Modeling Approach to Cardiovascular Disease Targeting

    Directory of Open Access Journals (Sweden)

    Chandra Sekhar Akula,

    2010-05-01

    Cardiovascular disease, including stroke, is the leading cause of illness and death in India. A number of studies have shown that inflammation of blood vessels is one of the major factors that increase the incidence of heart disease, including arteriosclerosis (clogging of the arteries), stroke, and myocardial infarction or heart attack. Studies have associated obesity and other components of metabolic syndrome, cardiovascular risk factors, with low-grade inflammation. Furthermore, some findings suggest that drugs commonly prescribed to lower cholesterol also reduce this inflammation, suggesting an additional beneficial effect of the statins. The recent development of angiotensin II (AngII) receptor antagonists has made it possible to significantly improve the tolerability profile of this group of drugs while maintaining high clinical efficacy. ACE2 is expressed predominantly in the endothelium and in renal tubular epithelium, and it thus may be an important new cardiovascular target. In the present study we modeled the structure of ACE and designed an inhibitor using ARGUS lab, and the validation of the drug molecule was done based on QSAR properties and Cache for this protein through CADD.

  19. Initial CGE Model Results Summary Exogenous and Endogenous Variables Tests

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, Brian Keith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Boero, Riccardo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rivera, Michael Kelly [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-07

    The following discussion presents initial results of tests of the most recent version of the National Infrastructure Simulation and Analysis Center Dynamic Computable General Equilibrium (CGE) model developed by Los Alamos National Laboratory (LANL). The intent is to test and assess the model’s behavioral properties: the tests evaluated whether the predicted impacts are reasonable from a qualitative perspective, that is, whether a predicted change, be it an increase or decrease in other model variables, is consistent with prior economic intuition and expectations. One purpose of this effort is to determine whether model changes are needed in order to improve the model's behavior qualitatively and quantitatively.

  20. Replacement model of city bus: A dynamic programming approach

    Science.gov (United States)

    Arifin, Dadang; Yusuf, Edhi

    2017-06-01

    This paper aims to develop a replacement model for the city bus fleet operated in Bandung City. The study is driven by real cases encountered by the Damri Company in its efforts to improve services to the public. The replacement model propounds two policy alternatives: first, to maintain or keep the vehicles; and second, to replace them with new ones, taking into account operating costs, revenue, salvage value, and the acquisition cost of a new vehicle. A deterministic dynamic programming approach is used to solve the model. The optimization process was executed heuristically using empirical data from Perum Damri. The output of the model is the replacement schedule and the best policy once a vehicle has passed its economic life. Based on the results, the technical life of a bus is approximately 20 years, while the economic life averages 9 (nine) years. This means that after a bus has been operated for 9 (nine) years, managers should consider a rejuvenation policy.
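
    The keep-versus-replace recursion described above can be sketched as a small backward-induction program. All cost, revenue, and salvage figures below are invented placeholders (not Perum Damri data), and the structure is a generic equipment-replacement formulation rather than the authors' exact model.

```python
# Equipment-replacement dynamic program (illustrative numbers only).
# State: vehicle age; decision each year: KEEP or REPLACE.
HORIZON = 15          # planning years
MAX_AGE = 20          # technical life (years)
ACQUISITION = 100.0   # cost of a new bus

def operating_cost(age):   # rises with age (hypothetical)
    return 10.0 + 2.0 * age

def revenue(age):          # falls with age (hypothetical)
    return 40.0 - 1.0 * age

def salvage(age):          # falls with age (hypothetical)
    return max(ACQUISITION - 8.0 * age, 5.0)

# value[t][age] = best net profit achievable from year t to the horizon
value = [[0.0] * (MAX_AGE + 1) for _ in range(HORIZON + 1)]
policy = [[None] * (MAX_AGE + 1) for _ in range(HORIZON)]

for t in range(HORIZON - 1, -1, -1):
    for age in range(MAX_AGE + 1):
        keep = float("-inf")   # keeping past the technical life is forbidden
        if age < MAX_AGE:
            keep = revenue(age) - operating_cost(age) + value[t + 1][age + 1]
        replace = (salvage(age) - ACQUISITION
                   + revenue(0) - operating_cost(0) + value[t + 1][1])
        value[t][age], policy[t][age] = max(
            (keep, "KEEP"), (replace, "REPLACE"))

# Follow the optimal policy for a bus that is new in year 0.
age = 0
for t in range(HORIZON):
    action = policy[t][age]
    print(f"year {t:2d}, age {age:2d}: {action}")
    age = 1 if action == "REPLACE" else age + 1
```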

  1. Data Analysis: A Model Comparison Approach, Second Edition

    CERN Document Server

    Judd, Charles M; Ryan, Carey S

    2008-01-01

    This completely rewritten classic text features many new examples, insights and topics including mediational, categorical, and multilevel models. Substantially reorganized, this edition provides a briefer, more streamlined examination of data analysis. Noted for its model-comparison approach and unified framework based on the general linear model, the book provides readers with a greater understanding of a variety of statistical procedures. This consistent framework, including consistent vocabulary and notation, is used throughout to develop fewer but more powerful model building techniques. T
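
    A small sketch of the model-comparison logic the book is noted for: fit a compact and an augmented linear model to invented data and compare them with an F-test on the reduction in residual sum of squares.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical data: y depends on x1 but not on x2.
n = 100
x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
y = 2.0 + 1.5 * x1 + rng.standard_normal(n)

def rss(X, y):
    """Residual sum of squares of an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

compact = np.column_stack([np.ones(n), x1])          # Model C
augmented = np.column_stack([np.ones(n), x1, x2])    # Model A
rss_c, rss_a = rss(compact, y), rss(augmented, y)

# F = ((RSS_C - RSS_A) / (PA - PC)) / (RSS_A / (n - PA))
pc, pa = compact.shape[1], augmented.shape[1]
F = ((rss_c - rss_a) / (pa - pc)) / (rss_a / (n - pa))
print(f"F(1, {n - pa}) = {F:.2f}  (small F: prefer the compact model)")
```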

  2. Higher plant modelling for life support applications: first results of a simple mechanistic model

    Science.gov (United States)

    Hezard, Pauline; Dussap, Claude-Gilles; Sasidharan L, Swathy

    2012-07-01

    In the case of closed ecological life support systems, air and water regeneration and food production are performed using microorganisms and higher plants. Wheat, rice, soybean, lettuce, tomato or other types of edible annual plants produce fresh food while recycling CO2 into breathable oxygen. Additionally, they evaporate a large quantity of water, which can be condensed and used as potable water. This shows that the recycling functions of air revitalization and food production are completely linked. Consequently, the control of a growth chamber for higher plant production has to be performed with efficient mechanistic models, in order to ensure a realistic prediction of plant behaviour and water and gas recycling whatever the environmental conditions. Purely mechanistic models of plant production in controlled environments are not available yet. This is the reason why new models must be developed and validated. This work concerns the design and test of a simplified version of a mathematical model coupling plant architecture and mass balance purposes in order to compare its results with available data of lettuce grown in closed and controlled chambers. The carbon exchange rate, water absorption and evaporation rate, biomass fresh weight as well as leaf surface are modelled and compared with available data. The model consists of four modules. The first one evaluates plant architecture, like total leaf surface, leaf area index and stem length data. The second one calculates the rate of matter and energy exchange depending on architectural and environmental data: light absorption in the canopy, CO2 uptake or release, water uptake and evapotranspiration. The third module evaluates which of the previous rates is limiting overall biomass growth; and the last one calculates biomass growth rate depending on matter exchange rates, using a global stoichiometric equation. All these rates are a set of differential equations, which are integrated over time in order to provide

  3. Modeling tropical river runoff: A time-dependent approach

    Institute of Scientific and Technical Information of China (English)

    Rashmi Nigam; Sudhir Nigam; Sushil K.Mittal

    2014-01-01

    Forecasting of rainfall and subsequent river runoff is important for many operational problems and applications related to hydrology. Modeling river runoff often requires rigorous mathematical analysis of vast historical data to arrive at reasonable conclusions. In this paper we have applied the stochastic method to characterize and predict river runoff of the perennial Kulfo River in southern Ethiopia. The time-series-based auto-regressive integrated moving average (ARIMA) approach is applied to mean monthly runoff data with 10- and 20-year spans. The varying length of the input runoff data is shown to influence the forecasting efficiency of the stochastic process. Preprocessing of the runoff time series data indicated that the data do not follow a seasonal pattern. Our forecasts were made using parsimonious non-seasonal ARIMA models and the results were compared to actual 10-year and 20-year mean monthly runoff data of the Kulfo River. Our results indicate that river runoff forecasts based upon the 10-year data are more accurate and efficient than those of the model based on the 20-year time series.
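
    A hedged sketch of the described ARIMA workflow, assuming the statsmodels package is available; the synthetic series stands in for the Kulfo River runoff data, and the (1, 1, 1) order is illustrative rather than the authors' fitted order.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(5)

# Synthetic stand-in for a mean monthly runoff series (m^3/s).
n = 240  # 20 years of monthly values
noise = rng.standard_normal(n)
runoff = 50 + np.cumsum(0.1 * noise) + 5 * noise  # non-seasonal, drifting

# Fit a parsimonious non-seasonal ARIMA(p, d, q) model, as in the study.
model = ARIMA(runoff, order=(1, 1, 1))
fitted = model.fit()

# Forecast the next 12 months; AIC can be used to compare candidate orders.
forecast = fitted.forecast(steps=12)
print("AIC:", round(fitted.aic, 1))
print("12-month forecast:", np.round(forecast, 1))
```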

  4. Simulating lightning into the RAMS model: implementation and preliminary results

    Directory of Open Access Journals (Sweden)

    S. Federico

    2014-05-01

    This paper shows the results of a tailored version of a previously published methodology, designed to simulate lightning activity, implemented into the Regional Atmospheric Modeling System (RAMS). The method gives the flash density at the resolution of the RAMS grid scale, allowing for a detailed analysis of the evolution of simulated lightning activity. The system is applied in detail to two case studies that occurred over the Lazio Region, in Central Italy. Simulations are compared with the lightning activity detected by the LINET network. The cases refer to two thunderstorms of different intensity. Results show that the model predicts both cases reasonably well and that the lightning activity is well reproduced, especially for the most intense case. However, there are errors in the timing and positioning of the convection, whose magnitude depends on the case study, and these are mirrored in timing and positioning errors of the lightning distribution. To assess the performance of the methodology objectively, standard scores are presented for four additional case studies. Scores show the ability of the methodology to simulate the daily lightning activity for different spatial scales and for two different minimum thresholds of flash number density. The performance decreases at finer spatial scales and for higher thresholds. The comparison of simulated and observed lightning activity is an immediate and powerful tool to assess the model's ability to reproduce the intensity and the evolution of the convection. This shows the importance of using computationally efficient lightning schemes, such as the one described in this paper, in forecast models.

  5. Modeling air quality over China: Results from the Panda project

    Science.gov (United States)

    Katinka Petersen, Anna; Bouarar, Idir; Brasseur, Guy; Granier, Claire; Xie, Ying; Wang, Lili; Wang, Xuemei

    2015-04-01

    China faces severe air pollution problems related to rapid economic development over the past decade and increasing demand for energy. Air quality monitoring stations often report high levels of particulate matter and ozone all over the country. Given its long-term health impacts, air pollution has become a pressing problem not only in China but also in other Asian countries. The PANDA project is the result of cooperation between scientists from Europe and China who have joined efforts to better understand the processes controlling air pollution in China, improve methods for monitoring air quality, and elaborate indicators in support of European and Chinese policies. A modeling system for air pollution is being set up within the PANDA project, including advanced global (MACC, EMEP) and regional (WRF-Chem, EMEP) meteorological and chemical models to analyze and monitor air quality in China. The poster describes the accomplishments of the first year of the project. Model simulations for January and July 2010 are evaluated against satellite measurements (SCIAMACHY NO2 and MOPITT CO) and in-situ data (O3, CO, NOx, PM10 and PM2.5) observed at several surface stations in China. Using the WRF-Chem model, we investigate the sensitivity of the model performance to emissions (MACCity, HTAPv2), horizontal resolution (60 km, 20 km), and the choice of initial and boundary conditions.

  6. Further results on stabilization for interval time-delay systems via new integral inequality approach.

    Science.gov (United States)

    Li, Zhichen; Bai, Yan; Huang, Congzhi; Yan, Huaicheng

    2017-05-01

    This paper investigates the stability and stabilization problems for interval time-delay systems. By introducing a new delay partitioning approach, various Lyapunov-Krasovskii functionals with triple-integral terms are established to make full use of system information. In order to reduce conservatism, improved integral inequalities are developed for the estimation of double integrals, which markedly outperform the Jensen and Wirtinger ones. In particular, the relationship between the time delay and each subinterval is taken into consideration. The resulting stability criteria are less conservative than some recent methods. Based on the derived condition, the state-feedback controller design approach is also given. Finally, numerical examples and an application to an inverted pendulum system are provided to illustrate the effectiveness of the proposed approaches.
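
    For context (standard results reproduced as a sketch, not the paper's new inequalities), the Jensen and Wirtinger-based bounds that such criteria refine read, for a matrix R = R^T > 0 and delay h > 0:

```latex
% Notation: \nu_1 = x(t) - x(t-h),
%           \nu_2 = x(t) + x(t-h) - \tfrac{2}{h}\int_{t-h}^{t} x(s)\,ds.

% Jensen's integral inequality:
-\int_{t-h}^{t} \dot{x}^{\top}(s)\, R\, \dot{x}(s)\, ds
    \le -\tfrac{1}{h}\, \nu_1^{\top} R\, \nu_1

% Wirtinger-based integral inequality (a tighter bound):
-\int_{t-h}^{t} \dot{x}^{\top}(s)\, R\, \dot{x}(s)\, ds
    \le -\tfrac{1}{h}\, \nu_1^{\top} R\, \nu_1
        - \tfrac{3}{h}\, \nu_2^{\top} R\, \nu_2
```

    The extra negative term involving the integral of the state is what makes the Wirtinger-based bound less conservative, and the paper's improved inequalities push further in the same direction.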

  7. Modelling and Generating Ajax Applications: A Model-Driven Approach

    NARCIS (Netherlands)

    Gharavi, V.; Mesbah, A.; Van Deursen, A.

    2008-01-01

    Preprint of paper published in: IWWOST 2008 - 7th International Workshop on Web-Oriented Software Technologies, 14-15 July 2008. AJAX is a promising and rapidly evolving approach for building highly interactive web applications. In AJAX, user interface components and the event-based interaction between…

  9. Metal Mixture Modeling Evaluation project: 2. Comparison of four modeling approaches

    Science.gov (United States)

    Farley, Kevin J.; Meyer, Joe; Balistrieri, Laurie S.; DeSchamphelaere, Karl; Iwasaki, Yuichi; Janssen, Colin; Kamo, Masashi; Lofts, Steve; Mebane, Christopher A.; Naito, Wataru; Ryan, Adam C.; Santore, Robert C.; Tipping, Edward

    2015-01-01

    As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the U.S. Geological Survey (USA), HDR|HydroQual, Inc. (USA), and the Centre for Ecology and Hydrology (UK) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME Workshop in Brussels, Belgium (May 2012), is provided herein. Overall, the models were found to be similar in structure (free ion activities computed by WHAM; specific or non-specific binding of metals/cations in or on the organism; specification of metal potency factors and/or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single versus multiple types of binding sites on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong inter-relationships among the model parameters (log K_M values, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed.

  10. Physiological fidelity or model parsimony? The relative performance of reverse-toxicokinetic modeling approaches.

    Science.gov (United States)

    Rowland, Michael A; Perkins, Edward J; Mayo, Michael L

    2017-03-11

    Physiologically-based toxicokinetic (PBTK) models are often developed to facilitate in vitro to in vivo extrapolation (IVIVE) using a top-down, compartmental approach, favoring architectural simplicity over physiological fidelity despite the lack of general guidelines relating model design to dynamical predictions. Here we explore the impact of design choice (high vs. low fidelity) on chemical distribution throughout an animal's organ system. We contrast transient dynamics and steady states of three previously proposed PBTK models of varying complexity in response to chemical exposure. The steady states for each model were determined analytically to predict exposure conditions from tissue measurements. Steady state whole-body concentrations differ between models, despite identical environmental conditions, which originates from the varying levels of physiological fidelity captured by the models. These differences affect the relative predictive accuracy of the inverted models used in exposure reconstruction to link effects-based exposure data with whole-organism response thresholds obtained from in vitro assay measurements. Our results demonstrate how disregarding physiological fidelity in favor of simpler models affects the internal dynamics and steady state estimates for chemical accumulation within tissues, which, in turn, poses significant challenges for the exposure reconstruction efforts that underlie many IVIVE methods. Developing standardized systems-level models for ecological organisms would not only ensure predictive consistency among future modeling studies, but also ensure pragmatic extrapolation of in vivo effects from in vitro data or modeling exposure-response relationships.
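    To make the forward/inverse distinction concrete, a one-compartment toy model (not one of the three PBTK models compared in the paper; rates and units are invented) has an analytic steady state C_ss = (k_u/k_e) C_w, so exposure reconstruction is a simple inversion:

    ```python
    from scipy.integrate import solve_ivp

    # Hypothetical one-compartment toxicokinetic model (illustrative only):
    # dC/dt = k_u * C_w - k_e * C, with water concentration C_w held constant.
    k_u, k_e = 0.8, 0.2   # uptake and elimination rates (1/day), assumed values
    C_w = 5.0             # exposure (water) concentration, assumed units

    def rhs(t, C):
        return [k_u * C_w - k_e * C[0]]

    sol = solve_ivp(rhs, (0.0, 40.0), [0.0])

    C_ss = k_u * C_w / k_e        # analytic steady state of the forward model
    C_w_hat = k_e * C_ss / k_u    # "inverted" model: reconstruct exposure
    print(f"steady-state tissue conc: {sol.y[0, -1]:.3f} (analytic {C_ss:.3f})")
    print(f"reconstructed exposure:   {C_w_hat:.3f}")
    ```

    With several compartments, the steady state is the solution of a linear system instead of a single division, and model-to-model differences in that system are exactly what drives the reconstruction discrepancies the paper reports.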

  11. Exact results for the one dimensional asymmetric exclusion model

    Science.gov (United States)

    Derrida, B.; Evans, M. R.; Hakim, V.; Pasquier, V.

    1993-11-01

    The asymmetric exclusion model describes a system of particles hopping in a preferred direction with hard core repulsion. These particles can be thought of as charged particles in a field, as steps of an interface, as cars in a queue. Several exact results concerning the steady state of this system have been obtained recently. The solution consists of representing the weights of the configurations in the steady state as products of non-commuting matrices.
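    The matrix-product form referred to here can be written out explicitly (standard notation for the open-boundary model with injection rate α and extraction rate β; our transcription, not a quotation from the paper): the steady-state weight of a configuration (τ_1, …, τ_N), with τ_i = 1 for an occupied site, is

    $$P(\tau_1,\dots,\tau_N) \;\propto\; \langle W|\prod_{i=1}^{N}\bigl[\tau_i D + (1-\tau_i)E\bigr]|V\rangle,$$

    where the non-commuting matrices D, E and the boundary vectors satisfy the algebra

    $$DE = D + E, \qquad \langle W|E = \tfrac{1}{\alpha}\langle W|, \qquad D|V\rangle = \tfrac{1}{\beta}|V\rangle.$$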

  12. Exact results for the one dimensional asymmetric exclusion model

    Energy Technology Data Exchange (ETDEWEB)

    Derrida, B.; Evans, M.R.; Pasquier, V. [CEA Centre d`Etudes de Saclay, 91 - Gif-sur-Yvette (France). Service de Physique Theorique; Hakim, V. [Ecole Normale Superieure, 75 - Paris (France)

    1993-12-31

    The asymmetric exclusion model describes a system of particles hopping in a preferred direction with hard core repulsion. These particles can be thought of as charged particles in a field, as steps of an interface, as cars in a queue. Several exact results concerning the steady state of this system have been obtained recently. The solution consists of representing the weights of the configurations in the steady state as products of non-commuting matrices. (author).

  13. APPLYING LOGISTIC REGRESSION MODEL TO THE EXAMINATION RESULTS DATA

    Directory of Open Access Journals (Sweden)

    Goutam Saha

    2011-01-01

    Full Text Available The binary logistic regression model is used to analyze the school examination results (scores) of 1002 students. The analysis is performed on the basis of the independent variables, viz. gender, medium of instruction, type of school, category of school, board of examination, and location of school, where scores or marks are taken as the dependent variable. The odds ratio analysis compares the scores obtained in two examinations, viz. matriculation and higher secondary.
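    A minimal sketch of this type of analysis in Python (column names and data are hypothetical; statsmodels is one common choice, not necessarily what the author used):

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data: pass/fail outcome with categorical predictors,
    # mirroring the variables named in the abstract.
    rng = np.random.default_rng(0)
    n = 1002
    df = pd.DataFrame({
        "passed": rng.integers(0, 2, n),
        "gender": rng.choice(["M", "F"], n),
        "medium": rng.choice(["english", "regional"], n),
        "location": rng.choice(["urban", "rural"], n),
    })

    model = smf.logit("passed ~ gender + medium + location", data=df).fit()
    odds_ratios = np.exp(model.params)  # exponentiated coefficients = odds ratios
    print(odds_ratios)
    ```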

  14. Analytical results for a three-phase traffic model.

    Science.gov (United States)

    Huang, Ding-wei

    2003-10-01

    We study analytically a cellular automaton model which is able to present three different traffic phases on a homogeneous highway. The characteristics displayed in the fundamental diagram can be well discerned by analyzing the evolution of density configurations. Analytical expressions for the traffic flow and shock speed are obtained. The synchronized flow in the intermediate-density region is the result of an aggressive driving scheme and is determined mainly by the stochastic noise.
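    The authors' three-phase rules are not reproduced in this record, but a generic Nagel-Schreckenberg-style cellular automaton conveys how such models are simulated (our sketch, with assumed parameter values):

    ```python
    import numpy as np

    def nasch_step(pos, vel, L, vmax=5, p_slow=0.3, rng=None):
        """One parallel update of a single-lane circular traffic CA
        (generic Nagel-Schreckenberg rules, not the authors' model)."""
        rng = rng or np.random.default_rng()
        order = np.argsort(pos)
        pos, vel = pos[order], vel[order]
        gaps = (np.roll(pos, -1) - pos - 1) % L   # free cells ahead of each car
        vel = np.minimum(vel + 1, vmax)           # 1) accelerate
        vel = np.minimum(vel, gaps)               # 2) brake to avoid collisions
        slow = rng.random(len(vel)) < p_slow
        vel[slow] = np.maximum(vel[slow] - 1, 0)  # 3) random slowdown
        return (pos + vel) % L, vel               # 4) move

    rng = np.random.default_rng(1)
    L, N = 200, 50                                # density = 0.25
    pos = np.sort(rng.choice(L, N, replace=False))
    vel = np.zeros(N, dtype=int)
    for _ in range(500):
        pos, vel = nasch_step(pos, vel, L, rng=rng)
    print("flow =", vel.mean() * N / L)           # flux = density * mean speed
    ```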

  15. A novel approach to modeling and diagnosing the cardiovascular system

    Energy Technology Data Exchange (ETDEWEB)

    Keller, P.E.; Kangas, L.J.; Hashem, S.; Kouzes, R.T. [Pacific Northwest Lab., Richland, WA (United States); Allen, P.A. [Life Link, Richland, WA (United States)

    1995-07-01

    A novel approach to modeling and diagnosing the cardiovascular system is introduced. A model exhibits a subset of the dynamics of the cardiovascular behavior of an individual by using a recurrent artificial neural network. Potentially, a model will be incorporated into a cardiovascular diagnostic system. This approach is unique in that each cardiovascular model is developed from physiological measurements of an individual. Any differences between the modeled variables and the variables of an individual at a given time are used for diagnosis. This approach also exploits sensor fusion to optimize the utilization of biomedical sensors. The advantage of sensor fusion has been demonstrated in applications including control and diagnostics of mechanical and chemical processes.

  16. Nanog Dynamics in Mouse Embryonic Stem Cells: Results from Systems Biology Approaches

    Directory of Open Access Journals (Sweden)

    Lucia Marucci

    2017-01-01

    Full Text Available Mouse embryonic stem cells (mESCs, derived from the inner cell mass of the blastocyst, are pluripotent stem cells having self-renewal capability and the potential of differentiating into every cell type under the appropriate culture conditions. An increasing number of reports have been published to uncover the molecular mechanisms that orchestrate pluripotency and cell fate specification using combined computational and experimental methodologies. Here, we review recent systems biology approaches to describe the causes and functions of gene expression heterogeneity and complex temporal dynamics of pluripotency markers in mESCs under uniform culture conditions. In particular, we focus on the dynamics of Nanog, a key regulator of the core pluripotency network and of mESC fate. We summarize the strengths and limitations of different experimental and modeling approaches and discuss how various strategies could be used.

  17. Challenges in validating model results for first year ice

    Science.gov (United States)

    Melsom, Arne; Eastwood, Steinar; Xie, Jiping; Aaboe, Signe; Bertino, Laurent

    2017-04-01

    In order to assess the quality of model results for the distribution of first-year ice, a comparison with a product based on observations from satellite-borne instruments has been performed. Such a comparison is not straightforward due to the contrasting algorithms that are used in the model product and the remote sensing product. The implementation of the validation is discussed in light of the differences between this set of products, and validation results are presented. The model product is the daily updated 10-day forecast from the Arctic Monitoring and Forecasting Centre in CMEMS. The forecasts are produced with the assimilative ocean prediction system TOPAZ. Presently, observations of sea ice concentration and sea ice drift are introduced in the assimilation step, but data for sea ice thickness and ice age (or roughness) are not included. The model computes the age of the ice by recording and updating the time passed since ice formation as sea ice grows and deteriorates while it is advected inside the model domain. Ice that is younger than 365 days is classified as first-year ice, and the fraction of first-year ice is recorded as a tracer in each grid cell. The Ocean and Sea Ice Thematic Assembly Centre in CMEMS redistributes a daily product from the EUMETSAT OSI SAF of gridded sea ice conditions, which includes "ice type", a representation of the separation between regions covered by first-year ice and those covered by multi-year ice. The ice type is parameterized based on data for the gradient ratio GR(19,37) from SSMIS observations and on the ASCAT backscatter parameter. This product also includes information on ambiguity in the processing of the remote sensing data and on the product's confidence level, which has a strong seasonal dependency.
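    The ice-age bookkeeping described in this record reduces to a per-cell tracer update plus a threshold classification; a minimal sketch, with assumed variable names and a 15% concentration threshold, advection omitted:

    ```python
    import numpy as np

    def update_ice_age(age_days, ice_conc, dt_days=1.0, conc_threshold=0.15):
        """Advance a per-grid-cell sea-ice age tracer by one time step.
        Age grows where ice persists and resets where ice disappears."""
        has_ice = ice_conc >= conc_threshold
        return np.where(has_ice, age_days + dt_days, 0.0)

    def first_year_fraction(age_days, ice_conc, conc_threshold=0.15):
        """Fraction of ice-covered cells holding ice younger than a year."""
        has_ice = ice_conc >= conc_threshold
        fyi = has_ice & (age_days < 365.0)
        return fyi.sum() / max(has_ice.sum(), 1)
    ```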

  18. Mathematical models for therapeutic approaches to control HIV disease transmission

    CERN Document Server

    Roy, Priti Kumar

    2015-01-01

    The book discusses different therapeutic approaches based on different mathematical models to control the HIV/AIDS disease transmission. It uses clinical data, collected from different cited sources, to formulate the deterministic as well as stochastic mathematical models of HIV/AIDS. It provides complementary approaches, from deterministic and stochastic points of view, to optimal control strategy with perfect drug adherence and also tries to seek viewpoints of the same issue from different angles with various mathematical models to computer simulations. The book presents essential methods and techniques for students who are interested in designing epidemiological models on HIV/AIDS. It also guides research scientists, working in the periphery of mathematical modeling, and helps them to explore a hypothetical method by examining its consequences in the form of a mathematical modelling and making some scientific predictions. The model equations, mathematical analysis and several numerical simulations that are...

  19. Asteroid modeling for testing spacecraft approach and landing.

    Science.gov (United States)

    Martin, Iain; Parkes, Steve; Dunstan, Martin; Rowell, Nick

    2014-01-01

    Spacecraft exploration of asteroids presents autonomous-navigation challenges that can be aided by virtual models to test and develop guidance and hazard-avoidance systems. Researchers have extended and applied graphics techniques to create high-resolution asteroid models to simulate cameras and other spacecraft sensors approaching and descending toward asteroids. A scalable model structure with evenly spaced vertices simplifies terrain modeling, avoids distortion at the poles, and enables triangle-strip definition for efficient rendering. To create the base asteroid models, this approach uses two-phase Poisson faulting and Perlin noise. It creates realistic asteroid surfaces by adding both crater models adapted from lunar terrain simulation and multiresolution boulders. The researchers evaluated the virtual asteroids by comparing them with real asteroid images, examining the slope distributions, and applying a surface-relative feature-tracking algorithm to the models.

  20. A model-driven approach to information security compliance

    Science.gov (United States)

    Correia, Anacleto; Gonçalves, António; Teodoro, M. Filomena

    2017-06-01

    The availability, integrity and confidentiality of information are fundamental to the long-term survival of any organization. Information security is a complex issue that must be approached holistically, combining assets that support corporate systems in an extended network of business partners, vendors, customers and other stakeholders. This paper addresses the conception and implementation of information security systems, conforming to the ISO/IEC 27000 set of standards, using the model-driven approach. The process begins with the conception of a domain-level model (computation independent model) based on the information security vocabulary present in the ISO/IEC 27001 standard. Based on this model, after embedding in the model mandatory rules for attaining ISO/IEC 27001 conformance, a platform independent model is derived. Finally, a platform specific model serves as the basis for testing the compliance of information security systems with the ISO/IEC 27000 set of standards.

  1. Heuristic approaches to models and modeling in systems biology

    NARCIS (Netherlands)

    MacLeod, Miles

    2016-01-01

    Prediction and control sufficient for reliable medical and other interventions are prominent aims of modeling in systems biology. The short-term attainment of these goals has played a strong role in projecting the importance and value of the field. In this paper I identify the standard models must m

  2. A Model Management Approach for Co-Simulation Model Evaluation

    NARCIS (Netherlands)

    Zhang, X.C.; Broenink, Johannes F.; Filipe, Joaquim; Kacprzyk, Janusz; Pina, Nuno

    2011-01-01

    Simulating formal models is a common means of validating the correctness of a system design and reducing time-to-market. In most embedded control system designs, multiple engineering disciplines and various domain-specific models are often involved, such as mechanical, control, software

  3. A computational toy model for shallow landslides: Molecular dynamics approach

    Science.gov (United States)

    Martelloni, Gianluca; Bagnoli, Franco; Massaro, Emanuele

    2013-09-01

    The aim of this paper is to propose a 2D computational algorithm for modeling the triggering and propagation of shallow landslides caused by rainfall. We use a molecular dynamics (MD) approach, similar to the discrete element method (DEM), that is suitable for modeling granular material and for observing the trajectory of a single particle, so as to identify its dynamical properties. We consider that the triggering of shallow landslides is caused by the decrease of the static friction along the sliding surface due to water infiltration by rainfall. Triggering is thus governed by the following two conditions: (a) a threshold speed of the particles and (b) a condition on the static friction between the particles and the slope surface, based on the Mohr-Coulomb failure criterion. The latter static condition is used in the geotechnical model to estimate the possibility of landslide triggering. The interaction force between particles is modeled, in the absence of experimental data, by means of a potential similar to the Lennard-Jones one. Viscosity is also introduced in the model, and for a large range of values of the model's parameters we observe a characteristic velocity pattern, with acceleration increments, typical of real landslides. The results of the simulations are quite promising: the energy and triggering-time distributions of local avalanches show a power law, analogous to the observed Gutenberg-Richter and Omori power law distributions for earthquakes. Finally, it is possible to apply the method of the inverse surface displacement velocity [4] for predicting the failure time.
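    A toy transcription of the two triggering ingredients, static friction via Mohr-Coulomb with rainfall entering through pore pressure, and a Lennard-Jones-style pair force (function names and parameter values are invented for illustration):

    ```python
    import numpy as np

    def lj_force(r, epsilon=1.0, sigma=1.0):
        """Magnitude of a Lennard-Jones-style pair force at separation r."""
        return 24 * epsilon * (2 * (sigma / r) ** 12 - (sigma / r) ** 6) / r

    def mohr_coulomb_fails(shear, normal, cohesion, phi, pore_pressure):
        """Mohr-Coulomb failure check; infiltration raises pore pressure,
        which reduces the effective normal stress and hence the strength."""
        strength = cohesion + (normal - pore_pressure) * np.tan(phi)
        return shear >= strength

    # Rising pore pressure from infiltration eventually triggers failure.
    for p in np.linspace(0.0, 40.0, 5):
        print(p, mohr_coulomb_fails(30.0, 80.0, 5.0, np.deg2rad(30), p))
    ```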

  4. Ontological Approach for Effective Generation of Concept Based User Profiles to Personalize Search Results

    Directory of Open Access Journals (Sweden)

    R. S.D. Wahidabanu

    2012-01-01

    Full Text Available Problem statement: Ontological user profile generation is a semantic approach to deriving richer, concept-based user profiles. It depends on the semantic relationships among concepts. This study focuses on ontology to derive concept-oriented user profiles based on user search queries and clicked documents. It proposes a topic ontology from which concept-based user profiles can be derived more independently, making it possible to improve search engine processing. Approach: The process identifies individual users' interests and the topical categories of those interests, and determines the relationships among concepts. The proposed approach is based on topic ontology for concept-based user profile generation from search engine logs. A spreading activation algorithm is used to optimize the relevance of search engine results. The topic ontology is constructed to identify user interests by assigning activation values and exploring the topical similarity of user preferences. Results: A spreading activation algorithm is proposed to update and maintain the interest scores. User interests may change over time, which is reflected in the user profiles. According to profile changes, the search engine is personalized by assigning interest scores and weights to topics. Conclusion: Experiments illustrate the efficacy of the proposed approach; with the help of the topic ontology, user preferences can be identified correctly. The approach improves the quality of search engine personalization by identifying users' precise needs.
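    A minimal spreading-activation sketch over a concept graph (generic algorithm; the node names, decay and threshold values are illustrative, not taken from the paper):

    ```python
    from collections import defaultdict

    def spread_activation(graph, seeds, decay=0.5, threshold=0.05, max_iters=10):
        """Propagate interest scores from seed concepts through weighted edges."""
        activation = defaultdict(float, seeds)
        frontier = dict(seeds)
        for _ in range(max_iters):
            next_frontier = {}
            for node, value in frontier.items():
                for neighbor, weight in graph.get(node, []):
                    delta = value * weight * decay
                    if delta > threshold:
                        activation[neighbor] += delta
                        next_frontier[neighbor] = delta
            if not next_frontier:
                break
            frontier = next_frontier
        return dict(activation)

    # Toy topic ontology: edges carry semantic-relatedness weights.
    ontology = {
        "python": [("programming", 0.9), ("snakes", 0.2)],
        "programming": [("software", 0.8)],
    }
    print(spread_activation(ontology, {"python": 1.0}))
    ```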

  5. A Model-Driven Approach for Telecommunications Network Services Definition

    Science.gov (United States)

    Chiprianov, Vanea; Kermarrec, Yvon; Alff, Patrick D.

    The present-day telecommunications market imposes a short concept-to-market time on service providers. To reduce it, we propose a computer-aided, model-driven, service-specific tool with support for collaborative work and for checking properties on models. We started by defining a prototype of the meta-model (MM) of the service domain. Using this prototype, we defined a simple graphical modeling language specific to service designers. We are currently enlarging the MM of the domain using model transformations from Network Abstraction Layers (NALs). In the future, we will investigate approaches to ensure support for collaborative work and for checking properties on models.

  6. Determination of errors in derived magnetic field directions in geosynchronous orbit: results from a statistical approach

    Science.gov (United States)

    Chen, Yue; Cunningham, Gregory; Henderson, Michael

    2016-09-01

    This study aims to statistically estimate the errors in local magnetic field directions that are derived from electron directional distributions measured by Los Alamos National Laboratory geosynchronous (LANL GEO) satellites. First, by comparing derived and measured magnetic field directions along the GEO orbit to those calculated from three selected empirical global magnetic field models (including a static Olson and Pfitzer 1977 quiet magnetic field model, a simple dynamic Tsyganenko 1989 model, and a sophisticated dynamic Tsyganenko 2001 storm model), it is shown that the errors in both derived and modeled directions are at least comparable. Second, using a newly developed proxy method as well as comparing results from empirical models, we are able to provide for the first time circumstantial evidence showing that derived magnetic field directions should statistically match the real magnetic directions better, with averaged errors ~5°. In addition, our results suggest that the errors in derived magnetic field directions do not depend much on magnetospheric activity, in contrast to the empirical field models. Finally, as applications of the above conclusions, we show examples of electron pitch angle distributions observed by LANL GEO and also take the derived magnetic field directions as the real ones so as to test the performance of empirical field models along the GEO orbits, with results suggesting dependence on solar cycles as well as satellite locations. This study demonstrates the validity and value of the method that infers local magnetic field directions from particle spin-resolved distributions.

  7. Modeling quasi-static poroelastic propagation using an asymptotic approach

    Energy Technology Data Exchange (ETDEWEB)

    Vasco, D.W.

    2007-11-01

    Since the formulation of poroelasticity (Biot 1941) and its reformulation (Rice & Cleary 1976), there have been many efforts to solve the coupled system of equations. Perhaps because of the complexity of the governing equations, most of the work has been directed towards finding numerical solutions. For example, Lewis and co-workers published early papers (Lewis & Schrefler 1978; Lewis, Schrefler & Simoni 1991) concerned with finite-element methods for computing consolidation, subsidence, and examining the importance of coupling. Other early work dealt with flow in a deformable fractured medium (Narasimhan & Witherspoon 1976; Noorishad, Tsang & Witherspoon 1984). This effort eventually evolved into a general numerical approach for modeling fluid flow and deformation (Rutqvist, Wu, Tsang & Bodvarsson 2002). As a result of this and other work, numerous coupled, computer-based algorithms have emerged, typically falling into one of three categories: one-way coupling, loose coupling, and full coupling (Minkoff, Stone, Bryant, Peszynska & Wheeler 2003). In one-way coupling the fluid flow is modeled using a conventional numerical simulator and the resulting change in fluid pressures simply drives the deformation. In loosely coupled modeling distinct geomechanical and fluid flow simulators are run for a sequence of time steps and at the conclusion of each step information is passed between the simulators. In full coupling, the fluid flow and geomechanics equations are solved simultaneously at each time step (Lewis & Sukirman 1993; Lewis & Ghafouri 1997; Gutierrez & Lewis 2002). One disadvantage of a purely numerical approach to solving the governing equations of poroelasticity is that it is not clear how the various parameters interact and influence the solution. Analytic solutions have an advantage in that respect; the relationship between the medium and fluid properties is clear from the form of the
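    The three coupling strategies differ only in how often information passes between the flow and geomechanics solvers. A zero-dimensional toy of loose coupling (entirely invented coefficients, standing in for real simulators):

    ```python
    # Toy 0-D stand-ins for flow and mechanics solvers, coupled loosely:
    # pressure relaxes toward a value set by volumetric strain, and strain
    # responds to pressure (illustrative coefficients, not a real simulator).
    def solve_flow(p, eps_v, dt, k=0.5):
        return p + dt * k * (-0.3 * eps_v - p + 1.0)

    def solve_mech(p, c=0.1):
        return c * p  # volumetric strain induced by pore pressure

    p, eps_v = 0.0, 0.0
    for step in range(50):
        for _ in range(2):          # loose coupling: fixed exchanges per step
            p = solve_flow(p, eps_v, dt=0.1)
            eps_v = solve_mech(p)
    print(f"p={p:.4f}, eps_v={eps_v:.4f}")
    ```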

  8. Box-wing model approach for solar radiation pressure modelling in a multi-GNSS scenario

    Science.gov (United States)

    Tobias, Guillermo; Jesús García, Adrián

    2016-04-01

    The solar radiation pressure force is the largest orbital perturbation after the gravitational effects and the major error source affecting GNSS satellites. A wide range of approaches have been developed over the years for the modelling of this non gravitational effect as part of the orbit determination process. These approaches are commonly divided into empirical, semi-analytical and analytical, where their main difference relies on the amount of knowledge of a-priori physical information about the properties of the satellites (materials and geometry) and their attitude. It has been shown in the past that the pre-launch analytical models fail to achieve the desired accuracy mainly due to difficulties in the extrapolation of the in-orbit optical and thermic properties, the perturbations in the nominal attitude law and the aging of the satellite's surfaces, whereas empirical models' accuracies strongly depend on the amount of tracking data used for deriving the models, and whose performances are reduced as the area to mass ratio of the GNSS satellites increases, as it happens for the upcoming constellations such as BeiDou and Galileo. This paper proposes to use basic box-wing model for Galileo complemented with empirical parameters, based on the limited available information about the Galileo satellite's geometry. The satellite is modelled as a box, representing the satellite bus, and a wing representing the solar panel. The performance of the model will be assessed for GPS, GLONASS and Galileo constellations. The results of the proposed approach have been analyzed over a one year period. In order to assess the results two different SRP models have been used. Firstly, the proposed box-wing model and secondly, the new CODE empirical model, ECOM2. The orbit performances of both models are assessed using Satellite Laser Ranging (SLR) measurements, together with the evaluation of the orbit prediction accuracy. This comparison shows the advantages and disadvantages of
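    The building block of any box-wing model is the radiation-pressure acceleration of a single illuminated flat plate; a common flat-plate formulation (absorption plus specular reflection only; notation and values are our assumptions) can be sketched as:

    ```python
    import numpy as np

    P0 = 4.56e-6  # N/m^2, solar radiation pressure at 1 AU

    def plate_srp_accel(e_sun, n, area, mass, refl=0.3):
        """SRP acceleration of a flat plate (absorption + specular reflection).

        e_sun : unit vector from satellite toward the Sun
        n     : unit normal of the plate
        refl  : specular reflectivity coefficient (assumed value)
        """
        cos_t = np.dot(e_sun, n)
        if cos_t <= 0.0:  # plate not illuminated
            return np.zeros(3)
        return -P0 * (area / mass) * cos_t * (
            (1.0 - refl) * e_sun + 2.0 * refl * cos_t * n
        )

    # Box-wing = sum over the bus faces plus the Sun-tracking solar panel.
    e_sun = np.array([1.0, 0.0, 0.0])
    a = plate_srp_accel(e_sun, e_sun, area=10.0, mass=700.0)  # panel term
    print(a)
    ```

    The empirical parameters mentioned in the abstract are then estimated on top of this a-priori model to absorb what the simple geometry misses.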

  9. Results of Satellite Brightness Modeling Using Kriging Optimized Interpolation

    Science.gov (United States)

    Weeden, C.; Hejduk, M.

    At the 2005 AMOS conference, Kriging Optimized Interpolation (KOI) was presented as a tool to model satellite brightness as a function of phase angle and solar declination angle (J.M. Okada and M.D. Hejduk). Since November 2005, this method has been used to support the tasking algorithm for all optical sensors in the Space Surveillance Network (SSN). The satellite brightness maps generated by the KOI program are compared to each sensor's ability to detect an object as a function of the brightness of the background sky and the angular rate of the object. This determines whether the sensor can technically detect an object, based on an explicit calculation of the object's probability of detection. In addition, recent upgrades at Ground-Based Electro Optical Deep Space Surveillance (GEODSS) sites have increased the amount and quality of brightness data collected and therefore available for analysis. This in turn has provided enough data to study the modeling process in more detail in order to obtain the most accurate brightness prediction of satellites. Analysis of two years of brightness data gathered from optical sensors and modeled via KOI solutions is outlined in this paper. By comparison, geostationary objects (GEO) were tracked less than non-GEO objects but had higher-density tracking in phase angle due to artifacts of scheduling. A statistically significant fit to a deterministic model was possible less than half the time in both GEO and non-GEO tracks, showing that a stochastic model must often be used alone to produce brightness results, but such results are nonetheless serviceable. Within the Kriging solution, the exponential variogram model was the most frequently employed in both GEO and non-GEO tracks, indicating that monotonic brightness variation with both phase and solar declination angle is common and testifying to the suitability of regionalized variable theory for this particular problem. Finally, the average nugget value, or
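    The exponential variogram singled out above has the standard form (our notation: nugget c_0, partial sill c, range parameter a)

    $$\gamma(h) = c_0 + c\left(1 - e^{-h/a}\right), \qquad h > 0,$$

    and the ordinary-kriging predictor of brightness at a new phase/declination point $x_0$ is the weighted average $\hat{z}(x_0) = \sum_i \lambda_i z(x_i)$, with weights chosen to minimize the prediction variance subject to $\sum_i \lambda_i = 1$.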

  10. Titan Chemistry: Results From A Global Climate Model

    Science.gov (United States)

    Wilson, Eric; West, R. A.; Friedson, A. J.; Oyafuso, F.

    2008-09-01

    We present results from a 3-dimensional global climate model of Titan's atmosphere and surface. This model, a modified version of NCAR's CAM-3 (Community Atmosphere Model), has been optimized for analysis of Titan's lower atmosphere and surface. With the inclusion of forcing from Saturn's gravitational tides, interaction from the surface, transfer of longwave and shortwave radiation, and parameterization of haze properties, constrained by Cassini observations, a dynamical field is generated, which serves to advect 14 long-lived species. The concentrations of these chemical tracers are also affected by 82 chemical reactions and the photolysis of 21 species, based on the Wilson and Atreya (2004) model, that provide sources and sinks for the advected species along with 23 additional non-advected radicals. In addition, the chemical contribution to haze conversion is parameterized along with the microphysical processes that serve to distribute haze opacity throughout the atmosphere. References: Wilson, E.H. and S.K. Atreya, J. Geophys. Res., 109, E06002, 2004.

  11. Why Does a Kronecker Model Result in Misleading Capacity Estimates?

    CERN Document Server

    Raghavan, Vasanthan; Sayeed, Akbar M

    2008-01-01

    Many recent works that study the performance of multi-input multi-output (MIMO) systems in practice assume a Kronecker model where the variances of the channel entries, upon decomposition on to the transmit and the receive eigen-bases, admit a separable form. Measurement campaigns, however, show that the Kronecker model results in poor estimates for capacity. Motivated by these observations, a channel model that does not impose a separable structure has been recently proposed and shown to fit the capacity of measured channels better. In this work, we show that this recently proposed modeling framework can be viewed as a natural consequence of channel decomposition on to its canonical coordinates, the transmit and/or the receive eigen-bases. Using tools from random matrix theory, we then establish the theoretical basis behind the Kronecker mismatch at the low- and the high-SNR extremes: 1) Sparsity of the dominant statistical degrees of freedom (DoF) in the true channel at the low-SNR extreme, and 2) Non-regul...
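    For reference, the separable structure under criticism can be written as follows (standard notation, ours): the channel is modeled as

    $$\mathbf{H} = \mathbf{R}_r^{1/2}\,\mathbf{H}_w\,\mathbf{R}_t^{1/2}, \qquad \mathbb{E}\!\left[\operatorname{vec}(\mathbf{H})\,\operatorname{vec}(\mathbf{H})^{H}\right] = \mathbf{R}_t^{T} \otimes \mathbf{R}_r,$$

    where $\mathbf{H}_w$ has i.i.d. CN(0,1) entries and $\mathbf{R}_t$, $\mathbf{R}_r$ are the transmit and receive covariance matrices. The Kronecker assumption is precisely that the full spatial covariance factors in this way; the non-separable model discussed above drops that factorization.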

  12. New DNS and modeling results for turbulent pipe flow

    Science.gov (United States)

    Johansson, Arne; El Khoury, George; Grundestam, Olof; Schlatter, Philipp; Brethouwer, Geert; Linne Flow Centre Team

    2013-11-01

    The near-wall region of turbulent pipe and channel flows (as well as zero-pressure-gradient boundary layers) has been shown to exhibit a very high degree of similarity in terms of all statistical moments and many other features, while even the mean velocity profile in the two cases exhibits significant differences in the outer region. The wake part of the profile, i.e. the deviation from the log-law, in the outer region is of substantially larger amplitude in pipe flow as compared to channel flow (although weaker than in boundary layer flow). This intriguing feature has been well known but has no simple explanation. Model predictions typically give identical results for the two flows. We have analyzed a new set of DNS for pipe and channel flows (El Khoury et al. 2013, Flow, Turbulence and Combustion) for friction Reynolds numbers up to 1000 and made comparative calculations with differential Reynolds stress models (DRSM). We have strong indications that the key factor behind the difference in mean velocity in the outer region can be coupled to differences in the turbulent diffusion in this region. This is also supported by DRSM results, where interesting differences are seen depending on the sophistication of modeling the turbulent diffusion coefficient.

  13. Some Results on Optimal Dividend Problem in Two Risk Models

    Directory of Open Access Journals (Sweden)

    Shuaiqi Zhang

    2010-12-01

    Full Text Available The compound Poisson risk model and the compound Poisson risk model perturbed by diffusion are considered in the presence of a dividend barrier with solvency constraints. Moreover, this extends the known result of [1], which finds that the optimal dividend policy is of barrier type for a jump-diffusion model with exponentially distributed jumps. In this paper, it turns out that there can be two different solutions depending on the model's parameters. Furthermore, an interesting result is given: the proportional transaction cost has no effect on the dividend barrier. The objective of the corporation is to maximize the cumulative expected discounted dividend payout with solvency constraints before the time of ruin. It is well known that under some reasonable assumptions, the optimal dividend strategy is a barrier strategy, i.e., there is a level b_1 (resp. b_2) such that whenever the surplus goes above b_1 (resp. b_2), the excess is paid out as dividends. However, the optimal level b_1 (resp. b_2) may be unacceptably low from a solvency point of view. Therefore, constraints should be imposed on an insurance company, such as not paying out dividends unless the surplus has reached a level b_c^1 > b_1 (resp. b_c^2 > b_2). We show that in this case a barrier strategy at b_c^1 (resp. b_c^2) is optimal.

  14. Modeling results for the ITER cryogenic fore pump

    Science.gov (United States)

    Zhang, D. S.; Miller, F. K.; Pfotenhauer, J. M.

    2014-01-01

    The cryogenic fore pump (CFP) is designed for ITER to collect and compress hydrogen isotopes during the regeneration process of the torus cryopumps. Unlike common cryopumps, the ITER CFP works in the viscous flow regime. As a result, both adsorption boundary conditions and transport phenomena contribute unique features to the pump performance. In this report, the physical mechanisms of cryopumping are studied, especially the diffusion-adsorption process, and these are coupled with standard equations of species, momentum and energy balance, as well as the equation of state. Numerical models are developed, which include highly coupled non-linear conservation equations of species, momentum and energy and the equation of state. Thermal and kinetic properties are treated as functions of temperature, pressure, and composition. To solve such a set of equations, a novel numerical technique, identified as the Group-Member numerical technique, is proposed. A 1D numerical model is presented here. The results include a comparison with experimental data for pure hydrogen flow and a prediction for hydrogen flow with trace helium. An advanced 2D model and a detailed explanation of the Group-Member technique are to be presented in subsequent papers.

  15. Child human model development: a hybrid validation approach

    NARCIS (Netherlands)

    Forbes, P.A.; Rooij, L. van; Rodarius, C.; Crandall, J.

    2008-01-01

    The current study presents a development and validation approach of a child human body model that will help understand child impact injuries and improve the biofidelity of child anthropometric test devices. Due to the lack of fundamental child biomechanical data needed to fully develop such models a

  16. Modeling Alaska boreal forests with a controlled trend surface approach

    Science.gov (United States)

    Mo Zhou; Jingjing Liang

    2012-01-01

    An approach of Controlled Trend Surface was proposed to simultaneously take into consideration large-scale spatial trends and nonspatial effects. A geospatial model of the Alaska boreal forest was developed from 446 permanent sample plots, which addressed large-scale spatial trends in recruitment, diameter growth, and mortality. The model was tested on two sets of...

  17. Teaching Service Modelling to a Mixed Class: An Integrated Approach

    Science.gov (United States)

    Deng, Jeremiah D.; Purvis, Martin K.

    2015-01-01

    Service modelling has become an increasingly important area in today's telecommunications and information systems practice. We have adapted a Network Design course in order to teach service modelling to a mixed class of both the telecommunication engineering and information systems backgrounds. An integrated approach engaging mathematics teaching…

  18. Gray-box modelling approach for description of storage tunnel

    DEFF Research Database (Denmark)

    Harremoës, Poul; Carstensen, Jacob

    1999-01-01

    The dynamics of a storage tunnel is examined using a model based on on-line measured data and a combination of simple deterministic and black-box stochastic elements. This approach, called gray-box modeling, is a new promising methodology for giving an on-line state description of sewer systems...

  20. Refining the Committee Approach and Uncertainty Prediction in Hydrological Modelling

    NARCIS (Netherlands)

    Kayastha, N.

    2014-01-01

    Due to the complexity of hydrological systems a single model may be unable to capture the full range of a catchment response and accurately predict the streamflows. The multi modelling approach opens up possibilities for handling such difficulties and allows improve the predictive capability of mode

  2. Modelling diversity in building occupant behaviour: a novel statistical approach

    DEFF Research Database (Denmark)

    Haldi, Frédéric; Calì, Davide; Andersen, Rune Korsholm

    2016-01-01

    We propose an advanced modelling framework to predict the scope and effects of behavioural diversity regarding building occupant actions on window openings, shading devices and lighting. We develop a statistical approach based on generalised linear mixed models to account for the longitudinal nat...
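    For concreteness, a generalised linear mixed model of this kind for a binary action (e.g. opening a window) can be written as (our generic notation, not the authors' exact specification)

    $$\operatorname{logit}\,\Pr(y_{it} = 1) = \mathbf{x}_{it}^{\top}\boldsymbol{\beta} + b_i, \qquad b_i \sim \mathcal{N}(0, \sigma_b^2),$$

    where $\mathbf{x}_{it}$ collects indoor and outdoor drivers for occupant $i$ at time $t$, and the random intercept $b_i$ captures that occupant's individual tendency, which is the behavioural diversity the framework sets out to quantify.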

  3. A Bayesian Approach for Analyzing Longitudinal Structural Equation Models

    Science.gov (United States)

    Song, Xin-Yuan; Lu, Zhao-Hua; Hser, Yih-Ing; Lee, Sik-Yum

    2011-01-01

    This article considers a Bayesian approach for analyzing a longitudinal 2-level nonlinear structural equation model with covariates, and mixed continuous and ordered categorical variables. The first-level model is formulated for measures taken at each time point nested within individuals for investigating their characteristics that are dynamically…

  4. An Empirical-Mathematical Modelling Approach to Upper Secondary Physics

    Science.gov (United States)

    Angell, Carl; Kind, Per Morten; Henriksen, Ellen K.; Guttersrud, Oystein

    2008-01-01

    In this paper we describe a teaching approach focusing on modelling in physics, emphasizing scientific reasoning based on empirical data and using the notion of multiple representations of physical phenomena as a framework. We describe modelling activities from a project (PHYS 21) and relate some experiences from implementation of the modelling…

  5. An Alternative Approach for Nonlinear Latent Variable Models

    Science.gov (United States)

    Mooijaart, Ab; Bentler, Peter M.

    2010-01-01

    In the last decades there has been an increasing interest in nonlinear latent variable models. Since the seminal paper of Kenny and Judd, several methods have been proposed for dealing with these kinds of models. This article introduces an alternative approach. The methodology involves fitting some third-order moments in addition to the means and…

  8. Benchmarking of computer codes and approaches for modeling exposure scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Seitz, R.R. [EG and G Idaho, Inc., Idaho Falls, ID (United States); Rittmann, P.D.; Wood, M.I. [Westinghouse Hanford Co., Richland, WA (United States); Cook, J.R. [Westinghouse Savannah River Co., Aiken, SC (United States)

    1994-08-01

    The US Department of Energy Headquarters established a performance assessment task team (PATT) to integrate the activities of DOE sites that are preparing performance assessments for the disposal of newly generated low-level waste. The PATT chartered a subteam with the task of comparing computer codes and exposure scenarios used for dose calculations in performance assessments. This report documents the efforts of the subteam. Computer codes considered in the comparison include GENII, PATHRAE-EPA, MICROSHIELD, and ISOSHLD. Calculations were also conducted using spreadsheets to provide a comparison at the most fundamental level. Calculations and modeling approaches are compared for unit radionuclide concentrations in water and soil for the ingestion, inhalation, and external dose pathways. Over 30 tables comparing inputs and results are provided.

  9. New Cutting Force Modeling Approach for Flat End Mill

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A new mechanistic cutting force model for flat end milling using instantaneous cutting force coefficients is proposed. An in-depth analysis shows that the total cutting forces can be separated into two terms: a nominal component independent of the runout and a perturbation component induced by the runout. The instantaneous value of the nominal component is used to calibrate the cutting force coefficients. With the help of the perturbation component and the cutting force coefficients obtained above, the cutter runout is identified. Based on simulation and experimental results, the validity of the identification approach is demonstrated. The advantage of the proposed method is that calibration performed with data from one cutting test under a specific regime can be applied over a wide range of cutting conditions.

  10. Hamilton-Jacobi approach for quasi-exponential inflation: predictions and constraints after Planck 2015 results

    Energy Technology Data Exchange (ETDEWEB)

    Videla, Nelson [FCFM, Universidad de Chile, Departamento de Fisica, Santiago (Chile)

    2017-03-15

    In the present work we study the consequences of considering an inflationary universe model in which the Hubble rate has a quasi-exponential dependence on the inflaton field, given by H(φ) = H_inf exp[(φ/m_p) / (p(1 + φ/m_p))]. We analyze the inflation dynamics under the Hamilton-Jacobi approach, which allows us to consider H(φ), rather than V(φ), as the fundamental quantity to be specified. By comparing the theoretical predictions of the model with the allowed contour plots in the n_s - r plane and the amplitude of primordial scalar perturbations from the latest Planck data, the parameters characterizing this model are constrained. The model predicts values for the tensor-to-scalar ratio r and for the running of the scalar spectral index dn_s/d ln k consistent with the current bounds imposed by Planck, and we conclude that the model is viable. (orig.)
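    For reference, in the Hamilton-Jacobi formulation used here, specifying H(φ) fixes both the field dynamics and the potential through the standard relations (reduced Planck units, m_p = 1):

    $$\dot{\phi} = -2\,H'(\phi), \qquad V(\phi) = 3H^2(\phi) - 2\left[H'(\phi)\right]^2,$$

    with the first slow-roll parameter $\epsilon_H = 2\left(H'/H\right)^2$, inflation proceeding while $\epsilon_H < 1$.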

  11. Multi-Model approach to reconstruct the Mediterranean Freshwater Evolution

    Science.gov (United States)

    Simon, Dirk; Marzocchi, Alice; Flecker, Rachel; Lunt, Dan; Hilgen, Frits; Meijer, Paul

    2016-04-01

    Today the Mediterranean Sea exchanges water with the global ocean only through the Strait of Gibraltar. This restricted nature causes the Mediterranean basin to react more sensitively to climatic and tectonic phenomena than the global ocean. Not just eustatic sea level and regional river runoff, but also gateway tectonics and connectivity between sub-basins leave an enhanced fingerprint in its geological record. To understand its evolution, it is crucial to understand how these different effects are coupled. The Miocene-Pliocene sedimentary record of the Mediterranean shows alternations in composition and colour and has been astronomically tuned. Around the Miocene-Pliocene boundary the most extreme changes occur in the Mediterranean Sea. About 6% of the salt in the global ocean was deposited in the Mediterranean region, forming an approximately 2 km thick salt layer which is still present today. This extreme event is named the Messinian Salinity Crisis (MSC, 5.97-5.33 Ma). The gateway and climate evolution is not well constrained for this time, which makes it difficult to distinguish which of the above-mentioned drivers might have triggered the MSC. We therefore decided to tackle this problem via a multi-model approach: (1) We calculate the Mediterranean freshwater evolution via 30 atmosphere-ocean-vegetation simulations (using HadCM3L), to which we fit a function using a regression model. This allows us to directly relate the orbital curves to evaporation, precipitation and runoff. The resulting freshwater evolution can be directly correlated to other sedimentary and proxy records of the late Miocene. (2) By feeding the new freshwater evolution curve into a box/budget model we can predict the salinity and strontium evolution of the Mediterranean for a given Atlantic-Mediterranean gateway. (3) By comparing these results to the known salinity thresholds for gypsum and halite saturation of seawater, but also to the late Miocene Mediterranean strontium

  12. Model unspecific search in CMS. Results at 8 TeV

    Energy Technology Data Exchange (ETDEWEB)

    Albert, Andreas; Duchardt, Deborah; Hebbeker, Thomas; Knutzen, Simon; Lieb, Jonas; Meyer, Arnd; Pook, Tobias; Roemer, Jonas [III. Physikalisches Institut A, RWTH Aachen University (Germany)

    2016-07-01

    In the year 2012, CMS collected a total data set of approximately 20 fb{sup -1} in proton-proton collisions at √(s)=8 TeV. Dedicated searches for physics beyond the standard model are commonly designed with the signatures of a given theoretical model in mind. While this approach allows for an optimised sensitivity to the sought-after signal, it may cause unexpected phenomena to be overlooked. In a complementary approach, the Model Unspecific Search in CMS (MUSiC) analyses CMS data in a general way. Depending on the reconstructed final state objects (e.g. electrons), collision events are sorted into classes. In each of the classes, the distributions of selected kinematic variables are compared to standard model simulation. An automated statistical analysis is performed to quantify the agreement between data and prediction. In this talk, the analysis concept is introduced and selected results of the analysis of the 2012 CMS data set are presented.
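    A toy version of such a class-based scan, computing a Poisson tail probability per event class (illustrative only; the actual MUSiC statistic also folds in systematic uncertainties and accounts for the look-elsewhere effect):

    ```python
    from scipy.stats import poisson

    # Hypothetical event classes: observed counts vs. SM expectation.
    classes = {
        "1e":      (1043, 1010.0),
        "1e_1mu":  (87,   70.5),
        "2mu_met": (12,   22.8),
    }

    for name, (n_obs, n_exp) in classes.items():
        if n_obs >= n_exp:
            p = poisson.sf(n_obs - 1, n_exp)  # P(N >= n_obs): excess
        else:
            p = poisson.cdf(n_obs, n_exp)     # P(N <= n_obs): deficit
        print(f"{name:8s} obs={n_obs:5d} exp={n_exp:7.1f} p={p:.3g}")
    ```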

  13. A multilevel approach to modeling of porous bioceramics

    Science.gov (United States)

    Mikushina, Valentina A.; Sidorenko, Yury N.

    2015-10-01

    The paper is devoted to a discussion of multiscale models of heterogeneous materials. The specificity of the approach considered is the use of a geometrical model of the composite's representative volume, which must be generated taking the material's reinforcement structure into account. In the framework of such a model, different physical processes that influence the effective mechanical properties of the composite may be considered, in particular the process of damage accumulation. It is shown that this approach can be used to predict the value of the composite's macroscopic ultimate strength. As an example, the particular problem of studying the mechanical properties of a biocomposite representing a porous ceramic matrix filled with cortical bone tissue is discussed.

  14. Gray-box modelling approach for description of storage tunnel

    DEFF Research Database (Denmark)

    Harremoës, Poul; Carstensen, Jacob

    1999-01-01

    of the water in the overflow structures. The capacity of a pump draining the storage tunnel is estimated for two different rain events, revealing that the pump was malfunctioning during the first rain event. The proposed modeling approach can be used in automated online surveillance and control and implemented....... The model in the present paper provides on-line information on overflow volumes, pumping capacities, and remaining storage capacities. A linear overflow relation is found, differing significantly from the traditional deterministic modeling approach. The linearity of the formulas is explained by the inertia...

  15. A study of multidimensional modeling approaches for data warehouse

    Science.gov (United States)

    Yusof, Sharmila Mat; Sidi, Fatimah; Ibrahim, Hamidah; Affendey, Lilly Suriani

    2016-08-01

    A data warehouse system is used to support the process of organizational decision making. Hence, the system must extract and integrate information from heterogeneous data sources in order to uncover relevant knowledge suitable for the decision-making process. However, the development of a data warehouse is a difficult and complex process, especially in its conceptual design (multidimensional modeling). Thus, various approaches have been proposed to overcome the difficulty. This study surveys and compares approaches to multidimensional modeling and highlights the issues, trends and solutions proposed to date. The contribution is a state-of-the-art account of multidimensional modeling design.

  16. Meta-analysis a structural equation modeling approach

    CERN Document Server

    Cheung, Mike W-L

    2015-01-01

    Presents a novel approach to conducting meta-analysis using structural equation modeling. Structural equation modeling (SEM) and meta-analysis are two powerful statistical methods in the educational, social, behavioral, and medical sciences. They are often treated as two unrelated topics in the literature. This book presents a unified framework on analyzing meta-analytic data within the SEM framework, and illustrates how to conduct meta-analysis using the metaSEM package in the R statistical environment. Meta-Analysis: A Structural Equation Modeling Approach begins by introducing the impo

  17. Modeling and Algorithmic Approaches to Constitutively-Complex, Microstructured Fluids

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Gregory H. [Univ. of California, Davis, CA (United States); Forest, Gregory [Univ. of California, Davis, CA (United States)

    2014-05-01

    We present a new multiscale model for complex fluids based on three scales: microscopic, kinetic, and continuum. We choose the microscopic level as Kramers' bead-rod model for polymers, which we describe as a system of stochastic differential equations with an implicit constraint formulation. The associated Fokker-Planck equation is then derived, and adiabatic elimination removes the fast momentum coordinates. Approached in this way, the kinetic level reduces to a dispersive drift equation. The continuum level is modeled with a finite volume Godunov-projection algorithm. We demonstrate computation of viscoelastic stress divergence using this multiscale approach.

  18. BUSINESS MODEL IN ELECTRICITY INDUSTRY USING BUSINESS MODEL CANVAS APPROACH; THE CASE OF PT. XYZ

    Directory of Open Access Journals (Sweden)

    Achmad Arief Wicaksono

    2017-01-01

    Full Text Available The magnitude of opportunities and project values in Indonesia's electricity system encourages PT. XYZ to develop its business in the electrical sector, which requires business development strategies. This study aims to identify the company's business model using the Business Model Canvas approach, formulate business development strategy alternatives, and determine the prioritized business development strategy appropriate to the manufacturing business model of PT. XYZ. The study utilized a descriptive approach and the nine elements of the Business Model Canvas. Alternative formulation and priority determination of the strategies were obtained by using Strengths, Weaknesses, Opportunities, Threats (SWOT) analysis and pairwise comparison. The results of this study are improvements to the Business Model Canvas on the elements of key resources, key activities, key partners and customer segments. In terms of SWOT analysis on the nine elements of the Business Model Canvas, for the first business development the results show an expansion into power plant construction projects as the main contractor and an increase in sales in the core business of supporting equipment for the oil and gas industry; the second business development is an investment in the electricity sector as an independent renewable energy-based power producer. For its first business development, PT. XYZ selected three Business Model Canvas elements as the company's priorities, i.e. key resources (weight 0.252), key activities (0.240), and key partners (0.231). For its second business development, the company selected three elements as its priorities, i.e. key partners (0.225), customer segments (0.217), and key resources (0.215). Keywords: business model canvas, SWOT, pairwise comparison, business model

  19. Social learning in Models and Cases - an Interdisciplinary Approach

    Science.gov (United States)

    Buhl, Johannes; De Cian, Enrica; Carrara, Samuel; Monetti, Silvia; Berg, Holger

    2016-04-01

    Our paper follows an interdisciplinary understanding of social learning. We contribute to the literature on social learning in transition research by bridging case-oriented research and modelling-oriented transition research. We start by describing selected theories of social learning in innovation, diffusion and transition research. We present theoretical understandings of social learning in techno-economic and agent-based modelling. Then we elaborate on empirical research on social learning in transition case studies. We identify and synthesize key dimensions of social learning in transition case studies. In the following we bridge between the more formal, generalising modelling approaches to social learning processes and the more descriptive, individualising case-study approaches by interpreting the case-study analysis into a visual guide to the functional forms of social learning typically identified in the cases. We then vary, by way of example, the functional forms of social learning in integrated assessment models. We conclude by drawing the lessons learned from the interdisciplinary approach, both methodologically and empirically.

  20. Learning the Task Management Space of an Aircraft Approach Model

    Science.gov (United States)

    Krall, Joseph; Menzies, Tim; Davies, Misty

    2014-01-01

    Validating models of airspace operations is a particular challenge. These models are often aimed at finding and exploring safety violations, and aim to be accurate representations of real-world behavior. However, the rules governing the behavior are quite complex: nonlinear physics, operational modes, human behavior, and stochastic environmental concerns all determine the responses of the system. In this paper, we present a study on aircraft runway approaches as modeled in Georgia Tech's Work Models that Compute (WMC) simulation. We use a new learner, Genetic-Active Learning for Search-Based Software Engineering (GALE) to discover the Pareto frontiers defined by cognitive structures. These cognitive structures organize the prioritization and assignment of tasks of each pilot during approaches. We discuss the benefits of our approach, and also discuss future work necessary to enable uncertainty quantification.

  1. An Alternative Approach for Determining Photoionization Rate in H2+: Numerical Results

    Institute of Scientific and Technical Information of China (English)

    ZHOU Yu; ZHANG Gui-Zhong; XIANG Wang-Hua; W.T. Hill Ⅲ

    2005-01-01

    We present an alternative approach for determining the photoionization rate of hydrogen molecules under the interaction of intense light, by calculating the spatial overlap integral between the potential function and the time-dependent wavefunction. The suggested method was applied to various excitation pulse shapes: a square envelope and a chirped hyperbolic secant envelope. The computed results confirm that the method is robust and can be extended to general molecular dynamics calculations.

  2. A Dynamic Approach to Modeling Dependence Between Human Failure Events

    Energy Technology Data Exchange (ETDEWEB)

    Boring, Ronald Laurids [Idaho National Laboratory]

    2015-09-01

    In practice, most HRA methods use direct dependence from THERP—the notion that error begets error, and one human failure event (HFE) may increase the likelihood of subsequent HFEs. In this paper, we approach dependence from a simulation perspective in which the effects of human errors are dynamically modeled. There are three key concepts that play into this modeling: (1) Errors are driven by performance shaping factors (PSFs). In this context, the error propagation is not a result of the presence of an HFE yielding overall increases in subsequent HFEs. Rather, it is shared PSFs that cause dependence. (2) PSFs have qualities of lag and latency. These two qualities are not currently considered in HRA methods that use PSFs. Yet, to model the effects of PSFs, it is not simply a matter of identifying the discrete effects of a particular PSF on performance. The effects of PSFs must be considered temporally, as the PSFs will have a range of effects across the event sequence. (3) Finally, there is the concept of error spilling. When PSFs are activated, they not only have temporal effects but also lateral effects on other PSFs, leading to emergent errors. This paper presents the framework for tying together these dynamic dependence concepts.
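
    As a toy illustration of the lag and latency qualities described above (not the paper's actual framework), the sketch below lets a PSF's multiplier on a base human error probability ramp up after activation and decay afterwards; all constants are hypothetical.

```python
import math

def psf_multiplier(t, t_activate, lag=2.0, latency=10.0, peak=5.0):
    """Hypothetical time-varying multiplier a PSF applies to a base HEP.

    lag     : time after activation before the PSF reaches full effect
    latency : decay time constant once the activating condition has passed
    peak    : maximum multiplier on the human error probability (HEP)
    """
    if t < t_activate:
        return 1.0
    dt = t - t_activate
    rise = min(dt / lag, 1.0)                          # ramp up during the lag
    decay = math.exp(-max(dt - lag, 0.0) / latency)    # slow decay afterwards
    return 1.0 + (peak - 1.0) * rise * decay

base_hep = 0.001
for t in [0, 1, 2, 5, 20, 50]:
    print(t, base_hep * psf_multiplier(t, t_activate=1.0))
```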

  3. Modeling of movement-related potentials using a fractal approach.

    Science.gov (United States)

    Uşakli, Ali Bülent

    2010-06-01

    In bio-signal applications, classification performance depends greatly on feature extraction, and this is also the case for electroencephalogram (EEG) based applications. Feature extraction, and consequently classification, of EEG signals is not an easy task due to their inherently low signal-to-noise ratios and artifacts. EEG signals can be treated as the output of a non-linear dynamical (chaotic) system in the human brain, and therefore they can be modeled by their dimension values. In this study, the variance fractal dimension technique is suggested for the modeling of movement-related potentials (MRPs). Experimental data sets consist of EEG signals recorded during right foot up movements, lip pursing, and the simultaneous execution of these two tasks. The experimental results and performance tests show that the proposed modeling method can be applied efficiently to MRPs, especially in binary (two-class) brain-computer interface applications aiming to assist severely disabled people, such as amyotrophic lateral sclerosis patients, in communication and/or controlling devices.
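
    A minimal sketch of the variance fractal dimension idea: the variance of signal increments scales with lag as Var ~ k^(2H), and D = 2 - H for a time series. The cited study's preprocessing and windowing details are not reproduced here.

```python
import numpy as np

def variance_fractal_dimension(x, lags=(1, 2, 4, 8, 16, 32)):
    """Estimate the variance fractal dimension D = 2 - H of a 1-D signal,
    where Var(x[t+k] - x[t]) ~ k**(2H)."""
    log_lag, log_var = [], []
    for k in lags:
        inc = x[k:] - x[:-k]
        log_lag.append(np.log(k))
        log_var.append(np.log(np.var(inc)))
    slope, _ = np.polyfit(log_lag, log_var, 1)  # slope = 2H
    hurst = slope / 2.0
    return 2.0 - hurst

# Demo on synthetic data: a random walk has H ~ 0.5, so D ~ 1.5.
rng = np.random.default_rng(0)
walk = np.cumsum(rng.standard_normal(10_000))
print(variance_fractal_dimension(walk))
```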

  4. Hybrid empirical--theoretical approach to modeling uranium adsorption

    Energy Technology Data Exchange (ETDEWEB)

    Hull, Larry C.; Grossman, Christopher; Fjeld, Robert A.; Coates, John T.; Elzerman, Alan W

    2004-05-01

    An estimated 330 metric tons of U are buried in the radioactive waste Subsurface Disposal Area (SDA) at the Idaho National Engineering and Environmental Laboratory (INEEL). An assessment of U transport parameters is being performed to decrease the uncertainty in risk and dose predictions derived from computer simulations of U fate and transport to the underlying Snake River Plain Aquifer. Uranium adsorption isotherms were measured for 14 sediment samples collected from sedimentary interbeds underlying the SDA. The adsorption data were fit with a Freundlich isotherm. The Freundlich n parameter is statistically identical for all 14 sediment samples and the Freundlich K_f parameter is correlated to sediment surface area (r² = 0.80). These findings suggest an efficient approach to material characterization and implementation of a spatially variable reactive transport model that requires only the measurement of sediment surface area. To expand the potential applicability of the measured isotherms, a model is derived from the empirical observations by incorporating concepts from surface complexation theory to account for the effects of solution chemistry. The resulting model is then used to predict the range of adsorption conditions to be expected in the vadose zone at the SDA based on the range in measured pore water chemistry. Adsorption in the deep vadose zone is predicted to be stronger than in near-surface sediments because the total dissolved carbonate decreases with depth.
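
    The isotherm fit described above is conventionally done in log-log space, since q = K_f·C^n becomes linear there. A minimal sketch with hypothetical data (units arbitrary), not the study's measurements:

```python
import numpy as np

def fit_freundlich(c, q):
    """Fit q = Kf * c**n by linear regression in log-log space.
    c: equilibrium solution concentrations, q: adsorbed amounts."""
    n, log_kf = np.polyfit(np.log(c), np.log(q), 1)
    return np.exp(log_kf), n

# Hypothetical isotherm data for one sediment sample.
c = np.array([0.1, 0.5, 1.0, 5.0, 10.0])
q = np.array([0.8, 2.1, 3.2, 8.9, 13.5])
kf, n = fit_freundlich(c, q)
print(f"Kf = {kf:.2f}, n = {n:.2f}")
```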

  5. Masked areas in shear peak statistics. A forward modeling approach

    Energy Technology Data Exchange (ETDEWEB)

    Bard, D.; Kratochvil, J. M.; Dawson, W.

    2016-03-09

    The statistics of shear peaks have been shown to provide valuable cosmological information beyond the power spectrum, and will be an important constraint on models of cosmology in forthcoming astronomical surveys. Surveys include masked areas due to bright stars, bad pixels, etc., which must be accounted for in producing constraints on cosmology from shear maps. We advocate a forward-modeling approach, where the impacts of masking and other survey artifacts are accounted for in the theoretical prediction of cosmological parameters, rather than correcting survey data to remove them. We use masks based on the Deep Lens Survey, and explore the impact of up to 37% of the survey area being masked on LSST- and DES-scale surveys. By reconstructing maps of aperture mass, the masking effect is smoothed out, resulting in up to 14% smaller statistical uncertainties compared to simply reducing the survey area by the masked area. We show that, even in the presence of large survey masks, the bias in cosmological parameter estimation produced in the forward-modeling process is ≈1%, dominated by bias caused by limited simulation volume. We also explore how this potential bias scales with survey area and evaluate how much small survey areas are impacted by the differences in cosmological structure in the data and simulated volumes, due to cosmic variance.

  6. Conversion of IVA Human Computer Model to EVA Use and Evaluation and Comparison of the Result to Existing EVA Models

    Science.gov (United States)

    Hamilton, George S.; Williams, Jermaine C.

    1998-01-01

    This paper describes the methods, rationale, and comparative results of the conversion of an intravehicular (IVA) 3D human computer model (HCM) to extravehicular (EVA) use, and compares the converted model to an existing model on another computer platform. The task of accurately modeling a spacesuited human figure in software is daunting: the suit restricts the human's joint range of motion (ROM) and does not have joints collocated with human joints. The modeling of the variety of materials needed to construct a space suit (e.g., metal bearings, rigid fiberglass torso, flexible cloth limbs and rubber-coated gloves) attached to a human figure is currently out of reach of desktop computer hardware and software. Therefore a simplified approach was taken: the HCM's body parts were enlarged and the joint ROM was restricted to match the existing spacesuit model. This basic approach could be used to model other restrictive environments in industry, such as chemical or fire protective clothing. In summary, the approach provides a moderate-fidelity, usable tool which will run on current notebook computers.

  7. Towards an Integrated Approach to Crowd Analysis and Crowd Synthesis: a Case Study and First Results

    CERN Document Server

    Bandini, Stefania; Vizzari, Giuseppe

    2013-01-01

    Studies of pedestrian crowds, both theoretical and application-oriented, have generally focused on either the analysis or the synthesis of the phenomena arising from the interplay between individual pedestrians (each characterised by goals, preferences and potentially relevant relationships with others) and the environment in which they are situated. The cases in which these activities have been systematically integrated for mutual benefit are still very few compared to the corpus of crowd-related literature. This paper presents a case study of an integrated approach to the definition of an innovative model for pedestrian and crowd simulation (on the side of synthesis) that was motivated and supported by analyses of empirical data acquired from both experimental settings and observations in real-world scenarios. In particular, we introduce a model for the adaptive behaviour of pedestrians that are also members of groups, who strive to maintain their cohesion...

  8. SR-Site groundwater flow modelling methodology, setup and results

    Energy Technology Data Exchange (ETDEWEB)

    Selroos, Jan-Olof (Swedish Nuclear Fuel and Waste Management Co., Stockholm (Sweden)); Follin, Sven (SF GeoLogic AB, Taeby (Sweden))

    2010-12-15

    As a part of the license application for a final repository for spent nuclear fuel at Forsmark, the Swedish Nuclear Fuel and Waste Management Company (SKB) has undertaken three groundwater flow modelling studies. These are performed within the SR-Site project and represent time periods with different climate conditions. The simulations carried out contribute to the overall evaluation of the repository design and long-term radiological safety. Three time periods are addressed: the Excavation and operational phases, the Initial period of temperate climate after closure, and the Remaining part of the reference glacial cycle. The present report is a synthesis of the background reports describing the modelling methodology, setup, and results. It is the primary reference for the conclusions drawn in an SR-Site-specific context concerning groundwater flow during the three climate periods. These conclusions are not necessarily provided explicitly in the background reports, but are based on the results provided in those reports. The main results and comparisons presented in the present report are summarised in the SR-Site Main report.

  9. Geochemical controls on shale groundwaters: Results of reaction path modeling

    Energy Technology Data Exchange (ETDEWEB)

    Von Damm, K.L.; VandenBrook, A.J.

    1989-03-01

    The EQ3NR/EQ6 geochemical modeling code was used to simulate the reaction of several shale mineralogies with different groundwater compositions in order to elucidate changes that may occur in both the groundwater compositions, and rock mineralogies and compositions under conditions which may be encountered in a high-level radioactive waste repository. Shales with primarily illitic or smectitic compositions were the focus of this study. The reactions were run at the ambient temperatures of the groundwaters and to temperatures as high as 250°C, the approximate temperature maximum expected in a repository. All modeling assumed that equilibrium was achieved and treated the rock and water assemblage as a closed system. Graphite was used as a proxy mineral for organic matter in the shales. The results show that the presence of even a very small amount of reducing mineral has a large influence on the redox state of the groundwaters, and that either pyrite or graphite provides essentially the same results, with slight differences in dissolved C, Fe and S concentrations. The thermodynamic data base is inadequate at the present time to fully evaluate the speciation of dissolved carbon, due to the paucity of thermodynamic data for organic compounds. In the illitic cases the groundwaters resulting from interaction at elevated temperatures are acid, while the smectitic cases remain alkaline, although the final equilibrium mineral assemblages are quite similar. 10 refs., 8 figs., 15 tabs.

  10. Multi-Model Combination techniques for Hydrological Forecasting: Application to Distributed Model Intercomparison Project Results

    Energy Technology Data Exchange (ETDEWEB)

    Ajami, N K; Duan, Q; Gao, X; Sorooshian, S

    2005-04-11

    This paper examines several multi-model combination techniques: the Simple Multi-model Average (SMA), the Multi-Model Super Ensemble (MMSE), Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporated bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.
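
    The Weighted Average Method is the simplest of the combination techniques named above. The sketch below shows the general idea, with an intercept acting as a bias-correction term; the data are purely synthetic and this is not the exact DMIP implementation.

```python
import numpy as np

def weighted_average_combination(sims, obs):
    """Least-squares weights (plus an intercept for bias correction) that
    combine several model simulations into one prediction.

    sims: (n_times, n_models) matrix of simulated flows
    obs : (n_times,) observed flows
    """
    X = np.column_stack([np.ones(len(obs)), sims])   # intercept = bias term
    coef, *_ = np.linalg.lstsq(X, obs, rcond=None)
    return coef  # [bias, w1, w2, ...]

# Hypothetical outputs of three uncalibrated models vs. observations.
rng = np.random.default_rng(1)
truth = rng.gamma(2.0, 5.0, size=200)
sims = np.column_stack([truth * s + rng.normal(0, 2, 200) + b
                        for s, b in [(0.8, 3.0), (1.1, -1.0), (0.9, 0.5)]])
coef = weighted_average_combination(sims, truth)
combined = np.column_stack([np.ones(len(truth)), sims]) @ coef
print("RMSE per model:", np.sqrt(((sims - truth[:, None]) ** 2).mean(axis=0)))
print("RMSE combined :", np.sqrt(((combined - truth) ** 2).mean()))
```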

  11. Building enterprise reuse program--A model-based approach

    Institute of Scientific and Technical Information of China (English)

    梅宏; 杨芙清

    2002-01-01

    Reuse is viewed as a realistically effective approach to solving software crisis. For an organization that wants to build a reuse program, technical and non-technical issues must be considered in parallel. In this paper, a model-based approach to building systematic reuse program is presented. Component-based reuse is currently a dominant approach to software reuse. In this approach, building the right reusable component model is the first important step. In order to achieve systematic reuse, a set of component models should be built from different perspectives. Each of these models will give a specific view of the components so as to satisfy different needs of different persons involved in the enterprise reuse program. There already exist some component models for reuse from technical perspectives. But less attention is paid to the reusable components from a non-technical view, especially from the view of process and management. In our approach, a reusable component model--FLP model for reusable component--is introduced. This model describes components from three dimensions (Form, Level, and Presentation) and views components and their relationships from the perspective of process and management. It determines the sphere of reusable components, the time points of reusing components in the development process, and the needed means to present components in terms of the abstraction level, logic granularity and presentation media. Being the basis on which the management and technical decisions are made, our model will be used as the kernel model to initialize and normalize a systematic enterprise reuse program.

  12. Current approaches to model extracellular electrical neural microstimulation

    Directory of Open Access Journals (Sweden)

    Sébastien eJoucla

    2014-02-01

    Full Text Available Nowadays, high-density microelectrode arrays provide unprecedented possibilities to precisely activate spatially well-controlled central nervous system (CNS areas. However, this requires optimizing stimulating devices, which in turn requires a good understanding of the effects of microstimulation on cells and tissues. In this context, modeling approaches provide flexible ways to predict the outcome of electrical stimulation in terms of CNS activation. In this paper, we present state-of-the-art modeling methods with sufficient details to allow the reader to rapidly build numerical models of neuronal extracellular microstimulation. These include 1 the computation of the electrical potential field created by the stimulation in the tissue, and 2 the response of a target neuron to this field. Two main approaches are described: First we describe the classical hybrid approach that combines the finite element modeling of the potential field with the calculation of the neuron’s response in a cable equation framework (compartmentalized neuron models. Then, we present a whole finite element approach allows the simultaneous calculation of the extracellular and intracellular potentials, by representing the neuronal membrane with a thin-film approximation. This approach was previously introduced in the frame of neural recording, but has never been implemented to determine the effect of extracellular stimulation on the neural response at a sub-compartment level. Here, we show on an example that the latter modeling scheme can reveal important sub-compartment behavior of the neural membrane that cannot be resolved using the hybrid approach. The goal of this paper is also to describe in detail the practical implementation of these methods to allow the reader to easily build new models using standard software packages. These modeling paradigms, depending on the situation, should help build more efficient high-density neural prostheses for CNS rehabilitation.

  13. Comparison between InfoWorks hydraulic results and a physical model of an urban drainage system.

    Science.gov (United States)

    Rubinato, Matteo; Shucksmith, James; Saul, Adrian J; Shepherd, Will

    2013-01-01

    Urban drainage systems are frequently analysed using hydraulic modelling software packages such as InfoWorks CS or MIKE-Urban. The use of such modelling tools allows the evaluation of sewer capacity and the likelihood and impact of pluvial flood events. Models can also be used to plan major investments such as increasing storage capacity or the implementation of sustainable urban drainage systems. In spite of their widespread use, when applied to flooding the results of hydraulic models are rarely compared with field or laboratory (i.e. physical modelling) data. This is largely due to the time and expense required to collect reliable empirical data sets. This paper describes a laboratory facility which will enable an urban flood model to be verified and generic approaches to be built. Results are presented from the first phase of testing, which compares the sub-surface hydraulic performance of a physical scale model of a sewer network in Yorkshire, UK, with downscaled results from a calibrated 1D InfoWorks hydraulic model of the site. A variety of real rainfall events measured in the catchment over a period of 15 months (April 2008-June 2009) have been both hydraulically modelled and reproduced in the physical model. In most cases a comparison of flow hydrographs generated in both hydraulic and physical models shows good agreement in terms of velocities which pass through the system.
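
    The paper compares hydrographs between the physical and numerical models; one common quantitative check for such comparisons (not necessarily the one the authors used) is the Nash-Sutcliffe efficiency, sketched below with hypothetical flows.

```python
import numpy as np

def nash_sutcliffe(modelled, observed):
    """Nash-Sutcliffe efficiency: 1 is a perfect match, 0 means the model
    is no better than the mean of the observations."""
    observed = np.asarray(observed, dtype=float)
    modelled = np.asarray(modelled, dtype=float)
    return 1.0 - np.sum((observed - modelled) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

# Hypothetical hydrographs (flow in l/s) from the physical and numerical models.
physical  = [0.0, 1.2, 3.5, 5.1, 4.0, 2.2, 0.9, 0.2]
infoworks = [0.0, 1.0, 3.8, 4.9, 4.2, 2.0, 1.0, 0.3]
print(f"NSE = {nash_sutcliffe(infoworks, physical):.3f}")
```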

  14. Optimising GPR modelling: A practical, multi-threaded approach to 3D FDTD numerical modelling

    Science.gov (United States)

    Millington, T. M.; Cassidy, N. J.

    2010-09-01

    The demand for advanced interpretational tools has led to the development of highly sophisticated, computationally demanding, 3D GPR processing and modelling techniques. Many of these methods solve very large problems with stepwise methods that utilise numerically similar functions within iterative computational loops. Problems of this nature are readily parallelised by splitting the computational domain into smaller, independent chunks for direct use on cluster-style, multi-processor supercomputers. Unfortunately, the implications of running such facilities, as well as the time investment needed to develop the parallel codes, mean that for most researchers the use of these advanced methods is impractical. In this paper, we propose an alternative method of parallelisation which exploits the capabilities of modern multi-core processors (upon which today's desktop PCs are built) by multi-threading the calculation of a problem's individual sub-solutions. To illustrate the approach, we have applied it to an advanced, 3D, finite-difference time-domain (FDTD) GPR modelling tool in which the calculation of the individual vector field components is multi-threaded. To be of practical use, the FDTD scheme must be able to deliver accurate results with short execution times and we, therefore, show that the performance benefits of our approach can deliver runtimes less than half those of more conventional, serial programming techniques. We evaluate implementations of the technique using different programming languages (e.g., Matlab, Java, C++), which will facilitate the construction of a flexible modelling tool for use in future GPR research. The implementations are compared on a variety of typical hardware platforms, having between one and eight processing cores available, and also on a modern Graphical Processing Unit (GPU)-based computer. Our results show that a multi-threaded xyz modelling approach is easy to implement and delivers excellent results when implemented
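
    The per-component multi-threading pattern the paper describes can be sketched as below. The stencil here is a toy smoothing update standing in for the full Yee curl equations, and the array sizes are arbitrary; the point is the threading structure, which works because NumPy releases the GIL inside array arithmetic.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def update_component(field):
    # Toy stand-in for one component's FDTD update: a simple smoothing
    # stencil applied in place, not the actual curl equations.
    field[1:-1, 1:-1, 1:-1] = 0.5 * field[1:-1, 1:-1, 1:-1] + (0.5 / 6.0) * (
        field[:-2, 1:-1, 1:-1] + field[2:, 1:-1, 1:-1] +
        field[1:-1, :-2, 1:-1] + field[1:-1, 2:, 1:-1] +
        field[1:-1, 1:-1, :-2] + field[1:-1, 1:-1, 2:])
    return field

ex, ey, ez = (np.random.rand(64, 64, 64) for _ in range(3))
with ThreadPoolExecutor(max_workers=3) as pool:
    for _ in range(10):  # ten "time steps"
        # Each field component gets its own thread; pool.map blocks until
        # all three component updates for this step have completed.
        list(pool.map(update_component, (ex, ey, ez)))
```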

  15. ITER CS Model Coil and CS Insert Test Results

    Energy Technology Data Exchange (ETDEWEB)

    Martovetsky, N; Michael, P; Minervina, J; Radovinsky, A; Takayasu, M; Thome, R; Ando, T; Isono, T; Kato, T; Nakajima, H; Nishijima, G; Nunoya, Y; Sugimoto, M; Takahashi, Y; Tsuji, H; Bessette, D; Okuno, K; Ricci, M

    2000-09-07

    The Inner and Outer modules of the Central Solenoid Model Coil (CSMC) were built by US and Japanese home teams in collaboration with European and Russian teams to demonstrate the feasibility of a superconducting Central Solenoid for ITER and other large tokamak reactors. The CSMC mass is about 120 t, OD is about 3.6 m and the stored energy is 640 MJ at 46 kA and peak field of 13 T. Testing of the CSMC and the CS Insert took place at Japan Atomic Energy Research Institute (JAERI) from mid March until mid August 2000. This paper presents the main results of the tests performed.

  16. Results of the benchmark for blade structural models, part A

    DEFF Research Database (Denmark)

    Lekou, D.J.; Chortis, D.; Belen Fariñas, A.;

    2013-01-01

    A benchmark on structural design methods for blades was performed within the InnWind.Eu project under WP2 "Lightweight Rotor", Task 2.2 "Lightweight structural design". The benchmark is based on the reference wind turbine and the reference blade provided by DTU [1]. Upon definition of the reference blade, the "Structural Concept developers/modelers" of WP2 were provided with the necessary input for a comparison numerical simulation run. The present document describes the results of the comparison simulation runs that were performed by the partners involved within Task 2.2 of the InnWind.Eu project.

  17. Model independent analysis of dark energy I: Supernova fitting result

    CERN Document Server

    Gong, Y

    2004-01-01

    The nature of dark energy is a mystery to us. This paper uses the supernova data to explore the property of dark energy by some model independent methods. We first Taylor expanded the scale factor $a(t)$ to find that the deceleration parameter $q_0<0$. This result invokes only the Robertson-Walker metric. Then we discuss several different parameterizations used in the literature. We find that $w_{\rm DE0}$ is marginally below $-1$ at the $1\sigma$ level. We also find that the transition redshift from the deceleration phase to the acceleration phase is $z_{\rm T}\sim 0.3$.
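
    The Taylor-expansion step can be illustrated with the standard low-redshift expansion of the luminosity distance, $d_L \approx (c/H_0)[z + (1-q_0)z^2/2]$. The sketch below fits $H_0$ and $q_0$ to synthetic data generated for illustration only; it is not the paper's data set or fitting pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

C_KM_S = 299792.458  # speed of light, km/s

def d_l_taylor(z, h0, q0):
    """Low-redshift Taylor expansion of the luminosity distance (Mpc):
    d_L ~ (c/H0) * [z + (1 - q0) * z**2 / 2]."""
    return (C_KM_S / h0) * (z + 0.5 * (1.0 - q0) * z ** 2)

# Synthetic low-z "supernova" distances (true q0 = -0.6), for illustration.
rng = np.random.default_rng(2)
z = np.linspace(0.01, 0.2, 40)
d = d_l_taylor(z, 70.0, -0.6) * (1 + rng.normal(0, 0.02, z.size))

(h0_fit, q0_fit), _ = curve_fit(d_l_taylor, z, d, p0=[70.0, 0.0])
print(f"H0 = {h0_fit:.1f} km/s/Mpc, q0 = {q0_fit:.2f}")
```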

  18. Preliminary results of steel containment vessel model test

    Energy Technology Data Exchange (ETDEWEB)

    Luk, V.K.; Hessheimer, M.F. [Sandia National Labs., Albuquerque, NM (United States); Matsumoto, T.; Komine, K.; Arai, S. [Nuclear Power Engineering Corp., Tokyo (Japan); Costello, J.F. [Nuclear Regulatory Commission, Washington, DC (United States)

    1998-04-01

    A high pressure test of a mixed-scale model (1:10 in geometry and 1:4 in shell thickness) of a steel containment vessel (SCV), representing an improved boiling water reactor (BWR) Mark II containment, was conducted on December 11-12, 1996 at Sandia National Laboratories. This paper describes the preliminary results of the high pressure test. In addition, the preliminary post-test measurement data and a preliminary comparison of test data with pretest analysis predictions are also presented.

  19. A Variable Flow Modelling Approach To Military End Strength Planning

    Science.gov (United States)

    2016-12-01

    A System Dynamics (SD) model is ideal for strategic analysis as it encompasses the behaviours of a system and how those behaviours are influenced by ... Wang describes Markov chain theory as a mathematical tool used to investigate the dynamic behaviours of a system in discrete time ... A VARIABLE FLOW MODELLING APPROACH TO MILITARY END STRENGTH PLANNING, by Benjamin K. Grossi, December 2016. Thesis Advisor: Kenneth Doerr; Second Reader ...
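
    The Markov-chain ingredient mentioned in the record can be sketched as an annual projection of personnel stocks through a transition matrix. All rank bands, rates and numbers below are hypothetical:

```python
import numpy as np

# Hypothetical annual transition matrix between three rank bands plus exit.
# Rows: current state; columns: state next year (junior, mid, senior, exit).
P = np.array([
    [0.70, 0.20, 0.00, 0.10],   # junior
    [0.00, 0.75, 0.15, 0.10],   # mid
    [0.00, 0.00, 0.85, 0.15],   # senior
    [0.00, 0.00, 0.00, 1.00],   # exit (absorbing)
])
stocks = np.array([1000.0, 600.0, 300.0, 0.0])
recruits = np.array([150.0, 0.0, 0.0, 0.0])  # annual intake into junior band

for year in range(1, 6):
    stocks = stocks @ P + recruits
    print(year, np.round(stocks[:3]), "end strength:", round(stocks[:3].sum()))
```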

  20. New Approaches in Reusable Booster System Life Cycle Cost Modeling

    Science.gov (United States)

    2012-01-01

    Lean NPD practices (many) • Lean Production & Operations practices (many) • Supply Chain Operations Reference (SCOR) Model, best practices, Make, Deliver ... NEW APPROACHES IN REUSABLE BOOSTER SYSTEM LIFE CYCLE COST MODELING, Edgar Zapata, National Aeronautics and Space Administration, Kennedy Space Center ... Kennedy Space Center (KSC) and the Air Force Research Laboratory (AFRL). The work included the creation of a new cost estimating model and an LCC ...

  1. METHODOLOGICAL APPROACH AND MODEL ANALYSIS FOR IDENTIFICATION OF TOURIST TRENDS

    Directory of Open Access Journals (Sweden)

    Neven Šerić

    2015-06-01

    Full Text Available The appeal and diversity of a destination's offer are antecedents of growth in tourism visits. Destination supply is differentiated through new, specialised tourism products. The usual approach is to form specialised tourism products in accordance with the existing tourism destination image. Another approach, prevalent in the practice of developed tourism destinations, is based on innovating the destination supply in accordance with global tourism trends. For this purpose, it is advisable to choose a method for monitoring and analysing tourism trends. The goal is to determine the actual trends governing target markets, distinguishing whims from trends during the tourism preseason. When considering the return on investment, modifying the destination's tourism offer on the basis of a tourism whim is a risky endeavour indeed: adapting the destination's supply to whims can result in a shifted image, one that is unable to ensure long-term interest and growth in tourist visits. Based on the research conducted, a model for evaluating tourism phenomena is proposed, one that determines whether a given phenomenon is a tourism trend or a tourism whim.

  2. A Unified Approach to Model-Based Planning and Execution

    Science.gov (United States)

    Muscettola, Nicola; Dorais, Gregory A.; Fry, Chuck; Levinson, Richard; Plaunt, Christian; Norvig, Peter (Technical Monitor)

    2000-01-01

    Writing autonomous software is complex, requiring the coordination of functionally and technologically diverse software modules. System and mission engineers must rely on specialists familiar with the different software modules to translate requirements into application software. Also, each module often encodes the same requirement in different forms. The results are high costs and reduced reliability due to the difficulty of tracking discrepancies in these encodings. In this paper we describe a unified approach to planning and execution that we believe provides a unified representational and computational framework for an autonomous agent. We identify the four main components whose interplay provides the basis for the agent's autonomous behavior: the domain model, the plan database, the plan running module, and the planner modules. This representational and problem solving approach can be applied at all levels of the architecture of a complex agent, such as Remote Agent. In the rest of the paper we briefly describe the Remote Agent architecture. The new agent architecture proposed here aims at achieving the full Remote Agent functionality. We then give the fundamental ideas behind the new agent architecture and point out some implications of its structure, mainly in the area of reactivity and the interaction between reactive and deliberative decision making. We conclude with related work and current status.

  3. Multi-Model Combination Techniques for Hydrological Forecasting: Application to Distributed Model Intercomparison Project Results

    Energy Technology Data Exchange (ETDEWEB)

    Ajami, N; Duan, Q; Gao, X; Sorooshian, S

    2006-05-08

    This paper examines several multi-model combination techniques: the Simple Multimodel Average (SMA), the Multi-Model Super Ensemble (MMSE), Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporated bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.

  4. Predicting onset and withdrawal of Indian Summer Monsoon in 2016: results of Tipping elements approach

    Science.gov (United States)

    Surovyatkina, Elena; Stolbova, Veronika; Kurths, Jurgen

    2017-04-01

    started to decrease, and after two days meteorological stations reported 'No rain' in the EG and also in areas located across the subcontinent in the direction from North Pakistan to the Bay of Bengal. Hence, the date of monsoon withdrawal, October 10th, predicted 70 days in advance, lies within our prediction interval. Our results show that our method allows prediction of a future monsoon, not only retrospectively (hindcast). In 2016 we predicted the onset and withdrawal dates of the Southwest monsoon over the Eastern Ghats region in Central India 40 and 70 days in advance, respectively. Our general framework for predicting spatio-temporal critical transitions is applicable to systems of different nature. It allows predicting the future from observational data only, when a model of the transition does not yet exist. [1] Stolbova, V., E. Surovyatkina, B. Bookhagen, and J. Kurths (2016): Tipping elements of the Indian monsoon: Prediction of onset and withdrawal. Geophys. Res. Lett., 43, 1-9. [2] https://www.pik-potsdam.de/news/press-releases/indian-monsoon-novel-approach-allows-early-forecasting?set_language=en [3] https://www.pik-potsdam.de/kontakt/pressebuero/fotos/monsoon-withdrawal/view

  5. Results of Anterior Transcallosal Approach to Pediatric Colloid Cysts

    Directory of Open Access Journals (Sweden)

    Turgut Kuytu

    2011-04-01

    Full Text Available Introduction: Colloid cysts represent 0.5-1% of all intracranial neoplasms and 55% of third ventricular lesions. In this study, we emphasize the principles of treatment in pediatric cases with third ventricular colloid cysts treated using an anterior interhemispheric transcallosal approach. Materials and Methods: Patients aged 16 years and below with colloid cysts, operated on between 2001 and 2009, were evaluated retrospectively. Results: There were 3 male and 1 female patients aged between 12 and 16 years (mean age 13.75 years). The mean duration of symptoms was 2.5 months and the mean duration of follow-up 46.75 (15-102) months. All patients had frontal headache as the main complaint; 2 patients also had nausea and vomiting, and 1 patient had numbness on the left side of his body. Three patients had marked bilateral papilledema, while 1 patient had no neurological deficit. The cyst was hyperintense on cranial computed tomography in 2 patients and hypointense in 1 patient. On T1- and T2-weighted cranial magnetic resonance images, the cysts were iso- and hyperintense in 2 patients, hypo- and hyperintense in 1 patient, and hyper- and isointense in 1 patient, respectively. The interhemispheric transcallosal-transforaminal approach was used in all patients. In 3 patients total excision was performed, while in 1 patient a small part of the capsule attached to the thalamostriate vein was left. There were no cyst recurrences at follow-up. Conclusions: Although various approaches have been described to reach third ventricular colloid cysts, we preferred the transcallosal approach in all of our pediatric patients since it does not breach the cortex and provides secure tumour resection. (Journal of Current Pediatrics 2011; 9: 23-7)

  6. DARK STARS: IMPROVED MODELS AND FIRST PULSATION RESULTS

    Energy Technology Data Exchange (ETDEWEB)

    Rindler-Daller, T.; Freese, K. [Department of Physics and Michigan Center for Theoretical Physics, University of Michigan, Ann Arbor, MI 48109 (United States); Montgomery, M. H.; Winget, D. E. [Department of Astronomy, McDonald Observatory and Texas Cosmology Center, University of Texas, Austin, TX 78712 (United States); Paxton, B. [Kavli Insitute for Theoretical Physics, University of California, Santa Barbara, CA 93106 (United States)

    2015-02-01

    We use the stellar evolution code MESA to study dark stars (DSs). DSs, which are powered by dark matter (DM) self-annihilation rather than by nuclear fusion, may be the first stars to form in the universe. We compute stellar models for accreting DSs with masses up to 10⁶ M☉. The heating due to DM annihilation is self-consistently included, assuming extended adiabatic contraction of DM within the minihalos in which DSs form. We find remarkably good overall agreement with previous models, which assumed polytropic interiors. There are some differences in the details, with positive implications for observability. We found that, in the mass range of 10⁴-10⁵ M☉, our DSs are hotter by a factor of 1.5 than those in Freese et al., are smaller in radius by a factor of 0.6, denser by a factor of three to four, and more luminous by a factor of two. Our models also confirm previous results, according to which supermassive DSs are very well approximated by (n = 3)-polytropes. We also perform a first study of DS pulsations. Our DS models have pulsation modes with timescales ranging from less than a day to more than two years in their rest frames, at z ∼ 15, depending on DM particle mass and overtone number. Such pulsations may someday be used to identify bright, cool objects uniquely as DSs; if properly calibrated, they might, in principle, also supply novel standard candles for cosmological studies.

  7. Compressible Turbulent Channel Flows: DNS Results and Modeling

    Science.gov (United States)

    Huang, P. G.; Coleman, G. N.; Bradshaw, P.; Rai, Man Mohan (Technical Monitor)

    1994-01-01

    The present paper addresses some topical issues in modeling compressible turbulent shear flows. The work is based on direct numerical simulation of two supersonic fully developed channel flows between very cold isothermal walls. Detailed decomposition and analysis of terms appearing in the momentum and energy equations are presented. The simulation results are used to provide insights into differences between conventional time- and Favre-averaging of the mean-flow and turbulent quantities. Study of the turbulence energy budget for the two cases shows that the compressibility effects due to turbulent density and pressure fluctuations are insignificant. In particular, the dilatational dissipation and the mean product of the pressure and dilatation fluctuations are very small, contrary to the results of simulations for sheared homogeneous compressible turbulence and to recent proposals for models for general compressible turbulent flows. This provides a possible explanation of why the Van Driest density-weighted transformation is so successful in correlating compressible boundary layer data. Finally, it is found that the DNS data do not support the strong Reynolds analogy. A more general representation of the analogy is analysed and shown to match the DNS data very well.

  8. Modeling the Relations among Students' Epistemological Beliefs, Motivation, Learning Approach, and Achievement

    Science.gov (United States)

    Kizilgunes, Berna; Tekkaya, Ceren; Sungur, Semra

    2009-01-01

    The authors proposed a model to explain how epistemological beliefs, achievement motivation, and learning approach related to achievement. The authors assumed that epistemological beliefs influence achievement indirectly through their effect on achievement motivation and learning approach. Participants were 1,041 6th-grade students. Results of the…

  9. Hierarchical multi-scale approach to validation and uncertainty quantification of hyper-spectral image modeling

    Science.gov (United States)

    Engel, Dave W.; Reichardt, Thomas A.; Kulp, Thomas J.; Graff, David L.; Thompson, Sandra E.

    2016-05-01

    Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.

  10. Hierarchical Multi-Scale Approach To Validation and Uncertainty Quantification of Hyper-Spectral Image Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Engel, David W.; Reichardt, Thomas A.; Kulp, Thomas J.; Graff, David; Thompson, Sandra E.

    2016-09-17

    Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.

  11. Advanced targeted, cell and gene therapy approaches for pediatric hematological malignancies: results and future perspectives

    Directory of Open Access Journals (Sweden)

    Chiara Francesca Magnani

    2013-04-01

    Full Text Available Despite improvements over the last 20 years in the survival of pediatric patients affected by hematological malignancies, achieved through chemotherapy and hematopoietic stem cell transplantation (HSCT), a significant number of patients still relapse. Treatment intensification is limited by toxic side effects and constrained by a plateau of efficacy, while the pipeline of new chemotherapeutic drugs is running short. Novel therapeutic strategies are therefore essential, and researchers around the world are testing immune and gene therapy approaches as second-line treatments in clinical trials. The aim of this review is to give a glance at these novel promising strategies of advanced medicine in the field of pediatric leukemias. Results from clinical protocols using new targeted smart drugs, immunotherapy and gene therapy are summarized, and important considerations regarding the combination of these novel approaches with standard treatments to promote safe and long-term cure are discussed.

  12. Cognitive effects of mindfulness training: Results of a pilot study based on a theory driven approach

    Directory of Open Access Journals (Sweden)

    Lena Wimmer

    2016-07-01

    Full Text Available The present paper reports a pilot study which tested cognitive effects of mindfulness practice in a theory-driven approach. Thirty-four fifth graders received either a mindfulness training based on the mindfulness-based stress reduction approach (experimental group), a concentration training (active control group), or no treatment (passive control group). Based on the operational definition of mindfulness by Bishop et al. (2004), effects on sustained attention, cognitive flexibility, cognitive inhibition and data-driven as opposed to schema-based information processing were predicted. These abilities were assessed in a pre-post design by means of a vigilance test, a reversible-figures test, the Wisconsin Card Sorting Test, a Stroop test, a visual search task, and a recognition task of prototypical faces. Results suggest that the mindfulness training specifically improved cognitive inhibition and data-driven information processing.

  13. THE FAIRSHARES MODEL: AN ETHICAL APPROACH TO SOCIAL ENTERPRISE DEVELOPMENT?

    OpenAIRE

    Ridley-Duff, R.

    2015-01-01

    This paper is based on the keynote address to the 14th International Association of Public and Non-Profit Marketing (IAPNM) conference. It explores the question "What impact do ethical values in the FairShares Model have on social entrepreneurial behaviour?" In the first part, three broad approaches to social enterprise are set out: co-operative and mutual enterprises (CMEs), social and responsible businesses (SRBs) and charitable trading activities (CTAs). The ethics that guide each approach ...

  14. A computational language approach to modeling prose recall in schizophrenia.

    Science.gov (United States)

    Rosenstein, Mark; Diaz-Asper, Catherine; Foltz, Peter W; Elvevåg, Brita

    2014-06-01

    Many cortical disorders are associated with memory problems. In schizophrenia, verbal memory deficits are a hallmark feature. However, the exact nature of this deficit remains elusive. Modeling aspects of language features used in memory recall has the potential to provide a means for measuring these verbal processes. We employ computational language approaches to assess time-varying semantic and sequential properties of prose recall at various retrieval intervals (immediate, 30 min and 24 h later) in patients with schizophrenia, unaffected siblings and healthy unrelated control participants. First, we model the recall data to quantify the degradation of performance with increasing retrieval interval and the effect of diagnosis (i.e., group membership) on performance. Next we model the human scoring of recall performance using an n-gram language sequence technique, and then with a semantic feature based on Latent Semantic Analysis. These models show that automated analyses of the recalls can produce scores that accurately mimic human scoring. The final analysis addresses the validity of this approach by ascertaining the ability to predict group membership from models built on the two classes of language features. Taken individually, the semantic feature is most predictive, while a model combining the features improves the accuracy of group membership prediction slightly above the semantic feature alone, as well as over the human rating approach. We discuss the implications for cognitive neuroscience of such a computational approach in exploring the mechanisms of prose recall.
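
    A minimal sketch of the semantic-similarity idea: TF-IDF followed by truncated SVD is a standard stand-in for Latent Semantic Analysis, and cosine similarity between the story and a recall gives an automated score. The cited study's corpus, dimensionality and scoring details will differ; the texts below are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

story = "The ship left the harbour at dawn and sailed north into the storm."
recalls = [
    "A ship sailed out of the harbour in the morning into a storm.",  # close
    "Someone travelled somewhere and the weather was bad.",           # vague
]

# TF-IDF vectors reduced by truncated SVD approximate an LSA space.
vec = TfidfVectorizer().fit([story] + recalls)
X = vec.transform([story] + recalls)
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)

# Score each recall by cosine similarity to the source story.
for text, score in zip(recalls, cosine_similarity(lsa[:1], lsa[1:])[0]):
    print(f"{score:.2f}  {text}")
```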

  15. A Neural Model of Face Recognition: a Comprehensive Approach

    Science.gov (United States)

    Stara, Vera; Montesanto, Anna; Puliti, Paolo; Tascini, Guido; Sechi, Cristina

    Visual recognition of faces is an essential human behavior: our performance in everyday life is excellent, and it is just such performance that enables us to establish the continuity of actors in our social life and to quickly identify and categorize people. This remarkable ability justifies the general interest in face recognition among researchers from different fields, and especially among designers of biometric identification systems able to recognize the features of a person's face against a background. Given the interdisciplinary nature of this topic, in this contribution we approach face recognition comprehensively, with the purpose of reproducing some features of human performance, as evidenced by studies in psychophysics and neuroscience, relevant to face recognition. This approach views face recognition as an emergent phenomenon resulting from the nonlinear interaction of a number of different features. For this reason our model of face recognition is based on a computational system implemented as an artificial neural network. This synergy between neuroscience and engineering allowed us to implement a model that has biological plausibility, performs the same tasks as human subjects, and gives a possible account of human face perception and recognition. The paper reports an experimental study of the performance of a SOM-based neural network in a face recognition task, with reference both to the ability to learn to discriminate different faces, and to the ability to recognize a face already encountered in the training phase when presented in a pose or with an expression differing from the one present in the training context.
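
    Since the record names a SOM-based network, a compact self-organizing map training loop is sketched below; the grid size, learning schedule and random "face" vectors are all hypothetical stand-ins for the study's setup.

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal self-organizing map: data rows are feature vectors
    (e.g. flattened face images); returns the trained weight grid."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    ii, jj = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)             # shrinking learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 1e-3  # shrinking neighbourhood
        for x in rng.permutation(data):
            # Best-matching unit = node whose weights are closest to x.
            d = np.linalg.norm(weights - x, axis=2)
            bi, bj = np.unravel_index(np.argmin(d), d.shape)
            # Pull nodes near the BMU toward x (Gaussian neighbourhood).
            g = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
            weights += lr * g[..., None] * (x - weights)
    return weights

# Demo on random vectors standing in for face features.
faces = np.random.default_rng(1).random((50, 64))
som = train_som(faces)
print(som.shape)  # (8, 8, 64)
```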

  16. A model selection approach to analysis of variance and covariance.

    Science.gov (United States)

    Alber, Susan A; Weiss, Robert E

    2009-06-15

    An alternative to analysis of variance is a model selection approach where every partition of the treatment means into clusters with equal value is treated as a separate model. The null hypothesis that all treatments are equal corresponds to the partition with all means in a single cluster. The alternative hypothesis corresponds to the set of all other partitions of treatment means. A model selection approach can also be used for a treatment-by-covariate interaction, where the null hypothesis and each alternative correspond to a partition of treatments into clusters with equal covariate effects. We extend the partition-as-model approach to simultaneous inference for both the treatment main effect and the treatment interaction with a continuous covariate, with separate partitions for the intercepts and the treatment-specific slopes. The model space is the Cartesian product of the intercept partition and the slope partition, and we develop five joint priors for this model space. In four of these priors the intercept and slope partitions are dependent. We advise on setting priors over models, and we use the model to analyze an orthodontic data set that compares the frictional resistance created by orthodontic fixtures. Copyright (c) 2009 John Wiley & Sons, Ltd.
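
    The partition-as-model idea can be made concrete by enumerating all partitions of the treatments and scoring each as a separate model. The paper develops a Bayesian treatment with priors over partitions; the sketch below substitutes a simple BIC score and synthetic data.

```python
import numpy as np

def set_partitions(items):
    """Yield every partition of a list of items into non-empty clusters."""
    if len(items) == 1:
        yield [items]
        return
    first, rest = items[0], items[1:]
    for smaller in set_partitions(rest):
        for i, cluster in enumerate(smaller):
            yield smaller[:i] + [[first] + cluster] + smaller[i + 1:]
        yield [[first]] + smaller

def bic_for_partition(groups, partition):
    """BIC of a normal model where treatments in one cluster share a mean."""
    n = sum(len(g) for g in groups)
    rss = 0.0
    for cluster in partition:
        pooled = np.concatenate([groups[t] for t in cluster])
        rss += ((pooled - pooled.mean()) ** 2).sum()
    k = len(partition) + 1  # one mean per cluster + a common variance
    return n * np.log(rss / n) + k * np.log(n)

# Hypothetical responses for three treatments (treatments 0 and 1 similar).
rng = np.random.default_rng(3)
groups = [rng.normal(mu, 1.0, 10) for mu in (0.0, 0.1, 2.0)]
best = min(set_partitions([0, 1, 2]), key=lambda p: bic_for_partition(groups, p))
print("best partition of treatments:", best)
```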

  17. A transformation approach for collaboration based requirement models

    CERN Document Server

    Harbouche, Ahmed; Mokhtari, Aicha

    2012-01-01

    Distributed software engineering is widely recognized as a complex task. Among the inherent complexities is the process of obtaining a system design from its global requirement specification. This paper deals with such transformation process and suggests an approach to derive the behavior of a given system components, in the form of distributed Finite State Machines, from the global system requirements, in the form of an augmented UML Activity Diagrams notation. The process of the suggested approach is summarized in three steps: the definition of the appropriate source Meta-Model (requirements Meta-Model), the definition of the target Design Meta-Model and the definition of the rules to govern the transformation during the derivation process. The derivation process transforms the global system requirements described as UML diagram activities (extended with collaborations) to system roles behaviors represented as UML finite state machines. The approach is implemented using Atlas Transformation Language (ATL).

  18. A TRANSFORMATION APPROACH FOR COLLABORATION BASED REQUIREMENT MODELS

    Directory of Open Access Journals (Sweden)

    Ahmed Harbouche

    2012-02-01

    Full Text Available Distributed software engineering is widely recognized as a complex task. Among the inherent complexities is the process of obtaining a system design from its global requirement specification. This paper deals with such transformation process and suggests an approach to derive the behavior of a given system components, in the form of distributed Finite State Machines, from the global system requirements, in the form of an augmented UML Activity Diagrams notation. The process of the suggested approach is summarized in three steps: the definition of the appropriate source Meta-Model (requirements Meta-Model), the definition of the target Design Meta-Model and the definition of the rules to govern the transformation during the derivation process. The derivation process transforms the global system requirements described as UML diagram activities (extended with collaborations) to system roles behaviors represented as UML finite state machines. The approach is implemented using Atlas Transformation Language (ATL).

  19. Functional and cosmetic results of a lower eyelid crease approach for external dacryocystorhinostomy

    Directory of Open Access Journals (Sweden)

    Patricia Mitiko Santello Akaishi

    2011-08-01

    Full Text Available PURPOSE: External dacryocystorhinostomy is routinely performed through a cutaneous vertical incision placed on the lateral aspect of the nose. The lower eyelid crease approach has seldom been reported. The purpose of this study is to report the cosmetic and functional results of the lid crease approach for external dacryocystorhinostomy in a series of patients. METHODS: Prospective, interventional case series. Twenty-five consecutive patients (17 women) ranging in age from 3 to 85 years (mean ± SD = 44.84 ± 23.67) were included in the study. All patients but one underwent unilateral external dacryocystorhinostomy with a 10 to 15 mm horizontal incision placed on a subciliary relaxed eyelid tension line. The inner canthus was photographed with a Nikon D70S digital camera with a macro lens at a resolution of 3008 x 2000 pixels at 1, 3 and 6 months after surgery. The resulting scar was judged from the photographs by 3 observers (an ophthalmologist, a plastic surgeon and a head and neck surgeon) according to a four-level scale (1 = unapparent, 2 = minimally visible, 3 = moderately visible, 4 = very visible). RESULTS: The surgery was easily performed in all patients, with a 90.48% success rate. Three of the elderly patients (ages 61, 79 and 85 yr) developed mild lacrimal punctum ectropion, which resolved with conservative treatment. One patient had a hypometric blink which recovered spontaneously within one month. The mean score for scar visibility was 2.19 (1st month), 1.65 (3rd month) and 1.44 (6th month). CONCLUSIONS: The eyelid crease approach is an excellent option for external dacryocystorhinostomy. It leaves an unapparent scar from the first month after surgery, even in younger patients. The functional results are excellent and comparable to other techniques. Care should be taken in elderly patients with lower eyelid laxity in order to prevent lacrimal punctum ectropion.

  20. Alternative surgical approach for the management of uterine prolapse in young women: preliminary results.

    Science.gov (United States)

    Karatayli, Rengin; Balci, Osman; Gezginç, Kazim; Yildirim, Pinar; Karanfil, Fikriye; Acar, Ali

    2013-10-01

    To demonstrate an alternative surgical approach for the management of uterine prolapse in young women, using a technique previously described in the published work for post-hysterectomy vaginal vault suspension, and to report the operative results. The study population consisted of 12 women aged 28-41 years who had stage 4 uterine prolapse and were surgically treated by abdominal hysteropexy using autogenous rectus fascia strips. Operative results and postoperative follow-up Pelvic Organ Prolapse Quantification and Prolapse Quality of Life scores were recorded. Mean age of the patients was 35.5 ± 4.1 years (range, 28-41). Mean parity was 2.6 ± 1.0 (range, 1-5). Mean operation time was 32.0 ± 5.2 min (range, 25-42). All patients were discharged on the third postoperative day and no complications were observed. Mean follow-up was 20 ± 7.0 months (range, 12-36). All patients had complete remission of uterine prolapse and none had complaints related to the operation. Abdominal hysteropexy using rectus fascia strips provides a safe alternative approach for the management of uterine prolapse in young women who wish to preserve their uterus, but further analysis is needed to confirm these results. © 2013 The Authors. Journal of Obstetrics and Gynaecology Research © 2013 Japan Society of Obstetrics and Gynecology.

  1. An algebraic approach to modeling in software engineering

    Energy Technology Data Exchange (ETDEWEB)

    Loegel, G.J. [Superconducting Super Collider Lab., Dallas, TX (United States)]|[Michigan Univ., Ann Arbor, MI (United States); Ravishankar, C.V. [Michigan Univ., Ann Arbor, MI (United States)

    1993-09-01

    Our work couples the formalism of universal algebras with the engineering techniques of mathematical modeling to develop a new approach to the software engineering process. Our purpose in using this combination is twofold. First, abstract data types and their specification using universal algebras can be considered a common point between the practical requirements of software engineering and the formal specification of software systems. Second, mathematical modeling principles provide us with a means for effectively analyzing real-world systems. We first use modeling techniques to analyze a system and then represent the analysis using universal algebras. The rest of the software engineering process exploits properties of universal algebras that preserve the structure of our original model. This paper describes our software engineering process and our experience using it on both research and commercial systems. We need a new approach because current software engineering practices often deliver software that is difficult to develop and maintain. Formal software engineering approaches use universal algebras to describe "computer science" objects like abstract data types, but in practice software errors are often caused because "real-world" objects are improperly modeled. There is a large semantic gap between the customer's objects and abstract data types. In contrast, mathematical modeling uses engineering techniques to construct valid models for real-world systems, but these models are often implemented in an ad hoc manner. A combination of the best features of both approaches would enable software engineering to formally specify and develop software systems that better model real systems. Software engineering, like mathematical modeling, should concern itself first and foremost with understanding a real system and its behavior under given circumstances, and then with expressing this knowledge in an executable form.

  2. A mechanism-based approach to modeling ductile fracture.

    Energy Technology Data Exchange (ETDEWEB)

    Bammann, Douglas J.; Hammi, Youssef; Antoun, Bonnie R.; Klein, Patrick A.; Foulk, James W., III; McFadden, Sam X.

    2004-01-01

    Ductile fracture in metals has been observed to result from the nucleation, growth, and coalescence of voids. The evolution of this damage is inherently history dependent, affected by how time-varying stresses drive the formation of defect structures in the material. At some critically damaged state, the softening response of the material leads to strain localization across a surface that, under continued loading, becomes the faces of a crack in the material. Modeling localization of strain requires introduction of a length scale to make the energy dissipated in the localized zone well-defined. In this work, a cohesive zone approach is used to describe the post-bifurcation evolution of material within the localized zone. The relations are developed within a thermodynamically consistent framework that incorporates temperature- and rate-dependent evolution relationships motivated by dislocation mechanics. As such, we do not prescribe the evolution of tractions with opening displacements across the localized zone a priori. The evolution of tractions is itself an outcome of the solution of particular initial boundary value problems. The stress and internal state of the material at the point of bifurcation provide the initial conditions for the subsequent evolution of the cohesive zone. The models we develop are motivated by in situ scanning electron microscopy of three-point bending experiments using 6061-T6 aluminum and 304L stainless steel. The in situ observations of the initiation and evolution of fracture zones reveal the scale over which the failure mechanisms act. In addition, these observations are essential for motivating the micromechanically based models of the decohesion process that incorporate the effects of loading mode mixity, temperature, and loading rate. The response of these new cohesive zone relations is demonstrated by modeling the three-point bending configuration used for the experiments. In addition, we survey other methods with the potential

  3. An approach to model validation and model-based prediction -- polyurethane foam case study.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistical literatures. In this report we have provided a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the "philosophy" behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical

  4. Teaching Service Modelling to a Mixed Class: An Integrated Approach

    Directory of Open Access Journals (Sweden)

    Jeremiah D. DENG

    2015-04-01

    Service modelling has become an increasingly important area in today's telecommunications and information systems practice. We have adapted a Network Design course in order to teach service modelling to a mixed class of students from both telecommunication engineering and information systems backgrounds. An integrated approach engaging mathematics teaching with strategies such as problem-solving, visualization, and the use of examples and simulations has been developed. Assessment of student learning outcomes indicates that the proposed course delivery approach succeeded in eliciting comparable and satisfactory performance from students of different educational backgrounds.

  5. Comparison of blade-strike modeling results with empirical data

    Energy Technology Data Exchange (ETDEWEB)

    Ploskey, Gene R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Carlson, Thomas J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2004-03-01

    This study is the initial stage of further investigation into the dynamics of injury to fish during passage through a turbine runner. As part of the study, Pacific Northwest National Laboratory (PNNL) estimated the probability of blade strike, and associated injury, as a function of fish length and turbine operating geometry at two adjacent turbines in Powerhouse 1 of Bonneville Dam. Units 5 and 6 had identical intakes, stay vanes, wicket gates, and draft tubes, but Unit 6 had a new runner and curved discharge ring to minimize gaps between the runner hub and blades and between the blade tips and discharge ring. We used a mathematical model to predict blade strike associated with two Kaplan turbines and compared results with empirical data from biological tests conducted in 1999 and 2000. Blade-strike models take into consideration the geometry of the turbine blades and discharges as well as fish length, orientation, and distribution along the runner. The first phase of this study included a sensitivity analysis to consider the effects of difference in geometry and operations between families of turbines on the strike probability response surface. The analysis revealed that the orientation of fish relative to the leading edge of a runner blade and the location that fish pass along the blade between the hub and blade tip are critical uncertainties in blade-strike models. Over a range of discharges, the average prediction of injury from blade strike was two to five times higher than average empirical estimates of visible injury from shear and mechanical devices. Empirical estimates of mortality may be better metrics for comparison to predicted injury rates than other injury measures for fish passing at mid-blade and blade-tip locations.
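
    The deterministic kernel shared by blade-strike models of this kind compares the fish's projected length with the length of water passing between successive blade sweeps. Below is a minimal sketch assuming a Von Raben-style formulation; the function name and all parameter values are hypothetical and are not those used in the PNNL study.

        import math

        def strike_probability(fish_length_m, n_blades, rpm, axial_velocity_ms,
                               orientation_deg=90.0):
            """Chance a fish is overtaken by a blade while crossing the runner plane."""
            # Time between successive blade passages at a fixed point (s).
            blade_interval_s = 60.0 / (rpm * n_blades)
            # Length the fish presents to the blade depends on its orientation.
            projected_length_m = fish_length_m * math.sin(math.radians(orientation_deg))
            # Length of the water column passing between two blade sweeps.
            water_per_passage_m = axial_velocity_ms * blade_interval_s
            return min(1.0, projected_length_m / water_per_passage_m)

        # A 15 cm fish, 6-blade runner at 75 rpm, 7 m/s axial flow.
        print(strike_probability(0.15, n_blades=6, rpm=75.0, axial_velocity_ms=7.0))

    The sketch makes the two critical uncertainties named above explicit: orientation enters through the projected length, and the passage location enters through the local axial velocity.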

  6. A Spatial Clustering Approach for Stochastic Fracture Network Modelling

    Science.gov (United States)

    Seifollahi, S.; Dowd, P. A.; Xu, C.; Fadakar, A. Y.

    2014-07-01

    Fracture network modelling plays an important role in many application areas in which the behaviour of a rock mass is of interest. These areas include mining, civil, petroleum, water and environmental engineering and geothermal systems modelling. The aim is to model the fractured rock to assess fluid flow or the stability of rock blocks. One important step in fracture network modelling is to estimate the number of fractures and the properties of individual fractures such as their size and orientation. Due to the lack of data and the complexity of the problem, there are significant uncertainties associated with fracture network modelling in practice. Our primary interest is the modelling of fracture networks in geothermal systems and, in this paper, we propose a general stochastic approach to fracture network modelling for this application. We focus on using the seismic point cloud detected during the fracture stimulation of a hot dry rock reservoir to create an enhanced geothermal system; these seismic points are the conditioning data in the modelling process. The seismic points can be used to estimate the geographical extent of the reservoir, the amount of fracturing and the detailed geometries of fractures within the reservoir. The objective is to determine a fracture model from the conditioning data by minimizing the sum of the distances of the points from the fitted fracture model. Fractures are represented as line segments connecting two points in two-dimensional applications or as ellipses in three-dimensional (3D) cases. The novelty of our model is twofold: (1) it comprises a comprehensive fracture modification scheme based on simulated annealing and (2) it introduces new spatial approaches, a goodness-of-fit measure for the fitted fracture model, a measure for fracture similarity and a clustering technique for proposing a locally optimal solution for fracture parameters. We use a simulated dataset to demonstrate the application of the proposed approach
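
    The fitting step described here can be sketched compactly: represent each fracture as a 2D line segment, score a configuration by the summed point-to-nearest-segment distance, and perturb segments under a simulated-annealing accept/reject rule. The toy below is illustrative only (synthetic points, hypothetical cooling schedule); the paper's actual scheme additionally uses similarity measures and clustering.

        import numpy as np

        rng = np.random.default_rng(0)
        points = rng.uniform(0, 10, size=(100, 2))    # stand-in "seismic point cloud"

        def seg_dist(p, a, b):
            """Distance from point p to the segment with endpoints a and b."""
            ab, ap = b - a, p - a
            t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
            return np.linalg.norm(p - (a + t * ab))

        def misfit(segments):
            """Sum of distances from each point to its nearest fracture segment."""
            return sum(min(seg_dist(p, a, b) for a, b in segments) for p in points)

        segments = [(rng.uniform(0, 10, 2), rng.uniform(0, 10, 2)) for _ in range(5)]
        cost, temperature = misfit(segments), 1.0
        for _ in range(1500):
            trial = list(segments)
            i = rng.integers(len(trial))
            a, b = trial[i]
            trial[i] = (a + rng.normal(0, 0.2, 2), b + rng.normal(0, 0.2, 2))
            trial_cost = misfit(trial)
            # Metropolis rule: accept improvements, sometimes accept worse moves.
            if trial_cost < cost or rng.random() < np.exp((cost - trial_cost) / temperature):
                segments, cost = trial, trial_cost
            temperature *= 0.999                      # geometric cooling schedule
        print(f"final misfit: {cost:.1f}")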

  7. Position-sensitive transition edge sensor modeling and results

    Energy Technology Data Exchange (ETDEWEB)

    Hammock, Christina E-mail: chammock@milkyway.gsfc.nasa.gov; Figueroa-Feliciano, Enectali; Apodaca, Emmanuel; Bandler, Simon; Boyce, Kevin; Chervenak, Jay; Finkbeiner, Fred; Kelley, Richard; Lindeman, Mark; Porter, Scott; Saab, Tarek; Stahle, Caroline

    2004-03-11

    We report the latest design and experimental results for a Position-Sensitive Transition-Edge Sensor (PoST). The PoST is motivated by the desire to achieve a larger field-of-view without increasing the number of readout channels. A PoST consists of a one-dimensional array of X-ray absorbers connected on each end to a Transition Edge Sensor (TES). Position differentiation is achieved through a comparison of pulses between the two TESs and X-ray energy is inferred from a sum of the two signals. Optimizing such a device involves studying the available parameter space which includes device properties such as heat capacity and thermal conductivity as well as TES read-out circuitry parameters. We present results for different regimes of operation and the effects on energy resolution, throughput, and position differentiation. Results and implications from a non-linear model developed to study the saturation effects unique to PoSTs are also presented.
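
    The two-signal readout described here admits a simple first-order estimator: sum the two TES pulses for energy, and use their normalized difference for position. A hedged sketch follows; real PoST analysis works on full pulse shapes through a non-linear device model, so this is only the basic idea.

        def post_estimates(pulse_a, pulse_b):
            """pulse_a, pulse_b: integrated pulse amplitudes from the two TESs."""
            energy_proxy = pulse_a + pulse_b                      # scales with photon energy
            position = (pulse_a - pulse_b) / (pulse_a + pulse_b)  # -1 .. +1 along the array
            return energy_proxy, position

        # A hit closer to the absorber end read out by TES "a".
        print(post_estimates(3.2, 1.1))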

  8. Role of numerical scheme choice on the results of mathematical modeling of combustion and detonation

    Science.gov (United States)

    Yakovenko, I. S.; Kiverin, A. D.; Pinevich, S. G.; Ivanov, M. F.

    2016-11-01

    The present study discusses the capabilities of the dissipation-free CABARET numerical method for modeling unsteady reactive gasdynamic flows. In the framework of the present research, the method was adapted to reactive flows governed by a real-gas equation of state and applied to several typical problems of unsteady gas dynamics and combustion modeling, such as ignition and detonation initiation by localized energy sources. Solutions were thoroughly analyzed and compared with those obtained using the modified Euler-Lagrange method of "coarse" particles. The results allowed us to identify the range of phenomena in which artificial effects of the numerical approach may masquerade as physical ones, and to develop guidelines for selecting a numerical approach appropriate for unsteady reactive gasdynamic flow modeling.

  9. Optimal design of supply chain network under uncertainty environment using hybrid analytical and simulation modeling approach

    Science.gov (United States)

    Chiadamrong, N.; Piyathanavong, V.

    2017-04-01

    Models that aim to optimize the design of supply chain networks have gained increasing interest in the supply chain literature. Mixed-integer linear programming and discrete-event simulation are widely used for such optimization problems. We present a hybrid approach to support decisions for supply chain network design, using a combination of analytical and discrete-event simulation models. The proposed approach iterates between the two models until the difference between subsequent solutions satisfies a pre-determined termination criterion. Its effectiveness is illustrated by an example, which shows near-optimal results with much faster solving times than a conventional simulation-based optimization model. The efficacy of the proposed hybrid approach is promising, and it can be applied as a powerful tool in designing real supply chain networks. It also provides the possibility to model and solve more realistic problems, which incorporate dynamism and uncertainty.
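
    The iterative procedure can be pictured as a fixed-point loop between the two models. A schematic, runnable toy is shown below; the real method couples a mixed-integer program with a discrete-event simulator, and all numbers here are invented.

        import random

        random.seed(1)
        DEMAND = 100.0

        def analytical_capacity(efficiency):
            """Deterministic sizing step: capacity needed at an assumed efficiency."""
            return DEMAND / efficiency

        def simulated_efficiency(capacity, n=5000):
            """Stochastic evaluation step: utilization lost to random disruptions."""
            served = sum(min(capacity * random.uniform(0.7, 1.0), DEMAND) for _ in range(n))
            return (served / n) / capacity

        capacity = analytical_capacity(1.0)           # optimistic first design
        for it in range(20):
            new_capacity = analytical_capacity(simulated_efficiency(capacity))
            if abs(new_capacity - capacity) < 0.01:   # pre-determined termination criterion
                break
            capacity = new_capacity
        print(f"converged capacity ~ {capacity:.1f} after {it + 1} iterations")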

  10. Effective use of integrated hydrological models in basin-scale water resources management: surrogate modeling approaches

    Science.gov (United States)

    Zheng, Y.; Wu, B.; Wu, X.

    2015-12-01

    Integrated hydrological models (IHMs) consider surface water and subsurface water as a unified system, and have been widely adopted in basin-scale water resources studies. However, due to their mathematical complexity and high computational cost, it is difficult to implement IHMs in an iterative model evaluation process (e.g., Monte Carlo simulation or simulation-optimization analysis), which diminishes their applicability for supporting decision-making in real-world situations. Our studies investigated how to effectively use complex IHMs to address real-world water issues via surrogate modeling. Three surrogate modeling approaches were considered: 1) DYCORS (DYnamic COordinate search using Response Surface models), a well-established response-surface-based optimization algorithm; 2) SOIM (Surrogate-based Optimization for Integrated surface water-groundwater Modeling), a response-surface-based optimization algorithm that we developed specifically for IHMs; and 3) the Probabilistic Collocation Method (PCM), a stochastic response surface approach. Our investigation was based on a modeling case study in the Heihe River Basin (HRB), China's second largest endorheic river basin, employing the GSFLOW (Coupled Ground-Water and Surface-Water Flow) model. Two decision problems were discussed: one is to optimize, both in time and in space, the conjunctive use of surface water and groundwater for agricultural irrigation in the middle HRB region; the other is to cost-effectively collect hydrological data based on a data-worth evaluation. Overall, our results highlight the value of incorporating an IHM in making decisions on water resources management and hydrological data collection. An IHM like GSFLOW provides great flexibility in formulating proper objective functions and constraints for various optimization problems. On the other hand, it has been demonstrated that surrogate modeling approaches can pave the path for such incorporation in real
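
    All three approaches share the same basic pattern: run the expensive IHM at a modest number of sampled parameter sets, fit a cheap response surface, and optimize on the surrogate instead. A minimal sketch of that pattern using a radial basis function surrogate; the quadratic stand-in for the IHM and all settings are hypothetical.

        import numpy as np
        from scipy.interpolate import RBFInterpolator
        from scipy.optimize import minimize

        def expensive_model(x):
            """Placeholder for a full IHM run (here a cheap quadratic)."""
            return (x[0] - 0.3) ** 2 + 2.0 * (x[1] - 0.7) ** 2

        rng = np.random.default_rng(11)
        X = rng.uniform(0, 1, size=(40, 2))           # design of experiments
        y = np.array([expensive_model(x) for x in X])
        surrogate = RBFInterpolator(X, y)             # cheap response surface

        res = minimize(lambda x: surrogate(x[None, :])[0],
                       x0=np.array([0.5, 0.5]), bounds=[(0, 1), (0, 1)])
        print("surrogate optimum:", res.x, "(true optimum: [0.3, 0.7])")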

  11. Dark Stars: Improved Models and First Pulsation Results

    CERN Document Server

    Rindler-Daller, Tanja; Freese, Katherine; Winget, Donald E; Paxton, Bill

    2014-01-01

    (Abridged) We use the stellar evolution code MESA to study dark stars. Dark stars (DSs), which are powered by dark matter (DM) self-annihilation rather than by nuclear fusion, may be the first stars to form in the Universe. We compute stellar models for accreting DSs with masses up to 10^6 M_sun. While previous calculations were limited to polytropic interiors, our current calculations use MESA, a modern stellar evolution code, to solve the equations of stellar structure. The heating due to DM annihilation is self-consistently included, assuming extended adiabatic contraction of DM within the minihalos in which DSs form. We find remarkably good overall agreement with the basic results of previous models. There are some differences, however, in the details, with positive implications for the observability of DSs. We find that, in the mass range of 10^4 - 10^5 M_sun, using MESA, our DSs are hotter by a factor of 1.5 than those in Freese et al. (2010), smaller in radius by a factor of 0.6, denser by a factor of 3...

  12. MODELING RESULTS FROM CESIUM ION EXCHANGE PROCESSING WITH SPHERICAL RESINS

    Energy Technology Data Exchange (ETDEWEB)

    Nash, C.; Hang, T.; Aleman, S.

    2011-01-03

    Ion exchange modeling was conducted at the Savannah River National Laboratory to compare the performance of two organic resins in support of Small Column Ion Exchange (SCIX). In-tank ion exchange (IX) columns are being considered for cesium removal at Hanford and the Savannah River Site (SRS). The spherical form of resorcinol formaldehyde ion exchange resin (sRF) as well as a hypothetical spherical SuperLig® 644 (SL644) are evaluated for decontamination of dissolved saltcake wastes (supernates). Both SuperLig® and resorcinol formaldehyde resin beds can exhibit hydraulic problems in their granular (nonspherical) forms. SRS waste is generally lower in potassium and organic components than Hanford waste. Using VERSE-LC Version 7.8 along with cesium Freundlich/Langmuir isotherms to simulate the waste decontamination in ion exchange columns, spherical SL644 was found to reduce column cycling by 50% for high-potassium supernates, but sRF performed equally well for the lowest-potassium feeds. Reduced cycling reduces nitric acid (resin elution) and sodium addition (resin regeneration), thereby significantly reducing life-cycle operational costs. These findings motivate the development of a spherical form of SL644. This work demonstrates the versatility of ion exchange modeling for studying the effects of resin characteristics on processing cycles, rates, and cold chemical consumption. The value of a resin with increased selectivity for cesium over potassium can be assessed for further development.
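
    For reference, the two textbook isotherm forms underlying such column models are shown below; a hybrid Freundlich/Langmuir correlation of the kind referenced here combines the two, and the exact correlation used with VERSE-LC is not reproduced.

        Q_\mathrm{Langmuir}(C) = \frac{Q_\mathrm{max} K C}{1 + K C},
        \qquad
        Q_\mathrm{Freundlich}(C) = K_F \, C^{1/n}

    Here C is the liquid-phase cesium concentration, Q the equilibrium resin loading, and Q_max, K, K_F and n are fitted constants.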

  13. Do recommender systems benefit users? a modeling approach

    Science.gov (United States)

    Yeung, Chi Ho

    2016-04-01

    Recommender systems are present in many web applications to guide purchase choices. They increase sales and benefit sellers, but whether they benefit customers by providing relevant products remains less explored. While in many cases the recommended products are relevant to users, in other cases customers may be tempted to purchase the products only because they are recommended. Here we introduce a model to examine the benefit of recommender systems for users, and find that recommendations from the system can be equivalent to random draws if one always follows the recommendations and seldom purchases according to his or her own preference. Nevertheless, with sufficient information about user preferences, recommendations become accurate and an abrupt transition to this accurate regime is observed for some of the studied algorithms. On the other hand, we find that high estimated accuracy indicated by common accuracy metrics is not necessarily equivalent to high real accuracy in matching users with products. This disagreement between estimated and real accuracy serves as an alarm for operators and researchers who evaluate recommender systems merely with accuracy metrics. We tested our model with a real dataset and observed similar behaviors. Finally, a recommendation approach with improved accuracy is suggested. These results imply that recommender systems can benefit users, but the more frequently a user purchases the recommended products, the less relevant the recommended products are in matching user taste.
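
    The headline finding, that blindly followed recommendations from an uninformed system reduce to random draws, is easy to reproduce in a toy simulation. Everything below is hypothetical and far simpler than the paper's model.

        import random

        random.seed(42)
        N_ITEMS = 50

        def relevance(p_follow, n_steps=10000):
            taste = set(random.sample(range(N_ITEMS), 10))  # items the user truly likes
            hits = 0
            for _ in range(n_steps):
                if random.random() < p_follow:
                    choice = random.randrange(N_ITEMS)      # uninformed recommender
                else:
                    choice = random.choice(sorted(taste))   # user's own preference
                hits += choice in taste
            return hits / n_steps

        for p in (0.0, 0.5, 1.0):
            print(f"follows recommendations {p:.0%} of the time -> relevance {relevance(p):.2f}")

    With p_follow = 1.0 the relevance falls to the random-draw baseline of 0.2, echoing the behavior described in the abstract; an informed recommender would raise that baseline.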

  14. Toward the design of sustainable biofuel landscapes: A modeling approach

    Science.gov (United States)

    Izaurralde, R. C.; Zhang, X.; Manowitz, D. H.; Sahajpal, R.

    2011-12-01

    Biofuel crops have emerged as promising feedstocks for advanced bioenergy production in the form of cellulosic ethanol and biodiesel. However, large-scale deployment of biofuel crops for energy production has the potential to conflict with food production and generate a myriad of environmental outcomes related to land and water resources (e.g., decreases in soil carbon storage, increased erosion, altered runoff, deterioration in water quality). In order to anticipate the possible impacts of biofuel crop production on food production systems and the environment and contribute to the design of sustainable biofuel landscapes, we developed a spatially-explicit integrated modeling framework (SEIMF) aimed at understanding, among other objectives, the complex interactions among land, water, and energy. The framework is a research effort of the DOE Great Lakes Bioenergy Research Center. The SEIMF has three components: (1) a GIS-based data analysis system, (2) the biogeochemical model EPIC (Environmental Policy Integrated Climate), and (3) an evolutionary multi-objective optimization algorithm for examining trade-offs between biofuel energy production and ecosystem responses. The SEIMF was applied at biorefinery scale to simulate biofuel production scenarios and the yield and environmental results were used to develop trade-offs, economic and life-cycle analyses. The SEIMF approach was also applied to test the hypothesis that growing perennial herbaceous species on marginal lands can satisfy a significant fraction of targeted demands while avoiding competition with food systems and maintaining ecosystem services.

  15. Forecasting wind-driven wildfires using an inverse modelling approach

    Directory of Open Access Journals (Sweden)

    O. Rios

    2014-06-01

    A technology able to rapidly forecast wildfire dynamics would lead to a paradigm shift in the response to emergencies, providing the Fire Service with essential information about the ongoing fire. This paper presents and explores a novel methodology to forecast wildfire dynamics in wind-driven conditions, using real-time data assimilation and inverse modelling. The forecasting algorithm combines Rothermel's rate of spread theory with a perimeter expansion model based on Huygens' principle, and solves the optimisation problem with a tangent linear approach and forward automatic differentiation. Its potential is investigated using synthetic data and evaluated in different wildfire scenarios. The results show the capacity of the method to quickly predict the location of the fire front with a positive lead time (ahead of the event) of the order of 10 min for a spatial scale of 100 m. The greatest strengths of our method are its lightness, speed and flexibility. We specifically tailor the forecast to be efficient and computationally cheap so that it can be used in mobile systems for field deployment and operational use. Thus, we put emphasis on producing a positive lead time and on the means to maximise it.
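
    The perimeter-expansion half of such an algorithm can be sketched as a front of vertices, each advancing along its outward normal at the local rate of spread, stretched downwind. A toy geometric version follows: the initial front is a circle, Rothermel's ROS is replaced by a constant, and the wind term is a made-up multiplier rather than a true elliptical Huygens wavelet.

        import numpy as np

        def expand_front(front, ros, wind_dir, wind_stretch, dt):
            """front: (N, 2) closed polygon of the fire perimeter, counterclockwise."""
            tangent = np.roll(front, -1, axis=0) - np.roll(front, 1, axis=0)
            normal = np.stack([tangent[:, 1], -tangent[:, 0]], axis=1)
            normal /= np.linalg.norm(normal, axis=1, keepdims=True)
            # Spread faster where the outward normal points downwind.
            speed = ros * (1.0 + wind_stretch * np.clip(normal @ wind_dir, 0.0, None))
            return front + speed[:, None] * normal * dt

        theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
        front = 10.0 * np.stack([np.cos(theta), np.sin(theta)], axis=1)  # 10 m circle
        for _ in range(60):                            # one minute with dt = 1 s
            front = expand_front(front, ros=0.5, wind_dir=np.array([1.0, 0.0]),
                                 wind_stretch=2.0, dt=1.0)
        print(f"downwind reach after 60 s: {front[:, 0].max():.1f} m")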

  16. Dipole model test with one superconducting coil; results analysed

    CERN Document Server

    Durante, M; Ferracin, P; Fessia, P; Gauthier, R; Giloux, C; Guinchard, M; Kircher, F; Manil, P; Milanese, A; Millot, J-F; Muñoz Garcia, J-E; Oberli, L; Perez, J-C; Pietrowicz, S; Rifflet, J-M; de Rijk, G; Rondeaux, F; Todesco, E; Viret, P; Ziemianski, D

    2013-01-01

    This report is the deliverable report 7.3.1 “Dipole model test with one superconducting coil; results analysed”. The report has four parts: “Design report for the dipole magnet”, “Dipole magnet structure tested in LN2”, “Nb3Sn strand procured for one dipole magnet” and “One test double pancake copper coil made”. The four report parts show that, although the magnet construction will only be completed by the end of 2014, all elements are present for a successful completion. Given the importance of the project for the future of the participants and the significant investments they have made, there is a full commitment to finishing the project.

  18. Post-closure biosphere assessment modelling: comparison of complex and more stylised approaches.

    Science.gov (United States)

    Walke, Russell C; Kirchner, Gerald; Xu, Shulan; Dverstorp, Björn

    2015-10-01

    Geological disposal facilities are the preferred option for high-level radioactive waste, due to their potential to provide isolation from the surface environment (biosphere) on very long timescales. Assessments need to strike a balance between stylised models and more complex approaches that draw more extensively on site-specific information. This paper explores the relative merits of complex versus more stylised biosphere models in the context of a site-specific assessment. The more complex biosphere modelling approach was developed by the Swedish Nuclear Fuel and Waste Management Co (SKB) for the Forsmark candidate site for a spent nuclear fuel repository in Sweden. SKB's approach is built on a landscape development model, whereby radionuclide releases to distinct hydrological basins/sub-catchments (termed 'objects') are represented as they evolve through land rise and climate change. Each of the seventeen objects is represented with more than 80 site-specific parameters, of which about 22 are time-dependent, resulting in over 5000 input values per object. The more stylised biosphere models developed for this study represent releases to individual ecosystems without environmental change and include the most plausible transport processes. In the context of the regulatory review of the landscape modelling approach adopted in the SR-Site assessment in Sweden, the more stylised representation has helped to build understanding of the more complex modelling approaches by providing bounding results, checking the reasonableness of the more complex modelling, highlighting uncertainties introduced through conceptual assumptions and helping to quantify the conservatisms involved. The more stylised biosphere models are also shown to be capable of reproducing the results of more complex approaches. A major recommendation is that biosphere assessments need to justify the degree of complexity in modelling approaches as well as simplifying and conservative assumptions. In light of

  19. An Integrated, Acceptance-Based Behavioral Approach for Depression With Social Anxiety: Preliminary Results.

    Science.gov (United States)

    Dalrymple, Kristy L; Morgan, Theresa A; Lipschitz, Jessica M; Martinez, Jennifer H; Tepe, Elizabeth; Zimmerman, Mark

    2014-07-01

    Depression and social anxiety disorder (SAD) are highly comorbid, resulting in greater severity and functional impairment compared with each disorder alone. Although recently transdiagnostic treatments have been developed, no known treatments have addressed this comorbidity pattern specifically. Preliminary support exists for acceptance-based approaches for depression and SAD separately, and they may be more efficacious for comorbid depression and anxiety compared with traditional cognitive-behavioral approaches. The aim of the current study was to develop and pilot test an integrated acceptance-based behavioral treatment for depression and comorbid SAD. Participants included 38 patients seeking pharmacotherapy at an outpatient psychiatry practice, who received 16 individual sessions of the therapy. Results showed significant improvement in symptoms, functioning, and processes from pre- to post-treatment, as well as high satisfaction with the treatment. These results support the preliminary acceptability, feasibility, and effectiveness of this treatment in a typical outpatient psychiatry practice, and suggest that further research on this treatment in larger randomized trials is warranted. © The Author(s) 2014.

  20. Disclosure of HIV results among discordant couples in Rakai, Uganda: a facilitated couple counselling approach.

    Science.gov (United States)

    Kairania, Robert; Gray, Ronald H; Kiwanuka, Noah; Makumbi, Fredrick; Sewankambo, Nelson K; Serwadda, David; Nalugoda, Fred; Kigozi, Godfrey; Semanda, John; Wawer, Maria J

    2010-09-01

    Disclosure of HIV sero-positive results among HIV-discordant couples in sub-Saharan Africa is generally low. We describe a facilitated couple counselling approach to enhance disclosure among HIV-discordant couples. Using unique identifiers, 293 HIV-discordant couples were identified through retrospective linkage of married or cohabiting consenting adults individually enrolled into a cohort study and into two randomised trials of male circumcision in Rakai, Uganda. HIV-discordant couples and a random sample of HIV-infected concordant and HIV-negative concordant couples (to mask HIV status) were invited to sensitisation meetings to discuss the benefits of disclosure and couple counselling. HIV-infected partners were subsequently contacted to encourage HIV disclosure to their HIV-uninfected partners. If the index positive partner agreed, the counsellor facilitated the disclosure of HIV results and provided ongoing support. The proportion disclosing was determined. Eighty-one per cent of HIV-positive partners in discordant relationships disclosed their status to their HIV-uninfected partners in the presence of the counsellor. The rates of disclosure were 81.3% among HIV-positive men and 80.2% among HIV-positive women in discordant couples. Disclosure did not vary by age, education or occupation. In summary, disclosure of HIV-positive results in discordant couples using a facilitated couple counselling approach is high, but requires a stepwise process of sensitisation and agreement by the infected partner.

  1. Assessing the agricultural costs of climate change: Combining results from crop and economic models

    Science.gov (United States)

    Howitt, R. E.

    2016-12-01

    Any perturbation to a resource system used by humans elicits both technical and behavioral changes. For agricultural production, economic criteria and their associated models are usually good predictors of human behavior. Estimation of the agricultural costs of climate change requires careful downscaling of global climate models to the level of agricultural regions. Plant growth models for the dominant crops are required to accurately show the full range of trade-offs and adaptation mechanisms needed to minimize the cost of climate change. Faced with shifts in the fundamental resource base of agriculture, human behavior can either exacerbate or offset the impact of climate change on agriculture. In addition, agriculture can be an important source of increased carbon sequestration; however, the effectiveness and timing of this sequestration depend on agricultural practices and farmer behavior. Plant growth models and economic models have been shown to interact in two broad fashions. First, there is the direct embedding of a parametric representation of plant growth simulations in the economic model's production function. A second and more general approach is to have plant growth and crop process models interact with economic models as they are simulated. The development of more general wrapper programs that transfer information between models rapidly and efficiently will encourage this approach, although it does introduce complications in matching disparate scales, both in time and in space, between models. Another characteristic behavioral response of agricultural production is the distinction between the intensive margin, which considers the quantity of a resource (for example, fertilizer) used for a given crop, and the extensive margin of adjustment, which measures how farmers will adjust their crop proportions in response to climate change. Ideally, economic models will measure the response at both these margins of adjustment.
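
    The first coupling style, embedding a parametric crop response directly in the economic production function, can be illustrated with a textbook quadratic yield response whose coefficients are purely hypothetical.

        import numpy as np

        def yield_response(n_rate, a=2.0, b=0.05, c=-0.0001):
            """Quadratic yield response (t/ha) to a nitrogen rate (kg/ha)."""
            return a + b * n_rate + c * n_rate ** 2

        def profit(n_rate, crop_price=180.0, n_price=1.1):
            """Revenue from the embedded crop response minus input cost."""
            return crop_price * yield_response(n_rate) - n_price * n_rate

        rates = np.linspace(0.0, 300.0, 3001)
        best = rates[np.argmax(profit(rates))]
        print(f"profit-maximizing N rate: {best:.0f} kg/ha, yield {yield_response(best):.2f} t/ha")

    The intensive margin appears here as the choice of input rate; the extensive margin would add a discrete choice over crop shares.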

  2. Differential GPS/inertial navigation approach/landing flight test results

    Science.gov (United States)

    Snyder, Scott; Schipper, Brian; Vallot, Larry; Parker, Nigel; Spitzer, Cary

    1992-01-01

    Results of a joint Honeywell/NASA-Langley differential GPS/inertial flight test conducted in November 1990 are discussed, focusing on postflight data analysis. The test was aimed at acquiring a system performance database and demonstrating automatic landing based on an integrated differential GPS/INS with barometric and radar altimeters. Particular attention is given to the characteristics of DGPS/inertial error, the magnitude of the differential corrections, and vertical channel performance with and without altimeter augmentation. It is shown that DGPS/inertial integration with a radar altimeter is capable of providing precision approach and autoland guidance of manned return space vehicles within the Space Shuttle accuracy requirements.
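
    The differential correction at the heart of such a system is simple to state for a single satellite: the reference station, sitting at a surveyed position, knows its true geometric range and broadcasts the difference from its measured pseudorange. A sketch with invented numbers:

        def pseudorange_correction(measured_ref_m, true_range_ref_m):
            """Correction = surveyed geometric range minus measured pseudorange."""
            return true_range_ref_m - measured_ref_m

        ref_measured = 21_000_123.4    # m, reference receiver's raw pseudorange
        ref_true = 21_000_118.1        # m, from the surveyed antenna and the ephemeris
        rover_measured = 20_750_245.9  # m, rover's raw pseudorange, same satellite

        correction = pseudorange_correction(ref_measured, ref_true)
        print(f"correction {correction:+.1f} m, corrected rover range "
              f"{rover_measured + correction:,.1f} m")

    Because most error sources (satellite clock, ephemeris, ionosphere, troposphere) are common to nearby receivers, the correction cancels them at the rover; the inertial blending then smooths what remains.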

  3. Approximating model probabilities in Bayesian information criterion and decision-theoretic approaches to model selection in phylogenetics.

    Science.gov (United States)

    Evans, Jason; Sullivan, Jack

    2011-01-01

    A priori selection of models for use in phylogeny estimation from molecular sequence data is increasingly important as the number and complexity of available models increases. The Bayesian information criterion (BIC) and the derivative decision-theoretic (DT) approaches rely on a conservative approximation to estimate the posterior probability of a given model. Here, we extended the DT method by using reversible jump Markov chain Monte Carlo approaches to directly estimate model probabilities for an extended candidate pool of all 406 special cases of the general time reversible + Γ family. We analyzed 250 diverse data sets in order to evaluate the effectiveness of the BIC approximation for model selection under the BIC and DT approaches. Model choice under DT differed between the BIC approximation and direct estimation methods for 45% of the data sets (113/250), and differing model choice resulted in significantly different sets of trees in the posterior distributions for 26% of the data sets (64/250). The model with the lowest BIC score differed from the model with the highest posterior probability in 30% of the data sets (76/250). When the data indicate a clear model preference, the BIC approximation works well enough to result in the same model selection as with directly estimated model probabilities, but a substantial proportion of biological data sets lack this characteristic, which leads to selection of underparametrized models.
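
    For reference, the BIC of model M_i with maximized likelihood \hat{L}_i, k_i free parameters and sample size n, together with the standard approximation to posterior model probabilities it induces under uniform model priors, is:

        \mathrm{BIC}_i = -2 \ln \hat{L}_i + k_i \ln n,
        \qquad
        P(M_i \mid D) \approx \frac{\exp(-\mathrm{BIC}_i / 2)}{\sum_j \exp(-\mathrm{BIC}_j / 2)}

    It is the adequacy of this exponential-weight approximation that the study tests against directly estimated model probabilities.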

  4. Further Results on Dynamic Additive Hazard Rate Model

    Directory of Open Access Journals (Sweden)

    Zhengcheng Zhang

    2014-01-01

    In the past, the proportional and additive hazard rate models have been investigated in the literature. Nanda and Das (2011) introduced and studied the dynamic proportional (reversed) hazard rate model. In this paper we study the dynamic additive hazard rate model and investigate its aging properties for different aging classes. The closure of the model under some stochastic orders has also been investigated. Some examples are given to illustrate different aging properties and stochastic comparisons of the model.

  5. Ocean Data Assimilation in the Gulf of Mexico Using 3D VAR Approach - Preliminary Results

    Science.gov (United States)

    Paturi, S.; Garraffo, Z. D.; Cummings, J. A.; Rivin, I.; Mehra, A.; Kim, H. C.

    2016-12-01

    Approaches to ocean data assimilation vary widely, both in terms of the sophistication of the method and the observations assimilated. A three-dimensional variational (3DVAR) data assimilation system, part of the Navy Coupled Ocean Data Assimilation (NCODA) system developed at the Naval Research Laboratory (NRL), is used for assimilating Sea Surface Temperature (SST) and Sea Surface Height (SSH) in the Gulf of Mexico (GoM). The NCODA 3DVAR produces simultaneous analyses of temperature, salinity, and vector velocity and uses all possible sources of ocean data observations. The Hybrid Coordinate Ocean Model (HYCOM) is used for the simulations, at 1/25° grid resolution, for the July 2011 period. After successful implementation of NCODA 3DVAR in the GoM, the system will be extended to the global ocean with the intent of making it operational.
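
    3DVAR systems of this kind minimize the standard variational cost function, balancing departure from the model background x_b against misfit to the observations y:

        J(\mathbf{x}) = \tfrac{1}{2} (\mathbf{x} - \mathbf{x}_b)^{\mathsf{T}} \mathbf{B}^{-1} (\mathbf{x} - \mathbf{x}_b)
                      + \tfrac{1}{2} \left( H(\mathbf{x}) - \mathbf{y} \right)^{\mathsf{T}} \mathbf{R}^{-1} \left( H(\mathbf{x}) - \mathbf{y} \right)

    where B and R are the background- and observation-error covariances and H maps the model state to the observed quantities (here SST and SSH).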

  6. A Multiple Model Approach to Modeling Based on Fuzzy Support Vector Machines

    Institute of Scientific and Technical Information of China (English)

    冯瑞; 张艳珠; 宋春林; 邵惠鹤

    2003-01-01

    A new multiple-model (MM) approach was proposed for modeling complex industrial processes using Fuzzy Support Vector Machines (F-SVMs). When applied to a pH neutralization titration experiment, the F-SVMs MM approach not only provides satisfactory approximation and generalization properties, but also achieves performance superior to the USOCPN multiple-modeling method and to single-model approaches based on standard SVMs.

  7. The Complex Outgassing of Comets and the Resulting Coma, a Direct Simulation Monte-Carlo Approach

    Science.gov (United States)

    Fougere, Nicolas

    During its journey, when a comet gets within a few astronomical units of the Sun, solar heating liberates gases and dust from its icy nucleus, forming a rarefied cometary atmosphere, the so-called coma. This tenuous atmosphere can expand to distances of millions of kilometers, orders of magnitude larger than the nucleus size. Most practical cases of coma studies involve rarefied gas flows under non-LTE conditions where the hydrodynamic approach is not valid, so kinetic methods are required to properly study the physics of the coma. The Direct Simulation Monte-Carlo (DSMC) method is the method of choice for solving the Boltzmann equation, giving the opportunity to study the cometary atmosphere from the inner coma, where collisions dominate and thermodynamic equilibrium holds, to the outer coma, where densities are lower and free-flow conditions are verified. While previous studies of the coma used direct sublimation from the nucleus for spherically symmetric 1D models, or 2D models with a day/night asymmetry, recent observations of comets showed the existence of small local source areas such as jets, and of extended sources via sublimating icy grains, which must be included in cometary models for a realistic representation of the physics of the coma. In this work, we present, for the first time, 1D, 2D, and 3D models that can take into account the full effects of more complex gas sources with jets and/or icy grains. Moreover, an innovative full 3D description of the cometary coma using a kinetic method with a realistic nucleus and outgassing is demonstrated. While most of the physical models used in this study had already been developed, they are included in one self-consistent coma model for the first time. The inclusion of complex cometary outgassing processes represents the state of the art of cometary coma modeling. This provides invaluable information about the coma by

  8. Software sensors based on the grey-box modelling approach

    DEFF Research Database (Denmark)

    Carstensen, J.; Harremoës, P.; Strube, Rune

    1996-01-01

    In recent years the grey-box modelling approach has been applied to wastewater transportation and treatment. Grey-box models are characterized by the combination of deterministic and stochastic terms to form a model where all the parameters are statistically identifiable from the on-line measurements. With respect to the development of software sensors, the grey-box models possess two important features. Firstly, the on-line measurements can be filtered according to the grey-box model in order to remove noise deriving from the measuring equipment and controlling devices. Secondly, the grey-box models may contain terms which can be estimated on-line by use of the models and measurements. In this paper, it is demonstrated that many storage basins in sewer systems can be used as an on-line flow measurement provided that the basin is monitored on-line with a level transmitter and that a grey-box
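
    The basin-as-flowmeter idea rests on a one-line mass balance, Q_in = A dh/dt + Q_out. A deterministic sketch with synthetic data follows; the grey-box machinery for noise filtering and on-line parameter estimation is omitted, and the basin area and signals are invented.

        import numpy as np

        AREA = 250.0    # m^2, assumed constant basin cross-section (hypothetical)

        def inflow_from_levels(levels, outflow, dt):
            """Q_in = A * dh/dt + Q_out, from the volume balance dV/dt = Q_in - Q_out."""
            return AREA * np.gradient(levels, dt) + outflow

        t = np.arange(0.0, 600.0, 10.0)                  # 10-minute record, 10 s steps
        true_inflow = 0.8 + 0.4 * np.sin(t / 120.0)      # m^3/s, synthetic truth
        outflow = np.full_like(t, 0.9)                   # m^3/s, known pumped outflow
        levels = 2.0 + np.cumsum((true_inflow - outflow) * 10.0) / AREA
        levels += np.random.default_rng(3).normal(0.0, 0.002, t.size)  # sensor noise

        estimate = inflow_from_levels(levels, outflow, dt=10.0)
        print(f"mean inflow estimate: {estimate.mean():.2f} m^3/s "
              f"(truth: {true_inflow.mean():.2f})")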

  9. Environmental Radiation Effects on Mammals A Dynamical Modeling Approach

    CERN Document Server

    Smirnova, Olga A

    2010-01-01

    This text is devoted to the theoretical studies of radiation effects on mammals. It uses the framework of developed deterministic mathematical models to investigate the effects of both acute and chronic irradiation in a wide range of doses and dose rates on vital body systems including hematopoiesis, small intestine and humoral immunity, as well as on the development of autoimmune diseases. Thus, these models can contribute to the development of the system and quantitative approaches in radiation biology and ecology. This text is also of practical use. Its modeling studies of the dynamics of granulocytopoiesis and thrombocytopoiesis in humans testify to the efficiency of employment of the developed models in the investigation and prediction of radiation effects on these hematopoietic lines. These models, as well as the properly identified models of other vital body systems, could provide a better understanding of the radiation risks to health. The modeling predictions will enable the implementation of more ef...

  10. Preliminary Results of the first European Source Apportionment intercomparison for Receptor and Chemical Transport Models

    Science.gov (United States)

    Belis, Claudio A.; Pernigotti, Denise; Pirovano, Guido

    2017-04-01

    Source Apportionment (SA) is the identification of ambient air pollution sources and the quantification of their contribution to pollution levels. This task can be accomplished using different approaches: chemical transport models and receptor models. Receptor models are derived from measurements and are therefore considered a reference for primary-source contributions at urban background levels. Chemical transport models give better estimates of secondary (inorganic) pollutants and are capable of providing gridded results with high time resolution. Assessing the performance of SA model results is essential to guarantee reliable information on source contributions, to be used for reporting to the Commission and in the development of pollution abatement strategies. This is the first intercomparison ever designed to test both receptor-oriented models (receptor models) and chemical transport models (source-oriented models) using a comprehensive method based on model quality indicators and pre-established criteria. The target pollutant of this exercise, organised in the frame of FAIRMODE WG 3, is PM10. Both receptor models and chemical transport models perform well when evaluated against their respective references. Both types of models demonstrate quite satisfactory capabilities in estimating yearly source contributions, while the estimation of source contributions at the daily level (time series) is more critical. Chemical transport models showed a tendency to underestimate the contribution of some single sources when compared to receptor models. For receptor models the most critical source category is industry, probably due to the variety of single sources with different characteristics that belong to this category. Dust is the most problematic source for chemical transport models, likely due to the poor information about this kind of source in the emission inventories, particularly concerning road dust re-suspension, and consequently the

  11. The standard data model approach to patient record transfer.

    Science.gov (United States)

    Canfield, K; Silva, M; Petrucci, K

    1994-01-01

    This paper develops an approach to electronic data exchange of patient records from Ambulatory Encounter Systems (AESs). This approach assumes that the AES is based upon a standard data model. The data modeling standard used here is IDEF1X for Entity/Relationship (E/R) modeling. Each site that uses a relational database implementation of this standard data model (or a subset of it) can exchange very detailed patient data with other such sites using industry-standard tools and without excessive programming effort. This design is detailed below for a demonstration project between the research-oriented geriatric clinic at the Baltimore Veterans Affairs Medical Center (BVAMC) and the Laboratory for Healthcare Informatics (LHI) at the University of Maryland.

  12. Model selection and inference a practical information-theoretic approach

    CERN Document Server

    Burnham, Kenneth P

    1998-01-01

    This book is unique in that it covers the philosophy of model-based data analysis and an omnibus strategy for the analysis of empirical data. The book introduces information-theoretic approaches and focuses critical attention on a priori modeling and the selection of a good approximating model that best represents the inference supported by the data. Kullback-Leibler information represents a fundamental quantity in science and is Hirotugu Akaike's basis for model selection. The maximized log-likelihood function can be bias-corrected to provide an estimate of expected, relative Kullback-Leibler information. This leads to Akaike's Information Criterion (AIC) and various extensions, which are relatively simple and easy to use in practice, but little taught in statistics classes and far less understood in the applied sciences than should be the case. The information-theoretic approaches provide a unified and rigorous theory, an extension of likelihood theory, an important application of information theory, and are ...

  13. Model Convolution: A Computational Approach to Digital Image Interpretation

    Science.gov (United States)

    Gardner, Melissa K.; Sprague, Brian L.; Pearson, Chad G.; Cosgrove, Benjamin D.; Bicek, Andrew D.; Bloom, Kerry; Salmon, E. D.

    2010-01-01

    Digital fluorescence microscopy is commonly used to track individual proteins and their dynamics in living cells. However, extracting molecule-specific information from fluorescence images is often limited by the noise and blur intrinsic to the cell and the imaging system. Here we discuss a method called “model-convolution,” which uses experimentally measured noise and blur to simulate the process of imaging fluorescent proteins whose spatial distribution cannot be resolved. We then compare model-convolution to the more standard approach of experimental deconvolution. In some circumstances, standard experimental deconvolution approaches fail to yield the correct underlying fluorophore distribution. In these situations, model-convolution removes the uncertainty associated with deconvolution and therefore allows direct statistical comparison of experimental and theoretical data. Thus, if there are structural constraints on molecular organization, the model-convolution method better utilizes information gathered via fluorescence microscopy, and naturally integrates experiment and theory. PMID:20461132
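
    A minimal model-convolution loop, with a Gaussian standing in for the experimentally measured point-spread function: render the hypothesized fluorophore positions, blur, add background, and apply shot noise, then compare the synthetic image to the data statistically. All parameter values are illustrative.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(7)

        def simulate_image(positions, shape=(64, 64), psf_sigma=2.0,
                           background=100.0, photons=500.0):
            """Render point emitters, blur with the PSF, add background and shot noise."""
            model = np.zeros(shape)
            for r, c in positions:
                model[r, c] += photons
            blurred = gaussian_filter(model, psf_sigma) + background
            return rng.poisson(blurred)

        synthetic = simulate_image([(32, 20), (32, 26)])  # two spots 6 px apart
        print("row profile through the spots:", synthetic[32, 14:33])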

  14. An Approach to Computer Modeling of Geological Faults in 3D and an Application

    Institute of Scientific and Technical Information of China (English)

    ZHU Liang-feng; HE Zheng; PAN Xin; WU Xin-cai

    2006-01-01

    3D geological modeling, one of the most important applications of 3D GIS in the geosciences, forms the basis and is a prerequisite for visualized representation and analysis of 3D geological data. Computer modeling of geological faults in 3D is currently a topical research area. Structural modeling techniques for complex geological entities containing reverse faults are discussed and a series of approaches proposed. The geological concepts involved in computer modeling and visualization of geological faults in 3D are explained, the types of geological fault data derived from geological exploration are analyzed, and a normative database format for geological faults is designed. Two kinds of modeling approaches for faults are compared: a fault modeling technique based on stratum recovery and one based on interpolation in subareas. A novel approach, called the Unified Modeling Technique for stratum and fault, is presented to solve the puzzling problems of reverse faults, syn-sedimentary faults and faults terminated within geological models. A case study of a fault model of bedrock in the Beijing Olympic Green District is presented to show the practical result of this method. The principle and the process of computer modeling of geological faults in 3D are discussed and a series of applied technical proposals established, deepening our comprehension of geological phenomena and of the modeling approach, and establishing basic techniques of 3D geological modeling for practical applications in the geosciences.

  15. Schwinger boson approach to the fully screened Kondo model.

    Science.gov (United States)

    Rech, J; Coleman, P; Zarand, G; Parcollet, O

    2006-01-13

    We apply the Schwinger boson scheme to the fully screened Kondo model and generalize the method to include antiferromagnetic interactions between ions. Our approach captures the Kondo crossover from local moment behavior to a Fermi liquid with a nontrivial Wilson ratio. When applied to the two-impurity model, the mean-field theory describes the "Varma-Jones" quantum phase transition between a valence bond state and a heavy Fermi liquid.

  16. Kallen Lehman approach to 3D Ising model

    Science.gov (United States)

    Canfora, F.

    2007-03-01

    A “Kallen-Lehman” approach to the Ising model, inspired by quantum field theory à la Regge, is proposed. The analogy with the Kallen-Lehman representation leads to a formula for the free energy of the 3D model with few free parameters which can be matched with the numerical data. The possible application of this scheme to the spin glass case is briefly discussed.

  17. Modelling approaches in sedimentology: Introduction to the thematic issue

    Science.gov (United States)

    Joseph, Philippe; Teles, Vanessa; Weill, Pierre

    2016-09-01

    As an introduction to this thematic issue on "Modelling approaches in sedimentology", this paper gives an overview of the workshop held in Paris on 7 November 2013 during the 14th Congress of the French Association of Sedimentologists. A synthesis of the workshop in terms of concepts, spatial and temporal scales, constraining data, and scientific challenges is first presented; the possibility of coupling different models, industrial needs, and potential new domains of research are then discussed.

  18. Modeling Electronic Circular Dichroism within the Polarizable Embedding Approach

    DEFF Research Database (Denmark)

    Nørby, Morten S; Olsen, Jógvan Magnus Haugaard; Steinmann, Casper

    2017-01-01

    We present a systematic investigation of the key components needed to model single-chromophore electronic circular dichroism (ECD) within the polarizable embedding (PE) approach. By relying on accurate forms of the embedding potential, where especially the inclusion of local field effects ... sampling. We show that a significant number of snapshots are needed to avoid artifacts in the calculated electronic circular dichroism parameters due to insufficient configurational sampling, thus highlighting the efficiency of the PE model.

  19. Computational Models of Spreadsheet Development: Basis for Educational Approaches

    CERN Document Server

    Hodnigg, Karin; Mittermeir, Roland T

    2008-01-01

    Among the multiple causes of high error rates in spreadsheets, lack of proper training and of deep understanding of the computational model upon which spreadsheet computations rest might not be the least issue. The paper addresses this problem by presenting a didactical model focussing on cell interaction, thus exceeding the atomicity of cell computations. The approach is motivated by an investigation of how different spreadsheet systems handle certain computational issues implied by moving cells, copy-paste operations, or recursion.

  20. Urban Modelling with Typological Approach. Case Study: Merida, Yucatan, Mexico

    Science.gov (United States)

    Rodriguez, A.

    2017-08-01

    In three-dimensional models of urban historical reconstruction, lost contextual architecture poses difficulties because, unlike the most important monuments, it is sparsely documented in written references. This is the case for Merida, Yucatan, Mexico during the Colonial Era (1542-1810), which has lost much of its heritage. An alternative that offers a hypothetical view of these elements is a typological-parametric definition that enables a 3D modeling approach based on the most common features of this heritage evidence.