Hybrid computer modelling in plasma physics
International Nuclear Information System (INIS)
Hromadka, J; Ibehej, T; Hrach, R
2016-01-01
Our contribution is devoted to the development of hybrid modelling techniques. We investigate sheath structures in the vicinity of solids immersed in low-temperature argon plasma at different pressures by means of particle and fluid computer models. We discuss the differences between the results obtained by these methods and propose a way to improve the results of fluid models in the low-pressure regime. The Chapman-Enskog method can be employed to find appropriate closure relations for the fluid equations in cases where the particle distribution function is not Maxwellian. We follow this approach to enhance the fluid model and subsequently use it in a hybrid plasma model. (paper)
Modelling of data uncertainties on hybrid computers
Energy Technology Data Exchange (ETDEWEB)
Schneider, Anke (ed.)
2016-06-15
The codes d³f and r³t are well established for modelling density-driven flow and nuclide transport in the far field of repositories for hazardous material in deep geological formations. They are applicable in porous media as well as in fractured rock or mudstone, and for modelling salt and heat transport as well as a free groundwater surface. Development of the basic framework of d³f and r³t began more than 20 years ago. Since that time, significant advancements have taken place both in the requirements for safety assessment and in computer hardware. The period of safety assessment for a repository of high-level radioactive waste was extended to 1 million years, and the complexity of the models is steadily growing. Concurrently, the demands on accuracy increase. Additionally, model and parameter uncertainties become more and more important for an increased understanding of prediction reliability. All this leads to a growing demand for computational power that requires a considerable software speed-up. An effective way to achieve this is the use of modern, hybrid computer architectures, which basically requires the set-up of new data structures and a corresponding code revision but offers a potential speed-up of several orders of magnitude. The original codes d³f and r³t were applications of the software platform UG /BAS 94/, whose development began in the early nineteen-nineties. However, UG has recently been advanced to the C++ based, substantially revised version UG4 /VOG 13/. To benefit in the future from state-of-the-art numerical algorithms and to use hybrid computer architectures, the codes d³f and r³t were transferred to this new code platform. Making use of the fact that coupling between different sets of equations is natively supported in UG4, d³f and r³t were combined into one conjoint code d³f++. A direct estimation of uncertainties for complex groundwater flow models with the
Security in hybrid cloud computing
Koudelka, Ondřej
2016-01-01
This bachelor thesis deals with the area of hybrid cloud computing, specifically with its security. The major aim of the thesis is to analyze and compare the chosen hybrid cloud providers. As a minor aim, the thesis compares the security challenges of hybrid cloud as opposed to other deployment models. In order to accomplish these aims, the thesis defines the terms cloud computing and hybrid cloud computing in its theoretical part. Furthermore the security challenges for cloud computing a...
A computational model for lower hybrid current drive
International Nuclear Information System (INIS)
Englade, R.C.; Bonoli, P.T.; Porkolab, M.
1983-01-01
A detailed simulation model for lower hybrid (LH) current drive in toroidal devices is discussed. This model accounts reasonably well for the magnitude of radio frequency (RF) current observed in the PLT and Alcator C devices. It also reproduces the experimental dependencies of RF current generation on toroidal magnetic field and has provided insights about mechanisms which may underlie the observed density limit of current drive. (author)
International Nuclear Information System (INIS)
Sehrawat, Arun; Englert, Berthold-Georg; Zemann, Daniel
2011-01-01
We present a hybrid model of the unitary-evolution-based quantum computation model and the measurement-based quantum computation model. In the hybrid model, part of a quantum circuit is simulated by unitary evolution and the rest by measurements on star graph states, thereby combining the advantages of the two standard quantum computation models. In the hybrid model, a complicated unitary gate under simulation is decomposed into a sequence of single-qubit operations, controlled-z gates, and multiqubit rotations around the z axis. Each single-qubit operation and each controlled-z gate is realized by a corresponding unitary evolution, and every multiqubit rotation is executed by a single measurement on a required star graph state. The classical information processing in our model requires only an information flow vector and propagation matrices. We provide the implementation of multicontrol gates in the hybrid model. They are very useful for implementing Grover's search algorithm, which is studied as an illustrative example.
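To make the relationship between these gate families concrete, here is a small numpy check (an illustration of the general identity, not code from the paper): a controlled-z gate can itself be written, up to a global phase, as two single-qubit z rotations combined with one two-qubit rotation about the Z⊗Z axis. All four operators are diagonal, so we work with diagonals only.

```python
import numpy as np

# Eigenvalue diagonals of Pauli-Z acting on each qubit of a 2-qubit register.
Z1 = np.kron([1, -1], [1, 1]).astype(float)  # Z (x) I
Z2 = np.kron([1, 1], [1, -1]).astype(float)  # I (x) Z

def rz(theta, axis_diag):
    """Diagonal of exp(-i*theta/2 * A) for a diagonal operator A."""
    return np.exp(-1j * theta / 2 * axis_diag)

# controlled-z = global phase * Rz(pi/2) on qubit 1 * Rz(pi/2) on qubit 2
#                * a two-qubit rotation by -pi/2 about the Z(x)Z axis
cz = (np.exp(1j * np.pi / 4)
      * rz(np.pi / 2, Z1)
      * rz(np.pi / 2, Z2)
      * rz(-np.pi / 2, Z1 * Z2))

# diag(CZ) = (1, 1, 1, -1): only |11> picks up a sign flip
assert np.allclose(cz, [1, 1, 1, -1])
```

The identity works because the accumulated phase is (π/4)(1 − z₁ − z₂ + z₁z₂), which vanishes except on the |11⟩ eigenspace.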
Hybrid Computational Model for High-Altitude Aeroassist Vehicles, Phase I
National Aeronautics and Space Administration — A hybrid continuum/noncontinuum computational model will be developed for analyzing the aerodynamics and heating on aeroassist vehicles. Unique features of this...
Hyndman, D E
2013-01-01
Analog and Hybrid Computing focuses on the operations of analog and hybrid computers. The book first outlines the history of computing devices that influenced the creation of analog and digital computers. The types of problems to be solved on computers, computing systems, and digital computers are discussed. The text looks at the theory and operation of electronic analog computers, including linear and non-linear computing units and the use of analog computers as operational amplifiers. The monograph examines the preparation of problems to be solved on computers. Flow diagrams, methods of ampl
Solving Problems in Various Domains by Hybrid Models of High Performance Computations
Directory of Open Access Journals (Sweden)
Yurii Rogozhin
2014-03-01
Full Text Available This work presents a hybrid model of high performance computations. The model is based on membrane system (P~system where some membranes may contain quantum device that is triggered by the data entering the membrane. This model is supposed to take advantages of both biomolecular and quantum paradigms and to overcome some of their inherent limitations. The proposed approach is demonstrated through two selected problems: SAT, and image retrieving.
Modelling the WWER-type reactor dynamics using a hybrid computer. Part 1
International Nuclear Information System (INIS)
Karpeta, C.
Results of simulation studies into reactor and steam generator dynamics of a WWER type power plant are presented. Spatial kinetics of the reactor core is described by a nodal approximation to diffusion equations, xenon poisoning equations and heat transfer equations. The simulation of the reactor model dynamics was performed on a hybrid computer. Models of both a horizontal and a vertical steam generator were developed. The dynamics was investigated over a large range of power by computing the transients on a digital computer. (author)
A Hybrid Verifiable and Delegated Cryptographic Model in Cloud Computing
Directory of Open Access Journals (Sweden)
Jaber Ibrahim Naser
2018-02-01
Access control is very important in cloud data sharing. Especially in domains like healthcare, it is essential to have access control mechanisms in place for confidentiality and secure data access. Attribute-based encryption has been used for many years to secure data and provide controlled access. In this paper, we propose a framework that supports a circuit- and attribute-based encryption mechanism involving multiple parties: the data owner, the data user, the cloud server and the attribute authority. An important feature of the proposed system is the verifiable delegation of the decryption process to the cloud server. The data owner encrypts data and delegates the decryption process to the cloud. The cloud server performs partial decryption, and the final decrypted data are shared with users according to their privileges. The data owner thus reduces computational complexity by delegating the decryption process to the cloud server. We built a prototype application using the Microsoft .NET platform as a proof of concept. The empirical results revealed that there is controlled access with multiple user roles and access control rights for secure and confidential data access in cloud computing.
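The core idea of delegated partial decryption can be sketched with a toy split-key ElGamal scheme: the decryption exponent is shared between the cloud and the user, so the cloud can transform the ciphertext without ever recovering the plaintext. This is only an illustration of the delegation pattern, not the paper's attribute-based construction, and the parameters are not secure.

```python
import random

# Toy split-key ElGamal over Z_p*. The secret exponent x is split into
# x_cloud + x_user (mod p-1): the cloud applies its share first (partial
# decryption), the user finishes. Illustrative only -- NOT the paper's
# circuit/attribute-based scheme and NOT cryptographically secure.
p = 2**127 - 1          # a Mersenne prime; toy-sized parameters
g = 3

x = random.randrange(2, p - 1)          # full secret key
h = pow(g, x, p)                        # public key
x_cloud = random.randrange(2, p - 1)    # share delegated to the cloud
x_user = (x - x_cloud) % (p - 1)        # share kept by the user

def encrypt(m):
    k = random.randrange(2, p - 1)
    return pow(g, k, p), (m * pow(h, k, p)) % p

def cloud_partial(c1):
    # The cloud raises c1 to its share; the plaintext stays hidden.
    return pow(c1, x_cloud, p)

def user_finish(c1, c2, d_cloud):
    d = (d_cloud * pow(c1, x_user, p)) % p   # completes c1^x
    return (c2 * pow(d, -1, p)) % p

m = 424242
c1, c2 = encrypt(m)
assert user_finish(c1, c2, cloud_partial(c1)) == m
```

The user's final step is cheap (one modular exponentiation with its own share), which mirrors the computational offloading the abstract describes.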
DEFF Research Database (Denmark)
Mishnaevsky, Leon; Dai, Gaoming
2014-01-01
by using computational micromechanical models. It is shown that while glass/carbon fibers hybrid composites clearly demonstrate higher stiffness and lower weight with increasing the carbon content, they can have lower strength as compared with usual glass fiber polymer composites. Secondary...... nanoreinforcement can drastically increase the fatigue lifetime of composites. Especially, composites with the nanoplatelets localized in the fiber/matrix interface layer (fiber sizing) ensure much higher fatigue lifetime than those with the nanoplatelets in the matrix....
A hybrid model for the computationally-efficient simulation of the cerebellar granular layer
Directory of Open Access Journals (Sweden)
Anna eCattani
2016-04-01
The aim of the present paper is to efficiently describe the membrane potential dynamics of neural populations formed by species having a high density difference in specific brain areas. We propose a hybrid model whose main ingredients are a conductance-based model (ODE system) and its continuous counterpart (PDE system) obtained through a limit process in which the number of neurons confined in a bounded region of the brain tissue is sent to infinity. Specifically, in the discrete model, each cell is described by a set of time-dependent variables, whereas in the continuum model, cells are grouped into populations that are described by a set of continuous variables. Communications between populations, which translate into interactions among the discrete and the continuous models, are the essence of the hybrid model we present here. The cerebellum and cerebellum-like structures show in their granular layer a large difference in the relative density of neuronal species, making them a natural testing ground for our hybrid model. By reconstructing the ensemble activity of the cerebellar granular layer network and by comparing our results to a more realistic computational network, we demonstrate that our description of the network activity, even though it is not biophysically detailed, is still capable of reproducing salient features of neural network dynamics. Our modeling approach yields a significant computational cost reduction by increasing the simulation speed at least 270 times. The hybrid model reproduces interesting dynamics such as local microcircuit synchronization, traveling waves, center-surround and time-windowing.
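The discrete/continuum coupling pattern described above can be sketched in a few lines: a handful of cells integrated individually (one ODE per cell) exchange activity with a population summarized by a single mean-field rate variable. The model and parameters below are made up for illustration and are far simpler than the cerebellar model in the paper.

```python
import numpy as np

# Hybrid discrete/continuum sketch: n_cells explicit units coupled to one
# mean-field population rate r. Both directions of coupling are present:
# r drives each cell, and the cells' mean activity drives r.
def simulate(n_cells=5, dt=0.1, steps=10000, tau_v=10.0, tau_r=20.0,
             drive=1.0, w_pop_to_cell=0.5, w_cell_to_pop=0.8, seed=0):
    rng = np.random.default_rng(seed)
    v = rng.uniform(0, 0.1, n_cells)   # discrete per-cell state
    r = 0.0                            # continuum population rate
    for _ in range(steps):             # forward-Euler integration
        v += dt / tau_v * (-v + drive + w_pop_to_cell * r)
        r += dt / tau_r * (-r + w_cell_to_pop * v.mean())
    return v, r

v, r = simulate()
# Linear system, so the unique fixed point is computable by hand:
# v* = drive + 0.5 r*, r* = 0.8 v*  =>  v* = 1/0.6, r* = 0.8/0.6
assert np.allclose(v, 1 / 0.6, atol=1e-3)
assert abs(r - 0.8 / 0.6) < 1e-3
```

The hybrid saving is visible even here: the population contributes one state variable regardless of how many neurons it stands in for.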
Applications integration in a hybrid cloud computing environment: modelling and platform
Li, Qing; Wang, Ze-yuan; Li, Wei-hua; Li, Jun; Wang, Cheng; Du, Rui-yang
2013-08-01
With the development of application service providers and cloud computing, more and more small- and medium-sized business enterprises use software services and even infrastructure services provided by professional information service companies to replace all or part of their information systems (ISs). These information service companies provide applications, such as data storage, computing processes, document sharing and even management information system services, as public resources to support the business process management of their customers. However, no cloud computing service vendor can satisfy the full functional IS requirements of an enterprise. As a result, enterprises often have to simultaneously use systems distributed in different clouds and their intra-enterprise ISs. Thus, this article presents a framework to integrate applications deployed in public clouds and intra-enterprise ISs. A run-time platform is developed, and a cross-computing-environment process modelling technique is also developed, to improve the feasibility of ISs under hybrid cloud computing environments.
Fahimi, Farzad; Yaseen, Zaher Mundher; El-shafie, Ahmed
2017-05-01
Since the middle of the twentieth century, artificial intelligence (AI) models have been used widely in engineering and science problems. Water resource variable modeling and prediction are among the most challenging issues in water engineering. The artificial neural network (ANN) is a common approach used to tackle this problem with viable and efficient models. Numerous ANN models have been successfully developed to achieve more accurate results. In the current review, different ANN models in water resource applications and hydrological variable predictions are reviewed and outlined. In addition, recent hybrid models and their structures, input preprocessing, and optimization techniques are discussed, and the results are compared with similar previous studies. Moreover, to achieve a comprehensive view of the literature, many articles that applied ANN models together with other techniques are included. Consequently, the coupling procedure, model evaluation, and performance comparison of hybrid models with conventional ANN models are assessed, as well as the taxonomy and structures of hybrid ANN models. Finally, current challenges and recommendations for future research are indicated, and new hybrid approaches are proposed.
Treatment of early and late reflections in a hybrid computer model for room acoustics
DEFF Research Database (Denmark)
Naylor, Graham
1992-01-01
The ODEON computer model for acoustics in large rooms is intended for use both in design (by predicting room acoustical indices quickly and easily) and in research (by forming the basis of an auralization system and allowing study of various room acoustical phenomena). These conflicting demands...... preclude the use of both ``pure'' image source and ``pure'' particle tracing methods. A hybrid model has been developed, in which rays discover potential image sources up to a specified order. Thereafter, the same ray tracing process is used in a different way to rapidly generate a dense reverberant decay...
Andalam, Sidharta; Ramanna, Harshavardhan; Malik, Avinash; Roop, Parthasarathi; Patel, Nitish; Trew, Mark L
2016-08-01
Virtual heart models have been proposed for closed loop validation of safety-critical embedded medical devices, such as pacemakers. These models must react in real-time to off-the-shelf medical devices. Real-time performance can be obtained by implementing models in computer hardware, and methods of compiling classes of Hybrid Automata (HA) onto FPGA have been developed. Models of ventricular cardiac cell electrophysiology have been described using HA which capture the complex nonlinear behavior of biological systems. However, many models that have been used for closed-loop validation of pacemakers are highly abstract and do not capture important characteristics of the dynamic rate response. We developed a new HA model of cardiac cells which captures dynamic behavior and we implemented the model in hardware. This potentially enables modeling the heart with over 1 million dynamic cells, making the approach ideal for closed loop testing of medical devices.
PWR hybrid computer model for assessing the safety implications of control systems
International Nuclear Information System (INIS)
Smith, O.L.; Renier, J.P.; Difilippo, F.C.; Clapp, N.E.; Sozer, A.; Booth, R.S.; Craddick, W.G.; Morris, D.G.
1986-03-01
The ORNL study of safety-related aspects of nuclear power plant control systems consists of two interrelated tasks: (1) failure mode and effects analysis (FMEA) that identified single and multiple component failures that might lead to significant plant upsets and (2) computer models that used these failures as initial conditions and traced the dynamic impact on the control system and remainder of the plant. This report describes the simulation of Oconee Unit 1, the first plant analyzed. A first-principles, best-estimate model was developed and implemented on a hybrid computer consisting of AD-4 analog and PDP-10 digital machines. Controls were placed primarily on the analog to use its interactive capability to simulate operator action. 48 refs., 138 figs., 15 tabs
DEFF Research Database (Denmark)
Dai, Gaoming; Mishnaevsky, Leon
2015-01-01
The potential of advanced carbon/glass hybrid reinforced composites with secondary carbon nanotube reinforcement for wind energy applications is investigated here with the use of computational experiments. Fatigue behavior of hybrid as well as glass and carbon fiber reinforced composites...... with the secondary CNT reinforcements (especially, aligned tubes) present superior fatigue performances than those without reinforcements, also under combined environmental and cyclic mechanical loading. This effect is stronger for carbon composites, than for hybrid and glass composites....
PWR hybrid computer model for assessing the safety implications of control systems
International Nuclear Information System (INIS)
Smith, O.L.; Booth, R.S.; Clapp, N.E.; DiFilippo, F.C.; Renier, J.P.; Sozer, A.
1985-01-01
The ORNL study of safety-related aspects of control systems consists of two interrelated tasks, (1) a failure mode and effects analysis that, in part, identifies single and multiple component failures that may lead to significant plant upsets, and (2) a hybrid computer model that uses these failures as initial conditions and traces the dynamic impact on the control system and remainder of the plant. The second task is reported here. The initial step in model development was to define a suitable interface between the FMEA and computer simulation tasks. This involved identifying primary plant components that must be simulated in dynamic detail and secondary components that can be treated adequately by the FMEA alone. The FMEA in general explores broader spectra of initiating events that may collapse into a reduced number of computer runs. A portion of the FMEA includes consideration of power supply failures. Consequences of the transients may feedback on the initiating causes, and there may be an interactive relationship between the FMEA and the computer simulation. Since the thrust of this program is to investigate control system behavior, the controls are modeled in detail to accurately reproduce characteristic response under normal and off-normal transients. The balance of the model, including neutronics, thermohydraulics and component submodels, is developed in sufficient detail to provide a suitable support for the control system
Computational lymphatic node models in pediatric and adult hybrid phantoms for radiation dosimetry
International Nuclear Information System (INIS)
Lee, Choonsik; Lamart, Stephanie; Moroz, Brian E
2013-01-01
We developed models of lymphatic nodes for six pediatric and two adult hybrid computational phantoms to calculate lymphatic node dose estimates from external and internal radiation exposures. We derived the number of lymphatic nodes from the recommendations in International Commission on Radiological Protection (ICRP) Publications 23 and 89 at 16 cluster locations for the lymphatic nodes: extrathoracic, cervical, thoracic (upper and lower), breast (left and right), mesentery (left and right), axillary (left and right), cubital (left and right), inguinal (left and right) and popliteal (left and right), for different ages (newborn, 1-, 5-, 10-, 15-year-old and adult). We modeled each lymphatic node within the voxel format of the hybrid phantoms by assuming that all nodes have an identical size derived from published data, except at narrow cluster sites. The lymph nodes were generated by the following algorithm: (1) selection of the lymph node site among the 16 cluster sites; (2) random sampling of the location of the lymph node within a spherical space centered at the chosen cluster site; (3) creation of the sphere or ovoid of tissue representing the node based on lymphatic node characteristics defined in ICRP Publications 23 and 89. We created lymph nodes until the pre-defined number of lymphatic nodes at the selected cluster site was reached. This algorithm was applied to pediatric (newborn, 1-, 5-, 10- and 15-year-old male) and adult male and female ICRP-compliant hybrid phantoms after voxelization. To assess the performance of our models for internal dosimetry, we calculated dose conversion coefficients, called S values, for selected organs and tissues with Iodine-131 distributed in six lymphatic node cluster sites using MCNPX2.6, a well-validated Monte Carlo radiation transport code. Our analysis of the calculations indicates that the S values were significantly affected by the location of the lymph node clusters and that the values increased for
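The three-step placement algorithm in the abstract can be sketched directly. The cluster coordinates, radii and per-site quotas below are made-up placeholders (the real values come from ICRP Publications 23 and 89); only the control flow mirrors the description.

```python
import math
import random

# Step data: hypothetical cluster sites (centers in cm, sampling radius,
# and node quota per site). NOT real ICRP values -- placeholders only.
CLUSTERS = {
    "cervical":   {"center": (0.0, 0.0, 60.0),   "radius": 4.0, "count": 30},
    "axillary_l": {"center": (-15.0, 0.0, 45.0), "radius": 5.0, "count": 20},
}
NODE_RADIUS = 0.4  # identical size for every node, per the modelling assumption

def sample_in_sphere(center, radius, rng):
    """Uniform random point inside a sphere (rejection sampling)."""
    while True:
        v = [rng.uniform(-radius, radius) for _ in range(3)]
        if math.dist(v, (0.0, 0.0, 0.0)) <= radius:
            return tuple(c + d for c, d in zip(center, v))

def place_nodes(rng=None):
    rng = rng or random.Random(42)
    nodes = []
    for site, spec in CLUSTERS.items():                 # (1) pick cluster site
        for _ in range(spec["count"]):                  # until quota reached
            center = sample_in_sphere(spec["center"],
                                      spec["radius"], rng)  # (2) sample location
            nodes.append({"site": site, "center": center,
                          "radius": NODE_RADIUS})       # (3) create the node
    return nodes

nodes = place_nodes()
assert len(nodes) == 50
```

In the actual phantom workflow each generated sphere would then be rasterized into the voxel grid; that step is omitted here.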
A hybrid computational model to explore the topological characteristics of epithelial tissues.
González-Valverde, Ismael; García-Aznar, José Manuel
2017-11-01
Epithelial tissues show a particular topology where cells resemble a polygon-like shape, but some biological processes can alter this tissue topology. During cell proliferation, mitotic cell dilation deforms the tissue and modifies the tissue topology. Additionally, cells are reorganized in the epithelial layer and these rearrangements also alter the polygon distribution. We present here a computer-based hybrid framework focused on the simulation of epithelial layer dynamics that combines discrete and continuum numerical models. In this framework, we consider topological and mechanical aspects of the epithelial tissue. Individual cells in the tissue are simulated by an off-lattice agent-based model, which keeps the information of each cell. In addition, we model the cell-cell interaction forces and the cell cycle. In parallel, we simulate the passive mechanical behaviour of the cell monolayer using a material that approximates the mechanical properties of the cell. This continuum approach is solved by the finite element method, which uses a dynamic mesh generated by the triangulation of cell polygons. Forces generated by cell-cell interaction in the agent-based model are also applied on the finite element mesh. Cell movement in the agent-based model is driven by the displacements obtained from the deformed finite element mesh of the continuum mechanical approach. We successfully compare the results of our simulations with experiments on the topology of proliferating epithelial tissues in Drosophila. Our framework is able to model the emergent behaviour of the cell monolayer that is due to local cell-cell interactions, which have a direct influence on the dynamics of the epithelial tissue. Copyright © 2017 John Wiley & Sons, Ltd.
Mora Cordova, Angel
2018-01-30
One strategy to ensure that nanofiller networks in a polymer composite percolate at low volume fractions is to promote segregation. In a segregated structure, the concentration of nanofillers is kept low in some regions of the sample. In turn, the concentration in the remaining regions is much higher than the average concentration of the sample. This selective placement of the nanofillers ensures percolation at low average concentration. One original strategy to promote segregation is by tuning the shape of the nanofillers. We use a computational approach to study the conductive networks formed by hybrid particles obtained by growing carbon nanotubes (CNTs) on graphene nanoplatelets (GNPs). The objective of this study is (1) to show that the higher electrical conductivity of these composites is due to the hybrid particles forming a segregated structure and (2) to understand which parameters defining the hybrid particles determine the efficiency of the segregation. We construct a microstructure to observe the conducting paths and determine whether a segregated structure has indeed been formed inside the composite. A measure of efficiency is presented based on the fraction of nanofillers that contribute to the conductive network. Then, the efficiency of the hybrid-particle networks is compared to those of three other networks of carbon-based nanofillers in which no hybrid particles are used: only CNTs, only GNPs, and a mix of CNTs and GNPs. Finally, some parameters of the hybrid particle are studied: the CNT density on the GNPs, and the CNT and GNP geometries. We also present recommendations for the further improvement of a composite's conductivity based on these parameters.
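The efficiency measure described in the abstract (the fraction of nanofillers that actually contribute to the conductive network) can be sketched with a simplified 2-D stand-in: scatter point-like fillers, connect those closer than a contact distance, and count the fraction belonging to a cluster that spans the sample. The geometry and parameters are illustrative, not the paper's 3-D microstructure.

```python
import numpy as np

def percolation_efficiency(n=300, box=10.0, contact=0.8, seed=1):
    """Fraction of fillers in a left-to-right spanning cluster (toy 2-D model)."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(0, box, (n, 2))       # random filler positions

    # Union-find over all filler pairs within contact distance.
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    for i, j in zip(*np.nonzero(d < contact)):
        if i < j:
            parent[find(i)] = find(j)

    # A cluster "spans" if it reaches both the left and right box edges.
    roots = np.array([find(i) for i in range(n)])
    spanning = set()
    for r in set(roots.tolist()):
        xs = pts[roots == r, 0]
        if xs.min() < contact and xs.max() > box - contact:
            spanning.add(r)
    return np.isin(roots, list(spanning)).mean()

eff = percolation_efficiency()
assert 0.0 <= eff <= 1.0   # fillers outside spanning clusters are "wasted"
```

A segregated microstructure raises this fraction at fixed average concentration, which is exactly the effect the study attributes to the hybrid CNT/GNP particles.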
Hybrid computational phantoms of the male and female newborn patient: NURBS-based whole-body models
International Nuclear Information System (INIS)
Lee, Choonsik; Lodwick, Daniel; Hasenauer, Deanna; Williams, Jonathan L; Lee, Choonik; Bolch, Wesley E
2007-01-01
Anthropomorphic computational phantoms are computer models of the human body for use in the evaluation of dose distributions resulting from either internal or external radiation sources. Currently, two classes of computational phantoms have been developed and widely utilized for organ dose assessment: (1) stylized phantoms and (2) voxel phantoms, which describe the human anatomy via mathematical surface equations or 3D voxel matrices, respectively. Although stylized phantoms based on mathematical equations can be very flexible in regard to making changes in organ position and geometrical shape, they are limited in their ability to fully capture the anatomic complexities of human internal anatomy. In turn, voxel phantoms have been developed through image-based segmentation and correspondingly provide much better anatomical realism in comparison to simpler stylized phantoms. However, they themselves are limited in defining organs presented in low contrast within either magnetic resonance or computed tomography images, the two major sources in voxel phantom construction. By definition, voxel phantoms are typically constructed via segmentation of transaxial images, and thus while fine anatomic features are seen in this viewing plane, slice-to-slice discontinuities become apparent in viewing the anatomy of voxel phantoms in the sagittal or coronal planes. This study introduces the concept of a hybrid computational newborn phantom that takes full advantage of the best features of both its stylized and voxel counterparts: flexibility in phantom alterations and anatomic realism. Non-uniform rational B-spline (NURBS) surfaces, a mathematical modeling tool traditionally applied to graphical animation studies, were adopted to replace the limited mathematical surface equations of stylized phantoms. A previously developed whole-body voxel phantom of the newborn female was utilized as a realistic anatomical framework for hybrid phantom construction. The construction of a hybrid
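A tiny demonstration of why rational (NURBS-type) surfaces are more expressive than the simple surface equations of stylized phantoms: with the right weights, a rational quadratic Bézier segment represents a circular arc exactly, which no polynomial (non-rational) Bézier can. The control points and weights below are the standard quarter-circle choice; this is a generic curve demo, not the phantom-construction code.

```python
import numpy as np

P = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])  # control points
w = np.array([1.0, 1.0 / np.sqrt(2.0), 1.0])        # rational weights

def rational_bezier(t):
    """Evaluate the weighted (rational) quadratic Bezier curve at values t."""
    t = np.asarray(t, dtype=float)
    basis = np.stack([(1 - t) ** 2, 2 * (1 - t) * t, t ** 2], axis=-1)
    wb = basis * w                                   # weighted Bernstein basis
    return (wb @ P) / wb.sum(axis=-1, keepdims=True)

pts = rational_bezier(np.linspace(0.0, 1.0, 50))
radii = np.linalg.norm(pts, axis=1)
assert np.allclose(radii, 1.0)   # every sample lies exactly on the unit circle
```

NURBS surfaces generalize this construction to tensor-product surfaces with many control points, which is what makes organ shapes both anatomically realistic and easy to deform.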
Directory of Open Access Journals (Sweden)
Simeone Marino
2016-10-01
Tuberculosis (TB) is a world-wide health problem with approximately 2 billion people infected with Mycobacterium tuberculosis (Mtb), the causative bacterium of TB. The pathologic hallmark of Mtb infection in humans and Non-Human Primates (NHPs) is the formation of spherical structures, primarily in the lungs, called granulomas. Infection occurs after inhalation of bacteria into the lungs, where resident antigen-presenting cells (APCs) take up bacteria and initiate the immune response to Mtb infection. APCs traffic from the site of infection (lung) to lung-draining lymph nodes (LNs), where they prime T cells to recognize Mtb. These T cells, circulating back through the blood, migrate to the lungs to perform their immune effector functions. We have previously developed a hybrid agent-based model (ABM), labeled GranSim, describing in silico immune cell, bacterial (Mtb) and molecular behaviors during tuberculosis infection, and recently linked that model to operate across three physiological compartments: lung (infection site where granulomas form), lung-draining lymph node (LN, site of generation of adaptive immunity) and blood (a measurable compartment). Granuloma formation and function is captured by a spatio-temporal model (i.e., the ABM), while the LN and blood compartments represent temporal dynamics of the whole body in response to infection and are captured with ordinary differential equations (ODEs). In order to have a more mechanistic representation of APC trafficking from the lung to the lymph node, and to better capture antigen presentation in a draining LN, the current study incorporates the role of dendritic cells (DCs) in a computational fashion into GranSim. Results: The model was calibrated using experimental data from the lungs and blood of NHPs. The addition of DCs allowed us to investigate in greater detail mechanisms of recruitment, trafficking and antigen presentation and their role in tuberculosis infection. Conclusion: The main conclusion of this study is
International Nuclear Information System (INIS)
Gritli, Hassène; Belghith, Safya
2015-01-01
Highlights:
• A numerical calculation method of the Lyapunov exponents in the compass-gait model under OGY control is proposed.
• A new linearization method of the impulsive hybrid dynamics around a one-periodic hybrid limit cycle is achieved.
• We develop a simple analytical expression of a controlled hybrid Poincaré map.
• A dimension reduction of the hybrid Poincaré map is realized.
• We describe the numerical computation procedure of the Lyapunov exponents via the designed hybrid Poincaré map.
Abstract: This paper aims at providing a numerical calculation method of the spectrum of Lyapunov exponents in a four-dimensional impulsive hybrid nonlinear dynamics of a passive compass-gait model under the OGY control approach by means of a controlled hybrid Poincaré map. We present a four-dimensional simplified analytical expression of such hybrid map obtained by linearizing the uncontrolled impulsive hybrid nonlinear dynamics around a desired one-periodic passive hybrid limit cycle. In order to compute the spectrum of Lyapunov exponents, a dimension reduction of the controlled hybrid Poincaré map is realized. The numerical calculation of the spectrum of Lyapunov exponents using the reduced-dimension controlled hybrid Poincaré map is given in detail. In order to show the effectiveness of the developed method, the spectrum of Lyapunov exponents is calculated as the slope (bifurcation) parameter varies and hence used to predict the walking dynamics behavior of the compass-gait model under the OGY control.
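The core numerical procedure — accumulating Lyapunov exponents from products of Jacobians with QR re-orthonormalization along an orbit — can be sketched independently of the compass-gait model. Here the two-dimensional Hénon map stands in for the reduced-dimension controlled hybrid Poincaré map; the method, not the map, is the point.

```python
import numpy as np

# Lyapunov spectrum of a discrete map via Jacobian products with QR
# re-orthonormalization. The Henon map is a stand-in for the paper's
# reduced-dimension controlled hybrid Poincare map.

def henon(x, a=1.4, b=0.3):
    return np.array([1.0 - a * x[0] ** 2 + x[1], b * x[0]])

def henon_jac(x, a=1.4, b=0.3):
    return np.array([[-2.0 * a * x[0], 1.0], [b, 0.0]])

def lyapunov_spectrum(f, jac, x0, n_iter=5000, n_skip=100):
    x = np.asarray(x0, dtype=float)
    for _ in range(n_skip):                 # settle onto the attractor
        x = f(x)
    q = np.eye(len(x))
    sums = np.zeros(len(x))
    for _ in range(n_iter):
        q, r = np.linalg.qr(jac(x) @ q)     # re-orthonormalize tangent vectors
        sums += np.log(np.abs(np.diag(r)))  # log of local stretching factors
        x = f(x)
    return sums / n_iter                    # exponents per map iteration

exponents = lyapunov_spectrum(henon, henon_jac, [0.1, 0.1])
```

A sanity check on any such implementation: the exponents must sum to the average of log|det J| along the orbit (for the Hénon map, log b per iteration).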
Hybrid computing - Generalities and bibliography
International Nuclear Information System (INIS)
Neel, Daniele
1970-01-01
This note presents the content of a research thesis. It describes the evolution of hybrid computing systems, discusses the benefits and shortcomings of analogue or hybrid systems, discusses the building of a hybrid system (required properties), comments on different possible uses, addresses the issues of language and programming, and discusses analysis methods and scopes of application. An appendix proposes a bibliography on these issues, and notably on the different scopes of application (simulation, fluid dynamics, biology, chemistry, electronics, energy, errors, space, programming languages, hardware, mechanics, optimisation of equations or processes, and physics) [fr]
Toward accurate tooth segmentation from computed tomography images using a hybrid level set model
Energy Technology Data Exchange (ETDEWEB)
Gan, Yangzhou; Zhao, Qunfei [Department of Automation, Shanghai Jiao Tong University, and Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai 200240 (China); Xia, Zeyang, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn; Hu, Ying [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, and The Chinese University of Hong Kong, Shenzhen 518055 (China); Xiong, Jing, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 510855 (China); Zhang, Jianwei [TAMS, Department of Informatics, University of Hamburg, Hamburg 22527 (Germany)
2015-01-15
Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm³) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm³, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm³, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0
Toward accurate tooth segmentation from computed tomography images using a hybrid level set model
International Nuclear Information System (INIS)
Gan, Yangzhou; Zhao, Qunfei; Xia, Zeyang; Hu, Ying; Xiong, Jing; Zhang, Jianwei
2015-01-01
Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm³) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm³, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm³, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0.28 ± 0.03 mm
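The volume-overlap metrics quoted in both records (VD and DSC) are easy to state precisely. A minimal sketch on synthetic binary masks follows; the voxel volume is an assumed value, not taken from the paper.

```python
import numpy as np

# Volume-overlap metrics for binary 3-D segmentation masks.
# voxel_volume would come from the CT voxel spacing; 0.25 mm isotropic
# spacing is an assumption for illustration.

def volume_difference(seg, ref, voxel_volume=0.25 ** 3):
    """VD in mm^3: absolute difference of the two segmented volumes."""
    return abs(int(seg.sum()) - int(ref.sum())) * voxel_volume

def dice(seg, ref):
    """DSC in %: 2|A ∩ B| / (|A| + |B|)."""
    inter = np.logical_and(seg, ref).sum()
    return 200.0 * inter / (seg.sum() + ref.sum())

# toy reference and slightly shifted segmentation
ref = np.zeros((10, 10, 10), dtype=bool)
ref[2:8, 2:8, 2:8] = True
seg = np.zeros_like(ref)
seg[3:8, 2:8, 2:8] = True
```

The surface-distance metrics (ASSD, RMSSSD, MSSD) need a surface extraction step and are omitted here.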
Hybrid Computational Model for High-Altitude Aeroassist Vehicles, Phase II
National Aeronautics and Space Administration — The proposed effort addresses a need for accurate computational models to support aeroassist and entry vehicle system design over a broad range of flight conditions...
Energy Technology Data Exchange (ETDEWEB)
Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Castro, Joseph Pete Jr.; Giunta, Anthony Andrew
2006-01-01
Many engineering application problems use optimization algorithms in conjunction with numerical simulators to search for solutions. The formulation of relevant objective functions and constraints dictates possible optimization algorithms. Often, a gradient-based approach is not possible since objective functions and constraints can be nonlinear, nonconvex, non-differentiable, or even discontinuous, and the simulations involved can be computationally expensive. Moreover, computational efficiency and accuracy are desirable and also influence the choice of solution method. With the advent and increasing availability of massively parallel computers, computational speed has increased tremendously. Unfortunately, the numerical and model complexities of many problems still demand significant computational resources. Moreover, in optimization, these expenses can be a limiting factor since obtaining solutions often requires the completion of numerous computationally intensive simulations. Therefore, we propose a multifidelity optimization algorithm (MFO) designed to improve the computational efficiency of an optimization method for a wide range of applications. In developing the MFO algorithm, we take advantage of the interactions between multifidelity models to develop a dynamic and computational-time-saving optimization algorithm. First, a direct search method is applied to the high-fidelity model over a reduced design space. In conjunction with this search, a specialized oracle is employed to map the design space of this high-fidelity model to that of a computationally cheaper low-fidelity model using space mapping techniques. Then, in the low-fidelity space, an optimum is obtained using gradient- or non-gradient-based optimization, and it is mapped back to the high-fidelity space. In this paper, we describe the theory and implementation details of our MFO algorithm. We also demonstrate our MFO method on some example problems and on two applications: earth penetrators and
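One simple flavor of the high-/low-fidelity interaction described above is an additive, first-order correction of the cheap model at the current iterate (a basic space-mapping-style scheme). The two model functions below are toy stand-ins, not the paper's simulators.

```python
import numpy as np

# Hedged sketch of multifidelity optimization: a crude direct search on a
# cheap low-fidelity model, additively corrected (value + slope) to agree
# with the "expensive" high-fidelity model at the current iterate.

def f_hi(x):                 # "expensive" model (imagine a long simulation)
    return (x - 1.3) ** 2 + 0.5

def f_lo(x):                 # cheap, biased approximation
    return (x - 1.0) ** 2

def argmin_1d(f, lo=-5.0, hi=5.0, n=2001):
    xs = np.linspace(lo, hi, n)          # crude direct search on a grid
    return xs[np.argmin(f(xs))]

def mfo(x0, n_cycles=5, h=1e-4):
    x = x0
    for _ in range(n_cycles):
        d0 = f_hi(x) - f_lo(x)                        # value mismatch
        d1 = ((f_hi(x + h) - f_lo(x + h)) -
              (f_hi(x - h) - f_lo(x - h))) / (2 * h)  # slope mismatch
        # optimize the corrected low-fidelity surrogate, then re-center
        x = argmin_1d(lambda t: f_lo(t) + d0 + d1 * (t - x))
    return x

x_star = mfo(0.0)
```

Because the corrected surrogate matches the high-fidelity model to first order at each iterate, its minimizer lands on the high-fidelity optimum here in a single cycle.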
A hybrid model for computing nonthermal ion distributions in a long mean-free-path plasma
Tang, Xianzhu; McDevitt, Chris; Guo, Zehua; Berk, Herb
2014-10-01
Non-thermal ions, especially the suprathermal ones, are known to make a dominant contribution to a number of important physical quantities, such as the fusion reactivity in controlled fusion, the ion heat flux, and, in the case of a tokamak, the ion bootstrap current. Evaluating the deviation from a local Maxwellian distribution of these non-thermal ions can be a challenging task in the context of a global plasma fluid model that evolves the plasma density, flow, and temperature. Here we describe a hybrid model for coupling such a constrained kinetic calculation to global plasma fluid models. The key ingredient is a non-perturbative treatment of the tail ions where the ion Knudsen number approaches or surpasses order unity. This can be sharply contrasted with the standard Chapman-Enskog approach, which relies on a perturbative treatment that is frequently invalidated. The accuracy of our coupling scheme is controlled by the precise criteria for matching the non-perturbative kinetic model to perturbative solutions in both configuration space and velocity space. Although our specific application examples will be drawn from laboratory controlled fusion experiments, the general approach is applicable to space and astrophysical plasmas as well. Work supported by DOE.
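A minimal version of the matching criterion is a local Knudsen number built from a gradient scale length: where Kn is small a Chapman-Enskog-like closure is trusted, and where it approaches order unity the tail ions get the kinetic treatment. The profile, the mean free path, and the threshold below are all illustrative assumptions.

```python
import numpy as np

# Sketch of a configuration-space matching criterion: local Knudsen number
# Kn = mean free path / gradient scale length flags cells where a
# perturbative fluid closure breaks down. All numbers are illustrative.

def knudsen(mfp, temperature, dx):
    """Kn from the temperature gradient scale length L = T / |dT/dx|."""
    dTdx = np.gradient(temperature, dx)
    L = temperature / np.maximum(np.abs(dTdx), 1e-30)  # avoid divide-by-zero
    return mfp / L

x = np.linspace(0.0, 1.0, 101)
T = 1.0 + 4.0 * np.exp(-((x - 0.5) / 0.05) ** 2)   # steep temperature front
kn = knudsen(mfp=0.02, temperature=T, dx=x[1] - x[0])
kinetic_region = kn > 0.1       # assumed threshold: treat these cells kinetically
```

A real implementation would add the analogous velocity-space criterion to decide which tail of the distribution needs the non-perturbative solver.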
Adaptation and hybridization in computational intelligence
Jr, Iztok
2015-01-01
This carefully edited book takes a walk through recent advances in adaptation and hybridization in the Computational Intelligence (CI) domain. It consists of ten chapters that are divided into three parts. The first part illustrates background information and provides some theoretical foundation tackling the CI domain, the second part deals with adaptation in CI algorithms, while the third part focuses on hybridization in CI. This book can serve as an ideal reference for researchers and students of computer science, electrical and civil engineering, economics, and the natural sciences who are confronted with solving optimization, modeling and simulation problems. It covers recent advances in CI that encompass nature-inspired algorithms, such as artificial neural networks, evolutionary algorithms and swarm intelligence-based algorithms.
Heterotic computing: exploiting hybrid computational devices.
Kendon, Viv; Sebald, Angelika; Stepney, Susan
2015-07-28
Current computational theory deals almost exclusively with single models: classical, neural, analogue, quantum, etc. In practice, researchers use ad hoc combinations, realizing only recently that they can be fundamentally more powerful than the individual parts. A Theo Murphy meeting brought together theorists and practitioners of various types of computing, to engage in combining the individual strengths to produce powerful new heterotic devices. 'Heterotic computing' is defined as a combination of two or more computational systems such that they provide an advantage over either substrate used separately. This post-meeting collection of articles provides a wide-ranging survey of the state of the art in diverse computational paradigms, together with reflections on their future combination into powerful and practical applications. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
Computer Modeling of Radiative Transfer in Hybrid-Stabilized Argon–Water Electric Arc
Czech Academy of Sciences Publication Activity Database
Jeništa, Jiří; Takana, H.; Nishiyama, H.; Křenek, Petr; Bartlová, M.; Aubrecht, V.
2011-01-01
Vol. 39, No. 11 (2011), p. 2892-2893. ISSN 0093-3813. Institutional research plan: CEZ:AV0Z20430508. Keywords: divergence of radiation flux; hybrid-stabilized electric arc; mass flow rate; partial characteristics; radiation flux. Subject RIV: BL - Plasma and Gas Discharge Physics. Impact factor: 1.174, year: 2011. http://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=27
Hybrid soft computing approaches research and applications
Dutta, Paramartha; Chakraborty, Susanta
2016-01-01
The book provides a platform for dealing with the flaws and failings of the soft computing paradigm through different manifestations. The different chapters highlight the necessity of the hybrid soft computing methodology in general with emphasis on several application perspectives in particular. Typical examples include (a) Study of Economic Load Dispatch by Various Hybrid Optimization Techniques, (b) An Application of Color Magnetic Resonance Brain Image Segmentation by ParaOptiMUSIG activation Function, (c) Hybrid Rough-PSO Approach in Remote Sensing Imagery Analysis, (d) A Study and Analysis of Hybrid Intelligent Techniques for Breast Cancer Detection using Breast Thermograms, and (e) Hybridization of 2D-3D Images for Human Face Recognition. The elaborate findings of the chapters enhance the exhibition of the hybrid soft computing paradigm in the field of intelligent computing.
HTTR plant dynamic simulation using a hybrid computer
International Nuclear Information System (INIS)
Shimazaki, Junya; Suzuki, Katsuo; Nabeshima, Kunihiko; Watanabe, Koichi; Shinohara, Yoshikuni; Nakagawa, Shigeaki.
1990-01-01
A plant dynamic simulation of the High-Temperature Engineering Test Reactor (HTTR) has been made using a new-type hybrid computer. This report describes a dynamic simulation model of the HTTR, a hybrid simulation method for SIMSTAR, and some results obtained from the dynamics analysis of the HTTR simulation. It concludes that hybrid plant simulation is useful for on-line simulation on account of its high computation speed compared with all-digital computer simulation. With sufficient accuracy, computation 40 times faster than real time was achieved simply by changing the analog time scale of the HTTR simulation. (author)
Accelerating Climate Simulations Through Hybrid Computing
Zhou, Shujia; Sinno, Scott; Cruz, Carlos; Purcell, Mark
2009-01-01
Unconventional multi-core processors (e.g., IBM Cell B/E and NVIDIA GPU) have emerged as accelerators in climate simulation. However, climate models typically run on parallel computers with conventional processors (e.g., Intel and AMD) using MPI. Connecting accelerators to this architecture efficiently and easily becomes a critical issue. When using MPI for connection, we identified two challenges: (1) an identical MPI implementation is required in both systems, and (2) existing MPI code must be modified to accommodate the accelerators. In response, we have extended and deployed IBM Dynamic Application Virtualization (DAV) in a hybrid computing prototype system (one blade with two Intel quad-core processors, two IBM QS22 Cell blades, connected with InfiniBand), allowing for seamlessly offloading compute-intensive functions to remote, heterogeneous accelerators in a scalable, load-balanced manner. Currently, a climate solar radiation model running with multiple MPI processes has been offloaded to multiple Cell blades with approximately 10% network overhead.
Checkpointing for a hybrid computing node
Cher, Chen-Yong
2016-03-08
According to an aspect, a method for checkpointing in a hybrid computing node includes executing a task in a processing accelerator of the hybrid computing node. A checkpoint is created in a local memory of the processing accelerator. The checkpoint includes state data to restart execution of the task in the processing accelerator upon a restart operation. Execution of the task is resumed in the processing accelerator after creating the checkpoint. The state data of the checkpoint are transferred from the processing accelerator to a main processor of the hybrid computing node while the processing accelerator is executing the task.
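The claimed sequence — local snapshot, immediate resume, asynchronous drain of state to the main processor — can be mimicked in a few lines. This toy uses a Python thread as the "transfer engine"; the class, the state layout, and the checkpoint interval are all invented for illustration.

```python
import copy
import threading

# Toy sketch of the checkpointing scheme: snapshot task state locally
# (cheap), resume compute immediately, and drain the snapshot to "main
# processor memory" on a background thread while the task keeps running.

class AcceleratorTask:
    def __init__(self):
        self.state = {"step": 0, "acc": 0.0}   # stand-in for accelerator state
        self.host_checkpoints = []             # stand-in for main-CPU memory

    def checkpoint(self):
        local_ckpt = copy.deepcopy(self.state)          # fast local snapshot
        t = threading.Thread(target=self.host_checkpoints.append,
                             args=(local_ckpt,))        # asynchronous transfer
        t.start()
        return t

    def run(self, steps, ckpt_every=10):
        pending = []
        for _ in range(steps):
            self.state["step"] += 1
            self.state["acc"] += self.state["step"]
            if self.state["step"] % ckpt_every == 0:
                pending.append(self.checkpoint())       # compute continues
        for t in pending:                               # settle all transfers
            t.join()

task = AcceleratorTask()
task.run(35)
```

The deep copy is what decouples transfer from execution: the snapshot is immutable while the live state keeps advancing, which is the essence of the overlap the patent describes.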
Mota, J.P.B.; Esteves, I.A.A.C.; Rostam-Abadi, M.
2004-01-01
A computational fluid dynamics (CFD) software package has been coupled with the dynamic process simulator of an adsorption storage tank for methane fuelled vehicles. The two solvers run as independent processes and handle non-overlapping portions of the computational domain. The codes exchange data on the boundary interface of the two domains to ensure continuity of the solution and of its gradient. A software interface was developed to dynamically suspend and activate each process as necessary, and be responsible for data exchange and process synchronization. This hybrid computational tool has been successfully employed to accurately simulate the discharge of a new tank design and evaluate its performance. The case study presented here shows that CFD and process simulation are highly complementary computational tools, and that there are clear benefits to be gained from a close integration of the two. © 2004 Elsevier Ltd. All rights reserved.
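The interface exchange the abstract describes can be sketched with two solvers owning non-overlapping halves of a 1-D diffusion domain and trading ghost-cell values each step. Grid sizes, the diffusivity, and the initial condition are made up; the point is the coupling pattern, not the tank physics.

```python
import numpy as np

# Two "independent solvers" each own half of a 1-D diffusion domain and
# exchange interface data (a ghost-cell value) every step, keeping the
# solution continuous across the seam. Explicit FTCS update; parameters
# chosen so alpha*dt/dx^2 = 0.4 < 0.5 (stable).

def step(u, dx, dt, alpha, left_ghost, right_ghost):
    ext = np.concatenate(([left_ghost], u, [right_ghost]))
    return u + alpha * dt / dx ** 2 * (ext[2:] - 2 * ext[1:-1] + ext[:-2])

n, dx, dt, alpha = 50, 1.0 / 100, 4e-5, 1.0
u_a = np.ones(n)        # solver A: hot left half
u_b = np.zeros(n)       # solver B: cold right half
for _ in range(2000):
    # interface exchange: each code's ghost cell is the neighbour's edge
    # value; outer ghosts mirror the edge (zero-flux boundaries)
    a_new = step(u_a, dx, dt, alpha, u_a[0], u_b[0])
    b_new = step(u_b, dx, dt, alpha, u_a[-1], u_b[-1])
    u_a, u_b = a_new, b_new
```

Because the interface fluxes seen by the two halves are equal and opposite, the scheme conserves the total heat content exactly, which is a useful check on any such coupling.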
Reactor systems modeling for ICF hybrids
International Nuclear Information System (INIS)
Berwald, D.H.; Meier, W.R.
1980-10-01
The computational models of ICF reactor subsystems developed by LLNL and TRW are described; they were incorporated into a computer program for use in the EPRI-sponsored Feasibility Assessment of Fusion-Fission Hybrids. Representative parametric variations have been examined. Many of the ICF subsystem models are very preliminary, and more quantitative models need to be developed and included in the code.
Generalised Computability and Applications to Hybrid Systems
DEFF Research Database (Denmark)
Korovina, Margarita V.; Kudinov, Oleg V.
2001-01-01
We investigate the concept of generalised computability of operators and functionals defined on the set of continuous functions, firstly introduced in [9]. By working in the reals, with equality and without equality, we study properties of generalised computable operators and functionals. We also propose an interesting application to the formalisation of hybrid systems. We obtain a class of hybrid systems whose trajectories are computable in the sense of computable analysis. This research was supported in part by the RFBR (grants N 99-01-00485, N 00-01-00810) and by the Siberian Branch of RAS (a grant for young researchers, 2000).
Model Reduction of Hybrid Systems
DEFF Research Database (Denmark)
Shaker, Hamid Reza
High-technological solutions of today are characterized by complex dynamical models. A lot of these models have an inherent hybrid/switching structure. Hybrid/switched systems are powerful models for distributed embedded systems design where discrete controls are applied to continuous processes. … Generalized gramians are the solutions to the observability and controllability Lyapunov inequalities. In the first framework the projection matrices are found based on the common generalized gramians. This framework preserves the stability of the original switched system for all switching signals; stability is guaranteed to be preserved for an arbitrary switching signal. To compute the common generalized gramians, linear matrix inequalities (LMIs) need to be solved. These LMIs are not always feasible. In order to solve the problem of conservatism, the second framework is presented. In this method the projection …
Universal blind quantum computation for hybrid system
Huang, He-Liang; Bao, Wan-Su; Li, Tan; Li, Feng-Guang; Fu, Xiang-Qun; Zhang, Shuo; Zhang, Hai-Long; Wang, Xiang
2017-08-01
As progress on building practical quantum computers continues to advance, first-generation practical quantum computers will be available for ordinary users in the cloud style, similar to IBM's Quantum Experience nowadays. Clients can remotely access the quantum servers using some simple devices. In such a situation, it is of prime importance to keep the security of the client's information. Blind quantum computation protocols enable a client with limited quantum technology to delegate her quantum computation to a quantum server without leaking any privacy. To date, blind quantum computation has been considered only for individual quantum systems. However, a practical universal quantum computer is likely to be a hybrid system. Here, we take the first step to construct a framework of blind quantum computation for the hybrid system, which provides a more feasible way for scalable blind quantum computation.
Hybrid rocket engine, theoretical model and experiment
Chelaru, Teodor-Viorel; Mingireanu, Florin
2011-06-01
The purpose of this paper is to build a theoretical model for the hybrid rocket engine/motor and to validate it using experimental results. The work approaches the main problems of the hybrid motor: scalability, stability/controllability of the operating parameters, and increasing the solid fuel regression rate. At first, we focus on theoretical models for the hybrid rocket motor and compare the results with already available experimental data from various research groups. A primary computation model is presented together with results from a numerical algorithm based on a computational model. We present theoretical predictions for several commercial hybrid rocket motors having different scales and compare them with experimental measurements of those hybrid rocket motors. Next the paper focuses on the tribrid rocket motor concept, which by supplementary liquid fuel injection can improve the thrust controllability. A complementary computation model is also presented to estimate the regression rate increase of solid fuel doped with oxidizer. Finally, the stability of the hybrid rocket motor is investigated using Lyapunov theory. The stability coefficients obtained are dependent on the burning parameters, while the stability and command matrices are identified. The paper thoroughly presents the input data of the model, which ensures the reproducibility of the numerical results by independent researchers.
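The regression-rate behavior such models center on is conventionally written r = a·G_ox^n. The sketch below integrates port growth under that law; the coefficients a and n are illustrative (HTPB-like order of magnitude), not values from the paper.

```python
import math

# Sketch of the classical hybrid-fuel regression-rate law r = a * G_ox**n.
# Coefficients a, n and all geometry/flow numbers are illustrative
# assumptions, not the paper's data.

def regression_rate(mdot_ox, port_radius, a=2e-5, n=0.62):
    """Regression rate r [m/s] from oxidizer mass flux G_ox [kg/m^2/s]."""
    g_ox = mdot_ox / (math.pi * port_radius ** 2)
    return a * g_ox ** n

def burn(mdot_ox=2.0, r0=0.03, dt=0.01, t_end=10.0, rho_fuel=930.0, length=1.0):
    """Euler-integrate port radius growth; return final radius and the
    instantaneous fuel mass flow from the burning port surface."""
    r, t = r0, 0.0
    while t < t_end:
        r += regression_rate(mdot_ox, r) * dt
        t += dt
    mdot_fuel = rho_fuel * 2 * math.pi * r * length * regression_rate(mdot_ox, r)
    return r, mdot_fuel

final_radius, mdot_fuel = burn()
```

Note the built-in O/F shift: as the port opens, G_ox and hence r fall, which is exactly the scalability/controllability headache the abstract refers to.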
Evaporator modeling - A hybrid approach
International Nuclear Information System (INIS)
Ding Xudong; Cai Wenjian; Jia Lei; Wen Changyun
2009-01-01
In this paper, a hybrid modeling approach is proposed to model two-phase flow evaporators. The main procedures for hybrid modeling include: (1) formulating the process fundamental governing equations based on energy and material balances and thermodynamic principles; (2) selecting input/output (I/O) variables responsible for the system performance which can be measured and controlled; (3) representing those variables existing in the original equations but not measurable as simple functions of selected I/Os or constants; (4) obtaining a single equation which correlates system inputs and outputs; and (5) identifying unknown parameters by linear or nonlinear least-squares methods. The method takes advantage of both physical and empirical modeling approaches and can accurately predict performance over a wide operating range and in real time, which significantly reduces the computational burden and increases the prediction accuracy. The model is verified with experimental data taken from a testing system. The testing results show that the proposed model can accurately predict the performance of the real-time operating evaporator with a maximum error of ±8%. The developed models will have wide applications in operational optimization, performance assessment, and fault detection and diagnosis.
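Step (5) in miniature: once the hybrid model has been reduced to a single equation linear in its unknown parameters, they can be identified from measured I/O data by least squares. The "evaporator" relation below is synthetic, chosen only to show the mechanics.

```python
import numpy as np

# Parameter identification by linear least squares, as in step (5) above.
# The correlating equation q = theta1 * mdot + theta2 * dT is a synthetic
# stand-in for the paper's single evaporator equation.

rng = np.random.default_rng(1)
mdot = rng.uniform(0.1, 0.5, 50)        # refrigerant mass flow (input)
dT = rng.uniform(4.0, 12.0, 50)         # superheat (input)
theta_true = np.array([2.5, 0.8])       # "unknown" parameters
q = theta_true[0] * mdot + theta_true[1] * dT   # heat duty (output)
q_meas = q + rng.normal(0.0, 0.01, 50)  # add sensor noise

X = np.column_stack([mdot, dT])         # regressor matrix from I/O data
theta_hat, *_ = np.linalg.lstsq(X, q_meas, rcond=None)
```

For a nonlinear-in-parameters correlating equation, the same workflow applies with an iterative solver in place of `lstsq`.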
Modelling dependable systems using hybrid Bayesian networks
International Nuclear Information System (INIS)
Neil, Martin; Tailor, Manesh; Marquez, David; Fenton, Norman; Hearty, Peter
2008-01-01
A hybrid Bayesian network (BN) is one that incorporates both discrete and continuous nodes. In our extensive applications of BNs for system dependability assessment, the models are invariably hybrid and the need for efficient and accurate computation is paramount. We apply a new iterative algorithm that efficiently combines dynamic discretisation with robust propagation algorithms on junction tree structures to perform inference in hybrid BNs. We illustrate its use in the field of dependability with two examples of reliability estimation. Firstly we estimate the reliability of a simple single system, and next we implement a hierarchical Bayesian model. In the hierarchical model we compute the reliability of two unknown subsystems from data collected on historically similar subsystems and then input the result into a reliability block model to compute system-level reliability. We conclude that dynamic discretisation can be used as an alternative to analytical or Monte Carlo methods with high precision and can be applied to a wide range of dependability problems.
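The hierarchical computation has a simple Monte Carlo caricature: Beta posteriors for two subsystem reliabilities, learned from (invented) historical success/failure counts, pushed through a series reliability block model. This is a stand-in for, not an implementation of, the paper's dynamic-discretisation inference.

```python
import random

# Monte Carlo caricature of the hierarchical reliability model: Beta
# posteriors for two subsystems propagated through a series block model.
# The trial counts are invented for illustration.

def posterior_samples(successes, trials, n, rng):
    """Samples from the Beta(successes+1, failures+1) posterior reliability
    (uniform prior, Bernoulli trials)."""
    return [rng.betavariate(successes + 1, trials - successes + 1)
            for _ in range(n)]

rng = random.Random(42)
n = 20000
sub_a = posterior_samples(successes=48, trials=50, n=n, rng=rng)
sub_b = posterior_samples(successes=95, trials=100, n=n, rng=rng)
system = [a * b for a, b in zip(sub_a, sub_b)]   # two blocks in series
mean_rel = sum(system) / n
```

The product inside the list comprehension is the series block rule R_sys = R_a * R_b; a parallel block would use 1 - (1 - R_a) * (1 - R_b) instead.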
den Harder, Annemarie M; Willemink, Martin J; van Hamersvelt, Robbert W; Vonken, Evertjan P A; Schilham, Arnold M R; Lammers, Jan-Willem J; Luijk, Bart; Budde, Ricardo P J; Leiner, Tim; de Jong, Pim A
2016-01-01
The aim of the study was to determine the effects of dose reduction and iterative reconstruction (IR) on pulmonary nodule volumetry. In this prospective study, 25 patients scheduled for follow-up of pulmonary nodules were included. Computed tomography acquisitions were acquired at 4 dose levels with a median of 2.1, 1.2, 0.8, and 0.6 mSv. Data were reconstructed with filtered back projection (FBP), hybrid IR, and model-based IR. Volumetry was performed using semiautomatic software. At the highest dose level, more than 91% (34/37) of the nodules could be segmented, and at the lowest dose level, this was more than 83%. Thirty-three nodules were included for further analysis. Filtered back projection and hybrid IR did not lead to significant differences, whereas model-based IR resulted in lower volume measurements with a maximum difference of -11% compared with FBP at routine dose. Pulmonary nodule volumetry can be accurately performed at a submillisievert dose with both FBP and hybrid IR.
International Nuclear Information System (INIS)
Ivascu, M.
1983-10-01
Computer codes incorporating advanced nuclear models (optical, statistical and pre-equilibrium decay nuclear reaction models) were used to calculate neutron cross sections needed for fusion reactor technology. The elastic and inelastic scattering (n,2n), (n,p), (n,n'p), (n,d) and (n,γ) cross sections for stable molybdenum isotopes Mosup(92,94,95,96,97,98,100) and incident neutron energy from about 100 keV or a threshold to 20 MeV were calculated using the consistent set of input parameters. The hydrogen production cross section which determined the radiation damage in structural materials of fusion reactors can be simply deduced from the presented results. The more elaborated microscopic models of nuclear level density are required for high accuracy calculations
Boulanger, Eliot; Thiel, Walter
2012-11-13
Accurate quantum mechanical/molecular mechanical (QM/MM) treatments should account for MM polarization and properly include long-range electrostatic interactions. We report on a development that covers both these aspects. Our approach combines the classical Drude oscillator (DO) model for the electronic polarizability of the MM atoms with the generalized solvent boundary potential (GSBP) and the solvated macromolecule boundary potential (SMBP). These boundary potentials (BP) are designed to capture the long-range effects of the outer region of a large system on its interior. They employ a finite difference approximation to the Poisson-Boltzmann equation for computing electrostatic interactions and take into account outer-region bulk solvent through a polarizable dielectric continuum (PDC). This approach thus leads to fully polarizable three-layer QM/MM-DO/BP methods. As the mutual responses of each of the subsystems have to be taken into account, we propose efficient schemes to converge the polarization of each layer simultaneously. For molecular dynamics (MD) simulations using GSBP, this is achieved by considering the MM polarizable model as a dynamical degree of freedom, and hence contributions from the boundary potential can be evaluated for a frozen state of polarization at every time step. For geometry optimizations using SMBP, we propose a dual self-consistent field approach for relaxing the Drude oscillators to their ideal positions and converging the QM wave function with the proper boundary potential. The chosen coupling schemes are evaluated with a test system consisting of a glycine molecule in a water ball. Both boundary potentials are capable of properly reproducing the gradients at the inner-region atoms and the Drude oscillators. We show that the effect of the Drude oscillators must be included in all terms of the boundary potentials to obtain accurate results and that the use of a high dielectric constant for the PDC does not lead to a polarization
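The simultaneous-convergence requirement can be abstracted to a joint fixed point: the QM response and the Drude displacements each depend on the other, and both are iterated until self-consistent. The linear couplings below are invented; this is a cartoon of the dual SCF idea, not the QM/MM-DO/BP method itself.

```python
# Cartoon of a dual self-consistent-field loop: two mutually coupled
# responses (a QM charge q and an MM Drude displacement d) iterated to a
# joint fixed point. The linear response coefficients are invented.

def dual_scf(q0=0.0, d0=0.0, tol=1e-10, max_iter=200):
    q, d = q0, d0
    for i in range(max_iter):
        q_new = 1.0 + 0.3 * d        # QM wavefunction response to MM dipoles
        d_new = 0.5 * q              # Drude relaxation in the QM field
        if abs(q_new - q) < tol and abs(d_new - d) < tol:
            return q_new, d_new, i   # both layers simultaneously converged
        q, d = q_new, d_new
    raise RuntimeError("dual SCF not converged")

q, d, n_iter = dual_scf()
```

The analytic fixed point here is q = 1/0.85, d = 0.5/0.85, and the iteration converges geometrically because the combined coupling (0.3 × 0.5 = 0.15) is a contraction; real QM/MM polarizable loops need exactly this kind of joint convergence test rather than converging one layer at a time.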
Computational Modeling | Bioenergy | NREL
Computational Modeling: NREL uses computational modeling to increase the … Plant cell walls are the source of biofuels and biomaterials; our modeling investigates their properties. Quantum Mechanical Models: NREL studies chemical and electronic properties and processes to reduce barriers …
Hybrid parallel computing architecture for multiview phase shifting
Zhong, Kai; Li, Zhongwei; Zhou, Xiaohui; Shi, Yusheng; Wang, Congjun
2014-11-01
The multiview phase-shifting method shows its powerful capability in achieving high resolution three-dimensional (3-D) shape measurement. Unfortunately, this ability results in very high computation costs and 3-D computations have to be processed offline. To realize real-time 3-D shape measurement, a hybrid parallel computing architecture is proposed for multiview phase shifting. In this architecture, the central processing unit can co-operate with the graphic processing unit (GPU) to achieve hybrid parallel computing. The high computation cost procedures, including lens distortion rectification, phase computation, correspondence, and 3-D reconstruction, are implemented in the GPU, and a three-layer kernel function model is designed to simultaneously realize coarse-grained and fine-grained parallel computing. Experimental results verify that the developed system can perform 50 fps (frames per second) real-time 3-D measurement with 260 K 3-D points per frame. A speedup of up to 180 times is obtained for the performance of the proposed technique using an NVIDIA GT560Ti graphics card rather than a sequential C implementation on a 3.4 GHz Intel Core i7-3770.
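The per-pixel phase computation that dominates such pipelines is compact enough to show for standard four-step phase shifting, where phi = atan2(I4 − I2, I1 − I3). Synthetic fringe images stand in for camera captures, and the fringe period, image size, and intensity scale are arbitrary assumptions.

```python
import numpy as np

# Per-pixel wrapped-phase recovery for standard four-step phase shifting:
# I_k = A + B*cos(phi + k*pi/2), k = 0..3, so phi = atan2(I4-I2, I1-I3).
# Synthetic fringes stand in for camera frames; sizes are arbitrary.

h, w = 480, 640
yy, xx = np.mgrid[0:h, 0:w]
phi_true = (2 * np.pi * xx / 32.0) % (2 * np.pi) - np.pi   # wrapped phase
A, B = 128.0, 100.0                                        # offset, modulation
frames = [A + B * np.cos(phi_true + k * np.pi / 2) for k in range(4)]

# this embarrassingly parallel map is what the GPU kernels would compute
phi = np.arctan2(frames[3] - frames[1], frames[0] - frames[2])
```

Because every pixel is independent, this map is exactly the kind of fine-grained workload that benefits from a GPU kernel, while the per-camera correspondence step supplies the coarse-grained level.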
Accelerating Climate and Weather Simulations through Hybrid Computing
Zhou, Shujia; Cruz, Carlos; Duffy, Daniel; Tucker, Robert; Purcell, Mark
2011-01-01
Unconventional multi- and many-core processors (e.g., IBM(R) Cell B.E.(TM) and NVIDIA(R) GPU) have emerged as effective accelerators in trial climate and weather simulations. Yet these climate and weather models typically run on parallel computers with conventional processors (e.g., Intel, AMD, and IBM) using the Message Passing Interface. To address the challenges involved in efficiently and easily connecting accelerators to parallel computers, we investigated using IBM's Dynamic Application Virtualization(TM) (IBM DAV) software in a prototype hybrid computing system with representative climate and weather model components. The hybrid system comprises two Intel blades and two IBM QS22 Cell B.E. blades, connected with both InfiniBand(R) (IB) and 1-Gigabit Ethernet. The system significantly accelerates a solar radiation model component by offloading compute-intensive calculations to the Cell blades. Systematic tests show that IBM DAV can seamlessly offload compute-intensive calculations from Intel blades to Cell B.E. blades in a scalable, load-balanced manner. However, noticeable communication overhead was observed, mainly due to the IP-over-InfiniBand protocol. Full utilization of the IB Sockets Direct Protocol and the lower-latency production version of IBM DAV will reduce this overhead.
Hybrid epidemics--a case study on computer worm conficker.
Directory of Open Access Journals (Sweden)
Changwang Zhang
Full Text Available Conficker is a computer worm that erupted on the Internet in 2008. It is unique in combining three different spreading strategies: local probing, neighbourhood probing, and global probing. We propose a mathematical model that combines three modes of spreading: local, neighbourhood, and global, to capture the worm's spreading behaviour. The parameters of the model are inferred directly from network data obtained during the first day of the Conficker epidemic. The model is then used to explore the tradeoff between spreading modes in determining the worm's effectiveness. Our results show that the Conficker epidemic is an example of a critically hybrid epidemic, in which the different modes of spreading in isolation do not lead to successful epidemics. Such hybrid spreading strategies may be used beneficially to provide the most effective strategies for promulgating information across a large population. When used maliciously, however, they can present a dangerous challenge to current internet security protocols.
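A rough way to see why a "critically hybrid" epidemic can succeed while each mode fails alone is a compartmental sketch. The toy SIR model below is not the paper's network model; it simply sums hypothetical per-mode transmission rates into one effective rate, so each sub-critical mode (beta/gamma < 1) dies out while their combination is super-critical:

```python
def sir_final_size(beta, gamma=1.0, i0=1e-4, dt=0.01, t_max=100.0):
    """Euler-integrated SIR model; returns the final epidemic size R(inf).

    The hybrid worm is modelled (very roughly) by summing the
    transmission rates of its spreading modes into a single beta.
    """
    s, i, r = 1.0 - i0, i0, 0.0
    for _ in range(int(t_max / dt)):
        new_inf = beta * s * i * dt   # S -> I
        new_rec = gamma * i * dt      # I -> R
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return r

# Each mode alone is sub-critical (beta/gamma < 1) and fizzles out ...
single = sir_final_size(beta=0.5)
# ... but the three modes combined are super-critical and take off.
combined = sir_final_size(beta=0.5 + 0.5 + 0.5)
assert single < 0.01 and combined > 0.5
```

The per-mode rates (0.5 each) are invented for illustration; the paper infers its parameters from measured Conficker traffic and works on a network rather than a well-mixed population.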
Hybrid epidemics--a case study on computer worm conficker.
Zhang, Changwang; Zhou, Shi; Chain, Benjamin M
2015-01-01
Conficker is a computer worm that erupted on the Internet in 2008. It is unique in combining three different spreading strategies: local probing, neighbourhood probing, and global probing. We propose a mathematical model that combines three modes of spreading: local, neighbourhood, and global, to capture the worm's spreading behaviour. The parameters of the model are inferred directly from network data obtained during the first day of the Conficker epidemic. The model is then used to explore the tradeoff between spreading modes in determining the worm's effectiveness. Our results show that the Conficker epidemic is an example of a critically hybrid epidemic, in which the different modes of spreading in isolation do not lead to successful epidemics. Such hybrid spreading strategies may be used beneficially to provide the most effective strategies for promulgating information across a large population. When used maliciously, however, they can present a dangerous challenge to current internet security protocols.
Corley, R A; Minard, K R; Kabilan, S; Einstein, D R; Kuprat, A P; Harkema, J R; Kimbell, J S; Gargas, M L; Kinzell, John H
2009-05-01
The percentages of total airflows over the nasal respiratory and olfactory epithelium of female rabbits were calculated from computational fluid dynamics (CFD) simulations of steady-state inhalation. These airflow calculations, along with nasal airway geometry determinations, are critical parameters for hybrid CFD/physiologically based pharmacokinetic models that describe the nasal dosimetry of water-soluble or reactive gases and vapors in rabbits. CFD simulations were based upon three-dimensional computational meshes derived from magnetic resonance images of three adult female New Zealand White (NZW) rabbits. In the anterior portion of the nose, the maxillary turbinates of rabbits are considerably more complex than comparable regions in rats, mice, monkeys, or humans. This leads to a greater surface-area-to-volume ratio in this region and thus the potential for increased extraction of water-soluble or reactive gases and vapors in the anterior portion of the nose compared to many other species. Although there was considerable interanimal variability in the fine structures of the nasal turbinates and airflows in the anterior portions of the nose, there was remarkable consistency between rabbits in the percentage of total inspired airflows that reached the ethmoid turbinate region (approximately 50%), which is presumably lined with olfactory epithelium. These latter results (airflows reaching the ethmoid turbinate region) were higher than previously published estimates for the male F344 rat (19%) and human (7%). These differences in regional airflows can have significant implications in interspecies extrapolations of nasal dosimetry.
Compositional Modelling of Stochastic Hybrid Systems
Strubbe, S.N.
2005-01-01
In this thesis we present a modelling framework for compositional modelling of stochastic hybrid systems. Hybrid systems consist of a combination of continuous and discrete dynamics. The state space of a hybrid system is hybrid in the sense that it consists of a continuous component and a discrete component.
Hybrid soft computing systems for electromyographic signals analysis: a review
2014-01-01
The electromyographic (EMG) signal is a bio-signal collected from human skeletal muscle. Analysis of EMG signals has been widely used to detect human movement intent, control various human-machine interfaces, diagnose neuromuscular diseases, and model the neuromusculoskeletal system. With the advances of artificial intelligence and soft computing, many sophisticated techniques have been proposed for this purpose. A hybrid soft computing system (HSCS), the integration of these different techniques, aims to further improve the effectiveness, efficiency, and accuracy of EMG analysis. This paper reviews and compares key combinations of neural networks, support vector machines, fuzzy logic, evolutionary computing, and swarm intelligence for EMG analysis. Our suggestions on the possible future development of HSCS in EMG analysis are also given in terms of basic soft computing techniques, further combination of these techniques, and their other applications in EMG analysis. PMID:24490979
Hybrid soft computing systems for electromyographic signals analysis: a review.
Xie, Hong-Bo; Guo, Tianruo; Bai, Siwei; Dokos, Socrates
2014-02-03
The electromyographic (EMG) signal is a bio-signal collected from human skeletal muscle. Analysis of EMG signals has been widely used to detect human movement intent, control various human-machine interfaces, diagnose neuromuscular diseases, and model the neuromusculoskeletal system. With the advances of artificial intelligence and soft computing, many sophisticated techniques have been proposed for this purpose. A hybrid soft computing system (HSCS), the integration of these different techniques, aims to further improve the effectiveness, efficiency, and accuracy of EMG analysis. This paper reviews and compares key combinations of neural networks, support vector machines, fuzzy logic, evolutionary computing, and swarm intelligence for EMG analysis. Our suggestions on the possible future development of HSCS in EMG analysis are also given in terms of basic soft computing techniques, further combination of these techniques, and their other applications in EMG analysis.
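Before any of the soft-computing classifiers surveyed here, raw EMG is usually condensed into features. The sketch below is illustrative only (not from the review; the signal and the noise deadband are made up) and shows three standard time-domain features:

```python
import numpy as np

def emg_features(x, zc_thresh=0.01):
    """Common time-domain EMG features used as classifier inputs:
    mean absolute value (MAV), root mean square (RMS) and the
    zero-crossing count (ZC, with a small noise deadband)."""
    x = np.asarray(x, dtype=float)
    mav = np.mean(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2))
    signs = np.sign(x)
    crossings = (signs[:-1] * signs[1:] < 0) & \
                (np.abs(x[:-1] - x[1:]) > zc_thresh)
    return mav, rms, int(np.count_nonzero(crossings))

# Synthetic stand-in for an EMG burst: 50 Hz tone, 1 kHz sampling.
t = np.linspace(0, 1, 1000, endpoint=False)
x = 0.5 * np.sin(2 * np.pi * 50 * t + 0.1)
mav, rms, zc = emg_features(x)

assert abs(rms - 0.5 / np.sqrt(2)) < 1e-3   # RMS of a sine = A/sqrt(2)
assert 98 <= zc <= 100                      # ~2 crossings per cycle
```

Feature vectors such as (MAV, RMS, ZC) are what the neural, fuzzy, and SVM stages of an HSCS typically consume.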
Hybrid dynamics for currency modeling
Theodosopoulos, Ted; Trifunovic, Alex
2006-01-01
We present a simple hybrid dynamical model as a tool to investigate behavioral strategies based on trend following. The multiplicative symbolic dynamics are generated using a lognormal diffusion model for the at-the-money implied volatility term structure. Thus, our model exploits information from derivative markets to obtain qualitative properties of the return distribution for the underlier. We apply our model to the JPY-USD exchange rate and the corresponding 1mo., 3mo., 6mo. and 1yr. im...
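A lognormal diffusion can be sampled exactly from the closed-form solution of geometric Brownian motion. The sketch below is purely illustrative, with made-up drift and volatility-of-volatility parameters, not the paper's calibration:

```python
import numpy as np

def lognormal_paths(sigma0, mu, nu, t_max, n_steps, n_paths, seed=0):
    """Sample paths of a lognormal (geometric Brownian) diffusion,
    here a toy stand-in for at-the-money implied volatility:
    sigma_t = sigma0 * exp((mu - nu^2/2) t + nu W_t)."""
    rng = np.random.default_rng(seed)
    dt = t_max / n_steps
    dw = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    increments = (mu - 0.5 * nu ** 2) * dt + nu * dw
    return sigma0 * np.exp(np.cumsum(increments, axis=1))

paths = lognormal_paths(sigma0=0.10, mu=0.0, nu=0.3,
                        t_max=1.0, n_steps=252, n_paths=20000)

# With mu = 0 the process is a martingale: terminal mean ~ sigma0,
# and lognormality keeps every path strictly positive.
assert abs(paths[:, -1].mean() - 0.10) < 0.005
assert (paths > 0).all()
```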
Hybrid2 - The hybrid power system simulation model
Energy Technology Data Exchange (ETDEWEB)
Baring-Gould, E.I.; Green, H.J.; Dijk, V.A.P. van [National Renewable Energy Lab., Golden, CO (United States); Manwell, J.F. [Univ. of Massachusetts, Amherst, MA (United States)
1996-12-31
There is a large-scale need and desire for energy in remote communities, especially in the developing world; however, the lack of a user-friendly, flexible performance prediction model for hybrid power systems incorporating renewables hindered the analysis of hybrids as alternatives to conventional solutions. A user-friendly model was needed with the versatility to simulate the many system locations, widely varying hardware configurations, and differing control options of potential hybrid power systems. To meet these ends, researchers from the National Renewable Energy Laboratory (NREL) and the University of Massachusetts (UMass) developed the Hybrid2 software. This paper provides an overview of the capabilities, features, and functionality of the Hybrid2 code and discusses its validation and future plans. Model availability and technical support provided to Hybrid2 users are also discussed. 12 refs., 3 figs., 4 tabs.
Exploratory Topology Modelling of Form-Active Hybrid Structures
DEFF Research Database (Denmark)
Holden Deleuran, Anders; Pauly, Mark; Tamke, Martin
2016-01-01
The development of novel form-active hybrid structures (FAHS) is impeded by a lack of modelling tools that allow for exploratory topology modelling of shaped assemblies. We present a flexible and real-time computational design modelling pipeline developed for the exploratory modelling of FAHS...... that enables designers and engineers to iteratively construct and manipulate form-active hybrid assembly topology on the fly. The pipeline implements Kangaroo2's projection-based methods for modelling hybrid structures consisting of slender beams and cable networks. A selection of design modelling sketches...
Energy efficient hybrid computing systems using spin devices
Sharad, Mrigank
Emerging spin-devices like magnetic tunnel junctions (MTJs), spin-valves and domain wall magnets (DWM) have opened new avenues for spin-based logic design. This work explored potential computing applications which can exploit such devices for higher energy-efficiency and performance. The proposed applications involve hybrid design schemes, where charge-based devices supplement the spin-devices, to gain large benefits at the system level. As an example, lateral spin valves (LSV) involve switching of nanomagnets using spin-polarized current injection through a metallic channel such as Cu. Such spin-torque based devices possess several interesting properties that can be exploited for ultra-low power computation. The analog characteristics of spin currents facilitate non-Boolean computation like majority evaluation that can be used to model a neuron. The magneto-metallic neurons can operate at an ultra-low terminal voltage of ~20 mV, thereby resulting in small computation power. Moreover, since nano-magnets inherently act as memory elements, these devices can facilitate integration of logic and memory in interesting ways. The spin-based neurons can be integrated with CMOS and other emerging devices, leading to different classes of neuromorphic/non-von-Neumann architectures. The spin-based designs involve 'mixed-mode' processing and hence can provide very compact and ultra-low energy solutions for complex computation blocks, both digital as well as analog. Such low-power, hybrid designs can be suitable for various data processing applications like cognitive computing, associative memory, and current-mode on-chip global interconnects. Simulation results for these applications based on a device-circuit co-simulation framework predict more than ~100x improvement in computation energy as compared to state-of-the-art CMOS design, for optimal spin-device parameters.
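The majority evaluation mentioned above, which spin currents compute by analog summation, reduces functionally to a simple threshold rule. A minimal behavioural sketch (purely illustrative, not a device model):

```python
def majority_neuron(inputs, weights=None):
    """Threshold 'neuron' computing a (weighted) majority vote, the
    non-Boolean primitive that spin-torque devices can evaluate
    directly by summing spin currents. Inputs are +1 / -1."""
    if weights is None:
        weights = [1] * len(inputs)
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s > 0 else -1

# Plain 3-input majority gate.
assert majority_neuron([1, 1, -1]) == 1
assert majority_neuron([-1, -1, 1]) == -1
# Weighting the inputs turns the same primitive into a
# perceptron-style decision element.
assert majority_neuron([1, -1, -1], weights=[3, 1, 1]) == 1
```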
Plasticity: modeling & computation
National Research Council Canada - National Science Library
Borja, Ronaldo Israel
2013-01-01
.... "Plasticity Modeling & Computation" is a textbook written specifically for students who want to learn the theoretical, mathematical, and computational aspects of inelastic deformation in solids...
Computational hybrid anthropometric paediatric phantom library for internal radiation dosimetry
Xie, Tianwu; Kuster, Niels; Zaidi, Habib
2017-04-01
Hybrid computational phantoms combine voxel-based and simplified equation-based modelling approaches to provide unique advantages and more realism for the construction of anthropomorphic models. In this work, a methodology and C++ code are developed to generate hybrid computational phantoms covering statistical distributions of body morphometry in the paediatric population. The paediatric phantoms of the Virtual Population Series (IT’IS Foundation, Switzerland) were modified to match target anthropometric parameters, including body mass, body length, standing height and sitting height/stature ratio, determined from reference databases of the National Centre for Health Statistics and the National Health and Nutrition Examination Survey. The phantoms were selected as representative anchor phantoms for the newborn, 1, 2, 5, 10 and 15 years-old children, and were subsequently remodelled to create 1100 female and male phantoms with 10th, 25th, 50th, 75th and 90th body morphometries. Evaluation was performed qualitatively using 3D visualization and quantitatively by analysing internal organ masses. Overall, the newly generated phantoms appear very reasonable and representative of the main characteristics of the paediatric population at various ages and for different genders, body sizes and sitting stature ratios. The mass of internal organs increases with height and body mass. The comparison of organ masses of the heart, kidney, liver, lung and spleen with published autopsy and ICRP reference data for children demonstrated that they follow the same trend when correlated with age. The constructed hybrid computational phantom library opens up the prospect of comprehensive radiation dosimetry calculations and risk assessment for the paediatric population of different age groups and diverse anthropometric parameters.
Hybrid Model of Content Extraction
DEFF Research Database (Denmark)
Qureshi, Pir Abdul Rasool; Memon, Nasrullah
2012-01-01
We present a hybrid model for content extraction from HTML documents. The model operates on Document Object Model (DOM) tree of the corresponding HTML document. It evaluates each tree node and associated statistical features like link density and text distribution across the node to predict...... significance of the node towards overall content provided by the document. Once significance of the nodes is determined, the formatting characteristics like fonts, styles and the position of the nodes are evaluated to identify the nodes with similar formatting as compared to the significant nodes. The proposed...
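One of the statistical features mentioned, link density, is straightforward to compute from a parsed document. The sketch below uses Python's standard-library parser rather than a full DOM tree, and the interpretation thresholds are illustrative only:

```python
from html.parser import HTMLParser

class LinkDensity(HTMLParser):
    """Link density = anchor-text characters / all text characters.
    Content-bearing nodes tend to have low link density; navigation
    boilerplate tends to have high link density."""
    def __init__(self):
        super().__init__()
        self.in_link = 0
        self.text_chars = 0
        self.link_chars = 0
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link += 1
    def handle_endtag(self, tag):
        if tag == "a" and self.in_link:
            self.in_link -= 1
    def handle_data(self, data):
        n = len(data.strip())
        self.text_chars += n
        if self.in_link:
            self.link_chars += n
    def density(self):
        return self.link_chars / self.text_chars if self.text_chars else 0.0

nav = LinkDensity()
nav.feed('<ul><li><a href="/">Home</a></li><li><a href="/x">News</a></li></ul>')
body = LinkDensity()
body.feed('<p>A long paragraph of article text with one <a href="#">link</a> inside.</p>')

assert nav.density() == 1.0      # pure navigation: all text is links
assert body.density() < 0.2      # article text: mostly non-link text
```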
Computing all hybridization networks for multiple binary phylogenetic input trees.
Albrecht, Benjamin
2015-07-30
The computation of phylogenetic trees on the same set of species that are based on different orthologous genes can lead to incongruent trees. One possible explanation for this behavior are interspecific hybridization events recombining genes of different species. An important approach to analyze such events is the computation of hybridization networks. This work presents the first algorithm computing the hybridization number as well as a set of representative hybridization networks for multiple binary phylogenetic input trees on the same set of taxa. To improve its practical runtime, we show how this algorithm can be parallelized. Moreover, we demonstrate the efficiency of the software Hybroscale, containing an implementation of our algorithm, by comparing it to PIRNv2.0, which is so far the best available software computing the exact hybridization number for multiple binary phylogenetic trees on the same set of taxa. The algorithm is part of the software Hybroscale, which was developed specifically for the investigation of hybridization networks including their computation and visualization. Hybroscale is freely available(1) and runs on all three major operating systems. Our simulation study indicates that our approach is on average 100 times faster than PIRNv2.0. Moreover, we show how Hybroscale improves the interpretation of the reported hybridization networks by adding certain features to its graphical representation.
Computational neurogenetic modeling
Benuskova, Lubica
2010-01-01
Computational Neurogenetic Modeling is a student text, introducing the scope and problems of a new scientific discipline - Computational Neurogenetic Modeling (CNGM). CNGM is concerned with the study and development of dynamic neuronal models for modeling brain functions with respect to genes and dynamic interactions between genes. These include neural network models and their integration with gene network models. This new area brings together knowledge from various scientific disciplines, such as computer and information science, neuroscience and cognitive science, genetics and molecular biol
International Nuclear Information System (INIS)
Rusinowski, Henryk; Stanek, Wojciech
2010-01-01
In the case of large power boilers, energy efficiency is usually determined with the indirect method. Flue gas losses and unburnt combustible losses have a significant influence on the boiler's efficiency. Estimating these losses requires knowledge of the influence of operating parameters on the flue gas temperature and on the content of combustible particles in the solid combustion products. A hybrid model of a boiler developed with the application of both analytical modelling and artificial intelligence is described. The analytical part of the model includes the balance equations. The empirical models express the dependence of the flue gas temperature and the mass fraction of unburnt combustibles in the solid combustion products on the operating parameters of the boiler. The empirical models have been worked out by means of neural and regression modelling.
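In its simplest form, the empirical sub-model is a regression fit of flue gas temperature against operating parameters. The data below are invented for illustration; the paper's actual operating data and model structure are not reproduced here:

```python
import numpy as np

# Hypothetical operating data: boiler load (%) and excess O2 (%)
# against flue gas temperature (deg C). A linear least-squares fit
# stands in for the paper's empirical (neural/regression) sub-model.
load = np.array([60, 70, 80, 90, 100, 75, 85, 95], dtype=float)
o2 = np.array([4.0, 3.6, 3.2, 2.9, 2.6, 3.4, 3.0, 2.8])
t_fg = np.array([128, 133, 139, 145, 151, 136, 142, 148], dtype=float)

# Design matrix: intercept + load + O2.
X = np.column_stack([np.ones_like(load), load, o2])
coef, *_ = np.linalg.lstsq(X, t_fg, rcond=None)

pred = X @ coef
rmse = np.sqrt(np.mean((pred - t_fg) ** 2))
assert rmse < 1.0   # the synthetic data is nearly linear by design
```

In the hybrid model such an empirical relation feeds the flue-gas-loss term of the analytical balance equations.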
Mathematical Modeling of Hybrid Electrical Engineering Systems
Directory of Open Access Journals (Sweden)
A. A. Lobaty
2016-01-01
Full Text Available A large class of systems that have found application in various industries and households, electrified transportation facilities and the energy sector has been classified as electrical engineering systems. Their characteristic feature is a combination of continuous and discontinuous modes of operation, which is reflected in the appearance of the relatively new term "hybrid systems". A wide class of hybrid systems comprises pulsed DC converters operating with pulse-width modulation, which are non-linear systems with variable structure. Using various methods of linearization it is possible to obtain linear mathematical models that rather accurately simulate the behavior of such systems. However, the presence of exponential nonlinearities in the mathematical models creates considerable difficulties for implementation in digital hardware. A solution can be found by approximating the exponential functions with first-order polynomials, which, however, weakens the strict correspondence between the analytical model and the characteristics of the real object. There are two practical approaches to synthesizing algorithms for the control of hybrid systems. The first approach is based on representing the whole system by a discrete model described by difference equations, which makes it possible to synthesize discrete algorithms. The second approach is based on describing the system by differential equations, which describe the synthesis of continuous algorithms and their further implementation in a digital computer included in the control loop. The paper considers modelling of a hybrid electrical engineering system using differential equations. Neglecting the pulse duration, it has been proposed to describe the behavior of vector components in phase coordinates of the hybrid system by stochastic differential equations containing generally non-linear differentiable random functions. A stochastic vector-matrix equation describing dynamics of the
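The first-order polynomial approximation of the exponential mentioned above trades strict correspondence for hardware simplicity, and its error is easy to bound since exp(x) - (1 + x) = x^2/2 + O(x^3). A small sketch, with an arbitrarily chosen argument range:

```python
import math

# First-order approximation exp(x) ~ 1 + x, of the kind used to make
# the converter model implementable in digital hardware. The range
# |x| <= 0.1 below is an arbitrary illustrative choice.
def exp_error(x):
    """Absolute error of the linear approximation at x."""
    return math.exp(x) - (1.0 + x)

worst = max(abs(exp_error(x / 1000.0)) / math.exp(x / 1000.0)
            for x in range(-100, 101))

# For |x| <= 0.1 the relative error stays below ~0.6 %.
assert worst < 0.006
```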
Deriving simulators for hybrid Chi models
Beek, van D.A.; Man, K.L.; Reniers, M.A.; Rooda, J.E.; Schiffelers, R.R.H.
2006-01-01
The hybrid Chi language is a formalism for the modelling, simulation and verification of hybrid systems. The formal semantics of hybrid Chi allows the definition of provably correct implementations for simulation, verification and real-time control. This paper discusses the principles of deriving an
Nuclear Hybrid Energy System Modeling: RELAP5 Dynamic Coupling Capabilities
Energy Technology Data Exchange (ETDEWEB)
Piyush Sabharwall; Nolan Anderson; Haihua Zhao; Shannon Bragg-Sitton; George Mesina
2012-09-01
The nuclear hybrid energy systems (NHES) research team is currently developing a dynamic simulation of an integrated hybrid energy system. A detailed simulation of proposed NHES architectures will allow initial computational demonstration of a tightly coupled NHES to identify key reactor subsystem requirements, identify candidate reactor technologies for a hybrid system, and identify key challenges to operation of the coupled system. This work will provide a baseline for later coupling of design-specific reactor models through industry collaboration. The modeling capability addressed in this report focuses on the reactor subsystem simulation.
Hybrid model for simulation of plasma jet injection in tokamak
Galkin, Sergei A.; Bogatu, I. N.
2016-10-01
The hybrid kinetic model of plasma treats the ions as kinetic particles and the electrons as a charge-neutralizing massless fluid. The model is essentially applicable when most of the energy is concentrated in the ions rather than in the electrons, i.e. it is well suited for the high-density hyper-velocity C60 plasma jet. The hybrid model separates the slower ion time scale from the faster electron time scale, which can then be neglected. That is why hybrid codes consistently outperform traditional PIC codes in computational efficiency, while still resolving kinetic ion effects. We discuss a 2D hybrid model and code with an exactly energy-conserving numerical algorithm and present some results of its application to the simulation of C60 plasma jet penetration through a tokamak-like magnetic barrier. We also examine the 3D model/code extension and its possible applications to tokamak and ionospheric plasmas. The work is supported in part by US DOE DE-SC0015776 Grant.
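The kinetic-ion half of such a scheme is typically advanced with a Boris-type particle push. The sketch below is a generic illustration, not the authors' code: the fields are prescribed constants instead of being computed from the massless electron fluid, and units are arbitrary. It recovers the expected E x B drift:

```python
import numpy as np

def boris_push(x, v, e_field, b_field, q_m, dt, n_steps):
    """Boris integrator for the kinetic-ion half of a hybrid model.
    In a real hybrid code e_field would come from the massless
    electron fluid each step; here it is a fixed uniform field."""
    vs = []
    for _ in range(n_steps):
        v_minus = v + q_m * e_field * dt / 2       # half electric kick
        t = q_m * b_field * dt / 2                 # magnetic rotation
        v_prime = v_minus + np.cross(v_minus, t)
        v_plus = v_minus + np.cross(v_prime, 2 * t / (1 + t @ t))
        v = v_plus + q_m * e_field * dt / 2        # half electric kick
        x = x + v * dt
        vs.append(v.copy())
    return x, np.array(vs)

E = np.array([0.0, 0.1, 0.0])      # arbitrary units
B = np.array([0.0, 0.0, 1.0])
_, vs = boris_push(np.zeros(3), np.zeros(3), E, B,
                   q_m=1.0, dt=0.1, n_steps=10000)

# Time-averaged velocity approaches the E x B drift, (E/B, 0, 0).
assert abs(vs[:, 0].mean() - 0.1) < 0.01
assert abs(vs[:, 1].mean()) < 0.01
```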
The IceCube Computing Infrastructure Model
CERN. Geneva
2012-01-01
Besides the big LHC experiments, a number of mid-size experiments are coming online which need to define new computing models to meet their processing and storage requirements. We present the hybrid computing model of IceCube, which leverages Grid models with a more flexible direct user model, as an example of a possible solution. In IceCube a central datacenter at UW-Madison serves as the Tier-0, with a single Tier-1 datacenter at DESY Zeuthen. We describe the setup of the IceCube computing infrastructure and report on our experience in successfully provisioning the IceCube computing needs.
Ale, Angelique; Ermolayev, Vladimir; Deliolanis, Nikolaos C; Ntziachristos, Vasilis
2013-05-01
The ability to visualize early stage lung cancer is important in the study of biomarkers and targeting agents that could lead to earlier diagnosis. The recent development of hybrid free-space 360-deg fluorescence molecular tomography (FMT) and x-ray computed tomography (XCT) imaging yields a superior optical imaging modality for three-dimensional small animal fluorescence imaging over stand-alone optical systems. Imaging accuracy was improved by using XCT information in the fluorescence reconstruction method. Despite this progress, the detection sensitivity of targeted fluorescence agents remains limited by nonspecific background accumulation of the fluorochrome employed, which complicates early detection of murine cancers. Therefore we examine whether x-ray CT information and bulk fluorescence detection can be combined to increase detection sensitivity. Correspondingly, we research the performance of a data-driven fluorescence background estimator employed for subtraction of background fluorescence from acquisition data. Using mice containing known fluorochromes ex vivo, we demonstrate the reduction of background signals from reconstructed images and sensitivity improvements. Finally, by applying the method to in vivo data from K-ras transgenic mice developing lung cancer, we find small tumors at an early stage compared with reconstructions performed using raw data. We conclude with the benefits of employing fluorescence subtraction in hybrid FMT-XCT for early detection studies.
International Nuclear Information System (INIS)
Bonacorsi, D.
2007-01-01
The CMS experiment at LHC has developed a baseline Computing Model addressing the needs of a computing system capable to operate in the first years of LHC running. It is focused on a data model with heavy streaming at the raw data level based on trigger, and on the achievement of the maximum flexibility in the use of distributed computing resources. The CMS distributed Computing Model includes a Tier-0 centre at CERN, a CMS Analysis Facility at CERN, several Tier-1 centres located at large regional computing centres, and many Tier-2 centres worldwide. The workflows have been identified, along with a baseline architecture for the data management infrastructure. This model is also being tested in Grid Service Challenges of increasing complexity, coordinated with the Worldwide LHC Computing Grid community
A hybrid computer simulation of reactor spatial dynamics
International Nuclear Information System (INIS)
Hinds, H.W.
1977-08-01
The partial differential equations describing the one-speed spatial dynamics of thermal neutron reactors were converted to a set of ordinary differential equations, using finite-difference approximations for the spatial derivatives. The variables were then normalized to a steady-state reference condition in a novel manner, to yield an equation set particularly suitable for implementation on a hybrid computer. One Applied Dynamics AD/FIVE analog-computer console is capable of solving, all in parallel, up to 30 simultaneous differential equations. This corresponds roughly to eight reactor nodes, each with two active delayed-neutron groups. To improve accuracy, an increase in the number of nodes is usually required. Using the Hsu-Howe multiplexing technique, an 8-node, one-dimensional module was switched back and forth between the left and right halves of the reactor, to simulate a 16-node model, also in one dimension. These two versions (8 or 16 nodes) of the model were tested on benchmark problems of the loss-of-coolant type, which were also solved using the digital code FORSIM, with two energy groups and 26 nodes. Good agreement was obtained between the two solution techniques. (author)
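The finite-difference conversion of the spatial derivatives can be illustrated on a bare 1-D diffusion equation. The sketch below uses invented parameters and explicit Euler time stepping in place of the analog hardware; the fundamental mode decays at the analytic rate D * (pi/L)^2:

```python
import numpy as np

# Finite-difference conversion of a 1-D diffusion equation
# d(phi)/dt = D * d2(phi)/dx2 into N coupled ODEs, the same
# discretization step the hybrid simulation used before mapping
# nodes onto analog hardware. All parameters are hypothetical.
D, L, N = 1.0, 1.0, 50
dx = L / (N + 1)
x = np.linspace(dx, L - dx, N)
phi = np.sin(np.pi * x / L)          # fundamental mode, zero at edges

dt = 0.2 * dx * dx / D               # within explicit-Euler stability
steps = int(0.05 / dt)
for _ in range(steps):
    lap = np.roll(phi, -1) - 2 * phi + np.roll(phi, 1)
    lap[0] = phi[1] - 2 * phi[0]     # zero boundary values
    lap[-1] = phi[-2] - 2 * phi[-1]
    phi = phi + dt * D * lap / (dx * dx)

# The fundamental mode decays as exp(-D * (pi/L)**2 * t).
expected = np.exp(-D * (np.pi / L) ** 2 * dt * steps)
assert abs(phi[N // 2] / np.sin(np.pi * x[N // 2] / L) - expected) < 1e-2
```

Each row of the coupled ODE system corresponds to one reactor node; on the hybrid machine each such node (plus its delayed-neutron groups) occupied a set of analog integrators instead of a time-stepping loop.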
Computational analysis on plug-in hybrid electric motorcycle chassis
Teoh, S. J.; Bakar, R. A.; Gan, L. M.
2013-12-01
The plug-in hybrid electric motorcycle (PHEM) is an alternative to promote sustainability and lower emissions. However, the PHEM overall system packaging is constrained by the limited space in a motorcycle chassis. In this paper, a chassis applying the concept of a Chopper is analysed for application in a PHEM. The chassis three-dimensional (3D) model is built with CAD software. The PHEM power-train components and drive-train mechanisms are integrated into the 3D model to ensure the chassis provides sufficient space. Besides that, a human dummy model is built into the 3D model to ensure the rider's ergonomics and comfort. The chassis 3D model then undergoes stress-strain simulation. The simulation predicts the stress distribution, displacement and factor of safety (FOS). The data are used to identify the critical points, thus suggesting whether the chassis design is applicable or needs to be redesigned/modified to meet the required strength. Critical points are the locations of highest stress, which might cause the chassis to fail. For a motorcycle chassis, these points occur at the joints at the triple tree and the rear absorber bracket. As a conclusion, the computational analysis predicts the stress distribution and provides a guideline to develop a safe prototype chassis.
Hybrid Monte Carlo methods in computational finance
Leitao Rodriguez, A.
2017-01-01
Monte Carlo methods are highly appreciated and intensively employed in computational finance in the context of financial derivatives valuation or risk management. The method offers valuable advantages like flexibility, easy interpretation and straightforward implementation. Furthermore, the
Computational models of neuromodulation.
Fellous, J M; Linster, C
1998-05-15
Computational modeling of neural substrates provides an excellent theoretical framework for the understanding of the computational roles of neuromodulation. In this review, we illustrate, with a large number of modeling studies, the specific computations performed by neuromodulation in the context of various neural models of invertebrate and vertebrate preparations. We base our characterization of neuromodulations on their computational and functional roles rather than on anatomical or chemical criteria. We review the main framework in which neuromodulation has been studied theoretically (central pattern generation and oscillations, sensory processing, memory and information integration). Finally, we present a detailed mathematical overview of how neuromodulation has been implemented at the single cell and network levels in modeling studies. Overall, neuromodulation is found to increase and control computational complexity.
Input/output routines for a hybrid computer
International Nuclear Information System (INIS)
Izume, Akitada; Yodo, Terutaka; Sakama, Iwao; Sakamoto, Akira; Miyake, Osamu
1976-05-01
This report is concerned with data processing programs for a hybrid computer system. In particular, the pre-processing of magnetic tapes recorded during dynamic experiments by the FACOM 270/25 data logging system at the 50 MW steam generator test facility is described in detail. The magnetic tape is a highly effective recording medium for data logging, but the recording formats of magnetic tapes differ between data logging systems. In our section, the final data analyses are performed on data held in the disk of the EAI-690 hybrid computer system; to transfer all required information from the magnetic tapes to the disk, magnetic tape editing and data transfer are carried out by the NEAC-3200 sub-computer system. This report is written as a manual and reference handbook for users of pre-data processing between different types of computers. (auth.)
Hybrid computer simulation of the dynamics of the Hoger Onderwijs Reactor
International Nuclear Information System (INIS)
Moers, J.C.; Vries, J.W. de.
1976-01-01
A distributed parameter model for the dynamics of the Hoger Onderwijs Reactor (HOR) at Delft is presented. The neutronic and thermodynamic parts of this model have been implemented separately on the AD4-IBM1800 hybrid computer of the Delft University of Technology Computation Centre. A continuous-space/discrete-time solution method has been employed. Some test results of the simulation are included.
Computer code MLCOSP for multiple-correlation and spectrum analysis with a hybrid computer
International Nuclear Information System (INIS)
Oguma, Ritsuo; Fujii, Yoshio; Usui, Hozumi; Watanabe, Koichi
1975-10-01
Usage of the computer code MLCOSP (Multiple Correlation and Spectrum) is described for the hybrid computer installed at JAERI. Functions of the hybrid computer and its terminal devices are used ingeniously in the code to reduce the complexity of data handling that occurs in the analysis of multivariable experimental data and to keep the analysis in perspective. Features of the code are as follows: experimental data can be fed to the digital computer through the analog part of the hybrid computer by connecting a data recorder; the computed results are displayed as figures, with hardcopies taken when necessary; messages from the code are shown on the terminal, so man-machine communication is possible; furthermore, data can be entered through a keyboard, so case studies guided by the results of the analysis are possible. (auth.)
Computational hybrid anthropometric paediatric phantom library for internal radiation dosimetry
DEFF Research Database (Denmark)
Xie, Tianwu; Kuster, Niels; Zaidi, Habib
2017-01-01
for children demonstrated that they follow the same trend when correlated with age. The constructed hybrid computational phantom library opens up the prospect of comprehensive radiation dosimetry calculations and risk assessment for the paediatric population of different age groups and diverse anthropometric...
Hybrid computer optimization of systems with random parameters
White, R. C., Jr.
1972-01-01
A hybrid computer Monte Carlo technique for the simulation and optimization of systems with random parameters is presented. The method is applied to the simultaneous optimization of the means and variances of two parameters in the radar-homing missile problem treated by McGhee and Levine.
Modelling and Verifying Communication Failure of Hybrid Systems in HCSP
DEFF Research Database (Denmark)
Wang, Shuling; Nielson, Flemming; Nielson, Hanne Riis
2016-01-01
Hybrid systems are dynamic systems with interacting discrete computation and continuous physical processes. They have become ubiquitous in our daily life, e.g. automotive, aerospace and medical systems, and in particular, many of them are safety-critical. For a safety-critical hybrid system, in the presence of communication failure, the expected control from the controller will get lost and as a consequence the physical process cannot behave as expected. In this paper, we mainly consider the communication failure caused by the non-engagement of one party in a communication action, i.e. the communication itself fails to occur. To address this issue, this paper proposes a formal framework by extending HCSP, a formal modeling language for hybrid systems, for modeling and verifying hybrid systems in the absence of receiving messages due to communication failure. We present two inference systems
Evaluation of models generated via hybrid evolutionary algorithms ...
African Journals Online (AJOL)
2016-04-02
Evaluation of models generated via hybrid evolutionary algorithms for the prediction of Microcystis ... evolutionary algorithms (HEA) proved to be highly applicable to the hypertrophic reservoirs of South Africa ... discovered and optimised using a large-scale parallel computational device and relevant soft-...
Enin, S. S.; Omelchenko, E. Y.; Fomin, N. V.; Beliy, A. V.
2018-03-01
The paper describes a computer model of an overhead crane system. The modelled overhead crane consists of hoisting, trolley and crane mechanisms as well as a two-axis payload system. Using the differential equations of motion of these mechanisms, derived through the Lagrange equation of the second kind, an overhead crane computer model can be built. The computer model was implemented in Matlab. Transients of the coordinate, linear speed and motor torque of the trolley and crane mechanisms were simulated. In addition, transients of payload sway about the vertical axis were obtained. A trajectory of the trolley mechanism operating simultaneously with the crane mechanism is presented in the paper, as well as a two-axis trajectory of the payload. The designed computer model of an overhead crane is an effective means of studying positioning control and anti-sway control systems.
A Hybrid Scheduler for Many Task Computing in Big Data Systems
Directory of Open Access Journals (Sweden)
Vasiliu Laura
2017-06-01
Full Text Available With the rapid evolution of the distributed computing world in the last few years, the amount of data created and processed has increased rapidly, to the petabyte or even exabyte scale. Such huge data sets need data-intensive computing applications and impose performance requirements on the infrastructures that support them, such as high scalability, storage and fault tolerance, but also efficient scheduling algorithms. This paper focuses on providing a hybrid scheduling algorithm for many-task computing that addresses big data environments with few penalties, taking the deadlines into consideration and satisfying a data-dependent task model. The hybrid solution consists of several heuristics and algorithms (min-min, min-max and earliest deadline first) combined to provide a scheduling algorithm that matches our problem. The experimental results, conducted by simulation, show that the proposed hybrid algorithm behaves very well in terms of meeting deadlines.
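As a rough illustration of how such heuristics can be combined, the sketch below orders tasks by earliest deadline and assigns each to the machine that finishes it soonest (a min-min-style greedy choice). The task representation and the exact combination rule are assumptions for illustration, not the paper's algorithm.

```python
import heapq

def hybrid_schedule(tasks, n_machines):
    """Toy hybrid scheduler: earliest-deadline-first ordering combined
    with a min-min-style greedy machine assignment.
    `tasks` is a list of (name, duration, deadline) tuples."""
    # EDF: consider tasks in order of increasing deadline
    ordered = sorted(tasks, key=lambda t: t[2])
    # min-heap of machines keyed by their current finish time
    machines = [(0.0, m) for m in range(n_machines)]
    heapq.heapify(machines)
    schedule, missed = [], []
    for name, duration, deadline in ordered:
        finish, m = heapq.heappop(machines)   # least-loaded machine
        finish += duration
        heapq.heappush(machines, (finish, m))
        schedule.append((name, m, finish))
        if finish > deadline:                 # deadline violated
            missed.append(name)
    return schedule, missed
```

A real many-task scheduler would also account for data locality and task dependencies, which this sketch deliberately omits.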
Computer Modeling and Simulation
Energy Technology Data Exchange (ETDEWEB)
Pronskikh, V. S. [Fermilab
2014-05-09
Verification and validation of computer codes and models used in simulation are two aspects of scientific practice of high importance that have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to the model’s relation to the real world and its intended use. It has been argued that, because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model’s general inadequacy to its target should be blamed in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction needs to be made between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimenting with them). Holding to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement, I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviate the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes.
International Nuclear Information System (INIS)
Li Chao; Ebert, Ute; Hundsdorfer, Willem
2010-01-01
Streamers are the first stage of sparks and lightning; they grow due to a strongly enhanced electric field at their tips; this field is created by a thin curved space charge layer. These multiple scales are already challenging when the electrons are approximated by densities. However, electron density fluctuations in the leading edge of the front and non-thermal stretched tails of the electron energy distribution (as a cause of X-ray emissions) require a particle model to follow the electron motion. But present computers cannot deal with all electrons in a fully developed streamer. Therefore, super-particles have to be introduced, which leads to incorrect statistics and numerical artifacts. The method of choice is a hybrid computation in space where individual electrons are followed in the region of high electric field and low density while the bulk of the electrons is approximated by densities (or fluids). Here we develop the hybrid coupling for planar fronts. First, to obtain a consistent flux at the interface between particle and fluid model in the hybrid computation, the widely used classical fluid model is replaced by an extended fluid model. Then the coupling algorithm and the numerical implementation of the spatially hybrid model are presented in detail, in particular the position of the model interface and the construction of the buffer region. The method carries generic features of pulled fronts and can be applied to similar problems, such as large deviations in the leading edge of population fronts.
A hybrid agent-based approach for modeling microbiological systems.
Guo, Zaiyi; Sloot, Peter M A; Tay, Joc Cing
2008-11-21
Models for systems biology commonly adopt Differential Equations or Agent-Based modeling approaches for simulating the processes as a whole. Models based on differential equations presuppose phenomenological intracellular behavioral mechanisms, while models based on the Multi-Agent approach often use directly translated, and quantitatively less precise, if-then logical rule constructs. We propose an extendible systems model based on a hybrid agent-based approach where biological cells are modeled as individuals (agents) while molecules are represented by quantities. This hybridization in entity representation entails a combined modeling strategy with agent-based behavioral rules and differential equations, thereby balancing the requirements of extendible model granularity with computational tractability. We demonstrate the efficacy of this approach with models of chemotaxis involving an assay of 10^3 cells and 1.2×10^6 molecules. The model produces cell migration patterns that are comparable to laboratory observations.
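The core idea — discrete agents for cells, continuous quantities for molecules — can be shown in a deliberately minimal one-dimensional sketch. Everything below (the bin grid, the step rule, the consumption and diffusion constants) is a hypothetical illustration of the hybridization pattern, not the authors' chemotaxis model.

```python
import random

def simulate(steps=100, n_cells=50, n_bins=40, consume=0.05, diffuse=0.1):
    """Toy hybrid model: cells are discrete agents, the chemoattractant
    is a continuous quantity per spatial bin (a crude finite-difference
    stand-in for the PDE part)."""
    field = [1.0] * n_bins                              # uniform attractant
    cells = [random.randrange(n_bins) for _ in range(n_cells)]
    for _ in range(steps):
        moved = []
        for x in cells:
            # agent rule: step toward the neighbouring bin with more
            # attractant (domain boundaries act as walls)
            left = field[x - 1] if x > 0 else -1.0
            right = field[x + 1] if x < n_bins - 1 else -1.0
            x += 1 if right >= left else -1
            moved.append(x)
            field[x] = max(0.0, field[x] - consume)     # consumption
        cells = moved
        # explicit diffusion step for the continuous field
        field = [field[i] + diffuse *
                 ((field[i - 1] if i > 0 else field[i]) - 2 * field[i] +
                  (field[i + 1] if i < n_bins - 1 else field[i]))
                 for i in range(n_bins)]
    return field, cells
```

The pattern scales: the agent loop can carry arbitrarily rich per-cell rules while the field update stays a cheap array operation.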
HYbrid Coordinate Ocean Model (HYCOM): Global
National Oceanic and Atmospheric Administration, Department of Commerce — Global HYbrid Coordinate Ocean Model (HYCOM) and U.S. Navy Coupled Ocean Data Assimilation (NCODA) 3-day, daily forecast at approximately 9-km (1/12-degree)...
Travelling Waves in Hybrid Chemotaxis Models
Franz, Benjamin
2013-12-18
Hybrid models of chemotaxis combine agent-based models of cells with partial differential equation models of extracellular chemical signals. In this paper, travelling wave properties of hybrid models of bacterial chemotaxis are investigated. Bacteria are modelled using an agent-based (individual-based) approach with internal dynamics describing signal transduction. In addition to the chemotactic behaviour of the bacteria, the individual-based model also includes cell proliferation and death. Cells consume the extracellular nutrient field (chemoattractant), which is modelled using a partial differential equation. Mesoscopic and macroscopic equations representing the behaviour of the hybrid model are derived and the existence of travelling wave solutions for these models is established. It is shown that cell proliferation is necessary for the existence of non-transient (stationary) travelling waves in hybrid models. Additionally, a numerical comparison between the wave speeds of the continuum models and the hybrid models shows good agreement in the case of weak chemotaxis and qualitative agreement for the strong chemotaxis case. In the case of slow cell adaptation, we detect oscillating behaviour of the wave, which cannot be explained by mean-field approximations. © 2013 Society for Mathematical Biology.
International Nuclear Information System (INIS)
Grandi, C; Bonacorsi, D; Colling, D; Fisk, I; Girone, M
2014-01-01
The CMS Computing Model was developed and documented in 2004. Since then the model has evolved to be more flexible and to take advantage of new techniques, but many of the original concepts remain and are in active use. In this presentation we will discuss the changes planned for the restart of the LHC program in 2015. We will discuss the changes planned in the use and definition of the computing tiers that were defined with the MONARC project. We will present how we intend to use new services and infrastructure to provide more efficient and transparent access to the data. We will discuss the computing plans to make better use of the computing capacity by scheduling more of the processor nodes, making better use of the disk storage, and making more intelligent use of the networking.
Computational Intelligence, Cyber Security and Computational Models
Anitha, R; Lekshmi, R; Kumar, M; Bonato, Anthony; Graña, Manuel
2014-01-01
This book contains cutting-edge research material presented by researchers, engineers, developers, and practitioners from academia and industry at the International Conference on Computational Intelligence, Cyber Security and Computational Models (ICC3) organized by PSG College of Technology, Coimbatore, India during December 19–21, 2013. The materials in the book include theory and applications for design, analysis, and modeling of computational intelligence and security. The book will be useful material for students, researchers, professionals, and academicians. It will help in understanding current research trends and findings and future scope of research in computational intelligence, cyber security, and computational models.
Computationally Modeling Interpersonal Trust
Directory of Open Access Journals (Sweden)
Jin Joo eLee
2013-12-01
Full Text Available We present a computational model capable of predicting—above human accuracy—the degree of trust a person has toward their novel partner by observing the trust-related nonverbal cues expressed in their social interaction. We summarize our prior work, in which we identify nonverbal cues that signal untrustworthy behavior and also demonstrate the human mind’s readiness to interpret those cues to assess the trustworthiness of a social robot. We demonstrate that domain knowledge gained from our prior work using human-subjects experiments, when incorporated into the feature engineering process, permits a computational model to outperform both human predictions and a baseline model built in naiveté of this domain knowledge. We then present the construction of hidden Markov models to incorporate temporal relationships among the trust-related nonverbal cues. By interpreting the resulting learned structure, we observe that models built to emulate different levels of trust exhibit different sequences of nonverbal cues. From this observation, we derived sequence-based temporal features that further improve the accuracy of our computational model. Our multi-step research process presented in this paper combines the strength of experimental manipulation and machine learning to not only design a computational trust model but also to further our understanding of the dynamics of interpersonal trust.
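The hidden-Markov-model step lends itself to a compact illustration. The sketch below is not the authors' implementation — the state and emission structure is hypothetical — it only shows the scoring (forward) pass with which a cue sequence would be compared against models trained on different trust levels.

```python
import math

def forward_loglik(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm.
    start[s]: initial state probabilities; trans[s][t]: transition
    probabilities; emit[s][o]: emission probabilities."""
    n = len(start)
    alpha = [start[s] * emit[s][obs[0]] for s in range(n)]
    z = sum(alpha)
    loglik = math.log(z)
    alpha = [a / z for a in alpha]          # rescale to avoid underflow
    for o in obs[1:]:
        alpha = [emit[j][o] * sum(alpha[s] * trans[s][j] for s in range(n))
                 for j in range(n)]
        z = sum(alpha)
        loglik += math.log(z)
        alpha = [a / z for a in alpha]
    return loglik
```

A sequence of observed cues would then be classified by comparing its log-likelihood under an HMM fit to trusting interactions against one fit to untrusting interactions.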
Long, M. S.; Yantosca, R.; Nielsen, J.; Linford, J. C.; Keller, C. A.; Payer Sulprizio, M.; Jacob, D. J.
2014-12-01
The GEOS-Chem global chemical transport model (CTM), used by a large atmospheric chemistry research community, has been reengineered to serve as a platform for a range of computational atmospheric chemistry science foci and applications. Development included modularization for coupling to general circulation and Earth system models (ESMs) and the adoption of co-processor capable atmospheric chemistry solvers. This was done using an Earth System Modeling Framework (ESMF) interface that operates independently of GEOS-Chem scientific code to permit seamless transition from the GEOS-Chem stand-alone serial CTM to deployment as a coupled ESM module. In this manner, the continual stream of updates contributed by the CTM user community is automatically available for broader applications, which remain state-of-science and directly referenceable to the latest version of the standard GEOS-Chem CTM. These developments are now available as part of the standard version of the GEOS-Chem CTM. The system has been implemented as an atmospheric chemistry module within the NASA GEOS-5 ESM. The coupled GEOS-5/GEOS-Chem system was tested for weak and strong scalability and performance with a tropospheric oxidant-aerosol simulation. Results confirm that the GEOS-Chem chemical operator scales efficiently for any number of processes. Although inclusion of atmospheric chemistry in ESMs is computationally expensive, the excellent scalability of the chemical operator means that the relative cost goes down with increasing number of processes, making fine-scale resolution simulations possible.
Towards Modelling of Hybrid Systems
DEFF Research Database (Denmark)
Wisniewski, Rafal
2006-01-01
system consists of a number of dynamical systems that are glued together according to information encoded in the discrete part of the system. We develop a definition of a hybrid system as a functor from the category generated by a transition system to the category of directed topological spaces. Its...
Chaos Modelling with Computers
Indian Academy of Sciences (India)
Chaos Modelling with Computers: Unpredictable Behaviour of Deterministic Systems. Balakrishnan Ramasamy and T S K V Iyer. Resonance – Journal of Science Education, General Article, Volume 1, Issue 5, May 1996, pp 29-39.
A hybrid mammalian cell cycle model
Directory of Open Access Journals (Sweden)
Vincent Noël
2013-08-01
Full Text Available Hybrid modeling provides an effective solution to cope with multiple-time-scale dynamics in systems biology. Among the applications of this method, one of the most important is cell cycle regulation. The machinery of the cell cycle, leading to cell division and proliferation, combines slow growth, spatio-temporal re-organisation of the cell, and rapid changes of regulatory protein concentrations induced by post-translational modifications. The advancement through the cell cycle comprises a well-defined sequence of stages, separated by checkpoint transitions. The combination of continuous and discrete changes justifies hybrid modelling approaches to cell cycle dynamics. We present a piecewise-smooth version of a mammalian cell cycle model, obtained by hybridization from a smooth biochemical model. The approximate hybridization scheme, leading to simplified reaction rates and binary event location functions, is based on learning from a training set of trajectories of the smooth model. We discuss several learning strategies for the parameters of the hybrid model.
Evaluation of a Compact Hybrid Brain-Computer Interface System
Directory of Open Access Journals (Sweden)
Jaeyoung Shin
2017-01-01
Full Text Available We realized a compact hybrid brain-computer interface (BCI) system by integrating a portable near-infrared spectroscopy (NIRS) device with an economical electroencephalography (EEG) system. The NIRS array was located on the subjects’ forehead, covering the prefrontal area. The EEG electrodes were distributed over the frontal, motor/temporal, and parietal areas. The experimental paradigm involved a Stroop word-picture matching test in combination with mental arithmetic (MA) and baseline (BL) tasks, in which the subjects were asked to perform either MA or BL in response to congruent or incongruent conditions, respectively. We compared the classification accuracies of each of the modalities (NIRS or EEG) with that of the hybrid system. We showed that the hybrid system outperforms the unimodal EEG and NIRS systems by 6.2% and 2.5%, respectively. Since the proposed hybrid system is based on portable platforms, it is not confined to a laboratory environment and has the potential to be used in real-life situations, such as in neurorehabilitation.
A Hybrid 3D Indoor Space Model
Directory of Open Access Journals (Sweden)
A. Jamali
2016-10-01
Full Text Available GIS integrates spatial information and spatial analysis. An important example of such integration is emergency response, which requires route planning inside and outside of a building. Route planning requires detailed information related to the indoor and outdoor environment. Indoor navigation network models, including the Geometric Network Model (GNM), the Navigable Space Model, the sub-division model and the regular-grid model, lack indoor data sources and abstraction methods. In this paper, a hybrid indoor space model is proposed. In the proposed method, 3D modelling of the indoor navigation network is based on surveying control points and is less dependent on a 3D geometrical building model. This research proposes a method of indoor space modelling for buildings which do not have proper 2D/3D geometrical models or which lack semantic or topological information. The proposed hybrid model consists of topological, geometrical and semantic space.
CSP: A Multifaceted Hybrid Architecture for Space Computing
Rudolph, Dylan; Wilson, Christopher; Stewart, Jacob; Gauvin, Patrick; George, Alan; Lam, Herman; Crum, Gary Alex; Wirthlin, Mike; Wilson, Alex; Stoddard, Aaron
2014-01-01
Research on the CHREC Space Processor (CSP) takes a multifaceted hybrid approach to embedded space computing. Working closely with the NASA Goddard SpaceCube team, researchers at the National Science Foundation (NSF) Center for High-Performance Reconfigurable Computing (CHREC) at the University of Florida and Brigham Young University are developing hybrid space computers that feature an innovative combination of three technologies: commercial-off-the-shelf (COTS) devices, radiation-hardened (RadHard) devices, and fault-tolerant computing. Modern COTS processors provide the utmost in performance and energy-efficiency but are susceptible to ionizing radiation in space, whereas RadHard processors are virtually immune to this radiation but are more expensive, larger, less energy-efficient, and generations behind in speed and functionality. By featuring COTS devices to perform the critical data processing, supported by simpler RadHard devices that monitor and manage the COTS devices, and augmented with novel uses of fault-tolerant hardware, software, information, and networking within and between COTS devices, the resulting system can maximize performance and reliability while minimizing energy consumption and cost. NASA Goddard has adopted the CSP concept and technology with plans underway to feature flight-ready CSP boards on two upcoming space missions.
Hybrid simulation models of production networks
Kouikoglou, Vassilis S
2001-01-01
This book is concerned with a most important area of industrial production, that of analysis and optimization of production lines and networks using discrete-event models and simulation. The book introduces a novel approach that combines analytic models and discrete-event simulation. Unlike conventional piece-by-piece simulation, this method observes a reduced number of events between which the evolution of the system is tracked analytically. Using this hybrid approach, several models are developed for the analysis of production lines and networks. The hybrid approach combines speed and accuracy for exceptional analysis of most practical situations. A number of optimization problems, involving buffer design, workforce planning, and production control, are solved through the use of hybrid models.
International Nuclear Information System (INIS)
Max, G
2011-01-01
Traffic in computer networks can be described as a complicated system. Such systems show non-linear features, and their behaviour is also difficult to simulate. Before deploying network equipment, users want to know the capability of their computer network. They do not want the servers to be overloaded during temporary traffic peaks, when more requests arrive than the server is designed for. As a starting point for our study, a non-linear system model of network traffic is established to examine the behaviour of the planned network. The paper presents the set-up of a non-linear simulation model that helps us to observe dataflow problems in networks. This simple model captures the relationship between the competing traffic and the input and output dataflow. In this paper, we also focus on measuring the bottleneck of the network, which is defined as the difference between the link capacity and the competing traffic volume on the link that limits end-to-end throughput. We validate the model using measurements on a working network. The results show that the initial model estimates the main behaviours and critical parameters of the network well. Based on this study, we propose to develop a new algorithm which experimentally determines and predicts the available parameters of the modelled network.
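The bottleneck definition used in the abstract — link capacity minus competing traffic, minimised along the path — is simple enough to state directly in code. The path representation below is an illustrative assumption.

```python
def end_to_end_throughput(path):
    """Available end-to-end throughput along a path, where each link is
    a (capacity, competing_traffic) pair in the same units (e.g. Mbit/s).
    The bottleneck is the link with the smallest residual capacity."""
    residuals = [max(0.0, cap - competing) for cap, competing in path]
    return min(residuals)
```

For example, on a three-link path where the middle link is a 1000 Mbit/s trunk carrying 990 Mbit/s of competing traffic, that trunk — not the slower 100 Mbit/s access links — limits the achievable throughput.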
Fatigue of hybrid glass/carbon composites: 3D computational studies
DEFF Research Database (Denmark)
Dai, Gaoming; Mishnaevsky, Leon
2014-01-01
3D computational simulations of fatigue of hybrid carbon/glass fiber reinforced composites are carried out using X-FEM and multifiber unit cell models. A new software code for the automatic generation of unit cell multifiber models of composites with randomly misaligned fibers of various properties and geometrical parameters is developed. With the use of this program code and the X-FEM method, systematic investigations of the effect of the microstructure of hybrid composites (fraction of carbon versus glass fibers, misalignment, and interface strength) and the loading conditions (tensile versus compression cyclic loading effects) on the fatigue behavior of the materials are carried out. It was demonstrated that a higher fraction of carbon fibers in hybrid composites is beneficial for the fatigue lifetime of the composites under tension-tension cyclic loading, but might have a negative effect on the lifetime...
Hybrid NN/SVM Computational System for Optimizing Designs
Rai, Man Mohan
2009-01-01
A computational method and system based on a hybrid of an artificial neural network (NN) and a support vector machine (SVM) (see figure) has been conceived as a means of maximizing or minimizing an objective function, optionally subject to one or more constraints. Such maximization or minimization could be performed, for example, to solve a data-regression or data-classification problem or to optimize a design associated with a response function. A response function can be considered as a subset of a response surface, which is a surface in a vector space of design and performance parameters. A typical example of a design problem that the method and system can be used to solve is that of an airfoil, for which a response function could be the spatial distribution of pressure over the airfoil. In this example, the response surface would describe the pressure distribution as a function of the operating conditions and the geometric parameters of the airfoil. The use of NNs to analyze physical objects in order to optimize their responses under specified physical conditions is well known. NN analysis is suitable for multidimensional interpolation of data that lack structure and enables the representation and optimization of a succession of numerical solutions of increasing complexity or increasing fidelity to the real world. NN analysis is especially useful in helping to satisfy multiple design objectives. Feedforward NNs can be used to make estimates based on nonlinear mathematical models. One difficulty associated with the use of a feedforward NN arises from the need for nonlinear optimization to determine connection weights among input, intermediate, and output variables. It can be very expensive to train an NN in cases in which it is necessary to model large amounts of information. Less widely known (in comparison with NNs) are support vector machines (SVMs), which were originally applied in statistical learning theory. In terms that are necessarily
Weather forecasting based on hybrid neural model
Saba, Tanzila; Rehman, Amjad; AlGhamdi, Jarallah S.
2017-11-01
Making deductions and expectations about climate has been a challenge throughout mankind's history. Accurate meteorological predictions help to foresee and handle problems well in time. Different strategies have been investigated using various machine learning techniques in reported forecasting systems. The current research treats climate as a major challenge for machine information mining and deduction. Accordingly, this paper presents a hybrid neural model (MLP and RBF) to enhance the accuracy of weather forecasting. The proposed hybrid model ensures precise forecasting despite the peculiarities of climate-anticipating frameworks. The study concentrates on data representing weather forecasting for Saudi Arabia. The main input features employed to train the individual and hybrid neural networks include average dew point, minimum temperature, maximum temperature, mean temperature, average relative humidity, precipitation, normal wind speed, high wind speed and average cloudiness. The output layer is composed of two neurons to represent rainy and dry weather. Moreover, a trial-and-error approach is adopted to select an appropriate number of inputs to the hybrid neural network. Correlation coefficient, RMSE and scatter index are the standard yardsticks adopted for forecast accuracy measurement. Individually, MLP forecasting results are better than those of RBF; however, the proposed simplified hybrid neural model achieves better forecasting accuracy than both individual networks. Additionally, the results are better than those reported in the state of the art, using a simple neural structure that reduces training time and complexity.
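The abstract does not specify how the MLP and RBF outputs are fused; one common and minimal scheme is a convex combination of the two trained predictors. The sketch below assumes that scheme and a one-dimensional Gaussian RBF network, purely for illustration — the function names, `gamma`, and the blending weight `alpha` are all hypothetical.

```python
import math

def rbf_predict(x, centers, weights, gamma=1.0):
    """Output of a trained RBF network: a weighted sum of Gaussian
    bumps placed at the given centers."""
    return sum(w * math.exp(-gamma * (x - c) ** 2)
               for c, w in zip(centers, weights))

def hybrid_predict(x, mlp, rbf, alpha=0.5):
    """One simple fusion rule for two trained forecasters: a convex
    combination of their outputs (alpha weights the MLP)."""
    return alpha * mlp(x) + (1 - alpha) * rbf(x)
```

In practice `alpha` would itself be tuned on held-out data, mirroring the trial-and-error input selection the paper describes.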
Autonomic Management of Application Workflows on Hybrid Computing Infrastructure
Directory of Open Access Journals (Sweden)
Hyunjoo Kim
2011-01-01
Full Text Available In this paper, we present a programming and runtime framework that enables the autonomic management of complex application workflows on hybrid computing infrastructures. The framework is designed to address system and application heterogeneity and dynamics to ensure that application objectives and constraints are satisfied. The need for such autonomic system and application management is becoming critical as computing infrastructures become increasingly heterogeneous, integrating different classes of resources from high-end HPC systems to commodity clusters and clouds. For example, the framework presented in this paper can be used to provision the appropriate mix of resources based on application requirements and constraints. The framework also monitors the system/application state and adapts the application and/or resources to respond to changing requirements or environment. To demonstrate the operation of the framework and to evaluate its ability, we employ a workflow used to characterize an oil reservoir executing on a hybrid infrastructure composed of TeraGrid nodes and Amazon EC2 instances of various types. Specifically, we show how different applications objectives such as acceleration, conservation and resilience can be effectively achieved while satisfying deadline and budget constraints, using an appropriate mix of dynamically provisioned resources. Our evaluations also demonstrate that public clouds can be used to complement and reinforce the scheduling and usage of traditional high performance computing infrastructure.
OPTHYLIC: An Optimised Tool for Hybrid Limits Computation
Busato, Emmanuel; Calvet, David; Theveneaux-Pelzer, Timothée
2018-05-01
A software tool, computing observed and expected upper limits on Poissonian process rates using a hybrid frequentist-Bayesian CLs method, is presented. This tool can be used for simple counting experiments where only signal, background and observed yields are provided or for multi-bin experiments where binned distributions of discriminating variables are provided. It allows the combination of several channels and takes into account statistical and systematic uncertainties, as well as correlations of systematic uncertainties between channels. It has been validated against other software tools and analytical calculations, for several realistic cases.
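For the simplest case such a tool covers, a single-bin counting experiment with no uncertainties, the CLs construction reduces to a ratio of Poisson tail probabilities. The sketch below is the textbook frequentist CLs calculation for that case, not OPTHYLIC's actual hybrid frequentist-Bayesian implementation; all function names are illustrative:

```python
import math

def poisson_cdf(n, mu):
    """P(N <= n) for a Poisson variable with mean mu."""
    return sum(math.exp(-mu) * mu ** k / math.factorial(k) for k in range(n + 1))

def cls(n_obs, b, s):
    """CLs = CL(s+b) / CL(b) for observed count n_obs, background b, signal s."""
    return poisson_cdf(n_obs, s + b) / poisson_cdf(n_obs, b)

def upper_limit(n_obs, b, alpha=0.05, step=0.01):
    """Smallest signal rate excluded at the (1 - alpha) confidence level."""
    s = 0.0
    while cls(n_obs, b, s) > alpha:
        s += step
    return s
```

With zero observed events and zero background this reproduces the familiar 95% CL upper limit of about 3 signal events.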
Frank, M; Pacheco, Andreu
1998-01-01
This document is a first attempt to describe the LHCb computing model. The CPU power needed to process data for the event filter and reconstruction is estimated to be 2.2 × 10^6 MIPS. This will be installed at the experiment and will be reused during non data-taking periods for reprocessing. The maximal I/O of these activities is estimated to be around 40 MB/s. We have studied three basic models concerning the placement of the CPU resources for the other computing activities, Monte Carlo simulation (1.4 × 10^6 MIPS) and physics analysis (0.5 × 10^6 MIPS): CPU resources may either be located at the physicist's home lab, national computer centres (Regional Centres) or at CERN. The CPU resources foreseen for analysis are sufficient to allow 100 concurrent analyses. It is assumed that physicists will work in physics groups that produce analysis data at an average rate of 4.2 MB/s or 11 TB per month. However, producing these group analysis data requires reading capabilities of 660 MB/s. It is further assu...
Energy Technology Data Exchange (ETDEWEB)
Kopper, Claudio, E-mail: claudio.kopper@nikhef.nl [NIKHEF, Science Park 105, 1098 XG Amsterdam (Netherlands)
2013-10-11
Completed in 2008, Antares is now the largest water Cherenkov neutrino telescope in the Northern Hemisphere. Its main goal is to detect neutrinos from galactic and extra-galactic sources. Due to the high background rate of atmospheric muons and the high level of bioluminescence, several on-line and off-line filtering algorithms have to be applied to the raw data taken by the instrument. To be able to handle this data stream, a dedicated computing infrastructure has been set up. The paper covers the main aspects of the current official Antares computing model. This includes an overview of on-line and off-line data handling and storage. In addition, the current usage of the “IceTray” software framework for Antares data processing is highlighted. Finally, an overview of the data storage formats used for high-level analysis is given.
A Review of Hybrid Brain-Computer Interface Systems
Directory of Open Access Journals (Sweden)
Setare Amiri
2013-01-01
Full Text Available Increasing numbers of research activities and different types of studies in brain-computer interface (BCI) systems show the potential of this young research area. Research teams have studied features of different data acquisition techniques, brain activity patterns, feature extraction techniques, methods of classification, and many other aspects of a BCI system. However, conventional BCIs have not become fully practical, owing to their limited accuracy and reliability, low information transfer rates, and poor user acceptability. A new approach to creating a more reliable BCI that takes advantage of each system is to combine two or more BCI systems with different brain activity patterns or different input signal sources. This type of BCI, called a hybrid BCI, may reduce the disadvantages of each conventional BCI system. In addition, hybrid BCIs may enable more applications and possibly increase the accuracy and the information transfer rate. However, the types of BCIs and their combinations should be considered carefully. In this paper, after introducing several types of BCIs and their combinations, we review and discuss hybrid BCIs, different possibilities of combining them, and their advantages and disadvantages.
On the Likely Utility of Hybrid Weights Optimized for Variances in Hybrid Error Covariance Models
Satterfield, E.; Hodyss, D.; Kuhl, D.; Bishop, C. H.
2017-12-01
Because of imperfections in ensemble data assimilation schemes, one cannot assume that the ensemble covariance is equal to the true error covariance of a forecast. Previous work demonstrated how information about the distribution of true error variances given an ensemble sample variance can be revealed from an archive of (observation-minus-forecast, ensemble-variance) data pairs. Here, we derive a simple and intuitively compelling formula to obtain the mean of this distribution of true error variances given an ensemble sample variance from (observation-minus-forecast, ensemble-variance) data pairs produced by a single run of a data assimilation system. This formula takes the form of a Hybrid weighted average of the climatological forecast error variance and the ensemble sample variance. Here, we test the extent to which these readily obtainable weights can be used to rapidly optimize the covariance weights used in Hybrid data assimilation systems that employ weighted averages of static covariance models and flow-dependent ensemble based covariance models. Univariate data assimilation and multi-variate cycling ensemble data assimilation are considered. In both cases, it is found that our computationally efficient formula gives Hybrid weights that closely approximate the optimal weights found through the simple but computationally expensive process of testing every plausible combination of weights.
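In symbols, the weighted-average form described above can be written as follows (the notation is assumed here for illustration, not taken from the paper):

```latex
% Hybrid estimate of the true error variance given an ensemble sample
% variance s^2 and a climatological forecast error variance \sigma_c^2;
% w_c and w_e are the readily obtainable hybrid weights.
\tilde{\sigma}^2 \;=\; w_c\,\sigma_c^2 \;+\; w_e\,s^2,
\qquad w_c + w_e = 1 .
```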
Hybrid quantum teleportation: A theoretical model
Energy Technology Data Exchange (ETDEWEB)
Takeda, Shuntaro; Mizuta, Takahiro; Fuwa, Maria; Yoshikawa, Jun-ichi; Yonezawa, Hidehiro; Furusawa, Akira [Department of Applied Physics, School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan)
2014-12-04
Hybrid quantum teleportation – continuous-variable teleportation of qubits – is a promising approach for deterministically teleporting photonic qubits. We propose how to implement it with current technology. Our theoretical model shows that faithful qubit transfer can be achieved for this teleportation by choosing an optimal gain for the teleporter’s classical channel.
Hybrid cloud and cluster computing paradigms for life science applications.
Qiu, Judy; Ekanayake, Jaliya; Gunarathne, Thilina; Choi, Jong Youl; Bae, Seung-Hee; Li, Hui; Zhang, Bingjing; Wu, Tak-Lon; Ruan, Yang; Ekanayake, Saliya; Hughes, Adam; Fox, Geoffrey
2010-12-21
Clouds and MapReduce have shown themselves to be broadly useful approaches to scientific computing, especially for parallel data-intensive applications. However, they have limited applicability to some areas, such as data mining, because MapReduce performs poorly on problems with the iterative structure present in the linear algebra that underlies much data analysis. Such problems can be run efficiently on clusters using MPI, leading to a hybrid cloud and cluster environment. This motivates the design and implementation of an open-source iterative MapReduce system, Twister. Comparisons of Amazon, Azure, and traditional Linux and Windows environments on common applications have shown encouraging performance and usability in several important non-iterative cases. These are linked to MPI applications for the final stages of the data analysis. Further, we have released the open-source Twister iterative MapReduce system and benchmarked it against basic MapReduce (Hadoop) and MPI in information retrieval and life sciences applications. The hybrid cloud (MapReduce) and cluster (MPI) approach offers an attractive production environment, while Twister promises a uniform programming environment for many life sciences applications. We used the commercial clouds Amazon and Azure and the NSF resource FutureGrid to perform detailed comparisons and evaluations of different approaches to data-intensive computing. Several applications were developed in MPI, MapReduce and Twister in these different environments.
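The iterative pattern that motivates Twister, in which the reduce output of one pass becomes input to the next map pass, can be sketched with a toy one-dimensional k-means. This illustrates only the computation structure; it is not Twister's or Hadoop's API:

```python
def map_stage(points, centroids):
    """Assign each point to its nearest centroid, emitting (index, (sum, count))."""
    out = []
    for p in points:
        i = min(range(len(centroids)), key=lambda j: abs(p - centroids[j]))
        out.append((i, (p, 1)))
    return out

def reduce_stage(pairs, centroids):
    """Combine partial sums per centroid and emit updated centroids."""
    acc = {i: (0.0, 0) for i in range(len(centroids))}
    for i, (s, c) in pairs:
        ts, tc = acc[i]
        acc[i] = (ts + s, tc + c)
    # keep the old centroid if its cluster is empty
    return [acc[i][0] / acc[i][1] if acc[i][1] else centroids[i]
            for i in range(len(centroids))]

def kmeans(points, centroids, iters=10):
    # each iteration feeds the reduce output back into the next map stage
    for _ in range(iters):
        centroids = reduce_stage(map_stage(points, centroids), centroids)
    return centroids
```

In plain MapReduce each loop iteration is a separate job with full data reload, which is exactly the overhead an iterative runtime avoids.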
Ignatova, Zoya; Zimmermann, Karl-Heinz
2008-01-01
In this excellent text, the reader is given a comprehensive introduction to the field of DNA computing. The book emphasizes computational methods to tackle central problems of DNA computing, such as controlling living cells, building patterns, and generating nanomachines.
A Novel Hybrid Similarity Calculation Model
Directory of Open Access Journals (Sweden)
Xiaoping Fan
2017-01-01
Full Text Available This paper addresses the problem of similarity calculation in traditional nearest-neighbour collaborative filtering recommendation algorithms, especially their failure to describe dynamic user preference. Approaching the problem from the perspective of user interest drift, a new hybrid similarity calculation model is proposed in this paper. The model consists of two parts: on the one hand, it uses function fitting to describe users' rating behaviours and rating preferences; on the other hand, it employs the Random Forest algorithm to take user attribute features into account. The paper then combines the two parts to build a new hybrid similarity calculation model for user recommendation. Experimental results show that, for data sets of different sizes, the model's prediction precision is higher than that of traditional recommendation algorithms.
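A minimal sketch of the two-part blend described above, assuming a simple weighted sum and a cosine measure for the attribute part. The paper itself fits rating functions and uses a Random Forest, so every detail below is an illustrative stand-in:

```python
import math

def cosine(u, v):
    """Cosine similarity between two attribute vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def hybrid_similarity(rating_sim, attr_u, attr_v, alpha=0.7):
    """Blend a rating-behaviour similarity with an attribute similarity.

    rating_sim would come from the behaviour model (here just a number);
    alpha is an assumed blend weight.
    """
    return alpha * rating_sim + (1 - alpha) * cosine(attr_u, attr_v)
```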
Model Predictive Control for Connected Hybrid Electric Vehicles
Directory of Open Access Journals (Sweden)
Kaijiang Yu
2015-01-01
Full Text Available This paper presents a new model predictive control system for connected hybrid electric vehicles to improve fuel economy. The new features of this study are as follows. First, the battery charge/discharge profile and the driving velocity profile are optimized simultaneously: one optimization handles the energy management of the HEV (the battery power Pbatt), the other the energy-consumption-minimizing adaptive cruise control of the two connected vehicles. Second, a system model for connected hybrid electric vehicles has been developed that considers varying drag coefficients and road gradients. Third, the fuel model of a typical hybrid electric vehicle is developed using maps of the engine efficiency characteristics. Fourth, simulations and analyses (under different parameters, i.e., road conditions, vehicle state of charge, etc.) are conducted to verify the effectiveness of the method in achieving higher fuel efficiency. The model predictive control problem is solved using a numerical method, the continuation and generalized minimum residual method. Computer simulation results reveal improvements in fuel economy using the proposed control method.
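The receding-horizon idea can be sketched with a toy model in which a convex fuel-rate curve stands in for the engine map and a short horizon of discrete battery-power choices is enumerated by brute force. All names, dynamics and parameters are illustrative assumptions, not the paper's vehicle model or its continuation/generalized-minimum-residual solver:

```python
import itertools

def fuel(p_eng):
    """Toy convex fuel-rate model (assumed, not the paper's engine map)."""
    return 0.1 * p_eng + 0.05 * p_eng ** 2 if p_eng > 0 else 0.0

def mpc_step(soc, demand, horizon=3, controls=(-1.0, 0.0, 1.0)):
    """Return the first battery-power control of the best feasible sequence.

    soc: battery state of charge, kept within assumed bounds [0.2, 0.9];
    demand: power demand over the horizon; engine supplies demand - control.
    """
    best_cost, best_u = float("inf"), 0.0
    for seq in itertools.product(controls, repeat=horizon):
        s, cost, feasible = soc, 0.0, True
        for u, d in zip(seq, demand):
            s -= 0.1 * u                      # battery use moves SOC
            if not 0.2 <= s <= 0.9:
                feasible = False
                break
            cost += fuel(d - u)               # engine covers the rest
        if feasible and cost < best_cost:
            best_cost, best_u = cost, seq[0]  # receding horizon: keep first move
    return best_u
```

With ample charge the controller discharges the battery (the convex fuel curve rewards shaving engine power); near the lower SOC bound it backs off.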
Implementing Molecular Dynamics for Hybrid High Performance Computers - 1. Short Range Forces
International Nuclear Information System (INIS)
Brown, W. Michael; Wang, Peng; Plimpton, Steven J.; Tharrington, Arnold N.
2011-01-01
The use of accelerators such as general-purpose graphics processing units (GPGPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high performance computers, machines with more than one type of floating-point processor, are now becoming more prevalent due to these advantages. In this work, we discuss several important issues in porting a large molecular dynamics code for use on parallel hybrid machines: (1) choosing a hybrid parallel decomposition that works on central processing units (CPUs) with distributed memory and accelerator cores with shared memory, (2) minimizing the amount of code that must be ported for efficient acceleration, (3) utilizing the available processing power from both many-core CPUs and accelerators, and (4) choosing a programming model for acceleration. We present our solution to each of these issues for short-range force calculation in the molecular dynamics package LAMMPS. We describe algorithms for efficient short-range force calculation on hybrid high performance machines. We describe a new approach for dynamic load balancing of work between CPU and accelerator cores. We describe the Geryon library that allows a single code to compile with both CUDA and OpenCL for use on a variety of accelerators. Finally, we present results on a parallel test cluster containing 32 Fermi GPGPUs and 180 CPU cores.
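A generic version of such a dynamic load-balancing rule, shifting the work split toward whichever processor finished its share earlier, can be sketched as a fixed-point update on the GPU work fraction. This is an illustrative scheme, not LAMMPS's actual implementation:

```python
def balanced_fraction(f_gpu, t_gpu, t_cpu):
    """Given the current GPU work fraction and measured per-step times,
    return the fraction that would equalize GPU and CPU times
    (throughput is inferred as work / time)."""
    rate_gpu = f_gpu / t_gpu
    rate_cpu = (1.0 - f_gpu) / t_cpu
    return rate_gpu / (rate_gpu + rate_cpu)

def rebalance_loop(f, speed_gpu, speed_cpu, steps=10):
    """Simulate repeated timing + rebalancing against fixed device speeds."""
    for _ in range(steps):
        t_gpu = f / speed_gpu
        t_cpu = (1.0 - f) / speed_cpu
        f = balanced_fraction(f, t_gpu, t_cpu)
    return f
```

In a real code the measured times are noisy, so the update would typically be damped rather than applied in full each step.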
A hybrid absorbing boundary condition for frequency-domain finite-difference modelling
International Nuclear Information System (INIS)
Ren, Zhiming; Liu, Yang
2013-01-01
Liu and Sen (2010 Geophysics 75 A1–6; 2012 Geophys. Prospect. 60 1114–32) proposed an efficient hybrid scheme to significantly absorb boundary reflections for acoustic and elastic wave modelling in the time domain. In this paper, we extend the hybrid absorbing boundary condition (ABC) into the frequency domain and develop specific strategies for regular-grid and staggered-grid modelling, respectively. Numerical modelling tests of acoustic, visco-acoustic, elastic and vertically transversely isotropic (VTI) equations show significant absorptions for frequency-domain modelling. The modelling results of the Marmousi model and the salt model also demonstrate the effectiveness of the hybrid ABC. For elastic modelling, the hybrid Higdon ABC and the hybrid Clayton and Engquist (CE) ABC are implemented, respectively. Numerical simulations show that the hybrid Higdon ABC gets better absorption than the hybrid CE ABC, especially for S-waves. We further compare the hybrid ABC with the classical perfectly matched layer (PML). Results show that the two ABCs cost the same computation time and memory space for the same absorption width. However, the hybrid ABC is more effective than the PML for the same small absorption width and the absorption effects of the two ABCs gradually become similar when the absorption width is increased. (paper)
Hybrid Energy System Modeling in Modelica
Energy Technology Data Exchange (ETDEWEB)
William R. Binder; Christiaan J. J. Paredis; Humberto E. Garcia
2014-03-01
In this paper, a Hybrid Energy System (HES) configuration is modeled in Modelica. Hybrid Energy Systems (HES) have as their defining characteristic the use of one or more energy inputs, combined with the potential for multiple energy outputs. Compared to traditional energy systems, HES provide additional operational flexibility so that high variability in both energy production and consumption levels can be absorbed more effectively. This is particularly important when including renewable energy sources, whose output levels are inherently variable, determined by nature. The specific HES configuration modeled in this paper includes two energy inputs: a nuclear plant and a series of wind turbines. In addition, the system produces two energy outputs: electricity and synthetic fuel. The models are verified through simulations of the individual components and the system as a whole. The simulations are performed for a range of component sizes, operating conditions, and control schemes.
Plasticity modeling & computation
Borja, Ronaldo I
2013-01-01
There have been many excellent books written on the subject of plastic deformation in solids, but rarely can one find a textbook on this subject. “Plasticity Modeling & Computation” is a textbook written specifically for students who want to learn the theoretical, mathematical, and computational aspects of inelastic deformation in solids. It adopts a simple narrative style that is not mathematically overbearing, and has been written to emulate a professor giving a lecture on this subject inside a classroom. Each section is written to provide a balance between the relevant equations and the explanations behind them. Where relevant, sections end with one or more exercises designed to reinforce the understanding of the “lecture.” Color figures enhance the presentation and make the book very pleasant to read. For professors planning to use this textbook for their classes, the contents are sufficient for Parts A and B that can be taught in sequence over a period of two semesters or quarters.
Maze learning by a hybrid brain-computer system.
Wu, Zhaohui; Zheng, Nenggan; Zhang, Shaowu; Zheng, Xiaoxiang; Gao, Liqiang; Su, Lijuan
2016-09-13
The combination of biological and artificial intelligence is particularly driven by two major strands of research: one involves the control of mechanical, usually prosthetic, devices by conscious biological subjects, whereas the other involves the control of animal behaviour by stimulating nervous systems electrically or optically. However, to our knowledge, no study has demonstrated that spatial learning in a computer-based system can affect the learning and decision making behaviour of the biological component, namely a rat, when these two types of intelligence are wired together to form a new intelligent entity. Here, we show that ratbots, a novel hybrid brain-computer system in which rule operations conducted by computing components are coupled to a rat's brain, exhibit superior learning abilities in a maze learning task, even when the rat's vision and whisker sensation were blocked. We anticipate that our study will encourage other researchers to investigate combinations of various rule operations and other artificial intelligence algorithms with the learning and memory processes of organic brains to develop more powerful cyborg intelligence systems. Our results potentially have profound implications for a variety of applications in intelligent systems and neural rehabilitation.
Optimization of hybrid model on hajj travel
Cahyandari, R.; Ariany, R. L.; Sukono
2018-03-01
Hajj travel insurance is an insurance product offered by insurance companies to prepare funds for performing the pilgrimage. The product helps would-be pilgrims set aside hajj savings regularly, while also providing profit-sharing funds (mudharabah) and insurance protection. The fund-management scheme of the product largely uses the hybrid model, in which the fund from would-be pilgrims is divided into three management accounts: the personal account, tabarru’, and ujrah. The hybrid-model scheme for hajj travel insurance was discussed in an earlier paper, titled “The Hybrid Model Algorithm on Sharia Insurance”, taking as an example the Mitra Mabrur Plus product of the Bumiputera company. In the present paper, the previous model design is optimized by partitioning the benefits of the tabarru’ account. Benefits such as compensation for 40 critical illnesses, which initially applied to the insurance participant only, are under the optimization extended to the participant and his heir, and also to cover hospital bills. Meanwhile, the death benefit is paid if the participant dies.
Models of optical quantum computing
Directory of Open Access Journals (Sweden)
Krovi Hari
2017-03-01
Full Text Available I review some work on models of quantum computing, optical implementations of these models, as well as the associated computational power. In particular, we discuss the circuit model and cluster state implementations using quantum optics with various encodings such as dual rail encoding, Gottesman-Kitaev-Preskill encoding, and coherent state encoding. Then we discuss intermediate models of optical computing such as boson sampling and its variants. Finally, we review some recent work in optical implementations of adiabatic quantum computing and analog optical computing. We also provide a brief description of the relevant aspects from complexity theory needed to understand the results surveyed.
Three-dimensional pseudo-random number generator for implementing in hybrid computer systems
International Nuclear Information System (INIS)
Ivanov, M.A.; Vasil'ev, N.P.; Voronin, A.V.; Kravtsov, M.Yu.; Maksutov, A.A.; Spiridonov, A.A.; Khudyakova, V.I.; Chugunkov, I.V.
2012-01-01
An algorithm for generating pseudo-random numbers oriented towards implementation on hybrid computer systems is considered. The proposed solution is characterized by a high degree of parallelism.
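The parallel-generation idea can be illustrated with a counter-based generator: any element of any stream is obtained by hashing its (stream, counter) pair, so threads on a hybrid CPU/GPU system can draw numbers independently with no shared state. The SplitMix64-style mixer below is a standard construction used for illustration, not the algorithm of the paper:

```python
MASK = (1 << 64) - 1

def splitmix64(x):
    """Standard SplitMix64 finalizer: a 64-bit bijective mixing function."""
    x = (x + 0x9E3779B97F4A7C15) & MASK
    z = ((x ^ (x >> 30)) * 0xBF58476D1CE4E5B9) & MASK
    z = ((z ^ (z >> 27)) * 0x94D049BB133111EB) & MASK
    return (z ^ (z >> 31)) & MASK

def counter_rng(stream, i):
    """Value i of stream `stream`, in [0, 1), computable independently.

    Each (stream, counter) pair maps to a distinct 64-bit seed, so every
    thread or GPU lane can evaluate its own values with no synchronization.
    """
    seed = ((stream & 0xFFFFFFFF) << 32) | (i & 0xFFFFFFFF)
    return splitmix64(seed) / 2.0 ** 64
```

Because the mixer is a bijection, distinct (stream, counter) pairs are guaranteed to produce distinct raw outputs.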
Gravitational waves in hybrid quintessential inflationary models
Energy Technology Data Exchange (ETDEWEB)
Sa, Paulo M [Departamento de Fisica, Faculdade de Ciencias e Tecnologia, Universidade do Algarve, Campus de Gambelas, 8005-139 Faro (Portugal); Henriques, Alfredo B, E-mail: pmsa@ualg.pt, E-mail: alfredo.henriques@ist.utl.pt [Centro Multidisciplinar de Astrofisica - CENTRA and Departamento de Fisica, Instituto Superior Tecnico, UTL, Av. Rovisco Pais, 1049-001 Lisboa (Portugal)
2011-09-22
The generation of primordial gravitational waves is investigated within the hybrid quintessential inflationary model. Using the method of continuous Bogoliubov coefficients, we calculate the full gravitational-wave energy spectrum. The post-inflationary kination period, characteristic of quintessential inflationary models, leaves a clear signature on the spectrum, namely, a sharp rise of the gravitational-wave spectral energy density Ω_GW at high frequencies. For appropriate values of the parameters of the model, Ω_GW can be as high as 10^{-12} in the MHz-GHz range of frequencies.
Modelling Chemical Preservation of Plantain Hybrid Fruits
Directory of Open Access Journals (Sweden)
Ogueri Nwaiwu
2017-08-01
Full Text Available New plantain hybrid plants have been developed, but little has been done on the post-harvest keeping quality of the fruits and how they are affected by microbial colonization. Hence fruits from a tetraploid hybrid PITA 2 (TMPx 548-9), obtained by crossing the plantain varieties Obino l’Ewai and Calcutta 4 (AA), and two local triploid (AAB) plantain landraces, Agbagba and Obino l’Ewai, were subjected to various concentrations of acetic, sorbic and propionic acid to determine the impact of chemical concentration, chemical type and plantain variety on ripening and weight loss of plantain fruits. Analysis of titratable acidity, moisture content and total soluble solids showed no significant differences between fruits of the hybrid and the local varieties. The longest time to ripening from harvest (24 days) was achieved with fruits of Agbagba treated with 3% propionic acid. However, fruits of the PITA 2 hybrid treated with propionic and sorbic acid at 3% showed the longest green life, which indicates that the chemicals may work better at higher concentrations. The Obino l’Ewai cultivar had the highest weight loss for all chemical types used. The modelling data obtained showed that plantain variety had the most significant effect on ripening, indicating that ripening of the fruits may depend on the variety. Weight loss of fruits from the hybrid and local cultivars appeared to be unaffected by plantain variety or chemical type. The chemicals at higher concentrations may affect ripening of the fruits, and this will need further investigation.
Comments On Clock Models In Hybrid Automata And Hybrid Control Systems
Directory of Open Access Journals (Sweden)
Virginia Ecaterina OLTEAN
2001-12-01
Full Text Available Hybrid systems have received a lot of attention in the past decade, and a number of different models have been proposed in order to establish a mathematical framework able to handle both continuous and discrete aspects. This contribution focuses on two models, hybrid automata and hybrid control systems with a continuous-discrete interface, and the importance of clock models is emphasized. Simple and relevant examples, some taken from the literature, accompany the presentation.
International Nuclear Information System (INIS)
Jeong, Jong Hwi; Choi, Sang Hyoun; Cho, Sung Koo; Kim, Chan Hyeong
2007-01-01
The anthropomorphic computational phantoms are classified into two groups. One group is the stylized phantoms, or MIRD phantoms, which are based on mathematical representations of the anatomical structures. The shapes and positions of the organs and tissues in these phantoms can be adjusted by changing the coefficients of the equations in use. The other group is the voxel phantoms, which are based on tomographic images of a real person such as CT, MR and serially sectioned color slice images from a cadaver. Obviously, the voxel phantoms represent the anatomical structures of a human body much more realistically than the stylized phantoms. A realistic representation of anatomical structure is very important for an accurate calculation of radiation dose in the human body. Consequently, the ICRP recently has decided to use the voxel phantoms for the forthcoming update of the dose conversion coefficients. However, the voxel phantoms also have some limitations: (1) The topology and dimensions of the organs and tissues in a voxel model are extremely difficult to change, and (2) The thin organs, such as oral mucosa and skin, cannot be realistically modeled unless the voxel resolution is prohibitively high. Recently, a new approach has been implemented by several investigators. The investigators converted their voxel phantoms to hybrid computational phantoms based on NURBS (Non-Uniform Rational B-Splines) surface, which is smooth and deformable. It is claimed that these new phantoms have the flexibility of the stylized phantom along with the realistic representations of the anatomical structures. The topology and dimensions of the anatomical structures can be easily changed as necessary. Thin organs can be modeled without affecting computational speed or memory requirement. The hybrid phantoms can be also used for 4-D Monte Carlo simulations. In this preliminary study, the external shape of a voxel phantom (i.e., skin), HDRK-Man, was converted to a hybrid computational
Hybrid Epidemics - A Case Study on Computer Worm Conficker
Zhang, Changwang; Zhou, Shi; Chain, Benjamin M.
2014-01-01
Conficker is a computer worm that erupted on the Internet in 2008. It is unique in combining three different spreading strategies: local probing, neighbourhood probing, and global probing. We propose a mathematical model that combines three modes of spreading: local, neighbourhood, and global, to capture the worm's spreading behaviour. The parameters of the model are inferred directly from network data obtained during the first day of the Conficker epidemic. The model is then used to explore ...
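The three-mode structure of such a model can be illustrated with a deterministic mean-field sketch: hosts are grouped into subnets arranged in a ring, local probing targets a worm's own subnet, neighbourhood probing the adjacent subnets, and global probing the whole address space. The topology and all parameter values are illustrative assumptions, not those inferred from the Conficker data:

```python
def step(I, m, beta_loc, beta_nbr, beta_glob):
    """One synchronous mean-field update.

    I[j] is the (real-valued) number of infected hosts in subnet j,
    m is the number of hosts per subnet; subnets form a ring so that
    'neighbourhood' probing reaches the two adjacent subnets.
    """
    S = len(I)
    total = sum(I)
    out = []
    for j, Ij in enumerate(I):
        nbr = I[(j - 1) % S] + I[(j + 1) % S]
        # infection pressure from the three probing modes
        force = beta_loc * Ij + beta_nbr * nbr + beta_glob * total / S
        new = force * (m - Ij) / m  # only susceptible hosts can be infected
        out.append(min(m, Ij + new))
    return out

def simulate(I0, m, betas, steps):
    I = list(I0)
    for _ in range(steps):
        I = step(I, m, *betas)
    return I
```

Running the sketch with only the local mode keeps the infection confined to the seeded subnet, while even a small global rate seeds every subnet within one step, which is the qualitative difference the combined model captures.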
Kadow, Christopher; Illing, Sebastian; Kunst, Oliver; Ulbrich, Uwe; Cubasch, Ulrich
2015-04-01
The project 'Integrated Data and Evaluation System for Decadal Scale Prediction' (INTEGRATION), part of the German decadal prediction project MiKlip, develops a central evaluation system. The fully operational hybrid system features HPC shell access and a user-friendly web interface. It employs one common system with a variety of verification tools and validation data from different projects inside and outside of MiKlip. The evaluation system is located at the German Climate Computing Centre (DKRZ) and has direct access to the bulk of its ESGF node, including millions of climate model data sets, e.g. from CMIP5 and CORDEX. The database is organized by the international CMOR standard using the meta-information of the self-describing model, reanalysis and observational data sets. Apache Solr is used for indexing the different data projects into one common search environment. This metadata system, with its advanced but easy-to-handle search tool, supports users, developers and their tools in retrieving the required information. A generic application programming interface (API) allows scientific developers to connect their analysis tools with the evaluation system independently of the programming language used. Users of the evaluation techniques benefit from the common interface of the evaluation system without any need to understand the different scripting languages. Facilitating the provision and usage of tools and climate data automatically increases the number of scientists working with the data sets and helps identify discrepancies. Additionally, the history and configuration sub-system stores every analysis performed with the evaluation system in a MySQL database. Configurations and results of the tools can be shared among scientists via shell or the web system. Plugged-in tools therefore gain automatically from transparency and reproducibility. Furthermore, when configurations match while starting an evaluation tool, the system suggests reusing results already produced.
Computational Aerodynamic Modeling of Small Quadcopter Vehicles
Yoon, Seokkwan; Ventura Diaz, Patricia; Boyd, D. Douglas; Chan, William M.; Theodore, Colin R.
2017-01-01
High-fidelity computational simulations have been performed which focus on rotor-fuselage and rotor-rotor aerodynamic interactions of small quad-rotor vehicle systems. The three-dimensional unsteady Navier-Stokes equations are solved on overset grids using high-order accurate schemes, dual-time stepping, low Mach number preconditioning, and hybrid turbulence modeling. Computational results for isolated rotors are shown to compare well with available experimental data. Computational results in hover reveal the differences between a conventional configuration where the rotors are mounted above the fuselage and an unconventional configuration where the rotors are mounted below the fuselage. Complex flow physics in forward flight is investigated. The goal of this work is to demonstrate that understanding of interactional aerodynamics can be an important factor in design decisions regarding rotor and fuselage placement for next-generation multi-rotor drones.
Solving Problem of Graph Isomorphism by Membrane-Quantum Hybrid Model
Directory of Open Access Journals (Sweden)
Artiom Alhazov
2015-10-01
Full Text Available This work presents the application of new parallelization methods based on membrane-quantum hybrid computing to graph isomorphism problem solving. The applied membrane-quantum hybrid computational model was developed by the authors. The massive parallelism of unconventional computing is used to implement the classic brute-force algorithm efficiently. This approach does not impose any restrictions on the types of graphs considered. The estimated performance of the model is less than quadratic, which is a very good result for a problem of NP complexity.
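The brute-force algorithm that the membrane-quantum model parallelizes can be sketched in a few lines; the dictionary-of-neighbour-sets graph representation below is an illustrative choice, not taken from the paper:

```python
from itertools import permutations

def are_isomorphic(g1, g2):
    """Brute-force graph isomorphism test.

    g1, g2: graphs as dicts {vertex: set of neighbours} with vertices
    0..n-1.  Tries every vertex permutation -- the classic O(n!)
    approach whose branches the hybrid model explores in parallel.
    """
    n = len(g1)
    if n != len(g2):
        return False
    edges1 = {(u, v) for u in g1 for v in g1[u]}
    edges2 = {(u, v) for u in g2 for v in g2[u]}
    if len(edges1) != len(edges2):
        return False
    for perm in permutations(range(n)):
        # relabel g1's edges and compare against g2's edge set
        if {(perm[u], perm[v]) for (u, v) in edges1} == edges2:
            return True
    return False
```

For example, two three-vertex paths with different centre labels are isomorphic, while a path and a triangle are not.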
The UF family of hybrid phantoms of the developing human fetus for computational radiation dosimetry
Energy Technology Data Exchange (ETDEWEB)
Maynard, Matthew R; Geyer, John W; Bolch, Wesley [Department of Nuclear and Radiological Engineering, University of Florida, Gainesville, FL (United States); Aris, John P [Department of Anatomy and Cell Biology, University of Florida, Gainesville, FL (United States); Shifrin, Roger Y, E-mail: wbolch@ufl.edu [Department of Radiology, University of Florida, Gainesville, FL (United States)
2011-08-07
Historically, the development of computational phantoms for radiation dosimetry has primarily been directed at capturing and representing adult and pediatric anatomy, with less emphasis devoted to models of the human fetus. As concern grows over possible radiation-induced cancers from medical and non-medical exposures of the pregnant female, the need to better quantify fetal radiation doses, particularly at the organ-level, also increases. Studies such as the European Union's SOLO (Epidemiological Studies of Exposed Southern Urals Populations) hope to improve our understanding of cancer risks following chronic in utero radiation exposure. For projects such as SOLO, currently available fetal anatomic models do not provide sufficient anatomical detail for organ-level dose assessment. To address this need, two fetal hybrid computational phantoms were constructed using high-quality magnetic resonance imaging and computed tomography image sets obtained for two well-preserved fetal specimens aged 11.5 and 21 weeks post-conception. Individual soft tissue organs, bone sites and outer body contours were segmented from these images using 3D-DOCTOR(TM) and then imported to the 3D modeling software package Rhinoceros(TM) for further modeling and conversion of soft tissue organs, certain bone sites and outer body contours to deformable non-uniform rational B-spline surfaces. The two specimen-specific phantoms, along with a modified version of the 38 week UF hybrid newborn phantom, comprised a set of base phantoms from which a series of hybrid computational phantoms was derived for fetal ages 8, 10, 15, 20, 25, 30, 35 and 38 weeks post-conception. The methodology used to construct the series of phantoms accounted for the following age-dependent parameters: (1) variations in skeletal size and proportion, (2) bone-dependent variations in relative levels of bone growth, (3) variations in individual organ masses and total fetal masses and (4) statistical percentile variations
The UF family of hybrid phantoms of the developing human fetus for computational radiation dosimetry
International Nuclear Information System (INIS)
Maynard, Matthew R; Geyer, John W; Bolch, Wesley; Aris, John P; Shifrin, Roger Y
2011-01-01
Historically, the development of computational phantoms for radiation dosimetry has primarily been directed at capturing and representing adult and pediatric anatomy, with less emphasis devoted to models of the human fetus. As concern grows over possible radiation-induced cancers from medical and non-medical exposures of the pregnant female, the need to better quantify fetal radiation doses, particularly at the organ-level, also increases. Studies such as the European Union's SOLO (Epidemiological Studies of Exposed Southern Urals Populations) hope to improve our understanding of cancer risks following chronic in utero radiation exposure. For projects such as SOLO, currently available fetal anatomic models do not provide sufficient anatomical detail for organ-level dose assessment. To address this need, two fetal hybrid computational phantoms were constructed using high-quality magnetic resonance imaging and computed tomography image sets obtained for two well-preserved fetal specimens aged 11.5 and 21 weeks post-conception. Individual soft tissue organs, bone sites and outer body contours were segmented from these images using 3D-DOCTOR(TM) and then imported to the 3D modeling software package Rhinoceros(TM) for further modeling and conversion of soft tissue organs, certain bone sites and outer body contours to deformable non-uniform rational B-spline surfaces. The two specimen-specific phantoms, along with a modified version of the 38 week UF hybrid newborn phantom, comprised a set of base phantoms from which a series of hybrid computational phantoms was derived for fetal ages 8, 10, 15, 20, 25, 30, 35 and 38 weeks post-conception. The methodology used to construct the series of phantoms accounted for the following age-dependent parameters: (1) variations in skeletal size and proportion, (2) bone-dependent variations in relative levels of bone growth, (3) variations in individual organ masses and total fetal masses and (4) statistical percentile variations in
A Hybrid Soft Computing Approach for Subset Problems
Directory of Open Access Journals (Sweden)
Broderick Crawford
2013-01-01
Full Text Available Subset problems (set partitioning, packing, and covering) are formal models for many practical optimization problems. A set partitioning problem determines how the items in one set (S) can be partitioned into smaller subsets. All items in S must be contained in one and only one partition. Related problems are set packing (all items must be contained in zero or one partitions) and set covering (all items must be contained in at least one partition). Here, we present a hybrid solver based on ant colony optimization (ACO) combined with arc consistency for solving this kind of problem. ACO is a swarm intelligence metaheuristic inspired by the behavior of ants searching for food. It makes it possible to solve complex combinatorial problems for which traditional mathematical techniques may fail. On the other hand, in constraint programming, the solving process of Constraint Satisfaction Problems can dramatically reduce the search space by means of arc consistency, which enforces constraint consistencies either prior to or during search. Our hybrid approach was tested with set covering and set partitioning benchmark data sets. It was observed that the performance of ACO was improved by embedding this filtering technique in its constructive phase.
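A minimal ACO sketch for set covering may clarify the constructive phase the abstract mentions. The pheromone, evaporation and ant-count parameters below are illustrative assumptions, and the arc-consistency filtering of the actual hybrid is omitted:

```python
import random

def aco_set_cover(universe, subsets, ants=20, iters=50, rho=0.1, seed=1):
    """Toy ant colony optimization for set covering.

    subsets: list of frozensets whose union is `universe`.  Each ant
    builds a cover by repeatedly choosing a subset with probability
    proportional to pheromone * heuristic (number of newly covered
    elements).  Pheromone evaporates each iteration and is reinforced
    along the best cover found so far.
    """
    rng = random.Random(seed)
    tau = [1.0] * len(subsets)              # pheromone per subset
    best = list(range(len(subsets)))        # trivial cover: all subsets
    for _ in range(iters):
        for _ in range(ants):
            covered, cover = set(), []
            while covered != universe:
                cand = [i for i, s in enumerate(subsets) if s - covered]
                weights = [tau[i] * len(subsets[i] - covered) for i in cand]
                i = rng.choices(cand, weights)[0]
                cover.append(i)
                covered |= subsets[i]
            if len(cover) < len(best):
                best = cover
        tau = [(1 - rho) * t for t in tau]  # evaporation
        for i in best:                      # reinforce best-so-far cover
            tau[i] += 1.0 / len(best)
    return best
```

In the real hybrid, an arc-consistency filter would prune `cand` before each probabilistic choice.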
Hybrid perturbation methods based on statistical time series models
San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario
2016-04-01
In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies. These derive from the fact that, in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered; moreover, mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing the dynamics missing from the previously integrated approximation. This combination improves the precision of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators formed by the combination of three different orders of approximation of an analytical theory and a statistical time series model, and analyse their capability to process the effect produced by the flattening of the Earth. The three considered analytical components are the integration of the Kepler problem, a first-order and a second-order analytical theory, whereas the prediction technique is the same in the three cases, namely an additive Holt-Winters method.
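The additive Holt-Winters method used as the prediction technique can be sketched as follows; the smoothing constants and the initialization are common textbook choices, not values from the paper:

```python
def holt_winters_additive(y, m, alpha=0.3, beta=0.1, gamma=0.2, h=1):
    """h-step-ahead forecast with the additive Holt-Winters method.

    y: observed series, m: season length.  The method maintains a
    level, a trend and m additive seasonal components, each updated
    by exponential smoothing.
    """
    level = sum(y[:m]) / m
    trend = (sum(y[m:2 * m]) - sum(y[:m])) / m**2
    season = [y[i] - level for i in range(m)]
    for t in range(m, len(y)):
        last_level = level
        level = alpha * (y[t] - season[t % m]) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        season[t % m] = gamma * (y[t] - level) + (1 - gamma) * season[t % m]
    return level + h * trend + season[(len(y) + h - 1) % m]
```

On a purely seasonal series with no trend, the forecast reproduces the next seasonal value exactly.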
AMITIS: A 3D GPU-Based Hybrid-PIC Model for Space and Plasma Physics
Fatemi, Shahab; Poppe, Andrew R.; Delory, Gregory T.; Farrell, William M.
2017-05-01
We have developed, for the first time, an advanced modeling infrastructure in space simulations (AMITIS) with an embedded three-dimensional self-consistent grid-based hybrid model of plasma (kinetic ions and fluid electrons) that runs entirely on graphics processing units (GPUs). The model uses NVIDIA GPUs and their associated parallel computing platform, CUDA, developed for general-purpose processing on GPUs. The model uses a single CPU-GPU pair, where the CPU transfers data between the system and GPU memory, executes CUDA kernels, and writes simulation outputs to disk. All computations, including moving particles, calculating macroscopic properties of particles on a grid, and solving the hybrid model equations, are processed on a single GPU. We explain various computing kernels within AMITIS and compare their performance with an already existing, well-tested hybrid model of plasma that runs in parallel on multi-CPU platforms. We show that AMITIS runs ∼10 times faster than the parallel CPU-based hybrid model. We also introduce an implicit solver for the computation of Faraday's equation, resulting in an explicit-implicit scheme for the hybrid model equations. We show that the proposed scheme is stable and accurate. We examine energy conservation in AMITIS and show that the energy is conserved with an error < 0.2% after 500,000 timesteps, even when a very low number of particles per cell is used.
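As an illustration of the kind of per-particle kernel such a model runs on the GPU when "calculating macroscopic properties of particles on a grid", here is a CPU-side sketch of cloud-in-cell density deposition in 1D (a simplified illustration, not the actual AMITIS code):

```python
import numpy as np

def deposit_density(x, grid_n, dx):
    """Cloud-in-cell deposition of particle number density onto a
    1D periodic grid: each particle's weight is split linearly
    between its two nearest grid points.
    """
    rho = np.zeros(grid_n)
    cell = np.floor(x / dx).astype(int)     # left grid point index
    frac = x / dx - cell                    # fractional position in cell
    np.add.at(rho, cell % grid_n, (1.0 - frac) / dx)
    np.add.at(rho, (cell + 1) % grid_n, frac / dx)
    return rho
```

On a GPU, the same scatter operation is done with atomic adds, one thread per particle.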
A physicist's model of computation
International Nuclear Information System (INIS)
Fredkin, E.
1991-01-01
An attempt is presented to make a statement about what a computer is and how it works from the perspective of physics. The single observation that computation can be a reversible process allows for the same kind of insight into computing as was obtained by Carnot's discovery that heat engines could be modelled as reversible processes. It allows us to bring computation into the realm of physics, where the power of physics allows us to ask and answer questions that seemed intractable from the viewpoint of computer science. Strangely enough, this effort makes it clear why computers get cheaper every year. (author) 14 refs., 4 figs
Computational modeling in biomechanics
Mofrad, Mohammad
2010-01-01
This book provides a glimpse of the diverse and important roles that modern computational technology is playing in various areas of biomechanics. It includes unique chapters on ab initio quantum mechanical, molecular dynamic and scale coupling methods..
Design of Xen Hybrid Multiple Policy Model
Sun, Lei; Lin, Renhao; Zhu, Xianwei
2017-10-01
Virtualization technology has attracted more and more attention. As a popular open-source virtualization tool, Xen is used more and more frequently, and XSM, the Xen security module, has also drawn widespread concern. No safety status classification has been established in XSM, and it uses the virtual machine as a managed object, making Dom0 a unique administrative domain that does not satisfy the principle of least privilege. To address these questions, we design a hybrid multiple policy model named SV_HMPMD that organically integrates multiple single security policy models, including DTE, RBAC and BLP. It can fulfill the confidentiality and integrity requirements of a security model and apply different granularity to different domains. To improve the practicability of BLP, the model introduces multi-level security labels. To divide privileges in detail, we combine DTE with RBAC. To avoid oversized privileges, we limit the privilege of Dom0.
Infectious disease modeling a hybrid system approach
Liu, Xinzhi
2017-01-01
This volume presents infectious diseases modeled mathematically, taking seasonality and changes in population behavior into account, using a switched and hybrid systems framework. The scope of coverage includes background on mathematical epidemiology, including classical formulations and results; a motivation for seasonal effects and changes in population behavior; an investigation into term-time forced epidemic models with switching parameters; and a detailed account of several different control strategies. The main goal is to study these models theoretically and to establish conditions under which eradication or persistence of the disease is guaranteed. In doing so, the long-term behavior of the models is determined through mathematical techniques from switched systems theory. Numerical simulations are also given to augment and illustrate the theoretical results and to help study the efficacy of the control schemes.
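A term-time forced epidemic model with switching parameters of the kind described can be simulated with a simple Euler scheme; the seasonal contact rates and other parameter values below are illustrative, not taken from the book:

```python
def simulate_switched_sir(beta_seasons, gamma, s0, i0, days, dt=0.1):
    """Euler simulation of an SIR model whose contact rate beta
    switches between seasonal values.

    beta_seasons: list of (duration_in_days, beta) pairs, cycled
    periodically over time.  Returns final (S, I, R) fractions.
    """
    s, i, r = s0, i0, 1.0 - s0 - i0
    schedule = []
    for dur, b in beta_seasons:
        schedule += [b] * int(dur / dt)     # beta value for each step
    for k in range(int(days / dt)):
        beta = schedule[k % len(schedule)]  # switched parameter
        new_inf = beta * s * i              # infection rate
        rec = gamma * i                     # recovery rate
        s -= dt * new_inf
        i += dt * (new_inf - rec)
        r += dt * rec
    return s, i, r
```

With both seasonal contact rates below the recovery rate, the infection dies out, matching the eradication conditions such models are used to establish.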
Mathematical Modeling and Computational Thinking
Sanford, John F.; Naidu, Jaideep T.
2017-01-01
The paper argues that mathematical modeling is the essence of computational thinking. Learning a computer language is a valuable aid in learning logical thinking but of less assistance when learning problem-solving skills. The paper is the third in a series and presents some examples of mathematical modeling using spreadsheets at an advanced…
Hybrid CFD/CAA Modeling for Liftoff Acoustic Predictions
Strutzenberg, Louise L.; Liever, Peter A.
2011-01-01
This paper presents development efforts at the NASA Marshall Space Flight Center to establish a hybrid Computational Fluid Dynamics and Computational Aero-Acoustics (CFD/CAA) simulation system for launch vehicle liftoff acoustics environment analysis. Acoustic prediction engineering tools based on empirical jet acoustic strength and directivity models or scaled historical measurements are of limited value in efforts to proactively design and optimize launch vehicles and launch facility configurations for liftoff acoustics. CFD-based modeling approaches are now able to capture the important details of the vehicle-specific plume flow environment, identify the noise generation sources, and allow assessment of the influence of launch pad geometric details and sound mitigation measures such as water injection. However, CFD methodologies are numerically too dissipative to accurately capture the propagation of the acoustic waves in the large CFD models. The hybrid CFD/CAA approach combines the high-fidelity CFD analysis capable of identifying the acoustic sources with a fast and efficient Boundary Element Method (BEM) that accurately propagates the acoustic field from the source locations. The BEM approach was chosen for its ability to properly account for reflections and scattering of acoustic waves from launch pad structures. The paper will present an overview of the technology components of the CFD/CAA framework and discuss plans for demonstration and validation against test data.
COMPUTATIONAL MODELS FOR SUSTAINABLE DEVELOPMENT
Monendra Grover; Rajesh Kumar; Tapan Kumar Mondal; S. Rajkumar
2011-01-01
Genetic erosion is a serious problem and computational models have been developed to prevent it. The computational modeling in this field not only includes (terrestrial) reserve design, but also decision modeling for related problems such as habitat restoration, marine reserve design, and nonreserve approaches to conservation management. Models have been formulated for evaluating tradeoffs between socioeconomic, biophysical, and spatial criteria in establishing marine reserves. The percolatio...
Warren, Kerryn A; Ritzman, Terrence B; Humphreys, Robyn A; Percival, Christopher J; Hallgrímsson, Benedikt; Ackermann, Rebecca Rogers
2018-03-01
Hybridization occurs in a number of mammalian lineages, including among primate taxa. Analyses of ancient genomes have shown that hybridization between our lineage and other archaic hominins in Eurasia occurred numerous times in the past. However, we still have limited empirical data on what a hybrid skeleton looks like, or how to spot patterns of hybridization among fossils for which there are no genetic data. Here we use experimental mouse models to supplement previous studies of primates. We characterize size and shape variation in the cranium and mandible of three wild-derived inbred mouse strains and their first-generation (F1) hybrids. The three parent taxa in our analysis represent lineages that diverged over approximately the same period as the human/Neanderthal/Denisovan lineages and their hybrids are variably successful in the wild. Comparisons of body size, as quantified by long bone measurements, are also presented to determine whether the identified phenotypic effects of hybridization are localized to the cranium or represent overall body size changes. The results indicate that hybrid cranial and mandibular sizes, as well as limb length, exceed that of the parent taxa in all cases. All three F1 hybrid crosses display similar patterns of size and form variation. These results are generally consistent with earlier studies on primates and other mammals, suggesting that the effects of hybridization may be similar across very different scenarios of hybridization, including different levels of hybrid fitness. This paper serves to supplement previous studies aimed at identifying F1 hybrids in the fossil record and to introduce further research that will explore hybrid morphologies using mice as a proxy for better understanding hybridization in the hominin fossil record. Copyright © 2017 Elsevier Ltd. All rights reserved.
A hybrid modeling approach for option pricing
Hajizadeh, Ehsan; Seifi, Abbas
2011-11-01
The complexity of option pricing has led many researchers to develop sophisticated models for such purposes. The commonly used Black-Scholes model suffers from a number of limitations. One of these is the assumption that the underlying probability distribution is lognormal, which is quite controversial. We propose a couple of hybrid models to reduce these limitations and enhance option pricing ability. The key input to an option pricing model is volatility. In this paper, we use three popular GARCH-type models for estimating volatility. Then, we develop two non-parametric models based on neural networks and neuro-fuzzy networks to price call options on the S&P 500 index. We compare the results with those of the Black-Scholes model and show that both the neural network and neuro-fuzzy network models outperform it. Furthermore, comparing the neural network and neuro-fuzzy approaches, we observe that for at-the-money options the neural network model performs better, while for both in-the-money and out-of-the-money options the neuro-fuzzy model provides better results.
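The Black-Scholes benchmark against which the hybrid models are compared has a closed form; a minimal implementation of the European call price:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(s, k, t, r, sigma):
    """Black-Scholes price of a European call.

    s: spot, k: strike, t: time to maturity (years),
    r: risk-free rate, sigma: volatility -- the key input that the
    paper estimates with GARCH-type models.
    """
    d1 = (log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return s * norm_cdf(d1) - k * exp(-r * t) * norm_cdf(d2)
```

For the standard textbook case (S = K = 100, t = 1, r = 5%, σ = 20%) the price is about 10.45.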
Computer-Aided Modeling Framework
DEFF Research Database (Denmark)
Fedorova, Marina; Sin, Gürkan; Gani, Rafiqul
Models are playing important roles in design and analysis of chemicals based products and the processes that manufacture them. Computer-aided methods and tools have the potential to reduce the number of experiments, which can be expensive and time consuming, and there is a benefit of working...... development and application. The proposed work is a part of the project for development of methods and tools that will allow systematic generation, analysis and solution of models for various objectives. It will use the computer-aided modeling framework that is based on a modeling methodology, which combines....... In this contribution, the concept of template-based modeling is presented and application is highlighted for the specific case of catalytic membrane fixed bed models. The modeling template is integrated in a generic computer-aided modeling framework. Furthermore, modeling templates enable the idea of model reuse...
Fluid and hybrid models for streamers
Bonaventura, Zdeněk
2016-09-01
Streamers are contracted ionizing waves with self-generated field enhancement that propagate into a weakly ionized medium exposed to a high electric field, leaving filamentary trails of plasma behind. The widely used model for studying streamer dynamics is based on drift-diffusion equations for electrons and ions, assuming the local field approximation, coupled with Poisson's equation. For problems where the presence of energetic electrons becomes important, the fluid approach needs to be extended by a particle model, accompanied by a Monte Carlo collision technique that takes care of the motion of these electrons. A combined fluid-particle approach is used to study the influence of surface emission processes on a fast-pulsed dielectric barrier discharge in air at atmospheric pressure. It is found that the fluid-only model predicts substantially faster reignition dynamics than the coupled fluid-particle model. Furthermore, a hybrid model can be created in which the population of electrons is divided in energy space into two distinct groups: (1) low-energy 'bulk' electrons that are treated with the fluid model, and (2) high-energy 'beam' electrons, followed as particles. The hybrid model is then capable not only of dealing with streamer discharges under laboratory conditions, but also allows us to study electron acceleration in the streamer zone of lightning leaders. There, the production of fast electrons from streamers is investigated, since these (runaway) electrons act as seeds for the relativistic runaway electron avalanche (RREA) mechanism, important for high-energy atmospheric physics phenomena. Results suggest that high-energy electrons affect the streamer propagation, namely the velocity, the peak electric field, and thus also the production rate of runaway electrons. This work has been supported by the Czech Science Foundation research project 15-04023S.
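The fluid part of such a model reduces, in 1D, to an explicit update of the electron drift-diffusion equation under the local field approximation; the toy sketch below (periodic grid, centered differences, illustrative parameter names) shows the structure, not a production streamer solver:

```python
import numpy as np

def drift_diffusion_step(n, e_field, dt, dx, mu, d, alpha):
    """One explicit step of dn/dt = -d(v n)/dx + D d2n/dx2 + S on a
    periodic 1D grid, with drift velocity v = -mu*E and an ionization
    source S = alpha*mu*|E|*n (local field approximation).
    """
    v = -mu * e_field                                   # electron drift
    adv = -(np.roll(v * n, -1) - np.roll(v * n, 1)) / (2 * dx)
    dif = d * (np.roll(n, -1) - 2 * n + np.roll(n, 1)) / dx**2
    src = alpha * mu * np.abs(e_field) * n              # impact ionization
    return n + dt * (adv + dif + src)
```

In a full model this update is coupled to Poisson's equation for E at every step, and in the hybrid version the high-energy 'beam' electrons are removed from n and pushed as particles instead.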
Particle modeling of plasmas computational plasma physics
International Nuclear Information System (INIS)
Dawson, J.M.
1991-01-01
Recently, through the development of supercomputers, a powerful new method for exploring plasmas has emerged: computer modeling of plasmas. Such modeling can duplicate many of the complex processes that go on in a plasma and allow scientists to understand what the important processes are. It helps scientists gain an intuition about this complex state of matter. It allows scientists and engineers to explore new ideas on how to use plasma before building costly experiments; it allows them to determine if they are on the right track. It can duplicate the operation of devices and thus reduce the need to build complex and expensive devices for research and development. This is an exciting new endeavor that is in its infancy, but which can play an important role in the scientific and technological competitiveness of the US. There is a wide range of plasma models in use: particle models, fluid models, and hybrid particle-fluid models. These can come in many forms, such as explicit models, implicit models, reduced dimensional models, electrostatic models, magnetostatic models, electromagnetic models, and an almost endless variety of other models. Here the author will only discuss particle models. He will give a few examples of the use of such models, taken from work done by the Plasma Modeling Group at UCLA because he is most familiar with that work. However, this gives only a small view of the wide range of work being done around the US, or for that matter around the world
Modeling of renewable hybrid energy sources
Directory of Open Access Journals (Sweden)
Dumitru Cristian Dragos
2009-12-01
Full Text Available Recent developments and trends in electric power consumption indicate an increasing use of renewable energy. Renewable energy technologies offer the promise of clean, abundant energy gathered from self-renewing resources such as the sun, wind, earth and plants. Virtually all regions of the world have renewable resources of one type or another. From this point of view, studies on renewable energies attract more and more attention. The present paper presents different mathematical models related to different types of renewable energy sources, such as solar energy and wind energy. The validation and adaptation of such models to hybrid systems working in the geographical and meteorological conditions specific to the central part of the Transylvania region is also presented. Conclusions based on the validation of such models are also shown.
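A typical component of such hybrid-system models is the piecewise power curve of a wind turbine; the sketch below uses common illustrative cut-in, rated and cut-out values, not parameters from the paper:

```python
def wind_power(v, v_cut_in=3.0, v_rated=12.0, v_cut_out=25.0,
               p_rated=2000.0):
    """Piecewise wind-turbine power curve (output in kW).

    Below cut-in or above cut-out wind speed the turbine produces
    nothing; between rated and cut-out it produces rated power; in
    between, output is interpolated with the cubic dependence of
    wind power on speed.
    """
    if v < v_cut_in or v >= v_cut_out:
        return 0.0
    if v >= v_rated:
        return p_rated
    # cubic interpolation between cut-in and rated speed
    return p_rated * (v**3 - v_cut_in**3) / (v_rated**3 - v_cut_in**3)
```

Fed with a measured wind-speed time series, such a curve yields the wind component of a hybrid system's energy balance.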
Kalman Filtered Bio Heat Transfer Model Based Self-adaptive Hybrid Magnetic Resonance Thermometry.
Zhang, Yuxin; Chen, Shuo; Deng, Kexin; Chen, Bingyao; Wei, Xing; Yang, Jiafei; Wang, Shi; Ying, Kui
2017-01-01
To develop a self-adaptive and fast thermometry method by combining the original hybrid magnetic resonance thermometry method and the bio heat transfer equation (BHTE) model. The proposed Kalman Filtered Bio Heat Transfer Model Based Self-adaptive Hybrid Magnetic Resonance Thermometry, abbreviated as the KalBHT hybrid method, introduced the BHTE model to synthesize a window on the regularization term of the hybrid algorithm, which leads to a regularization that is self-adaptive both spatially and temporally with the change of temperature. Further, to decrease the sensitivity to the accuracy of the BHTE model, a Kalman filter is utilized to update the window at each iteration time. To investigate the effect of the proposed model, a computer heating simulation, a phantom microwave heating experiment and dynamic in-vivo model validation of the liver and a thoracic tumor were conducted in this study. The heating simulation indicates that the KalBHT hybrid algorithm achieves more accurate results without adjusting λ to a proper value, in comparison to the hybrid algorithm. The results of the phantom heating experiment illustrate that the proposed model is able to follow temperature changes in the presence of motion, and the estimated temperature also shows less noise in the background and surrounding the hot spot. The dynamic in-vivo model validation with heating simulation demonstrates that the proposed model has a higher convergence rate, more robustness to the susceptibility problem surrounding the hot spot, and more accurate temperature estimation. In the healthy liver experiment with heating simulation, the RMSE of the hot spot of the proposed model is reduced to about 50% of the RMSE of the original hybrid model, and the convergence time becomes only about one fifth of that of the hybrid model. The proposed model is able to improve the accuracy of the original hybrid algorithm and accelerate the convergence rate of MR temperature estimation.
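The role the Kalman filter plays here, correcting a model-based temperature prediction with each new measurement, can be illustrated by a scalar predict/update cycle (a toy 1-D sketch, not the paper's formulation):

```python
def kalman_update(x, p, z, a, q, r):
    """One predict/update cycle of a scalar Kalman filter.

    x, p: current state estimate and its variance; z: new measurement;
    a: model transition factor (standing in for a BHTE-based
    prediction); q, r: process and measurement noise variances.
    """
    x_pred = a * x                 # predict with the model
    p_pred = a * a * p + q
    k = p_pred / (p_pred + r)      # Kalman gain
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new
```

When measurement noise r is small the estimate follows the measurement; when r is large it trusts the model prediction, which is the self-adaptive weighting the KalBHT method exploits.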
Hybrid Models of Alternative Current Filter for Hvdc
Directory of Open Access Journals (Sweden)
Ufa Ruslan A.
2017-01-01
Full Text Available Based on a hybrid simulation concept for HVDC, the developed hybrid AC filter models are presented; they provide sufficiently full and adequate modeling of the entire continuous spectrum of quasi-steady-state and transient processes in the filter. The obtained results suggest that the hybrid simulation approach provides a methodically accurate solution, with a guaranteed instrumental error, of the systems of differential equations constituting the mathematical models of HVDC.
Stochastic linear hybrid systems: Modeling, estimation, and application
Seah, Chze Eng
Hybrid systems are dynamical systems which have interacting continuous state and discrete state (or mode). Accurate modeling and state estimation of hybrid systems are important in many applications. We propose a hybrid system model, known as the Stochastic Linear Hybrid System (SLHS), to describe hybrid systems with stochastic linear system dynamics in each mode and stochastic continuous-state-dependent mode transitions. We then develop a hybrid estimation algorithm, called the State-Dependent-Transition Hybrid Estimation (SDTHE) algorithm, to estimate the continuous state and discrete state of the SLHS from noisy measurements. It is shown that the SDTHE algorithm is more accurate or more computationally efficient than existing hybrid estimation algorithms. Next, we develop a performance analysis algorithm to evaluate the performance of the SDTHE algorithm in a given operating scenario. We also investigate sufficient conditions for the stability of the SDTHE algorithm. The proposed SLHS model and SDTHE algorithm are shown to be useful in several applications. In Air Traffic Control (ATC), to facilitate implementations of new efficient operational concepts, accurate modeling and estimation of aircraft trajectories are needed. In ATC, an aircraft's trajectory can be divided into a number of flight modes. Furthermore, as the aircraft is required to follow a given flight plan or clearance, its flight mode transitions are dependent on its continuous state. However, the flight mode transitions are also stochastic due to navigation uncertainties or unknown pilot intents. Thus, we develop an aircraft dynamics model in ATC based on the SLHS. The SDTHE algorithm is then used in aircraft tracking applications to estimate the positions/velocities of aircraft and their flight modes accurately. Next, we develop an aircraft conformance monitoring algorithm to detect any deviations of aircraft trajectories in ATC that might compromise safety. In this application, the SLHS
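The SLHS structure, per-mode linear dynamics plus state-dependent stochastic mode transitions, can be illustrated with a toy two-mode simulation; all numbers below are hypothetical, chosen only to show the structure, not the dissertation's aircraft model:

```python
import random

def simulate_slhs(steps, seed=0):
    """Toy Stochastic Linear Hybrid System with two modes.

    In each mode the continuous state follows a stochastic linear
    recursion x' = a[mode]*x + b[mode] + noise; the probability of a
    discrete mode transition depends on the continuous state x.
    Returns the list of (mode, x) pairs.
    """
    rng = random.Random(seed)
    mode, x = 0, 0.0
    a = [1.01, 0.95]        # per-mode linear dynamics
    b = [0.5, -0.2]
    path = []
    for _ in range(steps):
        x = a[mode] * x + b[mode] + rng.gauss(0.0, 0.05)
        # state-dependent transition: mode 0 switches when x is large
        if mode == 0:
            p_switch = 0.9 if x > 8.0 else 0.0
        else:
            p_switch = 0.02
        if rng.random() < p_switch:
            mode = 1 - mode
        path.append((mode, x))
    return path
```

A hybrid estimator like SDTHE would be given only noisy observations of x and would have to recover both x and the mode sequence.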
Hybrid Modeling Improves Health and Performance Monitoring
2007-01-01
Scientific Monitoring Inc. was awarded a Phase I Small Business Innovation Research (SBIR) project by NASA's Dryden Flight Research Center to create a new, simplified health-monitoring approach for flight vehicles and flight equipment. The project developed a hybrid physical model concept that provided a structured approach to simplifying complex design models for use in health monitoring, allowing the output or performance of the equipment to be compared to what the design models predicted, so that deterioration or impending failure could be detected before there would be an impact on the equipment's operational capability. Based on the original modeling technology, Scientific Monitoring released I-Trend, a commercial health- and performance-monitoring software product named for its intelligent trending, diagnostics, and prognostics capabilities, as part of the company's complete ICEMS (Intelligent Condition-based Equipment Management System) suite of monitoring and advanced alerting software. I-Trend uses the hybrid physical model to better characterize the nature of health or performance alarms that result in "no fault found" false alarms. Additionally, the use of physical principles helps I-Trend identify problems sooner. I-Trend technology is currently in use in several commercial aviation programs, and the U.S. Air Force recently tapped Scientific Monitoring to develop next-generation engine health-management software for monitoring its fleet of jet engines. Scientific Monitoring has continued the original NASA work, this time under a Phase III SBIR contract with a joint NASA-Pratt & Whitney aviation security program on propulsion-controlled aircraft under missile-damaged aircraft conditions.
Analysis of chromosome aberration data by hybrid-scale models
International Nuclear Information System (INIS)
Indrawati, Iwiq; Kumazawa, Shigeru
2000-02-01
This paper presents a new methodology for analyzing data of chromosome aberrations, which is useful to understand the characteristics of dose-response relationships and to construct the calibration curves for the biological dosimetry. The hybrid scale of linear and logarithmic scales brings a particular plotting paper, where the normal section paper, two types of semi-log papers and the log-log paper are continuously connected. The hybrid-hybrid plotting paper may contain nine kinds of linear relationships, and these are conveniently called hybrid scale models. One can systematically select the best-fit model among the nine models by among the conditions for a straight line of data points. A biological interpretation is possible with some hybrid-scale models. In this report, the hybrid scale models were applied to separately reported data on chromosome aberrations in human lymphocytes as well as on chromosome breaks in Tradescantia. The results proved that the proposed models fit the data better than the linear-quadratic model, despite the demerit of the increased number of model parameters. We showed that the hybrid-hybrid model (both variables of dose and response using the hybrid scale) provides the best-fit straight lines to be used as the reliable and readable calibration curves of chromosome aberrations. (author)
Falat, Lukas; Marcek, Dusan; Durisova, Maria
2016-01-01
This paper deals with the application of quantitative soft computing prediction models to the financial area, as reliable and accurate prediction models can be very helpful in the management decision-making process. The authors suggest a new hybrid neural network that combines the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is intended to enhance the outputs of the network using the error part of the original neural network. The authors test the suggested model on high-frequency time series data of USD/CAD and examine its ability to forecast exchange rate values for a horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model against autoregressive models and the standard neural network. They also incorporate a genetic algorithm as an optimization technique for adapting the parameters of the ANN, which is then compared with standard backpropagation and backpropagation combined with the K-means clustering algorithm. Finally, the authors find that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can help eliminate the risk of making a bad decision in the decision-making process.
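The core hybrid idea — an RBF network whose output is corrected by a moving average of its own error series — can be sketched as below. This is a minimal illustration, not the authors' model: the genetic algorithm is replaced by fixed centers, the readout is fit by least squares, and the window length is an assumed parameter:

```python
import numpy as np

def rbf_design(x, centers, width):
    """Gaussian RBF feature matrix for 1-D inputs."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))

def hybrid_rbf_ma(x, y, centers, width, ma_window=3):
    """RBF network fit by least squares, plus a moving-average
    correction built from the network's residual (error) series.
    Returns (hybrid output, plain RBF output)."""
    phi = rbf_design(x, centers, width)
    w, *_ = np.linalg.lstsq(phi, y, rcond=None)
    fitted = phi @ w
    resid = y - fitted
    kernel = np.ones(ma_window) / ma_window
    correction = np.convolve(resid, kernel, mode="same")
    return fitted + correction, fitted
```

On noisy data the smoothed-error correction reduces the in-sample error relative to the plain network, which is the role the paper assigns to the moving-average component.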
Directory of Open Access Journals (Sweden)
Lukas Falat
2016-01-01
Full Text Available This paper deals with the application of quantitative soft computing prediction models to the financial area, as reliable and accurate prediction models can be very helpful in the management decision-making process. The authors suggest a new hybrid neural network that combines the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is intended to enhance the outputs of the network using the error part of the original neural network. The authors test the suggested model on high-frequency time series data of USD/CAD and examine its ability to forecast exchange rate values for a horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model against autoregressive models and the standard neural network. They also incorporate a genetic algorithm as an optimization technique for adapting the parameters of the ANN, which is then compared with standard backpropagation and backpropagation combined with the K-means clustering algorithm. Finally, the authors find that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can help eliminate the risk of making a bad decision in the decision-making process.
Marcek, Dusan; Durisova, Maria
2016-01-01
This paper deals with the application of quantitative soft computing prediction models to the financial area, as reliable and accurate prediction models can be very helpful in the management decision-making process. The authors suggest a new hybrid neural network that combines the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is intended to enhance the outputs of the network using the error part of the original neural network. The authors test the suggested model on high-frequency time series data of USD/CAD and examine its ability to forecast exchange rate values for a horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model against autoregressive models and the standard neural network. They also incorporate a genetic algorithm as an optimization technique for adapting the parameters of the ANN, which is then compared with standard backpropagation and backpropagation combined with the K-means clustering algorithm. Finally, the authors find that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can help eliminate the risk of making a bad decision in the decision-making process. PMID:26977450
Hybrid reduced order modeling for assembly calculations
International Nuclear Information System (INIS)
Bang, Youngsuk; Abdel-Khalik, Hany S.; Jessee, Matthew A.; Mertyurek, Ugur
2015-01-01
Highlights: • Reducing computational cost in engineering calculations. • Reduced order modeling algorithm for multi-physics problems like assembly calculations. • Non-intrusive algorithm with random sampling. • Pattern recognition in the components with high sensitivity and large variation. - Abstract: While the accuracy of assembly calculations has considerably improved due to the increase in computer power, enabling a more refined description of the phase space and the use of more sophisticated numerical algorithms, the computational cost continues to increase, which limits their full effectiveness for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated execution on small computing environments, often available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the use of reduced order modeling for a single-physics code, such as a radiation transport calculation. This manuscript extends those works to coupled code systems as currently employed in assembly calculations. Numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclide transmutation/depletion models representing the components of the coupled code system.
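The non-intrusive reduction idea — run the full model at randomly sampled inputs and capture the dominant input/output relationships — can be sketched with a snapshot SVD. This is a generic illustration under assumed names, not the authors' algorithm; the toy `model` stands in for an expensive coupled calculation:

```python
import numpy as np

def build_reduced_basis(model, samples, rank):
    """Run the full model at sampled inputs, collect output snapshots,
    and keep the dominant left singular vectors as a reduced basis."""
    snapshots = np.column_stack([model(p) for p in samples])
    u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return u[:, :rank]

def project(basis, full_output):
    """Coordinates of a full-model output in the reduced space."""
    return basis.T @ full_output

def reconstruct(basis, coords):
    """Map reduced coordinates back to the full space."""
    return basis @ coords
```

When the model's outputs truly live in a low-dimensional subspace, a small basis reproduces them almost exactly, which is what makes repeated executions cheap for end users.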
Hybrid reduced order modeling for assembly calculations
Energy Technology Data Exchange (ETDEWEB)
Bang, Youngsuk, E-mail: ysbang00@fnctech.com [FNC Technology, Co. Ltd., Yongin-si (Korea, Republic of); Abdel-Khalik, Hany S., E-mail: abdelkhalik@purdue.edu [Purdue University, West Lafayette, IN (United States); Jessee, Matthew A., E-mail: jesseema@ornl.gov [Oak Ridge National Laboratory, Oak Ridge, TN (United States); Mertyurek, Ugur, E-mail: mertyurek@ornl.gov [Oak Ridge National Laboratory, Oak Ridge, TN (United States)
2015-12-15
Highlights: • Reducing computational cost in engineering calculations. • Reduced order modeling algorithm for multi-physics problems like assembly calculations. • Non-intrusive algorithm with random sampling. • Pattern recognition in the components with high sensitivity and large variation. - Abstract: While the accuracy of assembly calculations has considerably improved due to the increase in computer power, enabling a more refined description of the phase space and the use of more sophisticated numerical algorithms, the computational cost continues to increase, which limits their full effectiveness for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated execution on small computing environments, often available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the use of reduced order modeling for a single-physics code, such as a radiation transport calculation. This manuscript extends those works to coupled code systems as currently employed in assembly calculations. Numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclide transmutation/depletion models representing the components of the coupled code system.
Rough – Granular Computing knowledge discovery models
Directory of Open Access Journals (Sweden)
Mohammed M. Eissa
2016-11-01
Full Text Available The medical domain has become one of the most important areas of research owing to the wealth of medical information about the symptoms of diseases and how to distinguish between them to diagnose correctly. Knowledge discovery models play a vital role in the refinement and mining of medical indicators to help medical experts settle treatment decisions. This paper introduces four hybrid Rough – Granular Computing knowledge discovery models based on Rough Sets Theory, Artificial Neural Networks, Genetic Algorithms and Rough Mereology Theory. A comparative analysis of various knowledge discovery models that use different techniques for data pre-processing, reduction, and data mining supports medical experts in extracting the main medical indicators, reducing misdiagnosis rates and improving decision-making for medical diagnosis and treatment. The proposed models utilized two medical datasets: a Coronary Heart Disease dataset and a Hepatitis C Virus dataset. The main purpose of this paper was to explore and evaluate the proposed models based on the Granular Computing methodology for knowledge extraction according to different evaluation criteria for the classification of medical datasets. Another purpose is to enhance the frame of KDD processes for supervised learning using the Granular Computing methodology.
The UF family of reference hybrid phantoms for computational radiation dosimetry
International Nuclear Information System (INIS)
Lee, Choonsik; Lodwick, Daniel; Hurtado, Jorge; Pafundi, Deanna; Williams, Jonathan L; Bolch, Wesley E
2010-01-01
Computational human phantoms are computer models used to obtain dose distributions within the human body exposed to internal or external radiation sources. In addition, they are increasingly used to develop detector efficiencies for in vivo whole-body counters. Two classes of computational human phantoms have been widely utilized for dosimetry calculation: stylized and voxel phantoms, which describe human anatomy through mathematical surface equations and 3D voxel matrices, respectively. Stylized phantoms are flexible in that changes to organ position and shape are possible provided region overlap is avoided, while voxel phantoms are typically fixed to a given patient anatomy, yet can be proportionally scaled to match individuals of larger or smaller stature but of equivalent organ anatomy. Voxel phantoms provide much better anatomical realism than stylized phantoms, which are intrinsically limited by mathematical surface equations. To address the drawbacks of both, hybrid phantoms based on non-uniform rational B-spline (NURBS) surfaces have been introduced, wherein anthropomorphic flexibility and anatomic realism are both preserved. Researchers at the University of Florida have introduced a series of hybrid phantoms representing the ICRP Publication 89 reference newborn, 15-year-old, and adult male and female. In this study, six additional phantoms are added to the UF family of hybrid phantoms: those of the reference 1-year-old, 5-year-old and 10-year-old child. Head and torso CT images of patients whose ages were close to the targeted ages were obtained under approved protocols. Major organs and tissues were segmented from these images using the image processing software 3D-DOCTOR(TM). NURBS and polygon mesh surfaces were then used to model individual organs and tissues after importing the segmented organ models into the 3D NURBS modeling software, Rhinoceros(TM). The phantoms were matched to four reference datasets: (1) standard anthropometric data, (2) reference
Deterministic linear-optics quantum computing based on a hybrid approach
International Nuclear Information System (INIS)
Lee, Seung-Woo; Jeong, Hyunseok
2014-01-01
We suggest a scheme for all-optical quantum computation using hybrid qubits. It enables one to efficiently perform universal linear-optical gate operations in a simple and near-deterministic way using hybrid entanglement as off-line resources.
Deterministic linear-optics quantum computing based on a hybrid approach
Energy Technology Data Exchange (ETDEWEB)
Lee, Seung-Woo; Jeong, Hyunseok [Center for Macroscopic Quantum Control, Department of Physics and Astronomy, Seoul National University, Seoul, 151-742 (Korea, Republic of)
2014-12-04
We suggest a scheme for all-optical quantum computation using hybrid qubits. It enables one to efficiently perform universal linear-optical gate operations in a simple and near-deterministic way using hybrid entanglement as off-line resources.
A hybrid model for electricity spot prices
International Nuclear Information System (INIS)
Anderson, C.L.D.
2004-01-01
Electricity prices were highly regulated prior to the deregulation of the electric power industry. Prices were predictable, allowing generators and wholesalers to calculate their production costs and revenues. With deregulation, electricity has become the most volatile of all commodities. Electricity must be consumed as soon as it is generated, due to the inability to store it in any sufficient quantity. Economic uncertainty exists because the supply of electricity cannot shift as quickly as the demand, which is highly variable. When demand increases quickly, the price must respond; therefore, price spikes occur that are orders of magnitude higher than the base electricity price. This paper presents a robust and realistic model for spot market electricity prices used to manage risk in volatile markets. The model is a hybrid of a top-down, data-driven method commonly used for financial applications, and a bottom-up, system-driven method commonly used in regulated electricity markets. The advantage of the model is that it incorporates primary system drivers and demonstrates their effects on final prices. The four primary modules of the model are: (1) a model for forced outages, (2) a model for maintenance outages, (3) an electrical load model, and (4) a price model which combines the results of the previous three models. The performance of each model was tested. The forced outage model is the first of its kind to simulate the system on an aggregate basis using Weibull distributions. The overall spot price model was calibrated to, and tested with, data from the electricity market in Pennsylvania, New Jersey and Maryland. The model performed well in simulating market prices and adapted readily to changing system conditions and new electricity markets. This study examined the pricing of derivative contracts on electrical power. It also compared a range of portfolio scenarios using a Cash Flow at Risk approach.
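The forced-outage module's use of Weibull distributions can be sketched as below. This is a hedged illustration, not the thesis's aggregate model: unit count, shape, and scale parameters are assumptions, and repaired units returning to service are ignored for brevity:

```python
import numpy as np

def available_units(n_units, shape, scale_hours, horizon_hours, rng):
    """Sample each unit's time to forced outage from a Weibull
    distribution and count the units still on line at each hour."""
    ttf = scale_hours * rng.weibull(shape, size=n_units)  # times to failure
    hours = np.arange(horizon_hours)
    # unit i is available at hour h if its failure time exceeds h
    return (ttf[None, :] > hours[:, None]).sum(axis=1)
```

Feeding such an availability series into a supply curve is one way a system-driven price module can produce the spikes seen when supply tightens against inflexible demand.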
A hybrid model for electricity spot prices
Energy Technology Data Exchange (ETDEWEB)
Anderson, C.L.D.
2004-07-01
Electricity prices were highly regulated prior to the deregulation of the electric power industry. Prices were predictable, allowing generators and wholesalers to calculate their production costs and revenues. With deregulation, electricity has become the most volatile of all commodities. Electricity must be consumed as soon as it is generated, due to the inability to store it in any sufficient quantity. Economic uncertainty exists because the supply of electricity cannot shift as quickly as the demand, which is highly variable. When demand increases quickly, the price must respond; therefore, price spikes occur that are orders of magnitude higher than the base electricity price. This paper presents a robust and realistic model for spot market electricity prices used to manage risk in volatile markets. The model is a hybrid of a top-down, data-driven method commonly used for financial applications, and a bottom-up, system-driven method commonly used in regulated electricity markets. The advantage of the model is that it incorporates primary system drivers and demonstrates their effects on final prices. The four primary modules of the model are: (1) a model for forced outages, (2) a model for maintenance outages, (3) an electrical load model, and (4) a price model which combines the results of the previous three models. The performance of each model was tested. The forced outage model is the first of its kind to simulate the system on an aggregate basis using Weibull distributions. The overall spot price model was calibrated to, and tested with, data from the electricity market in Pennsylvania, New Jersey and Maryland. The model performed well in simulating market prices and adapted readily to changing system conditions and new electricity markets. This study examined the pricing of derivative contracts on electrical power. It also compared a range of portfolio scenarios using a Cash Flow at Risk approach.
A Hybrid Teaching and Learning Model
Juhary, Jowati Binti
This paper aims at analysing the need for a specific teaching and learning model for the National Defence University of Malaysia (NDUM). The main argument concerns whether there are differences between teaching and learning for the academic component versus the military component at the university. It is further argued that, in order to achieve excellence, there should be one teaching and learning culture. Data were collected through interviews with military cadets. It is found that there are variations in teaching and learning strategies for academic courses, in comparison to a dominant teaching and learning style for military courses. Thus, in the interest of delivering quality education and training for students at the university, the paper argues that a hybrid model for teaching and learning may be fundamental to generating one culture of academic and military excellence for the NDUM.
Modelling supervisory controller for hybrid power systems
Energy Technology Data Exchange (ETDEWEB)
Pereira, A; Bindner, H; Lundsager, P [Risoe National Lab., Roskilde (Denmark); Jannerup, O [Technical Univ. of Denmark, Dept. of Automation, Lyngby (Denmark)
1999-03-01
Supervisory controllers are important to achieve optimal operation of hybrid power systems. The performance and economics of such systems depend mainly on the control strategy for switching components on and off. The modular concept described in this paper is an attempt to design standard supervisory controllers that could be used in different applications, such as village power and telecommunication applications. This paper presents some basic aspects of modelling and design of modular supervisory controllers using the object-oriented modelling technique. The functional abstraction hierarchy technique is used to formulate the control requirements and identify the functions of the control system. The modular algorithm is generic and flexible enough to be used with any system configuration and several goals (different applications). The modularity includes accepting modification of the system configuration and goals during operation, with minor or no changes in the supervisory controller. (au)
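A switching strategy of the kind the record describes can be sketched as a small rule set. Everything here is hypothetical — the component names (diesel generator, dump load), state-of-charge thresholds, and rules are illustrative assumptions for a village-power hybrid system, not the paper's controller:

```python
def dispatch(soc, load_kw, renewable_kw, soc_min=0.3, soc_max=0.9):
    """Hypothetical supervisory rule set: start the backup diesel when
    the battery is low and renewables cannot cover the load; divert
    surplus to a dump load when the battery is full."""
    deficit = load_kw - renewable_kw
    if soc <= soc_min and deficit > 0:
        return {"diesel": True, "dump_load": False}
    if soc >= soc_max and deficit < 0:
        return {"diesel": False, "dump_load": True}
    return {"diesel": False, "dump_load": False}
```

Keeping such rules in a self-contained module is one way to swap configurations or goals without touching the rest of the controller, which is the modularity argument made above.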
Heart dosimetry in radiotherapy with hybrid computational phantoms
International Nuclear Information System (INIS)
Moignier, Cyril
2014-01-01
Cardiovascular diseases following radiotherapy are major secondary late effects raising questions among the scientific community, especially regarding the dose-effect relationship and confounding risk factors (chemotherapy, cholesterolemia, age at treatment, blood pressure, ...). Post-radiation coronary diseases are one of the main causes of cardiac morbidity. Some approximations are made when coronary doses due to radiotherapy are estimated, especially regarding the morphology. For retrospective studies with old medical records, only radiographs are usually available, sometimes with contours made with a simulator. For recent medical records, CT scans displaying the anatomy in 3D are used for radiotherapy simulation but do not allow coronary artery visualization due to low resolution and contrast. Currently, coronary doses are barely assessed in clinical practice, and when they are, anatomical prior knowledge is generally used. This thesis proposes an original approach based on hybrid computational phantoms to study coronary artery doses following radiotherapy for left-side breast cancer and Hodgkin lymphoma. During the thesis, a method inserting hybrid computational phantoms in DICOM format into the treatment planning system has been developed and validated. It has been adapted and tested in conditions where only radiographs provide anatomical information, as with old medical records for left-side breast radiotherapy. The method has also been adapted to perform precise dose reconstructions of the coronary arteries for patients treated for a mediastinal Hodgkin lymphoma and diagnosed with coronary stenosis through a coroscanner. A case-control study was carried out, and the risk of coronary stenosis on a coronary artery segment was assessed to be multiplied by 1.049 per additional gray of median dose to the coronary artery segment. For recent medical records, coronary dose uncertainties related to an approach by anatomical prior knowledge
Cardiovascular dosimetry using hybrid computational phantoms after external radiotherapy
International Nuclear Information System (INIS)
Moignier, Alexandra
2014-01-01
Cardiovascular diseases following radiotherapy are major secondary late effects raising questions among the scientific community, especially regarding the dose-effect relationship and confounding risk factors (chemotherapy, cholesterolemia, age at treatment, blood pressure, ...). Post-radiation coronary diseases are one of the main causes of cardiac morbidity. Some approximations are made when coronary doses due to radiotherapy are estimated, especially regarding the morphology. For retrospective studies with old medical records, only radiographs are usually available, sometimes with contours made with a simulator. For recent medical records, CT scans displaying the anatomy in 3D are used for radiotherapy simulation but do not allow coronary artery visualization due to low resolution and contrast. Currently, coronary doses are barely assessed in clinical practice, and when they are, anatomical prior knowledge is generally used. This thesis proposes an original approach based on hybrid computational phantoms to study coronary artery doses following radiotherapy for left-side breast cancer and Hodgkin lymphoma. During the thesis, a method inserting hybrid computational phantoms in DICOM format into the treatment planning system has been developed and validated. It has been adapted and tested in conditions where only radiographs provide anatomical information, as with old medical records for left-side breast radiotherapy. The method has also been adapted to perform precise dose reconstructions of the coronary arteries for patients treated for a mediastinal Hodgkin lymphoma and diagnosed with coronary stenosis through a coro-scanner. A case-control study was carried out, and the risk of coronary stenosis on a coronary artery segment was assessed to be multiplied by 1.049 per additional gray of median dose to the coronary artery segment. For recent medical records, coronary dose uncertainties related to an approach by anatomical prior knowledge
International Nuclear Information System (INIS)
Potter, J.M.
1985-01-01
The mathematical background for a multiport-network-solving program is described. A method for accurately numerically modeling an arbitrary, continuous, multiport transmission line is discussed. A modification to the transmission-line equations to accommodate multiple rf drives is presented. An improved model for the radio-frequency quadrupole (RFQ) accelerator that corrects previous errors is given. This model permits treating the RFQ as a true eight-port network for simplicity in interpreting the field distribution and ensures that all modes propagate at the same velocity in the high-frequency limit. The flexibility of the multiport model is illustrated by simple modifications to otherwise two-dimensional systems that permit modeling them as linear chains of multiport networks.
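Chaining network sections by matrix multiplication, as in the linear chains mentioned above, is easiest to see in the two-port special case (the RFQ itself is treated as an eight-port, which this sketch does not attempt). The ABCD matrix of a lossless transmission-line section is standard; the parameter values below are arbitrary:

```python
import numpy as np

def line_section(beta, z0, length):
    """ABCD (transmission) matrix of a lossless line section with
    phase constant beta and characteristic impedance z0."""
    bl = beta * length
    return np.array([[np.cos(bl), 1j * z0 * np.sin(bl)],
                     [1j * np.sin(bl) / z0, np.cos(bl)]])

def cascade(sections):
    """Chain two-port sections: the overall matrix is the ordered product."""
    m = np.eye(2, dtype=complex)
    for s in sections:
        m = m @ s
    return m
```

A useful sanity check is that two cascaded sections of length l reproduce a single section of length 2l, which is exactly the continuity property a chain model of a continuous line relies on.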
Computer Based Modelling and Simulation
Indian Academy of Sciences (India)
Computer Based ... universities, and later did system analysis, ... personal computers (PC) and low cost software packages and tools. They can serve as useful learning experience through student projects. Models are ... Let us consider a numerical example: to calculate the velocity of a trainer aircraft ...
Bayesian inference for hybrid discrete-continuous stochastic kinetic models
International Nuclear Information System (INIS)
Sherlock, Chris; Golightly, Andrew; Gillespie, Colin S
2014-01-01
We consider the problem of efficiently performing simulation and inference for stochastic kinetic models. Whilst it is possible to work directly with the resulting Markov jump process (MJP), computational cost can be prohibitive for networks of realistic size and complexity. In this paper, we consider an inference scheme based on a novel hybrid simulator that classifies reactions as either ‘fast’ or ‘slow’, with fast reactions evolving as a continuous Markov process whilst the remaining slow reaction occurrences are modelled through an MJP with time-dependent hazards. A linear noise approximation (LNA) of fast reaction dynamics is employed and slow reaction events are captured by exploiting the ability to solve the stochastic differential equation driving the LNA. This simulation procedure is used as a proposal mechanism inside a particle MCMC scheme, thus allowing Bayesian inference for the model parameters. We apply the scheme to a simple application and compare the output with an existing hybrid approach and also a scheme for performing inference for the underlying discrete stochastic model. (paper)
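The fast/slow partition at the heart of such hybrid simulators can be sketched as follows. This is a simplified illustration: the propensity threshold is an assumed classification rule, and the slow-event sampler below uses constant hazards, whereas the paper works with time-dependent hazards driven by the LNA:

```python
import numpy as np

def classify(propensities, threshold):
    """Split reaction indices into 'fast' (treated by a continuous
    approximation) and 'slow' (kept as discrete jump events)."""
    fast = [i for i, a in enumerate(propensities) if a >= threshold]
    slow = [i for i, a in enumerate(propensities) if a < threshold]
    return fast, slow

def next_slow_event(hazards, rng):
    """Time to and index of the next slow reaction, assuming the hazards
    stay constant over the step (a simplification of the paper's scheme)."""
    hazards = np.asarray(hazards, dtype=float)
    total = hazards.sum()
    tau = rng.exponential(1.0 / total)          # exponential waiting time
    j = rng.choice(len(hazards), p=hazards / total)
    return tau, j
```

Between slow events, the fast subsystem would be advanced with the continuous (LNA) approximation instead of simulating every individual fast reaction.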
Computational Modeling of Space Physiology
Lewandowski, Beth E.; Griffin, Devon W.
2016-01-01
The Digital Astronaut Project (DAP), within NASA's Human Research Program, develops and implements computational modeling for use in the mitigation of human health and performance risks associated with long duration spaceflight. Over the past decade, DAP developed models to provide insights into spaceflight-related changes to the central nervous system, cardiovascular system and the musculoskeletal system. Examples of the models and their applications include biomechanical models applied to advanced exercise device development, bone fracture risk quantification for mission planning, accident investigation, bone health standards development, and occupant protection. The International Space Station (ISS), in its role as a testing ground for long duration spaceflight, has been an important platform for obtaining human spaceflight data. DAP has used preflight, in-flight and post-flight data from short and long duration astronauts for computational model development and validation. Examples include preflight and post-flight bone mineral density data, muscle cross-sectional area, and muscle strength measurements. Results from computational modeling supplement space physiology research by informing experimental design. Using these computational models, DAP personnel can easily identify both important factors associated with a phenomenon and areas where data are lacking. This presentation will provide examples of DAP computational models, the data used in model development and validation, and applications of the models.
A Hybrid Tsunami Risk Model for Japan
Haseemkunju, A. V.; Smith, D. F.; Khater, M.; Khemici, O.; Betov, B.; Scott, J.
2014-12-01
Around the margins of the Pacific Ocean, denser oceanic plates slipping under continental plates cause subduction earthquakes generating large tsunami waves. The subducting Pacific and Philippine Sea plates create damaging interplate earthquakes followed by huge tsunami waves. It was a rupture of the Japan Trench subduction zone (JTSZ) and the resultant M9.0 Tohoku-Oki earthquake that caused the unprecedented tsunami along the Pacific coast of Japan on March 11, 2011. EQECAT's Japan Earthquake model is a fully probabilistic model which includes a seismo-tectonic model describing the geometries, magnitudes, and frequencies of all potential earthquake events; a ground motion model; and a tsunami model. Within the much larger set of all modeled earthquake events, fault rupture parameters for about 24000 stochastic and 25 historical tsunamigenic earthquake events are defined to simulate tsunami footprints using the numerical tsunami model COMCOT. A hybrid approach using COMCOT simulated tsunami waves is used to generate inundation footprints, including the impact of tides and flood defenses. Modeled tsunami waves of major historical events are validated against observed data. Modeled tsunami flood depths on 30 m grids together with tsunami vulnerability and financial models are then used to estimate insured loss in Japan from the 2011 tsunami. The primary direct report of damage from the 2011 tsunami is in terms of the number of buildings damaged by municipality in the tsunami affected area. Modeled loss in Japan from the 2011 tsunami is proportional to the number of buildings damaged. A 1000-year return period map of tsunami waves shows high hazard along the west coast of southern Honshu, on the Pacific coast of Shikoku, and on the east coast of Kyushu, primarily associated with major earthquake events on the Nankai Trough subduction zone (NTSZ). The highest tsunami hazard of more than 20m is seen on the Sanriku coast in northern Honshu, associated with the JTSZ.
Wu, Xin; Koslowski, Axel; Thiel, Walter
2012-07-10
In this work, we demonstrate that semiempirical quantum chemical calculations can be accelerated significantly by leveraging the graphics processing unit (GPU) as a coprocessor on a hybrid multicore CPU-GPU computing platform. Semiempirical calculations using the MNDO, AM1, PM3, OM1, OM2, and OM3 model Hamiltonians were systematically profiled for three types of test systems (fullerenes, water clusters, and solvated crambin) to identify the most time-consuming sections of the code. The corresponding routines were ported to the GPU and optimized employing both existing library functions and a GPU kernel that carries out a sequence of noniterative Jacobi transformations during pseudodiagonalization. The overall computation times for single-point energy calculations and geometry optimizations of large molecules were reduced by one order of magnitude for all methods, as compared to runs on a single CPU core.
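The Jacobi transformations mentioned above can be illustrated with the classic cyclic Jacobi method for a symmetric matrix; each rotation zeroes one off-diagonal element. This is only the rotation primitive on the CPU in NumPy — the actual GPU kernel and the pseudodiagonalization used in MNDO-type codes differ in detail:

```python
import numpy as np

def jacobi_eigenvalues(a, n_sweeps=10):
    """Cyclic Jacobi rotations on a symmetric matrix; returns the
    eigenvalues (sorted diagonal after the sweeps)."""
    a = a.astype(float).copy()
    n = a.shape[0]
    for _ in range(n_sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(a[p, q]) < 1e-14:
                    continue
                # rotation angle that zeroes a[p, q]
                theta = 0.5 * np.arctan2(2.0 * a[p, q], a[q, q] - a[p, p])
                c, s = np.cos(theta), np.sin(theta)
                rot = np.eye(n)
                rot[p, p] = rot[q, q] = c
                rot[p, q], rot[q, p] = s, -s
                a = rot.T @ a @ rot
    return np.sort(np.diag(a))
```

Each rotation touches only two rows and two columns, which is what makes sequences of such transformations attractive for fine-grained parallel hardware like a GPU.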
International Nuclear Information System (INIS)
Royston, K.; Haghighat, A.; Yi, C.
2010-01-01
The hybrid deterministic transport code TITAN is being applied to a Single Photon Emission Computed Tomography (SPECT) simulation of a myocardial perfusion study. The TITAN code's hybrid methodology allows the use of a discrete ordinates solver in the phantom region and a characteristics method solver in the collimator region. Currently, we seek to validate the adjoint methodology in TITAN for this application using a SPECT model that has been created in the MCNP5 Monte Carlo code. The TITAN methodology was examined based on the response of a single-voxel detector placed in front of the heart with and without collimation. For the case without collimation, the TITAN response for the single voxel-sized detector had a -9.96% difference relative to the MCNP5 response. To simulate collimation, the adjoint source was specified in directions located within the collimator acceptance angle. For a single collimator hole with a diameter matching the voxel dimension, a difference of -0.22% was observed. Comparisons to groupings of smaller collimator holes of two different sizes resulted in relative differences of 0.60% and 0.12%. The number of adjoint source directions within an acceptance angle was increased and showed no significant change in accuracy. Our results indicate that the hybrid adjoint methodology of TITAN yields accurate solutions more than a factor of two faster than MCNP5. (authors)
Ji, Hongfei; Li, Jie; Lu, Rongrong; Gu, Rong; Cao, Lei; Gong, Xiaoliang
2016-01-01
Electroencephalogram- (EEG-) based brain-computer interface (BCI) systems usually utilize one type of change in the dynamics of brain oscillations for control, such as event-related desynchronization/synchronization (ERD/ERS), steady state visual evoked potential (SSVEP), and P300 evoked potentials. There is a recent trend to detect more than one of these signals in one system to create a hybrid BCI. However, in this case, EEG data were always divided into groups and analyzed by separate processing procedures. As a result, the interactive effects were ignored when different types of BCI tasks were executed simultaneously. In this work, we propose an improved tensor based multiclass multimodal scheme especially for hybrid BCI, in which EEG signals are denoted as multiway tensors, a nonredundant rank-one tensor decomposition model is proposed to obtain nonredundant tensor components, a weighted Fisher criterion is designed to select multimodal discriminative patterns without ignoring the interactive effects, and support vector machine (SVM) is extended to multiclass classification. Experiment results suggest that the proposed scheme can not only identify the different changes in the dynamics of brain oscillations induced by different types of tasks but also capture the interactive effects of simultaneous tasks properly. Therefore, it has great potential use for hybrid BCI.
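A rank-one component of a multiway tensor, the building block of the decomposition mentioned above, can be extracted with alternating power iterations. This is a generic sketch of the idea, not the paper's nonredundant decomposition or its weighted Fisher selection:

```python
import numpy as np

def dominant_rank1(t, n_iter=30):
    """Alternating power iterations for a dominant rank-one component
    lam * (a outer b outer c) of a 3-way tensor t."""
    a = np.ones(t.shape[0])
    b = np.ones(t.shape[1])
    c = np.ones(t.shape[2])
    for _ in range(n_iter):
        a = np.einsum('ijk,j,k->i', t, b, c); a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', t, a, c); b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', t, a, b); c /= np.linalg.norm(c)
    lam = np.einsum('ijk,i,j,k->', t, a, b, c)
    return lam, a, b, c
```

For EEG, the three modes would typically be channels, time, and trials, with the factor vectors serving as the spatial, temporal and trial signatures of one component.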
Hybrid simulation of scatter intensity in industrial cone-beam computed tomography
International Nuclear Information System (INIS)
Thierry, R.; Miceli, A.; Hofmann, J.; Flisch, A.; Sennhauser, U.
2009-01-01
A cone-beam computed tomography (CT) system using a 450 kV X-ray tube has been developed to tackle the three-dimensional imaging of automotive parts in a short acquisition time. Because the probability of detecting scattered photons is high given the energy range and the area of detection, a scattering correction becomes mandatory for generating reliable images with enhanced contrast detectability. In this paper, we present a hybrid simulator for the fast and accurate calculation of the scattering intensity distribution. The full acquisition chain, from the generation of a polyenergetic photon beam through its interaction with the scanned object to the energy deposited in the detector, is simulated. Object phantoms can be spatially described in the form of voxels, mathematical primitives or CAD models. Uncollided radiation is treated with a ray-tracing method and scattered radiation is split into single and multiple scattering. The single scattering is calculated with a deterministic approach accelerated with a forced detection method. The residual noisy signal is subsequently deconvolved with the iterative Richardson-Lucy method. Finally, the multiple scattering is addressed with a coarse Monte Carlo (MC) simulation. The proposed hybrid method has been validated on aluminium phantoms of varying size and object-to-detector distance, and found to be in good agreement with the MC code Geant4. The acceleration achieved by the hybrid method over standard MC on a single projection is approximately three orders of magnitude.
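The iterative Richardson-Lucy deconvolution named in the abstract is a standard multiplicative scheme; below is a minimal one-dimensional sketch in Python (the function name, kernel, and iteration count are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=50, eps=1e-12):
    """Iteratively deconvolve `observed` with point-spread function `psf`.

    Multiplicative updates keep the estimate non-negative, which suits
    photon-count data such as scatter intensity signals.
    """
    estimate = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1]  # adjoint of the blur operator
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / (blurred + eps)  # eps guards against division by zero
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate
```

With an exact, symmetric kernel the iteration progressively re-concentrates a blurred spike at its true location.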
Computational modelling in fluid mechanics
International Nuclear Information System (INIS)
Hauguel, A.
1985-01-01
Modelling the greater part of environmental and industrial flow problems leads to very similar types of equations. The considerable increase in computing capacity over the last ten years has consequently allowed numerical models of growing complexity to be run. The varied group of computer codes presented now complements experimental facilities in fluid mechanics studies. Several codes applied in the nuclear field (reactors, cooling towers, exchangers, plumes...) are presented among others.
A 'simple' hybrid model for power derivatives
International Nuclear Information System (INIS)
Lyle, Matthew R.; Elliott, Robert J.
2009-01-01
This paper presents a method for valuing power derivatives using a supply-demand approach. Our method extends work in the field by incorporating randomness into the base load portion of the supply stack function and equating it with a noisy demand process. We obtain closed form solutions for European option prices written on average spot prices considering two different supply models: a mean-reverting model and a Markov chain model. The results are extensions of the classic Black-Scholes equation. The model provides a relatively simple approach to describe the complicated price behaviour observed in electricity spot markets and also allows for computationally efficient derivatives pricing. (author)
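A mean-reverting supply component of the kind referenced above is commonly modelled as an Ornstein-Uhlenbeck process; the sketch below simulates one with a simple Euler-Maruyama scheme (parameter names and values are illustrative assumptions, not the paper's calibration):

```python
import numpy as np

def simulate_ou(x0, kappa, mu, sigma, dt, n_steps, rng):
    """Euler-Maruyama path of dX = kappa * (mu - X) dt + sigma dW."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for t in range(n_steps):
        dw = np.sqrt(dt) * rng.standard_normal()  # Brownian increment
        x[t + 1] = x[t] + kappa * (mu - x[t]) * dt + sigma * dw
    return x
```

Starting far from the long-run level `mu`, the path decays toward it at rate `kappa` and then fluctuates with stationary standard deviation `sigma / sqrt(2 * kappa)`.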
A muscle model for hybrid muscle activation
Directory of Open Access Journals (Sweden)
Klauer Christian
2015-09-01
To develop model-based control strategies for Functional Electrical Stimulation (FES) in order to support weak voluntary muscle contractions, a hybrid model for describing joint motions induced by concurrent voluntary and FES-induced muscle activation is proposed. It is based on a Hammerstein model – as commonly used in feedback-controlled FES – and exemplarily applied to describe the shoulder abduction joint angle. The main component of a Hammerstein muscle model is usually a static input nonlinearity depending on the stimulation intensity. To additionally incorporate voluntary contributions, we extended the static nonlinearity by a second input describing the intensity of the voluntary contribution, estimated by electromyography (EMG) measurements – even during active FES. An Artificial Neural Network (ANN) is used to describe the static input nonlinearity. The output of the ANN drives a second-order linear dynamical system that describes the combined muscle activation and joint angle dynamics. The tunable parameters are adapted to the individual subject by a system identification approach using previously recorded I/O data. The model has been validated in two healthy subjects, yielding RMS values for the joint angle error of 3.56° and 3.44°, respectively.
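The Hammerstein structure described here, a static nonlinearity feeding linear dynamics, can be sketched as follows; note that a simple tanh saturation stands in for the paper's ANN, and the second-order coefficients are illustrative, not identified from subject data:

```python
import numpy as np

def hammerstein_response(u_stim, u_emg, a, b):
    """Two-input Hammerstein model: static nonlinearity -> 2nd-order filter.

    u_stim: stimulation intensity; u_emg: estimated voluntary intensity.
    """
    # static input nonlinearity (placeholder for the ANN in the paper)
    v = np.tanh(u_stim) + 0.5 * np.tanh(u_emg)
    # second-order linear dynamics in difference-equation form
    y = np.zeros_like(v)
    for k in range(2, len(v)):
        y[k] = a[0] * y[k - 1] + a[1] * y[k - 2] + b[0] * v[k - 1] + b[1] * v[k - 2]
    return y
```

For a constant input the output settles at the DC gain `(b[0] + b[1]) / (1 - a[0] - a[1])` times the nonlinearity's output.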
Chaos Modelling with Computers
Indian Academy of Sciences (India)
Chaos is one of the major scientific discoveries of our times. In fact many scientists ... But there are other natural phenomena that are not predictable though ... characteristics of chaos. ... The position and velocity are all that are needed to determine the motion of a .... a system of equations that modelled the earth's weather ...
Patient-Specific Computational Modeling
Peña, Estefanía
2012-01-01
This book addresses patient-specific modeling. It integrates computational modeling, experimental procedures, imaging, clinical segmentation and mesh generation with the finite element method (FEM) to solve problems in computational biomedicine and bioengineering. Specific areas of interest include cardiovascular problems, ocular and muscular systems and soft tissue modeling. Patient-specific modeling has been the subject of serious research over the last seven years; interest in the area is continually growing, and it is expected to develop further in the near future.
Computer model for ductile fracture
International Nuclear Information System (INIS)
Moran, B.; Reaugh, J. E.
1979-01-01
A computer model is described for predicting ductile fracture initiation and propagation. The computer fracture model is calibrated by simple and notched round-bar tension tests and a precracked compact tension test. The model is used to predict fracture initiation and propagation in a Charpy specimen, and the predictions are compared with experiments. The calibrated model provides a correlation between Charpy V-notch (CVN) fracture energy and any measure of fracture toughness, such as J/sub Ic/. A second, simpler empirical correlation was obtained using the energy to initiate fracture in the Charpy specimen rather than the total CVN energy; the results were compared with the empirical correlation of Rolfe and Novak
Trust Models in Ubiquitous Computing
DEFF Research Database (Denmark)
Nielsen, Mogens; Krukow, Karl; Sassone, Vladimiro
2008-01-01
We recapture some of the arguments for trust-based technologies in ubiquitous computing, followed by a brief survey of some of the models of trust that have been introduced in this respect. Based on this, we argue for the need of more formal and foundational trust models.
Hybrid discrete choice models: Gained insights versus increasing effort
International Nuclear Information System (INIS)
Mariel, Petr; Meyerhoff, Jürgen
2016-01-01
Hybrid choice models expand the standard models in discrete choice modelling by incorporating psychological factors as latent variables. They could therefore provide further insights into choice processes and underlying taste heterogeneity, but the costs of estimating these models often increase significantly. This paper aims at comparing the results from a hybrid choice model and a classical random parameter logit. The point of departure for this analysis is whether researchers and practitioners should add hybrid choice models to the suite of models they routinely estimate. Our comparison reveals, in line with the few prior studies, that hybrid models gain in efficiency from the inclusion of additional information. Which of the two approaches to use, however, depends on the objective of the analysis. If disentangling preference heterogeneity is most important, the hybrid model seems preferable. If the focus is on predictive power, a standard random parameter logit model might be the better choice. Finally, we give recommendations for an adequate use of hybrid choice models based on known principles of elementary scientific inference. - Highlights: • The paper compares performance of a Hybrid Choice Model (HCM) and a classical Random Parameter Logit (RPL) model. • The HCM indeed provides insights regarding preference heterogeneity not gained from the RPL. • The RPL has similar predictive power as the HCM in our data. • The costs of estimating HCM seem justified when learning more about taste heterogeneity is a major study objective.
A hybrid method for the parallel computation of Green's functions
International Nuclear Information System (INIS)
Petersen, Dan Erik; Li Song; Stokbro, Kurt; Sorensen, Hans Henrik B.; Hansen, Per Christian; Skelboe, Stig; Darve, Eric
2009-01-01
Quantum transport models for nanodevices using the non-equilibrium Green's function method require the repeated calculation of the block tridiagonal part of the Green's and lesser Green's function matrices. This problem is related to the calculation of the inverse of a sparse matrix. Because of the large number of times this calculation needs to be performed, this is computationally very expensive even on supercomputers. The classical approach is based on recurrence formulas which cannot be efficiently parallelized. This practically prevents the solution of large problems with hundreds of thousands of atoms. We propose new recurrences for a general class of sparse matrices to calculate Green's and lesser Green's function matrices which extend formulas derived by Takahashi and others. We show that these recurrences may lead to a dramatically reduced computational cost because they only require computing a small number of entries of the inverse matrix. Then, we propose a parallelization strategy for block tridiagonal matrices which involves a combination of Schur complement calculations and cyclic reduction. It achieves good scalability even on problems of modest size.
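The classical recurrence approach that the authors improve upon is easiest to see in the scalar case (block size one): the diagonal of the inverse of a symmetric tridiagonal matrix follows from one left-to-right and one right-to-left Schur-complement sweep. The sketch below is this simplified scalar version, not the block or parallel algorithm of the paper:

```python
import numpy as np

def tridiag_inverse_diagonal(d, e):
    """Diagonal of inv(A) for symmetric tridiagonal A with diagonal d and
    off-diagonal e, computed in O(n) via two Schur-complement sweeps."""
    n = len(d)
    left = np.empty(n)               # pivots of top-left elimination
    left[0] = d[0]
    for i in range(1, n):
        left[i] = d[i] - e[i - 1] ** 2 / left[i - 1]
    right = np.empty(n)              # pivots of bottom-right elimination
    right[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        right[i] = d[i] - e[i] ** 2 / right[i + 1]
    # combining both sweeps yields each diagonal entry of the inverse
    return 1.0 / (left + right - d)
```

Each sweep is inherently sequential, which is exactly the obstacle to parallelisation that the paper's Schur-complement and cyclic-reduction strategy addresses.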
Hybrid computing using a neural network with dynamic external memory.
Graves, Alex; Wayne, Greg; Reynolds, Malcolm; Harley, Tim; Danihelka, Ivo; Grabska-Barwińska, Agnieszka; Colmenarejo, Sergio Gómez; Grefenstette, Edward; Ramalho, Tiago; Agapiou, John; Badia, Adrià Puigdomènech; Hermann, Karl Moritz; Zwols, Yori; Ostrovski, Georg; Cain, Adam; King, Helen; Summerfield, Christopher; Blunsom, Phil; Kavukcuoglu, Koray; Hassabis, Demis
2016-10-27
Artificial neural networks are remarkably adept at sensory processing, sequence learning and reinforcement learning, but are limited in their ability to represent variables and data structures and to store data over long timescales, owing to the lack of an external memory. Here we introduce a machine learning model called a differentiable neural computer (DNC), which consists of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer. Like a conventional computer, it can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data. When trained with supervised learning, we demonstrate that a DNC can successfully answer synthetic questions designed to emulate reasoning and inference problems in natural language. We show that it can learn tasks such as finding the shortest path between specified points and inferring the missing links in randomly generated graphs, and then generalize these tasks to specific graphs such as transport networks and family trees. When trained with reinforcement learning, a DNC can complete a moving blocks puzzle in which changing goals are specified by sequences of symbols. Taken together, our results demonstrate that DNCs have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read-write memory.
Hybrid reduced order modeling for assembly calculations
Energy Technology Data Exchange (ETDEWEB)
Bang, Y.; Abdel-Khalik, H. S. [North Carolina State University, Raleigh, NC (United States); Jessee, M. A.; Mertyurek, U. [Oak Ridge National Laboratory, Oak Ridge, TN (United States)
2013-07-01
While the accuracy of assembly calculations has considerably improved due to the increase in computer power, enabling a more refined description of the phase space and the use of more sophisticated numerical algorithms, the computational cost continues to increase, which limits their full effectiveness for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated execution in small computing environments, often the ones available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the use of reduced order modeling for a single-physics code, such as a radiation transport calculation. This manuscript extends those works to coupled code systems as currently employed in assembly calculations. Numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclide transmutation/depletion models representing the components of the coupled code system. (authors)
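A common way to capture "the most dominant underlying relationships" is a snapshot-SVD basis; the following generic sketch illustrates the idea and is not the specific reduction operator used with SCALE in the paper:

```python
import numpy as np

def build_rom_basis(snapshots, rank):
    """Return the `rank` dominant left singular vectors of a snapshot
    matrix (n_dof x n_samples) collected from full-model runs."""
    u, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return u[:, :rank]

def project(full_state, basis):
    """Map a full-order state to reduced coordinates."""
    return basis.T @ full_state

def reconstruct(reduced_state, basis):
    """Lift reduced coordinates back to the full space."""
    return basis @ reduced_state
```

States lying in the span of the snapshots are reproduced exactly; the payoff is that subsequent executions work with `rank` unknowns instead of `n_dof`.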
Hyde, Damon; Schulz, Ralf; Brooks, Dana; Miller, Eric; Ntziachristos, Vasilis
2009-04-01
Hybrid imaging systems combining x-ray computed tomography (CT) and fluorescence tomography can improve fluorescence imaging performance by incorporating anatomical x-ray CT information into the optical inversion problem. While the use of image priors has been investigated in the past, little is known about the optimal use of forward photon propagation models in hybrid optical systems. In this paper, we explore the impact on reconstruction accuracy of the use of propagation models of varying complexity, specifically in the context of these hybrid imaging systems where significant structural information is known a priori. Our results demonstrate that the use of generically known parameters provides near optimal performance, even when parameter mismatch remains.
Trust models in ubiquitous computing.
Krukow, Karl; Nielsen, Mogens; Sassone, Vladimiro
2008-10-28
We recapture some of the arguments for trust-based technologies in ubiquitous computing, followed by a brief survey of some of the models of trust that have been introduced in this respect. Based on this, we argue for the need of more formal and foundational trust models.
Ch. 33 Modeling: Computational Thermodynamics
International Nuclear Information System (INIS)
Besmann, Theodore M.
2012-01-01
This chapter considers methods and techniques for computational modeling for nuclear materials with a focus on fuels. The basic concepts for chemical thermodynamics are described and various current models for complex crystalline and liquid phases are illustrated. Also included are descriptions of available databases for use in chemical thermodynamic studies and commercial codes for performing complex equilibrium calculations.
Computer Based Modelling and Simulation
Indian Academy of Sciences (India)
Computer Based Modelling and Simulation - Modelling Deterministic Systems. N K Srinivasan. General Article, Resonance – Journal of Science Education, Volume 6, Issue 3, March 2001, pp 46-54.
A hybrid hydrostatic and non-hydrostatic numerical model for shallow flow simulations
Zhang, Jingxin; Liang, Dongfang; Liu, Hua
2018-05-01
Hydrodynamics of geophysical flows in oceanic shelves, estuaries, and rivers are often studied by solving shallow water model equations. Although hydrostatic models are accurate and cost-efficient for many natural flows, there are situations where the hydrostatic assumption is invalid, whereby a fully hydrodynamic model is necessary to increase simulation accuracy. There is growing interest in reducing the computational cost of non-hydrostatic pressure models to broaden their applicability to large-scale flows with complex geometries. This study describes a hybrid hydrostatic and non-hydrostatic model to increase the efficiency of simulating shallow water flows. The basic numerical model is a three-dimensional hydrostatic model solved by the finite volume method (FVM) applied to unstructured grids. Herein, a second-order total variation diminishing (TVD) scheme is adopted. Using a predictor-corrector method to calculate the non-hydrostatic pressure, we extended the hydrostatic model to a fully hydrodynamic model. By localising the computational domain in the corrector step for non-hydrostatic pressure calculations, a hybrid model was developed. No special treatment of mode switching was required, and the developed numerical codes were highly efficient and robust. The hybrid model is applicable to the simulation of shallow flows when non-hydrostatic pressure is predominant only in a local domain; beyond the non-hydrostatic domain, the hydrostatic model remains accurate. The applicability of the hybrid method was validated using several case studies.
Modeling integrated cellular machinery using hybrid Petri-Boolean networks.
Directory of Open Access Journals (Sweden)
Natalie Berestovsky
The behavior and phenotypic changes of cells are governed by a cellular circuitry that represents a set of biochemical reactions. Based on biological functions, this circuitry is divided into three types of networks, each encoding for a major biological process: signal transduction, transcription regulation, and metabolism. This division has generally made it possible to tame the computational complexity of dealing with the entire system, allowed for using modeling techniques that are specific to each of the components, and achieved separation of the different time scales at which reactions in each of the three networks occur. Nonetheless, with this division comes loss of information and power needed to elucidate certain cellular phenomena. Within the cell, these three types of networks work in tandem, and each produces signals and/or substances that are used by the others to process information and operate normally. Therefore, computational techniques for modeling integrated cellular machinery are needed. In this work, we propose an integrated hybrid model (IHM) that combines Petri nets and Boolean networks to model integrated cellular networks. Coupled with a stochastic simulation mechanism, the model simulates the dynamics of the integrated network, and can be perturbed to generate testable hypotheses. Our model is qualitative and is mostly built upon knowledge from the literature and requires fine-tuning of very few parameters. We validated our model on two systems: the transcriptional regulation of glucose metabolism in human cells, and cellular osmoregulation in S. cerevisiae. The model produced results that are in very good agreement with experimental data, and produces valid hypotheses. The abstract nature of our model and the ease of its construction makes it a very good candidate for modeling integrated networks from qualitative data. The results it produces can guide the practitioner to zoom into components and interconnections and investigate them
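The coupling idea, Boolean regulatory states gating Petri-net reaction transitions, can be sketched in a few lines; the places, transitions, and gate used below are invented toy examples, not the IHM's validated networks:

```python
def fire_transitions(marking, transitions, gate_state):
    """One synchronous step of a toy hybrid Petri/Boolean model.

    marking: dict place -> token count.
    transitions: dict name -> (inputs, outputs, gated), where inputs and
    outputs map places to arc weights.
    gate_state: dict name -> bool from the Boolean regulatory layer.
    """
    for name, (inputs, outputs, gated) in transitions.items():
        if gated and not gate_state.get(name, True):
            continue  # switched off by the Boolean regulatory layer
        if all(marking[p] >= w for p, w in inputs.items()):
            for p, w in inputs.items():
                marking[p] -= w   # consume input tokens
            for p, w in outputs.items():
                marking[p] += w   # produce output tokens
    return marking
```

A toy "glycolysis" transition consuming a glucose token and producing ATP tokens, for instance, fires only while its Boolean gate is on.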
Probabilistic modelling and analysis of stand-alone hybrid power systems
International Nuclear Information System (INIS)
Lujano-Rojas, Juan M.; Dufo-López, Rodolfo; Bernal-Agustín, José L.
2013-01-01
As a part of the Hybrid Intelligent Algorithm, a model based on an ANN (artificial neural network) has been proposed in this paper to represent hybrid system behaviour considering the uncertainty related to wind speed and solar radiation, battery bank lifetime, and fuel prices. The Hybrid Intelligent Algorithm suggests a combination of probabilistic analysis based on a Monte Carlo simulation approach and artificial neural network training embedded in a genetic algorithm optimisation model. The installation of a typical hybrid system was analysed. Probabilistic analysis was used to generate an input–output dataset of 519 samples that was later used to train the ANNs to reduce the computational effort required. The generalisation ability of the ANNs was measured in terms of RMSE (Root Mean Square Error), MBE (Mean Bias Error), MAE (Mean Absolute Error), and R-squared estimators using another data group of 200 samples. The results obtained from the estimation of the expected energy not supplied, the probability of a determined reliability level, and the estimation of the expected value of net present cost show that the presented model is able to represent the main characteristics of a typical hybrid power system under uncertain operating conditions. - Highlights: • This paper presents a probabilistic model for a stand-alone hybrid power system. • The model considers the main sources of uncertainty related to renewable resources. • The Hybrid Intelligent Algorithm has been applied to represent hybrid system behaviour. • The installation of a typical hybrid system was analysed. • The results obtained from the case study validate the presented model
Computer Modelling of Dynamic Processes
Directory of Open Access Journals (Sweden)
B. Rybakin
2000-10-01
Results of numerical modeling of dynamic problems are summarized in the article. These problems are characteristic of various areas of human activity, in particular of problem solving in ecology. The following problems are considered in the present work: computer modeling of dynamic effects on elastic-plastic bodies, calculation and determination of performances of gas streams in gas cleaning equipment, and modeling of biogas formation processes.
Computational models of complex systems
Dabbaghian, Vahid
2014-01-01
Computational and mathematical models provide us with the opportunities to investigate the complexities of real world problems. They allow us to apply our best analytical methods to define problems in a clearly mathematical manner and exhaustively test our solutions before committing expensive resources. This is made possible by assuming parameter(s) in a bounded environment, allowing for controllable experimentation, not always possible in live scenarios. For example, simulation of computational models allows the testing of theories in a manner that is both fundamentally deductive and experimental in nature. The main ingredients for such research ideas come from multiple disciplines and the importance of interdisciplinary research is well recognized by the scientific community. This book provides a window to the novel endeavours of the research communities to present their works by highlighting the value of computational modelling as a research tool when investigating complex systems. We hope that the reader...
DEFF Research Database (Denmark)
Tsakonas, Athanasios; Dounias, Georgios; Jantzen, Jan
2001-01-01
The paper suggests the combined use of different computational intelligence (CI) techniques in a hybrid scheme, as an effective approach to medical diagnosis. Getting to know the advantages and disadvantages of each computational intelligence technique in the recent years, the time has come...
Recent developments on the UrQMD hybrid model
Energy Technology Data Exchange (ETDEWEB)
Steinheimer, J., E-mail: steinheimer@th.physik.uni-frankfurt.de; Nahrgang, M., E-mail: nahrgang@th.physik.uni-frankfurt.de; Gerhard, J., E-mail: jochen.gerhard@compeng.uni-frankfurt.de; Schramm, S., E-mail: schramm@fias.uni-frankfurt.de; Bleicher, M., E-mail: bleicher@fias.uni-frankfurt.de [Frankfurt Institute for Advanced Studies (FIAS) (Germany)
2012-06-15
We present recent results from the UrQMD hybrid approach investigating the influence of a deconfinement phase transition on the dynamics of hot and dense nuclear matter. In the hydrodynamic stage an equation of state that incorporates a critical end-point (CEP) in line with lattice data is used. The equation of state describes chiral restoration as well as the deconfinement phase transition. We compare the results from this new equation of state to results obtained by applying a hadron resonance gas equation of state, focusing on bulk observables. Furthermore we will discuss future improvements of the hydrodynamic model. This includes the formulation of chiral fluid dynamics to be able to study the effects of a chiral critical point as well as considerable improvements in terms of computational time which would open up possibilities for observables that require high statistics.
Hybrid continuum-coarse-grained modeling of erythrocytes
Lyu, Jinming; Chen, Paul G.; Boedec, Gwenn; Leonetti, Marc; Jaeger, Marc
2018-06-01
The red blood cell (RBC) membrane is a composite structure, consisting of a phospholipid bilayer and an underlying membrane-associated cytoskeleton. Both continuum and particle-based coarse-grained RBC models make use of a set of vertices connected by edges to represent the RBC membrane, which can be seen as a triangular surface mesh for the former and a spring network for the latter. Here, we present a modeling approach combining an existing continuum vesicle model with a coarse-grained model for the cytoskeleton. Compared to other two-component approaches, our method relies on only one mesh, representing the cytoskeleton, whose velocity in the tangential direction of the membrane may be different from that of the lipid bilayer. The finitely extensible nonlinear elastic (FENE) spring force law in combination with a repulsive force defined as a power function (POW), called FENE-POW, is used to describe the elastic properties of the RBC membrane. The mechanical interaction between the lipid bilayer and the cytoskeleton is explicitly computed and incorporated into the vesicle model. Our model includes the fundamental mechanical properties of the RBC membrane, namely fluidity and bending rigidity of the lipid bilayer, and shear elasticity of the cytoskeleton while maintaining surface-area and volume conservation constraint. We present three simulation examples to demonstrate the effectiveness of this hybrid continuum-coarse-grained model for the study of RBCs in fluid flows.
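The FENE-POW force law combines a finitely extensible attractive term with a power-law repulsion; a minimal sketch of the force magnitude follows (the coefficients are illustrative, and the sign convention may differ from the paper's):

```python
def fene_pow_force(r, k_fene, r_max, k_pow, m):
    """Signed spring force at extension r: positive pushes the ends apart.

    The FENE term diverges as r -> r_max (finite extensibility) and the
    POW term diverges as r -> 0, so an equilibrium length lies in between.
    """
    attractive = -k_fene * r / (1.0 - (r / r_max) ** 2)
    repulsive = k_pow / r ** m
    return attractive + repulsive
```

Because the force is repulsive at short extensions and attractive near the maximum extension, each spring in the cytoskeletal network settles at a finite rest length without extra constraints.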
Improving head and neck CTA with hybrid and model-based iterative reconstruction techniques
Niesten, J. M.; van der Schaaf, I. C.; Vos, P. C.; Willemink, MJ; Velthuis, B. K.
2015-01-01
AIM: To compare image quality of head and neck computed tomography angiography (CTA) reconstructed with filtered back projection (FBP), hybrid iterative reconstruction (HIR) and model-based iterative reconstruction (MIR) algorithms. MATERIALS AND METHODS: The raw data of 34 studies were
Model predictive control of hybrid systems : stability and robustness
Lazar, M.
2006-01-01
This thesis considers the stabilization and the robust stabilization of certain classes of hybrid systems using model predictive control. Hybrid systems represent a broad class of dynamical systems in which discrete behavior (usually described by a finite state machine) and continuous behavior
Transient Model of Hybrid Concentrated Photovoltaic with Thermoelectric Generator
DEFF Research Database (Denmark)
Mahmoudi Nezhad, Sajjad; Qing, Shaowei; Rezaniakolaei, Alireza
2017-01-01
Transient performance of a concentrated photovoltaic thermoelectric (CPV-TEG) hybrid system is modeled and investigated. A heat sink with water, as the working fluid has been implemented as the cold reservoir of the hybrid system to harvest the heat loss from CPV cell and to increase the efficiency...
Phoneme-based speech segmentation using hybrid soft computing framework
Sarma, Mousmita
2014-01-01
The book discusses intelligent system design using soft computing and similar systems and their interdisciplinary applications. It also focuses on the recent trends to use soft computing as a versatile tool for designing a host of decision support systems.
Climate Modeling Computing Needs Assessment
Petraska, K. E.; McCabe, J. D.
2011-12-01
This paper discusses early findings of an assessment of computing needs for NASA science, engineering and flight communities. The purpose of this assessment is to document a comprehensive set of computing needs that will allow us to better evaluate whether our computing assets are adequately structured to meet evolving demand. The early results are interesting, already pointing out improvements we can make today to get more out of the computing capacity we have, as well as potential game changing innovations for the future in how we apply information technology to science computing. Our objective is to learn how to leverage our resources in the best way possible to do more science for less money. Our approach in this assessment is threefold: Development of use case studies for science workflows; Creating a taxonomy and structure for describing science computing requirements; and characterizing agency computing, analysis, and visualization resources. As projects evolve, science data sets increase in a number of ways: in size, scope, timelines, complexity, and fidelity. Generating, processing, moving, and analyzing these data sets places distinct and discernable requirements on underlying computing, analysis, storage, and visualization systems. The initial focus group for this assessment is the Earth Science modeling community within NASA's Science Mission Directorate (SMD). As the assessment evolves, this focus will expand to other science communities across the agency. We will discuss our use cases, our framework for requirements and our characterizations, as well as our interview process, what we learned and how we plan to improve our materials after using them in the first round of interviews in the Earth Science Modeling community. We will describe our plans for how to expand this assessment, first into the Earth Science data analysis and remote sensing communities, and then throughout the full community of science, engineering and flight at NASA.
Computer Profiling Based Model for Investigation
Neeraj Choudhary; Nikhil Kumar Singh; Parmalik Singh
2011-01-01
Computer profiling is used in computer forensic analysis. This paper proposes and elaborates a novel model for use in computer profiling, the computer profiling object model. The computer profiling object model is an information model which represents a computer as a set of objects with various attributes and inter-relationships. Together these provide the information necessary for a human investigator or an automated reasoning engine to make judgments as to the probable usage and evidentiary value of a comp...
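The object model summarized above, a computer represented as objects with attributes and inter-relationships, can be pictured as a small class sketch. The following is illustrative only; the class and method names (`ProfileObject`, `ComputerProfile`, `neighbours`) are our own assumptions, not taken from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class ProfileObject:
    """One artefact on the examined machine (a file, user account, application...)."""
    kind: str
    name: str
    attributes: dict = field(default_factory=dict)
    related: list = field(default_factory=list)  # names of related objects

class ComputerProfile:
    """Relates objects so an investigator (or a reasoning engine) can query links."""
    def __init__(self):
        self.objects = {}

    def add(self, obj):
        self.objects[obj.name] = obj

    def relate(self, a, b):
        # record a symmetric inter-relationship between two objects
        self.objects[a].related.append(b)
        self.objects[b].related.append(a)

    def neighbours(self, name):
        return sorted(self.objects[name].related)
```

A query such as `neighbours("alice")` then surfaces which artefacts are linked to a given account, the kind of judgment support the model is meant to provide.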
Optimization of ultrasonic array inspections using an efficient hybrid model and real crack shapes
Energy Technology Data Exchange (ETDEWEB)
Felice, Maria V., E-mail: maria.felice@bristol.ac.uk [Department of Mechanical Engineering, University of Bristol, Bristol, U.K. and NDE Laboratory, Rolls-Royce plc., Bristol (United Kingdom); Velichko, Alexander, E-mail: p.wilcox@bristol.ac.uk; Wilcox, Paul D., E-mail: p.wilcox@bristol.ac.uk [Department of Mechanical Engineering, University of Bristol, Bristol (United Kingdom); Barden, Tim; Dunhill, Tony [NDE Laboratory, Rolls-Royce plc., Bristol (United Kingdom)
2015-03-31
Models which simulate the interaction of ultrasound with cracks can be used to optimize ultrasonic array inspections, but this approach can be time-consuming. To overcome this issue an efficient hybrid model is implemented which includes a finite element method that requires only a single layer of elements around the crack shape. Scattering Matrices are used to capture the scattering behavior of the individual cracks and a discussion on the angular degrees of freedom of elastodynamic scatterers is included. Real crack shapes are obtained from X-ray Computed Tomography images of cracked parts and these shapes are inputted into the hybrid model. The effect of using real crack shapes instead of straight notch shapes is demonstrated. An array optimization methodology which incorporates the hybrid model, an approximate single-scattering relative noise model and the real crack shapes is then described.
Reducing the Digital Divide among Children Who Received Desktop or Hybrid Computers for the Home
Directory of Open Access Journals (Sweden)
Gila Cohen Zilka
2016-06-01
Full Text Available Researchers and policy makers have been exploring ways to reduce the digital divide. Parameters commonly used to examine the digital divide worldwide, as well as in this study, are: (a) the digital divide in the accessibility and mobility of the ICT infrastructure and of the content infrastructure (e.g., sites used in school); and (b) the digital divide in literacy skills. In the present study we examined the degree of effectiveness of receiving a desktop or hybrid computer for the home in reducing the digital divide among children of low socio-economic status aged 8-12 from various localities across Israel. The sample consisted of 1,248 respondents assessed in two measurements. As part of the mixed-method study, 128 children were also interviewed. Findings indicate that after the children received desktop or hybrid computers, changes occurred in their frequency of access, mobility, and computer literacy. Differences were found between the groups: hybrid computers reduce disparities and promote work with the computer and surfing the Internet more than desktop computers do. Narrowing the digital divide for this age group has many implications for the acquisition of skills and study habits and, consequently, for the realization of individual potential. The children spoke about self-improvement as a result of exposure to the digital environment, and about a sense of empowerment and an improved standing in the social fabric. Many children expressed a desire to continue their education and expand their knowledge of computer applications, software, games, and more. Therefore, if there is no computer in the home and it is necessary to decide between a desktop and a hybrid computer, the hybrid computer is preferable.
A hybrid method for the computation of quasi-3D seismograms.
Masson, Yder; Romanowicz, Barbara
2013-04-01
The development of powerful computer clusters and efficient numerical computation methods, such as the Spectral Element Method (SEM), has made possible the computation of seismic wave propagation in a heterogeneous 3D Earth. However, the cost of these computations is still problematic for global-scale tomography, which requires hundreds of such simulations. Part of the ongoing research effort is dedicated to the development of faster modeling methods based on the spectral element method. Capdeville et al. (2002) proposed to couple SEM simulations with normal-mode calculations (C-SEM). Nissen-Meyer et al. (2007) used 2D SEM simulations to compute 3D seismograms in a 1D earth model. Thanks to these developments, and for the first time, Lekic et al. (2011) developed a 3D global model of the upper mantle using SEM simulations. At the local and continental scale, adjoint tomography, which uses many SEM simulations, can be implemented on current computers (Tape, Liu et al. 2009). Due to their smaller size, these models offer higher resolution. They provide us with images of the crust and the upper part of the mantle. In an attempt to teleport such local adjoint tomographic inversions into the deep earth, we are developing a hybrid method where SEM computations are limited to a region of interest within the earth. That region can have an arbitrary shape and size. Outside this region, the seismic wavefield is extrapolated to obtain synthetic data at the Earth's surface. A key feature of the method is the use of a time-reversal mirror to inject the wavefield induced by a distant seismic source into the region of interest (Robertsson and Chapman 2000). We compute synthetic seismograms as follows: inside the region of interest, we use the regional spectral element software RegSEM to compute wave propagation in 3D; outside this region, the wavefield is extrapolated to the surface by convolution with the Green's functions from the mirror to the seismic stations. For now, these
Getting computer models to communicate
International Nuclear Information System (INIS)
Caremoli, Ch.; Erhard, P.
1999-01-01
Today's computers have the processing power to deliver detailed and global simulations of complex industrial processes such as the operation of a nuclear reactor core. So should we be producing new, global numerical models to take full advantage of this new-found power? If so, it would be a long-term job. There is, however, another solution: to couple the existing validated numerical models together so that they work as one. (authors)
Wu, Guang; Dong, Zuomin
2017-09-01
Hybrid electric vehicles are widely accepted as a promising short- to mid-term technical solution due to noticeably improved efficiency and lower emissions at competitive costs. In recent years, various hybrid powertrain systems have been proposed and implemented based on different types of conventional transmission. Power-split systems, including the Toyota Hybrid System and the Ford Hybrid System, are well-known examples. However, their relatively low torque capacity, and the drive for alternative and more advanced designs, encouraged other innovative hybrid system designs. In this work, a new type of hybrid powertrain system based on a hybridized automated manual transmission (HAMT) is proposed. By using the concept of a torque gap filler (TGF), this new hybrid powertrain type has the potential to overcome the issue of the torque gap during gearshifts. The HAMT design (patent pending) is described in detail, from the gear layout and the design of gear ratios (EV mode and HEV mode) to the torque paths at different gears. As an analytical tool, a multi-body model of a vehicle equipped with this HAMT was built to analyze powertrain dynamics in various steady and transient modes. A gearshift was decomposed and analyzed based on basic modes. Furthermore, a Simulink-SimDriveline hybrid vehicle model was built for the new transmission, driveline and vehicle modules. A control strategy has also been built to harmoniously coordinate the different powertrain components to realize the TGF function. A vehicle launch simulation test was completed at 30% accelerator pedal position to reveal details during gearshift. Simulation results showed that this HAMT can eliminate most of the torque gap that has been a persistent issue with traditional AMTs, improving both drivability and performance. This work demonstrated a new type of transmission that features high torque capacity, high efficiency and improved drivability.
Hybrid Computational Simulation and Study of Terahertz Pulsed Photoconductive Antennas
Emadi, R.; Barani, N.; Safian, R.; Nezhad, A. Zeidaabadi
2016-11-01
A photoconductive antenna (PCA) has been numerically investigated in the terahertz (THz) frequency band based on a hybrid simulation method. This hybrid method utilizes an optoelectronic solver, Silvaco TCAD, and a full-wave electromagnetic solver, CST. The optoelectronic solver is used to find the accurate THz photocurrent by considering realistic material parameters. The performance of photoconductive antennas and the temporal behavior of the excited photocurrent are investigated for various active-region geometries, such as bare-gap electrodes, interdigitated electrodes, and tip-to-tip rectangular electrodes. Moreover, the position of the laser illumination on the substrate, the substrate carrier lifetime, and the diffusion photocurrent associated with the carrier temperature are investigated to achieve an efficient and accurate photocurrent. Finally, using the full-wave electromagnetic solver and the photocurrent obtained from the optoelectronic solver, the electromagnetic radiation of the antenna and its associated detected THz signal are calculated and compared with a measured reference for verification.
Computational social dynamic modeling of group recruitment.
Energy Technology Data Exchange (ETDEWEB)
Berry, Nina M.; Lee, Marinna; Pickett, Marc; Turnley, Jessica Glicken (Sandia National Laboratories, Albuquerque, NM); Smrcka, Julianne D. (Sandia National Laboratories, Albuquerque, NM); Ko, Teresa H.; Moy, Timothy David (Sandia National Laboratories, Albuquerque, NM); Wu, Benjamin C.
2004-01-01
The Seldon software toolkit combines concepts from agent-based modeling and social science to create a computational social dynamic model of group recruitment. The underlying recruitment model is based on a unique three-level hybrid agent-based architecture that contains simple agents (level one), abstract agents (level two), and cognitive agents (level three). The uniqueness of this architecture begins with the abstract agents, which permit the model to include social concepts (a gang) or institutional concepts (a school) in a typical software simulation environment. The future addition of cognitive agents to the recruitment model will provide a unique entity that does not exist in any agent-based modeling toolkit to date. We use social networks to provide an integrated mesh within and between the different levels. This Java-based toolkit is used to analyze different social concepts based on initialization input from the user. The input alters a set of parameters used to influence the values associated with the simple agents, the abstract agents, and the interactions (simple agent-simple agent or simple agent-abstract agent) between these entities. The results of the phase-1 Seldon toolkit provide insight into how certain social concepts apply to scenario development for inner-city gang recruitment.
A distributed computing model for telemetry data processing
Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.
1994-05-01
We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.
Computational Modeling in Liver Surgery
Directory of Open Access Journals (Sweden)
Bruno Christ
2017-11-01
Full Text Available The need for extended liver resection is increasing due to the growing incidence of liver tumors in aging societies. Individualized surgical planning is the key for identifying the optimal resection strategy and to minimize the risk of postoperative liver failure and tumor recurrence. Current computational tools provide virtual planning of liver resection by taking into account the spatial relationship between the tumor and the hepatic vascular trees, as well as the size of the future liver remnant. However, size and function of the liver are not necessarily equivalent. Hence, determining the future liver volume might misestimate the future liver function, especially in cases of hepatic comorbidities such as hepatic steatosis. A systems medicine approach could be applied, including biological, medical, and surgical aspects, by integrating all available anatomical and functional information of the individual patient. Such an approach holds promise for better prediction of postoperative liver function and hence improved risk assessment. This review provides an overview of mathematical models related to the liver and its function and explores their potential relevance for computational liver surgery. We first summarize key facts of hepatic anatomy, physiology, and pathology relevant for hepatic surgery, followed by a description of the computational tools currently used in liver surgical planning. Then we present selected state-of-the-art computational liver models potentially useful to support liver surgery. Finally, we discuss the main challenges that will need to be addressed when developing advanced computational planning tools in the context of liver surgery.
Bond graph model-based fault diagnosis of hybrid systems
Borutzky, Wolfgang
2015-01-01
This book presents a bond graph model-based approach to fault diagnosis in mechatronic systems appropriately represented by a hybrid model. The book begins by giving a survey of the fundamentals of fault diagnosis and failure prognosis, then recalls state-of-the-art developments with reference to the latest publications, and goes on to discuss various bond graph representations of hybrid system models, equation formulation for switched systems, and simulation of their dynamic behavior. The structured text: • focuses on bond graph model-based fault detection and isolation in hybrid systems; • addresses isolation of multiple parametric faults in hybrid systems; • considers system mode identification; • provides a number of elaborated case studies that consider fault scenarios for switched power electronic systems commonly used in a variety of applications; and • indicates that bond graph modelling can also be used for failure prognosis. In order to facilitate the understanding of fault diagnosis and the presented...
Achieving a hybrid brain-computer interface with tactile selective attention and motor imagery
Ahn, Sangtae; Ahn, Minkyu; Cho, Hohyun; Jun, Sung Chan
2014-12-01
Objective. We propose a new hybrid brain-computer interface (BCI) system that integrates two different EEG tasks: tactile selective attention (TSA) using a vibro-tactile stimulator on the left/right finger and motor imagery (MI) of left/right hand movement. Event-related desynchronization (ERD) from the MI task and steady-state somatosensory evoked potential (SSSEP) from the TSA task are retrieved and combined into two hybrid senses. Approach. One hybrid approach is to measure two tasks simultaneously; the features of each task are combined for testing. Another hybrid approach is to measure two tasks consecutively (TSA first and MI next) using only MI features. For comparison with the hybrid approaches, the TSA and MI tasks are measured independently. Main results. Using a total of 16 subject datasets, we analyzed the BCI classification performance for MI, TSA and two hybrid approaches in a comparative manner; we found that the consecutive hybrid approach outperformed the others, yielding about a 10% improvement in classification accuracy relative to MI alone. It is understood that TSA may play a crucial role as a prestimulus in that it helps to generate earlier ERD prior to MI and thus sustains ERD longer and to a stronger degree; this ERD may give more discriminative information than ERD in MI alone. Significance. Overall, our proposed consecutive hybrid approach is very promising for the development of advanced BCI systems.
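The simultaneous hybrid approach described above combines the features extracted from the two tasks before classification. The sketch below illustrates the feature-concatenation idea only; the function names and the nearest-centroid classifier are our own stand-ins, not the paper's actual pipeline:

```python
def combine(mi, tsa):
    """Simultaneous hybrid: concatenate MI (ERD) and TSA (SSSEP) feature vectors."""
    return mi + tsa

def fit_centroids(samples, labels):
    """Per-class mean feature vector (an illustrative stand-in classifier)."""
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        counts[y] = counts.get(y, 0) + 1
        sums[y] = [a + b for a, b in zip(sums.get(y, [0.0] * len(x)), x)]
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def classify(centroids, x):
    """Assign the class whose centroid is nearest in squared Euclidean distance."""
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda y: dist(centroids[y], x))
```

The intuition is that the combined vector carries discriminative information from both modalities, so a left/right decision can draw on whichever task separates the classes better for a given subject.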
Curci, Vita; Dassisti, Michele; Josefa, Mula Bru; Manuel, Díaz Madroñero
2014-10-01
Supply chain models (SCM) are potentially capable of integrating different aspects to support decision making in enterprise management tasks. The aim of this paper is to propose a hybrid mathematical programming model for the optimization of production requirements resources planning. The preliminary model was conceived bottom-up from a real industrial case, oriented to maximizing cash flow. Despite the intense computational effort required to converge to a solution, the optimization brought good results in solving the objective function.
Computational aspects of premixing modelling
Energy Technology Data Exchange (ETDEWEB)
Fletcher, D.F. [Sydney Univ., NSW (Australia). Dept. of Chemical Engineering; Witt, P.J.
1998-01-01
In the steam explosion research field there is currently considerable effort being devoted to the modelling of premixing. Practically all models are based on the multiphase flow equations which treat the mixture as an interpenetrating continuum. Solution of these equations is non-trivial and a wide range of solution procedures are in use. This paper addresses some numerical aspects of this problem. In particular, we examine the effect of the differencing scheme for the convective terms and show that use of hybrid differencing can cause qualitatively wrong solutions in some situations. Calculations are performed for the Oxford tests, the BNL tests, a MAGICO test and to investigate various sensitivities of the solution. In addition, we show that use of a staggered grid can result in a significant error which leads to poor predictions of 'melt' front motion. A correction is given which leads to excellent convergence to the analytic solution. Finally, we discuss the issues facing premixing model developers and highlight the fact that model validation is hampered more by the complexity of the process than by numerical issues. (author)
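The hybrid differencing examined above blends central and upwind differencing according to the cell Peclet number Pe = F/D. A minimal sketch of the standard textbook (Patankar-style) face coefficients for a 1-D convection-diffusion cell; this is illustrative of the scheme in general, not the paper's implementation:

```python
def hybrid_face_coeffs(D, F):
    """Hybrid differencing for one face of a 1-D convection-diffusion cell:
    central differencing for |Pe| = |F/D| <= 2, pure upwind (diffusion
    dropped) otherwise.  D: diffusion conductance, F: convective mass flux
    (positive = flow from west to east).  Returns the neighbour coefficients
    (a_W for the upstream/west node, a_E for the downstream/east node)."""
    a_w = max(F, D + F / 2.0, 0.0)   # west-node coefficient
    a_e = max(-F, D - F / 2.0, 0.0)  # east-node coefficient
    return a_w, a_e
```

For F = 0 this reduces to pure diffusion (a_W = a_E = D); for |Pe| > 2 the max() selects the upwind flux and diffusion is dropped, which is exactly the behavior that can produce the qualitatively wrong solutions discussed in the abstract.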
Optimization of a Continuous Hybrid Impeller Mixer via Computational Fluid Dynamics
Directory of Open Access Journals (Sweden)
Othman, N; Kamarudin, S K; Takriff, M S; Rosli, M I; Engku Chik, E M F; Meor Adnan, M A K
2014-01-01
Full Text Available This paper presents the preliminary steps required for conducting experiments to obtain the optimal operating conditions of a hybrid impeller mixer and to determine the residence time distribution (RTD) using computational fluid dynamics (CFD). In this paper, impeller speed and clearance parameters are examined. The hybrid impeller mixer consists of a single Rushton turbine mounted above a single pitched blade turbine (PBT). Four impeller speeds, 50, 100, 150, and 200 rpm, and four impeller clearances, 25, 50, 75, and 100 mm, were the operation variables used in this study. CFD was utilized to initially screen the parameter ranges to reduce the number of actual experiments needed. Afterward, the residence time distribution (RTD) was determined using the respective parameters. Finally, the Fluent-predicted RTD and the experimentally measured RTD were compared. The CFD investigations revealed that an impeller speed of 50 rpm and an impeller clearance of 25 mm were not viable for experimental investigations and were thus eliminated from further analyses. The determination of RTD using a k-ε turbulence model was performed using CFD techniques. The multiple reference frame (MRF) was implemented and a steady state was initially achieved, followed by a transient condition for RTD determination.
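Once a tracer response curve is available (whether Fluent-predicted or measured), the RTD's first moment, the mean residence time, follows from the standard ratio of integrals, t̄ = ∫t·C dt / ∫C dt. A small sketch assuming a pulse-tracer experiment sampled at discrete times (the function name is ours):

```python
def rtd_mean_time(t, c):
    """Mean residence time t_bar = ∫ t·C dt / ∫ C dt from a pulse-tracer
    concentration curve C sampled at times t, using trapezoidal integration."""
    def trapz(y):
        # trapezoid rule over the (possibly non-uniform) time samples
        return sum((y[i] + y[i + 1]) * (t[i + 1] - t[i]) / 2.0
                   for i in range(len(t) - 1))
    return trapz([ti * ci for ti, ci in zip(t, c)]) / trapz(c)
```

Comparing this moment (and higher ones) between the CFD prediction and the experiment is one common way to quantify the agreement the abstract refers to.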
A Structural Model Decomposition Framework for Hybrid Systems Diagnosis
Daigle, Matthew; Bregon, Anibal; Roychoudhury, Indranil
2015-01-01
Nowadays, a large number of practical systems in aerospace and industrial environments are best represented as hybrid systems that consist of discrete modes of behavior, each defined by a set of continuous dynamics. These hybrid dynamics make the on-line fault diagnosis task very challenging. In this work, we present a new modeling and diagnosis framework for hybrid systems. Models are composed from sets of user-defined components using a compositional modeling approach. Submodels for residual generation are then generated for a given mode, and reconfigured efficiently when the mode changes. Efficient reconfiguration is established by exploiting causality information within the hybrid system models. The submodels can then be used for fault diagnosis based on residual generation and analysis. We demonstrate the efficient causality reassignment, submodel reconfiguration, and residual generation for fault diagnosis using an electrical circuit case study.
Model for optimum design of standalone hybrid renewable energy ...
African Journals Online (AJOL)
An optimization model for the design of a hybrid renewable energy microgrid ... and increasing the rated power of the wind energy conversion system (WECS) or solar ... a 70% reduction in gas emissions and an 80% reduction in energy costs.
Hybrid Modelling of Individual Movement and Collective Behaviour
Franz, Benjamin
2013-01-01
Mathematical models of dispersal in biological systems are often written in terms of partial differential equations (PDEs) which describe the time evolution of population-level variables (concentrations, densities). A more detailed modelling approach is given by individual-based (agent-based) models which describe the behaviour of each organism. In recent years, an intermediate modelling methodology - hybrid modelling - has been applied to a number of biological systems. These hybrid models couple an individual-based description of cells/animals with a PDE model of their environment. In this chapter, we overview hybrid models in the literature with a focus on the mathematical challenges of this modelling approach. The detailed analysis is presented using the example of chemotaxis, where cells move according to extracellular chemicals that can be altered by the cells themselves. In this case, individual-based models of cells are coupled with PDEs for extracellular chemical signals. Travelling waves in these hybrid models are investigated. In particular, we show that, in contrast to the PDEs, hybrid chemotaxis models only develop a transient travelling wave. © 2013 Springer-Verlag Berlin Heidelberg.
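The coupling described above, discrete agents moving in a continuous chemical field governed by a PDE, can be sketched in one dimension. This is an illustrative toy under our own assumptions (explicit diffusion, a simple gradient-biased random walk), not the chapter's chemotaxis equations:

```python
import random

def hybrid_step(cells, chem, dx, dt, D, secretion, bias):
    """One step of a toy 1-D hybrid chemotaxis model: explicit finite-difference
    diffusion of the chemical field `chem` (no-flux boundaries), secretion by
    each cell into the grid cell it occupies, then a random walk of the agents
    biased up the local chemical gradient.  All parameter names are ours."""
    n = len(chem)
    # PDE part: explicit diffusion (stable for dt*D/dx**2 <= 0.5)
    new = chem[:]
    for i in range(n):
        left = chem[max(i - 1, 0)]
        right = chem[min(i + 1, n - 1)]
        new[i] = chem[i] + dt * D * (left - 2 * chem[i] + right) / dx ** 2
    # cells alter their environment: secrete chemical where they sit
    for x in cells:
        new[min(int(x / dx), n - 1)] += secretion * dt
    # agent part: step +-dx, with probability biased by the local gradient
    moved = []
    for x in cells:
        i = min(max(int(x / dx), 1), n - 2)
        grad = (new[i + 1] - new[i - 1]) / (2 * dx)
        p_right = min(max(0.5 + bias * grad, 0.0), 1.0)
        step = dx if random.random() < p_right else -dx
        moved.append(min(max(x + step, 0.0), (n - 1) * dx))
    return moved, new
```

Iterating this step exhibits the two-way coupling at the heart of hybrid models: the field shapes the agents' motion while the agents reshape the field.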
Parallel computing in enterprise modeling.
Energy Technology Data Exchange (ETDEWEB)
Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.
2008-08-01
This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent-based simulation. What simulations of this class share, and where they differ from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center, and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.
Cosmic logic: a computational model
International Nuclear Information System (INIS)
Vanchurin, Vitaly
2016-01-01
We initiate a formal study of logical inferences in the context of the measure problem in cosmology, or what we call cosmic logic. We describe a simple computational model of cosmic logic suitable for analysis of, for example, discretized cosmological systems. The construction is based on a particular model of computation, developed by Alan Turing, with cosmic observers (CO), cosmic measures (CM) and cosmic symmetries (CS) described by Turing machines. CO machines always start with a blank tape, and CM machines take a CO's Turing number (also known as its description number or Gödel number) as input and output the corresponding probability. Similarly, CS machines take CO Turing numbers as input, but output one if the CO machines are in the same equivalence class and zero otherwise. We argue that CS machines are more fundamental than CM machines and thus should be used as building blocks in constructing CM machines. We prove the non-computability of a CS machine which discriminates between two classes of CO machines: mortal machines, which halt in finite time, and immortal machines, which run forever. In the context of eternal inflation this result implies that it is impossible to construct CM machines to compute probabilities on the set of all CO machines using cut-off prescriptions. The cut-off measures can still be used if the set is reduced to include only machines which halt after a finite and predetermined number of steps.
Study of an analog/logic processor for the design of an auto patch hybrid computer
International Nuclear Information System (INIS)
Koched, Hassen
1976-01-01
This paper presents the experimental study of an analog multiprocessor designed at SES/CEN-Saclay. An application of such a device as a basic component of an auto-patch hybrid computer is presented. First, a description of the processor and a presentation of the theoretical concepts which governed its design are given. Experiments on a hybrid computer are then presented. Finally, different systems of automatic patching are presented and suitably modified for use with such a processor. (author) [fr
Parallel Computing Characteristics of CUPID code under MPI and Hybrid environment
Energy Technology Data Exchange (ETDEWEB)
Lee, Jae Ryong; Yoon, Han Young [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Jeon, Byoung Jin; Choi, Hyoung Gwon [Seoul National Univ. of Science and Technology, Seoul (Korea, Republic of)
2014-05-15
In this paper, the characteristics of a parallel algorithm are presented for solving an elliptic-type equation of CUPID via a domain decomposition method using MPI, and the parallel performance is estimated in terms of scalability, i.e. the speedup ratio. In addition, the time-consuming pattern of the major subroutines is studied. Two different grid systems are taken into account: 40,000 meshes for the coarse system and 320,000 meshes for the fine system. Since the matrix of the CUPID code differs according to whether the flow is single-phase or two-phase, the effect of the matrix shape is evaluated. The effect of the preconditioner on the matrix solver is also investigated. Finally, the hybrid (OpenMP+MPI) parallel algorithm is introduced and discussed in detail for the pressure solver. CUPID, a component-scale thermal-hydraulics code, has been developed for two-phase flow analysis; it adopts a three-dimensional, transient, three-field model and has been parallelized to fulfill a recent demand for long-transient and highly resolved multi-phase flow behavior. In this study, the parallel performance of the CUPID code was investigated in terms of scalability. The CUPID code was parallelized with a domain decomposition method. The MPI library was adopted to communicate the information at the neighboring domains. For managing the sparse matrix effectively, the CSR storage format is used. To take into account the characteristics of the pressure matrix, which becomes asymmetric for two-phase flow, both single-phase and two-phase calculations were run. In addition, the effect of the matrix size and preconditioning was also investigated. The fine-mesh calculation shows better scalability than the coarse mesh because the coarse mesh does not have enough cells to justify decomposing the computational domain extensively. The fine mesh can present good scalability when the geometry is divided with the ratio between computation and communication time taken into account. For a given mesh, single-phase flow
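The CSR (compressed sparse row) storage format mentioned above keeps only the non-zeros of a sparse matrix in three flat arrays: values, column indices, and row pointers. A minimal pure-Python illustration of the format and a matrix-vector product over it; this sketches the general technique, not CUPID's own data structures:

```python
def dense_to_csr(a):
    """Convert a dense row-major matrix to CSR: (values, column indices,
    row pointers).  ptr[r]:ptr[r+1] delimits row r's non-zeros."""
    vals, cols, ptr = [], [], [0]
    for row in a:
        for j, v in enumerate(row):
            if v != 0.0:
                vals.append(v)
                cols.append(j)
        ptr.append(len(vals))
    return vals, cols, ptr

def csr_matvec(vals, cols, ptr, x):
    """y = A x touching only the stored non-zeros, the operation at the
    heart of iterative pressure solvers."""
    y = []
    for r in range(len(ptr) - 1):
        y.append(sum(vals[k] * x[cols[k]] for k in range(ptr[r], ptr[r + 1])))
    return y
```

Because the matvec cost scales with the non-zero count rather than n², this layout is what makes large decomposed pressure matrices tractable per subdomain.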
Energy Technology Data Exchange (ETDEWEB)
Moignier, Alexandra, E-mail: alexandra.moignier@irsn.fr [Institut de Radioprotection et de Surete Nucleaire, Fontenay-aux-Roses (France); Derreumaux, Sylvie; Broggio, David; Beurrier, Julien [Institut de Radioprotection et de Surete Nucleaire, Fontenay-aux-Roses (France); Chea, Michel; Boisserie, Gilbert [Groupe Hospitalier Pitie Salpetriere, Service de Radiotherapie, Paris (France); Franck, Didier; Aubert, Bernard [Institut de Radioprotection et de Surete Nucleaire, Fontenay-aux-Roses (France); Mazeron, Jean-Jacques [Groupe Hospitalier Pitie Salpetriere, Service de Radiotherapie, Paris (France)
2013-02-01
Purpose: Current retrospective cardiovascular dosimetry studies are based on a representative patient or simple mathematical phantoms. Here, a process of patient modeling was developed to personalize the anatomy of the thorax and to include a heart model with coronary arteries. Methods and Materials: The patient models were hybrid computational phantoms (HCPs) with an inserted detailed heart model. A computed tomography (CT) acquisition (pseudo-CT) was derived from each HCP and imported into a treatment planning system where treatment conditions were reproduced. Six current patients were selected: 3 were modeled from their CT images (A patients) and the others from 2 orthogonal radiographs (B patients). The method's performance and limitations were investigated by quantitative comparison between the initial CT and the pseudo-CT; namely, the morphology and the dose calculation were compared. For the B patients, a comparison with 2 kinds of representative patients was also conducted. Finally, dose assessment was focused on the whole coronary artery tree and the left anterior descending coronary artery. Results: When 3-dimensional anatomic information was available, the dose calculations performed on the initial CT and the pseudo-CT were in good agreement. For the B patients, comparison of doses derived from HCPs and representative patients showed that the HCP doses were either better or equivalent. In the left breast radiation therapy context and for the studied cases, coronary mean doses were at least 5-fold higher than heart mean doses. Conclusions: For retrospective dose studies, it is suggested that an HCP offers a better surrogate, in terms of dose accuracy, than representative patients. The use of a detailed heart model eliminates the problem of identifying the coronaries on the patient's CT.
Minimal models of multidimensional computations.
Directory of Open Access Journals (Sweden)
Jeffrey D Fitzgerald
2011-03-01
Full Text Available The multidimensional computations performed by many biological systems are often characterized with limited information about the correlations between inputs and outputs. Given this limitation, our approach is to construct the maximum noise entropy response function of the system, leading to a closed-form and minimally biased model consistent with a given set of constraints on the input/output moments; the result is equivalent to conditional random field models from machine learning. For systems with binary outputs, such as neurons encoding sensory stimuli, the maximum noise entropy models are logistic functions whose arguments depend on the constraints. A constraint on the average output turns the binary maximum noise entropy models into minimum mutual information models, allowing for the calculation of the information content of the constraints and an information theoretic characterization of the system's computations. We use this approach to analyze the nonlinear input/output functions in macaque retina and thalamus; although these systems have been previously shown to be responsive to two input dimensions, the functional form of the response function in this reduced space had not been unambiguously identified. A second order model based on the logistic function is found to be both necessary and sufficient to accurately describe the neural responses to naturalistic stimuli, accounting for an average of 93% of the mutual information with a small number of parameters. Thus, despite the fact that the stimulus is highly non-Gaussian, the vast majority of the information in the neural responses is related to first and second order correlations. Our results suggest a principled and unbiased way to model multidimensional computations and determine the statistics of the inputs that are being encoded in the outputs.
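The second-order maximum noise entropy model described above has a simple closed form: for a binary output, the response probability is a logistic function whose argument is quadratic in the (dimensionality-reduced) stimulus. The sketch below uses illustrative, unfitted parameters, not the values estimated for the retinal data:

```python
import numpy as np

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

def p_spike(s, a, b, C):
    """Second-order maximum-noise-entropy model: sigma(a + b.s + s^T C s)."""
    return logistic(a + b @ s + s @ C @ s)

a = -1.0                      # bias term (sets baseline firing probability)
b = np.array([1.5, -0.5])     # first-order (mean) constraint weights
C = np.array([[0.3, 0.1],
              [0.1, -0.2]])   # second-order (correlation) constraint weights

s = np.array([1.0, 0.5])      # stimulus projected onto two relevant dimensions
print(round(p_spike(s, a, b, C), 3))
# → 0.646
```

A first-order model is recovered by setting C to zero; the abstract's finding is that the quadratic term is both necessary and sufficient for these neurons.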
Directory of Open Access Journals (Sweden)
hamid reza bazi
2017-12-01
Full Text Available Cloud computing is a new technology that considerably helps higher education institutions (HEIs) develop and create competitive advantage through inherent characteristics such as flexibility, scalability, accessibility, reliability, fault tolerance and economic efficiency. Because of the numerous advantages of cloud computing, and in order to take advantage of cloud computing infrastructure, the services of universities and HEIs need to migrate to the cloud. However, this transition involves many challenges, one of which is the lack or shortage of an appropriate architecture for migration to the technology. Using a reliable architecture for migration enables managers to mitigate the risks of cloud computing technology; organizations therefore always search for a suitable cloud computing architecture. In previous studies, these important features have received less attention and have not been addressed in a comprehensive way. The aim of this study is to use a meta-synthesis method, for the first time, to analyze the previously published studies and to suggest an appropriate hybrid cloud migration architecture (IUHEC). We reviewed many papers from relevant journals and conference proceedings. The concepts extracted from these papers were classified into related categories and sub-categories, and we then developed our proposed hybrid architecture based on these concepts and categories. The proposed architecture was validated by a panel of experts, and Lawshe's model was used to determine content validity. Due to its innovative yet user-friendly nature, comprehensiveness, and high security, this architecture can help HEIs migrate effectively to a cloud computing environment.
A hybrid Scatter/Transform cloaking model
Directory of Open Access Journals (Sweden)
Gad Licht
2015-01-01
Full Text Available A new Scatter/Transform cloak is developed that combines the light bending of refraction characteristic of a Transform cloak with the scatter cancellation characteristic of a Scatter cloak. The hybrid cloak incorporates both Transform’s variable index of refraction with modified linear intrusions to maximize the Scatter cloak effect. Scatter/Transform improved the scattering cross-section of cloaking in a 2-dimensional space to 51.7% compared to only 39.6% or 45.1% respectively with either Scatter or Transform alone. Metamaterials developed with characteristics based on the new ST hybrid cloak will exhibit superior cloaking capabilities.
Computational Models of Rock Failure
May, Dave A.; Spiegelman, Marc
2017-04-01
Practitioners in computational geodynamics, as in many other branches of applied science, typically do not analyse the underlying PDEs being solved in order to establish the existence or uniqueness of solutions. Rather, such proofs are left to the mathematicians, and all too frequently these results lag far behind (in time) the applied research being conducted, are often unintelligible to the non-specialist, are buried in journals applied scientists simply do not read, or simply have not been proven. As practitioners, we are by definition pragmatic. Thus, rather than first analysing our PDEs, we first attempt to find approximate solutions by throwing all our computational methods and machinery at the given problem and hoping for the best. Typically this approach leads to a satisfactory outcome. Usually it is only if the numerical solutions "look odd" that we start delving deeper into the math. In this presentation I summarise our findings in relation to using pressure-dependent (Drucker-Prager type) flow laws in a simplified model of continental extension in which the material is assumed to be an incompressible, highly viscous fluid. Such assumptions represent the current mainstream adopted in computational studies of mantle and lithosphere deformation within our community. In short, we conclude that for the parameter range of cohesion and friction angle relevant to studying rocks, the incompressibility constraint combined with a Drucker-Prager flow law can result in problems which have no solution. This is proven by a 1D analytic model and convincingly demonstrated by 2D numerical simulations. To date, we do not have a robust "fix" for this fundamental problem. The intent of this submission is to highlight the importance of simple analytic models, to highlight some of the dangers and risks of interpreting numerical solutions without understanding the properties of the PDE being solved, and lastly to stimulate discussions to develop an improved computational model of
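For reference, one common geodynamics form of the pressure-dependent Drucker-Prager flow law discussed above reads (notation assumed here, not taken from the presentation itself):

```latex
\sigma_{y} = C\cos\varphi + p\sin\varphi, \qquad
\eta_{\mathrm{eff}} = \frac{\sigma_{y}}{2\,\dot{\varepsilon}_{II}}
```

where $C$ is the cohesion, $\varphi$ the friction angle, $p$ the pressure, and $\dot{\varepsilon}_{II}$ the second invariant of the strain-rate tensor. Because the yield stress depends on $p$, which in an incompressible formulation acts as a Lagrange multiplier rather than a thermodynamic variable, the coupled problem can lose well-posedness in the parameter range relevant to rocks.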
Hybrid transport and diffusion modeling using electron thermal transport Monte Carlo SNB in DRACO
Chenhall, Jeffrey; Moses, Gregory
2017-10-01
The iSNB (implicit Schurtz Nicolai Busquet) multigroup diffusion electron thermal transport method is adapted into an Electron Thermal Transport Monte Carlo (ETTMC) transport method to better model angular and long mean free path non-local effects. Previously, the ETTMC model had been implemented in the 2D DRACO multiphysics code and found to produce consistent results with the iSNB method. Current work is focused on a hybridization of the computationally slower but higher fidelity ETTMC transport method with the computationally faster iSNB diffusion method in order to maximize computational efficiency. Furthermore, effects on the energy distribution of the heat flux divergence are studied. Work to date on the hybrid method will be presented. This work was supported by Sandia National Laboratories and the Univ. of Rochester Laboratory for Laser Energetics.
Study on driver model for hybrid truck based on driving simulator experimental results
Directory of Open Access Journals (Sweden)
Dam Hoang Phuc
2018-04-01
Full Text Available In this paper, a proposed car-following driver model, taking into account features of both the compensatory and the anticipatory models of human pedal operation, is verified by driving simulator experiments with several real drivers. The comparison between computer simulations performed with the identified model parameters and the experimental results confirms the correctness of this mathematical driver model and of the identified parameters. The driver model is then joined to a hybrid vehicle dynamics model, and moderate car-following manoeuvre simulations with various driver parameters are conducted to investigate the influence of driver parameters on vehicle dynamics response and fuel economy. Finally, the major driver parameters involved in the longitudinal control of drivers are clarified. Keywords: Driver model, Driver-vehicle closed-loop system, Car Following, Driving simulator/hybrid electric vehicle (B1
Multi-level and hybrid modelling approaches for systems biology.
Bardini, R; Politano, G; Benso, A; Di Carlo, S
2017-01-01
During the last decades, high-throughput techniques have allowed the extraction of huge amounts of data from biological systems, unveiling more of their underlying complexity. Biological systems encompass a wide range of space and time scales, functioning according to flexible hierarchies of mechanisms that make an intertwined and dynamic interplay of regulations. This becomes particularly evident in processes such as ontogenesis, where regulative assets change according to process context and timing, making structural phenotype and architectural complexities emerge from a single cell through local interactions. The information collected from biological systems is naturally organized according to the functional levels composing the system itself. In systems biology, biological information often comes from overlapping but different scientific domains, each with its own way of representing the phenomena under study; that is, the different parts of the system to be modelled may be described with different formalisms. For a model to have improved accuracy and to serve as a good knowledge base, it should comprise different system levels and suitably handle the corresponding formalisms. Models which are both multi-level and hybrid satisfy both these requirements, making them a very useful tool in computational systems biology. This paper reviews some of the main contributions in this field.
Simulation of hybrid vehicle propulsion with an advanced battery model
Energy Technology Data Exchange (ETDEWEB)
Nallabolu, S.; Kostetzer, L.; Rudnyi, E. [CADFEM GmbH, Grafing (Germany); Geppert, M.; Quinger, D. [LION Smart GmbH, Frieding (Germany)
2011-07-01
In recent years there has been increasing concern about global warming and greenhouse gas emissions. In addition to the environmental issues, the predicted scarcity of oil supplies and the dramatic increase in oil price put new demands on vehicle design. As a result, energy efficiency and reduced emissions have become major selling points for automobiles. Hybrid electric vehicles (HEVs) have therefore become an interesting technology for governments and the automotive industry. HEVs are more complicated than conventional vehicles because they contain more electrical components, such as electric machines, power electronics, electronic continuously variable transmissions (CVT), and embedded powertrain controllers. Advanced energy storage devices and energy converters, such as Li-ion batteries, ultracapacitors, and fuel cells, are also considered. A detailed vehicle model for energy flow analysis and vehicle performance simulation is therefore necessary. Computer simulation is indispensable for examining the vast hybrid electric vehicle design space with the aim of predicting vehicle performance over driving profiles and estimating fuel consumption and pollutant emissions. Various types of mathematical models and simulators are available for system simulation of vehicle propulsion. One standard method of modelling the complete vehicle powertrain is "backward quasistatic modeling". In this method, vehicle subsystems are defined by experiential models in the form of look-up tables and efficiency maps, and the interaction between adjacent subsystems is defined through the amount of power flow. Under this technique, modelling of vehicle subsystems such as the motor, engine, gearbox and battery is based on block diagrams. The vehicle model is applied in two case studies to evaluate vehicle performance and fuel consumption. In the first case study the effect
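The "backward quasistatic" idea can be sketched as follows: a power demand at the wheels is propagated backwards through each subsystem's efficiency map to the battery terminals. The efficiency numbers and one-dimensional map below are illustrative placeholders, not calibrated vehicle data:

```python
import numpy as np

gear_eff = 0.95                                 # fixed gearbox efficiency
motor_speeds = np.array([0.0, 200.0, 400.0])    # rad/s (map breakpoints)
motor_effs   = np.array([0.80, 0.92, 0.88])     # motor+inverter efficiency map

def battery_power(wheel_power, motor_speed):
    """Propagate a traction power demand backwards to the battery terminals."""
    shaft_power = wheel_power / gear_eff                    # gearbox losses
    eta_m = np.interp(motor_speed, motor_speeds, motor_effs)  # look-up table
    return shaft_power / eta_m                              # motor losses

# 10 kW at the wheels with the motor at 300 rad/s:
print(round(battery_power(10e3, 300.0), 1))
# → 11695.9
```

Because power flows backwards from a prescribed driving profile, no driver controller is needed, which is what makes the quasistatic approach fast for fuel-consumption studies.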
Business model elements impacting cloud computing adoption
DEFF Research Database (Denmark)
Bogataj, Kristina; Pucihar, Andreja; Sudzina, Frantisek
The paper presents a proposed research framework for identification of business model elements impacting Cloud Computing Adoption. We provide a definition of main Cloud Computing characteristics, discuss previous findings on factors impacting Cloud Computing Adoption, and investigate technology a...
Dynamical analysis of Parkinsonian state emulated by hybrid Izhikevich neuron models
Liu, Chen; Wang, Jiang; Yu, Haitao; Deng, Bin; Wei, Xile; Li, Huiyan; Loparo, Kenneth A.; Fietkiewicz, Chris
2015-11-01
Computational models play a significant role in exploring novel theories to complement the findings of physiological experiments. Various computational models have been developed to reveal the mechanisms underlying brain functions. In particular, in the development of therapies to modulate behavioral and pathological abnormalities, computational models provide the basic foundations for exhibiting transitions between physiological and pathological conditions. Considering the significant roles of the intrinsic properties of the globus pallidus and of the coupling connections between neurons in determining the firing patterns and the dynamical activities of the basal ganglia neuronal network, we propose the hypothesis that pathological behaviors in the Parkinsonian state may originate from the combined effects of the intrinsic properties of globus pallidus neurons and the synaptic conductances in the whole neuronal network. In order to establish a computationally efficient network model, the hybrid Izhikevich neuron model is used, due to its capacity to capture the dynamical characteristics of biological neuronal activities. Detailed analysis of the individual Izhikevich neuron model assists in understanding the roles of the model parameters, which then facilitates the establishment of the basal ganglia-thalamic network model and contributes to a further exploration of the mechanisms underlying the Parkinsonian state. Simulation results show that the hybrid Izhikevich neuron model is capable of capturing many of the dynamical properties of the basal ganglia-thalamic neuronal network, such as variations of the firing rates and the emergence of synchronous oscillations under the Parkinsonian condition, despite the simplicity of the two-dimensional neuronal model. This suggests that the computationally efficient hybrid Izhikevich neuron model can be used to explore normal and abnormal basal ganglia functions. In particular, it provides an efficient way of emulating the large-scale neuron network
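The two-dimensional Izhikevich model underlying the network is simple enough to sketch. The minimal single-neuron simulation below uses the standard regular-spiking parameters from Izhikevich's 2003 formulation and omits the network coupling and basal ganglia specifics studied in the paper:

```python
def izhikevich(I=10.0, a=0.02, b=0.2, c=-65.0, d=8.0, T=200.0, dt=0.25):
    """Forward-Euler simulation of the Izhikevich neuron; returns spike count.

    v: membrane potential (mV); u: recovery variable; I: input current.
    """
    v, u = -65.0, b * -65.0
    spikes = 0
    for _ in range(int(T / dt)):
        v += dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:          # spike detected: reset v, bump recovery u
            v, u = c, u + d
            spikes += 1
    return spikes

print(izhikevich() > 0)   # constant input drives repetitive firing
# → True
```

The model's appeal, noted in the abstract, is that these four parameters (a, b, c, d) reproduce many biological firing patterns at a fraction of the cost of conductance-based models.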
A hybrid method for the parallel computation of Green's functions
DEFF Research Database (Denmark)
Petersen, Dan Erik; Li, Song; Stokbro, Kurt
2009-01-01
of the large number of times this calculation needs to be performed, this is computationally very expensive even on supercomputers. The classical approach is based on recurrence formulas which cannot be efficiently parallelized. This practically prevents the solution of large problems with hundreds...... of thousands of atoms. We propose new recurrences for a general class of sparse matrices to calculate Green's and lesser Green's function matrices which extend formulas derived by Takahashi and others. We show that these recurrences may lead to a dramatically reduced computational cost because they only...... require computing a small number of entries of the inverse matrix. Then, we propose a parallelization strategy for block tridiagonal matrices which involves a combination of Schur complement calculations and cyclic reduction. It achieves good scalability even on problems of modest size....
Superconductivity in the periodic Anderson model with anisotropic hybridization
International Nuclear Information System (INIS)
Sarasua, L.G.; Continentino, Mucio A.
2003-01-01
In this work we study superconductivity in the periodic Anderson model with both on-site and intersite hybridization, including the interband Coulomb repulsion. We show that the presence of the intersite hybridization together with the on-site hybridization significantly affects the superconducting properties of the system. The symmetry of the hybridization has a strong influence on the symmetry of the superconducting order parameter of the ground state. The interband Coulomb repulsion may increase or decrease the superconducting critical temperature at small values of this interaction, while it is detrimental to superconductivity at strong values. We show that the present model can give rise to positive or negative values of dT_c/dP, depending on the values of the system parameters
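For orientation, a standard form of the periodic Anderson model Hamiltonian with hybridization and an interband Coulomb term reads as follows; the symbols are the conventional ones and are assumed here rather than copied from the paper, with the on-site versus intersite (anisotropic) hybridization entering through the momentum dependence of $V_{\mathbf{k}}$:

```latex
H = \sum_{\mathbf{k}\sigma} \epsilon_{\mathbf{k}}\,
      c^{\dagger}_{\mathbf{k}\sigma} c_{\mathbf{k}\sigma}
  + E_f \sum_{i\sigma} f^{\dagger}_{i\sigma} f_{i\sigma}
  + \sum_{\mathbf{k}\sigma}
      \left( V_{\mathbf{k}}\, c^{\dagger}_{\mathbf{k}\sigma} f_{\mathbf{k}\sigma}
             + \mathrm{h.c.} \right)
  + U \sum_i n^{f}_{i\uparrow} n^{f}_{i\downarrow}
  + G \sum_i n^{c}_{i}\, n^{f}_{i}
```

Here $c$ and $f$ are the conduction and localized electrons, $U$ the on-site $f$ repulsion, and $G$ the interband Coulomb repulsion discussed in the abstract; a purely on-site hybridization makes $V_{\mathbf{k}}$ constant, while intersite hybridization adds a $\mathbf{k}$-dependent part.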
Computational Modeling in Tissue Engineering
2013-01-01
One of the major challenges in tissue engineering is the translation of biological knowledge on complex cell and tissue behavior into a predictive and robust engineering process. Mastering this complexity is an essential step towards clinical applications of tissue engineering. This volume discusses computational modeling tools that allow studying the biological complexity in a more quantitative way. More specifically, computational tools can help in: (i) quantifying and optimizing the tissue engineering product, e.g. by adapting scaffold design to optimize micro-environmental signals or by adapting selection criteria to improve homogeneity of the selected cell population; (ii) quantifying and optimizing the tissue engineering process, e.g. by adapting bioreactor design to improve quality and quantity of the final product; and (iii) assessing the influence of the in vivo environment on the behavior of the tissue engineering product, e.g. by investigating vascular ingrowth. The book presents examples of each...
A Cost–Effective Computer-Based, Hybrid Motorised and Gravity ...
African Journals Online (AJOL)
A Cost–Effective Computer-Based, Hybrid Motorised and Gravity-Driven Material Handling System for the Mauritian Apparel Industry. ... Thus, many companies are investing significantly in a Research & Development department in order to design new techniques to improve worker's efficiency, and to decrease the amount ...
Computation of hybrid static potentials in SU(3) lattice gauge theory
Directory of Open Access Journals (Sweden)
Reisinger Christian
2018-01-01
Full Text Available We compute hybrid static potentials in SU(3) lattice gauge theory. We present a method to automatically generate a large set of suitable creation operators with defined quantum numbers from elementary building blocks. We show preliminary results for several channels and discuss which structures of the gluonic flux tube seem to be realized by the ground states in these channels.
Fluid Survival Tool: A Model Checker for Hybrid Petri Nets
Postema, Björn Frits; Remke, Anne Katharina Ingrid; Haverkort, Boudewijn R.H.M.; Ghasemieh, Hamed
2014-01-01
Recently, algorithms for model checking Stochastic Time Logic (STL) on Hybrid Petri nets with a single general one-shot transition (HPNG) have been introduced. This paper presents a tool for model checking HPNG models against STL formulas. A graphical user interface (GUI) not only helps to
Nuclear Hybrid Energy System Model Stability Testing
Energy Technology Data Exchange (ETDEWEB)
Greenwood, Michael Scott [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Cetiner, Sacit M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Fugate, David W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2017-04-01
A Nuclear Hybrid Energy System (NHES) uses a nuclear reactor as the basic power generation unit, and the power generated is used by multiple customers as combinations of thermal power or electrical power. The definition and architecture of a particular NHES can be adapted based on the needs and opportunities of different localities and markets. For example, locations in need of potable water may be best served by coupling a desalination plant to the NHES. Similarly, a location near oil refineries may have a need for emission-free hydrogen production. Using the flexible, multi-domain capabilities of Modelica, Argonne National Laboratory, Idaho National Laboratory, and Oak Ridge National Laboratory are investigating the dynamics (e.g., thermal hydraulics and electrical generation/consumption) and cost of a hybrid system. This paper examines the NHES work underway, emphasizing the control system developed for individual subsystems and the overall supervisory control system.
Hybrid Modeling and Optimization of Yogurt Starter Culture Continuous Fermentation
Directory of Open Access Journals (Sweden)
Silviya Popova
2009-10-01
Full Text Available This paper presents a hybrid model of yogurt starter mixed-culture fermentation. The main nonlinearities within a classical structure of the continuous process model are replaced by neural networks. The new hybrid model accounts for the dependence of the two microorganisms' kinetics on the on-line measured characteristics of the culture medium, namely pH. The model was then used to calculate the optimal time profile of pH. The obtained results are in agreement with the experimental ones.
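The hybrid structure described above, a classical mass balance whose growth-rate kinetics are supplied by a neural network of pH, can be sketched as follows. The network weights here are arbitrary placeholders; in the paper they would be trained on fermentation data:

```python
import numpy as np

# Tiny 2-hidden-unit network standing in for the specific growth rate mu(pH).
W1, b1 = np.array([1.2, -0.8]), np.array([-5.0, 4.0])   # hidden layer
W2, b2 = np.array([0.6, 0.4]), 0.1                      # output layer

def mu_nn(pH):
    """Neural-network stand-in for the specific growth rate [1/h]."""
    h = np.tanh(W1 * pH + b1)
    return float(W2 @ h + b2)

def step(X, pH, D=0.2, dt=0.1):
    """One Euler step of the chemostat biomass balance dX/dt = (mu - D) X,
    with dilution rate D; this part is the classical 'white-box' model."""
    return X + dt * (mu_nn(pH) - D) * X

X = 1.0                      # initial biomass concentration (arbitrary units)
for _ in range(10):
    X = step(X, pH=5.5)
print(round(X, 3))
```

The split mirrors the paper's idea: the mass-balance structure stays mechanistic, and only the poorly known kinetic nonlinearity is learned from data.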
Mechanisms Underlying Mammalian Hybrid Sterility in Two Feline Interspecies Models.
Davis, Brian W; Seabury, Christopher M; Brashear, Wesley A; Li, Gang; Roelke-Parker, Melody; Murphy, William J
2015-10-01
The phenomenon of male sterility in interspecies hybrids has been observed for over a century, however, few genes influencing this recurrent phenotype have been identified. Genetic investigations have been primarily limited to a small number of model organisms, thus limiting our understanding of the underlying molecular basis of this well-documented "rule of speciation." We utilized two interspecies hybrid cat breeds in a genome-wide association study employing the Illumina 63 K single-nucleotide polymorphism array. Collectively, we identified eight autosomal genes/gene regions underlying associations with hybrid male sterility (HMS) involved in the function of the blood-testis barrier, gamete structural development, and transcriptional regulation. We also identified several candidate hybrid sterility regions on the X chromosome, with most residing in close proximity to complex duplicated regions. Differential gene expression analyses revealed significant chromosome-wide upregulation of X chromosome transcripts in testes of sterile hybrids, which were enriched for genes involved in chromatin regulation of gene expression. Our expression results parallel those reported in Mus hybrids, supporting the "Large X-Effect" in mammalian HMS and the potential epigenetic basis for this phenomenon. These results support the value of the interspecies feline model as a powerful tool for comparison to rodent models of HMS, demonstrating unique aspects and potential commonalities that underpin mammalian reproductive isolation. © The Author 2015. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Directory of Open Access Journals (Sweden)
Chandana Kodiweera
2016-06-01
Full Text Available This article provides NODDI diffusion metrics in the brains of 52 healthy participants, together with computer simulation data, to support the compatibility of the hybrid diffusion imaging (HYDI, "Hybrid diffusion imaging" [1]) acquisition scheme in fitting the neurite orientation dispersion and density imaging (NODDI) model ("NODDI: practical in vivo neurite orientation dispersion and density imaging of the human brain" [2]). HYDI is an extremely versatile diffusion magnetic resonance imaging (dMRI) technique that enables various analysis methods using a single diffusion dataset. One of these analysis methods is the NODDI computation, which models the brain tissue with three compartments: fast isotropic diffusion (e.g., cerebrospinal fluid), anisotropic hindered diffusion (e.g., extracellular space), and anisotropic restricted diffusion (e.g., intracellular space). The NODDI model produces microstructural metrics in the developing brain, the aging brain, and brains with neurologic disorders. The first dataset provided here contains the means and standard deviations of NODDI metrics in 48 white matter regions of interest (ROIs), averaged across the 52 healthy participants. The second dataset is a computer simulation with initial conditions guided by the first dataset as inputs and as a gold standard for model fitting. The computer simulation data provide a direct comparison of NODDI indices computed from the HYDI acquisition [1] to the NODDI indices computed from the originally proposed acquisition [2]. These data are related to the accompanying research article "Age Effects and Sex Differences in Human Brain White Matter of Young to Middle-Aged Adults: A DTI, NODDI, and q-Space Study" [3].
Hybrid modelling framework by using mathematics-based and information-based methods
International Nuclear Information System (INIS)
Ghaboussi, J; Kim, J; Elnashai, A
2010-01-01
Mathematics-based computational mechanics involves idealization in going from the observed behaviour of a system to mathematical equations representing the underlying mechanics of that behaviour. Idealization may lead to mathematical models that exclude certain aspects of the complex behaviour that may be significant. An alternative approach is data-centric modelling, which constitutes a fundamental shift from mathematical equations to data that contain the required information about the underlying mechanics. However, purely data-centric methods often fail for infrequent events and large state changes. In this article, a new hybrid modelling framework is proposed to improve accuracy in the simulation of real-world systems. In the hybrid framework, a mathematical model is complemented by information-based components. The role of the informational components is to model the aspects which the mathematical model leaves out; the missing aspects are extracted and identified through Autoprogressive Algorithms. The proposed hybrid modelling framework has a wide range of potential applications for natural and engineered systems. Its potential is illustrated through the modelling of the highly pinched hysteretic behaviour of beam-to-column connections in steel frames.
Prediction of monthly regional groundwater levels through hybrid soft-computing techniques
Chang, Fi-John; Chang, Li-Chiu; Huang, Chien-Wei; Kao, I.-Feng
2016-10-01
Groundwater systems are intrinsically heterogeneous, with dynamic temporal-spatial patterns, which causes great difficulty in quantifying their complex processes, while reliable predictions of regional groundwater levels are commonly needed for managing water resources to ensure proper service of water demands within a region. In this study, we propose a novel and flexible soft-computing technique that can effectively extract the complex high-dimensional input-output patterns of basin-wide groundwater-aquifer systems in an adaptive manner. The soft-computing models combine the Self-Organizing Map (SOM) and the Nonlinear Autoregressive with Exogenous Inputs (NARX) network for predicting monthly regional groundwater levels based on hydrologic forcing data. The SOM effectively classifies the temporal-spatial patterns of regional groundwater levels, the NARX accurately predicts the mean of the regional groundwater levels for adjusting the selected SOM, and Kriging is used to interpolate the predictions of the adjusted SOM onto finer grids of locations, so that a monthly regional groundwater level map can be obtained. The Zhuoshui River basin in Taiwan was the study case, and its monthly data sets, collected from 203 groundwater stations, 32 rainfall stations and 6 flow stations between 2000 and 2013, were used for modelling. The results demonstrate that the hybrid SOM-NARX model can reliably predict monthly basin-wide groundwater levels with high correlations (R2 > 0.9 in both training and testing cases). The proposed methodology presents a milestone in the modelling of regional environmental issues and offers an insightful and promising way to predict monthly basin-wide groundwater levels, which is beneficial to authorities for sustainable water resources management.
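The SOM stage of the hybrid approach can be illustrated with a toy example. The 1-D map, the synthetic "station patterns" and all parameters below are purely illustrative, and the NARX and Kriging stages are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(patterns, n_units=4, epochs=200, lr=0.5, sigma=1.0):
    """Train a tiny 1-D self-organizing map on regional patterns."""
    w = rng.normal(size=(n_units, patterns.shape[1]))
    for t in range(epochs):
        x = patterns[rng.integers(len(patterns))]
        bmu = np.argmin(np.linalg.norm(w - x, axis=1))   # best-matching unit
        decay = 1.0 - t / epochs                          # shrinking step size
        for j in range(n_units):
            h = np.exp(-((j - bmu) ** 2) / (2 * sigma**2))  # neighbourhood
            w[j] += lr * decay * h * (x - w[j])
    return w

# Two synthetic "regional patterns" (e.g. wet vs dry months) at 3 stations:
wet = np.array([1.0, 0.8, 0.9])
dry = np.array([-1.0, -0.7, -0.9])
data = np.vstack([wet + 0.05 * rng.normal(size=(20, 3)),
                  dry + 0.05 * rng.normal(size=(20, 3))])
w = train_som(data)
bmu_wet = np.argmin(np.linalg.norm(w - wet, axis=1))
bmu_dry = np.argmin(np.linalg.norm(w - dry, axis=1))
print(bmu_wet != bmu_dry)   # distinct patterns map to distinct units
```

In the full method, each SOM unit represents a spatial pattern of groundwater levels, and the NARX-predicted regional mean is used to adjust the selected unit before Kriging interpolation.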
Opportunity for Realizing Ideal Computing System using Cloud Computing Model
Sreeramana Aithal; Vaikunth Pai T
2017-01-01
An ideal computing system is a computing system with ideal characteristics. The major components and performance characteristics of such a hypothetical system can be studied as a model, with predicted input, output, system and environmental characteristics, using the identified objectives of computing; such a system could be used on any platform, in any type of computing system, and for application automation, without modifications in the form of structure, hardware, or software coding by an exte...
Hybrid Analytical and Data-Driven Modeling for Feed-Forward Robot Control †
Directory of Open Access Journals (Sweden)
René Felix Reinhart; Zeeshan Shareef; Jochen Jakob Steil
2017-02-01
Full Text Available Feed-forward model-based control relies on models of the controlled plant, e.g., in robotics, on accurate knowledge of manipulator kinematics or dynamics. However, mechanical and analytical models do not capture all aspects of a plant’s intrinsic properties, and there remain unmodeled dynamics due to varying parameters, unmodeled friction, or soft materials. In this context, machine learning is a suitable alternative technique for extracting non-linear plant models from data. However, fully data-based models suffer from inaccuracies as well and are inefficient if they include learning of well-known analytical models. This paper thus argues that feed-forward control based on hybrid models, comprising an analytical model and a learned error model, can significantly improve modeling accuracy. Hybrid modeling here serves the purpose of combining the best of the two modeling worlds. The hybrid modeling methodology is described, and the approach is demonstrated for two typical problems in robotics, i.e., inverse kinematics control and computed torque control. The former is performed for a redundant soft robot and the latter for a rigid industrial robot with redundant degrees of freedom, where a complete analytical model is not available for either platform.
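The error-model idea can be illustrated with a minimal sketch: an analytical forward model plus a regressor fitted to its residuals (the 1-DoF plant, the polynomial error model, and all numbers below are stand-in assumptions; the paper learns error models of real robot kinematics and dynamics):

```python
import numpy as np

def analytical_model(q):
    """Simplified analytical forward kinematics of a 1-DoF arm:
    end-effector x-position for joint angle q (unit link length assumed)."""
    return np.cos(q)

def true_plant(q):
    """'Real' plant with unmodeled effects (hypothetical offset and sag)."""
    return np.cos(q + 0.05) - 0.02 * q**2

# learn a polynomial error model on the analytical model's residuals
q_train = np.linspace(-1.0, 1.0, 50)
residual = true_plant(q_train) - analytical_model(q_train)
err_coef = np.polyfit(q_train, residual, deg=3)

def hybrid_model(q):
    """Hybrid prediction: analytical model plus learned error model."""
    return analytical_model(q) + np.polyval(err_coef, q)

q_test = np.linspace(-0.9, 0.9, 25)
e_analytical = np.max(np.abs(true_plant(q_test) - analytical_model(q_test)))
e_hybrid = np.max(np.abs(true_plant(q_test) - hybrid_model(q_test)))
```

On this toy plant the hybrid model's worst-case error is far below the analytical model's, which is the paper's central claim in miniature.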
International Conference on Computational Intelligence, Cyber Security, and Computational Models
Ramasamy, Vijayalakshmi; Sheen, Shina; Veeramani, C; Bonato, Anthony; Batten, Lynn
2016-01-01
This book aims at promoting high-quality research by researchers and practitioners from academia and industry presented at the International Conference on Computational Intelligence, Cyber Security, and Computational Models (ICC3 2015), organized by PSG College of Technology, Coimbatore, India, during December 17–19, 2015. The book covers innovations in the broad research areas of computational modeling, computational intelligence, and cyber security. These emerging interdisciplinary research areas have helped to solve multifaceted problems and have gained much attention in recent years. The book encompasses theory and applications, providing design, analysis and modeling of the aforementioned key areas.
Hybrid Reduced Order Modeling Algorithms for Reactor Physics Calculations
Bang, Youngsuk
hybrid ROM algorithms which can be readily integrated into existing methods and offer higher computational efficiency and defensible accuracy of the reduced models. For example, the snapshots ROM algorithm is hybridized with the range finding algorithm to render reduction in the state space, e.g. the flux in reactor calculations. In another implementation, the perturbation theory used to calculate first-order derivatives of responses with respect to parameters is hybridized with a forward sensitivity analysis approach to render reduction in the parameter space. Reductions in the state and parameter spaces can be combined to render further reduction at the interface between different physics codes in a multi-physics model, with the accuracy quantified in a similar manner to the single-physics case. Although the proposed algorithms are generic in nature, we focus here on radiation transport models used in support of the design and analysis of nuclear reactor cores. In particular, we focus on replacing the traditional assembly calculations by ROM models to facilitate the generation of homogenized cross-sections for downstream core calculations. The implication is that assembly calculations could be done instantaneously, thereby precluding the need for the expensive evaluation of the few-group cross-sections for all possible core conditions. Given the generic nature of the algorithms, we make an effort to introduce the material in a general form to allow non-nuclear engineers to benefit from this work.
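The state-space reduction at the heart of a snapshots ROM can be sketched as follows: collect solution snapshots, extract a low-rank basis, and project new states onto it (a plain SVD is used here in place of the randomized range-finding algorithm; the synthetic "flux" profiles are invented for illustration):

```python
import numpy as np

# Snapshot matrix: columns are synthetic flux-like profiles on a 1-D mesh.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
snapshots = np.column_stack([
    np.sin(np.pi * x * (1.0 + 0.1 * rng.random())) + 0.05 * rng.random() * x
    for _ in range(30)
])

# Low-rank basis via SVD: keep enough modes for 99.99% of snapshot energy.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1
basis = U[:, :r]

# Project an unseen state onto the reduced basis and measure the error.
new_state = np.sin(np.pi * x * 1.05) + 0.02 * x
reduced = basis @ (basis.T @ new_state)
rel_err = np.linalg.norm(new_state - reduced) / np.linalg.norm(new_state)
```

The reduced dimension r is far smaller than the snapshot count, yet the projection reproduces unseen states from the same family closely, which is what makes near-instantaneous assembly surrogates plausible.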
Strategy and gaps for modeling, simulation, and control of hybrid systems
Energy Technology Data Exchange (ETDEWEB)
Rabiti, Cristian [Idaho National Lab. (INL), Idaho Falls, ID (United States); Garcia, Humberto E. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Hovsapian, Rob [Idaho National Lab. (INL), Idaho Falls, ID (United States); Kinoshita, Robert [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mesina, George L. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bragg-Sitton, Shannon M. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Boardman, Richard D. [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2015-04-01
The purpose of this report is to establish a strategy for modeling and simulation of candidate hybrid energy systems. Modeling and simulation are necessary to design, evaluate, and optimize the system's technical and economic performance. Accordingly, this report first establishes the simulation requirements for analyzing candidate hybrid systems. Simulation fidelity levels are established based on the temporal scale, real and synthetic data availability or needs, solution accuracy, and the output parameters needed to evaluate case-specific figures of merit. The associated computational and co-simulation resources needed are then established, including physical models where needed, code assembly and integrated solution platforms, mathematical solvers, and data processing. The report describes the figures of merit, system requirements, and constraints that are necessary and sufficient to characterize the grid and hybrid systems' behavior and market interactions. Loss of Load Probability (LOLP) and the Effective Cost of Energy (ECE), as opposed to the standard Levelized Cost of Electricity (LCOE), are introduced as technical and economic indices for integrated energy system evaluations. Financial assessment methods are subsequently introduced for the evaluation of non-traditional, hybrid energy systems. Algorithms for coupled and iterative evaluation of technical and economic performance are then discussed. This report further defines the modeling objectives, computational tools, solution approaches, and real-time data collection and processing (in some cases using real test units) that will be required to model, co-simulate, and optimize: (a) energy system components (e.g., power generation unit, chemical process, electricity management unit), (b) system domains (e.g., thermal, electrical or chemical energy generation, conversion, and transport), and (c) system control modules. Co-simulation of complex, tightly coupled
Some hybrid models applicable to dose-response relationships
International Nuclear Information System (INIS)
Kumazawa, Shigeru
1992-01-01
A new type of model of dose-response relationships has been studied as an initial step toward a reliable extrapolation of relationships determined from high-dose data into the low-dose range covered by radiation protection. The approach is to use a 'hybrid scale' of linear and logarithmic scales: the first model assumes that the normalized surviving fraction (ρS > 0) on a hybrid scale decreases linearly with dose on a linear scale, and the second that the induction on a log scale increases linearly with the normalized dose (τD > 0) on a hybrid scale. The hybrid scale may reflect the overall effectiveness of a complex system against adverse events caused by various agents. Data on leukemia in the atomic bomb survivors and from rodent experiments were used to show the applicability of hybrid scale models. The results proved that the proposed models fit these data at least as well as the popular linear-quadratic models, providing a possible interpretation of the shapes of dose-response curves, e.g. shouldered survival curves varied by recovery time. (author)
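A minimal numeric sketch of the first model, assuming the common log-linear hybrid transform h(u) = ln u + ρu (the paper's exact transform and parameter values may differ; this is only an illustration of the idea), shows survival falling linearly with dose on the hybrid scale:

```python
import numpy as np

def hybrid_scale(u, rho):
    """Hybrid of log and linear scales: h(u) = ln(u) + rho*u.
    Behaves like a log scale for small u and a linear scale for large u.
    (Assumed form for illustration; not necessarily Kumazawa's exact one.)"""
    return np.log(u) + rho * u

# Illustrative model: surviving fraction S(D) satisfies
#   hybrid_scale(S, rho) = hybrid_scale(1, rho) - k * D,
# i.e. survival decreases linearly with dose D on the hybrid scale.
rho, k = 2.0, 1.0
D = np.linspace(0.0, 5.0, 6)

def survival(D, rho, k, tol=1e-12):
    """Invert h(S) = h(1) - k*D for S in (0, 1] by bisection."""
    targets = np.atleast_1d(hybrid_scale(1.0, rho) - k * D)
    out = []
    for t in targets:
        lo, hi = 1e-12, 1.0
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if hybrid_scale(mid, rho) > t:
                hi = mid
            else:
                lo = mid
        out.append(0.5 * (lo + hi))
    return np.array(out)

S = survival(D, rho, k)   # S[0] = 1 at zero dose, then monotone decrease
```

Because h is steep both near S = 1 (linear part) and near S = 0 (log part), such curves can exhibit the shouldered shapes the abstract mentions.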
Directory of Open Access Journals (Sweden)
Zahra Pourabdollahi
2017-12-01
Full Text Available The supplier evaluation and selection problem is among the most important logistics decisions and has been addressed extensively in supply chain management. The same decision is also important in freight transportation, since it identifies trade relationships between business establishments and determines commodity flows between production and consumption points. The commodity flows are then used as input to freight transportation models to determine cargo movements and their characteristics, including mode choice and shipment size. Various approaches have been proposed to explore this latter problem in previous studies. Traditionally, potential suppliers are evaluated and selected using only price/cost as the influential criterion and state-of-practice methods. This paper introduces a hybrid agent-based computational economics and optimization approach to supplier selection. The proposed model combines an agent-based multi-criteria supplier evaluation approach with a multi-objective optimization model to capture both the behavioral and economic aspects of the supplier selection process. The model uses a system of ordered response models to determine the importance weights of the different criteria in supplier evaluation from the buyers' point of view. The estimated weights are then used to calculate a utility for each potential supplier in the market and to rank them. The calculated utilities are then entered into a mathematical programming model in which the best suppliers are selected by maximizing the total accrued utility for all buyers and minimizing total shipping costs, while balancing the capacity of potential suppliers to ensure market-clearing mechanisms. The proposed model was implemented under an operational agent-based supply chain and freight transportation framework for the Chicago Metropolitan Area.
Computer modeling of liquid crystals
International Nuclear Information System (INIS)
Al-Barwani, M.S.
1999-01-01
In this thesis, we investigate several aspects of the behaviour of liquid crystal molecules near interfaces using computer simulation. We briefly discuss experimental, theoretical and computer simulation studies of some of the liquid crystal interfaces. We then describe three essentially independent research topics. The first of these concerns extensive simulations of a liquid crystal formed by long flexible molecules. We examined the bulk behaviour of the model and its structure. Studies of a film of smectic liquid crystal surrounded by vapour were also carried out. Extensive simulations were also done for a long-molecule/short-molecule mixture, and studies were then carried out to investigate the liquid-vapour interface of the mixture. Next, we report the results of large-scale simulations of soft spherocylinders of two different lengths. We examined the bulk coexistence of the nematic and isotropic phases of the model. Once the bulk coexistence behaviour was known, properties of the nematic-isotropic interface were investigated. This was done by fitting order parameter and density profiles to appropriate mathematical functions and calculating the biaxial order parameter. We briefly discuss the ordering at the interfaces and make attempts to calculate the surface tension. Finally, in our third project, we study the effects of different surface topographies on creating bistable nematic liquid crystal devices. This was carried out using a model based on the discretisation of the free energy on a lattice. We use simulation to find the lowest-energy states and investigate whether they are degenerate in energy. We also test our model by studying the Frederiks transition and comparing with analytical and other simulation results. (author)
STAR - A computer language for hybrid AI applications
Borchardt, G. C.
1986-01-01
Constructing Artificial Intelligence application systems which rely on both symbolic and non-symbolic processing places heavy demands on the communication of data between dissimilar languages. This paper describes STAR (Simple Tool for Automated Reasoning), a computer language for the development of AI application systems which supports the transfer of data structures between a symbolic level and a non-symbolic level defined in languages such as FORTRAN, C and PASCAL. The organization of STAR is presented, followed by the description of an application involving STAR in the interpretation of airborne imaging spectrometer data.
Computer models for economic and silvicultural decisions
Rosalie J. Ingram
1989-01-01
Computer systems can help simplify decisionmaking to manage forest ecosystems. We now have computer models to help make forest management decisions by predicting changes associated with a particular management action. Models also help you evaluate alternatives. To be effective, the computer models must be reliable and appropriate for your situation.
A new adaptive hybrid electromagnetic damper: modelling, optimization, and experiment
International Nuclear Information System (INIS)
Asadi, Ehsan; Ribeiro, Roberto; Behrad Khamesee, Mir; Khajepour, Amir
2015-01-01
This paper presents the development of a new electromagnetic hybrid damper which provides regenerative adaptive damping force for various applications. Recently, the introduction of electromagnetic technologies to damping systems has provided researchers with new opportunities for the realization of adaptive semi-active damping systems with the added benefit of energy recovery. In this research, a hybrid electromagnetic damper is proposed. The hybrid damper is configured to operate with viscous and electromagnetic subsystems. The viscous medium provides a bias and fail-safe damping force, while the electromagnetic component adds adaptability and the capacity for regeneration to the hybrid design. The electromagnetic component is modeled and analyzed using analytical (lumped equivalent magnetic circuit) and electromagnetic finite element method (FEM) (COMSOL ® software package) approaches. By implementing both modeling approaches, an optimization of the geometric aspects of the electromagnetic subsystem is obtained. Based on the proposed electromagnetic hybrid damping concept and the preliminary optimization solution, a prototype is designed and fabricated. A good agreement is observed between the experimental and FEM results for the magnetic field distribution and electromagnetic damping forces. These results validate the accuracy of the modeling approach and the preliminary optimization solution. An analytical model is also presented for the viscous damping force and is compared with experimental results. The results show that the damper is able to produce damping coefficients of 1300 and 0–238 N s m⁻¹ through the viscous and electromagnetic components, respectively. (paper)
Modeling and Simulation of Multi-scale Environmental Systems with Generalized Hybrid Petri Nets
Directory of Open Access Journals (Sweden)
Mostafa eHerajy
2015-07-01
Full Text Available Predicting and studying the dynamics and properties of environmental systems necessitates the construction and simulation of mathematical models entailing different levels of complexity. Such computational experiments often require the combination of discrete and continuous variables as well as processes operating at different time scales. Furthermore, the iterative steps of constructing and analyzing environmental models may involve researchers with different backgrounds. Hybrid Petri nets can contribute to overcoming such challenges, as they facilitate the implementation of systems integrating discrete and continuous dynamics. Additionally, the visual depiction of model components helps to bridge the gap between scientists with distinct expertise working on the same problem. Thus, modeling environmental systems with hybrid Petri nets enables the construction of complex processes while keeping the models comprehensible for researchers with significantly divergent educational paths working on the same project. In this paper we propose the utilization of a special class of hybrid Petri nets, Generalized Hybrid Petri Nets (GHPN), to model and simulate environmental systems comprising processes that interact at different time scales. GHPN integrate stochastic and deterministic semantics as well as other types of special basic events. Moreover, a case study is presented to illustrate the use of GHPN in constructing and simulating multi-timescale environmental scenarios.
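The essence of a hybrid Petri net, continuous marking evolution punctuated by discrete transition firings, can be sketched in a few lines (the reservoir interpretation, rates, and threshold below are illustrative assumptions, not taken from the paper):

```python
def simulate(t_end=10.0, dt=0.01, inflow=0.3, threshold=1.0):
    """One continuous place ('reservoir level') fed by a continuous
    transition, plus a discrete transition that fires a release event
    whenever the level reaches the threshold."""
    water, events = 0.0, 0
    for _ in range(int(round(t_end / dt))):
        water += inflow * dt      # continuous dynamics (explicit Euler step)
        if water >= threshold:    # discrete transition enabled
            water -= threshold    # firing consumes continuous marking
            events += 1           # and produces one token
    return water, events

water, events = simulate()
```

Even this toy net shows the two semantics coexisting: the continuous place integrates a rate, while the discrete transition changes the marking instantaneously when its enabling condition is met.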
Hybrid programming model for implicit PDE simulations on multicore architectures
Kaushik, Dinesh; Keyes, David E.; Balay, Satish; Smith, Barry F.
2011-01-01
The complexity of programming modern multicore processor based clusters is rapidly rising, with GPUs adding further demand for fine-grained parallelism. This paper analyzes the performance of the hybrid (MPI+OpenMP) programming model in the context of an implicit unstructured-mesh CFD code. At the implementation level, the effects of cache locality, update management, work division, and synchronization frequency are studied. The hybrid model presents interesting algorithmic opportunities as well: the linear system solver converges faster than in the pure MPI case, since the parallel preconditioner stays stronger when the hybrid model is used. This implies significant savings in the cost of communication and synchronization (explicit and implicit). Even though OpenMP-based parallelism is easier to implement (within a subdomain assigned to one MPI process for simplicity), getting good performance requires attention to data-partitioning issues similar to those in the message-passing case. © 2011 Springer-Verlag.
Directory of Open Access Journals (Sweden)
Longjie Li
2018-01-01
Full Text Available In order to protect computing systems from malicious attacks, network intrusion detection systems have become an important part of the security infrastructure. Recently, hybrid models that integrate several machine learning techniques have captured more attention from researchers. In this paper, a novel hybrid model is proposed for detecting network intrusions effectively. In the proposed model, the Gini index is used to select the optimal subset of features, the gradient boosted decision tree (GBDT) algorithm is adopted to detect network attacks, and the particle swarm optimization (PSO) algorithm is utilized to optimize the parameters of the GBDT. The performance of the proposed model is experimentally evaluated in terms of accuracy, detection rate, precision, F1-score, and false alarm rate using the NSL-KDD dataset. Experimental results show that the proposed model is superior to the compared methods.
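The Gini-based selection step can be sketched as ranking features by the impurity decrease of their best single-threshold split (the synthetic data below is invented; the paper applies this to NSL-KDD features and feeds the selected subset to a PSO-tuned GBDT, both omitted here):

```python
import numpy as np

def gini_impurity(labels):
    """Gini impurity 1 - sum(p_c^2) of a label vector."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p**2)

def gini_gain(feature, labels):
    """Impurity decrease of the best single threshold split on `feature`;
    used here to rank features, as in Gini-index feature selection."""
    base, best = gini_impurity(labels), 0.0
    for thr in np.unique(feature):
        left, right = labels[feature <= thr], labels[feature > thr]
        if len(left) == 0 or len(right) == 0:
            continue
        w = len(left) / len(labels)
        split = w * gini_impurity(left) + (1 - w) * gini_impurity(right)
        best = max(best, base - split)
    return best

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 300)                        # benign / attack labels
informative = y + 0.3 * rng.standard_normal(300)   # correlates with label
noise = rng.standard_normal(300)                   # pure noise feature
scores = [gini_gain(informative, y), gini_gain(noise, y)]
```

The informative feature scores far above the noise feature, so a top-k cut on these scores yields the reduced feature subset passed to the classifier.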
A hybrid reliability algorithm using PSO-optimized Kriging model and adaptive importance sampling
Tong, Cao; Gong, Haili
2018-03-01
This paper aims to reduce the computational cost of reliability analysis. A new hybrid algorithm is proposed based on a PSO-optimized Kriging model and an adaptive importance sampling method. First, the particle swarm optimization (PSO) algorithm is used to optimize the parameters of the Kriging model. A typical function is fitted to validate the improvement by comparing results of the PSO-optimized Kriging model with those of the original Kriging model. Second, a hybrid algorithm for reliability analysis combining the optimized Kriging model and adaptive importance sampling is proposed. Two cases from the literature are given to validate its efficiency and correctness. The proposed method proves more efficient because it requires only a small number of sample points, according to the comparison results.
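The PSO component can be sketched as follows; here it minimizes a stand-in quadratic objective, whereas in the paper the objective would be the Kriging model's fitting error as a function of its correlation parameters (the swarm hyperparameters are conventional defaults, not values from the paper):

```python
import numpy as np

def pso(f, lo, hi, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer minimizing f over the box [lo, hi]^d."""
    rng = np.random.default_rng(seed)
    d = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, d))   # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest = x.copy()                                 # personal bests
    pval = np.array([f(p) for p in x])
    g = pbest[np.argmin(pval)].copy()                # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, d))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(p) for p in x])
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        g = pbest[np.argmin(pval)].copy()
    return g, pval.min()

# Stand-in objective: in the paper this would be the Kriging model-fit error
# as a function of its correlation parameter theta; here a quadratic bowl.
theta_opt, err = pso(lambda t: np.sum((t - 0.5)**2),
                     lo=np.array([0.0]), hi=np.array([2.0]))
```

Swapping the lambda for a function that trains the Kriging model at a candidate theta and returns its validation error turns this sketch into the paper's tuning loop.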
Energy Technology Data Exchange (ETDEWEB)
Elsheikh, Ahmed H., E-mail: aelsheikh@ices.utexas.edu [Institute for Computational Engineering and Sciences (ICES), University of Texas at Austin, TX (United States); Institute of Petroleum Engineering, Heriot-Watt University, Edinburgh EH14 4AS (United Kingdom); Wheeler, Mary F. [Institute for Computational Engineering and Sciences (ICES), University of Texas at Austin, TX (United States); Hoteit, Ibrahim [Department of Earth Sciences and Engineering, King Abdullah University of Science and Technology (KAUST), Thuwal (Saudi Arabia)
2014-02-01
A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence for prior model selection, and it has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM requires only forward model runs; the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied for Bayesian calibration and prior model selection of several nonlinear subsurface flow problems.
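A minimal nested-sampling sketch illustrates the evidence estimate that HNS accelerates; the naive rejection step below is exactly the constrained-sampling step the paper replaces with HMC (the toy likelihood, prior, and live-point count are illustrative assumptions):

```python
import numpy as np

def nested_sampling(loglike, prior_draw, n_live=100, iters=600, seed=0):
    """Minimal nested sampling. The constrained replacement is done by naive
    rejection from the prior; HNS swaps in HMC (with SEM gradients) here."""
    rng = np.random.default_rng(seed)
    live = np.array([prior_draw(rng) for _ in range(n_live)])
    logl = np.array([loglike(p) for p in live])
    Z, X_prev = 0.0, 1.0
    for i in range(1, iters + 1):
        worst = int(np.argmin(logl))
        X = np.exp(-i / n_live)          # expected prior-mass shrinkage
        Z += (X_prev - X) * np.exp(logl[worst])
        X_prev = X
        threshold = logl[worst]
        while True:                      # constrained sampling step
            cand = prior_draw(rng)
            if loglike(cand) > threshold:
                break
        live[worst], logl[worst] = cand, loglike(cand)
    Z += X_prev * float(np.mean(np.exp(logl)))   # remaining live points
    return Z

# Toy calibration problem: standard-normal likelihood, uniform prior on
# [-5, 5]. Analytically Z = (1/10) * integral of N(x; 0, 1) ~= 0.1.
loglike = lambda x: -0.5 * x**2 - 0.5 * np.log(2.0 * np.pi)
prior_draw = lambda rng: rng.uniform(-5.0, 5.0)
Z = nested_sampling(loglike, prior_draw)
```

The rejection step's acceptance rate collapses as the likelihood threshold rises, which is precisely why replacing it with gradient-guided HMC moves pays off for expensive simulators.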
Hybrid Simulation Modeling to Estimate U.S. Energy Elasticities
Baylin-Stern, Adam C.
This paper demonstrates how a U.S. application of CIMS, a technologically explicit and behaviourally realistic energy-economy simulation model that includes macro-economic feedbacks, can be used to derive estimates of elasticity of substitution (ESUB) and autonomous energy efficiency index (AEEI) parameters. The ability of economies to reduce greenhouse gas emissions depends on the potential for households and industry to decrease overall energy usage and move from higher- to lower-emissions fuels. Energy economists commonly refer to ESUB estimates to understand the degree of responsiveness of various sectors of an economy, and use the estimates to inform computable general equilibrium models used to study climate policies. Using CIMS, I have generated a set of future 'pseudo-data' based on a series of simulations in which I vary energy and capital input prices over a wide range. I then used this data set to estimate the parameters of transcendental logarithmic production functions using regression techniques. From the production function parameter estimates, I calculated an array of elasticity-of-substitution values between input pairs. Additionally, this paper demonstrates how CIMS can be used to calculate price-independent changes in energy efficiency in the form of the AEEI, by comparing energy consumption between technologically frozen and 'business as usual' simulations. The paper concludes with some ideas for model and methodological improvement, and how these might figure into future work on the estimation of ESUBs from CIMS. Keywords: Elasticity of substitution; hybrid energy-economy model; translog; autonomous energy efficiency index; rebound effect; fuel switching.
Nuclear Hybrid Energy Systems FY16 Modeling Efforts at ORNL
Energy Technology Data Exchange (ETDEWEB)
Cetiner, Sacit M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Greenwood, Michael Scott [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Harrison, Thomas J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Qualls, A. L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Guler Yigitoglu, Askin [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Fugate, David W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2016-09-01
A nuclear hybrid system uses a nuclear reactor as the basic power generation unit. The power generated by the nuclear reactor is utilized by one or more power customers as either thermal power, electrical power, or both. In general, a nuclear hybrid system will couple the nuclear reactor to at least one thermal power user in addition to the power conversion system. The definition and architecture of a particular nuclear hybrid system is flexible depending on local market needs and opportunities. For example, locations in need of potable water may be best served by coupling a desalination plant to the nuclear system. Similarly, an area near oil refineries may have a need for emission-free hydrogen production. A nuclear hybrid system expands the nuclear power plant from its more familiar central power station role by diversifying its immediately and directly connected customer base. The definition, design, analysis, and optimization work currently performed with respect to nuclear hybrid systems represents the work of three national laboratories. Idaho National Laboratory (INL) is the lead laboratory, working with Argonne National Laboratory (ANL) and Oak Ridge National Laboratory (ORNL). Each laboratory is providing modeling and simulation expertise for the integration of the hybrid system.
Ale, Angelique; Ermolayev, Vladimir; Herzog, Eva; Cohrs, Christian; de Angelis, Martin Hrabé; Ntziachristos, Vasilis
2012-06-01
The development of hybrid optical tomography methods to improve imaging performance has been suggested over a decade ago and has been experimentally demonstrated in animals and humans. Here we examined in vivo performance of a camera-based hybrid fluorescence molecular tomography (FMT) system for 360° imaging combined with X-ray computed tomography (XCT). Offering an accurately co-registered, information-rich hybrid data set, FMT-XCT has new imaging possibilities compared to stand-alone FMT and XCT. We applied FMT-XCT to a subcutaneous 4T1 tumor mouse model, an Aga2 osteogenesis imperfecta model and a Kras lung cancer mouse model, using XCT information during FMT inversion. We validated in vivo imaging results against post-mortem planar fluorescence images of cryoslices and histology data. Besides offering concurrent anatomical and functional information, FMT-XCT resulted in the most accurate FMT performance to date. These findings indicate that addition of FMT optics into the XCT gantry may be a potent upgrade for small-animal XCT systems.
A composite computational model of liver glucose homeostasis. I. Building the composite model.
Hetherington, J; Sumner, T; Seymour, R M; Li, L; Rey, M Varela; Yamaji, S; Saffrey, P; Margoninski, O; Bogle, I D L; Finkelstein, A; Warner, A
2012-04-07
A computational model of the glucagon/insulin-driven liver glucohomeostasis function, focusing on the buffering of glucose into glycogen, has been developed. The model exemplifies an 'engineering' approach to modelling in systems biology, and was produced by linking together seven component models of separate aspects of the physiology. The component models use a variety of modelling paradigms and degrees of simplification. Model parameters were determined by an iterative hybrid of fitting to high-scale physiological data, and determination from small-scale in vitro experiments or molecular biological techniques. The component models were not originally designed for inclusion within such a composite model, but were integrated, with modification, using our published modelling software and computational frameworks. This approach facilitates the development of large and complex composite models, although, inevitably, some compromises must be made when composing the individual models. Composite models of this form have not previously been demonstrated.
Static stiffness modeling of a novel hybrid redundant robot machine
International Nuclear Information System (INIS)
Li Ming; Wu Huapeng; Handroos, Heikki
2011-01-01
This paper presents a modeling method to study the stiffness of a hybrid serial-parallel robot, IWR (Intersector Welding Robot), for the assembly of the ITER vacuum vessel. The stiffness matrix of the basic element in the robot is evaluated using matrix structural analysis (MSA); the stiffness of the parallel mechanism is investigated by taking into account the deformations of both the hydraulic limbs and the joints; and the stiffness of the whole integrated robot is evaluated by employing the virtual joint method and the principle of virtual work. The obtained stiffness model of the hybrid robot is analytical, and the deformation results over the robot workspace under a certain external load are presented.
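The element-assembly idea behind such stiffness models can be illustrated at its simplest with scalar stiffnesses combined in series and in parallel (the limb and joint values below are invented for illustration and are not the IWR's; the paper assembles full 6-DoF stiffness matrices):

```python
def serial(*k):
    """Stiffness of elements connected in series: compliances (1/k) add."""
    return 1.0 / sum(1.0 / ki for ki in k)

def parallel(*k):
    """Stiffness of elements acting in parallel: stiffnesses add."""
    return sum(k)

# Toy hybrid chain: two hydraulic limbs acting in parallel (2e6 and 3e6 N/m)
# carried in series with a joint (4e6 N/m). All values are illustrative.
k_parallel = parallel(2e6, 3e6)    # combined limb stiffness, 5e6 N/m
k_total = serial(k_parallel, 4e6)  # end-to-end stiffness of the chain
deflection = 1000.0 / k_total      # deflection under a 1 kN load, in metres
```

MSA generalizes exactly this bookkeeping: each member contributes a stiffness matrix, and the assembly rules for serial and parallel paths become matrix compliance addition and stiffness addition.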
Modeling of hybrid vehicle fuel economy and fuel engine efficiency
Wu, Wei
"Near-CV" (i.e., near-conventional vehicle) hybrid vehicles, with an internal combustion engine, and a supplementary storage with low-weight, low-energy but high-power capacity, are analyzed. This design avoids the shortcoming of the "near-EV" and the "dual-mode" hybrid vehicles that need a large energy storage system (in terms of energy capacity and weight). The small storage is used to optimize engine energy management and can provide power when needed. The energy advantage of the "near-CV" design is to reduce reliance on the engine at low power, to enable regenerative braking, and to provide good performance with a small engine. The fuel consumption of internal combustion engines, which might be applied to hybrid vehicles, is analyzed by building simple analytical models that reflect the engines' energy loss characteristics. Both diesel and gasoline engines are modeled. The simple analytical models describe engine fuel consumption at any speed and load point by describing the engine's indicated efficiency and friction. The engine's indicated efficiency and heat loss are described in terms of several easy-to-obtain engine parameters, e.g., compression ratio, displacement, bore and stroke. Engine friction is described in terms of parameters obtained by fitting available fuel measurements on several diesel and spark-ignition engines. The engine models developed are shown to conform closely to experimental fuel consumption and motored friction data. A model of the energy use of "near-CV" hybrid vehicles with different storage mechanism is created, based on simple algebraic description of the components. With powertrain downsizing and hybridization, a "near-CV" hybrid vehicle can obtain a factor of approximately two in overall fuel efficiency (mpg) improvement, without considering reductions in the vehicle load.
Hybrid VLSI/QCA Architecture for Computing FFTs
Fijany, Amir; Toomarian, Nikzad; Modarres, Katayoon; Spotnitz, Matthew
2003-01-01
A data-processor architecture that would incorporate elements of both conventional very-large-scale integrated (VLSI) circuitry and quantum-dot cellular automata (QCA) has been proposed to enable the highly parallel and systolic computation of fast Fourier transforms (FFTs). The proposed circuit would complement the QCA-based circuits described in several prior NASA Tech Briefs articles, namely Implementing Permutation Matrices by Use of Quantum Dots (NPO-20801), Vol. 25, No. 10 (October 2001), page 42; Compact Interconnection Networks Based on Quantum Dots (NPO-20855), Vol. 27, No. 1 (January 2003), page 32; and Bit-Serial Adder Based on Quantum Dots (NPO-20869), Vol. 27, No. 1 (January 2003), page 35. The cited prior articles described the limitations of VLSI circuitry and the major potential advantage afforded by QCA. To recapitulate: in a VLSI circuit, signal paths that are required not to interact with each other must not cross in the same plane. In contrast, for reasons too complex to describe in the limited space available for this article, suitably designed and operated QCA-based signal paths that are required not to interact with each other can nevertheless be allowed to cross each other in the same plane without adverse effect. In principle, this characteristic could be exploited to design compact, coplanar, simple (relative to VLSI) QCA-based networks to implement complex, advanced interconnection schemes.
Ultra-Short-Term Wind Power Prediction Using a Hybrid Model
Mohammed, E.; Wang, S.; Yu, J.
2017-05-01
This paper aims to develop and apply a hybrid model of two data-analytical methods, multiple linear regression and least squares (MLR&LS), for ultra-short-term wind power prediction (WPP), taking electricity demand in Northeast China as an example. The data were obtained from historical wind power records of an offshore region and from a wind farm of the wind power plant in the area. The WPP is achieved in two stages: first, the ratios of wind power are forecasted using the proposed hybrid method, and then these ratios are transformed to obtain the forecasted values. The hybrid model combines the persistence method, MLR and LS. The proposed method includes two prediction types, multi-point prediction and single-point prediction. The WPP is tested against different models such as the autoregressive moving average (ARMA), autoregressive integrated moving average (ARIMA) and artificial neural network (ANN) models. Comparing the results of these models confirms the validity of the proposed hybrid model in terms of error and correlation coefficient, showing that the proposed method works effectively. Additionally, forecasting errors were computed and compared to improve understanding of how to depict highly variable WPP and the correlations between actual and predicted wind power.
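The abstract does not give the exact MLR&LS formulation; as a hedged sketch of the regression stage, lagged wind-power ratios can be fitted by ordinary least squares. The synthetic series and the lag count below are assumptions, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical hourly wind-power ratios (power divided by rated capacity).
ratios = 0.5 + 0.3 * np.sin(np.arange(200) / 10.0) + 0.05 * rng.standard_normal(200)

def make_lagged(series, n_lags):
    """Design matrix of n_lags past values (plus intercept) -> next value."""
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    X = np.column_stack([np.ones(len(X)), X])   # intercept column
    y = series[n_lags:]
    return X, y

X, y = make_lagged(ratios, n_lags=3)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)    # least-squares coefficients
pred = X @ beta
rmse = np.sqrt(np.mean((pred - y) ** 2))
```

In a two-stage scheme like the one described, `pred` would then be transformed back from ratios to absolute wind power.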
Winkelmann, Stefanie; Schütte, Christof
2017-09-01
Well-mixed stochastic chemical kinetics are properly modeled by the chemical master equation (CME) and associated Markov jump processes in molecule number space. If the reactants are present in large amounts, however, corresponding simulations of the stochastic dynamics become computationally expensive and model reductions are demanded. The classical model reduction approach uniformly rescales the overall dynamics to obtain deterministic systems characterized by ordinary differential equations, the well-known mass action reaction rate equations. For systems with multiple scales, there exist hybrid approaches that keep parts of the system discrete while the remainder is approximated either using Langevin dynamics or deterministically. This paper aims to give a coherent overview of the different hybrid approaches, focusing on their basic concepts and the relations between them. We derive a novel general description of such hybrid models that allows various forms to be expressed by one type of equation. We also examine to what extent the approaches apply to model extensions of the CME for dynamics which do not comply with the central well-mixed condition and require some spatial resolution. A simple but meaningful gene expression system with negative self-regulation is analysed to illustrate the different approximation qualities of some of the hybrid approaches discussed. In particular, we reveal the cause of error in the case of small volume approximations.
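A minimal sketch of one such hybrid approach, with invented rate constants: the gene state jumps stochastically (the off-switching rate grows with protein level, i.e. negative self-regulation) while the protein follows a deterministic ODE between jumps. This simple time-stepped scheme only approximates an exact jump-process simulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical rate constants (illustrative only).
k_on, k_off = 0.5, 0.02      # gene activation; repression per unit protein
k_prod, k_deg = 10.0, 1.0    # protein production (gene on) and degradation
dt, T = 0.01, 50.0

g, p = 1, 0.0                # discrete gene state (0/1), continuous protein level
trace = []
for _ in range(int(T / dt)):
    # Discrete part: switch the gene with probability rate * dt.
    rate = k_off * p if g == 1 else k_on
    if rng.random() < rate * dt:
        g = 1 - g
    # Continuous part: explicit Euler step for the protein ODE.
    p += (k_prod * g - k_deg * p) * dt
    trace.append(p)

trace = np.array(trace)
```

Keeping the gene state discrete while treating the abundant protein deterministically is the essential idea behind the piecewise-deterministic hybrid models surveyed in the paper.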
Yeganeh, B.; Motlagh, M. Shafie Pour; Rashidi, Y.; Kamalan, H.
2012-08-01
Due to the health impacts caused by exposure to air pollutants in urban areas, monitoring and forecasting of air quality parameters have become an important topic in atmospheric and environmental research today. Knowledge of the dynamics and complexity of air pollutant behavior has made artificial intelligence models a useful tool for more accurate pollutant concentration prediction. This paper focuses on an innovative method of daily air pollution prediction using a combination of a Support Vector Machine (SVM) as predictor and Partial Least Squares (PLS) as a data selection tool, based on measured CO concentrations. The CO concentrations of the Rey monitoring station in the south of Tehran, from January 2007 to February 2011, were used to test the effectiveness of this method. The hourly CO concentrations were predicted using the SVM and the hybrid PLS-SVM models. Similarly, daily CO concentrations were predicted based on the same four years of measured data. Results demonstrated that both models have good prediction ability; however, the hybrid PLS-SVM is more accurate. In the analysis presented in this paper, statistical estimators including the relative mean error, root mean squared error and mean absolute relative error were employed to compare the performance of the models. It was concluded that the errors decrease after size reduction and the coefficients of determination increase from 56-81% for the SVM model to 65-85% for the hybrid PLS-SVM model. It was also found that the hybrid PLS-SVM model required less computational time than the SVM model, as expected, supporting the more accurate and faster prediction ability of the hybrid PLS-SVM model.
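As an illustration of the data-reduction idea only (not the authors' pipeline, and with a plain linear fit standing in for the SVM predictor), a single-component PLS projection can be computed directly with NumPy:

```python
import numpy as np

def pls1_fit(X, y):
    """Single-component PLS1: project X onto the direction most
    covariant with y, then fit y linearly on that score."""
    w = X.T @ y
    w = w / np.linalg.norm(w)        # unit-length weight vector
    t = X @ w                        # scores: the single reduced feature
    b = (t @ y) / (t @ t)            # least-squares slope on the score
    return w, b

def pls1_predict(X, w, b):
    return b * (X @ w)

# Toy data: 5 candidate predictors, only the first drives the target.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((100, 5)))   # orthonormal columns
X, y = Q, 2.0 * Q[:, 0]

w, b = pls1_fit(X, y)
pred = pls1_predict(X, w, b)
```

In a PLS-SVM pipeline, the score `t` (or several such components) would be fed to an SVM regressor instead of the linear slope used here.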
Improved signal processing approaches in an offline simulation of a hybrid brain–computer interface
Brunner, Clemens; Allison, Brendan Z.; Krusienski, Dean J.; Kaiser, Vera; Müller-Putz, Gernot R.; Pfurtscheller, Gert; Neuper, Christa
2012-01-01
In a conventional brain–computer interface (BCI) system, users perform mental tasks that yield specific patterns of brain activity. A pattern recognition system determines which brain activity pattern a user is producing and thereby infers the user's mental task, allowing users to send messages or commands through brain activity alone. Unfortunately, despite extensive research to improve classification accuracy, BCIs almost always exhibit errors, which are sometimes so severe that effective communication is impossible. We recently introduced a new idea to improve accuracy, especially for users with poor performance. In an offline simulation of a "hybrid" BCI, subjects performed two mental tasks independently and then simultaneously. This hybrid BCI could use two different types of brain signals common in BCIs – event-related desynchronization (ERD) and steady-state evoked potentials (SSEPs). This study suggested that such a hybrid BCI is feasible. Here, we re-analyzed the data from our initial study. We explored eight different signal processing methods that aimed to improve classification and to further assess both the causes and the extent of the benefits of the hybrid condition. Most analyses showed that the improved methods described here yielded a statistically significant improvement over our initial study. Some of these improvements could be relevant to conventional BCIs as well. Moreover, the number of BCI illiterates (users unable to attain effective control) could be reduced with the hybrid condition. Results are also discussed in terms of dual task interference and relevance to protocol design in hybrid BCIs. PMID:20153371
A hybrid source-driven method to compute fast neutron fluence in reactor pressure vessel - 017
International Nuclear Information System (INIS)
Ren-Tai, Chiang
2010-01-01
A hybrid source-driven method is developed to compute fast neutron fluence with neutron energy greater than 1 MeV in nuclear reactor pressure vessel (RPV). The method determines neutron flux by solving a steady-state neutron transport equation with hybrid neutron sources composed of peripheral fixed fission neutron sources and interior chain-reacted fission neutron sources. The relative rod-by-rod power distribution of the peripheral assemblies in a nuclear reactor obtained from reactor core depletion calculations and subsequent rod-by-rod power reconstruction is employed as the relative rod-by-rod fixed fission neutron source distribution. All fissionable nuclides other than U-238 (such as U-234, U-235, U-236, Pu-239 etc) are replaced with U-238 to avoid counting the fission contribution twice and to preserve fast neutron attenuation for heavy nuclides in the peripheral assemblies. An example is provided to show the feasibility of the method. Since the interior fuels only have a marginal impact on RPV fluence results due to rapid attenuation of interior fast fission neutrons, a generic set or one of several generic sets of interior fuels can be used as the driver and only the neutron sources in the peripheral assemblies will be changed in subsequent hybrid source-driven fluence calculations. Consequently, this hybrid source-driven method can simplify and reduce cost for fast neutron fluence computations. This newly developed hybrid source-driven method should be a useful and simplified tool for computing fast neutron fluence at selected locations of interest in RPV of contemporary nuclear power reactors. (authors)
Modeling, hybridization, and optimal charging of electrical energy storage systems
Parvini, Yasha
The rising rate of global energy demand alongside dwindling fossil fuel resources has motivated research into alternative and sustainable solutions. Within this area of research, electrical energy storage systems are pivotal in applications including electrified vehicles, renewable power generation, and electronic devices. The approach of this dissertation is to elucidate the bottlenecks of integrating supercapacitors and batteries in energy systems and propose solutions by means of modeling, control, and experimental techniques. In the first step, the supercapacitor cell is modeled in order to gain a fundamental understanding of its electrical and thermal dynamics. The dependence of electrical parameters on state of charge (SOC), current direction and magnitude (20-200 A), and temperatures ranging from -40°C to 60°C was embedded in this computationally efficient model. The coupled electro-thermal model was parameterized using specifically designed temporal experiments and then validated by the application of real-world duty cycles. Driving range is one of the major challenges of electric vehicles compared to combustion vehicles. In order to shed light on the benefits of hybridizing a lead-acid-driven electric vehicle via supercapacitors, a model was parameterized for the lead-acid battery and combined with the model already developed for the supercapacitor to build the hybrid battery-supercapacitor model. A hardware-in-the-loop (HIL) setup consisting of a custom-built DC/DC converter, a micro-controller (µC) to implement the power management strategy, a 12 V lead-acid battery, and a 16.2 V supercapacitor module was built to perform the validation experiments. Charging electrical energy storage systems in an efficient and quick manner motivated an optimal control problem with the objective of maximizing the charging efficiency for supercapacitors, lead-acid, and lithium-ion batteries. Pontryagin's minimum principle was used to solve the problems
Predictive simulation of bidirectional Glenn shunt using a hybrid blood vessel model.
Li, Hao; Leow, Wee Kheng; Chiu, Ing-Sh
2009-01-01
This paper proposes a method for performing predictive simulation of cardiac surgery. It applies a hybrid approach to model the deformation of blood vessels. The hybrid blood vessel model consists of a reference Cosserat rod and a surface mesh. The reference Cosserat rod models the blood vessel's global bending, stretching, twisting and shearing in a physically correct manner, and the surface mesh models the surface details of the blood vessel. In this way, the deformation of blood vessels can be computed efficiently and accurately. Our predictive simulation system can produce complex surgical results given a small amount of user input. It allows the surgeon to easily explore various surgical options and evaluate them. Tests of the system using the bidirectional Glenn shunt (BDG) as an application example show that the results produced by the system are similar to real surgical results.
Hybrid attacks on model-based social recommender systems
Yu, Junliang; Gao, Min; Rong, Wenge; Li, Wentao; Xiong, Qingyu; Wen, Junhao
2017-10-01
With the growing popularity of online social platforms, social network based approaches to recommendation have emerged. However, because of the open nature of rating systems and social networks, social recommender systems are susceptible to malicious attacks. In this paper, we present a novel attack, which inherits characteristics of both the rating attack and the relation attack, and term it the hybrid attack. Further, we explore the impact of the hybrid attack on model-based social recommender systems in multiple aspects. The experimental results show that the hybrid attack is more destructive than the rating attack in most cases. In addition, users and items with fewer ratings will be influenced more when attacked. Last but not least, the findings suggest that spammers do not depend on feedback links from normal users to become more powerful; unilateral links can make the hybrid attack effective enough. Since unilateral links are much cheaper, the hybrid attack will be a great threat to model-based social recommender systems.
Frog: Asynchronous Graph Processing on GPU with Hybrid Coloring Model
Energy Technology Data Exchange (ETDEWEB)
Shi, Xuanhua; Luo, Xuan; Liang, Junling; Zhao, Peng; Di, Sheng; He, Bingsheng; Jin, Hai
2018-01-01
GPUs have been increasingly used to accelerate graph processing for complicated computational problems regarding graph theory. Many parallel graph algorithms adopt the asynchronous computing model to accelerate the iterative convergence. Unfortunately, consistent asynchronous computing requires locking or atomic operations, leading to significant penalties/overheads when implemented on GPUs. As such, a coloring algorithm is adopted to separate the vertices with potential updating conflicts, guaranteeing the consistency/correctness of the parallel processing. Common coloring algorithms, however, may suffer from low parallelism because of the large number of colors generally required for processing a large-scale graph with billions of vertices. We propose a light-weight asynchronous processing framework called Frog with a preprocessing/hybrid coloring model. The fundamental idea is based on the Pareto principle (or 80-20 rule) about coloring algorithms, as we observed through a large number of real-world graph coloring cases. We find that a majority of vertices (about 80%) are colored with only a few colors, such that they can be read and updated with a very high degree of parallelism without violating sequential consistency. Accordingly, our solution separates the processing of the vertices based on the distribution of colors. In this work, we mainly answer three questions: (1) how to partition the vertices in a sparse graph with maximized parallelism, (2) how to process large-scale graphs that cannot fit into GPU memory, and (3) how to reduce the overhead of data transfers on PCIe while processing each partition. We conduct experiments on real-world data (Amazon, DBLP, YouTube, RoadNet-CA, WikiTalk and Twitter) to evaluate our approach and make comparisons with well-known non-preprocessed (such as Totem, Medusa, MapGraph and Gunrock) and preprocessed (CuSha) approaches, by testing four classical algorithms (BFS, PageRank, SSSP and CC). On all the tested applications and
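The color-based partitioning idea can be sketched with a textbook greedy coloring: vertices sharing a color have no edge between them, so each color class can be updated in parallel without locks. The tiny graph below is invented for illustration:

```python
from collections import defaultdict

def greedy_coloring(adj):
    """Assign each vertex the smallest color unused by its neighbors."""
    color = {}
    for v in sorted(adj):                  # fixed order for determinism
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

def partition_by_color(color):
    """Group vertices into conflict-free batches, one batch per color."""
    groups = defaultdict(list)
    for v, c in color.items():
        groups[c].append(v)
    return dict(groups)

# A small undirected graph given as adjacency sets.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {3}}
color = greedy_coloring(adj)
batches = partition_by_color(color)
```

Frog's observation is that a few large color classes cover most vertices, so scheduling those batches first keeps GPU parallelism high.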
Long Chen; Zhongpeng Wang; Feng He; Jiajia Yang; Hongzhi Qi; Peng Zhou; Baikun Wan; Dong Ming
2015-08-01
The hybrid brain computer interface (hBCI) could provide a higher information transfer rate than the classical BCIs. It includes more than one brain-computer or human-machine interaction paradigm, such as the combination of the P300 and SSVEP paradigms. We first constructed independent subsystems of three different paradigms and tested each of them in online experiments. Then we constructed a serial hybrid BCI system which combined these paradigms to achieve the functions of typing letters, moving and clicking a cursor, and switching among them for the purpose of browsing webpages. Five subjects were involved in this study. They all successfully performed these functions in the online tests. The subjects achieved an accuracy above 90% after training, which met the requirement for operating the system efficiently. The results demonstrated that it is an efficient and robust system, providing an approach for clinical application.
Hybrid photovoltaic–thermal solar collectors dynamic modeling
International Nuclear Information System (INIS)
Amrizal, N.; Chemisana, D.; Rosell, J.I.
2013-01-01
Highlights: ► A hybrid photovoltaic/thermal dynamic model is presented. ► The model, once calibrated, can predict the power output for any set of climate data. ► The physical electrical model includes explicitly thermal and irradiance dependences. ► The results agree with those obtained through steady-state characterization. ► The model approaches the junction cell temperature through the system energy balance. -- Abstract: A hybrid photovoltaic/thermal transient model has been developed and validated experimentally. The methodology extends the quasi-dynamic thermal model stated in the EN 12975 in order to involve the electrical performance and consider the dynamic behavior minimizing constraints when characterizing the collector. A backward moving average filtering procedure has been applied to improve the model response for variable working conditions. Concerning the electrical part, the model includes the thermal and radiation dependences in its variables. The results revealed that the characteristic parameters included in the model agree reasonably well with the experimental values obtained from the standard steady-state and IV characteristic curve measurements. After a calibration process, the model is a suitable tool to predict the thermal and electrical performance of a hybrid solar collector, for a specific weather data set.
Hybrid magic state distillation for universal fault-tolerant quantum computation
Zheng, Wenqiang; Yu, Yafei; Pan, Jian; Zhang, Jingfu; Li, Jun; Li, Zhaokai; Suter, Dieter; Zhou, Xianyi; Peng, Xinhua; Du, Jiangfeng
2014-01-01
A set of stabilizer operations augmented by some special initial states known as 'magic states', gives the possibility of universal fault-tolerant quantum computation. However, magic state preparation inevitably involves nonideal operations that introduce noise. The most common method to eliminate the noise is magic state distillation (MSD) by stabilizer operations. Here we propose a hybrid MSD protocol by connecting a four-qubit H-type MSD with a five-qubit T-type MSD, in order to overcome s...
A hybrid modelling approach to simulating foot-and-mouth disease outbreaks in Australian livestock
Directory of Open Access Journals (Sweden)
Richard A Bradhurst
2015-03-01
Foot-and-mouth disease (FMD) is a highly contagious and economically important viral disease of cloven-hoofed animals. Australia's freedom from FMD underpins a valuable trade in live animals and animal products. An outbreak of FMD would result in the loss of export markets and cause severe disruption to domestic markets. The prevention of, and contingency planning for, FMD are of key importance to government, industry, producers and the community. The spread and control of FMD is complex and dynamic due to a highly contagious multi-host pathogen operating in a heterogeneous environment across multiple jurisdictions. Epidemiological modelling is increasingly being recognized as a valuable tool for investigating the spread of disease under different conditions and the effectiveness of control strategies. Models of infectious disease can be broadly classified as: population-based models that are formulated from the top-down and employ population-level relationships to describe individual-level behaviour, individual-based models that are formulated from the bottom-up and aggregate individual-level behaviour to reveal population-level relationships, or hybrid models which combine the two approaches into a single model. The Australian Animal Disease Spread (AADIS) hybrid model employs a deterministic equation-based model (EBM) to model within-herd spread of FMD, and a stochastic, spatially-explicit agent-based model (ABM) to model between-herd spread and control. The EBM provides concise and computationally efficient predictions of herd prevalence and clinical signs over time. The ABM captures the complex, stochastic and heterogeneous environment in which an FMD epidemic operates. The AADIS event-driven hybrid EBM/ABM architecture is a flexible, efficient and extensible framework for modelling the spread and control of disease in livestock on a national scale. We present an overview of the AADIS hybrid approach and a description of the model
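A heavily simplified caricature of the EBM/ABM split, with invented parameters: a deterministic logistic equation advances within-herd prevalence, while between-herd transmission happens as stochastic agent-level events:

```python
import numpy as np

rng = np.random.default_rng(7)

# Within-herd EBM: deterministic logistic growth of infection prevalence.
def step_prevalence(prev, beta=1.5, dt=0.1):
    return prev + beta * prev * (1.0 - prev) * dt

# Between-herd ABM: each infected herd may infect a randomly chosen
# susceptible herd, with probability scaling with its current prevalence.
n_herds, days = 50, 60
prevalence = np.zeros(n_herds)
prevalence[0] = 0.01                       # index herd seeded

for _ in range(days):
    for h in np.flatnonzero(prevalence > 0):
        prevalence[h] = step_prevalence(prevalence[h])
        target = rng.integers(n_herds)
        if prevalence[target] == 0 and rng.random() < 0.2 * prevalence[h]:
            prevalence[target] = 0.01      # new herd becomes infected

infected = int(np.sum(prevalence > 0))
```

The real AADIS model adds spatial structure, multiple transmission pathways and control measures; the sketch only shows why the hybrid split is cheap, since the deterministic within-herd update costs a single arithmetic step per herd per day.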
Qiu, Shanwen
2012-07-01
In this article, we propose a new grid-free and exact solution method for computing solutions associated with a hybrid traffic flow model based on the Lighthill-Whitham-Richards (LWR) partial differential equation. In this hybrid flow model, the vehicles satisfy the LWR equation whenever possible, and have a fixed acceleration otherwise. We first present a grid-free solution method for the LWR equation based on the minimization of component functions. We then show that this solution method can be extended to compute the solutions to the hybrid model by proper modification of the component functions, for any concave fundamental diagram. We derive these functions analytically for the specific case of a triangular fundamental diagram. We also show that the proposed computational method can handle fixed or moving bottlenecks.
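For reference, the triangular fundamental diagram takes the form q(ρ) = min(vρ, w(ρ_max − ρ)); the free-flow speed v, congestion wave speed w and jam density ρ_max below are assumed values, not the paper's:

```python
def triangular_flux(rho, v=30.0, w=5.0, rho_max=0.2):
    """Flux (veh/s) for density rho (veh/m): the free-flow branch v*rho
    and the congested branch w*(rho_max - rho) meet at the capacity point."""
    return min(v * rho, w * (rho_max - rho))

def critical_density(v=30.0, w=5.0, rho_max=0.2):
    """Density where the two branches intersect (maximum flux)."""
    return w * rho_max / (v + w)
```

The piecewise-linear (hence concave) shape is what makes the component functions of the Lax-Hopf-style solution method analytically tractable in the triangular case.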
A New Model for Baryogenesis through Hybrid Inflation
International Nuclear Information System (INIS)
Delepine, D.; Prieto, C. Martinez; Lopez, L. A. Urena
2009-01-01
We propose a hybrid inflation model with a complex waterfall field which contains an interaction term that breaks the U(1) global symmetry associated to the waterfall field charge. The asymmetric evolution of the real and imaginary parts of the complex field during the phase transition at the end of inflation translates into a charge asymmetry.
Model Predictive Control of the Hybrid Ventilation for Livestock
DEFF Research Database (Denmark)
Wu, Zhuang; Stoustrup, Jakob; Trangbæk, Klaus
2006-01-01
In this paper, design and simulation results of a Model Predictive Control (MPC) strategy for livestock hybrid ventilation systems and the associated indoor climate, through variable valve openings and exhaust fans, are presented. The design is based on thermal comfort parameters for poultry in barns...
A novel Monte Carlo approach to hybrid local volatility models
A.W. van der Stoep (Anton); L.A. Grzelak (Lech Aleksander); C.W. Oosterlee (Cornelis)
2017-01-01
We present, in a Monte Carlo simulation framework, a novel approach for the evaluation of hybrid local volatility [Risk, 1994, 7, 18–20], [Int. J. Theor. Appl. Finance, 1998, 1, 61–110] models. In particular, we consider the stochastic local volatility model—see e.g. Lipton et al. [Quant.
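As a generic sketch of the Monte Carlo machinery involved (not the authors' particular scheme), an Euler discretisation of dS = σ(t, S) S dW under zero rates can price a European call; the flat 20% volatility standing in for the local-volatility surface is a placeholder assumption:

```python
import numpy as np

def local_vol(t, s):
    """Placeholder local-volatility surface (flat 20% here)."""
    return 0.20

def mc_call_price(s0=100.0, strike=100.0, T=1.0,
                  n_paths=100_000, n_steps=50, seed=0):
    """Euler-Maruyama Monte Carlo price of a European call, r = 0."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    s = np.full(n_paths, s0)
    for i in range(n_steps):
        z = rng.standard_normal(n_paths)
        s = s + local_vol(i * dt, s) * s * np.sqrt(dt) * z   # Euler step
        s = np.maximum(s, 0.0)                               # absorb at zero
    return float(np.mean(np.maximum(s - strike, 0.0)))

price = mc_call_price()
```

With the flat surface this should agree with the Black-Scholes price (about 7.97 for these at-the-money parameters) up to discretisation and sampling error; a genuine (stochastic) local-volatility model replaces `local_vol` with a calibrated surface.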
New Models of Hybrid Leadership in Global Higher Education
Tonini, Donna C.; Burbules, Nicholas C.; Gunsalus, C. K.
2016-01-01
This manuscript highlights the development of a leadership preparation program known as the Nanyang Technological University Leadership Academy (NTULA), exploring the leadership challenges unique to a university undergoing rapid growth in a highly multicultural context, and the hybrid model of leadership it developed in response to globalization.…
Modeling of Hybrid Growth Wastewater Bio-reactor
International Nuclear Information System (INIS)
EI Nashaei, S.; Garhyan, P.; Prasad, P.; Abdel Halim, H.S.; Ibrahim, G.
2004-01-01
The attached/suspended growth mixed reactors are considered one of the recently tried approaches to improve the performance of biological treatment by increasing the volume of accumulated biomass, in terms of attached growth as well as suspended growth. Moreover, domestic wastewater can easily be mixed with a high-strength, non-hazardous industrial wastewater and treated together in these bio-reactors if the need arises. Modeling of the hybrid growth wastewater reactor addresses the need to understand the rationale of such a system in order to achieve better design and operation parameters. This paper aims at developing a heterogeneous mathematical model for the hybrid growth system, considering the effects of diffusion, external mass transfer, and power input to the system in a rational manner. The model is based on distinguishing between the liquid phase and solid phases (bio-film and bio-floc). This model would be a step toward fine-tuning the design of hybrid systems based on the experimental data of a pilot plant to be implemented in the near future.
Hybrid time/frequency domain modeling of nonlinear components
DEFF Research Database (Denmark)
Wiechowski, Wojciech Tomasz; Lykkegaard, Jan; Bak, Claus Leth
2007-01-01
This paper presents a novel, three-phase hybrid time/frequency methodology for modelling of nonlinear components. The algorithm has been implemented in the DIgSILENT PowerFactory software using the DIgSILENT Programming Language (DPL), as a part of the work described in [1]. Modified HVDC benchmark...
Efficient Proof Engines for Bounded Model Checking of Hybrid Systems
DEFF Research Database (Denmark)
Fränzle, Martin; Herde, Christian
2005-01-01
In this paper we present HySat, a new bounded model checker for linear hybrid systems, incorporating a tight integration of a DPLL-based pseudo-Boolean SAT solver and a linear programming routine as core engine. In contrast to related tools like MathSAT, ICS, or CVC, our tool exploits all...
Travelling Waves in Hybrid Chemotaxis Models
Franz, Benjamin; Xue, Chuan; Painter, Kevin J.; Erban, Radek
2013-01-01
Bacteria are modelled using an agent-based (individual-based) approach with internal dynamics describing signal transduction. In addition to the chemotactic behaviour of the bacteria, the individual-based model also includes cell proliferation and death
Disciplines, models, and computers: the path to computational quantum chemistry.
Lenhard, Johannes
2014-12-01
Many disciplines and scientific fields have undergone a computational turn in the past several decades. This paper analyzes this sort of turn by investigating the case of computational quantum chemistry. The main claim is that the transformation from quantum to computational quantum chemistry involved changes in three dimensions. First, on the side of instrumentation, small computers and a networked infrastructure took over the lead from centralized mainframe architecture. Second, a new conception of computational modeling became feasible and assumed a crucial role. And third, the field of computational quantum chemistry became organized in a market-like fashion and this market is much bigger than the number of quantum theory experts. These claims will be substantiated by an investigation of the so-called density functional theory (DFT), the arguably pivotal theory in the turn to computational quantum chemistry around 1990.
Computational biomechanics for medicine imaging, modeling and computing
Doyle, Barry; Wittek, Adam; Nielsen, Poul; Miller, Karol
2016-01-01
The Computational Biomechanics for Medicine titles provide an opportunity for specialists in computational biomechanics to present their latest methodologies and advancements. This volume comprises eighteen of the newest approaches and applications of computational biomechanics, from researchers in Australia, New Zealand, USA, UK, Switzerland, Scotland, France and Russia. Some of the interesting topics discussed are: tailored computational models; traumatic brain injury; soft-tissue mechanics; medical image analysis; and clinically-relevant simulations. One of the greatest challenges facing the computational engineering community is to extend the success of computational mechanics to fields outside traditional engineering, in particular to biology, the biomedical sciences, and medicine. We hope the research presented within this book series will contribute to overcoming this grand challenge.
Hybrid modelling of soil-structure interaction for embedded structures
International Nuclear Information System (INIS)
Gupta, S.; Penzien, J.
1981-01-01
The basic methods currently being used for the analysis of soil-structure interaction fail to properly model three-dimensional embedded structures with flexible foundations. A hybrid model for the analysis of soil-structure interaction is developed in this investigation which takes advantage of the desirable features of both the finite element and substructure methods and minimizes their undesirable features. The hybrid model is obtained by partitioning the total soil-structure system into a near-field and a far-field with a smooth hemispherical interface. The near-field consists of the structure and a finite region of soil immediately surrounding its base. The entire near-field may be modelled in three-dimensional form using the finite element method, thus taking advantage of its ability to model irregular geometries and the non-linear soil behavior in the immediate vicinity of the structure. (orig./WL)
Lee, K. David; Wiesenfeld, Eric; Gelfand, Andrew
2007-04-01
One of the greatest challenges in modern combat is maintaining a high level of timely Situational Awareness (SA). In many situations, computational complexity and accuracy considerations make the development and deployment of real-time, high-level inference tools very difficult. An innovative hybrid framework that combines Bayesian inference, in the form of Bayesian Networks, and Possibility Theory, in the form of Fuzzy Logic systems, has recently been introduced to provide a rigorous framework for high-level inference. In previous research, the theoretical basis and benefits of the hybrid approach have been developed. However, lacking is a concrete experimental comparison of the hybrid framework with traditional fusion methods, to demonstrate and quantify this benefit. The goal of this research, therefore, is to provide a statistical analysis on the comparison of the accuracy and performance of hybrid network theory, with pure Bayesian and Fuzzy systems and an inexact Bayesian system approximated using Particle Filtering. To accomplish this task, domain specific models will be developed under these different theoretical approaches and then evaluated, via Monte Carlo Simulation, in comparison to situational ground truth to measure accuracy and fidelity. Following this, a rigorous statistical analysis of the performance results will be performed, to quantify the benefit of hybrid inference to other fusion tools.
Cloud computing-based energy optimization control framework for plug-in hybrid electric bus
International Nuclear Information System (INIS)
Yang, Chao; Li, Liang; You, Sixiong; Yan, Bingjie; Du, Xian
2017-01-01
Considering the complicated characteristics of traffic flow on a city bus route and the nonlinear vehicle dynamics, optimal energy management integrated with clustering and recognition of driving conditions in a plug-in hybrid electric bus is still a challenging problem. Motivated by this issue, this paper presents an innovative energy optimization control framework based on cloud computing for the plug-in hybrid electric bus. This framework, which includes an offline part and an online part, realizes driving-condition clustering in the offline part and energy management in the online part. In the offline part, utilizing the operating data transferred from a bus to the remote monitoring center, the K-means algorithm is adopted to cluster the driving conditions, and Markov probability transfer matrixes are then generated to predict the possible operating demand of the bus driver. In the online part, the current driving condition is identified in real time by a well-trained support vector machine, and Markov chains-based driving behaviors are accordingly selected. With the stochastic inputs, a stochastic receding horizon control method is adopted to obtain the optimized energy management of the hybrid powertrain. Simulations and a hardware-in-loop test are carried out with the real-world city bus route, and the results show that the presented strategy could greatly improve the vehicle fuel economy; as the traffic flow data feedback increases, the fuel consumption of every plug-in hybrid electric bus running on a specific bus route tends to a stable minimum. - Highlights: • Cloud computing-based energy optimization control framework is proposed. • Driving cycles are clustered into 6 types by the K-means algorithm. • A support vector machine is employed for online recognition of the driving condition. • A stochastic receding horizon control-based energy management strategy is designed for the plug-in hybrid electric bus. • The proposed framework is verified by simulation and hardware-in-loop test.
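The offline pipeline in this record — cluster driving conditions with K-means, then estimate a Markov transition matrix over the resulting condition labels — can be sketched as follows. This is not the authors' code: the 1-D speed feature, cluster count, and all data values are invented for illustration.

```python
import random

def kmeans_1d(data, k, iters=50, seed=0):
    """Plain 1-D K-means: returns cluster centers and a label per sample."""
    rng = random.Random(seed)
    centers = rng.sample(data, k)
    labels = [0] * len(data)
    for _ in range(iters):
        # Assign each sample to its nearest center, then recompute means.
        labels = [min(range(k), key=lambda j: abs(x - centers[j])) for x in data]
        for j in range(k):
            members = [x for x, lab in zip(data, labels) if lab == j]
            if members:
                centers[j] = sum(members) / len(members)
    return centers, labels

def transition_matrix(labels, k):
    """Row-stochastic Markov matrix estimated from the label sequence."""
    counts = [[0] * k for _ in range(k)]
    for a, b in zip(labels, labels[1:]):
        counts[a][b] += 1
    mat = []
    for row in counts:
        s = sum(row)
        mat.append([c / s if s else 0.0 for c in row])
    return mat

# Hypothetical average-speed samples (km/h): urban crawl vs. arterial cruising.
speeds = [8.0, 9.0, 10.0, 9.0, 42.0, 40.0, 41.0, 10.0, 8.0, 43.0]
centers, labels = kmeans_1d(speeds, k=2)
P = transition_matrix(labels, k=2)
```

In the paper the same two steps operate on multi-dimensional driving-cycle features and feed a stochastic receding horizon controller; here the Markov matrix `P` is the end product.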
Formal verification of dynamic hybrid systems: a NuSMV-based model checking approach
Directory of Open Access Journals (Sweden)
Xu Zhi
2018-01-01
Software security is an important and challenging research topic in developing dynamic hybrid embedded software systems. Ensuring the correct behavior of these systems is particularly difficult due to the interactions between the continuous subsystem and the discrete subsystem. Currently available security analysis methods for system risks have been limited, as they rely on manual inspections of the individual subsystems under simplifying assumptions. To improve this situation, a new approach is proposed that is based on the symbolic model checking tool NuSMV. A dual PID system is used as an example system, for which the logical part and the computational part of the system are modeled in a unified manner. Constraints are constructed on the controlled object, and a counter-example path is ultimately generated, indicating that the hybrid system can be analyzed by the model checking tool.
Mathematical Modelling of a Hybrid Micro-Cogeneration Group Based on a Four Stroke Diesel Engine
Directory of Open Access Journals (Sweden)
Apostol Valentin
2014-06-01
The paper presents a part of the work conducted in the first stage of a research grant called "Hybrid micro-cogeneration group of high efficiency equipped with an electronically assisted ORC", acronym GRUCOHYB. The hybrid micro-cogeneration group is equipped with a four stroke Diesel engine having a maximum power of 40 kW. A mathematical model of the internal combustion engine is presented. The mathematical model is developed based on the laws of thermodynamics and takes into account the real, irreversible processes. Based on the mathematical model, a computation program was developed. The results obtained were compared with those provided by the Diesel engine manufacturer. Results show a very high correlation between the manufacturer's data and the simulation results for an engine running at 100% load. Future developments could involve using an exergetic analysis to show the ability of the ORC to generate electricity from recovered heat.
Computer modeling of the gyrocon
International Nuclear Information System (INIS)
Tallerico, P.J.; Rankin, J.E.
1979-01-01
A gyrocon computer model is discussed in which the electron beam is followed from the gun output to the collector region. The initial beam may be selected either as a uniform circular beam or may be taken from the output of an electron gun simulated by the program of William Herrmannsfeldt. The fully relativistic equations of motion are then integrated numerically to follow the beam successively through a drift tunnel, a cylindrical rf beam deflection cavity, a combination drift space and magnetic bender region, and an output rf cavity. The parameters for each region are variable input data from a control file. The program calculates power losses in the cavity walls, power required by beam loading, power transferred from the beam to the output cavity fields, and electronic and overall efficiency. Space-charge effects are approximated if selected. Graphical displays of beam motions are produced. We discuss the Los Alamos Scientific Laboratory (LASL) prototype design as an example of code usage. The design shows a gyrocon of about two-thirds of a megawatt output at 450 MHz with up to 86% overall efficiency.
The Fermilab central computing facility architectural model
International Nuclear Information System (INIS)
Nicholls, J.
1989-01-01
The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front-end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS cluster interactive front-end, an Amdahl VM Computing engine, ACP farms, and (primarily) VMS workstations. This paper will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. (orig.)
The Fermilab Central Computing Facility architectural model
International Nuclear Information System (INIS)
Nicholls, J.
1989-05-01
The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS Cluster interactive front end, an Amdahl VM computing engine, ACP farms, and (primarily) VMS workstations. This presentation will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. 2 figs
A hybrid parallel framework for the cellular Potts model simulations
Energy Technology Data Exchange (ETDEWEB)
Jiang, Yi [Los Alamos National Laboratory; He, Kejing [SOUTH CHINA UNIV; Dong, Shoubin [SOUTH CHINA UNIV
2009-01-01
The Cellular Potts Model (CPM) has been widely used for biological simulations. However, most current implementations are either sequential or approximated, and cannot be used for large-scale, complex 3D simulations. In this paper we present a hybrid parallel framework for CPM simulations. The time-consuming PDE solving, cell division, and cell reaction operations are distributed to clusters using the Message Passing Interface (MPI). The Monte Carlo lattice update is parallelized on shared-memory SMP systems using OpenMP. Because the Monte Carlo lattice update is much faster than the PDE solving and SMP systems are increasingly common, this hybrid approach achieves good performance and high accuracy at the same time. Based on the parallel Cellular Potts Model, we studied avascular tumor growth using a multiscale model. The application and performance analysis show that the hybrid parallel framework is quite efficient. The hybrid parallel CPM can be used for large-scale simulation (~10^8 sites) of the complex collective behavior of numerous cells (~10^6).
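The Monte Carlo lattice update that the OpenMP layer parallelizes in this framework is, at its core, a Metropolis sweep over a Potts lattice. The sketch below is a single-threaded stand-in, not the paper's MPI/OpenMP code, and uses a bare q-state Potts energy (no cell volume constraints); lattice size, q, and temperature are illustrative.

```python
import math
import random

def local_energy(lat, n, i, j):
    """Mismatch energy of site (i, j) with its four neighbours (periodic)."""
    s = lat[i][j]
    return sum(s != lat[(i + di) % n][(j + dj) % n]
               for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))

def total_energy(lat, n):
    """Total energy, counting each mismatched bond once."""
    return sum((lat[i][j] != lat[(i + 1) % n][j]) +
               (lat[i][j] != lat[i][(j + 1) % n])
               for i in range(n) for j in range(n))

def metropolis_sweep(lat, n, q, T, rng):
    """One Monte Carlo sweep of single-site spin-change proposals."""
    for _ in range(n * n):
        i, j = rng.randrange(n), rng.randrange(n)
        old = lat[i][j]
        e_old = local_energy(lat, n, i, j)
        lat[i][j] = rng.randrange(q)          # propose a new spin value
        d_e = local_energy(lat, n, i, j) - e_old
        if d_e > 0 and rng.random() >= math.exp(-d_e / T):
            lat[i][j] = old                    # reject the move

n, q, T = 8, 3, 0.5
rng = random.Random(7)
lattice = [[rng.randrange(q) for _ in range(n)] for _ in range(n)]
e0 = total_energy(lattice, n)
for _ in range(20):
    metropolis_sweep(lattice, n, q, T, rng)
e1 = total_energy(lattice, n)
```

Because successive single-site proposals only touch a site and its neighbours, the sweep can be domain-decomposed across threads, which is what makes the shared-memory OpenMP layer in the paper effective.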
Modelling and analysis of real-time and hybrid systems
Energy Technology Data Exchange (ETDEWEB)
Olivero, A
1994-09-29
This work deals with the modelling and analysis of real-time and hybrid systems. We first present timed-graphs as a model for real-time systems and recall the basic notions of real-time system analysis. We describe temporal properties of timed-graphs using TCTL formulas. We consider two methods for property verification: on the one hand we study symbolic model-checking (based on backward analysis), and on the other hand we propose a verification method derived from the construction of the simulation graph (based on forward analysis). Both methods have been implemented within the KRONOS verification tool. Their application to the automatic verification of several real-time systems confirms the practical interest of our approach. In a second part we study hybrid systems, which combine discrete components with continuous ones. As the analysis of this kind of system is not decidable in general, we identify two sub-classes of hybrid systems and give a construction-based method for generating a timed-graph from an element of these sub-classes. We prove that in one case the obtained timed-graph is bisimilar to the considered system and that a simulation exists in the other case. These relationships allow the application of the described techniques to hybrid systems in the defined sub-classes. (authors). 60 refs., 43 figs., 8 tabs., 2 annexes.
A Generalized Hybrid Multiscale Modeling Approach for Flow and Reactive Transport in Porous Media
Yang, X.; Meng, X.; Tang, Y. H.; Guo, Z.; Karniadakis, G. E.
2017-12-01
Using emerging understanding of biological and environmental processes at fundamental scales to advance predictions of the larger system behavior requires the development of multiscale approaches, and there is strong interest in coupling models at different scales together in a hybrid multiscale simulation framework. A limited number of hybrid multiscale simulation methods have been developed for subsurface applications, mostly using application-specific approaches for model coupling. The proposed generalized hybrid multiscale approach is designed with minimal intrusiveness to the pre-selected at-scale simulators and provides a set of lightweight C++ scripts to manage a complex multiscale workflow utilizing a concurrent coupling approach. The workflow includes at-scale simulators (using the lattice-Boltzmann method, LBM, at the pore and Darcy scales, respectively), scripts for boundary treatment (coupling and kriging), and a multiscale universal interface (MUI) for data exchange. The current study aims to apply the generalized hybrid multiscale modeling approach to couple pore- and Darcy-scale models for flow and mixing-controlled reaction with precipitation/dissolution in heterogeneous porous media. The model domain is packed heterogeneously, such that the mixing front geometry is more complex and not known a priori. To address these challenges, the generalized hybrid multiscale modeling approach is further developed to 1) adaptively define the locations of pore-scale subdomains, 2) provide a suite of physical boundary coupling schemes and 3) consider the dynamic change of the pore structures due to mineral precipitation/dissolution. The results are validated and evaluated by comparing with single-scale simulations in terms of velocities, reactive concentrations and computing cost.
A hybrid, coupled approach for modeling charged fluids from the nano to the mesoscale
Cheung, James; Frischknecht, Amalie L.; Perego, Mauro; Bochev, Pavel
2017-11-01
We develop and demonstrate a new, hybrid simulation approach for charged fluids, which combines the accuracy of the nonlocal, classical density functional theory (cDFT) with the efficiency of the Poisson-Nernst-Planck (PNP) equations. The approach is motivated by the fact that the more accurate description of the physics in the cDFT model is required only near the charged surfaces, while away from these regions the PNP equations provide an acceptable representation of the ionic system. We formulate the hybrid approach in two stages. The first stage defines a coupled hybrid model in which the PNP and cDFT equations act independently on two overlapping domains, subject to suitable interface coupling conditions. At the second stage we apply the principles of the alternating Schwarz method to the hybrid model by using the interface conditions to define the appropriate boundary conditions and volume constraints exchanged between the PNP and the cDFT subdomains. Numerical examples with two representative examples of ionic systems demonstrate the numerical properties of the method and its potential to reduce the computational cost of a full cDFT calculation, while retaining the accuracy of the latter near the charged surfaces.
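The second stage of this cDFT/PNP coupling is an alternating Schwarz iteration over two overlapping subdomains. The sketch below shows the bare Schwarz mechanics on a toy 1-D problem, u'' = -1 with zero boundary values, where both "models" are the same finite-difference solve; in the paper the two subdomain solvers would be cDFT and PNP, and all sizes here are illustrative.

```python
def solve_tridiag(a, b, c, d):
    """Thomas algorithm for a tridiagonal system (a: sub, b: main, c: super)."""
    n = len(d)
    b, d = b[:], d[:]
    for i in range(1, n):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    x = [0.0] * n
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

N = 21                       # global grid points on [0, 1]
h = 1.0 / (N - 1)
u = [0.0] * N                # global iterate, zero boundary values

def subsolve(u, lo, hi):
    """Dirichlet solve of u'' = -1 on grid slice [lo, hi], BCs taken from u."""
    m = hi - lo - 1                          # number of interior unknowns
    a = [0.0] + [1.0] * (m - 1)
    b = [-2.0] * m
    c = [1.0] * (m - 1) + [0.0]
    d = [-h * h] * m
    d[0] -= u[lo]                            # left interface value
    d[-1] -= u[hi]                           # right interface value
    u[lo + 1:hi] = solve_tridiag(a, b, c, d)

# Alternating Schwarz over two overlapping subdomains, standing in for the
# cDFT region near the charged surface and the PNP region away from it.
for _ in range(30):
    subsolve(u, 0, 12)
    subsolve(u, 8, 20)

exact = [(i * h) * (1 - i * h) / 2 for i in range(N)]
err = max(abs(ui - ei) for ui, ei in zip(u, exact))
```

Each sub-solve reads its interface boundary values from the other subdomain's latest iterate, exactly the exchange of boundary conditions and volume constraints the abstract describes; with this overlap the iteration contracts geometrically.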
Modeling Hybridization Kinetics of Gene Probes in a DNA Biochip Using FEMLAB
Munir, Ahsan; Waseem, Hassan; Williams, Maggie R.; Stedtfeld, Robert D.; Gulari, Erdogan; Tiedje, James M.; Hashsham, Syed A.
2017-01-01
Microfluidic DNA biochips capable of detecting specific DNA sequences are useful in medical diagnostics, drug discovery, food safety monitoring and agriculture. They are used as miniaturized platforms for analysis of nucleic acids-based biomarkers. Binding kinetics between immobilized single stranded DNA on the surface and its complementary strand present in the sample are of interest. To achieve optimal sensitivity with minimum sample size and rapid hybridization, ability to predict the kinetics of hybridization based on the thermodynamic characteristics of the probe is crucial. In this study, a computer aided numerical model for the design and optimization of a flow-through biochip was developed using a finite element technique packaged software tool (FEMLAB; package included in COMSOL Multiphysics) to simulate the transport of DNA through a microfluidic chamber to the reaction surface. The model accounts for fluid flow, convection and diffusion in the channel and on the reaction surface. Concentration, association rate constant, dissociation rate constant, recirculation flow rate, and temperature were key parameters affecting the rate of hybridization. The model predicted the kinetic profile and signal intensities of eighteen 20-mer probes targeting vancomycin resistance genes (VRGs). Predicted signal intensities and hybridization kinetics strongly correlated with experimental data in the biochip (R2 = 0.8131). PMID:28555058
Modeling Hybridization Kinetics of Gene Probes in a DNA Biochip Using FEMLAB
Directory of Open Access Journals (Sweden)
Ahsan Munir
2017-05-01
Microfluidic DNA biochips capable of detecting specific DNA sequences are useful in medical diagnostics, drug discovery, food safety monitoring and agriculture. They are used as miniaturized platforms for analysis of nucleic acids-based biomarkers. Binding kinetics between immobilized single stranded DNA on the surface and its complementary strand present in the sample are of interest. To achieve optimal sensitivity with minimum sample size and rapid hybridization, the ability to predict the kinetics of hybridization based on the thermodynamic characteristics of the probe is crucial. In this study, a computer aided numerical model for the design and optimization of a flow-through biochip was developed using a finite element technique packaged software tool (FEMLAB; package included in COMSOL Multiphysics) to simulate the transport of DNA through a microfluidic chamber to the reaction surface. The model accounts for fluid flow, convection and diffusion in the channel and on the reaction surface. Concentration, association rate constant, dissociation rate constant, recirculation flow rate, and temperature were key parameters affecting the rate of hybridization. The model predicted the kinetic profile and signal intensities of eighteen 20-mer probes targeting vancomycin resistance genes (VRGs). Predicted signal intensities and hybridization kinetics strongly correlated with experimental data in the biochip (R2 = 0.8131).
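The surface reaction underlying both records above is commonly written as a Langmuir-type balance between association and dissociation, dB/dt = ka·C·(Bmax − B) − kd·B. The sketch below integrates that single ODE (ignoring the transport physics FEMLAB handles) and compares the result against the analytical equilibrium coverage; all rate constants are invented, not the paper's fitted values.

```python
def hybridization_kinetics(C, ka, kd, Bmax, t_end, dt=1e-3):
    """Forward-Euler integration of dB/dt = ka*C*(Bmax - B) - kd*B."""
    B = 0.0
    for _ in range(int(t_end / dt)):
        B += dt * (ka * C * (Bmax - B) - kd * B)
    return B

# Hypothetical rate constants, chosen only for illustration.
ka, kd = 1.0e5, 1.0e-2        # association (1/(M*s)) and dissociation (1/s)
C, Bmax = 1.0e-6, 1.0         # target concentration (M), surface capacity (a.u.)

B_eq = ka * C * Bmax / (ka * C + kd)       # analytical equilibrium coverage
B_end = hybridization_kinetics(C, ka, kd, Bmax, t_end=200.0)
```

With these numbers the relaxation time constant is 1/(ka·C + kd) ≈ 9 s, so by t = 200 s the numerical coverage has essentially reached the Langmuir equilibrium; in the full model, C at the surface is itself set by the convection-diffusion transport.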
Haqiq, Abdelkrim; Alimi, Adel; Mezzour, Ghita; Rokbani, Nizar; Muda, Azah
2017-01-01
This book presents the latest research in hybrid intelligent systems. It includes 57 carefully selected papers from the 16th International Conference on Hybrid Intelligent Systems (HIS 2016) and the 8th World Congress on Nature and Biologically Inspired Computing (NaBIC 2016), held on November 21–23, 2016 in Marrakech, Morocco. HIS - NaBIC 2016 was jointly organized by the Machine Intelligence Research Labs (MIR Labs), USA; Hassan 1st University, Settat, Morocco and University of Sfax, Tunisia. Hybridization of intelligent systems is a promising research field in modern artificial/computational intelligence and is concerned with the development of the next generation of intelligent systems. The conference’s main aim is to inspire further exploration of the intriguing potential of hybrid intelligent systems and bio-inspired computing. As such, the book is a valuable resource for practicing engineers /scientists and researchers working in the field of computational intelligence and artificial intelligence.
A hybrid society model for simulating residential electricity consumption
Energy Technology Data Exchange (ETDEWEB)
Xu, Minjie [School of Electrical Engineering, Beijing Jiaotong University, Beijing (China); State Power Economic Research Institute, Beijing (China); Hu, Zhaoguang [State Power Economic Research Institute, Beijing (China); Wu, Junyong; Zhou, Yuhui [School of Electrical Engineering, Beijing Jiaotong University, Beijing (China)
2008-12-15
In this paper, a hybrid social model combining an econometric model and a social influence model is proposed for evaluating the influence of pricing policy and public education policy on residential electricity-use habits in power resources management. A hybrid society simulation platform based on the proposed model, called residential electricity consumption multi-agent systems (RECMAS), is designed for simulating residential electricity consumption with a multi-agent system. RECMAS is composed of a consumer agent, a power supplier agent, and a policy maker agent. It provides policy makers with a useful tool to evaluate power price policies and public education campaigns in different scenarios. According to an influence diffusion mechanism, RECMAS can simulate the residential electricity demand-supply chain and analyze the impacts of the factors on residential electricity consumption. Finally, the proposed method is used to simulate urban residential electricity consumption in China. (author)
A hybrid society model for simulating residential electricity consumption
International Nuclear Information System (INIS)
Xu, Minjie; Hu, Zhaoguang; Wu, Junyong; Zhou, Yuhui
2008-01-01
In this paper, a hybrid social model combining an econometric model and a social influence model is proposed for evaluating the influence of pricing policy and public education policy on residential electricity-use habits in power resources management. A hybrid society simulation platform based on the proposed model, called residential electricity consumption multi-agent systems (RECMAS), is designed for simulating residential electricity consumption with a multi-agent system. RECMAS is composed of a consumer agent, a power supplier agent, and a policy maker agent. It provides policy makers with a useful tool to evaluate power price policies and public education campaigns in different scenarios. According to an influence diffusion mechanism, RECMAS can simulate the residential electricity demand-supply chain and analyze the impacts of the factors on residential electricity consumption. Finally, the proposed method is used to simulate urban residential electricity consumption in China. (author)
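The two RECMAS records describe consumer agents whose demand responds to a price signal (the econometric side) while also diffusing toward their neighbours' behaviour (the social influence side). The toy loop below is an assumed form of that coupling, not the paper's model: the elasticity, influence weight, and price change are all invented.

```python
import random

def simulate(n_agents=50, steps=30, price_elast=-0.3, influence=0.1, seed=0):
    """Toy hybrid-society loop: each consumer agent's demand responds to a
    price change (econometric term) and drifts toward the population
    average (influence-diffusion term). All parameters are illustrative."""
    rng = random.Random(seed)
    demand = [rng.uniform(80.0, 120.0) for _ in range(n_agents)]  # kWh/month
    price_change = 0.10                    # assumed +10% tariff increase
    for _ in range(steps):
        avg = sum(demand) / n_agents
        demand = [d * (1 + price_elast * price_change / steps)
                  + influence * (avg - d)
                  for d in demand]
    return demand

demand = simulate()
```

The price term shifts the mean down by roughly elasticity times price change (about 3% here), while the influence term shrinks the spread between agents, which is the qualitative behaviour a policy maker agent would probe under different scenarios.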
Elsheikh, Ahmed H.; Wheeler, Mary Fanett; Hoteit, Ibrahim
2014-01-01
A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using
Quantum vertex model for reversible classical computing.
Chamon, C; Mucciolo, E R; Ruckenstein, A E; Yang, Z-C
2017-05-12
Mappings of classical computation onto statistical mechanics models have led to remarkable successes in addressing some complex computational problems. However, such mappings display thermodynamic phase transitions that may prevent reaching solution even for easy problems known to be solvable in polynomial time. Here we map universal reversible classical computations onto a planar vertex model that exhibits no bulk classical thermodynamic phase transition, independent of the computational circuit. Within our approach the solution of the computation is encoded in the ground state of the vertex model and its complexity is reflected in the dynamics of the relaxation of the system to its ground state. We use thermal annealing with and without 'learning' to explore typical computational problems. We also construct a mapping of the vertex model into the Chimera architecture of the D-Wave machine, initiating an approach to reversible classical computation based on state-of-the-art implementations of quantum annealing.
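The record above encodes a computation's solution in the ground state of a vertex model and reads it out by thermal annealing. The sketch below illustrates only the annealing ingredient on the simplest possible stand-in, a 1-D ferromagnetic Ising chain whose ground state is known, not the paper's planar vertex model or the Chimera mapping; couplings, schedule and sizes are invented.

```python
import math
import random

def anneal_chain(J, steps=2000, T0=2.0, seed=1):
    """Thermal annealing of a 1-D Ising chain H = -sum_i J[i]*s[i]*s[i+1],
    a stand-in for relaxing a vertex model to its ground state."""
    rng = random.Random(seed)
    n = len(J) + 1
    s = [rng.choice((-1, 1)) for _ in range(n)]

    def energy(cfg):
        return -sum(J[i] * cfg[i] * cfg[i + 1] for i in range(n - 1))

    for step in range(steps):
        T = T0 * (1 - step / steps) + 1e-3    # linear cooling schedule
        i = rng.randrange(n)
        field = 0.0                            # local field on site i
        if i > 0:
            field += J[i - 1] * s[i - 1]
        if i < n - 1:
            field += J[i] * s[i + 1]
        d_e = 2 * s[i] * field                 # energy change if s[i] flips
        if d_e <= 0 or rng.random() < math.exp(-d_e / T):
            s[i] = -s[i]
    return s, energy(s)

J = [1.0] * 7                 # ferromagnetic couplings (illustrative)
spins, E = anneal_chain(J)    # ground-state energy is -sum(J) = -7
```

For this trivial landscape the schedule reliably reaches the aligned ground state; the point of the paper is that the vertex-model construction keeps the landscape free of bulk phase transitions so that annealing remains effective for encoded computations.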
Modeling Computer Virus and Its Dynamics
Directory of Open Access Journals (Sweden)
Mei Peng
2013-01-01
Based on the fact that a computer can be infected by infected and exposed computers, and that some computers in susceptible or exposed status can gain immunity through antivirus protection, a novel computer virus model is established. The dynamic behaviors of this model are investigated. First, the basic reproduction number R0, which is a threshold for computer virus spreading on the internet, is determined. Second, this model has a virus-free equilibrium P0, meaning that the infected part of the computer population disappears and the virus dies out; P0 is a globally asymptotically stable equilibrium if R0 < 1. If R0 > 1, then this model has a unique viral equilibrium P*, meaning that the virus persists at a constant endemic level, and P* is also globally asymptotically stable. Finally, some numerical examples are given to demonstrate the analytical results.
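The susceptible/exposed/infected structure and the R0 threshold described above can be illustrated with a generic SEIR-type integration. This is an assumed textbook form, not the paper's exact equations (which include the antivirus immunity terms); all rate constants are invented, and in this reduced model R0 = beta/gamma.

```python
def seir_step(S, E, I, R, beta, sigma, gamma, dt):
    """One forward-Euler step of a generic SEIR-type virus-spread model."""
    new_exposed = beta * S * I            # susceptible machines contacting infected ones
    return (S - dt * new_exposed,
            E + dt * (new_exposed - sigma * E),
            I + dt * (sigma * E - gamma * I),
            R + dt * gamma * I)

def final_removed(beta, sigma=0.5, gamma=0.2, days=400, dt=0.1):
    """Integrate from a small initial infection; return the removed fraction."""
    S, E, I, R = 0.99, 0.0, 0.01, 0.0
    for _ in range(int(days / dt)):
        S, E, I, R = seir_step(S, E, I, R, beta, sigma, gamma, dt)
    return R

# The threshold behaviour: below R0 = 1 the virus dies out with almost no
# spread; above it a large fraction of machines are eventually infected.
R_sub = final_removed(beta=0.1)   # R0 = 0.5
R_sup = final_removed(beta=0.6)   # R0 = 3.0
```

The two runs bracket the threshold: the subcritical run leaves the population almost untouched, while the supercritical one sweeps through most of it, mirroring the stability dichotomy between P0 and P*.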
The semantics of hybrid process models
Slaats, T.; Schunselaar, D.M.M.; Maggi, F.M.; Reijers, H.A.; Debruyne, C.; Panetto, H.; Meersman, R.; Dillon, T.; Kuhn, E.; O'Sullivan, D.; Agostino Ardagna, C.
2016-01-01
In the area of business process modelling, declarative notations have been proposed as alternatives to notations that follow the dominant, imperative paradigm. Yet, the choice between an imperative or declarative style of modelling is not always easy to make. Instead, a mixture of these styles is
Development of hybrid 3-D hydrological modeling for the NCAR Community Earth System Model (CESM)
Energy Technology Data Exchange (ETDEWEB)
Zeng, Xubin [Univ. of Arizona, Tucson, AZ (United States); Troch, Peter [Univ. of Arizona, Tucson, AZ (United States); Pelletier, Jon [Univ. of Arizona, Tucson, AZ (United States); Niu, Guo-Yue [Univ. of Arizona, Tucson, AZ (United States); Gochis, David [NCAR Research Applications (RAL), Boulder, CO (United States)
2015-11-15
This is the Final Report of our four-year (3-year plus one-year no cost extension) collaborative project between the University of Arizona (UA) and the National Center for Atmospheric Research (NCAR). The overall objective of our project is to develop and evaluate the first hybrid 3-D hydrological model with a horizontal grid spacing of 1 km for the NCAR Community Earth System Model (CESM). We have made substantial progress in model development and evaluation, computational efficiencies and software engineering, and data development and evaluation, as discussed in Sections 2-4. Section 5 presents our success in data dissemination, while Section 6 discusses the scientific impacts of our work. Section 7 discusses education and mentoring success of our project, while Section 8 lists our relevant DOE services. All peer-reviewed papers that acknowledged this project are listed in Section 9. Highlights of our achievements include: • We have finished 20 papers (most published already) on model development and evaluation, computational efficiencies and software engineering, and data development and evaluation • The global datasets developed under this project have been permanently archived and publicly available • Some of our research results have already been implemented in WRF and CLM • Patrick Broxton and Michael Brunke have received their Ph.D. • PI Zeng has served on DOE proposal review panels and DOE lab scientific focus area (SFA) review panels
Modeling And Simulation As The Basis For Hybridity In The Graphic Discipline Learning/Teaching Area
Directory of Open Access Journals (Sweden)
Jana Žiljak Vujić
2009-01-01
Only some fifteen years have passed since the scientific graphics discipline was established. In the transition period from the College of Graphics through «Integrated Graphic Technology Studies» to the contemporary Faculty of Graphic Arts at the University of Zagreb, three main periods of development can be noted: digital printing, computer prepress and automatic procedures in postpress packaging production. Computer technology has enabled a change in the methodology of teaching graphics technology and studying it at the level of secondary and higher education. The task has been set to create tools for simulating printing processes in order to master the program through a hybrid system consisting of methods that are separate from one another: learning with the help of digital models, and checking in the actual real system. We are setting up a hybrid project for teaching because the overall acquired knowledge is the result of completely different methods. The first method is on the level of free programs functioning without consequences. Everything remains as a record in the knowledge database that can be analyzed, statistically processed and repeated with new parameter values of the system being researched. The second method uses the actual real system, where the results prove the value of new knowledge, which encourages and stimulates new cycles of hybrid behavior in mastering programs. This is the area where individual learning occurs. The hybrid method allows the possibility of studying actual situations on a computer model, proving it on an actual real model and entering the area of learning envisaging future development.
Modeling and Simulation as the Basis for Hybridity in the Graphic Discipline Learning/Teaching Area
Directory of Open Access Journals (Sweden)
Vilko Ziljak
2009-11-01
Only some fifteen years have passed since the scientific graphics discipline was established. In the transition period from the College of Graphics through «Integrated Graphic Technology Studies» to the contemporary Faculty of Graphic Arts at the University of Zagreb, three main periods of development can be noted: digital printing, computer prepress and automatic procedures in postpress packaging production. Computer technology has enabled a change in the methodology of teaching graphics technology and studying it at the level of secondary and higher education. The task has been set to create tools for simulating printing processes in order to master the program through a hybrid system consisting of methods that are separate from one another: learning with the help of digital models, and checking in the actual real system. We are setting up a hybrid project for teaching because the overall acquired knowledge is the result of completely different methods. The first method is on the level of free programs functioning without consequences. Everything remains as a record in the knowledge database that can be analyzed, statistically processed and repeated with new parameter values of the system being researched. The second method uses the actual real system, where the results prove the value of new knowledge, which encourages and stimulates new cycles of hybrid behavior in mastering programs. This is the area where individual learning occurs. The hybrid method allows the possibility of studying actual situations on a computer model, proving it on an actual real model and entering the area of learning envisaging future development.
Hybrid Neuro-Fuzzy Classifier Based On Nefclass Model
Directory of Open Access Journals (Sweden)
Bogdan Gliwa
2011-01-01
Full Text Available The paper presents a hybrid neuro-fuzzy classifier based on a modified NEFCLASS model. The presented classifier was compared to popular classifiers – neural networks and k-nearest neighbours. The efficiency of the modifications in the classifier was compared with the methods used in the original NEFCLASS model (learning methods). The accuracy of the classifier was tested using 3 datasets from the UCI Machine Learning Repository: Iris, Wine and Breast Cancer Wisconsin. Moreover, the influence of ensemble classification methods on classification accuracy was presented.
Apricot - An Object-Oriented Modeling Language for Hybrid Systems
Fang, Huixing; Zhu, Huibiao; Shi, Jianqi
2013-01-01
We propose Apricot as an object-oriented language for modeling hybrid systems. The language combines features of domain-specific languages and object-oriented languages and thus fills the gap between design and implementation; as a result, we put forward a modeling language with simple and distinct syntax, structure and semantics. In addition, we introduce the concept of design by convention into Apricot. Owing to the object-oriented characteristics and the component architecture of Apricot, we c...
Hierarchical models and iterative optimization of hybrid systems
Energy Technology Data Exchange (ETDEWEB)
Rasina, Irina V. [Ailamazyan Program Systems Institute, Russian Academy of Sciences, Peter One str. 4a, Pereslavl-Zalessky, 152021 (Russian Federation); Baturina, Olga V. [Trapeznikov Control Sciences Institute, Russian Academy of Sciences, Profsoyuznaya str. 65, 117997, Moscow (Russian Federation); Nasatueva, Soelma N. [Buryat State University, Smolina str.24a, Ulan-Ude, 670000 (Russian Federation)
2016-06-08
A class of hybrid control systems based on a two-level discrete-continuous model is considered. The concept of this model was proposed and developed in preceding works as a concretization of the general multi-step system with related optimality conditions. A new iterative optimization procedure for such systems is developed, based on localization of the global optimality conditions via contraction of the control set.
Hybrid Modeling of Intra-DCT Coefficients for Real-Time Video Encoding
Directory of Open Access Journals (Sweden)
Li Jin
2008-01-01
Full Text Available The two-dimensional discrete cosine transform (2-D DCT) and its subsequent quantization are widely used in standard video encoders. However, since most DCT coefficients become zero after quantization, a number of redundant computations are performed. This paper proposes a hybrid statistical model used to predict the zero-quantized DCT (ZQDCT) coefficients for the intra transform and to achieve better real-time performance. First, each pixel block at the input of the DCT is decomposed into a series of mean values and a residual block. Subsequently, a statistical model based on the Gaussian distribution is used to predict the ZQDCT coefficients of the residual block. Then, a sufficient condition under which each quantized coefficient becomes zero is derived from the mean values. Finally, a hybrid model to speed up the DCT and quantization calculations is proposed. Experimental results show that the proposed model can reduce more redundant computations and achieve better real-time performance than the reference in the literature at the cost of negligible video quality degradation. Experiments also show that the proposed model significantly reduces multiplications for DCT and quantization. This is particularly suitable for processors in portable devices, where multiplications consume more power than additions. Computational reduction implies longer battery lifetime and energy economy.
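The "sufficient condition" idea above can be sketched as follows: every DCT-II coefficient of an N x N block is bounded by a multiple of the block's absolute pixel sum, so if that bound falls below half the quantization step, all quantized coefficients are guaranteed zero and the transform can be skipped. This is an illustrative, conservative bound, not the paper's exact Gaussian model; the function names are ours.

```python
import numpy as np

def dct2(block):
    """Reference N x N 2-D DCT-II via matrix multiplication."""
    N = block.shape[0]
    k = np.arange(N)
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N))
    C[0, :] = np.sqrt(1.0 / N)  # orthonormal DC row
    return C @ block @ C.T

def all_zero_after_quant(residual, q_step):
    """Conservative test: |F(u,v)| <= (2/N) * sum(|residual|) for every
    coefficient, so if that bound is below q_step/2, every quantized
    coefficient rounds to zero and the DCT can be skipped entirely."""
    N = residual.shape[0]
    bound = 2.0 / N * np.abs(residual).sum()
    return bound < q_step / 2.0
```

When the test fails, the encoder still has to compute the transform, so the saving comes only from blocks with small residual energy, which are exactly the blocks the abstract says dominate after mean extraction.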
A model for particle acceleration in lower hybrid collapse
International Nuclear Information System (INIS)
Retterer, J.M.
1997-01-01
A model for particle acceleration during the nonlinear collapse of lower hybrid waves is described. Using the Musher-Sturman wave equation to describe the effects of nonlinear processes and a velocity diffusion equation for the particle velocity distribution, the model self-consistently describes the exchange of energy between the fields and the particles in the local plasma. Two-dimensional solutions are presented for the modulational instability of a plane wave and the collapse of a cylindrical wave packet. These calculations were motivated by sounding rocket observations in the vicinity of auroral arcs in the Earth's ionosphere, which have revealed the existence of large-amplitude lower-hybrid wave packets associated with ions accelerated to energies of 100 eV. The scaling of the sizes of these wave packets is consistent with the theory of lower-hybrid collapse, and the observed lower-hybrid field amplitudes are adequate to accelerate the ionospheric ions to the observed energies.
Computational nanophotonics modeling and applications
Musa, Sarhan M
2013-01-01
This reference offers tools for engineers, scientists, biologists, and others working with the computational techniques of nanophotonics. It introduces the key concepts of computational methods in a manner that is easily digestible for newcomers to the field. The book also examines future applications of nanophotonics in the technical industry and covers new developments and interdisciplinary research in engineering, science, and medicine. It provides an overview of the key computational nanophotonics methods and describes the technologies with an emphasis on how they work and their key benefits.
Energy Technology Data Exchange (ETDEWEB)
García-Melchor, Max [SUNCAT Center for Interface Science and Catalysis, Department of Chemical Engineering, Stanford University, Stanford CA (United States); Vilella, Laia [Institute of Chemical Research of Catalonia (ICIQ), The Barcelona Institute of Science and Technology (BIST),Tarragona (Spain); Departament de Quimica, Universitat Autonoma de Barcelona, Barcelona (Spain); López, Núria [Institute of Chemical Research of Catalonia (ICIQ), The Barcelona Institute of Science and Technology (BIST), Tarragona (Spain); Vojvodic, Aleksandra [SUNCAT Center for Interface Science and Catalysis, SLAC National Accelerator Laboratory, Menlo Park CA (United States)
2016-04-29
An attractive strategy to improve the performance of water oxidation catalysts would be to anchor a homogeneous molecular catalyst on a heterogeneous solid surface to create a hybrid catalyst. The idea of this combined system is to take advantage of the individual properties of each of the two catalyst components. We use Density Functional Theory to determine the stability and activity of a model hybrid water oxidation catalyst consisting of a dimeric Ir complex attached to the IrO_{2}(110) surface through two oxygen atoms. We find that a homogeneous catalyst can be bound to its matrix oxide without significant loss of activity. Hence, designing hybrid systems that benefit from both the high tunability of activity of homogeneous catalysts and the stability of heterogeneous systems seems feasible.
Hybrid Cloud Computing Environment for EarthCube and Geoscience Community
Yang, C. P.; Qin, H.
2016-12-01
The NSF EarthCube Integration and Test Environment (ECITE) has built a hybrid cloud computing environment that provides resources from private clouds using the cloud system software OpenStack and Eucalyptus, and also manages a public cloud, Amazon Web Services, allowing resource synchronization and bursting between private and public clouds. On the ECITE hybrid cloud platform, the EarthCube and geoscience communities can deploy and manage applications using base or customized virtual machine images, analyze big datasets using virtual clusters, and monitor virtual resource usage on the cloud in real time. Currently, a number of EarthCube projects have deployed or started migrating to this platform, such as CHORDS, BCube, CINERGI, OntoSoft, and some other EarthCube building blocks. To accomplish the deployment or migration, the administrator of the ECITE hybrid cloud platform prepares the specific needs (e.g. images, port numbers, usable cloud capacity, etc.) of each project in advance, based on communications between ECITE and the participant projects; the scientists or IT technicians in those projects then launch one or multiple virtual machines, access the virtual machine(s) to set up the computing environment if need be, and migrate their code, documents or data without having to deal with the heterogeneity in structure and operations among different cloud platforms.
Little, Matthew; Cordero, Eugene
2014-01-01
Purpose: This paper aims to investigate the relationship between hybrid classes (where a per cent of the class meetings are online) and transportation-related CO2 emissions at a commuter campus similar to San José State University (SJSU). Design/methodology/approach: A computer model was developed to calculate the number of trips to…
Dynamic provisioning of a HEP computing infrastructure on a shared hybrid HPC system
International Nuclear Information System (INIS)
Meier, Konrad; Fleig, Georg; Hauth, Thomas; Quast, Günter; Janczyk, Michael; Von Suchodoletz, Dirk; Wiebelt, Bernd
2016-01-01
Experiments in high-energy physics (HEP) rely on elaborate hardware, software and computing systems to sustain the high data rates necessary to study rare physics processes. The Institut für Experimentelle Kernphysik (EKP) at KIT is a member of the CMS and Belle II experiments, located at the LHC and the Super-KEKB accelerators, respectively. These detectors share the requirement that enormous amounts of measurement data must be processed and analyzed, and that a comparable amount of simulated events is required to compare experimental results with theoretical predictions. Classical HEP computing centers are dedicated sites which support multiple experiments and have the required software pre-installed. Nowadays, funding agencies encourage research groups to participate in shared HPC cluster models, where scientists from different domains use the same hardware to increase synergies. This shared usage proves to be challenging for HEP groups, due to their specialized software setup which includes a custom OS (often Scientific Linux), libraries and applications. To overcome this hurdle, the EKP and the data center team of the University of Freiburg have developed a system to enable the HEP use case on a shared HPC cluster. To achieve this, an OpenStack-based virtualization layer is installed on top of a bare-metal cluster. While other user groups can run their batch jobs via the Moab workload manager directly on bare metal, HEP users can request virtual machines with a specialized machine image which contains a dedicated operating system and software stack. In contrast to similar installations, in this hybrid setup no static partitioning of the cluster into a physical and a virtualized segment is required. As a unique feature, the placement of the virtual machines on the cluster nodes is scheduled by Moab, and the job lifetime is coupled to the lifetime of the virtual machine. This allows for a seamless integration with the jobs sent by other user groups and honors the fairshare
Large-Scale, Multi-Sensor Atmospheric Data Fusion Using Hybrid Cloud Computing
Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E. J.
2015-12-01
NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, MODIS, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over 10 years of data. HySDS is a Hybrid-Cloud Science Data System that has been developed and applied under NASA AIST, MEaSUREs, and ACCESS grants. HySDS uses the SciFlow workflow engine to partition analysis workflows into parallel tasks (e.g. segmenting by time or space) that are pushed into a durable job queue. The tasks are "pulled" from the queue by worker Virtual Machines (VMs) and executed in an on-premise Cloud (Eucalyptus or OpenStack) or at Amazon in the public Cloud or GovCloud. In this way, years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the transferred data. We are using HySDS to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a MEaSUREs grant. We will present the architecture of HySDS, describe the achieved "clock time" speedups in fusing datasets on our own nodes and in the Amazon Cloud, and discuss the Cloud cost tradeoffs for storage, compute, and data transfer. Our system demonstrates how one can pull A-Train variables (Levels 2 & 3) on-demand into the Amazon Cloud, and cache only those variables that are heavily used, so that any number of compute jobs can be
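The segment-then-enqueue pattern described above (partition a multi-year workflow by time, push tasks to a queue, let workers pull and execute them in parallel) can be sketched with the standard library alone. The chunk size, date range and `run_task` body are illustrative stand-ins, not HySDS or SciFlow APIs.

```python
from concurrent.futures import ThreadPoolExecutor
from datetime import date, timedelta

def partition_by_time(start, end, days_per_task=30):
    """Split [start, end) into contiguous time chunks, one task per chunk."""
    tasks, t = [], start
    while t < end:
        t_next = min(t + timedelta(days=days_per_task), end)
        tasks.append((t, t_next))
        t = t_next
    return tasks

def run_task(span):
    # Stand-in for a worker VM pulling and executing a task:
    # here it just reports the number of days it would process.
    return (span[1] - span[0]).days

tasks = partition_by_time(date(2006, 1, 1), date(2007, 1, 1))
with ThreadPoolExecutor(max_workers=4) as pool:
    total_days = sum(pool.map(run_task, tasks))
```

In a real deployment the pool would be replaced by a durable message queue and the workers by cloud VMs, but the decomposition step is the same.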
Pervasive Computing and Prosopopoietic Modelling
DEFF Research Database (Denmark)
Michelsen, Anders Ib
2011-01-01
This article treats the philosophical underpinnings of the notions of ubiquity and pervasive computing from a historical perspective. The current focus on these notions reflects the ever increasing impact of new media and the underlying complexity of computed function in the broad sense of ICT that have spread vertiginously since Mark Weiser coined the term ‘pervasive’, e.g., digitalised sensoring, monitoring, effectuation, intelligence, and display. It points to the emergence in the mid-20th century of a paradoxical distinction/complicity between the technical organisation of computed function and the human Being, in the sense of creative action upon such function. This paradoxical distinction/complicity promotes a chiastic (Merleau-Ponty) relationship of extension of one… Whereas Weiser’s original perspective may seem fulfilled since computing is everywhere, in his and Seely Brown’s (1997) terms, ‘invisible’…
A Hybrid Model for Forecasting Sales in Turkish Paint Industry
Directory of Open Access Journals (Sweden)
Alp Ustundag
2009-12-01
Full Text Available Sales forecasting is important for facilitating effective and efficient allocation of scarce resources. However, how to best model and forecast sales has been a long-standing issue. There is no best forecasting method that is applicable in all circumstances. Therefore, confidence in the accuracy of sales forecasts is achieved by corroborating the results using two or more methods. This paper proposes a hybrid forecasting model that uses an artificial intelligence method (AI) with multiple linear regression (MLR) to predict product sales for the largest Turkish paint producer. In the hybrid model, three different AI methods, fuzzy rule-based system (FRBS), artificial neural network (ANN) and adaptive neuro-fuzzy network (ANFIS), are used and compared to each other. The results indicate that FRBS yields better forecasting accuracy in terms of root mean squared error (RMSE) and mean absolute percentage error (MAPE).
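The two error metrics used above to rank the competing forecasters are standard and easy to state concretely. A minimal sketch, with hypothetical sales figures and two hypothetical forecast series standing in for the paper's models:

```python
import numpy as np

def rmse(actual, forecast):
    """Root mean squared error."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.sqrt(np.mean((actual - forecast) ** 2)))

def mape(actual, forecast):
    """Mean absolute percentage error, in per cent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.mean(np.abs((actual - forecast) / actual)) * 100.0)

# Hypothetical monthly sales and two competing forecasts.
sales   = [100.0, 120.0,  90.0, 110.0]
model_a = [ 98.0, 125.0,  88.0, 108.0]
model_b = [110.0, 100.0,  95.0, 120.0]
best = min(("model_a", model_a), ("model_b", model_b),
           key=lambda m: rmse(sales, m[1]))
```

Corroborating a ranking with both metrics, as the paper does, guards against one metric being dominated by a single large observation.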
A hybrid model for dissolved oxygen prediction in aquaculture based on multi-scale features
Directory of Open Access Journals (Sweden)
Chen Li
2018-03-01
Full Text Available To increase the prediction accuracy of dissolved oxygen (DO) in aquaculture, a hybrid model based on multi-scale features using ensemble empirical mode decomposition (EEMD) is proposed. Firstly, the original DO datasets are decomposed by EEMD into several components. Secondly, these components are used to reconstruct four terms: a high frequency term, an intermediate frequency term, a low frequency term and a trend term. Thirdly, according to the characteristics of the high and intermediate frequency terms, which fluctuate violently, least squares support vector regression (LSSVR) is used to predict these two terms. The fluctuation of the low frequency term is gentle and periodic, so it can be modeled by a BP neural network with an optimal mind evolutionary computation (MEC-BP). Then, the trend term is predicted using a grey model (GM) because it is nearly linear. Finally, the prediction values of the DO datasets are calculated as the sum of the forecast values of all terms. The experimental results demonstrate that our hybrid model outperforms EEMD-ELM (extreme learning machine based on EEMD), EEMD-BP and MEC-BP models in terms of mean absolute error (MAE), mean absolute percentage error (MAPE), mean square error (MSE) and root mean square error (RMSE). Our hybrid model is proven to be an effective approach to predicting aquaculture DO.
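The decompose/predict-per-scale/recombine pattern above can be sketched without reproducing EEMD itself. Here a moving-average split into a smooth trend plus a high-frequency residual stands in for the EEMD components, and a persistence forecaster stands in for the LSSVR / MEC-BP / GM predictors; all names and the window size are illustrative.

```python
import numpy as np

def split_scales(series, window=5):
    """Toy stand-in for EEMD: split a series into a smooth trend
    (edge-padded moving average) and a high-frequency residual.
    By construction, trend + residual reproduces the series exactly."""
    series = np.asarray(series, float)
    kernel = np.ones(window) / window
    pad = np.pad(series, (window // 2, window // 2), mode="edge")
    trend = np.convolve(pad, kernel, mode="valid")
    return trend, series - trend

def forecast_next(series):
    """Placeholder per-scale predictor (persistence); the paper assigns
    a different method to each scale based on its character."""
    return float(series[-1])

def hybrid_forecast(series, window=5):
    """Predict each scale separately and sum the per-scale forecasts."""
    trend, resid = split_scales(series, window)
    return forecast_next(trend) + forecast_next(resid)
```

The point of the architecture is that each scale is easier to predict in isolation than the raw series, and the final forecast is simply the sum of the per-scale forecasts.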
Hybrid neural network bushing model for vehicle dynamics simulation
International Nuclear Information System (INIS)
Sohn, Jeong Hyun; Lee, Seung Kyu; Yoo, Wan Suk
2008-01-01
Although the linear model was widely used for the bushing model in vehicle suspension systems, it could not express the nonlinear characteristics of bushing in terms of the amplitude and the frequency. An artificial neural network model was suggested to consider the hysteretic responses of bushings. This model, however, often diverges due to the uncertainties of the neural network under the unexpected excitation inputs. In this paper, a hybrid neural network bushing model combining linear and neural network is suggested. A linear model was employed to represent linear stiffness and damping effects, and the artificial neural network algorithm was adopted to take into account the hysteretic responses. A rubber test was performed to capture bushing characteristics, where sine excitation with different frequencies and amplitudes is applied. Random test results were used to update the weighting factors of the neural network model. It is proven that the proposed model has more robust characteristics than a simple neural network model under step excitation input. A full car simulation was carried out to verify the proposed bushing models. It was shown that the hybrid model results are almost identical to the linear model under several maneuvers
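The structure of the hybrid bushing model above (a linear stiffness/damping term plus a small neural correction for hysteresis) can be sketched as follows. The class name, the network size, and the random weights are illustrative; in the paper the network is trained on rubber-test data, whereas here the correction is simply switched off until trained.

```python
import numpy as np

class HybridBushing:
    """Linear stiffness/damping force plus a small neural correction,
    in the spirit of the hybrid model described above. Weights are
    illustrative, not fitted to any rubber test data."""

    def __init__(self, k, c, rng_seed=0):
        self.k, self.c = k, c
        rng = np.random.default_rng(rng_seed)
        self.W1 = rng.normal(scale=0.1, size=(4, 2))  # hidden layer, inputs (x, v)
        self.b1 = np.zeros(4)
        self.W2 = rng.normal(scale=0.1, size=4)       # output layer
        self.scale = 0.0  # neural correction off until the net is trained

    def force(self, x, v):
        """Total force: robust linear part + bounded (tanh) correction."""
        linear = self.k * x + self.c * v
        h = np.tanh(self.W1 @ np.array([x, v]) + self.b1)
        return linear + self.scale * float(self.W2 @ h)
```

Because the correction passes through bounded tanh units and rides on top of a well-behaved linear term, the combined model cannot diverge the way a pure network model can under unexpected excitation, which is the robustness argument made in the abstract.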
Modeling and Density Estimation of an Urban Freeway Network Based on Dynamic Graph Hybrid Automata.
Chen, Yangzhou; Guo, Yuqi; Wang, Ying
2017-03-29
In this paper, in order to describe complex network systems, we first propose a general modeling framework that combines a dynamic graph with hybrid automata, which we name Dynamic Graph Hybrid Automata (DGHA). Then we apply this framework to model traffic flow over an urban freeway network by embedding the Cell Transmission Model (CTM) into the DGHA. In the modeling procedure, we adopt a dual digraph of the road network structure to describe the road topology, use linear hybrid automata to describe the multiple modes of dynamic densities in road segments, and transform the nonlinear expressions of the traffic flow transmitted between two road segments into piecewise linear functions in terms of multi-mode switchings. This modeling procedure is modularized and rule-based, and thus is easily extensible with the help of a combination algorithm for the dynamics of traffic flow. It can describe the dynamics of traffic flow over an urban freeway network with arbitrary topology and size. Next we analyze the mode types and their number in the model of the whole freeway network, and deduce a Piecewise Affine Linear System (PWALS) model. Furthermore, based on the PWALS model, a multi-mode switched state observer is designed to estimate the traffic densities of the freeway network, where a set of observer gain matrices is computed using the Lyapunov function approach. As an example, we apply the PWALS model and the corresponding switched state observer to traffic flow over Beijing's third ring road. In order to clearly interpret the principle of the proposed method and avoid computational complexity, we adopt a simplified version of the third ring road. Practical application to a large-scale road network will be implemented by a decentralized modeling approach and distributed observer design in future research.
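The CTM embedded in the framework above is itself a piecewise linear (mode-switching) system: the flow between cells is the minimum of an upstream "demand" and a downstream "supply", each a linear function of density. A minimal one-step sketch with illustrative, dimensionless parameters (not the paper's Beijing calibration):

```python
import numpy as np

def ctm_step(rho, v_free, w_back, rho_jam, q_max, dt, dx, inflow):
    """One Cell Transmission Model update for a line of cells. The min()
    of demand and supply is exactly the piecewise-linear mode switching
    that the hybrid automaton encodes. Last cell has free outflow."""
    demand = np.minimum(v_free * rho, q_max)                 # sending capacity
    supply = np.minimum(w_back * (rho_jam - rho), q_max)     # receiving capacity
    flux = np.minimum(np.append(inflow, demand[:-1]), supply)  # flow into each cell
    outflux = np.append(flux[1:], demand[-1])                  # flow out of each cell
    return rho + dt / dx * (flux - outflux)
```

Each `np.minimum` corresponds to a mode boundary: which branch is active in each cell determines the affine dynamics, which is how the PWALS model arises from the network.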
Multi-GPU hybrid programming accelerated three-dimensional phase-field model in binary alloy
Directory of Open Access Journals (Sweden)
Changsheng Zhu
2018-03-01
Full Text Available In the process of dendritic growth simulation, the computational efficiency and the problem scale have an extremely important influence on the simulation efficiency of a three-dimensional phase-field model. Thus, seeking a high-performance calculation method to improve computational efficiency and expand the problem scale is of great significance to research on the microstructure of materials. A high-performance calculation method based on an MPI+CUDA hybrid programming model is introduced. Multiple GPUs are used to implement quantitative numerical simulations of a three-dimensional phase-field model in a binary alloy under the condition of multi-physical-process coupling. The acceleration effect of different GPU nodes on different calculation scales is explored. On the foundation of the multi-GPU calculation model that has been introduced, two optimization schemes, non-blocking communication optimization and overlap of MPI and GPU computing optimization, are proposed. The results of the two optimization schemes and the basic multi-GPU model are compared. The calculation results show that the multi-GPU calculation model can obviously improve the computational efficiency of the three-dimensional phase field, achieving 13 times the performance of a single GPU, and the problem scale has been expanded to 8193. The feasibility of the two optimization schemes is shown, and the overlap of MPI and GPU computing optimization has the better performance, at 1.7 times that of the basic multi-GPU model when 21 GPUs are used.
Regional disaster impact analysis: comparing Input-Output and Computable General Equilibrium models
Koks, E.E.; Carrera, L.; Jonkeren, O.; Aerts, J.C.J.H.; Husby, T.G.; Thissen, M.; Standardi, G.; Mysiak, J.
2016-01-01
A variety of models have been applied to assess the economic losses of disasters, of which the most common ones are input-output (IO) and computable general equilibrium (CGE) models. In addition, an increasing number of scholars have developed hybrid approaches: one that combines both or either of
Qualitative Fault Isolation of Hybrid Systems: A Structural Model Decomposition-Based Approach
Bregon, Anibal; Daigle, Matthew; Roychoudhury, Indranil
2016-01-01
Quick and robust fault diagnosis is critical to ensuring safe operation of complex engineering systems. A large number of techniques are available to provide fault diagnosis in systems with continuous dynamics. However, many systems in aerospace and industrial environments are best represented as hybrid systems that consist of discrete behavioral modes, each with its own continuous dynamics. These hybrid dynamics make the online fault diagnosis task computationally more complex due to the large number of possible system modes and the existence of autonomous mode transitions. This paper presents a qualitative fault isolation framework for hybrid systems based on structural model decomposition. The fault isolation is performed by analyzing the qualitative information of the residual deviations. However, in hybrid systems this process becomes complex due to the possible existence of observation delays, which can cause observed deviations to be inconsistent with the expected deviations for the current mode of the system. The great advantage of structural model decomposition is that (i) it allows residuals to be designed that respond to only a subset of the faults, and (ii) every time a mode change occurs, only a subset of the residuals needs to be reconfigured, thus reducing the complexity of the reasoning process for isolation purposes. To demonstrate and test the validity of our approach, we use an electric circuit simulation as the case study.
Hajarolasvadi, Setare; Elbanna, Ahmed E.
2017-11-01
The finite difference (FD) and the spectral boundary integral (SBI) methods have been used extensively to model spontaneously-propagating shear cracks in a variety of engineering and geophysical applications. In this paper, we propose a new modelling approach in which these two methods are combined through consistent exchange of boundary tractions and displacements. Benefiting from the flexibility of FD and the efficiency of SBI methods, the proposed hybrid scheme will solve a wide range of problems in a computationally efficient way. We demonstrate the validity of the approach using two examples for dynamic rupture propagation: one in the presence of a low-velocity layer and the other in which off-fault plasticity is permitted. We discuss possible potential uses of the hybrid scheme in earthquake cycle simulations as well as an exact absorbing boundary condition.
PUMP: analog-hybrid reactor coolant hydraulic transient model
International Nuclear Information System (INIS)
Grandia, M.R.
1976-03-01
The PUMP hybrid computer code simulates flow and pressure distribution; it is used to determine real time response to starting and tripping all combinations of PWR reactor coolant pumps in a closed, pressurized, four-pump, two-loop primary system. The simulation includes the description of flow, pressure, speed, and torque relationships derived through pump affinity laws and from vendor-supplied pump zone maps to describe pump dynamic characteristics. The program affords great flexibility in the type of transients that can be simulated
The Cheshire Cat principle for hybrid bag models
International Nuclear Information System (INIS)
Nielsen, H.B.
1987-05-01
We argue for the Cheshire Cat point of view, in which the bag in the chiral bag model has no physical significance but only a notational one. It is explained how a fermion - in, say, a 1+1 dimensional exact Cheshire Cat model - escapes the bag by means of an anomaly. The possibility of constructing sophisticated hybrid bag models is suggested, which use the lack of physical significance of the bag to fix the many parameters so as to give hope of a phenomenologically sensible model anyway. (orig.)
Modelling of a Hybrid Energy System for Autonomous Application
Directory of Open Access Journals (Sweden)
Yang He
2013-10-01
Full Text Available A hybrid energy system (HES) is a trending power supply solution for autonomous devices. With the help of an accurate system model, HES development can be efficient and well directed. Yet in spite of various precise unit models, complete HES models are rarely developed. This paper proposes a system modelling approach that applies power flux conservation as the governing equation and adapts and modifies unit models of solar cells, piezoelectric generators, a Li-ion battery and a super-capacitor. A generalized power harvest, storage and management strategy is also suggested to adapt to various application scenarios.
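The power-flux-conservation governing equation above amounts to an energy balance, dE/dt = P_harvest - P_load, integrated subject to the storage limits. A minimal Euler-integration sketch, with the harvest/load profiles, capacities and time step all hypothetical (the paper's unit models are far more detailed):

```python
def simulate_storage(p_harvest, p_load, e0, e_max, dt=1.0):
    """Euler integration of the power-flux balance dE/dt = P_in - P_out,
    clamped to [0, e_max] for the lumped storage (battery + super-capacitor).
    All profiles and capacities are illustrative."""
    e, history = e0, []
    for p_in, p_out in zip(p_harvest, p_load):
        e = min(max(e + (p_in - p_out) * dt, 0.0), e_max)
        history.append(e)
    return history

# Hypothetical profile: harvest peaks mid-sequence, load is constant.
harvest = [0.0, 0.5, 1.5, 1.5, 0.5, 0.0]   # W
load = [0.2] * 6                           # W
trace = simulate_storage(harvest, load, e0=1.0, e_max=3.0, dt=1.0)
```

The clamping step is where a management strategy would act, e.g. shedding load before the store empties or diverting surplus once it saturates.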
All-optical quantum computing with a hybrid solid-state processing unit
International Nuclear Information System (INIS)
Pei Pei; Zhang Fengyang; Li Chong; Song Heshan
2011-01-01
We develop an architecture of a hybrid quantum solid-state processing unit for universal quantum computing. The architecture allows distant and nonidentical solid-state qubits in distinct physical systems to interact and work collaboratively. All the quantum computing procedures are controlled by optical methods using classical fields and cavity QED. Our methods have a prominent advantage of the insensitivity to dissipation process benefiting from the virtual excitation of subsystems. Moreover, the quantum nondemolition measurements and state transfer for the solid-state qubits are proposed. The architecture opens promising perspectives for implementing scalable quantum computation in a broader sense that different solid-state systems can merge and be integrated into one quantum processor afterward.
Calculation of the Initial Magnetic Field for Mercury's Magnetosphere Hybrid Model
Alexeev, Igor; Parunakian, David; Dyadechkin, Sergey; Belenkaya, Elena; Khodachenko, Maxim; Kallio, Esa; Alho, Markku
2018-03-01
Several types of numerical models are used to analyze the interactions of the solar wind flow with Mercury's magnetosphere, including kinetic models that determine magnetic and electric fields based on the spatial distribution of charges and currents, magnetohydrodynamic models that describe plasma as a conductive liquid, and hybrid models that describe ions kinetically in collisionless mode and represent electrons as a massless neutralizing liquid. The structure of resulting solutions is determined not only by the chosen set of equations that govern the behavior of plasma, but also by the initial and boundary conditions; i.e., their effects are not limited to the amount of computational work required to achieve a quasi-stationary solution. In this work, we have proposed using the magnetic field computed by the paraboloid model of Mercury's magnetosphere as the initial condition for subsequent hybrid modeling. The results of the model have been compared to measurements performed by the Messenger spacecraft during a single crossing of the magnetosheath and the magnetosphere. The selected orbit lies in the terminator plane, which allows us to observe two crossings of the bow shock and the magnetopause. In our calculations, we have defined the initial parameters of the global magnetospheric current systems in a way that allows us to minimize paraboloid magnetic field deviation along the trajectory of the Messenger from the experimental data. We have shown that the optimal initial field parameters include setting the penetration of a partial interplanetary magnetic field into the magnetosphere with a penetration coefficient of 0.2.
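The parameter-tuning step described above (choosing the magnetospheric current-system parameters that minimize the model-field deviation from spacecraft data along a trajectory) can be sketched as a least-squares fit. This toy treats the field as linear in the parameters, B(r) = sum_k a_k B_k(r); the actual paraboloid model fit is richer and the function and variable names here are ours.

```python
import numpy as np

def fit_field_parameters(basis_fields, b_measured):
    """Least-squares amplitudes for a field written as a sum of fixed
    basis contributions (e.g. dipole, tail and magnetopause currents),
    sampled at points along a spacecraft trajectory. Returns the fitted
    coefficients and the RMS residual along the trajectory.
    basis_fields has shape (n_samples, n_params)."""
    coeffs, *_ = np.linalg.lstsq(basis_fields, b_measured, rcond=None)
    residual = b_measured - basis_fields @ coeffs
    return coeffs, float(np.sqrt(np.mean(residual ** 2)))
```

The RMS residual along the trajectory is the quantity being minimized when the paper speaks of choosing initial parameters that best match the Messenger data.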
A novel hybrid approach with multidimensional-like effects for compressible flow computations
Kalita, Paragmoni; Dass, Anoop K.
2017-07-01
A multidimensional scheme achieves good resolution of strong and weak shocks irrespective of whether the discontinuities are aligned with or inclined to the grid. However, such schemes are computationally expensive. This paper achieves similar effects by hybridizing two schemes, namely AUSM and DRLLF, and coupling them through a novel shock switch that operates, unlike existing switches, on the gradient of the Mach number across the cell interface. The schemes that are hybridized have contrasting properties. The AUSM scheme captures grid-aligned (and strong) shocks crisply but is not as good for non-grid-aligned weaker shocks, whereas the DRLLF scheme achieves sharp resolution of non-grid-aligned weaker shocks but is not as good for grid-aligned strong shocks. It is our experience that if conventional shock switches based on variables like density, pressure or Mach number are used to combine the schemes, the desired effect of crisp resolution of grid-aligned and non-grid-aligned discontinuities is not obtained. To circumvent this problem we design a shock switch based, for the first time, on the gradient of the cell-interface Mach number, with very impressive results. Thus the strategy of hybridizing two carefully selected schemes, together with the innovative design of the shock switch that couples them, affords a method that produces the effects of a multidimensional scheme at a lower computational cost. It is further seen that hybridization of the AUSM scheme with the recently developed DRLLFV scheme using the present shock switch gives another scheme that provides crisp resolution of both shocks and boundary layers. Merits of the scheme are established through a carefully selected set of numerical experiments.
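A hedged sketch of the hybridization idea: a smooth switch computed from the jump in the cell-interface Mach number blends two candidate fluxes, favouring the AUSM-type flux at strong jumps and the DRLLF-type flux elsewhere. The functional form of the switch, the constant k, and the stand-in flux values are all our assumptions; the actual AUSM and DRLLF flux formulas are not reproduced here.

```python
import numpy as np

def shock_switch(M_left, M_right, k=5.0):
    """Blend factor from the Mach-number jump across a cell interface.

    Hypothetical smooth form: s -> 1 (favour DRLLF) where the Mach
    jump is small, s -> 0 (favour AUSM) at strong grid-aligned jumps.
    The paper's exact switch differs; this only illustrates the idea."""
    dM = abs(M_right - M_left)
    return np.exp(-k * dM)  # s in (0, 1]

def hybrid_flux(flux_ausm, flux_drllf, M_left, M_right):
    """Convex blend of the two component fluxes at one interface."""
    s = shock_switch(M_left, M_right)
    return s * flux_drllf + (1.0 - s) * flux_ausm
```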
Climate Ocean Modeling on Parallel Computers
Wang, P.; Cheng, B. N.; Chao, Y.
1998-01-01
Ocean modeling plays an important role in both understanding the current climatic conditions and predicting future climate change. However, modeling the ocean circulation at various spatial and temporal scales is a very challenging computational task.
Computational Intelligence. Mortality Models for the Actuary
Willemse, W.J.
2001-01-01
This thesis applies computational intelligence to the field of actuarial (insurance) science. In particular, it deals with life insurance, where mortality modelling is important. Actuaries still use models (mortality laws) from the nineteenth century, for example Gompertz' and Makeham's
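The nineteenth-century mortality laws mentioned above are compact closed forms: Gompertz has force of mortality B·c^x, and Makeham adds an age-independent term A. A small sketch (parameter values are illustrative, not taken from the thesis):

```python
import math

def makeham_mu(x, A=0.0002, B=3e-5, c=1.1):
    """Makeham force of mortality mu(x) = A + B*c**x; A=0 recovers
    the Gompertz law. A, B, c are illustrative values."""
    return A + B * c**x

def survival(x, t, A=0.0002, B=3e-5, c=1.1):
    """t-year survival probability from exact age x under Makeham:
    p = exp(-A*t - B*c**x*(c**t - 1)/ln c), obtained by integrating
    the force of mortality over [x, x+t]."""
    return math.exp(-A * t - B * c**x * (c**t - 1.0) / math.log(c))
```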
Cassereau, Didier; Nauleau, Pierre; Bendjoudi, Aniss; Minonzio, Jean-Gabriel; Laugier, Pascal; Bossy, Emmanuel; Grimal, Quentin
2014-07-01
The development of novel quantitative ultrasound (QUS) techniques to measure the hip is critically dependent on the possibility to simulate the ultrasound propagation. One specificity of hip QUS is that the ultrasound propagates through a large thickness of soft tissue, which can be modeled by a homogeneous fluid in a first approach. Finite-difference time-domain (FDTD) algorithms have been widely used to simulate QUS measurements, but they are not adapted to simulating ultrasonic propagation over long distances in homogeneous media. In this paper, a hybrid numerical method is presented to simulate hip QUS measurements. A two-dimensional FDTD simulation in the vicinity of the bone is coupled to a semi-analytic calculation of the Rayleigh integral to compute the wave propagation between the probe and the bone. The method is used to simulate a setup dedicated to the measurement of circumferential guided waves in the cortical compartment of the femoral neck. The proposed approach is validated by comparison with a full FDTD simulation and with an experiment on a bone phantom. For a realistic QUS configuration, the computation time is estimated to be sixty times less with the hybrid method than with a full FDTD approach.
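The semi-analytic half of such a hybrid method, the Rayleigh integral, can be sketched as a discrete sum over source elements radiating into a homogeneous fluid. The discretisation and all parameter values here are our assumptions, not the paper's implementation:

```python
import numpy as np

def rayleigh_pressure(field_pts, src_pts, src_vel, dS, k,
                      rho=1000.0, c=1500.0):
    """Discrete Rayleigh integral for a baffled planar source in a
    homogeneous fluid (water-like defaults, assumed):

        p(r) = (i*omega*rho / 2*pi) * sum_j v_j * exp(-i*k*R_j)/R_j * dS

    field_pts: (M, 3) observation points; src_pts: (N, 3) element
    centres; src_vel: (N,) normal velocities; dS: element area."""
    omega = k * c
    # Pairwise distances R_j between every field point and element.
    R = np.linalg.norm(field_pts[:, None, :] - src_pts[None, :, :], axis=2)
    kernel = np.exp(-1j * k * R) / R
    return 1j * omega * rho / (2.0 * np.pi) * kernel @ (src_vel * dS)
```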
International Nuclear Information System (INIS)
Chin, Vun Jack; Salam, Zainal; Ishaque, Kashif
2016-01-01
Highlights: • An accurate computational method for the two-diode model of a PV module is proposed. • The hybrid method employs analytical equations and Differential Evolution (DE). • I_PV, I_o1, and R_p are computed analytically, while a_1, a_2, I_o2 and R_s are optimized. • This allows the model parameters to be computed without using costly assumptions. - Abstract: This paper proposes an accurate computational technique for the two-diode model of a PV module. Unlike previous methods, it does not rely on assumptions that compromise accuracy. The key to this improvement is a hybrid solution, i.e. incorporating the analytical method with the differential evolution (DE) optimization technique. Three parameters, i.e. I_PV, I_o1, and R_p, are computed analytically, while the remaining four, a_1, a_2, I_o2 and R_s, are optimized using the DE. To validate its accuracy, the proposed method is tested on three PV modules of different technologies: mono-crystalline, poly-crystalline and thin film. Furthermore, its performance is evaluated against two popular computational methods for the two-diode model. The proposed method exhibits superior accuracy over the variation in irradiance and temperature for all module types. In particular, the improvement in accuracy is evident at low irradiance conditions; the root-mean-square error is one order of magnitude lower than that of the other methods. In addition, the values of the model parameters are consistent with the physics of the PV cell. It is envisaged that the method can be very useful for PV simulation, in which accuracy of the model is of prime concern.
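The two-diode model that such methods identify is an implicit equation in the module current. A minimal sketch of its forward evaluation (the bisection solver and all parameter values are our illustrative assumptions; the paper's contribution is the parameter extraction, not this solve):

```python
import math

def two_diode_current(V, I_pv, I_o1, I_o2, a1, a2, Rs, Rp,
                      Vt=0.026, Ns=36):
    """Solve the implicit two-diode equation for the module current I:

        I = I_pv - I_o1*(exp((V+I*Rs)/(a1*Ns*Vt)) - 1)
                 - I_o2*(exp((V+I*Rs)/(a2*Ns*Vt)) - 1) - (V+I*Rs)/Rp

    by bisection (the residual below is monotonically decreasing in I).
    Vt is the thermal voltage and Ns the number of series cells;
    all numbers here are illustrative."""
    def residual(I):
        Vd = V + I * Rs
        return (I_pv
                - I_o1 * (math.exp(Vd / (a1 * Ns * Vt)) - 1.0)
                - I_o2 * (math.exp(Vd / (a2 * Ns * Vt)) - 1.0)
                - Vd / Rp - I)
    lo, hi = -2.0 * I_pv, 2.0 * I_pv  # bracket containing the root
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```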
HYBRID WAYS OF DOING: A MODEL FOR TEACHING PUBLIC SPACE
Directory of Open Access Journals (Sweden)
Gabrielle Bendiner-Viani
2010-07-01
This paper addresses an exploratory practice undertaken by the authors in a co-taught class to hybridize theory, research and practice. This experiment in critical transdisciplinary design education took the form of a “critical studio + practice-based seminar on public space”, two interlinked classes co-taught by landscape architect Elliott Maltby and environmental psychologist Gabrielle Bendiner-Viani at Parsons, The New School for Design. The design process was grounded in the political and social context of the contested East River waterfront of New York City and valued both intensive study (using a range of social science and design methods) and a partnership with a local community organization, engaging with the politics, issues and human needs of a complex site. The paper considers how we encouraged interdisciplinary collaboration and dialogue between teachers as well as between liberal arts and design students, and developed strategies to overcome preconceived notions of traditional “studio” and “seminar” work. By exploring the challenges and adjustments made during the semester and the process of teaching this class, the paper addresses how we moved from a model of intertwining theory, research and practice to a hybrid model of multiple ways of doing, a model particularly apt for teaching public space. Through examples developed for and during our course, the paper suggests practical ways of supporting this transdisciplinary hybrid model.
Applications of computer modeling to fusion research
International Nuclear Information System (INIS)
Dawson, J.M.
1989-01-01
Progress achieved during this report period is presented on the following topics: development and application of gyrokinetic particle codes to tokamak transport; development of techniques to take advantage of parallel computers; modeling of dynamo and bootstrap current drive; and, in general, maintenance of our broad-based program in basic plasma physics and computer modeling
Large Scale Computations in Air Pollution Modelling
DEFF Research Database (Denmark)
Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.
Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.
Pathak, Jaideep; Wikner, Alexander; Fussell, Rebeckah; Chandra, Sarthak; Hunt, Brian R.; Girvan, Michelle; Ott, Edward
2018-04-01
A model-based approach to forecasting chaotic dynamical systems utilizes knowledge of the mechanistic processes governing the dynamics to build an approximate mathematical model of the system. In contrast, machine learning techniques have demonstrated promising results for forecasting chaotic systems purely from past time series measurements of system state variables (training data), without prior knowledge of the system dynamics. The motivation for this paper is the potential of machine learning for filling in the gaps in our underlying mechanistic knowledge that cause widely used knowledge-based models to be inaccurate. Thus, we here propose a general method that leverages the advantages of these two approaches by combining a knowledge-based model and a machine learning technique to build a hybrid forecasting scheme. Potential applications for such an approach are numerous (e.g., improving weather forecasting). We demonstrate and test the utility of this approach using a particular illustrative version of a machine learning technique known as reservoir computing, and we apply the resulting hybrid forecaster to a low-dimensional chaotic system, as well as to a high-dimensional spatiotemporal chaotic system. These tests yield extremely promising results in that our hybrid technique is able to predict accurately for a much longer period of time than either its machine-learning component or its model-based component alone.
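A toy version of such a hybrid scheme can be sketched in a few lines: an imperfect knowledge-based model and a small echo-state reservoir both feed a single trained linear readout, which learns to correct the model's error. Everything here (the logistic-map system, the deliberately wrong model parameter, the reservoir size and ridge parameter) is our illustrative assumption, not the authors' setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# True system: logistic map; the knowledge-based model has a wrong r.
def true_step(x):  return 3.9 * x * (1.0 - x)
def model_step(x): return 3.7 * x * (1.0 - x)  # imperfect model

# Tiny echo-state reservoir.
N = 200
Win = rng.uniform(-1, 1, N)
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()  # spectral radius 0.9

def reservoir_update(r, u):
    return np.tanh(W @ r + Win * u)

# Generate training data from the true system.
T = 1000
x = np.empty(T); x[0] = 0.4
for t in range(T - 1):
    x[t + 1] = true_step(x[t])

# Hybrid features: [reservoir state, knowledge-model prediction, bias].
r = np.zeros(N)
feats, targets = [], []
for t in range(T - 1):
    r = reservoir_update(r, x[t])
    feats.append(np.concatenate([r, [model_step(x[t]), 1.0]]))
    targets.append(x[t + 1])
X = np.array(feats); y = np.array(targets)

# Ridge-regression readout: learns how to combine reservoir and model.
beta = 1e-6
Wout = np.linalg.solve(X.T @ X + beta * np.eye(X.shape[1]), X.T @ y)
pred = X @ Wout
```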
Computer Aided Continuous Time Stochastic Process Modelling
DEFF Research Database (Denmark)
Kristensen, N.R.; Madsen, Henrik; Jørgensen, Sten Bay
2001-01-01
A grey-box approach to process modelling that combines deterministic and stochastic modelling is advocated for identification of models for model-based control of batch and semi-batch processes. A computer-aided tool designed for supporting decision-making within the corresponding modelling cycle...
Modelling the solar wind interaction with Mercury by a quasi-neutral hybrid model
Directory of Open Access Journals (Sweden)
E. Kallio
2003-11-01
A quasi-neutral hybrid model is a self-consistent modelling approach that includes positively charged particles and an electron fluid. The approach has received increasing interest in space plasma physics research because it makes it possible to study several plasma physical processes that are difficult or impossible to model with self-consistent fluid models, such as the effects associated with the ions’ finite gyroradius, the velocity difference between different ion species, or a non-Maxwellian velocity distribution function. So far, quasi-neutral hybrid models have been used to study the solar wind interaction with the non-magnetised Solar System bodies Mars, Venus, Titan and comets. Localized, two-dimensional hybrid model runs have also been made to study the terrestrial dayside magnetosheath. However, the Hermean plasma environment has not yet been analysed with a global quasi-neutral hybrid model. In this paper we present a new quasi-neutral hybrid model developed to study various processes associated with the Mercury-solar wind interaction. Emphasis is placed on the advantages and disadvantages of the approach for studying different plasma physical processes near the planet. The basic assumptions of the approach and the algorithms used in the new model are presented in detail. Finally, some of the first three-dimensional hybrid model runs made for Mercury are presented. The resulting macroscopic plasma parameters and the morphology of the magnetic field demonstrate the applicability of the new approach to studying the Mercury-solar wind interaction globally. In addition, the real advantage of the kinetic hybrid model approach is the ability to study the properties of individual ions, and the study clearly demonstrates the large potential of the approach to address these more detailed issues in the future. Key words: Magnetospheric physics (planetary magnetospheres; solar wind-magnetosphere interactions) – Space plasma
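Two building blocks typical of such quasi-neutral hybrid codes can be sketched as follows: the Boris mover for the kinetic ions, and the electric field obtained from the massless-electron momentum equation. This is a generic textbook sketch, not the algorithm of the paper (which may, e.g., include resistivity and different discretisations):

```python
import numpy as np

def boris_push(v, E, B, q_over_m, dt):
    """One Boris step (half acceleration, magnetic rotation, half
    acceleration): the standard kinetic-ion mover in hybrid codes.
    With E = 0 it conserves |v| exactly."""
    v_minus = v + 0.5 * q_over_m * dt * E
    t = 0.5 * q_over_m * dt * B
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    return v_plus + 0.5 * q_over_m * dt * E

def electron_fluid_E(u_e, B, grad_pe, n_e, e=1.602e-19):
    """Electric field from the massless-electron momentum equation,
        E = -u_e x B - grad(p_e) / (e * n_e),
    with the resistive term omitted for simplicity."""
    return -np.cross(u_e, B) - grad_pe / (e * n_e)
```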
Properties of hybrid stars in an extended MIT bag model
International Nuclear Information System (INIS)
Bao Tmurbagan; Liu Guangzhou; Zhu Mingfeng
2009-01-01
The properties of hybrid stars are investigated in the framework of relativistic mean field theory (RMFT) and an MIT bag model with a density-dependent bag constant, used to describe the hadron phase (HP) and quark phase (QP), respectively. We find that the density-dependent bag constant B(ρ) decreases with baryon density ρ; this decrease makes strange quark matter more energetically favorable than before, lowering the threshold densities of the hadron-quark phase transition below those of the original bag-constant case, so that hyperon degrees of freedom need not be considered. As a result, the equations of state of a star in the mixed phase (MP) become softer, whereas those in the QP become stiffer, and the radius of the star clearly decreases. This indicates that the extended MIT bag model is better suited to describing hybrid stars with small radii. (authors)
A light neutralino in hybrid models of supersymmetry breaking
Dudas, Emilian; Parmentier, Jeanne
2008-01-01
We show that in gauge mediation models where heavy messenger masses are provided by the adjoint Higgs field of an underlying SU(5) theory, a generalized gauge mediation spectrum arises with the characteristic feature of having a neutralino much lighter than in the standard gauge or gravity mediation schemes. This naturally fits in a hybrid scenario where gravity mediation, while subdominant with respect to gauge mediation, provides μ and Bμ parameters in the TeV range.
Novel Hybrid Maintenance Management Models for Industrial Applications
Tahir, Zulkifli
2010-01-01
It is observed through empirical studies that the effectiveness of industrial processes has been increased by a well-organized machine maintenance structure. In the current research, a novel maintenance concept has been designed by hybridizing several maintenance management models with the Decision Making Grid (DMG), the Analytic Hierarchy Process (AHP) and Fuzzy Logic. The concept is designed for maintenance personnel to evaluate and benchmark the maintenance operations and to reveal important maintena...
A light neutralino in hybrid models of supersymmetry breaking
International Nuclear Information System (INIS)
Dudas, Emilian; Lavignac, Stephane; Parmentier, Jeanne
2009-01-01
We show that in gauge mediation models where heavy messenger masses are provided by the adjoint Higgs field of an underlying SU(5) theory, a generalized gauge mediation spectrum arises with the characteristic feature of having a neutralino LSP much lighter than in the standard gauge or gravity mediation schemes. This naturally fits in a hybrid scenario where gravity mediation, while subdominant with respect to gauge mediation, provides μ and Bμ parameters of the appropriate size for electroweak symmetry breaking
Hybrid Model for e-Learning Quality Evaluation
Directory of Open Access Journals (Sweden)
Suzana M. Savic
2012-02-01
E-learning is becoming increasingly important for the competitive advantage of economic organizations and higher education institutions. Therefore, it is becoming a significant aspect of quality which has to be integrated into the management system of every organization or institution. The paper examines e-learning quality characteristics, standards, criteria and indicators and presents a multi-criteria hybrid model for e-learning quality evaluation based on the method of Analytic Hierarchy Process, trend analysis, and data comparison.
Hybrid Speaker Recognition Using Universal Acoustic Model
Nishimura, Jun; Kuroda, Tadahiro
We propose a novel speaker recognition approach using a speaker-independent universal acoustic model (UAM) for sensornet applications. In sensornet applications such as “Business Microscope”, interactions among knowledge workers in an organization can be visualized by sensing face-to-face communication using wearable sensor nodes. In conventional studies, speakers are detected by comparing the energy of input speech signals among the nodes. However, there are often synchronization errors among the nodes, which degrade the speaker recognition performance. By focusing on the properties of the speaker's acoustic channel, the UAM can provide robustness against such synchronization errors. The overall speaker recognition accuracy is improved by combining the UAM with the energy-based approach. For 0.1 s speech inputs and 4 subjects, a speaker recognition accuracy of 94% is achieved for synchronization errors below 100 ms.
Computer Based Modelling and Simulation
Indian Academy of Sciences (India)
where x increases from zero to N, the saturation value. Box 1. Matrix Methods ... such as Laplace transforms and non-linear differential equations with ... atomic bomb project in the US in the early ... his work on game theory and computers.
Dodig, H.
2017-11-01
This contribution presents a boundary integral formulation for the numerical computation of the time-harmonic radar cross section of 3D targets. The method relies on a hybrid edge-element BEM/FEM to compute near-field edge element coefficients that are associated with the near electric and magnetic fields at the boundary of the computational domain. A special boundary integral formulation is presented that computes the radar cross section directly from these edge element coefficients. Consequently, there is no need for the near-to-far-field transformation (NTFFT), which is a common step in RCS computations. It is demonstrated that the formulation yields accurate results for canonical models such as spheres, cubes, cones and pyramids. The method demonstrates accuracy even in the case of a dielectrically coated PEC sphere at an interior resonance frequency, which is a common problem for computational electromagnetics codes.
The Cheshire Cat principle applied to hybrid bag models
International Nuclear Information System (INIS)
Nielsen, H.B.; Wirzba, A.
1987-05-01
We argue for the Cheshire Cat point of view, according to which the bag itself has only notational, not physical, significance. It is explained in a 1+1 dimensional exact Cheshire Cat model how a fermion can escape from the bag by means of an anomaly. We also suggest that suitably constructed hybrid bag models may be used to fix parameters of effective Lagrangians that could otherwise be obtained only from experiments. This idea is illustrated in a calculation of the mass of the pseudoscalar η' meson in 1+1 dimensions. Thus there is hope of finding a construction principle for a phenomenologically sensible model. (orig.)
Hybrid model for the decay of nuclear giant resonances
International Nuclear Information System (INIS)
Hussein, M.S.
1986-12-01
The decay properties of nuclear giant multipole resonances are discussed within a hybrid model that incorporates, in a unitarily consistent way, both the coherent and the statistical features. It is suggested that the 'direct' decay of the GR be described with continuum first RPA and the statistical decay calculated with a modified Hauser-Feshbach model. Application is made to the decay of the giant monopole resonance in 208Pb. Suggestions are made concerning the calculation of the mixing parameter using the statistical properties of the shell model eigenstates at high excitation energies. (Author)
Rong, Bao; Rui, Xiaoting; Lu, Kun; Tao, Ling; Wang, Guoping; Ni, Xiaojun
2018-05-01
In this paper, an efficient method for the dynamics modeling and vibration control design of a linear hybrid multibody system (MS) is studied based on the transfer matrix method. The natural vibration characteristics of a linear hybrid MS are solved by using low-order transfer equations. Then, by constructing a new body dynamics equation, augmented operator and augmented eigenvector, the orthogonality of the augmented eigenvectors of a linear hybrid MS is satisfied, and its state space model, expressed in each independent modal space, is obtained easily. Based on this dynamics model, a robust independent-modal-space fuzzy controller is designed for vibration control of a general MS, and the genetic optimization of some critical control parameters of the fuzzy tuners is also presented. Two illustrative examples are presented, the results of which show that the method is computationally efficient and achieves excellent control performance.
Parametric Linear Hybrid Automata for Complex Environmental Systems Modeling
Directory of Open Access Journals (Sweden)
Samar Hayat Khan Tareen
2015-07-01
Environmental systems, whether they be weather patterns or predator-prey relationships, are dependent on a number of different variables, each directly or indirectly affecting the system at large. Since not all of these factors are known, these systems take on non-linear dynamics, making it difficult to accurately predict meaningful behavioral trends far into the future. However, such dynamics do not warrant complete ignorance of different efforts to understand and model close approximations of these systems. Towards this end, we have applied a logical modeling approach to model and analyze the behavioral trends and systematic trajectories that these systems exhibit without delving into their quantification. This approach, formalized by René Thomas for discrete logical modeling of Biological Regulatory Networks (BRNs) and further extended in our previous studies as parametric biological linear hybrid automata (Bio-LHA), has been previously employed for the analyses of different molecular regulatory interactions occurring across various cells and microbial species. As relationships between different interacting components of a system can be simplified as positive or negative influences, we can employ the Bio-LHA framework to represent different components of the environmental system as positive or negative feedbacks. In the present study, we highlight the benefits of hybrid (discrete/continuous) modeling, which leads to refinements among the forecast behaviors in order to determine which are actually possible. We have taken two case studies: an interaction of three microbial species in a freshwater pond, and a more complex atmospheric system, to show the applications of the Bio-LHA methodology for the timed hybrid modeling of environmental systems. Results show that the approach using the Bio-LHA is a viable method for behavioral modeling of complex environmental systems by finding timing constraints while keeping the complexity of the model
ANIBAL - a Hybrid Computer Language for EAI 680-PDP 8/I, FPP 12
DEFF Research Database (Denmark)
Højberg, Kristian Søe
1974-01-01
and special hybrid computer commands. ANIBAL consists of a general-purpose analog interface subroutine ANI and the macro processor 8BAL (DECUS NO 8-497A.1). When a source program with FORTRAN and 8BAL statements is processed, the FORTRAN statements are transferred unchanged, while the 8BAL code is translated...... essentially to ANI sub-routine calls, which are defined in a macro library. The resulting code is translated by the standard FORTRAN compiler. The language is very flexible as the instructions can be changed and commands can be added to or excluded from the library ....
Hybrid Neural Network Approach Based Tool for the Modelling of Photovoltaic Panels
Directory of Open Access Journals (Sweden)
Antonino Laudani
2015-01-01
A hybrid neural-network-based tool for identifying the photovoltaic one-diode model is presented. The generalization capabilities of neural networks are used together with the robustness of the reduced form of the one-diode model. Indeed, from the studies performed by the authors and the works present in the literature, it was found that a direct computation of the five parameters via a multiple-input, multiple-output neural network is a very difficult task. The reduced form consists of a series of explicit formulae in support of the neural network which, in our case, is aimed at predicting just two of the five parameters identifying the model: the other three parameters are computed by the reduced form. The present hybrid approach is efficient from the computational cost point of view and accurate in the estimation of the five parameters. It constitutes a complete and extremely easy tool suitable for implementation in a microcontroller-based architecture. Validations are made on about 10000 PV panels belonging to the California Energy Commission database.
Scalability of Sustainable Business Models in Hybrid Organizations
Directory of Open Access Journals (Sweden)
Adam Jabłoński
2016-02-01
The dynamics of change in modern business create new mechanisms for company management to determine their pursuit and the achievement of their high performance. This performance, maintained over a long period of time, becomes a source of ensuring business continuity by companies. An ontological being enabling the adoption of such assumptions is a business model that has the ability to generate results in every possible market situation and, moreover, has the features of permanent adaptability. A feature that describes the adaptability of the business model is its scalability. As a factor ensuring more, and more efficient, work with an increasing number of components, scalability can be applied to the concept of business models as the company’s ability to maintain similar or higher performance through it. Ensuring the company’s performance in the long term helps to build the so-called sustainable business model that often balances the objectives of stakeholders and shareholders, and that is created by the implemented principles of value-based management and corporate social responsibility. This perception of business paves the way for building hybrid organizations that integrate business activities with pro-social ones. The combination of an approach typical of hybrid organizations in designing and implementing sustainable business models pursuant to the scalability criterion seems interesting from the cognitive point of view. Today, hybrid organizations are great spaces for building effective and efficient mechanisms for dialogue between business and society. This requires the appropriate business model. The purpose of the paper is to present the conceptualization and operationalization of the scalability of sustainable business models that determine the performance of a hybrid organization in the network environment. The paper presents the original concept of applying scalability in sustainable business models with detailed
Computer-Aided Modelling Methods and Tools
DEFF Research Database (Denmark)
Cameron, Ian; Gani, Rafiqul
2011-01-01
The development of models for a range of applications requires methods and tools. In many cases a reference model is required that allows the generation of application specific models that are fit for purpose. There are a range of computer aided modelling tools available that help to define the m...
A Categorisation of Cloud Computing Business Models
Chang, Victor; Bacigalupo, David; Wills, Gary; De Roure, David
2010-01-01
This paper reviews current cloud computing business models and presents proposals on how organisations can achieve sustainability by adopting appropriate models. We classify cloud computing business models into eight types: (1) Service Provider and Service Orientation; (2) Support and Services Contracts; (3) In-House Private Clouds; (4) All-In-One Enterprise Cloud; (5) One-Stop Resources and Services; (6) Government funding; (7) Venture Capitals; and (8) Entertainment and Social Networking. U...
A computational model of selection by consequences.
McDowell, J J
2004-01-01
Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of computational experiments that arranged reinforcement according to random-interval (RI) schedules. The quantitative features of the model were varied o...
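The selection-reproduction-mutation cycle described above can be caricatured in a short evolutionary loop over a repertoire of behaviors. All parameters and representational choices below (integer-coded behaviors, the target class, the distance-based fitness weighting) are our illustrative assumptions, not the model's published values:

```python
import random

random.seed(1)

# A repertoire of POP behaviors, each coded as a BITS-bit integer.
# Emitting a value below TARGET counts as the reinforced operant class.
POP, BITS, TARGET = 100, 10, 41

def emit(pop):
    """The digital organism emits one behavior from its repertoire."""
    return random.choice(pop)

def next_generation(pop, emitted, reinforced, mut_rate=0.01):
    """Selection: after reinforcement, parents are chosen with fitness
    favoring behaviors near the reinforced one; otherwise parentage is
    random. Reproduction then applies bit-flip mutation."""
    if reinforced:
        weights = [1.0 / (1 + abs(b - emitted)) for b in pop]
        parents = random.choices(pop, weights=weights, k=POP)
    else:
        parents = random.choices(pop, k=POP)
    children = []
    for p in parents:
        for bit in range(BITS):
            if random.random() < mut_rate:
                p ^= 1 << bit  # mutation: flip one bit
        children.append(p)
    return children

pop = [random.getrandbits(BITS) for _ in range(POP)]
for _ in range(300):
    b = emit(pop)
    pop = next_generation(pop, b, reinforced=(b < TARGET))
```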
Creation of 'Ukrytie' objects computer model
International Nuclear Information System (INIS)
Mazur, A.B.; Kotlyarov, V.T.; Ermolenko, A.I.; Podbereznyj, S.S.; Postil, S.D.; Shaptala, D.V.
1999-01-01
A partial computer model of the 'Ukrytie' object was created with the use of geoinformation technologies. The computer model makes it possible to provide information support for the works related to the 'Ukrytie' object's stabilization and its conversion into an ecologically safe system, for analyzing, forecasting and controlling the processes occurring in the 'Ukrytie' object. Elements and structures of the 'Ukrytie' object were designed and input into the model
Computational models in physics teaching: a framework
Directory of Open Access Journals (Sweden)
Marco Antonio Moreira
2012-08-01
The purpose of the present paper is to propose a theoretical framework to promote and assist meaningful physics learning through computational models. Our proposal is based on the use of a tool, the AVM diagram, to design educational activities involving modeling and computer simulations. The idea is to provide a starting point for the construction and implementation of didactical approaches grounded in a coherent epistemological view of scientific modeling.
Numerical modeling of hybrid fiber-reinforced concrete (hyfrc)
International Nuclear Information System (INIS)
Hameed, R.; Turatsinze, A.
2015-01-01
A model for the numerical simulation of the mechanical response of concrete reinforced with slipping and non-slipping metallic fibers in hybrid form is presented in this paper. The constitutive law used to model plain concrete behaviour is based on plasticity and damage theories and is capable of determining localized crack opening in three-dimensional (3-D) systems. The behaviour law used for slipping metallic fibers is formulated based on the effective stress carried by these fibers after the concrete matrix has cracked. A continuous approach is proposed to model the effect of the addition of non-slipping metallic fibers to plain concrete. This approach considers the constitutive law of the concrete matrix with the increased fracture energy in tension obtained experimentally in direct tension tests on Fiber Reinforced Concrete (FRC). To simulate the mechanical behaviour of hybrid fiber-reinforced concrete (HyFRC), the proposed approaches to model non-slipping metallic fibers and the constitutive laws of plain concrete and slipping fibers are used simultaneously without any additional equation. All the parameters used by the proposed model have physical meanings and are determined through experiments or drawn from the literature. The model was implemented in the Finite Element (FE) code CASTEM and tested on FRC prismatic notched specimens in flexure. Model predictions showed good agreement with experimental results. (author)
Modelling and control of a light-duty hybrid electric truck
Park, Jong-Kyu
2006-01-01
This study concentrates on modelling and developing the controller for a light-duty hybrid electric truck. The hybrid electric vehicle has advantages in fuel economy. However, there have been relatively few studies on commercial HEVs, whilst a considerable number of studies on the hybrid electric system have been conducted in the field of passenger cars. The current status and the methodologies for developing the LD hybrid electric truck model have therefore been studied through the ...
Maximum Mass of Hybrid Stars in the Quark Bag Model
Alaverdyan, G. B.; Vartanyan, Yu. L.
2017-12-01
The effect of model parameters in the equation of state for quark matter on the magnitude of the maximum mass of hybrid stars is examined. Quark matter is described in terms of the extended MIT bag model including corrections for one-gluon exchange. For nucleon matter in the range of densities corresponding to the phase transition, a relativistic equation of state is used that is calculated with two-particle correlations taken into account based on using the Bonn meson-exchange potential. The Maxwell construction is used to calculate the characteristics of the first order phase transition and it is shown that for a fixed value of the strong interaction constant αs, the baryon concentrations of the coexisting phases grow monotonically as the bag constant B increases. It is shown that for a fixed value of the strong interaction constant αs, the maximum mass of a hybrid star increases as the bag constant B decreases. For a given value of the bag parameter B, the maximum mass rises as the strong interaction constant αs increases. It is shown that the configurations of hybrid stars with maximum masses equal to or exceeding the mass of the currently known most massive pulsar are possible for values of the strong interaction constant αs > 0.6 and sufficiently low values of the bag constant.
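The qualitative trends reported above can be reproduced with a few lines of arithmetic. The sketch below is an illustration only (hypothetical function names; natural units with ħ = c = 1; massless three-flavor quarks): it evaluates the bag-model pressure with the first-order one-gluon-exchange factor (1 − 2αs/π), showing that at fixed quark chemical potential the pressure falls as either B or αs grows, the trend behind the maximum-mass behaviour described in the abstract.

```python
import math

def quark_pressure(mu, B, alpha_s):
    """Pressure (natural units, MeV^4) of massless three-flavor quark matter
    in the MIT bag model with a first-order one-gluon-exchange correction."""
    return (3.0 * mu**4 / (4.0 * math.pi**2)) * (1.0 - 2.0 * alpha_s / math.pi) - B

def energy_density(mu, B, alpha_s):
    """For massless quarks the bag-model relation eps = 3p + 4B holds."""
    return 3.0 * quark_pressure(mu, B, alpha_s) + 4.0 * B

# A smaller bag constant gives a stiffer equation of state: higher pressure
# at the same chemical potential (sample values in MeV are illustrative).
p_small_bag = quark_pressure(400.0, 145.0**4, 0.3)
p_large_bag = quark_pressure(400.0, 165.0**4, 0.3)
```

The monotonic dependence on B and αs visible here is exactly what the abstract traces through to the maximum mass of the hybrid star configurations.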
Introducing Seismic Tomography with Computational Modeling
Neves, R.; Neves, M. L.; Teodoro, V.
2011-12-01
Learning seismic tomography principles and techniques involves advanced physical and computational knowledge. In-depth learning of such computational skills is a difficult cognitive process that requires a strong background in physics, mathematics and computer programming. The corresponding learning environments and pedagogic methodologies should then involve sets of computational modelling activities with computer software systems which allow students to improve their mathematical or programming knowledge and simultaneously focus on the learning of seismic wave propagation and inverse theory. To reduce the level of cognitive opacity associated with mathematical or programming knowledge, several computer modelling systems have already been developed (Neves & Teodoro, 2010). Among such systems, Modellus is particularly well suited to achieve this goal because it is a domain-general environment for explorative and expressive modelling with the following main advantages: 1) an easy and intuitive creation of mathematical models using just standard mathematical notation; 2) the simultaneous exploration of images, tables, graphs and object animations; 3) the attribution of mathematical properties expressed in the models to animated objects; and finally 4) the computation and display of mathematical quantities obtained from the analysis of images and graphs. Here we describe virtual simulations and educational exercises which enable students to grasp easily the fundamentals of seismic tomography. The simulations make the lecture more interactive and allow students to overcome their lack of advanced mathematical or programming knowledge and focus on the learning of seismological concepts and processes, taking advantage of basic scientific computation methods and tools.
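As a taste of the inverse problem such activities build toward, here is a minimal straight-ray travel-time tomography example (a toy illustration, not part of Modellus): travel times are linear in the cell slownesses, t = G s, so a least-squares solve recovers the slowness image.

```python
import numpy as np

# Toy straight-ray travel-time tomography on a 2x2 grid of slowness cells
# (unit cell size). Each row of G holds the path length of one ray in each
# cell; travel times are t = G @ s, and s is recovered by least squares.
s_true = np.array([1.0, 2.0, 3.0, 4.0])    # slowness of cells (00, 01, 10, 11)
G = np.array([
    [1, 1, 0, 0],                   # horizontal ray through the top row
    [0, 0, 1, 1],                   # horizontal ray through the bottom row
    [1, 0, 1, 0],                   # vertical ray through the left column
    [0, 1, 0, 1],                   # vertical ray through the right column
    [np.sqrt(2), 0, 0, np.sqrt(2)], # diagonal ray removes the rank deficiency
])
t_obs = G @ s_true                  # synthetic "observed" travel times
s_est, *_ = np.linalg.lstsq(G, t_obs, rcond=None)
```

With only the four horizontal/vertical rays the system is rank-deficient (row sums equal column sums), which is why the diagonal ray is needed; real tomography faces the same ill-posedness at vastly larger scale.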
A MODEL FOR APPLYING FIQH MU'AMALAH TO HYBRID CONTRACT FORMULATION
Directory of Open Access Journals (Sweden)
Ali Murtadho
2013-10-01
Full Text Available Modern literature on fiqh mu'āmalah discusses various contract formulations capable of maximizing profit in the shariah finance industry. These new contract modifications are syntheses of existing contracts, formulated in such a way as to become an integrated contract. This formulation is known as a hybrid contract or multi-contract (al-'uqūd al-murakkabah). Examples include bay' bi thaman 'ājil, ijārah muntahiyah bi 'l-tamlīk, and mushārakah mutanāqiṣah. This study describes models of hybrid contracts in more detail and explores the shari'ah principles applied in modern financial institutions. It finds a potential shift from the ideal values of the spirit of shari'ah toward a spirit of competition that is shari'ah-based only formally.
Uncertainty in biology a computational modeling approach
Gomez-Cabrero, David
2016-01-01
Computational modeling of biomedical processes is gaining more and more weight in current research into the etiology of biomedical problems and potential treatment strategies. Computational modeling allows researchers to reduce, refine and replace animal experimentation as well as to translate findings obtained in these experiments to the human context. However, these biomedical problems are inherently complex, with a myriad of influencing factors, which strongly complicates the model building and validation process. This book addresses four main issues related to the building and validation of computational models of biomedical processes: model establishment under uncertainty; model selection and parameter fitting; sensitivity analysis and model adaptation; and model predictions under uncertainty. In each of the abovementioned areas, the book discusses a number of key techniques by means of a general theoretical description followed by one or more practical examples. This book is intended for graduate stude...
Cai, Jizhe; Naraghi, Mohammad
2016-08-01
In this work, a comprehensive multi-resolution two-dimensional (2D) resistor network model is proposed to analyze the electrical conductivity of hybrid nanomaterials made of an insulating matrix with conductive particles, such as CNT-reinforced nanocomposites and thick film resistors. Unlike existing approaches, our model takes into account the impenetrability of the particles and their random placement within the matrix. Moreover, our model presents a detailed description of intra-particle conductivity via finite element analysis, which to the authors' best knowledge has not been addressed before. The inter-particle conductivity is assumed to be primarily due to electron tunneling. The model is then used to predict the electrical conductivity of electrospun carbon nanofibers as a function of microstructural parameters such as turbostratic domain alignment and aspect ratio. To simulate the microstructure of a single CNF, randomly positioned nucleation sites were seeded and grown as turbostratic particles with anisotropic growth rates. Particle growth proceeded in steps, and growth of each particle in each direction was stopped upon contact with other particles. The study points to the significant contribution of both intra-particle and inter-particle conductivity to the overall conductivity of hybrid composites. The influence of particle alignment and anisotropic growth rate ratio on electrical conductivity is also discussed. The results show that partial alignment, in contrast to complete alignment, can result in maximum electrical conductivity of the whole CNF. High degrees of alignment can adversely affect conductivity by lowering the probability of the formation of a conductive path. The results demonstrate approaches to enhance the electrical conductivity of hybrid materials through controlling their microstructure, which is applicable not only to carbon nanofibers but also to many other types of hybrid composites, such as thick film resistors.
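The core numerical device in any resistor network model can be sketched compactly: represent the network as a weighted graph Laplacian and read off effective resistances from its pseudoinverse. This is a minimal illustration only, not the paper's multi-resolution implementation (which additionally computes intra-particle conductivity by finite elements and tunneling-based inter-particle conductances).

```python
import numpy as np

def effective_resistance(n_nodes, edges, a, b):
    """Effective resistance between nodes a and b of a resistor network.

    edges: list of (i, j, conductance). Builds the weighted graph Laplacian L
    and evaluates R_ab = (e_a - e_b)^T L^+ (e_a - e_b), the standard
    resistor-network identity used (at much larger scale) in such models.
    """
    L = np.zeros((n_nodes, n_nodes))
    for i, j, g in edges:
        L[i, i] += g
        L[j, j] += g
        L[i, j] -= g
        L[j, i] -= g
    e = np.zeros(n_nodes)
    e[a], e[b] = 1.0, -1.0
    return e @ np.linalg.pinv(L) @ e

# Two resistors in series: 1 ohm (g=1) and 2 ohm (g=0.5) -> 3 ohm total.
r_series = effective_resistance(3, [(0, 1, 1.0), (1, 2, 0.5)], 0, 2)
```

Parallel edges between the same node pair simply sum their conductances in L, which is how tunneling junctions between adjacent particles would enter.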
International Nuclear Information System (INIS)
Bhoil, Amit; Gayana, Shankramurthy; Sood, Ashwani; Bhattacharya, Anish; Mittal, Bhagwant Rai
2013-01-01
It is important to differentiate focal nodular hyperplasia (FNH), a benign condition of the liver most commonly affecting women, from other neoplasms such as hepatic adenoma and metastasis. The functional reticuloendothelial features of FNH can be demonstrated by scintigraphy. We present the case of a patient with breast cancer in whom fluorodeoxyglucose positron emission tomography/computed tomography (CT) showed a homogeneous hyperdense lesion in the liver, which on Tc-99m sulfur colloid single-photon emission computed tomography/CT was found to have increased focal tracer uptake, suggestive of FNH.
Ranked retrieval of Computational Biology models.
Henkel, Ron; Endler, Lukas; Peters, Andre; Le Novère, Nicolas; Waltemath, Dagmar
2010-08-11
The study of biological systems demands computational support. If targeting a biological problem, the reuse of existing computational models can save time and effort. Deciding among potentially suitable models, however, becomes more challenging with the increasing number of computational models available, and even more so when considering the models' growing complexity. Firstly, among a set of potential model candidates it is difficult to choose the model that best suits one's needs. Secondly, it is hard to grasp the nature of an unknown model listed in a search result set, and to judge how well it fits the particular problem one has in mind. Here we present an improved search approach for computational models of biological processes. It is based on existing retrieval and ranking methods from Information Retrieval. The approach incorporates annotations suggested by MIRIAM, and additional meta-information. It is now part of the search engine of BioModels Database, a standard repository for computational models. The introduced concept and implementation are, to our knowledge, the first application of Information Retrieval techniques to model search in Computational Systems Biology. Using the example of BioModels Database, it was shown that the approach is feasible and extends the current possibilities to search for relevant models. The advantages of our system over existing solutions are that we incorporate a rich set of meta-information, and that we provide the user with a relevance ranking of the models found for a query. Better search capabilities in model databases are expected to have a positive effect on the reuse of existing models.
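The ranking idea can be illustrated with a minimal TF-IDF scorer over toy model descriptions. This is illustrative only: the actual BioModels search also exploits MIRIAM annotations and richer meta-information, and the model identifiers below are made up.

```python
import math
from collections import Counter

def rank_models(corpus, query):
    """Rank model descriptions by a smoothed TF-IDF score, a minimal version
    of the Information Retrieval approach described above.
    corpus: dict of model_id -> free-text description."""
    docs = {mid: Counter(text.lower().split()) for mid, text in corpus.items()}
    n = len(docs)

    def idf(term):
        df = sum(1 for tf in docs.values() if term in tf)   # document frequency
        return math.log((n + 1) / (df + 1)) + 1.0           # smoothed idf

    def score(tf):
        return sum(tf[t] * idf(t) for t in query.lower().split())

    return sorted(docs, key=lambda mid: score(docs[mid]), reverse=True)

corpus = {
    "MODEL-A": "glycolysis kinetic model yeast",
    "MODEL-B": "cell cycle oscillator model",
    "MODEL-C": "glycolysis and glucose transport in trypanosoma",
}
ranking = rank_models(corpus, "glycolysis glucose")
```

The descriptions matching more (and rarer) query terms rank first, which is the basic mechanism the paper extends with annotation-aware fields.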
Ma, Lihong; Jin, Weimin
2018-01-01
A novel symmetric and asymmetric hybrid optical cryptosystem is proposed based on compressive sensing combined with computer generated holography. In this method there are six encryption keys, among which two decryption phase masks differ from the two random phase masks used in the encryption process; the encryption system therefore has features of both symmetric and asymmetric cryptography. On the other hand, because computer generated holography can flexibly digitize the encrypted information, compressive sensing can significantly reduce the data volume and, moreover, the final encrypted image is a real-valued function thanks to phase truncation, the method favors the storage and transmission of the encrypted data. The experimental results demonstrate that the proposed encryption scheme boosts security and has high robustness against noise and occlusion attacks.
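The symmetric core of such schemes descends from classical double random phase encoding, which can be sketched in a few lines. This is a simplified illustration only; the compressive-sensing, computer-generated-holography and phase-truncation stages of the proposed cryptosystem are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((32, 32))                    # stand-in for the plaintext image

# Two statistically independent random phase masks (the encryption keys).
m1 = np.exp(2j * np.pi * rng.random((32, 32)))
m2 = np.exp(2j * np.pi * rng.random((32, 32)))

# Encryption: mask in the input plane, Fourier transform, mask in the
# Fourier plane, inverse transform. The result is complex white noise.
cipher = np.fft.ifft2(np.fft.fft2(img * m1) * m2)

# Decryption reverses each step with the conjugate masks.
recovered = np.abs(np.fft.ifft2(np.fft.fft2(cipher) * np.conj(m2)) * np.conj(m1))
```

Because both masks are unimodular, the conjugate masks invert the encryption exactly; asymmetric variants break this symmetry so that the decryption keys differ from the encryption masks, as in the abstract.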
DEFF Research Database (Denmark)
Christiansen, Niels Hørbye; Voie, Per Erlend Torbergsen; Høgsberg, Jan Becker
2015-01-01
simultaneously, this method is very demanding in terms of numerical efficiency and computational power. Therefore, this method has not yet proved to be feasible. It has recently been shown how a hybrid method combining classical numerical models and artificial neural networks (ANN) can provide a dramatic...... prior to the experiment and with a properly trained ANN it is no problem to obtain accurate simulations much faster than real time-without any need for large computational capacity. The present study demonstrates how this hybrid method can be applied to the active truncated experiments yielding a system...
A Simple Hybrid Model for Short-Term Load Forecasting
Directory of Open Access Journals (Sweden)
Suseelatha Annamareddi
2013-01-01
Full Text Available The paper proposes a simple hybrid model to forecast electrical load data based on the wavelet transform technique and double exponential smoothing. The historical noisy load series data is decomposed into deterministic and fluctuation components using suitable wavelet coefficient thresholds and the wavelet reconstruction method. The variation characteristics of the resulting series are analyzed to arrive at reasonable thresholds that yield good denoising results. The constituent series are then forecast using appropriate adaptive exponential smoothing models. A case study performed on California energy market data demonstrates that the proposed method can offer high forecasting precision for very short-term forecasts, considering a time horizon of two weeks.
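The smoothing stage can be illustrated with Holt's double exponential smoothing. This is a minimal sketch of that one component (the wavelet decomposition is omitted, and the parameter values are arbitrary).

```python
def holt_forecast(y, alpha, beta, horizon):
    """Double (Holt) exponential smoothing: maintain a level l and a trend b,
    then forecast y[t+h] ~ l + h*b."""
    level, trend = y[0], y[1] - y[0]          # simple initialization
    for obs in y[1:]:
        prev_level = level
        level = alpha * obs + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return [level + h * trend for h in range(1, horizon + 1)]

# On a perfectly linear load series the method is exact.
series = [2.0 + 3.0 * t for t in range(20)]
forecast = holt_forecast(series, alpha=0.5, beta=0.3, horizon=2)
```

In the hybrid scheme, a forecaster of this kind is applied to the denoised constituent series rather than to the raw load data.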
Calibrated and Interactive Modelling of Form-Active Hybrid Structures
DEFF Research Database (Denmark)
Quinn, Gregory; Holden Deleuran, Anders; Piker, Daniel
2016-01-01
Form-active hybrid structures (FAHS) couple two or more different structural elements of low self weight and low or negligible bending flexural stiffness (such as slender beams, cables and membranes) into one structural assembly of high global stiffness. They offer high load-bearing capacity...... software packages which introduce interruptions and data exchange issues in the modelling pipeline. The mechanical precision, stability and open software architecture of Kangaroo has facilitated the development of proof-of-concept modelling pipelines which tackle this challenge and enable powerful...... materially-informed sketching. Making use of a projection-based dynamic relaxation solver for structural analysis, explorative design has proven to be highly effective....
Simplified Model for the Hybrid Method to Design Stabilising Piles Placed at the Toe of Slopes
Directory of Open Access Journals (Sweden)
Dib M.
2018-01-01
Full Text Available Stabilizing precarious slopes by installing piles has become a widespread technique for landslide prevention. The design of slope-stabilizing piles by the finite element method is more accurate than the conventional methods. This accuracy stems from the method's ability to simulate complex configurations and to analyze the soil-pile interaction effect. However, engineers prefer to use simplified analytical techniques to design slope-stabilizing piles, due to the high computational resources required by the finite element method. Aiming to combine the accuracy of the finite element method with the simplicity of the analytical approaches, a hybrid methodology to design slope-stabilizing piles was proposed in 2012. It consists of two steps: (1) an analytical estimation of the resisting force needed to stabilize the precarious slope, and (2) a numerical analysis to define the adequate pile configuration that offers the required resisting force. The hybrid method is applicable only to the analysis and design of stabilizing piles placed in the middle of the slope; however, in certain cases, such as road construction, piles need to be placed at the toe of the slope. Therefore, in this paper a simplified model for the hybrid method is developed to analyze and design stabilizing piles placed at the toe of a precarious slope. The simplified model is validated by a comparative analysis with the fully coupled finite element model.
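Step (1) of such a hybrid procedure can be illustrated with a generic planar limit-equilibrium estimate. This is a hedged sketch with made-up soil values, not the paper's specific analytical formulation.

```python
import math

def planar_slope_fs(weight, beta_deg, cohesion, length, phi_deg):
    """Factor of safety of a planar slip surface, per metre of slope width:
    FS = resisting / driving. weight in kN/m, cohesion in kPa, slip length
    in m, beta = slope angle, phi = friction angle (illustrative values)."""
    beta, phi = math.radians(beta_deg), math.radians(phi_deg)
    driving = weight * math.sin(beta)
    resisting = cohesion * length + weight * math.cos(beta) * math.tan(phi)
    return resisting / driving

def required_pile_force(weight, beta_deg, cohesion, length, phi_deg, fs_target):
    """Extra resisting force (kN/m) the piles must supply to reach fs_target."""
    beta = math.radians(beta_deg)
    driving = weight * math.sin(beta)
    fs = planar_slope_fs(weight, beta_deg, cohesion, length, phi_deg)
    return max(0.0, (fs_target - fs) * driving)

fs = planar_slope_fs(1000.0, 30.0, 10.0, 20.0, 25.0)
force = required_pile_force(1000.0, 30.0, 10.0, 20.0, 25.0, fs_target=1.5)
```

The force returned by such an estimate is what step (2) then tries to match with an adequate pile configuration in the numerical model.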
A simplified computational fluid-dynamic approach to the oxidizer injector design in hybrid rockets
Di Martino, Giuseppe D.; Malgieri, Paolo; Carmicino, Carmine; Savino, Raffaele
2016-12-01
Fuel regression rate in hybrid rockets is non-negligibly affected by the oxidizer injection pattern. In this paper a simplified computational approach, developed in an attempt to optimize the oxidizer injector design, is discussed. Numerical simulations of the thermo-fluid-dynamic field in a hybrid rocket are carried out with a commercial solver to investigate several injection configurations, with the aim of increasing the fuel regression rate and minimizing the consumption unevenness, while still favoring the establishment of flow recirculation at the motor head end, which is generated with an axial nozzle injector and has been demonstrated to promote combustion stability as well as larger efficiency and regression rate. All the computations have been performed on the configuration of a lab-scale hybrid rocket motor available at the propulsion laboratory of the University of Naples, with typical operating conditions. After a preliminary comparison between the two baseline limiting cases of an axial subsonic nozzle injector and a uniform injection through the prechamber, a parametric analysis has been carried out by varying the oxidizer jet flow divergence angle, as well as the grain port diameter and the oxidizer mass flux, to study the effect of the flow divergence on the heat transfer distribution over the fuel surface. Some experimental firing test data are presented and, under the hypothesis that fuel regression rate and surface heat flux are proportional, the measured fuel consumption axial profiles are compared with the predicted surface heat flux, showing fairly good agreement, which allowed validating the employed design approach. Finally an optimized injector design is proposed.
Workflow Scheduling Using Hybrid GA-PSO Algorithm in Cloud Computing
Directory of Open Access Journals (Sweden)
Ahmad M. Manasrah
2018-01-01
Full Text Available Cloud computing environments provide several on-demand services and resource sharing for clients. Business processes are managed using workflow technology over the cloud, which represents one of the challenges in using the resources in an efficient manner due to the dependencies between the tasks. In this paper, a hybrid GA-PSO algorithm is proposed to allocate tasks to the resources efficiently. The hybrid GA-PSO algorithm aims to reduce the makespan and the cost and to balance the load of the dependent tasks over the heterogeneous resources in cloud computing environments. The experimental results show that the GA-PSO algorithm decreases the total execution time of the workflow tasks in comparison with the GA, PSO, HSGA, WSGA, and MTCT algorithms. Furthermore, it reduces the execution cost. In addition, it improves the load balancing of the workflow application over the available resources. Finally, the obtained results also show that the proposed algorithm converges to optimal solutions faster and with higher quality compared to the other algorithms.
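A toy version of such a hybrid search is sketched below for intuition only: it schedules independent tasks (the dependency, cost and load-balancing terms of the full algorithm are omitted), and the discrete "velocity" update is an assumed analogue that probabilistically copies genes from personal and global bests.

```python
import random

def hybrid_ga_pso(times, n_machines, pop=30, gens=40, seed=1):
    """Toy hybrid GA-PSO assigning independent tasks to machines to
    minimise makespan. One gene per task = index of its machine."""
    rng = random.Random(seed)
    n = len(times)

    def makespan(assign):
        loads = [0.0] * n_machines
        for task, m in enumerate(assign):
            loads[m] += times[task]
        return max(loads)

    swarm = [[rng.randrange(n_machines) for _ in range(n)] for _ in range(pop)]
    pbest = [s[:] for s in swarm]                 # personal bests
    gbest = min(swarm, key=makespan)[:]           # global best

    for _ in range(gens):
        for i, particle in enumerate(swarm):
            # GA part: one-point crossover with a random mate, then mutation.
            mate = swarm[rng.randrange(pop)]
            cut = rng.randrange(1, n)
            child = particle[:cut] + mate[cut:]
            if rng.random() < 0.3:
                child[rng.randrange(n)] = rng.randrange(n_machines)
            # PSO-like part: pull genes toward personal and global bests.
            for g in range(n):
                r = rng.random()
                if r < 0.25:
                    child[g] = pbest[i][g]
                elif r < 0.5:
                    child[g] = gbest[g]
            swarm[i] = child
            if makespan(child) < makespan(pbest[i]):
                pbest[i] = child[:]
                if makespan(child) < makespan(gbest):
                    gbest = child[:]
    return gbest, makespan(gbest)

best, best_makespan = hybrid_ga_pso([4, 3, 2, 2, 1, 1], n_machines=2)
```

On this tiny instance the optimum makespan is 7 (total work 13 over 2 machines), which the hybrid search finds easily; the real algorithm must additionally respect task dependencies and heterogeneous resource speeds.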
Autumn Algorithm-Computation of Hybridization Networks for Realistic Phylogenetic Trees.
Huson, Daniel H; Linz, Simone
2018-01-01
A minimum hybridization network is a rooted phylogenetic network that displays two given rooted phylogenetic trees using a minimum number of reticulations. Previous mathematical work on their calculation has usually assumed the input trees to be bifurcating, correctly rooted, or that they both contain the same taxa. These assumptions do not hold in biological studies and "realistic" trees have multifurcations, are difficult to root, and rarely contain the same taxa. We present a new algorithm for computing minimum hybridization networks for a given pair of "realistic" rooted phylogenetic trees. We also describe how the algorithm might be used to improve the rooting of the input trees. We introduce the concept of "autumn trees", a nice framework for the formulation of algorithms based on the mathematics of "maximum acyclic agreement forests". While the main computational problem is hard, the run-time depends mainly on how different the given input trees are. In biological studies, where the trees are reasonably similar, our parallel implementation performs well in practice. The algorithm is available in our open source program Dendroscope 3, providing a platform for biologists to explore rooted phylogenetic networks. We demonstrate the utility of the algorithm using several previously studied data sets.
Computational challenges in modeling gene regulatory events.
Pataskar, Abhijeet; Tiwari, Vijay K
2016-10-19
Cellular transcriptional programs driven by genetic and epigenetic mechanisms could be better understood by integrating "omics" data and subsequently modeling the gene-regulatory events. Toward this end, computational biology should keep pace with evolving experimental procedures and data availability. This article gives an exemplified account of the current computational challenges in molecular biology.
Hong, Keum-Shik; Khan, Muhammad Jawad
2017-01-01
In this article, non-invasive hybrid brain–computer interface (hBCI) technologies for improving classification accuracy and increasing the number of commands are reviewed. Hybridization combining more than two modalities is a new trend in brain imaging and prosthesis control. Electroencephalography (EEG), due to its easy use and fast temporal resolution, is most widely utilized in combination with other brain/non-brain signal acquisition modalities, for instance, functional near infrared spec...
Free-piston engine linear generator for hybrid vehicles modeling study
Callahan, T. J.; Ingram, S. K.
1995-05-01
Development of a free piston engine linear generator was investigated for use as an auxiliary power unit for a hybrid electric vehicle. The main focus of the program was to develop an efficient linear generator concept to convert the piston motion directly into electrical power. Computer modeling techniques were used to evaluate five different designs for linear generators. These designs included permanent magnet generators, reluctance generators, linear DC generators, and two and three-coil induction generators. The efficiency of the linear generator was highly dependent on the design concept. The two-coil induction generator was determined to be the best design, with an efficiency of approximately 90 percent.
Students' Attitude in a Web-enhanced Hybrid Course: A Structural Equation Modeling Inquiry
Cheng-Chang Sam Pan; Stephen Sivo; James Brophy
2003-01-01
The present study focuses on five latent factors affecting students' use of WebCT in a Web-enhanced hybrid undergraduate course at a southeastern university in the United States. An online questionnaire is used to measure a hypothesized model composed of two exogenous variables (i.e., subjective norm and computer self-efficacy), three endogenous variables (i.e., perceived ease of use, perceived usefulness, and attitude toward WebCT use), one dependent variable (i.e., actual system use), and elev...
Notions of similarity for computational biology models
Waltemath, Dagmar
2016-03-21
Computational models used in biology are rapidly increasing in complexity, size, and numbers. To build such large models, researchers need to rely on software tools for model retrieval, model combination, and version control. These tools need to be able to quantify the differences and similarities between computational models. However, depending on the specific application, the notion of similarity may greatly vary. A general notion of model similarity, applicable to various types of models, is still missing. Here, we introduce a general notion of quantitative model similarities, survey the use of existing model comparison methods in model building and management, and discuss potential applications of model comparison. To frame model comparison as a general problem, we describe a theoretical approach to defining and computing similarities based on different model aspects. Potentially relevant aspects of a model comprise its references to biological entities, network structure, mathematical equations and parameters, and dynamic behaviour. Future similarity measures could combine these model aspects in flexible, problem-specific ways in order to mimic users' intuition about model similarity, and to support complex model searches in databases.
Notions of similarity for computational biology models
Waltemath, Dagmar; Henkel, Ron; Hoehndorf, Robert; Kacprowski, Tim; Knuepfer, Christian; Liebermeister, Wolfram
2016-01-01
Computational models used in biology are rapidly increasing in complexity, size, and numbers. To build such large models, researchers need to rely on software tools for model retrieval, model combination, and version control. These tools need to be able to quantify the differences and similarities between computational models. However, depending on the specific application, the notion of similarity may greatly vary. A general notion of model similarity, applicable to various types of models, is still missing. Here, we introduce a general notion of quantitative model similarities, survey the use of existing model comparison methods in model building and management, and discuss potential applications of model comparison. To frame model comparison as a general problem, we describe a theoretical approach to defining and computing similarities based on different model aspects. Potentially relevant aspects of a model comprise its references to biological entities, network structure, mathematical equations and parameters, and dynamic behaviour. Future similarity measures could combine these model aspects in flexible, problem-specific ways in order to mimic users' intuition about model similarity, and to support complex model searches in databases.
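One simple, concrete instance of a per-aspect similarity is a Jaccard index over the sets a model is annotated with, combined across aspects with problem-specific weights. This is an illustrative sketch; the aspect names and the second parameter set are made up.

```python
def annotation_similarity(model_a, model_b):
    """Jaccard similarity over two annotation sets: |A & B| / |A | B|.
    Two empty sets are defined as identical."""
    a, b = set(model_a), set(model_b)
    return len(a & b) / len(a | b) if a | b else 1.0

def combined_similarity(aspects_a, aspects_b, weights):
    """Weighted combination of per-aspect similarities, mirroring the idea
    of flexible, problem-specific combinations of model aspects."""
    total = sum(weights.values())
    return sum(w * annotation_similarity(aspects_a[k], aspects_b[k])
               for k, w in weights.items()) / total

# Two hypothetical versions of a glycolysis model: same entity annotations,
# one differing parameter name.
glycolysis_v1 = {"entities": ["CHEBI:17234", "CHEBI:15361"], "params": ["k1", "k2"]}
glycolysis_v2 = {"entities": ["CHEBI:17234", "CHEBI:15361"], "params": ["k1", "k3"]}
sim = combined_similarity(glycolysis_v1, glycolysis_v2,
                          weights={"entities": 2.0, "params": 1.0})
```

Reweighting the aspects changes which models count as "similar", which is exactly the application-dependence the abstract emphasises.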
Hybrid Adaptive Flight Control with Model Inversion Adaptation
Nguyen, Nhan
2011-01-01
This study investigates a hybrid adaptive flight control method as a design possibility for a flight control system that can enable an effective adaptation strategy to deal with off-nominal flight conditions. The hybrid adaptive control blends both direct and indirect adaptive control in a model inversion flight control architecture. The blending of both direct and indirect adaptive control provides a much more flexible and effective adaptive flight control architecture than that with either direct or indirect adaptive control alone. The indirect adaptive control is used to update the model inversion controller by an on-line parameter estimation of uncertain plant dynamics based on two methods. The first parameter estimation method is an indirect adaptive law based on the Lyapunov theory, and the second method is a recursive least-squares indirect adaptive law. The model inversion controller is therefore made to adapt to changes in the plant dynamics due to uncertainty. As a result, the modeling error is reduced that directly leads to a decrease in the tracking error. In conjunction with the indirect adaptive control that updates the model inversion controller, a direct adaptive control is implemented as an augmented command to further reduce any residual tracking error that is not entirely eliminated by the indirect adaptive control.
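The second of the two indirect adaptive laws can be sketched in its generic textbook form, recursive least squares (shown standalone on a toy identification problem, not as the flight-control implementation).

```python
import numpy as np

def recursive_least_squares(phis, ys, lam=1.0):
    """Standard RLS estimator of theta in y = phi^T theta.
    phis: (T, n) regressor rows; ys: (T,) measurements; lam: forgetting
    factor (lam = 1 means no forgetting)."""
    n = phis.shape[1]
    theta = np.zeros(n)
    P = 1e6 * np.eye(n)                # large initial covariance: low confidence
    for phi, y in zip(phis, ys):
        K = P @ phi / (lam + phi @ P @ phi)      # gain vector
        theta = theta + K * (y - phi @ theta)    # innovation update
        P = (P - np.outer(K, phi @ P)) / lam     # covariance update
    return theta

rng = np.random.default_rng(2)
theta_true = np.array([1.5, -0.7, 2.0])          # "plant" parameters to identify
phis = rng.standard_normal((200, 3))
ys = phis @ theta_true                            # noiseless measurements
theta_hat = recursive_least_squares(phis, ys)
```

In the hybrid architecture, an estimate of this kind continually refreshes the model-inversion controller, while the direct adaptive augmentation absorbs whatever tracking error remains.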
Amir, Sahar Z.
2017-06-09
A Hybrid Embedded Fracture (HEF) model was developed to reduce various computational costs while maintaining physical accuracy (Amir and Sun, 2016). HEF splits the computations into a fine scale and a coarse scale. The fine scale solves analytically for the matrix-fracture flux exchange parameter. The coarse scale solves for the properties of the entire system. In the literature, fractures were assumed to be either vertical or horizontal for simplification (Warren and Root, 1963), and the matrix-fracture flux exchange parameter was given a few equations built on that assumption (Kazemi, 1968; Lemonnier and Bourbiaux, 2010). However, such simplified cases do not apply directly to actual random fracture shapes, directions and orientations. This paper shows that the HEF fine-scale analytic solution (Amir and Sun, 2016) generates the flux exchange parameter found in the literature for the vertical and horizontal fracture cases. For other fracture cases, the flux exchange parameter changes according to the angle, slope and direction of the fracture. This conclusion arises from the analysis of both the Discrete Fracture Network (DFN) and the HEF schemes. The behavior of both schemes is analyzed under identical fracture conditions and the results are shown and discussed. Then, a generalization is illustrated for any slightly compressible single-phase fluid within fractured porous media and its results are discussed.
Amir, Sahar Z.; Chen, Huangxin; Sun, Shuyu
2017-01-01
A Hybrid Embedded Fracture (HEF) model was developed to reduce various computational costs while maintaining physical accuracy (Amir and Sun, 2016). HEF splits the computations into a fine scale and a coarse scale. The fine scale solves analytically for the matrix-fracture flux exchange parameter. The coarse scale solves for the properties of the entire system. In the literature, fractures were assumed to be either vertical or horizontal for simplification (Warren and Root, 1963), and the matrix-fracture flux exchange parameter was given a few equations built on that assumption (Kazemi, 1968; Lemonnier and Bourbiaux, 2010). However, such simplified cases do not apply directly to actual random fracture shapes, directions and orientations. This paper shows that the HEF fine-scale analytic solution (Amir and Sun, 2016) generates the flux exchange parameter found in the literature for the vertical and horizontal fracture cases. For other fracture cases, the flux exchange parameter changes according to the angle, slope and direction of the fracture. This conclusion arises from the analysis of both the Discrete Fracture Network (DFN) and the HEF schemes. The behavior of both schemes is analyzed under identical fracture conditions and the results are shown and discussed. Then, a generalization is illustrated for any slightly compressible single-phase fluid within fractured porous media and its results are discussed.
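For the vertical/horizontal special case mentioned above, a commonly quoted Kazemi-type shape factor and the associated quasi-steady transfer term look like the sketch below. This is a hedged illustration: the exact constants and forms vary between the formulations cited, and the function names are made up.

```python
def kazemi_shape_factor(lx, ly=None, lz=None):
    """Kazemi-type matrix-fracture shape factor sigma = 4 * sum(1 / L_i^2)
    for orthogonal fracture sets with spacings lx, ly, lz (pass one, two
    or three spacings for the 1-D, 2-D or 3-D cases)."""
    return 4.0 * sum(1.0 / l**2 for l in (lx, ly, lz) if l is not None)

def matrix_fracture_flux(sigma, perm, visc, p_matrix, p_fracture, volume):
    """Quasi-steady transfer term q = sigma * (k / mu) * (p_m - p_f) * V,
    the quantity the HEF fine scale solves for analytically."""
    return sigma * (perm / visc) * (p_matrix - p_fracture) * volume

sigma_3d = kazemi_shape_factor(10.0, 10.0, 10.0)   # equal 10 m spacings
```

It is this family of fixed-geometry expressions that the HEF fine-scale solution reproduces as a special case and then generalizes to arbitrary fracture angles and orientations.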
Predictive Models and Computational Embryology
EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...
Hybrid 3D pregnant woman and fetus modeling from medical imaging for dosimetry studies
Energy Technology Data Exchange (ETDEWEB)
Bibin, Lazar; Anquez, Jeremie; Angelini, Elsa; Bloch, Isabelle [Telecom ParisTech, CNRS UMR 5141 LTCI, Institut TELECOM, Paris (France)
2010-01-15
Numerical simulations studying the interactions between radiations and biological tissues require the use of three-dimensional models of the human anatomy at various ages and in various positions. Several detailed and flexible models exist for adults and children and have been extensively used for dosimetry. On the other hand, progress of simulation studies focusing on pregnant women and the fetus have been limited by the fact that only a small number of models exist with rather coarse anatomical details and a poor representation of the anatomical variability of the fetus shape and its position over the entire gestation. In this paper, we propose a new computational framework to generate 3D hybrid models of pregnant women, composed of fetus shapes segmented from medical images and a generic maternal body envelope representing a synthetic woman scaled to the dimension of the uterus. The computational framework includes the following tasks: image segmentation, contour regularization, mesh-based surface reconstruction, and model integration. A series of models was created to represent pregnant women at different gestational stages and with the fetus in different positions, all including detailed tissues of the fetus and the utero-fetal unit, which play an important role in dosimetry. These models were anatomically validated by clinical obstetricians and radiologists who verified the accuracy and representativeness of the anatomical details, and the positioning of the fetus inside the maternal body. The computational framework enables the creation of detailed, realistic, and representative fetus models from medical images, directly exploitable for dosimetry simulations. (orig.)
Hybrid 3D pregnant woman and fetus modeling from medical imaging for dosimetry studies
International Nuclear Information System (INIS)
Bibin, Lazar; Anquez, Jeremie; Angelini, Elsa; Bloch, Isabelle
2010-01-01
Numerical simulations studying the interactions between radiations and biological tissues require the use of three-dimensional models of the human anatomy at various ages and in various positions. Several detailed and flexible models exist for adults and children and have been extensively used for dosimetry. On the other hand, progress of simulation studies focusing on pregnant women and the fetus have been limited by the fact that only a small number of models exist with rather coarse anatomical details and a poor representation of the anatomical variability of the fetus shape and its position over the entire gestation. In this paper, we propose a new computational framework to generate 3D hybrid models of pregnant women, composed of fetus shapes segmented from medical images and a generic maternal body envelope representing a synthetic woman scaled to the dimension of the uterus. The computational framework includes the following tasks: image segmentation, contour regularization, mesh-based surface reconstruction, and model integration. A series of models was created to represent pregnant women at different gestational stages and with the fetus in different positions, all including detailed tissues of the fetus and the utero-fetal unit, which play an important role in dosimetry. These models were anatomically validated by clinical obstetricians and radiologists who verified the accuracy and representativeness of the anatomical details, and the positioning of the fetus inside the maternal body. The computational framework enables the creation of detailed, realistic, and representative fetus models from medical images, directly exploitable for dosimetry simulations. (orig.)
Mohajerani, Pouyan; Ntziachristos, Vasilis
2013-07-01
The 360° rotation geometry of the hybrid fluorescence molecular tomography/x-ray computed tomography modality allows for the acquisition of very large datasets, which pose numerical limitations on the reconstruction. We propose a compression method that takes advantage of the correlation of the Born-normalized signal among sources in spatially formed clusters to reduce the size of the system model. The proposed method has been validated using an ex vivo study and an in vivo study of a nude mouse with a subcutaneous 4T1 tumor, with and without the inclusion of a priori anatomical information. Compression rates of up to two orders of magnitude with minimal distortion of the reconstruction have been demonstrated, resulting in a large reduction in weight matrix size and reconstruction time.
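The core idea above — exploiting correlation among sources in spatial clusters to shrink the system model — can be sketched as follows. This is an illustrative toy, not the authors' algorithm: the matrix sizes, cluster structure, and the simple within-cluster row averaging are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Born-normalized weight matrix: one row per source,
# one column per voxel. Sources within a spatial cluster produce
# highly correlated rows, which the compression exploits.
n_sources, n_voxels, n_clusters = 60, 500, 6
base = rng.normal(size=(n_clusters, n_voxels))
labels = np.repeat(np.arange(n_clusters), n_sources // n_clusters)
W = base[labels] + 0.01 * rng.normal(size=(n_sources, n_voxels))

# Compress: replace each cluster of correlated rows by its average,
# reducing the row count by the compression factor.
W_c = np.stack([W[labels == k].mean(axis=0) for k in range(n_clusters)])

print(W.shape, "->", W_c.shape)
```

Here the 60-row system shrinks to 6 rows (a 10x compression) while each retained row stays close to every member of its cluster, which is the regime in which reconstruction distortion stays small.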
Sierra toolkit computational mesh conceptual model
International Nuclear Information System (INIS)
Baur, David G.; Edwards, Harold Carter; Cochran, William K.; Williams, Alan B.; Sjaardema, Gregory D.
2010-01-01
The Sierra Toolkit computational mesh is a software library intended to support massively parallel multi-physics computations on dynamically changing unstructured meshes. This domain of intended use is inherently complex due to distributed memory parallelism, parallel scalability, heterogeneity of physics, heterogeneous discretization of an unstructured mesh, and runtime adaptation of the mesh. Management of this inherent complexity begins with a conceptual analysis and modeling of this domain of intended use; i.e., development of a domain model. The Sierra Toolkit computational mesh software library is designed and implemented based upon this domain model. Software developers using, maintaining, or extending the Sierra Toolkit computational mesh library must be familiar with the concepts/domain model presented in this report.
Computer simulations of the random barrier model
DEFF Research Database (Denmark)
Schrøder, Thomas; Dyre, Jeppe
2002-01-01
A brief review of experimental facts regarding ac electronic and ionic conduction in disordered solids is given, followed by a discussion of what is perhaps the simplest realistic model, the random barrier model (symmetric hopping model). Results from large-scale computer simulations are presented...
Hybrid microscopic depletion model in nodal code DYN3D
International Nuclear Information System (INIS)
Bilodid, Y.; Kotlyar, D.; Shwageraus, E.; Fridman, E.; Kliem, S.
2016-01-01
Highlights: • A new hybrid method of accounting for spectral history effects is proposed. • Local concentrations of over 1000 nuclides are calculated using micro-depletion. • The new method is implemented in the nodal code DYN3D and verified. - Abstract: The paper presents a general hybrid method that combines the micro-depletion technique with correction of micro- and macro-diffusion parameters to account for spectral history effects. The fuel in a core is subjected to time- and space-dependent operational conditions (e.g. coolant density), which cannot be predicted in advance. However, lattice codes assume some average conditions to generate cross sections (XS) for nodal diffusion codes such as DYN3D. Deviation of the local operational history from the average conditions leads to accumulation of errors in the XS, which is referred to as the spectral history effect. Various methods to account for spectral history effects, such as the spectral index, burnup-averaged operational parameters, and micro-depletion, have been implemented in some nodal codes. Recently, an alternative method, which characterizes the fuel depletion state by burnup and ²³⁹Pu concentration (denoted as Pu-correction), was proposed, implemented in the nodal code DYN3D, and verified for a wide range of history effects. The method is computationally efficient; however, it has applicability limitations. The current study seeks to improve the accuracy and applicability range of the Pu-correction method. The proposed hybrid method combines the micro-depletion method with a XS characterization technique similar to the Pu-correction method. The method was implemented in DYN3D and verified on multiple test cases. The results obtained with DYN3D were compared to those obtained with the Monte Carlo code Serpent, which was also used to generate the XS. The observed differences are within the statistical uncertainties.
Design, test and model of a hybrid magnetostrictive hydraulic actuator
International Nuclear Information System (INIS)
Chaudhuri, Anirban; Yoo, Jin-Hyeong; Wereley, Norman M
2009-01-01
The basic operation of hybrid hydraulic actuators involves high-frequency bi-directional operation of an active material that is converted to uni-directional motion of hydraulic fluid using valves. A hybrid actuator was developed using the magnetostrictive material Terfenol-D as the driving element and hydraulic oil as the working fluid. Two different lengths of Terfenol-D rod, 51 and 102 mm, with the same diameter, 12.7 mm, were used. Tests with no load and with load were carried out to measure the performance for uni-directional motion of the output piston at different pumping frequencies. The maximum no-load flow rates were 24.8 cm³ s⁻¹ and 22.7 cm³ s⁻¹ with the 51 mm and 102 mm long rods respectively, and the peaks were noted around 325 Hz pumping frequency. The blocked force of the actuator was close to 89 N in both cases. A key observation was that, at these high pumping frequencies, the inertial effects of the fluid mass dominate over the viscous effects and the problem becomes unsteady in nature. In this study, we also develop a mathematical model of the hybrid hydraulic actuator in the time domain to show the basic operational principle under varying conditions and to capture phenomena affecting system performance. Governing equations for the pumping piston and output shaft were obtained from force equilibrium considerations, while compressibility of the working fluid was taken into account by incorporating the bulk modulus. Fluid inertia was represented by a lumped-parameter approach to the transmission line model, giving rise to strongly coupled ordinary differential equations. The model was then used to calculate the no-load velocities of the actuator at different pumping frequencies, and simulation results were compared with experimental data for model validation.
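The force-equilibrium, lumped-parameter approach described above can be sketched with a minimal single-degree-of-freedom analogue: a piston of mass m with viscous damping c, driven by a sinusoidal pumping pressure on area A, restrained by a stiffness k standing in for fluid compressibility. All numerical values are assumed for illustration, not taken from the paper's model.

```python
import numpy as np

# Illustrative lumped-parameter sketch (not the authors' exact model).
m, c, k, A = 0.05, 2.0, 1.0e4, 1.0e-4   # kg, N s/m, N/m, m^2 (assumed)
p0, f_pump = 2.0e5, 325.0               # pressure amplitude (Pa), pump frequency (Hz)

def deriv(t, y):
    """State derivative: y = [position, velocity] of the output piston."""
    x, v = y
    force = A * p0 * np.sin(2 * np.pi * f_pump * t)
    return np.array([v, (force - c * v - k * x) / m])

# Fixed-step RK4 integration over a few pumping cycles.
dt, y = 1e-5, np.array([0.0, 0.0])
xs = []
for i in range(int(0.02 / dt)):
    t = i * dt
    k1 = deriv(t, y)
    k2 = deriv(t + dt / 2, y + dt / 2 * k1)
    k3 = deriv(t + dt / 2, y + dt / 2 * k2)
    k4 = deriv(t + dt, y + dt * k3)
    y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    xs.append(y[0])

print(max(xs))
```

The real model couples pumping piston, fluid line, and output shaft into several such equations; the point here is only the structure of the time-domain integration at a 325 Hz drive.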
Investigating actinide compounds within a hybrid MCSCF-DFT model
International Nuclear Information System (INIS)
Fromager, E.; Jensen, H.J.A.; Wahlin, P.; Real, F.; Wahlgren, U.
2007-01-01
Complete text of publication follows: Investigations of actinide chemistry with quantum chemical methods remain a complicated task, since they require an accurate and efficient treatment of the environment (crystal or solvent) as well as of relativistic and electron correlation effects. Concerning the latter, the current correlated methods, based on either Density-Functional Theory (DFT) or Wave-Function Theory (WFT), have their advantages and drawbacks. On the one hand, Kohn-Sham DFT (KS-DFT) calculates the dynamic correlation quite accurately and at a fairly low computational cost. However, it does not treat adequately the static correlation, which is significant in some actinide compounds because of the near-degeneracy of the 5f orbitals: a first example is the bent geometry obtained in KS-DFT(B3LYP) for the neptunyl ion NpO₂³⁺, which is found to be linear within a Multi-Configurational Self-Consistent Field (MCSCF) model [1]. A second one is the stable and bent geometry obtained in KS-DFT(B3LYP) for the plutonyl ion PuO₂⁴⁺, which disintegrates at the MCSCF level [1]. On the other hand, WFT can describe the static correlation, using for example an MCSCF model, but then an important part of the dynamic correlation has to be neglected. This can be recovered with perturbation-theory-based methods such as CASPT2 or NEVPT2, but their computational complexity prevents large-scale calculations. It is therefore of great interest to develop a hybrid MCSCF-DFT model which combines the best of both the WFT and DFT approaches. The merging of WFT and DFT can be achieved by splitting the two-electron interaction into long-range and short-range parts [2]. The long-range part is then treated by WFT and the short-range part by DFT. We use the so-called 'erf' long-range interaction erf(μr₁₂)/r₁₂, which is based on the standard error function, and where μ is a free parameter which controls the long/short-range decomposition. The newly proposed recipe for the
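The range separation the abstract describes rests on an exact identity: the Coulomb interaction 1/r₁₂ splits into erf(μr₁₂)/r₁₂ (long range, handled by MCSCF) plus erfc(μr₁₂)/r₁₂ (short range, handled by DFT). A few lines verify the identity numerically; the value of μ is an arbitrary illustration.

```python
import math

mu = 0.4  # illustrative range-separation parameter (bohr^-1)

for r12 in (0.5, 1.0, 3.0, 10.0):
    lr = math.erf(mu * r12) / r12    # long-range part -> WFT (MCSCF)
    sr = math.erfc(mu * r12) / r12   # short-range part -> DFT
    # The split is exact: the two parts always sum back to 1/r12.
    assert abs((lr + sr) - 1.0 / r12) < 1e-12
    print(f"r12={r12:5.1f}  long-range={lr:.6f}  short-range={sr:.6f}")
```

At large r₁₂ the erf factor tends to 1, so the full interaction is carried by the wave-function part; at small r₁₂ it tends to 0 and the DFT part dominates, which is exactly the division of labor the hybrid model wants.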
International Nuclear Information System (INIS)
Ding Xudong; Cai Wenjian; Jia Lei; Wen Changyun; Zhang Guiqing
2009-01-01
In this paper, a simple, yet accurate hybrid modeling technique for condensers is presented. The method starts from fundamental physical principles but captures only a few key operational characteristic parameters to predict the system performance. The advantage of the method lies in the fact that linear or non-linear least-squares methods can be used directly to determine no more than four key operational characteristic parameters in the model, which significantly reduces the computational burden. The developed model is verified against experimental data taken from a pilot system. The testing results confirm that the proposed model can accurately predict the performance of the real-time operating condenser, with a maximum error of less than ±10%. The proposed modeling technique will have wide applications not only in condenser operating optimization, but also in performance assessment and fault detection and diagnosis.
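Identifying at most four characteristic parameters by direct least squares can be sketched as below. The specific regressors (mass flow, temperature difference, air velocity) and all numbers are assumptions for illustration; the paper's actual model form is derived from condenser physics.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "measured" condenser data, linear in four characteristic
# parameters (hypothetical regressors, not the paper's exact model).
n = 200
m_dot = rng.uniform(0.5, 2.0, n)      # refrigerant mass flow (kg/s)
dT    = rng.uniform(5.0, 15.0, n)     # condensing temperature difference (K)
v_air = rng.uniform(1.0, 4.0, n)      # air-side velocity (m/s)
c_true = np.array([12.0, 3.5, 1.8, 4.0])
X = np.column_stack([m_dot, dT, v_air, np.ones(n)])
Q = X @ c_true + rng.normal(0.0, 0.1, n)  # heat rejection with sensor noise

# Direct linear least-squares identification of the four parameters:
# one call, no iterative tuning, which is the computational appeal.
c_est, *_ = np.linalg.lstsq(X, Q, rcond=None)
print(np.round(c_est, 2))
```

With a model that is linear in its parameters, the fit is a single closed-form solve; a nonlinear-in-parameters variant would swap `lstsq` for an iterative nonlinear least-squares routine but keep the same small parameter count.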
Noyes, Ben F.; Mokaberi, Babak; Mandoy, Ram; Pate, Alex; Huijgen, Ralph; McBurney, Mike; Chen, Owen
2017-03-01
Reducing overlay error via an accurate APC feedback system is one of the main challenges in high volume production of the current and future nodes in the semiconductor industry. The overlay feedback system directly affects the number of dies meeting overlay specification and the number of layers requiring dedicated exposure tools through the fabrication flow. Increasing the former number and reducing the latter number is beneficial for the overall efficiency and yield of the fabrication process. An overlay feedback system requires accurate determination of the overlay error, or fingerprint, on exposed wafers in order to determine corrections to be automatically and dynamically applied to the exposure of future wafers. Since current and future nodes require correction per exposure (CPE), the resolution of the overlay fingerprint must be high enough to accommodate CPE in the overlay feedback system, or overlay control module (OCM). Determining a high resolution fingerprint from measured data requires extremely dense overlay sampling that takes a significant amount of measurement time. For static corrections this is acceptable, but in an automated dynamic correction system this method creates extreme bottlenecks for the throughput of said system, as new lots have to wait until the previous lot is measured. One solution is using a less dense overlay sampling scheme and computationally up-sampling the data to a dense fingerprint. That method uses a global fingerprint model over the entire wafer; measured localized overlay errors are therefore not always represented in its up-sampled output. This paper discusses a hybrid system, shown in Fig. 1, that combines a computationally up-sampled fingerprint with the measured data to more accurately capture the actual fingerprint, including local overlay errors. Such a hybrid system is shown to result in reduced modelled residuals while determining the fingerprint, and better on-product overlay performance.
Improved Hybrid Modeling of Spent Fuel Storage Facilities
Energy Technology Data Exchange (ETDEWEB)
Bibber, Karl van [Univ. of California, Berkeley, CA (United States)
2018-03-21
This work developed a new computational method for improving the ability to calculate the neutron flux in deep-penetration radiation shielding problems that contain areas with strong streaming. The “gold standard” method for radiation transport is Monte Carlo (MC), as it samples the physics exactly and requires few approximations. Historically, however, MC was not useful for shielding problems because of the computational challenge of following particles through dense shields. Instead, deterministic methods, which are superior in terms of computational effort for these problem types but are not as accurate, were used. Hybrid methods, which use deterministic solutions to improve MC calculations through a process called variance reduction, can make it tractable, from a computational time and resource use perspective, to use MC for deep-penetration shielding. Perhaps the most widespread and accessible of these methods are the Consistent Adjoint Driven Importance Sampling (CADIS) and Forward-Weighted CADIS (FW-CADIS) methods. For problems containing strong anisotropies, such as power plants with pipes through walls, spent fuel cask arrays, active interrogation, and locations with small air gaps or plates embedded in water or concrete, hybrid methods are still insufficiently accurate. In this work, a new method for generating variance reduction parameters for strongly anisotropic, deep-penetration radiation shielding studies was developed. This method generates an alternate form of the adjoint scalar flux quantity, Φ_Ω, which is used by both CADIS and FW-CADIS to generate variance reduction parameters for local and global response functions, respectively. The new method, called CADIS-Ω, was implemented in the Denovo/ADVANTG software. Results indicate that the flux generated by CADIS-Ω incorporates localized angular anisotropies in the flux more effectively than standard methods. CADIS-Ω outperformed CADIS in several test problems. This initial work
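The CADIS mechanics the abstract builds on can be shown in a few lines: given a forward source q and an adjoint (importance) flux, CADIS builds a biased source proportional to importance times source, and weight windows that exactly undo the bias. The four-cell numbers below are illustrative, not the output of any transport solve.

```python
import numpy as np

# One-group, four-cell sketch of the CADIS construction.
q = np.array([0.5, 0.3, 0.15, 0.05])        # forward source (normalized)
phi_adj = np.array([0.01, 0.1, 1.0, 10.0])  # adjoint flux = importance toward detector

R = np.sum(phi_adj * q)          # deterministic estimate of the detector response
q_biased = phi_adj * q / R       # CADIS biased source (sums to 1 by construction)
w = R / phi_adj                  # target statistical weights (weight-window centers)

# Fair-game check: birth weight times biased source recovers the true source,
# so the biased MC game is unbiased for the detector response.
assert np.allclose(w * q_biased, q)
print(q_biased, w)
```

More particles are born in important cells but carry proportionally smaller weights; CADIS-Ω replaces the scalar adjoint flux here with an angle-informed quantity Φ_Ω so strongly anisotropic streaming is weighted correctly.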
A hybrid ARIMA and neural network model applied to forecast catch volumes of Selar crumenophthalmus
Aquino, Ronald L.; Alcantara, Nialle Loui Mar T.; Addawe, Rizavel C.
2017-11-01
The Selar crumenophthalmus, with the English name big-eyed scad fish, locally known as matang-baka, is one of the fishes commonly caught along the waters of La Union, Philippines. The study deals with the forecasting of catch volumes of big-eyed scad fish for commercial consumption. The data used are quarterly catch volumes of big-eyed scad fish from 2002 to the first quarter of 2017. These actual data are available from the OpenSTAT database published by the Philippine Statistics Authority (PSA), whose task is to collect, compile, analyze, and publish information concerning different aspects of the Philippine setting. Autoregressive Integrated Moving Average (ARIMA) models, an Artificial Neural Network (ANN) model, and a hybrid model consisting of ARIMA and ANN were developed to forecast catch volumes of big-eyed scad fish. Statistical errors such as Mean Absolute Errors (MAE) and Root Mean Square Errors (RMSE) were computed and compared to choose the most suitable model for forecasting the catch volume for the next few quarters. A comparison of the results of each model and corresponding statistical errors reveals that the hybrid model, ARIMA-ANN (2,1,2)(6:3:1), is the most suitable model to forecast the catch volumes of the big-eyed scad fish for the next few quarters.
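The ARIMA+ANN hybrid idea — a linear model captures the linear structure and a small neural network is fit to its residuals, with the two forecasts summed — can be sketched with a plain AR(2) model and a tiny numpy-trained network. The synthetic seasonal series, network size, and training settings are all assumptions; the paper's models are fit to the real PSA series.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic quarterly series standing in for catch volumes.
t = np.arange(80)
y = 10 + 2 * np.sin(2 * np.pi * t / 4) + 0.3 * rng.normal(size=t.size)

# --- linear part: AR(2) fit by least squares ---
X = np.column_stack([y[1:-1], y[:-2], np.ones(t.size - 2)])
target = y[2:]
coef, *_ = np.linalg.lstsq(X, target, rcond=None)
lin_pred = X @ coef
resid = target - lin_pred

# --- nonlinear part: one-hidden-layer net on lagged residuals ---
Xr = np.column_stack([resid[1:-1], resid[:-2]])
yr = resid[2:]
W1 = 0.1 * rng.normal(size=(2, 6)); b1 = np.zeros(6)
W2 = 0.1 * rng.normal(size=6); b2 = 0.0
lr = 0.01
for _ in range(2000):
    h = np.tanh(Xr @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - yr
    gW2 = h.T @ err / len(yr); gb2 = err.mean()
    gh = np.outer(err, W2) * (1 - h ** 2)
    gW1 = Xr.T @ gh / len(yr); gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

# Hybrid forecast = linear forecast + residual-network correction.
hybrid = lin_pred[2:] + (np.tanh(Xr @ W1 + b1) @ W2 + b2)
rmse_lin = np.sqrt(np.mean((target[2:] - lin_pred[2:]) ** 2))
rmse_hyb = np.sqrt(np.mean((target[2:] - hybrid) ** 2))
print(round(rmse_lin, 3), round(rmse_hyb, 3))
```

In the study the linear part is a full ARIMA(2,1,2) and the network is a 6:3:1 feed-forward ANN; the additive combination of the two forecasts is the same.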
Hybrid modeling of plasma and applications to fusion and space physics
International Nuclear Information System (INIS)
Kazeminejad, F.
1989-01-01
Obtaining reasonable solutions to the nonlinear equations is crucial to the understanding of the behavior of plasmas. With the advent of high-speed computers, computer modeling of plasmas has moved into the front row of the tools used in research on their nonlinear dynamics. There are, roughly speaking, two types of plasma models: particle models and fluid models. Particle models try to emulate nature by following the motion of a large number of charged particles in their self-consistent electromagnetic fields. Fluid models, on the other hand, use macroscopic fluid equations to model the plasma; MHD models are typical of this type. Particle models in general require larger computer memory due to the massive amount of data associated with the particles' kinematical variables, and are generally limited to studying small regions of plasma for relatively short time intervals. Fluid models are better suited to handle large scales and long times, i.e., quite often the complete plasma involved in an experiment. The drawback of fluid models, however, is that they miss the physical phenomena taking place at the microscale, and these phenomena can influence the properties of the fluid. Another approach is to start with fluid models and incorporate more physics. Such models are referred to as hybrid models. In this thesis, two such models are discussed. They are then applied to two problems; the first is a simulation of the artificial comet generated by the AMPTE experiment; the second is the production of enhanced noise in fusion plasmas by injected energetic ions or by fusion reaction products. In both cases the models demonstrate qualitative agreement with the experimental observations.
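The particle-model half of this dichotomy reduces, per particle, to advancing a charged particle in its fields. A standard ingredient is the Boris push, shown here for one particle in a uniform magnetic field in normalized units (q/m = 1, illustrative field and step); in a pure magnetic field kinetic energy must be conserved, which the Boris scheme achieves to machine precision.

```python
import numpy as np

# Boris push for a single particle in a uniform B field (no E field).
B = np.array([0.0, 0.0, 1.0])   # normalized magnetic field
dt = 0.05                       # time step (fraction of a gyro-period)
v = np.array([1.0, 0.0, 0.2])   # initial velocity
x = np.zeros(3)                 # initial position

e0 = 0.5 * np.dot(v, v)         # initial kinetic energy
for _ in range(5000):
    t_vec = 0.5 * dt * B                          # half-step rotation vector
    s = 2.0 * t_vec / (1.0 + np.dot(t_vec, t_vec))
    v_prime = v + np.cross(v, t_vec)
    v = v + np.cross(v_prime, s)                  # exact-magnitude rotation
    x = x + v * dt

e1 = 0.5 * np.dot(v, v)
print(abs(e1 - e0))
```

A hybrid model keeps this kinetic treatment for the species where microscale physics matters (e.g. ions) and replaces it with fluid equations elsewhere, which is what keeps the memory and time-step costs manageable.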
A hybrid computational approach to estimate solar global radiation: An empirical evidence from Iran
International Nuclear Information System (INIS)
Mostafavi, Elham Sadat; Ramiyani, Sara Saeidi; Sarvar, Rahim; Moud, Hashem Izadi; Mousavi, Seyyed Mohammad
2013-01-01
This paper presents an innovative hybrid approach for the estimation of the solar global radiation. New prediction equations were developed for the global radiation using an integrated search method of genetic programming (GP) and simulated annealing (SA), called GP/SA. The solar radiation was formulated in terms of several climatological and meteorological parameters. Comprehensive databases containing monthly data collected for 6 years in two cities of Iran were used to develop GP/SA-based models. Separate models were established for each city. The generalization of the models was verified using a separate testing database. A sensitivity analysis was conducted to investigate the contribution of the parameters affecting the solar radiation. The derived models make accurate predictions of the solar global radiation and notably outperform the existing models. -- Highlights: ► A hybrid approach is presented for the estimation of the solar global radiation. ► The proposed method integrates the capabilities of GP and SA. ► Several climatological and meteorological parameters are included in the analysis. ► The GP/SA models make accurate predictions of the solar global radiation.
Computational Modeling of Culture's Consequences
Hofstede, G.J.; Jonker, C.M.; Verwaart, T.
2010-01-01
This paper presents an approach to formalize the influence of culture on the decision functions of agents in social simulations. The key components are (a) a definition of the domain of study in the form of a decision model, (b) knowledge acquisition based on a dimensional theory of culture,
Computational modeling of concrete flow
DEFF Research Database (Denmark)
Roussel, Nicolas; Geiker, Mette Rica; Dufour, Frederic
2007-01-01
particle flow, and numerical techniques allowing the modeling of particles suspended in a fluid. The general concept behind each family of techniques is described. Pros and cons for each technique are given along with examples and references to applications to fresh cementitious materials....
Energy Technology Data Exchange (ETDEWEB)
Zhang, Zhen [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China); Xia, Changliang [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China); Tianjin Engineering Center of Electric Machine System Design and Control, Tianjin 300387 (China); Yan, Yan, E-mail: yanyan@tju.edu.cn [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China); Geng, Qiang [Tianjin Engineering Center of Electric Machine System Design and Control, Tianjin 300387 (China); Shi, Tingna [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China)
2017-08-01
Highlights: • A hybrid analytical model is developed for field calculation of multilayer IPM machines. • The rotor magnetic field is calculated by the magnetic equivalent circuit method. • The field in the stator and air-gap is calculated by the subdomain technique. • The magnetic scalar potential on the rotor surface is modeled as a trapezoidal distribution. - Abstract: Due to the complicated rotor structure and nonlinear saturation of rotor bridges, it is difficult to build a fast and accurate analytical field calculation model for multilayer interior permanent magnet (IPM) machines. In this paper, a hybrid analytical model suitable for the open-circuit field calculation of multilayer IPM machines is proposed by coupling the magnetic equivalent circuit (MEC) method and the subdomain technique. In the proposed analytical model, the rotor magnetic field is calculated by the MEC method based on Kirchhoff's law, while the field in the stator slot, slot opening and air-gap is calculated by the subdomain technique based on Maxwell's equations. To solve the whole field distribution of the multilayer IPM machines, the coupled boundary conditions on the rotor surface are deduced for the coupling of the rotor MEC and the analytical field distribution of the stator slot, slot opening and air-gap. The hybrid analytical model can be used to calculate the open-circuit air-gap field distribution, back electromotive force (EMF) and cogging torque of multilayer IPM machines. Compared with finite element analysis (FEA), it has the advantages of faster modeling, lower computational resource usage and shorter computation time, while achieving comparable accuracy. The analytical model is helpful and applicable for the open-circuit field calculation of multilayer IPM machines with any size and pole/slot number combination.
Computer Modeling of Direct Metal Laser Sintering
Cross, Matthew
2014-01-01
A computational approach to modeling the direct metal laser sintering (DMLS) additive manufacturing process is presented. The primary application of the model is determining the temperature history of parts fabricated using DMLS, to evaluate residual stresses found in finished pieces and to assess manufacturing process strategies to reduce part slumping. The model utilizes MSC SINDA as a heat transfer solver with embedded FORTRAN computer code to direct laser motion, apply laser heating as a boundary condition, and simulate the addition of metal powder layers during part fabrication. Model results are compared to available data collected during in situ DMLS part manufacture.
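The thermal core of such a model — conduction with a moving laser heat source — can be illustrated with a toy 1D explicit finite-difference scheme. The material properties, laser parameters, and 1D geometry are placeholders chosen for clarity, not calibrated DMLS values, and the real model uses a dedicated solver (MSC SINDA) in 3D.

```python
import numpy as np

# Toy 1D analogue: explicit finite-difference heat conduction with a
# Gaussian laser spot traversing the domain as a volumetric source term.
nx, dx, dt = 200, 1e-4, 1e-4          # grid points, spacing (m), time step (s)
alpha = 5e-6                          # thermal diffusivity (m^2/s), assumed
r = alpha * dt / dx ** 2              # explicit-scheme stability number
assert r <= 0.5                       # von Neumann stability for explicit FD

T = np.full(nx, 300.0)                # initial temperature (K)
speed, width, power = 0.1, 5e-4, 2e6  # laser speed (m/s), spot width (m), source (K/s)
xgrid = np.arange(nx) * dx

for step in range(400):
    laser_x = speed * step * dt       # current laser position
    source = power * np.exp(-((xgrid - laser_x) / width) ** 2)
    T[1:-1] += r * (T[2:] - 2 * T[1:-1] + T[:-2]) + dt * source[1:-1]

print(T.max())
```

Recording T at every step as the source sweeps by gives exactly the kind of temperature history the abstract says is needed for residual-stress evaluation.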
Visual and Computational Modelling of Minority Games
Directory of Open Access Journals (Sweden)
Robertas Damaševičius
2017-02-01
The paper analyses the Minority Game and focuses on the analysis and computational modelling of several variants (variable payoff, coalition-based, and ternary voting) of the Minority Game using the UAREI (User-Action-Rule-Entities-Interface) model. UAREI is a model for formal specification of software gamification, and the UAREI visual modelling language is a language used for graphical representation of game mechanics. The UAREI model also provides an embedded executable modelling framework to evaluate how the rules of the game will work for the players in practice. We demonstrate the flexibility of the UAREI model for modelling different variants of Minority Game rules for game design.
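For readers unfamiliar with the underlying game, here is the classic binary Minority Game in a few lines: N agents each hold s random strategy tables over the last m outcomes, play the action of their best-scoring strategy, and the minority side wins. This is the standard Challet-Zhang setup, not the UAREI-specific variants the paper studies; population sizes and parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

N, m, s, rounds = 101, 3, 2, 500   # odd N guarantees a strict minority
n_hist = 2 ** m                    # number of distinguishable histories
strategies = rng.integers(0, 2, size=(N, s, n_hist))  # 0/1 action tables
scores = np.zeros((N, s))          # virtual points per strategy
history = 0                        # last m outcomes packed into an int
attendance = []

for _ in range(rounds):
    best = scores.argmax(axis=1)                       # each agent's best strategy
    actions = strategies[np.arange(N), best, history]  # chosen actions
    a = actions.sum()                                  # how many chose side 1
    minority = 1 if a < N / 2 else 0                   # winning (minority) side
    # reward every strategy that would have picked the minority side
    scores += (strategies[:, :, history] == minority)
    history = ((history << 1) | minority) % n_hist
    attendance.append(a)

att = np.array(attendance)
print(att.mean(), att.std())
```

The variable-payoff, coalition, and ternary-voting variants in the paper modify the reward line, the agent grouping, and the action alphabet respectively, which is why a rule-level modelling language like UAREI is a natural fit.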
Model to Implement Virtual Computing Labs via Cloud Computing Services
Directory of Open Access Journals (Sweden)
Washington Luna Encalada
2017-07-01
In recent years, we have seen a significant number of new technological ideas appearing in literature discussing the future of education. For example, e-learning, cloud computing, social networking, virtual laboratories, virtual realities, virtual worlds, massive open online courses (MOOCs), and bring your own device (BYOD) are all new concepts of immersive and global education that have emerged in educational literature. One of the greatest challenges presented to e-learning solutions is the reproduction of the benefits of an educational institution's physical laboratory. For a university without a computing lab, to obtain hands-on IT training with software, operating systems, networks, servers, storage, and cloud computing similar to that which could be received on a university campus computing lab, it is necessary to use a combination of technological tools. Such teaching tools must promote the transmission of knowledge, encourage interaction and collaboration, and ensure students obtain valuable hands-on experience. That, in turn, allows the universities to focus more on teaching and research activities than on the implementation and configuration of complex physical systems. In this article, we present a model for implementing ecosystems which allow universities to teach practical Information Technology (IT) skills. The model utilizes what is called a “social cloud”, which utilizes all cloud computing services, such as Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). Additionally, it integrates the cloud learning aspects of a MOOC and several aspects of social networking and support. Social clouds have striking benefits such as centrality, ease of use, scalability, and ubiquity, providing a superior learning environment when compared to that of a simple physical lab. The proposed model allows students to foster all the educational pillars such as learning to know, learning to be, learning
Numerical modeling of lower hybrid heating and current drive
International Nuclear Information System (INIS)
Valeo, E.J.; Eder, D.C.
1986-03-01
The generation of currents in toroidal plasma by application of waves in the lower hybrid frequency range involves the interplay of several physical phenomena, which include: wave propagation in toroidal geometry, absorption via wave-particle resonances, the quasilinear generation of strongly nonequilibrium electron and ion distribution functions, and the self-consistent evolution of the current density in such a nonequilibrium plasma. We describe a code, LHMOD, which we have developed to treat these aspects of current drive and heating in tokamaks. We present results obtained by applying the code to a computation of current ramp-up and to an investigation of the possible importance of minority hydrogen absorption in a deuterium plasma as the 'density limit' to current drive is approached.
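As context for the frequency range involved, the lower-hybrid resonance frequency for a magnetized plasma follows from 1/ω_LH² = 1/(ω_ci ω_ce) + 1/ω_pi². The snippet below evaluates it for illustrative tokamak-like deuterium parameters; the field and density values are assumptions, not taken from the paper.

```python
import math

# Physical constants (SI)
e = 1.602e-19          # elementary charge (C)
me = 9.109e-31         # electron mass (kg)
mi = 2 * 1.673e-27     # deuterium ion mass (kg)
eps0 = 8.854e-12       # vacuum permittivity (F/m)

# Illustrative tokamak-like parameters (assumed)
B = 3.0                # magnetic field (T)
n = 5e19               # plasma density (m^-3)

w_ce = e * B / me                                 # electron cyclotron frequency
w_ci = e * B / mi                                 # ion cyclotron frequency
w_pi = math.sqrt(n * e ** 2 / (eps0 * mi))        # ion plasma frequency

# Lower-hybrid resonance: 1/w_LH^2 = 1/(w_ci*w_ce) + 1/w_pi^2
w_lh = 1.0 / math.sqrt(1.0 / (w_ci * w_ce) + 1.0 / w_pi ** 2)
print(f"f_LH = {w_lh / (2 * math.pi) / 1e9:.2f} GHz")
```

The result lands between the ion and electron cyclotron frequencies, in the sub-GHz to few-GHz band where lower hybrid current drive systems operate.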
A viable D-term hybrid inflation model
Kadota, Kenji; Kobayashi, Tatsuo; Sumita, Keigo
2017-11-01
We propose a new model of the D-term hybrid inflation in the framework of supergravity. Although our model introduces, analogously to the conventional D-term inflation, the inflaton and a pair of scalar fields charged under a U(1) gauge symmetry, we study the logarithmic and exponential dependence on the inflaton field, respectively, for the Kähler and superpotential. This results in a characteristic one-loop scalar potential consisting of linear and exponential terms, which realizes the small-field inflation dominated by the Fayet-Iliopoulos term. With the reasonable values for the coupling coefficients and, in particular, with the U(1) gauge coupling constant comparable to that of the Standard Model, our D-term inflation model can solve the notorious problems in the conventional D-term inflation, namely, the CMB constraints on the spectral index and the generation of cosmic strings.
Ionocovalency and Applications 1. Ionocovalency Model and Orbital Hybrid Scales
Directory of Open Access Journals (Sweden)
Yonghe Zhang
2010-11-01
Ionocovalency (IC), a quantitative dual nature of the atom, is defined and correlated with quantum-mechanical potential to describe quantitatively the dual properties of the bond. An orbital hybrid IC model scale, IC, and an IC electronegativity scale, XIC, are proposed, wherein the ionicity and the covalent radius are determined by spectroscopy. Being composed of the ionic function I and the covalent function C, the model describes quantitatively the dual properties of bond strengths, charge density and ionic potential. Based on the atomic electron configuration and the various quantum-mechanically built-up dual parameters, the model forms a Dual Method of multiple-functional prediction, which has much more versatile and exceptional applications than traditional electronegativity scales and molecular properties. Hydrogen has unconventional values of IC and XIC, lower than those of boron. The IC model agrees fairly well with the data on bond properties and satisfactorily explains chemical observations of elements throughout the Periodic Table.
A hybrid spatiotemporal drought forecasting model for operational use
Vasiliades, L.; Loukas, A.
2010-09-01
Drought forecasting plays an important role in the planning and management of natural resources and water resource systems in a river basin. Early and timely forecasting of a drought event can help to take proactive measures and set out drought mitigation strategies to alleviate the impacts of drought. Spatiotemporal data mining is the extraction of unknown and implicit knowledge, structures, spatiotemporal relationships, or patterns not explicitly stored in spatiotemporal databases. As one of the data mining techniques, forecasting is widely used to predict the unknown future based upon the patterns hidden in the current and past data. This study develops a hybrid spatiotemporal scheme for integrated spatial and temporal forecasting. Temporal forecasting is achieved using feed-forward neural networks, and the temporal forecasts are extended to the spatial dimension using a spatial recurrent neural network model. The methodology is demonstrated for an operational meteorological drought index, the Standardized Precipitation Index (SPI), calculated at multiple timescales. 48 precipitation stations and 18 independent precipitation stations, located in the Pinios river basin in the Thessaly region, Greece, were used for the development and spatiotemporal validation of the hybrid spatiotemporal scheme. Several quantitative temporal and spatial statistical indices were considered for the performance evaluation of the models. Furthermore, qualitative statistical criteria based on contingency tables between observed and forecasted drought episodes were calculated. The results show that the lead time of forecasting for operational use depends on the SPI timescale. The hybrid spatiotemporal drought forecasting model could be operationally used for forecasting up to three months ahead for short SPI timescales (e.g. 3-6 months) and up to six months ahead for large SPI timescales (e.g. 24 months). The above findings could be useful in developing a drought preparedness plan in the region.
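The SPI at a given timescale aggregates precipitation over a moving window and standardizes it per calendar month. The sketch below computes a simplified SPI-3 on synthetic monthly data; the operational SPI fits a gamma distribution and maps through the standard normal, whereas plain z-scores are used here to keep the illustration dependency-free.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic monthly precipitation record (mm/month), gamma-distributed
# as precipitation typically is; parameters are illustrative.
years = 40
precip = rng.gamma(shape=2.0, scale=30.0, size=years * 12)

# Timescale 3: running 3-month precipitation totals.
agg = np.convolve(precip, np.ones(3), mode="valid")

# Standardize each calendar month separately, so a dry winter is
# compared against other winters, not against summers.
spi = np.empty_like(agg)
months = np.arange(agg.size) % 12
for mo in range(12):
    sel = months == mo
    spi[sel] = (agg[sel] - agg[sel].mean()) / agg[sel].std()

print(round(spi.mean(), 3), round(spi.std(), 3))
```

By construction the index has zero mean and unit standard deviation; values below about -1 then flag moderate drought and below -2 extreme drought, the event categories used in the contingency-table evaluation.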
Computational modeling of epiphany learning.
Chen, Wei James; Krajbich, Ian
2017-05-02
Models of reinforcement learning (RL) are prevalent in the decision-making literature, but not all behavior seems to conform to the gradual convergence that is a central feature of RL. In some cases learning seems to happen all at once. Limited prior research on these "epiphanies" has shown evidence of sudden changes in behavior, but it remains unclear how such epiphanies occur. We propose a sequential-sampling model of epiphany learning (EL) and test it using an eye-tracking experiment. In the experiment, subjects repeatedly play a strategic game that has an optimal strategy. Subjects can learn over time from feedback but are also allowed to commit to a strategy at any time, eliminating all other options and opportunities to learn. We find that the EL model is consistent with the choices, eye movements, and pupillary responses of subjects who commit to the optimal strategy (correct epiphany) but not always of those who commit to a suboptimal strategy or who do not commit at all. Our findings suggest that EL is driven by a latent evidence accumulation process that can be revealed with eye-tracking data.
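The proposed sequential-sampling account can be caricatured as a drift-diffusion process: noisy evidence about the optimal strategy accumulates across trials, and the agent commits once a threshold is crossed, producing the all-at-once behavioral change. Drift, noise, threshold, and trial counts below are illustrative assumptions, not the paper's fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(5)

# Evidence-accumulation sketch of epiphany learning.
drift, noise, threshold, max_trials = 0.15, 1.0, 8.0, 400

def commit_trial():
    """Simulate one subject; return the trial of commitment, or None."""
    evidence = 0.0
    for trial in range(1, max_trials + 1):
        evidence += drift + noise * rng.normal()   # feedback adds noisy evidence
        if abs(evidence) >= threshold:             # epiphany: commit to a strategy
            return trial
    return None                                    # never commits

times = [commit_trial() for _ in range(200)]
committed = [t for t in times if t is not None]
print(len(committed), np.mean(committed))
```

Behavior before commitment looks unchanged (the accumulator is latent), then switches abruptly at the crossing, which is what distinguishes this account from the gradual convergence of standard RL; in the paper the latent accumulation is probed via eye movements and pupillary responses.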
On The Modelling Of Hybrid Aerostatic - Gas Journal Bearings
DEFF Research Database (Denmark)
Morosi, Stefano; Santos, Ilmar
2011-01-01
Gas journal bearings have been increasingly adopted in modern turbo-machinery applications, as they meet the demands of operation at higher rotational speeds, in clean environments and with great efficiency. Due to the fact that gaseous lubricants, typically air, have much lower viscosity than more... modeling for hybrid lubrication of a compressible fluid film journal bearing. Additional forces are generated by injecting pressurized air into the bearing gap through orifices located on the bearing walls. A modified form of the compressible Reynolds equation for active lubrication is derived. By solving...
Active diagnosis of hybrid systems - A model predictive approach
DEFF Research Database (Denmark)
Tabatabaeipour, Seyed Mojtaba; Ravn, Anders P.; Izadi-Zamanabadi, Roozbeh
2009-01-01
A method for active diagnosis of hybrid systems is proposed. The main idea is to predict the future output of both the normal and faulty models of the system; then at each time step an optimization problem is solved with the objective of maximizing the difference between the predicted normal and faulty... can be used as a test signal for a sanity check at commissioning or for the detection of faults hidden by regulatory actions of the controller. The method is tested on the two-tank benchmark example. ©2009 IEEE.
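The core idea, exciting the system so that normal and faulty predictions separate, can be illustrated with a toy scalar model. This sketch substitutes a brute-force search over candidate input sequences for the paper's optimization problem; the model structure and parameters are assumptions:

```python
def predict(a, b, u_seq, x0=0.0):
    """Simulate x_{k+1} = a*x_k + b*u_k and return the output sequence (y = x)."""
    x, ys = x0, []
    for u in u_seq:
        x = a * x + b * u
        ys.append(x)
    return ys

def best_test_input(candidates, normal, faulty):
    """Pick the input sequence maximizing the normal/faulty output gap.

    `normal` and `faulty` are (a, b) pairs of a scalar linear model; a toy
    stand-in for the MPC-style optimization described in the abstract.
    """
    def gap(u_seq):
        yn = predict(*normal, u_seq)
        yf = predict(*faulty, u_seq)
        return sum((p - q) ** 2 for p, q in zip(yn, yf))
    return max(candidates, key=gap)
```

With an actuator fault modeled as b = 0, any nonzero test input separates the two predictions, so the search selects it over the zero input that regulation alone would apply.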
Software development infrastructure for the HYBRID modeling and simulation project
International Nuclear Information System (INIS)
Epiney, Aaron S.; Kinoshita, Robert A.; Kim, Jong Suk; Rabiti, Cristian; Greenwood, M. Scott
2016-01-01
One of the goals of the HYBRID modeling and simulation project is to assess the economic viability of hybrid systems in a market that contains renewable energy sources like wind. The idea is that it is possible for the nuclear plant to sell non-electric energy cushions, which absorb (at least partially) the volatility introduced by the renewable energy sources. This system is currently modeled in the Modelica programming language. To assess the economics of the system, an optimization procedure tries to find the minimal cost of electricity production. The RAVEN code is used as a driver for the whole problem. It is assumed that at this stage, the HYBRID modeling and simulation framework can be classified as non-safety “research and development” software. The associated quality level is Quality Level 3 software. This imposes low requirements on quality control, testing and documentation. The quality level could change as the application development continues. Despite the low quality requirement level, a workflow for the HYBRID developers has been defined that includes a coding standard and some documentation and testing requirements. The repository performs automated unit testing of contributed models. The automated testing is achieved via an open-source Python script called BuildingsPy from Lawrence Berkeley National Lab. BuildingsPy runs Modelica simulation tests using Dymola in an automated manner and generates and runs unit tests from Modelica scripts written by developers. In order to assure effective communication between the different national laboratories, a biweekly videoconference has been set up, where developers can report their progress and issues. In addition, periodic face-to-face meetings are organized to discuss high-level strategy decisions with management. A second means of communication is the developer email list, to which everybody can send emails that will be received by the collective of the developers and managers.
The Hybrid Airline Model. Generating Quality for Passengers
Directory of Open Access Journals (Sweden)
Bogdan AVRAM
2017-12-01
Full Text Available This research aims to investigate the different strategies adopted by the airline companies in adapting to the ongoing changes while developing products and services for passengers in order to increase their yield, load factor and passenger satisfaction. Finding a balance between costs and services quality in the airline industry is a crucial task for every airline wanting to gain a competitive advantage on the market. Also, the rise of the hybrid business operating model has brought up many challenges for airlines as the line between legacy carriers and low-cost carriers is getting thinner in terms of costs and innovative ideas to create a superior product for the passengers.
Software development infrastructure for the HYBRID modeling and simulation project
Energy Technology Data Exchange (ETDEWEB)
Epiney, Aaron S. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Kinoshita, Robert A. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Kim, Jong Suk [Idaho National Lab. (INL), Idaho Falls, ID (United States); Rabiti, Cristian [Idaho National Lab. (INL), Idaho Falls, ID (United States); Greenwood, M. Scott [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2016-09-01
One of the goals of the HYBRID modeling and simulation project is to assess the economic viability of hybrid systems in a market that contains renewable energy sources like wind. The idea is that it is possible for the nuclear plant to sell non-electric energy cushions, which absorb (at least partially) the volatility introduced by the renewable energy sources. This system is currently modeled in the Modelica programming language. To assess the economics of the system, an optimization procedure tries to find the minimal cost of electricity production. The RAVEN code is used as a driver for the whole problem. It is assumed that at this stage, the HYBRID modeling and simulation framework can be classified as non-safety “research and development” software. The associated quality level is Quality Level 3 software. This imposes low requirements on quality control, testing and documentation. The quality level could change as the application development continues. Despite the low quality requirement level, a workflow for the HYBRID developers has been defined that includes a coding standard and some documentation and testing requirements. The repository performs automated unit testing of contributed models. The automated testing is achieved via an open-source Python script called BuildingsPy from Lawrence Berkeley National Lab. BuildingsPy runs Modelica simulation tests using Dymola in an automated manner and generates and runs unit tests from Modelica scripts written by developers. In order to assure effective communication between the different national laboratories, a biweekly videoconference has been set up, where developers can report their progress and issues. In addition, periodic face-to-face meetings are organized to discuss high-level strategy decisions with management. A second means of communication is the developer email list, to which everybody can send emails that will be received by the collective of the developers and managers.
Modeling and Implementation of a 1 kW, Air Cooled HTPEM Fuel Cell in a Hybrid Electrical Vehicle
DEFF Research Database (Denmark)
Andreasen, Søren Juhl; Ashworth, Leanne; Remón, Ian Natanael
2008-01-01
This work is a preliminary study of using the PBI-based HTPEM fuel cell technology in automotive applications. This issue was investigated through computational modeling and an experimental investigation. A hybrid fuel cell system, consisting of a 1 kW stack and lead acid batteries, was implemented in a small electrical vehicle. A dynamic model was developed using Matlab-Simulink to describe the system characteristics, select operating conditions and size system components. Preheating of the fuel cell stack with electrical resistors was investigated and found to be an unrealistic approach...
1979-01-01
A description and listing is presented of two computer programs: Hybrid Vehicle Design Program (HYVELD) and Hybrid Vehicle Simulation Program (HYVEC). Both of the programs are modifications and extensions of similar programs developed as part of the Electric and Hybrid Vehicle System Research and Development Project.
A Lookahead Behavior Model for Multi-Agent Hybrid Simulation
Directory of Open Access Journals (Sweden)
Mei Yang
2017-10-01
Full Text Available In the military field, multi-agent simulation (MAS plays an important role in studying wars statistically. For a military simulation system, which involves large-scale entities and generates a very large number of interactions during the runtime, the issue of how to improve the running efficiency is of great concern for researchers. Current solutions mainly use hybrid simulation to gain fewer updates and synchronizations, where some important continuous models are maintained implicitly to keep the system dynamics, and partial resynchronization (PR is chosen as the preferable state update mechanism. However, problems, such as resynchronization interval selection and cyclic dependency, remain unsolved in PR, which easily lead to low update efficiency and infinite looping of the state update process. To address these problems, this paper proposes a lookahead behavior model (LBM to implement a PR-based hybrid simulation. In LBM, a minimal safe time window is used to predict the interactions between implicit models, upon which the resynchronization interval can be efficiently determined. Moreover, the LBM gives an estimated state value in the lookahead process so as to break the state-dependent cycle. The simulation results show that, compared with traditional mechanisms, LBM requires fewer updates and synchronizations.
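The minimal-safe-time-window idea can be sketched as taking the earliest predicted interaction time among the implicitly maintained continuous models, and using it as the next resynchronization point. A simplified illustration of the LBM concept, not the paper's exact algorithm:

```python
def lookahead_interval(models, now, horizon):
    """Minimal safe time window: earliest predicted interaction among implicit models.

    Each model is a function t -> earliest time >= t at which it may interact
    (or None if no interaction is predicted within the horizon). Between `now`
    and the returned time, no implicit model needs resynchronization.
    """
    times = [m(now) for m in models]
    times = [t for t in times if t is not None and t <= now + horizon]
    return min(times, default=now + horizon)
```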
Causality in Psychiatry: A Hybrid Symptom Network Construct Model
Directory of Open Access Journals (Sweden)
Gerald Young
2015-11-01
Full Text Available Causality or etiology in psychiatry is marked by standard biomedical, reductionistic models (symptoms reflect the construct involved that inform approaches to nosology, or classification, such as in the DSM-5 (Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition; American Psychiatric Association, 2013. However, network approaches to symptom interaction (i.e., symptoms are formative of the construct; e.g., McNally, Robinaugh, Wu, Wang, Deserno, & Borsboom, 2014, for PTSD (posttraumatic stress disorder are being developed that speak to bottom-up processes in mental disorder, in contrast to the typical top-down psychological construct approach. The present article presents a hybrid top-down, bottom-up model of the relationship between symptoms and mental disorder, viewing symptom expression and their causal complex as a reciprocally dynamic system with multiple levels, from lower-order symptoms in interaction to higher-order constructs affecting them. The hybrid model hinges on good understanding of systems theory in which it is embedded, so that the article reviews in depth nonlinear dynamical systems theory (NLDST. The article applies the concept of emergent circular causality (Young, 2011 to symptom development, as well. Conclusions consider that symptoms vary over several dimensions, including: subjectivity; objectivity; conscious motivation effort; and unconscious influences, and the degree to which individual (e.g., meaning and universal (e.g., causal processes are involved. The opposition between science and skepticism is a complex one that the article addresses in final comments.
Zhang, Zhen; Xia, Changliang; Yan, Yan; Geng, Qiang; Shi, Tingna
2017-08-01
Due to the complicated rotor structure and nonlinear saturation of the rotor bridges, it is difficult to build a fast and accurate analytical field calculation model for multilayer interior permanent magnet (IPM) machines. In this paper, a hybrid analytical model suitable for the open-circuit field calculation of multilayer IPM machines is proposed by coupling the magnetic equivalent circuit (MEC) method and the subdomain technique. In the proposed analytical model, the rotor magnetic field is calculated by the MEC method based on Kirchhoff's laws, while the field in the stator slot, slot opening and air gap is calculated by the subdomain technique based on Maxwell's equations. To solve the whole field distribution of the multilayer IPM machines, coupled boundary conditions on the rotor surface are deduced for the coupling of the rotor MEC and the analytical field distribution of the stator slot, slot opening and air gap. The hybrid analytical model can be used to calculate the open-circuit air-gap field distribution, back electromotive force (EMF) and cogging torque of multilayer IPM machines. Compared with finite element analysis (FEA), it has the advantages of faster modeling, lower computational resource usage and shorter computation time, while achieving comparable accuracy. The analytical model is helpful and applicable for the open-circuit field calculation of multilayer IPM machines of any size and pole/slot number combination.
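The MEC half of such a hybrid model reduces to Kirchhoff-style circuit analysis with reluctances in place of resistances and MMF in place of voltage. A minimal single-branch sketch (a real rotor network couples many such branches into a system of node equations; all values here are illustrative):

```python
def reluctance(length, mu, area):
    """R = l / (mu * A), the magnetic analogue of electrical resistance."""
    return length / (mu * area)

def series_flux(mmf, reluctances):
    """Flux in a series magnetic circuit: phi = MMF / sum(R_i).

    The magnetic analogue of Ohm's law plus Kirchhoff's voltage law; the MEC
    method assembles many such branches into a rotor network and solves for
    the node magnetic potentials.
    """
    return mmf / sum(reluctances)
```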
GPU Computing in Bayesian Inference of Realized Stochastic Volatility Model
International Nuclear Information System (INIS)
Takaishi, Tetsuya
2015-01-01
The realized stochastic volatility (RSV) model that utilizes the realized volatility as additional information has been proposed to infer the volatility of financial time series. We consider the Bayesian inference of the RSV model by the Hybrid Monte Carlo (HMC) algorithm. The HMC algorithm can be parallelized and thus performed on the GPU for speedup. The GPU code is developed with CUDA Fortran. We compare the computational time of performing the HMC algorithm on GPU (GTX 760) and CPU (Intel i7-4770, 3.4 GHz) and find that the GPU can be up to 17 times faster than the CPU. We also code the program with OpenACC and find that appropriate coding can achieve a similar speedup to CUDA Fortran.
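The HMC algorithm itself fits in a few lines: sample a momentum, integrate Hamiltonian dynamics with a leapfrog scheme, and accept or reject via a Metropolis test. A sequential 1-D Python sketch of the algorithm the abstract parallelizes on the GPU (step size and trajectory length are arbitrary choices, not the paper's settings):

```python
import math
import random

def hmc_step(x, logp_grad, eps=0.1, n_leap=20, rng=None):
    """One Hybrid Monte Carlo step for a 1-D target distribution.

    `logp_grad(x)` returns (log p(x), d log p / dx).
    """
    rng = rng or random
    p = rng.gauss(0.0, 1.0)               # sample auxiliary momentum
    logp, grad = logp_grad(x)
    h0 = -logp + 0.5 * p * p              # initial Hamiltonian
    xn = x
    pn = p + 0.5 * eps * grad             # initial half-step in momentum
    for i in range(n_leap):
        xn += eps * pn                    # full position step
        logp, grad = logp_grad(xn)
        pn += eps * grad if i < n_leap - 1 else 0.5 * eps * grad
    h1 = -logp + 0.5 * pn * pn            # final Hamiltonian
    if math.log(rng.random()) < h0 - h1:  # Metropolis accept/reject
        return xn
    return x
```

Run repeatedly, the chain samples from p(x); on the GPU, many chains (or the gradient evaluations within one) are evaluated in parallel.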
Efficient Vaccine Distribution Based on a Hybrid Compartmental Model.
Directory of Open Access Journals (Sweden)
Zhiwen Yu
Full Text Available To effectively and efficiently reduce the morbidity and mortality that may be caused by outbreaks of emerging infectious diseases, it is very important for public health agencies to make informed decisions for controlling the spread of the disease. Such decisions must incorporate various kinds of intervention strategies, such as vaccinations, school closures and border restrictions. Recently, researchers have paid increased attention to searching for effective vaccine distribution strategies for reducing the effects of pandemic outbreaks when resources are limited. Most of the existing research work has focused on how to design an effective age-structured epidemic model and to select a suitable vaccine distribution strategy to prevent the propagation of an infectious virus. Models that evaluate age structure effects are common, but models that additionally evaluate geographical effects are less common. In this paper, we propose a new SEIR (susceptible-exposed-infectious-recovered) model, named the hybrid SEIR-V model (HSEIR-V), which considers not only the dynamics of infection prevalence in several age-specific host populations, but also seeks to characterize the dynamics by which a virus spreads in various geographic districts. Several vaccination strategies, such as different kinds of vaccine coverage, different vaccine release times and different vaccine deployment methods, are incorporated into the HSEIR-V compartmental model. We also design four hybrid vaccination distribution strategies (based on population size, contact pattern matrix, infection rate and infectious risk) for controlling the spread of viral infections. Based on data from the 2009-2010 H1N1 influenza epidemic, we evaluate the effectiveness of our proposed HSEIR-V model and study the effects of different types of human behaviour in responding to epidemics.
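The SEIR backbone of the proposed HSEIR-V model can be sketched with a forward-Euler step of the textbook equations; the age-specific, geographic and vaccination structure the paper adds is omitted here, and all rate values are illustrative:

```python
def seir_step(s, e, i, r, beta, sigma, gamma, dt):
    """One forward-Euler step of the classic SEIR equations (fractions of N).

    beta: transmission rate, sigma: incubation rate (E -> I),
    gamma: recovery rate (I -> R). The HSEIR-V model layers age groups,
    districts and a vaccinated compartment on top of this base model.
    """
    new_inf = beta * s * i        # new exposures this step
    ds = -new_inf
    de = new_inf - sigma * e
    di = sigma * e - gamma * i
    dr = gamma * i
    return s + dt * ds, e + dt * de, i + dt * di, r + dt * dr
```

Because the right-hand sides sum to zero, each step conserves the total population fraction.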
Hybrid Modeling Method for a DEP Based Particle Manipulation
Directory of Open Access Journals (Sweden)
Mohamad Sawan
2013-01-01
Full Text Available In this paper, a new modeling approach for dielectrophoresis (DEP)-based particle manipulation is presented. The proposed method fills missing links in finite element modeling between the multiphysics simulation and the biological behavior. This technique is among the first steps to develop a more complex platform covering several types of manipulation, such as magnetophoresis and optics. The modeling approach is based on a hybrid interface using both ANSYS and MATLAB to link the propagation of the electric field in the micro-channel to the particle motion. ANSYS is used to simulate the electrical propagation, while MATLAB interprets the results to calculate cell displacement and sends the new information to ANSYS for the next iteration. The beta version of the proposed technique takes into account particle shape, weight and electrical properties. The first results obtained are coherent with experimental results.
Hybrid Donor-Dot Devices made using Top-down Ion Implantation for Quantum Computing
Bielejec, Edward; Bishop, Nathan; Carroll, Malcolm
2012-02-01
We present progress towards fabricating hybrid donor -- quantum dot (QD) devices for quantum computing. These devices will exploit the long coherence time of the donor system and the surface-state manipulation associated with a QD. Fabrication requires detection of single ions implanted with tens-of-nanometer precision. We show in this talk 100% detection efficiency for single ions using a single-ion Geiger mode avalanche (SIGMA) detector integrated into a Si MOS QD process flow. The NanoImplanter (nI), a focused ion beam system, is used for precision top-down placement of the implanted ion. This machine has 10 nm resolution combined with a mass velocity filter, allowing the use of multi-species liquid metal ion sources (LMIS) to implant P and Sb ions, and a fast blanking and chopping system for single-ion implants. The combination of the nI and the integration of the SIGMA with the MOS QD process flow establishes a path to fabricate hybrid single donor-dot devices. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Requirements for Control Room Computer-Based Procedures for use in Hybrid Control Rooms
Energy Technology Data Exchange (ETDEWEB)
Le Blanc, Katya Lee [Idaho National Lab. (INL), Idaho Falls, ID (United States); Oxstrand, Johanna Helene [Idaho National Lab. (INL), Idaho Falls, ID (United States); Joe, Jeffrey Clark [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2015-05-01
Many plants in the U.S. are currently undergoing control room modernization. The main drivers for modernization are the aging and obsolescence of existing equipment, which typically results in a like-for-like replacement of analog equipment with digital systems. However, the modernization efforts present an opportunity to employ advanced technology that would not only extend the life, but also enhance the efficiency and cost competitiveness of nuclear power. Computer-based procedures (CBPs) are one example of near-term advanced technology that may provide enhanced efficiencies above and beyond like-for-like replacements of analog systems. Researchers in the LWRS program are investigating the benefits of advanced technologies such as CBPs, with the goal of assisting utilities in decision making during modernization projects. This report describes the existing research on CBPs, discusses the unique issues related to using CBPs in hybrid control rooms (i.e., partially modernized analog control rooms), and defines the requirements of CBPs for hybrid control rooms.
Directory of Open Access Journals (Sweden)
Jalalifar Mehran
2007-01-01
Full Text Available In this paper, using an adaptive backstepping approach, an adaptive rotor flux observer is proposed that simultaneously estimates the stator and rotor resistances of an induction motor used in a series hybrid electric vehicle. The controller of the induction motor (IM) is designed based on the input-output feedback linearization technique. Combining this controller with the adaptive backstepping observer makes the system robust against rotor and stator resistance uncertainties. In addition, the mechanical components of a hybrid electric vehicle are called from the Advanced Vehicle Simulator Software Library and then linked with the electric motor. Finally, a typical series hybrid electric vehicle is modeled and investigated. Various tests, such as acceleration, ramp traversal, and fuel consumption and emissions, are performed on the proposed model of a series hybrid vehicle. The computer simulation results obtained confirm the validity and performance of the proposed IM control approach used for a series hybrid electric vehicle.
Computer Modelling «Smart Building»
Directory of Open Access Journals (Sweden)
O. Yu. Maryasin
2016-01-01
Full Text Available Currently, ”smart building” or ”smart house” technology is developing actively in industrialized countries. The main idea of the ”smart building” or ”smart house” is to have a system which is able to identify definite situations happening in the house and respond accordingly. An automated house management system is built for automated control and management, and also for the organization of interaction between the separate systems of engineering equipment. This system includes automation subsystems of particular engineering equipment as separate components. In order to study the different functioning modes of the engineering subsystems and the whole system, mathematical and computer modeling needs to be used. From a mathematical point of view, the description of a ”smart building” is a continuous-discrete or hybrid system consisting of interacting elements of different nature, whose behavior is described by continuous and discrete processes. In the article the authors present a computer model of a ”smart building” which makes it possible to model the work of the main engineering subsystems and the management algorithms. The model is created in the Simulink Matlab system with the ”physical modeling” library Simscape and the Stateflow library. The peculiarity of this model is the use of specialized management and control algorithms which provide coordinated interaction of the subsystems and optimize power consumption.
Computational modeling of human oral bioavailability: what will be next?
Cabrera-Pérez, Miguel Ángel; Pham-The, Hai
2018-06-01
The oral route is the most convenient way of administering drugs. Therefore, accurate determination of oral bioavailability is paramount during drug discovery and development. Quantitative structure-property relationship (QSPR), rule-of-thumb (RoT) and physiologically based pharmacokinetic (PBPK) approaches are promising alternatives for early oral bioavailability prediction. Areas covered: The authors give insight into the factors affecting bioavailability, the fundamental theoretical framework and the practical aspects of computational methods for predicting this property. They also give their perspectives on future computational models for estimating oral bioavailability. Expert opinion: Oral bioavailability is a multi-factorial pharmacokinetic property whose accurate prediction is challenging. For RoT and QSPR modeling, the reliability of datasets, the significance of molecular descriptor families and the diversity of chemometric tools used are important factors that define model predictability and interpretability. Likewise, for PBPK modeling, the integrity of the pharmacokinetic data, the number of input parameters, the complexity of statistical analysis and the software packages used are relevant factors in bioavailability prediction. Although these approaches have been utilized independently, the tendency to use hybrid QSPR-PBPK approaches, together with the exploration of ensemble and deep-learning systems for QSPR modeling of oral bioavailability, has opened new avenues for developing promising tools for oral bioavailability prediction.
Computational models of airway branching morphogenesis.
Varner, Victor D; Nelson, Celeste M
2017-07-01
The bronchial network of the mammalian lung consists of millions of dichotomous branches arranged in a highly complex, space-filling tree. Recent computational models of branching morphogenesis in the lung have helped uncover the biological mechanisms that construct this ramified architecture. In this review, we focus on three different theoretical approaches - geometric modeling, reaction-diffusion modeling, and continuum mechanical modeling - and discuss how, taken together, these models have identified the geometric principles necessary to build an efficient bronchial network, as well as the patterning mechanisms that specify airway geometry in the developing embryo. We emphasize models that are integrated with biological experiments and suggest how recent progress in computational modeling has advanced our understanding of airway branching morphogenesis. Copyright © 2016 Elsevier Ltd. All rights reserved.
The influence of nonlocal hybridization on ground-state properties of the Falicov-Kimball model
International Nuclear Information System (INIS)
Farkasovsky, Pavol
2005-01-01
The density matrix renormalization group is used to examine the effects of nonlocal hybridization on ground-state properties of the Falicov-Kimball model (FKM) in one dimension. Special attention is devoted to the problem of hybridization-induced insulator-metal transitions. It is shown that the picture of insulator-metal transitions found for the FKM with nonlocal hybridization strongly differs from that found for the FKM without hybridization (as well as with local hybridization). The effect of nonlocal hybridization is so strong that it can induce the insulator-metal transition even in the half-filled band case, where the ground states of the FKM without hybridization are insulating for all finite Coulomb interactions. Outside the half-filled band case, the metal-insulator transition driven by pressure is found for finite values of nonlocal hybridization
Computational multiscale modeling of intergranular cracking
International Nuclear Information System (INIS)
Simonovski, Igor; Cizelj, Leon
2011-01-01
A novel computational approach for simulation of intergranular cracks in a polycrystalline aggregate is proposed in this paper. The computational model includes a topological model of the experimentally determined microstructure of a 400 μm diameter stainless steel wire and automatic finite element discretization of the grains and grain boundaries. The microstructure was spatially characterized by X-ray diffraction contrast tomography and contains 362 grains and some 1600 grain boundaries. Available constitutive models currently include isotropic elasticity for the grain interior and cohesive behavior with damage for the grain boundaries. The experimentally determined lattice orientations are employed to distinguish between resistant low energy and susceptible high energy grain boundaries in the model. The feasibility and performance of the proposed computational approach is demonstrated by simulating the onset and propagation of intergranular cracking. The preliminary numerical results are outlined and discussed.
Modeling multimodal human-computer interaction
Obrenovic, Z.; Starcevic, D.
2004-01-01
Incorporating the well-known Unified Modeling Language into a generic modeling framework makes research on multimodal human-computer interaction accessible to a wide range of software engineers. Multimodal interaction is part of everyday human discourse: We speak, move, gesture, and shift our gaze
A Computational Model of Selection by Consequences
McDowell, J. J.
2004-01-01
Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of…
Generating Computational Models for Serious Gaming
Westera, Wim
2018-01-01
Many serious games include computational models that simulate dynamic systems. These models promote enhanced interaction and responsiveness. Under the social web paradigm more and more usable game authoring tools become available that enable prosumers to create their own games, but the inclusion of
Dynamic Modeling and Simulation of a Switched Reluctance Motor in a Series Hybrid Electric Vehicle
Siavash Sadeghi; Mojtaba Mirsalim; Arash Hassanpour Isfahani
2010-01-01
Dynamic behavior analysis of electric motors is required in order to accurately evaluate the performance, energy consumption and pollution level of hybrid electric vehicles. Simulation tools for hybrid electric vehicles are divided into steady state and dynamic models. Tools with steady-state models are useful for system-level analysis whereas tools that utilize dynamic models give in-depth information about the behavior of sublevel components. For the accurate prediction of hybrid electric vehicl...
A hybrid computational method for the discovery of novel reproduction-related genes.
Chen, Lei; Chu, Chen; Kong, Xiangyin; Huang, Guohua; Huang, Tao; Cai, Yu-Dong
2015-01-01
Uncovering the molecular mechanisms underlying reproduction is of great importance to infertility treatment and to the generation of healthy offspring. In this study, we discovered novel reproduction-related genes with a hybrid computational method, integrating three different types of method, which offered new clues for further reproduction research. This method was first executed on a weighted graph, constructed based on known protein-protein interactions, to search the shortest paths connecting any two known reproduction-related genes. Genes occurring in these paths were deemed to have a special relationship with reproduction. These newly discovered genes were filtered with a randomization test. Then, the remaining genes were further selected according to their associations with known reproduction-related genes measured by protein-protein interaction score and alignment score obtained by BLAST. The in-depth analysis of the high confidence novel reproduction genes revealed hidden mechanisms of reproduction and provided guidelines for further experimental validations.
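The first stage described above, searching shortest paths in a weighted protein-protein interaction graph between known reproduction genes, is standard Dijkstra. A minimal sketch (node names and weights are placeholders, not actual gene identifiers or interaction scores):

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra over a weighted interaction graph.

    graph: {node: [(neighbor, weight), ...]}. Genes occurring on the
    returned path between two known reproduction genes are the candidate
    genes the paper filters further.
    """
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                       # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst                   # walk predecessors back to src
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1]
```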
A hybrid computation method for determining fluctuations of temperature in branched structures
International Nuclear Information System (INIS)
Czomber, L.
1982-01-01
A hybrid computation method for determining temperature fluctuations at discrete points of slab-like geometries is developed on the basis of a new formulation of the finite difference method. For this purpose, a new finite difference method is combined with an exact solution of the heat equation within the range of values of the Laplace transformation. Whereas the exact solution can be applied to arbitrarily large ranges, the finite difference formulation is given for structural ranges which need finer discretization. The boundary conditions of the exact solution are substituted by finite difference terms for the boundary residual flow or an internal heat source, depending on the problem. The resulting system of conditional equations contains only the node parameters of the finite difference method. (orig.) [de]
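The finite-difference half of such a hybrid scheme can be illustrated with an explicit step of the 1-D heat equation on the finely discretized region; the Laplace-domain exact solution the abstract couples it to is omitted here, and the grid values are illustrative:

```python
def heat_step(u, alpha, dx, dt):
    """One explicit finite-difference step of u_t = alpha * u_xx.

    Fixed (Dirichlet) boundary values; interior nodes are updated with the
    standard three-point second-difference stencil.
    """
    r = alpha * dt / dx ** 2          # stability requires r <= 0.5
    new = list(u)
    for j in range(1, len(u) - 1):
        new[j] = u[j] + r * (u[j - 1] - 2 * u[j] + u[j + 1])
    return new
```

In the hybrid method, the boundary entries would be supplied by the exact Laplace-domain solution of the adjacent coarse region instead of being held fixed.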
A Hybrid Soft-computing Method for Image Analysis of Digital Plantar Scanners
Razjouyan, Javad; Khayat, Omid; Siahi, Mehdi; Mansouri, Ali Alizadeh
2013-01-01
Digital foot scanners have been developed in recent years to provide anthropometrists with a digital image of the insole, with pressure distribution and anthropometric information. In this paper, a hybrid algorithm combining the gray level spatial correlation (GLSC) histogram and Shanbag entropy is presented for the analysis of scanned foot images. An evolutionary algorithm is also employed to find the optimum parameters of GLSC and the transform function of the membership values. The resulting binary (thresholded) images undergo anthropometric measurements, taking into account the scale factor from pixel size to metric scale. The proposed method is finally applied to plantar images obtained by scanning the feet of randomly selected subjects with a foot scanner system, as described in the paper. The running computation time and the effects of the GLSC parameters are investigated in the simulation results. PMID:24083133
Insect-computer hybrid legged robot with user-adjustable speed, step length and walking gait.
Cao, Feng; Zhang, Chao; Choo, Hao Yu; Sato, Hirotaka
2016-03-01
We have constructed an insect-computer hybrid legged robot using a living beetle (Mecynorrhina torquata; Coleoptera). The protraction/retraction and levation/depression motions in both forelegs of the beetle were elicited by electrically stimulating eight corresponding leg muscles via eight pairs of implanted electrodes. To perform a defined walking gait (e.g., gallop), different muscles were individually stimulated in a predefined sequence using a microcontroller. Different walking gaits were performed by reordering the applied stimulation signals (i.e., applying different sequences). By varying the duration of the stimulation sequences, we successfully controlled the step frequency and hence the beetle's walking speed. To the best of our knowledge, this paper presents the first demonstration of living insect locomotion control with a user-adjustable walking gait, step length and walking speed. © 2016 The Author(s).
An Interactive Personalized Recommendation System Using the Hybrid Algorithm Model
Directory of Open Access Journals (Sweden)
Yan Guo
2017-10-01
Full Text Available With the rapid development of e-commerce, the contradiction between the disorder of business information and customer demand is increasingly prominent. This study aims to make e-commerce shopping more convenient, and to avoid information overload, through an interactive personalized recommendation system using a hybrid algorithm model. The proposed model first uses various recommendation algorithms to obtain a list of original recommendation results. Combined with the customer’s feedback in an interactive manner, it then establishes the weights of the corresponding recommendation algorithms. Finally, the synthetic formula of evidence theory is used to fuse the original results to obtain the final recommended products. The recommendation performance of the proposed method is compared with that of traditional methods. Experimental results from a Taobao online dress shop clearly show that the proposed method increases the efficiency of data mining in terms of consumer coverage, consumer discovery accuracy and recommendation recall. The hybrid recommendation algorithm complements the advantages of the existing recommendation algorithms in data mining. The interactive assigned-weight method meets consumer demand better and alleviates the problem of information overload. Meanwhile, our study offers important implications for e-commerce platform providers regarding the design of product recommendation systems.
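The fusion step can be illustrated with Dempster's rule of combination, the classic synthetic formula of evidence theory. The sketch below is restricted to singleton hypotheses, and the mass values and product names are hypothetical; in the real system the masses would reflect the interactively learned algorithm weights:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions defined
    over singleton hypotheses (a simplified stand-in for the paper's
    synthetic formula of evidence theory)."""
    items = set(m1) | set(m2)
    # Conflict K: total mass assigned to incompatible pairs of singletons.
    K = sum(m1.get(a, 0) * m2.get(b, 0) for a in m1 for b in m2 if a != b)
    assert K < 1, "total conflict: sources cannot be combined"
    return {x: m1.get(x, 0) * m2.get(x, 0) / (1 - K) for x in items}

# Two recommenders (e.g. collaborative vs. content-based) each assign
# hypothetical masses to candidate products; fusion sharpens agreement.
cf = {"dress_A": 0.6, "dress_B": 0.4}
cb = {"dress_A": 0.7, "dress_B": 0.3}
fused = dempster_combine(cf, cb)  # dress_A dominates after fusion
```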
A Probability-Based Hybrid User Model for Recommendation System
Directory of Open Access Journals (Sweden)
Jia Hao
2016-01-01
Full Text Available With the rapid development of information communication technology, the available information or knowledge has increased exponentially, causing the well-known information overload phenomenon. This problem is more serious in product design corporations because over half of the valuable design time is consumed in knowledge acquisition, which greatly extends the design cycle and weakens competitiveness. Recommender systems therefore become very important in the domain of product design. This research presents a probability-based hybrid user model, which is a combination of collaborative filtering and content-based filtering. This hybrid model utilizes user ratings and item topics or classes, which are available in the domain of product design, to predict the knowledge requirement. A comprehensive analysis of the experimental results shows that the proposed method achieves better performance in most of the parameter settings. This work contributes a probability-based method to the community for implementing recommender systems when only user ratings and item topics are available.
Hybrid quantum-classical modeling of quantum dot devices
Kantner, Markus; Mittnenzweig, Markus; Koprucki, Thomas
2017-11-01
The design of electrically driven quantum dot devices for quantum optical applications calls for modeling approaches combining classical device physics with quantum mechanics. We connect the well-established fields of semiclassical semiconductor transport theory and the theory of open quantum systems to meet this requirement. By coupling the van Roosbroeck system with a quantum master equation in Lindblad form, we introduce a new hybrid quantum-classical modeling approach, which provides a comprehensive description of quantum dot devices on multiple scales: it enables the calculation of quantum optical figures of merit and the spatially resolved simulation of the current flow in realistic semiconductor device geometries in a unified way. We construct the interface between both theories in such a way that the resulting hybrid system obeys the fundamental axioms of (non)equilibrium thermodynamics. We show that our approach guarantees the conservation of charge, consistency with the thermodynamic equilibrium and the second law of thermodynamics. The feasibility of the approach is demonstrated by numerical simulations of an electrically driven single-photon source based on a single quantum dot in the stationary and transient operation regimes.
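The quantum half of the coupling is the Lindblad quantum master equation; a standard textbook form (with H the Hamiltonian and L_k the dissipation channels with rates γ_k) reads:

```latex
% Lindblad quantum master equation for the density matrix \rho
\frac{\mathrm{d}\rho}{\mathrm{d}t}
  = -\frac{i}{\hbar}\left[H,\rho\right]
  + \sum_k \gamma_k \left( L_k \rho L_k^{\dagger}
  - \tfrac{1}{2}\left\{ L_k^{\dagger} L_k ,\, \rho \right\} \right)
```

In the hybrid approach this equation is coupled to the van Roosbroeck drift-diffusion system through the carrier densities; the specific coupling terms are constructed in the paper so that charge conservation and the second law hold.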
Axelrod Model of Social Influence with Cultural Hybridization
Radillo-Díaz, Alejandro; Pérez, Luis A.; Del Castillo-Mussot, Marcelo
2012-10-01
Since cultural interactions between a pair of social agents involve changes in both individuals, we present simulations of a new model based on Axelrod's homogenization mechanism that includes hybridization, or mixture, of the agents' features. In this new hybridization model, once a cultural feature of a pair of agents has been chosen for the interaction, the average of the values for this feature is reassigned as the new value for both agents after the interaction. Moreover, a parameter representing social tolerance is implemented in order to quantify whether agents are similar enough to engage in interaction, as well as to determine whether they belong to the same cluster of similar agents after the system has reached the frozen state. The transitions from a homogeneous state to a fragmented one decrease in abruptness as tolerance is increased. Additionally, the entropy associated with the system presents a maximum within the transition, the width of which increases as tolerance does. Moreover, a plateau was found inside the transition for a low-tolerance system of agents with only two cultural features.
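A single interaction of the hybridization mechanism can be sketched as follows; the continuous feature values and the exact tolerance criterion (mean absolute feature distance) are illustrative assumptions, not the paper's precise definitions:

```python
import random

def hybridize(a, b, tol):
    """One interaction of the hybridization variant: if the two agents'
    cultural vectors are close enough (mean feature distance <= tol),
    a randomly chosen feature is replaced in BOTH agents by its average."""
    dist = sum(abs(x - y) for x, y in zip(a, b)) / len(a)
    if dist > tol:                  # too dissimilar: no interaction
        return False
    f = random.randrange(len(a))    # pick the feature to hybridize
    a[f] = b[f] = (a[f] + b[f]) / 2
    return True

agents = [[0.1, 0.9], [0.2, 0.8]]
hybridize(agents[0], agents[1], tol=0.5)  # one feature moves to the mean
```

Because both agents move to the average, repeated interactions contract clusters of mutually tolerant agents toward a common culture, which is what smooths the homogeneous-to-fragmented transition as tolerance grows.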
A Hybrid Semi-Supervised Anomaly Detection Model for High-Dimensional Data
Directory of Open Access Journals (Sweden)
Hongchao Song
2017-01-01
Full Text Available Anomaly detection, which aims to identify observations that deviate from a nominal sample, is a challenging task for high-dimensional data. Traditional distance-based anomaly detection methods compute the neighborhood distances between observations and suffer from the curse of dimensionality in high-dimensional space; for example, the distances between any pair of samples become similar, and every sample may appear to be an outlier. In this paper, we propose a hybrid semi-supervised anomaly detection model for high-dimensional data that consists of two parts: a deep autoencoder (DAE) and an ensemble of k-nearest-neighbor-graph- (K-NNG-) based anomaly detectors. Benefiting from its ability of nonlinear mapping, the DAE is first trained to learn the intrinsic features of a high-dimensional dataset in order to represent the high-dimensional data in a more compact subspace. Several nonparametric KNN-based anomaly detectors are then built from different subsets that are randomly sampled from the whole dataset. The final prediction is made by combining all the anomaly detectors. The performance of the proposed method is evaluated on several real-life datasets, and the results confirm that the proposed hybrid model improves the detection accuracy and reduces the computational complexity.
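The detector half of the pipeline can be sketched with a plain k-NN distance score. Here it is applied directly to low-dimensional points; in the paper the inputs would instead be the DAE's compact representations, and several such detectors built on random subsets would be ensembled:

```python
import numpy as np

def knn_anomaly_scores(train, test, k=3):
    """Distance to the k-th nearest training neighbour as an anomaly
    score -- a minimal stand-in for the K-NNG-based detector stage."""
    # Pairwise distances between every test point and every training point.
    d = np.linalg.norm(test[:, None, :] - train[None, :, :], axis=-1)
    return np.sort(d, axis=1)[:, k - 1]

rng = np.random.default_rng(0)
nominal = rng.normal(0, 1, size=(200, 2))      # nominal training cluster
queries = np.array([[0.0, 0.0], [8.0, 8.0]])   # an inlier and an outlier
scores = knn_anomaly_scores(nominal, queries)  # outlier scores far higher
```

Running the detector in the DAE's compressed subspace is what sidesteps the concentration-of-distances problem the abstract describes.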
Vallat, Brinda; Webb, Benjamin; Westbrook, John D; Sali, Andrej; Berman, Helen M
2018-04-09
Essential processes in biology are carried out by large macromolecular assemblies, whose structures are often difficult to determine by traditional methods. Increasingly, researchers combine measured data and computed information from several complementary methods to obtain "hybrid" or "integrative" structural models of macromolecules and their assemblies. These integrative/hybrid (I/H) models are not archived in the PDB because of the absence of standard data representations and processing mechanisms. Here we present the development of data standards and a prototype system for archiving I/H models. The data standards provide the definitions required for representing I/H models that span multiple spatiotemporal scales and conformational states, as well as spatial restraints derived from different experimental techniques. Based on these data definitions, we have built a prototype system called PDB-Dev, which provides the infrastructure necessary to archive I/H structural models. PDB-Dev is now accepting structures and is open to the community for new submissions. Copyright © 2018 Elsevier Ltd. All rights reserved.
Elsheikh, Ahmed H.
2014-02-01
A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence needed for prior model selection, and it has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM requires only forward model runs, so the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied to Bayesian calibration and prior model selection for several nonlinear subsurface flow problems. © 2013 Elsevier Inc.
Modelling the solar wind interaction with Mercury by a quasi-neutral hybrid model
Directory of Open Access Journals (Sweden)
E. Kallio
Full Text Available The quasi-neutral hybrid model is a self-consistent modelling approach that includes positively charged particles and an electron fluid. The approach has received increasing interest in space plasma physics research because it makes it possible to study several plasma physical processes that are difficult or impossible to model with self-consistent fluid models, such as the effects associated with the ions’ finite gyroradius, the velocity difference between different ion species, or a non-Maxwellian velocity distribution function. To date, quasi-neutral hybrid models have been used to study the solar wind interaction with the non-magnetised Solar System bodies Mars, Venus, Titan and comets. Localized, two-dimensional hybrid model runs have also been made to study the terrestrial dayside magnetosheath. However, the Hermean plasma environment has not yet been analysed with a global quasi-neutral hybrid model.
In this paper we present a new quasi-neutral hybrid model developed to study various processes associated with the Mercury-solar wind interaction. Emphasis is placed on addressing advantages and disadvantages of the approach to study different plasma physical processes near the planet. The basic assumptions of the approach and the algorithms used in the new model are thoroughly presented. Finally, some of the first three-dimensional hybrid model runs made for Mercury are presented.
The resulting macroscopic plasma parameters and the morphology of the magnetic field demonstrate the applicability of the new approach to study the Mercury-solar wind interaction globally. In addition, the real advantage of the kinetic hybrid model approach is to study the property of individual ions, and the study clearly demonstrates the large potential of the approach to address these more detailed issues by a quasi-neutral hybrid model in the future.
Key words. Magnetospheric physics
Security Management Model in Cloud Computing Environment
Ahmadpanah, Seyed Hossein
2016-01-01
In the cloud computing environment, the ever-growing number of cloud virtual machines (VMs) poses enormous challenges for VM security and management. In order to address the security issues of the cloud computing virtualization environment, this paper presents an efficient and dynamic VM security management model based on state migration and scheduling, and studies the corresponding virtual machine security architecture based on AHP (Analytic Hierarchy Process) virtual machine de...
Ewe: a computer model for ultrasonic inspection
International Nuclear Information System (INIS)
Douglas, S.R.; Chaplin, K.R.
1991-11-01
The computer program EWE simulates the propagation of elastic waves in solids and liquids. It has been applied to ultrasonic testing to study the echoes generated by cracks and other types of defects. A discussion of the elastic wave equations is given, including the first-order formulation, shear and compression waves, surface waves and boundaries, numerical method of solution, models for cracks and slot defects, input wave generation, returning echo construction, and general computer issues
Light reflection models for computer graphics.
Greenberg, D P
1989-04-14
During the past 20 years, computer graphic techniques for simulating the reflection of light have progressed so that today images of photorealistic quality can be produced. Early algorithms considered direct lighting only, but global illumination phenomena with indirect lighting, surface interreflections, and shadows can now be modeled with ray tracing, radiosity, and Monte Carlo simulations. This article describes the historical development of computer graphic algorithms for light reflection and pictorially illustrates what will be commonly available in the near future.
Finite difference computing with exponential decay models
Langtangen, Hans Petter
2016-01-01
This text provides a very simple, initial introduction to the complete scientific computing pipeline: models, discretization, algorithms, programming, verification, and visualization. The pedagogical strategy is to use one case study – an ordinary differential equation describing exponential decay processes – to illustrate fundamental concepts in mathematics and computer science. The book is easy to read and only requires a command of one-variable calculus and some very basic knowledge of computer programming. In contrast to similar texts on numerical methods and programming, this text has a much stronger focus on implementation and teaches testing and software engineering in particular.
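The case study, u' = -a·u with u(0) = I, is commonly discretized with a theta-rule that unifies the standard schemes; a minimal sketch (the parameter values are illustrative):

```python
def solve_decay(I, a, T, dt, theta=0.5):
    """theta-rule for u' = -a*u, u(0) = I: theta=0 gives Forward Euler,
    theta=1 Backward Euler, theta=0.5 Crank-Nicolson."""
    N = int(round(T / dt))
    u = [I]
    for _ in range(N):
        # One step of the theta-rule in closed form.
        u.append(u[-1] * (1 - (1 - theta) * a * dt) / (1 + theta * a * dt))
    return u

u = solve_decay(I=1.0, a=1.0, T=1.0, dt=0.1)  # compare u[-1] with exp(-1)
```

Verification against the exact solution I·exp(-a·t) is exactly the kind of test the book advocates automating.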
Do's and Don'ts of Computer Models for Planning
Hammond, John S., III
1974-01-01
Concentrates on the managerial issues involved in computer planning models. Describes what computer planning models are and the process by which managers can increase the likelihood of computer planning models being successful in their organizations. (Author/DN)
Directory of Open Access Journals (Sweden)
Jinyi Long
2017-01-01
Full Text Available The hybrid brain computer interface (BCI) based on motor imagery (MI) and P300 has been a preferred strategy for improving detection performance by combining the features of each modality. However, current methods for combining these two modalities optimize them separately, which does not yield optimal performance. Here, we present an efficient framework for optimizing them together by concatenating the features of MI and P300 in a block diagonal form. A linear classifier under a dual spectral norm regularizer is then applied to the combined features. Under this framework, the hybrid features of MI and P300 can be learned, selected, and combined together directly. Experimental results on a hybrid BCI data set based on MI and P300 are provided to illustrate the competitive performance of the proposed method against other conventional methods. This provides evidence that the method used here contributes to the discrimination performance of the brain state in hybrid BCI.
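The block diagonal concatenation can be sketched as below; the matrix shapes are hypothetical, and the dual-spectral-norm (trace-norm) classifier itself is not reproduced:

```python
import numpy as np

def block_diag_features(mi, p300):
    """Place MI and P300 feature blocks on the diagonal of one matrix,
    so a single linear classifier can weight both modalities jointly
    (in the paper, under a dual spectral norm regularizer)."""
    X = np.zeros((mi.shape[0] + p300.shape[0], mi.shape[1] + p300.shape[1]))
    X[:mi.shape[0], :mi.shape[1]] = mi        # MI block, top-left
    X[mi.shape[0]:, mi.shape[1]:] = p300      # P300 block, bottom-right
    return X

mi = np.ones((2, 3))        # hypothetical MI feature block
p300 = 2 * np.ones((4, 5))  # hypothetical P300 feature block
X = block_diag_features(mi, p300)            # combined shape (6, 8)
```

The zero off-diagonal blocks keep the two modalities' features from mixing while still letting one regularized classifier select across both.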
Quantum Vertex Model for Reversible Classical Computing
Chamon, Claudio; Mucciolo, Eduardo; Ruckenstein, Andrei; Yang, Zhicheng
We present a planar vertex model that encodes the result of a universal reversible classical computation in its ground state. The approach involves Boolean variables (spins) placed on links of a two-dimensional lattice, with vertices representing logic gates. Large short-ranged interactions between at most two spins implement the operation of each gate. The lattice is anisotropic, with one direction corresponding to computational time and with transverse boundaries storing the computation's input and output. The model displays no finite-temperature phase transitions, including no glass transitions, independent of the circuit. The computational complexity is encoded in the scaling of the relaxation rate into the ground state with the system size. We use thermal annealing and a novel and more efficient heuristic, "annealing with learning", to study various computational problems. To explore faster relaxation routes, we construct an explicit mapping of the vertex model onto the Chimera architecture of the D-Wave machine, initiating a novel approach to reversible classical computation based on quantum annealing.
Hybrid-PIC Computer Simulation of the Plasma and Erosion Processes in Hall Thrusters
Hofer, Richard R.; Katz, Ira; Mikellides, Ioannis G.; Gamero-Castano, Manuel
2010-01-01
HPHall software simulates and tracks the time-dependent evolution of the plasma and erosion processes in the discharge chamber and near-field plume of Hall thrusters. HPHall is an axisymmetric solver that employs a hybrid fluid/particle-in-cell (Hybrid-PIC) numerical approach. HPHall, originally developed by MIT in 1998, was upgraded to HPHall-2 by the Polytechnic University of Madrid in 2006. The Jet Propulsion Laboratory has continued the development of HPHall-2 through upgrades to the physical models employed in the code, and the addition of entirely new ones. Primary among these are the inclusion of a three-region electron mobility model that more accurately depicts the cross-field electron transport, and the development of an erosion sub-model that allows for the tracking of the erosion of the discharge chamber wall. The code is being developed to provide NASA science missions with a predictive tool of Hall thruster performance and lifetime that can be used to validate Hall thrusters for missions.
Using a hybrid neuron in physiologically inspired models of the basal ganglia
Directory of Open Access Journals (Sweden)
Corey Michael Thibeault
2013-07-01
Full Text Available Our current understanding of the basal ganglia has facilitated the creation of computational models that have contributed novel theories, explored new functional anatomy and demonstrated results complementing physiological experiments. However, the utility of these models extends beyond these applications, particularly in neuromorphic engineering, where the basal ganglia's role in computation is important for applications such as power-efficient autonomous agents and model-based control strategies. The neurons used in existing computational models of the basal ganglia, however, are not amenable to many low-power hardware implementations. Motivated by a need for more hardware-accessible networks, we replicate four published models of the basal ganglia, spanning single neurons and small networks, replacing the more computationally expensive neuron models with an Izhikevich hybrid neuron. This begins with a network modeling action-selection, where the basal activity levels and the ability to appropriately select the most salient input are reproduced. A Parkinson's disease model is then explored under normal conditions, Parkinsonian conditions and during subthalamic nucleus deep brain stimulation. The resulting network is capable of replicating the loss of thalamic relay capabilities in the Parkinsonian state and its return under deep brain stimulation. This is also demonstrated using a network capable of action-selection. Finally, a study of correlation transfer under different patterns of Parkinsonian activity is presented. These networks successfully captured the significant results of the original studies. This not only creates a foundation for neuromorphic hardware implementations but may also support the development of large-scale biophysical models. The former potentially provides a way of improving the efficacy of deep brain stimulation, and the latter allows for the efficient simulation of larger, more comprehensive networks.
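The Izhikevich hybrid neuron referred to here combines continuous quadratic membrane dynamics with a discontinuous after-spike reset, which is what makes it cheap in hardware. A minimal forward-Euler sketch with the standard regular-spiking parameters (the drive current and step sizes are illustrative):

```python
def izhikevich(I, a=0.02, b=0.2, c=-65.0, d=8.0, T=200.0, dt=0.5):
    """Izhikevich hybrid neuron: v is the membrane potential, u a slow
    recovery variable; crossing 30 mV triggers the discontinuous reset."""
    v, u, spikes = c, b * c, []
    for step in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                 # spike: record time, then reset
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes

spikes = izhikevich(I=10.0)           # tonic spiking under constant drive
```

Two state variables and a comparison per step is far cheaper than the conductance-based models it replaces, while the reset preserves the spike patterns the basal ganglia networks depend on.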
An efficient soil water balance model based on hybrid numerical and statistical methods
Mao, Wei; Yang, Jinzhong; Zhu, Yan; Ye, Ming; Liu, Zhao; Wu, Jingwei
2018-04-01
Most soil water balance models only consider downward soil water movement driven by gravitational potential, and thus cannot simulate upward soil water movement driven by evapotranspiration, especially in agricultural areas. In addition, such models cannot be used for simulating soil water movement in heterogeneous soils, and usually require many empirical parameters. To resolve these problems, this study derives a new one-dimensional water balance model for simulating both downward and upward soil water movement in heterogeneous unsaturated zones. The new model is based on a hybrid of numerical and statistical methods, and only requires four physical parameters. The model uses three governing equations to consider three terms that impact soil water movement: the advective term driven by gravitational potential, the source/sink term driven by external forces (e.g., evapotranspiration), and the diffusive term driven by matric potential. The three governing equations are solved separately by using hybrid numerical and statistical methods (e.g., the linear regression method) that consider soil heterogeneity. The four soil hydraulic parameters required by the new model are: saturated hydraulic conductivity, saturated water content, field capacity, and residual water content. The strengths and weaknesses of the new model are evaluated using two published studies, three hypothetical examples and a real-world application. The evaluation is performed by comparing the simulation results of the new model with the corresponding results presented in the published studies, obtained using HYDRUS-1D and observation data. The evaluation indicates that the new model is accurate and efficient for simulating upward soil water flow in heterogeneous soils with complex boundary conditions. The new model is used for evaluating different drainage functions, and the square drainage function and the power drainage function are recommended. Computational efficiency of the new
Computational model for simulation small testing launcher, technical solution
Energy Technology Data Exchange (ETDEWEB)
Chelaru, Teodor-Viorel, E-mail: teodor.chelaru@upb.ro [University POLITEHNICA of Bucharest - Research Center for Aeronautics and Space, Str. Ghe Polizu, nr. 1, Bucharest, Sector 1 (Romania); Cristian, Barbu, E-mail: barbucr@mta.ro [Military Technical Academy, Romania, B-dul. George Coşbuc, nr. 81-83, Bucharest, Sector 5 (Romania); Chelaru, Adrian, E-mail: achelaru@incas.ro [INCAS -National Institute for Aerospace Research Elie Carafoli, B-dul Iuliu Maniu 220, 061126, Bucharest, Sector 6 (Romania)
2014-12-10
The purpose of this paper is to present some aspects regarding the computational model and technical solutions for a multistage suborbital launcher for testing (SLT), used to test spatial equipment and scientific measurements. The computational model consists of a numerical simulation of the SLT evolution for different start conditions. The launcher model presented has six degrees of freedom (6DOF) and variable mass. The results analysed are the flight parameters and ballistic performance. The discussion focuses on the technical possibility of realizing a small multi-stage launcher by recycling military rocket motors. From a technical point of view, the paper is focused on the national project 'Suborbital Launcher for Testing' (SLT), which is based on hybrid propulsion and control systems obtained through an original design. Whereas classical suborbital sounding rockets are unguided and use solid-fuel motors for propulsion in an uncontrolled ballistic flight, the SLT project introduces a different approach by proposing the creation of a guided suborbital launcher, which is basically a satellite launcher at a smaller scale, containing its main subsystems. This is why the project itself can be considered an intermediary step in the development of a wider range of launching systems based on hybrid propulsion technology, which may have a major impact on future European launcher programs. The SLT project, as shown in the title, has two major objectives: first, a short-term objective, which consists in obtaining a suborbital launching system able to go into service in a predictable period of time, and a long-term objective that consists in the development and testing of some unconventional sub-systems which will later be integrated in the satellite launcher as part of the European space program. This is why the technical content of the project must be carried out beyond the range of the existing suborbital
Computational disease modeling – fact or fiction?
Directory of Open Access Journals (Sweden)
Stephan Klaas
2009-06-01
Full Text Available Abstract Background Biomedical research is changing due to the rapid accumulation of experimental data at an unprecedented scale, revealing increasing degrees of complexity of biological processes. The life sciences are facing a transition from a descriptive to a mechanistic approach that reveals principles of cells, cellular networks, organs, and their interactions across several spatial and temporal scales. There are two conceptual traditions in biological computational modeling. The bottom-up approach emphasizes complex intracellular molecular models and is well represented within the systems biology community. On the other hand, the physics-inspired top-down modeling strategy identifies and selects features of (presumably) essential relevance to the phenomena of interest and combines available data in models of modest complexity. Results The workshop "ESF Exploratory Workshop on Computational Disease Modeling" examined the challenges that computational modeling faces in contributing to the understanding and treatment of complex multi-factorial diseases. Participants at the meeting agreed on two general conclusions. First, we identified the critical importance of developing analytical tools for dealing with model and parameter uncertainty. Second, the development of predictive hierarchical models spanning several scales beyond intracellular molecular networks was identified as a major objective. This contrasts with the current focus within the systems biology community on complex molecular modeling. Conclusion During the workshop it became obvious that diverse scientific modeling cultures (from computational neuroscience, theory, data-driven machine-learning approaches, agent-based modeling, network modeling and stochastic-molecular simulations) would benefit from intense cross-talk on shared theoretical issues in order to make progress on clinically relevant problems.
Electromagnetic moments of hadrons and quarks in a hybrid model
International Nuclear Information System (INIS)
Gerasimov, S.B.
1989-01-01
Magnetic moments of baryons are analyzed on the basis of general sum rules following from the theory of broken symmetries and from quark models including relativistic effects and hadronic corrections due to meson exchange currents. A new sum rule is proposed for the hyperon magnetic moments, which is in accord with the most precise new data and also with the theory of electromagnetic ΛΣ0 mixing. The numerical values of the quark electromagnetic moments are obtained within a hybrid model treating the pion cloud effects through the local coupling of the pion field with the constituent massive quarks. The possible sensitivity of the weak neutral current magnetic moments to violation of the Okubo-Zweig-Iizuka rule is emphasized and discussed. 39 refs.; 1 fig
A Hybrid Multiple Criteria Decision Making Model for Supplier Selection
Directory of Open Access Journals (Sweden)
Chung-Min Wu
2013-01-01
Full Text Available The sustainable supplier selection is a vital part of the management of a sustainable supply chain. In this study, a hybrid multiple criteria decision making (MCDM) model is applied to select the optimal supplier. The fuzzy Delphi method, which can lead to better criteria selection, is used to modify the criteria. Considering the interdependence among the selection criteria, the analytic network process (ANP) is then used to obtain their weights. To avoid the calculations and additional pairwise comparisons of ANP, the technique for order preference by similarity to ideal solution (TOPSIS) is used to rank the alternatives. The use of a combination of the fuzzy Delphi method, ANP, and TOPSIS, the proposal of an MCDM model for supplier selection, and the application of these methods to a real case are the unique features of this study.
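The TOPSIS ranking step can be sketched as follows; the supplier scores, the two criteria and the equal weights are hypothetical (in the paper the weights come from ANP):

```python
import numpy as np

def topsis(decision, weights, benefit):
    """TOPSIS: rank alternatives (rows) by relative closeness to the
    ideal solution. benefit[j] is True where larger values are better."""
    m = decision / np.linalg.norm(decision, axis=0)  # vector-normalize columns
    v = m * weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_best = np.linalg.norm(v - ideal, axis=1)
    d_worst = np.linalg.norm(v - worst, axis=1)
    return d_worst / (d_best + d_worst)              # higher = better ranked

# Three hypothetical suppliers scored on cost (lower better) and quality.
scores = topsis(np.array([[220.0, 9.5], [250.0, 6.0], [300.0, 7.0]]),
                weights=np.array([0.5, 0.5]),
                benefit=np.array([False, True]))
```

The closeness coefficient avoids the extra pairwise comparisons that a full ANP ranking of alternatives would require, which is exactly the division of labour the hybrid model proposes.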