An efficient flexible-order model for 3D nonlinear water waves
International Nuclear Information System (INIS)
Engsig-Karup, A.P.; Bingham, H.B.; Lindberg, O.
2009-01-01
The flexible-order, finite difference based fully nonlinear potential flow model described in [H.B. Bingham, H. Zhang, On the accuracy of finite difference solutions for nonlinear water waves, J. Eng. Math. 58 (2007) 211-228] is extended to three dimensions (3D). In order to obtain an optimal scaling of the solution effort multigrid is employed to precondition a GMRES iterative solution of the discretized Laplace problem. A robust multigrid method based on Gauss-Seidel smoothing is found to require special treatment of the boundary conditions along solid boundaries, and in particular on the sea bottom. A new discretization scheme using one layer of grid points outside the fluid domain is presented and shown to provide convergent solutions over the full physical and discrete parameter space of interest. Linear analysis of the fundamental properties of the scheme with respect to accuracy, robustness and energy conservation are presented together with demonstrations of grid independent iteration count and optimal scaling of the solution effort. Calculations are made for 3D nonlinear wave problems for steep nonlinear waves and a shoaling problem which show good agreement with experimental measurements and other calculations from the literature
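A minimal sketch of the Gauss-Seidel smoothing mentioned above, applied to a 1D model problem. This is only an illustration of the basic sweep on a discretized Laplace/Poisson operator; it does not reproduce the paper's 3D sigma-transformed scheme, the ghost-point bottom boundary treatment, or the multigrid-preconditioned GMRES solver.

```python
# Gauss-Seidel sweeps for -u'' = f on (0, 1) with u(0) = u(1) = 0,
# discretized by second-order central differences on a uniform grid.
# Used here as a stand-alone solver; in the paper it serves as the
# multigrid smoother for the 3D Laplace problem.

def gauss_seidel_poisson(f, n, sweeps):
    """Solve -u'' = f with n interior points by Gauss-Seidel iteration."""
    h = 1.0 / (n + 1)
    u = [0.0] * (n + 2)                 # includes the two boundary values
    for _ in range(sweeps):
        for i in range(1, n + 1):       # in-place sweep uses updated u[i-1]
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i - 1])
    return u

# For f = 2 the exact solution is u(x) = x(1 - x), and the central-difference
# stencil is exact for quadratics, so only the iteration error remains.
n = 31
f = [2.0] * n
u = gauss_seidel_poisson(f, n, sweeps=3000)
h = 1.0 / (n + 1)
err = max(abs(u[i] - (i * h) * (1 - i * h)) for i in range(n + 2))
```

As a stand-alone solver Gauss-Seidel needs thousands of sweeps even on this tiny grid; inside a multigrid cycle only a few sweeps per level are used, which is what yields the grid-independent iteration counts reported above.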
An efficient flexible-order model for coastal and ocean water waves
DEFF Research Database (Denmark)
Engsig-Karup, Allan Peter; Bingham, Harry B.; Lindberg, Ole
Current work is directed toward the development of an improved numerical 3D model for fully nonlinear potential water waves over arbitrary depths. The model is high-order accurate, robust and efficient for large-scale problems, and support will be included for flexibility in the description...... as in the original works \\cite{LiFleming1997,BinghamZhang2007}. The new and improved approach employs a GMRES solver with multigrid preconditioning to achieve optimal scaling of the overall solution effort, i.e., directly with $n$, the total number of grid points. A robust method is achieved through a special...
Toward a scalable flexible-order model for 3D nonlinear water waves
DEFF Research Database (Denmark)
Engsig-Karup, Allan Peter; Ducrozet, Guillaume; Bingham, Harry B.
For marine and coastal applications, current work is directed toward the development of a scalable numerical 3D model for fully nonlinear potential water waves over arbitrary depths. The model is high-order accurate, robust and efficient for large-scale problems, and support will be included...... for flexibility in the description of structures by the use of curvilinear boundary-fitted meshes. The mathematical equations for potential waves in the physical domain are transformed through $\\sigma$-mapping(s) to a time-invariant boundary-fitted domain which then becomes a basis for an efficient solution...... strategy on a time-invariant mesh. The 3D numerical model is based on a finite difference method as in the original works \\cite{LiFleming1997,BinghamZhang2007}. Full details and other aspects of an improved 3D solution can be found in \\cite{EBL08}. The new and improved approach for three...
An efficiency correction model
Francke, M.K.; de Vos, A.F.
2009-01-01
We analyze a dataset containing costs and outputs of 67 American local exchange carriers in a period of 11 years. This data has been used to judge the efficiency of BT and KPN using static stochastic frontier models. We show that these models are dynamically misspecified. As an alternative we
Efficient polarimetric BRDF model.
Renhorn, Ingmar G E; Hallberg, Tomas; Boreman, Glenn D
2015-11-30
The purpose of the present manuscript is to present a polarimetric bidirectional reflectance distribution function (BRDF) model suitable for hyperspectral and polarimetric signature modelling. The model is based on a further development of a previously published four-parameter model that has been generalized in order to account for different types of surface structures (generalized Gaussian distribution). A generalization of the Lambertian diffuse model is presented. The pBRDF functions are normalized using numerical integration. Using directional-hemispherical reflectance (DHR) measurements, three of the four basic parameters can be determined for any wavelength. This considerably simplifies the development of multispectral polarimetric BRDF applications. The scattering parameter has to be determined from at least one BRDF measurement. The model deals with linearly polarized radiation; as in e.g. the facet model, depolarization is not included. The model is very general and can inherently model extreme surfaces such as mirrors and Lambertian surfaces. The complex mixture of sources is described by the sum of two basic models, a generalized Gaussian/Fresnel model and a generalized Lambertian model. Although the physics-inspired model has some ad hoc features, its predictive power is impressive over a wide range of angles and scattering magnitudes. The model has been applied successfully to painted surfaces, both dull and glossy, and also to metallic bead-blasted surfaces. The simple and efficient model should be attractive for polarimetric simulations and polarimetric remote sensing.
Efficiency model of Russian banks
Pavlyuk, Dmitry
2006-01-01
The article deals with problems related to the stochastic frontier model of bank efficiency measurement. The model is used to study the efficiency of the banking sector of The Russian Federation. It is based on the stochastic approach both to the efficiency frontier location and to individual bank efficiency values. The model allows estimating bank efficiency values, finding relations with different macro- and microeconomic factors and testing some economic hypotheses.
Modeling of venturi scrubber efficiency
Crowder, Jerry W.; Noll, Kenneth E.; Davis, Wayne T.
The parameters affecting venturi scrubber performance have been rationally examined and modifications to the current modeling theory have been developed. The modified model has been validated with available experimental data for a range of throat gas velocities, liquid-to-gas ratios and particle diameters, and is used to study the effect of some design parameters on collection efficiency. Most striking among the observations is the prediction of a new design parameter termed the minimum contactor length. Also noted is the prediction of little effect on collection efficiency with increasing liquid-to-gas ratio above about 2 ℓ m⁻³. Indeed, for some cases a decrease in collection efficiency is predicted for liquid rates above this value.
Efficiency of economic development models
Directory of Open Access Journals (Sweden)
Oana Camelia Iacob
2013-03-01
The world economy is becoming increasingly integrated. The integration of emerging Asian economies such as China and India increases competition on the world stage, putting pressure on the "actors" already present. These developments have raised questions about the effectiveness of the European development model, which focuses on a high level of equity, insurance and social protection. According to analysts, the world today faces three models of economic development with significant weight: the European, the American and the Asian. This study focuses on analyzing the European development model, with a brief comparison to the United States. In addition, it aims to highlight the relationship between efficiency and social equity in each submodel of the European model, given that social and economic performance across the EU is not homogeneous. To achieve this, different indicators related to social equity and efficiency are analyzed in order to observe the performance of each submodel individually. The article analyzes data to determine submodel performance according to social equity and economic efficiency.
Development of an Efficient GPU-Accelerated Model for Fully Nonlinear Water Waves
DEFF Research Database (Denmark)
of an optimized sequential single-CPU algorithm based on a flexible-order Finite Difference Method. High performance is pursued by utilizing many-core processing in the model focusing on GPUs for acceleration of code execution. This involves combining analytical methods with an algorithm redesign of the current...
Multidimensional Balanced Efficiency Decision Model
Directory of Open Access Journals (Sweden)
Antonella Petrillo
2015-10-01
In this paper a multicriteria methodological approach, based on the Balanced Scorecard (BSC) and the Analytic Network Process (ANP), is proposed to evaluate competitiveness performance in the luxury sector. A set of specific key performance indicators (KPIs) has been proposed. The contribution of our paper is to present the integration of two methodologies: BSC, a multiple-perspective framework for performance assessment, and ANP, a decision-making tool to prioritize multiple performance perspectives and indicators and to generate a unified metric that incorporates diversified issues for conducting supply chain improvements. The BSC/ANP model is used to prioritize all performances within a luxury industry. A real case study is presented.
BOREAS TE-17 Production Efficiency Model Images
National Aeronautics and Space Administration — A BOREAS version of the Global Production Efficiency Model (www.inform.umd.edu/glopem) was developed by TE-17 to generate maps of gross and net primary production,...
Statistical modelling for ship propulsion efficiency
DEFF Research Database (Denmark)
Petersen, Jóan Petur; Jacobsen, Daniel J.; Winther, Ole
2012-01-01
This paper presents a state-of-the-art systems approach to statistical modelling of fuel efficiency in ship propulsion, and also a novel and publicly available data set of high quality sensory data. Two statistical model approaches are investigated and compared: artificial neural networks...
Efficient Modelling and Generation of Markov Automata
Koutny, M.; Timmer, Mark; Ulidowski, I.; Katoen, Joost P.; van de Pol, Jan Cornelis; Stoelinga, Mariëlle Ida Antoinette
This paper introduces a framework for the efficient modelling and generation of Markov automata. It consists of (1) the data-rich process-algebraic language MAPA, allowing concise modelling of systems with nondeterminism, probability and Markovian timing; (2) a restricted form of the language, the
Geometrical efficiency in computerized tomography: generalized model
International Nuclear Information System (INIS)
Costa, P.R.; Robilotta, C.C.
1992-01-01
A simplified model for producing sensitivity and exposure profiles in computerized tomographic systems was recently developed, allowing the behaviour of the profiles at the rotation center of the system to be predicted. The generalization of this model to an arbitrary point of the image plane is described, from which the geometrical efficiency can be evaluated. (C.G.C.)
Flight Test Maneuvers for Efficient Aerodynamic Modeling
Morelli, Eugene A.
2011-01-01
Novel flight test maneuvers for efficient aerodynamic modeling were developed and demonstrated in flight. Orthogonal optimized multi-sine inputs were applied to aircraft control surfaces to excite aircraft dynamic response in all six degrees of freedom simultaneously while keeping the aircraft close to chosen reference flight conditions. Each maneuver was designed for a specific modeling task that cannot be adequately or efficiently accomplished using conventional flight test maneuvers. All of the new maneuvers were first described and explained, then demonstrated on a subscale jet transport aircraft in flight. Real-time and post-flight modeling results obtained using equation-error parameter estimation in the frequency domain were used to show the effectiveness and efficiency of the new maneuvers, as well as the quality of the aerodynamic models that can be identified from the resultant flight data.
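A minimal sketch of the idea behind the orthogonal multi-sine inputs described above: signals built from disjoint sets of harmonics of a common base period are mutually orthogonal, which is what allows several control surfaces to be excited simultaneously and their effects separated. The harmonic assignments and phases below are arbitrary illustrations, not the optimized designs used in the flight tests.

```python
import math

def multisine(t, harmonics, phases, T):
    """Sum of unit-amplitude sines at the given harmonics of base period T."""
    return sum(math.sin(2 * math.pi * k * t / T + p)
               for k, p in zip(harmonics, phases))

T, N = 10.0, 1000
ts = [T * i / N for i in range(N)]
u1 = [multisine(t, [1, 3], [0.4, 1.1], T) for t in ts]   # input for surface 1
u2 = [multisine(t, [2, 4], [0.9, 0.2], T) for t in ts]   # input for surface 2

# Discrete inner product over one base period: ~0 because the harmonic
# sets {1, 3} and {2, 4} are disjoint, regardless of the phases chosen.
inner = sum(a * b for a, b in zip(u1, u2)) / N
```

In the actual maneuvers the phases are additionally optimized to minimize the peak amplitude of each input (low relative peak factor), keeping the aircraft near the reference condition.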
Efficiently adapting graphical models for selectivity estimation
DEFF Research Database (Denmark)
Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.
2013-01-01
cardinality estimation without making the independence assumption. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution over all the attributes in the database into small, usually two-dimensional distributions, without a significant loss...... in estimation accuracy. We show how to efficiently construct such a graphical model from the database using only two-way join queries, and we show how to perform selectivity estimation in a highly efficient manner. We integrate our algorithms into the PostgreSQL DBMS. Experimental results indicate...
Modeling the Efficiency of a Germanium Detector
Hayton, Keith; Prewitt, Michelle; Quarles, C. A.
2006-10-01
We are using the Monte Carlo Program PENELOPE and the cylindrical geometry program PENCYL to develop a model of the detector efficiency of a planar Ge detector. The detector is used for x-ray measurements in an ongoing experiment to measure electron bremsstrahlung. While we are mainly interested in the efficiency up to 60 keV, the model ranges from 10.1 keV (below the Ge absorption edge at 11.1 keV) to 800 keV. Measurements of the detector efficiency have been made in a well-defined geometry with calibrated radioactive sources: Co-57, Se-75, Ba-133, Am-241 and Bi-207. The model is compared with the experimental measurements and is expected to provide a better interpolation formula for the detector efficiency than simply using x-ray absorption coefficients for the major constituents of the detector. Using PENELOPE, we will discuss several factors, such as Ge dead layer, surface ice layer and angular divergence of the source, that influence the efficiency of the detector.
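A toy Monte Carlo sketch of the kind of efficiency calculation described above, reduced to two of the listed factors: attenuation in a dead layer followed by absorption in the active volume. The attenuation coefficients and thicknesses are invented placeholder values, not real Ge data, and the full PENELOPE transport (scattering, escape, geometry) is not modeled.

```python
import math
import random

def detector_efficiency(mu_dead, t_dead, mu_active, t_active, n=100_000, seed=1):
    """MC estimate: photon must survive the dead layer (exp(-mu*t)),
    then interact within the active thickness (1 - exp(-mu*t))."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        if rng.random() < math.exp(-mu_dead * t_dead):              # survives dead layer
            if rng.random() < 1.0 - math.exp(-mu_active * t_active):  # interacts in crystal
                hits += 1
    return hits / n

eff = detector_efficiency(mu_dead=5.0, t_dead=0.01, mu_active=2.0, t_active=1.0)
analytic = math.exp(-5.0 * 0.01) * (1.0 - math.exp(-2.0 * 1.0))
```

For this reduced model the MC estimate converges to the closed-form product, which is the sanity check one would apply before adding the geometry-dependent factors.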
Modeling international trends in energy efficiency
International Nuclear Information System (INIS)
Stern, David I.
2012-01-01
I use a stochastic production frontier to model energy efficiency trends in 85 countries over a 37-year period. Differences in energy efficiency across countries are modeled as a stochastic function of explanatory variables and I estimate the model using the cross-section of time-averaged data, so that no structure is imposed on technological change over time. Energy efficiency is measured using a new energy distance function approach. The country using the least energy per unit output, given its mix of outputs and inputs, defines the global production frontier. A country's relative energy efficiency is given by its distance from the frontier—the ratio of its actual energy use to the minimum required energy use, ceteris paribus. Energy efficiency is higher in countries with, inter alia, higher total factor productivity, undervalued currencies, and smaller fossil fuel reserves and it converges over time across countries. Globally, technological change was the most important factor counteracting the energy-use and carbon-emissions increasing effects of economic growth.
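A toy illustration of the distance-function idea described above: the country using the least energy per unit output defines the frontier, and each country's distance is its actual energy use relative to that minimum. The numbers are invented for illustration, not taken from the study, and the real model conditions on output mix and inputs.

```python
# Hypothetical energy use per unit output for three countries.
actual_energy = {"A": 120.0, "B": 90.0, "C": 150.0}

frontier = min(actual_energy.values())                 # best practice defines the frontier
distance = {c: e / frontier for c, e in actual_energy.items()}    # >= 1 by construction
efficiency = {c: 1.0 / d for c, d in distance.items()}            # frontier country scores 1
```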
Modeling of alpha mass-efficiency curve
International Nuclear Information System (INIS)
Semkow, T.M.; Jeter, H.W.; Parsa, B.; Parekh, P.P.; Haines, D.K.; Bari, A.
2005-01-01
We present a model for efficiency of a detector counting gross α radioactivity from both thin and thick samples, corresponding to low and high sample masses in the counting planchette. The model includes self-absorption of α particles in the sample, energy loss in the absorber, range straggling, as well as detector edge effects. The surface roughness of the sample is treated in terms of fractal geometry. The model reveals a linear dependence of the detector efficiency on the sample mass, for low masses, as well as a power-law dependence for high masses. It is, therefore, named the linear-power-law (LPL) model. In addition, we consider an empirical power-law (EPL) curve, and an exponential (EXP) curve. A comparison is made of the LPL, EPL, and EXP fits to the experimental α mass-efficiency data from gas-proportional detectors for selected radionuclides: 238 U, 230 Th, 239 Pu, 241 Am, and 244 Cm. Based on this comparison, we recommend working equations for fitting mass-efficiency data. Measurement of α radioactivity from a thick sample can determine the fractal dimension of its surface
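A sketch of the linear-power-law (LPL) shape described above: efficiency depends linearly on sample mass for thin samples and follows a power law for thick ones, joined continuously at a crossover mass m0. All parameter values are invented for illustration, not fitted constants from the paper.

```python
def lpl_efficiency(m, eps0, a, m0, b):
    """Gross-alpha counting efficiency vs. sample mass m (arbitrary units).
    Linear below the crossover mass m0, power-law above it, continuous at m0."""
    if m <= m0:
        return eps0 + a * m                     # thin sample: self-absorption grows linearly
    eps_at_m0 = eps0 + a * m0
    return eps_at_m0 * (m / m0) ** (-b)         # thick sample: power-law decay

# Hypothetical parameters: intercept, linear slope, crossover mass, power-law exponent.
params = dict(eps0=0.40, a=-0.01, m0=10.0, b=0.8)
```

Fitting the two branches jointly (rather than an empirical power law or exponential over the whole range) is what the comparison in the abstract recommends.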
Efficient Iris Localization via Optimization Model
Directory of Open Access Journals (Sweden)
Qi Wang
2017-01-01
Iris localization is one of the most important processes in iris recognition. Because of the different kinds of noise in iris images, the localization result may be wrong. In addition, the localization process is time-consuming. To solve these problems, this paper develops an efficient iris localization algorithm via an optimization model. Firstly, the localization problem is modeled as an optimization problem. Then the SIFT feature is selected to represent the characteristic information of the iris outer boundary and eyelids for localization, and the SDM (Supervised Descent Method) algorithm is employed to solve for the final points of the outer boundary and eyelids. Finally, IRLS (Iterative Reweighted Least-Squares) is used to obtain the parameters of the outer boundary and the upper and lower eyelids. Experimental results indicate that the proposed algorithm is efficient and effective.
ACO model should encourage efficient care delivery.
Toussaint, John; Krueger, David; Shortell, Stephen M; Milstein, Arnold; Cutler, David M
2015-09-01
The independent Office of the Actuary for CMS certified that the Pioneer ACO model has met the stringent criteria for expansion to a larger population. Significant savings have accrued and quality targets have been met, so the program as a whole appears to be working. Ironically, 13 of the initial 32 enrollees have left. We attribute this to the design of the ACO models, which inadequately supports efficient care delivery. Using Bellin-ThedaCare Healthcare Partners as an example, we focus on correctable flaws in four core elements of the ACO payment model: finance spending and targets, attribution, and quality performance.
Modelling water uptake efficiency of root systems
Leitner, Daniel; Tron, Stefania; Schröder, Natalie; Bodner, Gernot; Javaux, Mathieu; Vanderborght, Jan; Vereecken, Harry; Schnepf, Andrea
2016-04-01
Water uptake is crucial for plant productivity. Trait-based breeding for more water-efficient crops will enable sustainable agricultural management under specific pedoclimatic conditions, and can increase the drought resistance of plants. Mathematical modelling can be used to find suitable root system traits for better water uptake efficiency, defined as the amount of water taken up per unit of root biomass. This approach requires long simulation times and a large number of simulation runs, since different root systems are tested under different pedoclimatic conditions. In this work, we model water movement by the 1-dimensional Richards equation with the soil hydraulic properties described according to the van Genuchten model. Climatic conditions serve as the upper boundary condition. The root system grows during the simulation period and water uptake is calculated via a sink term (after Tron et al. 2015). The goal of this work is to compare different free software tools based on different numerical schemes to solve the model. We compare implementations using DUMUX (based on finite volumes), Hydrus 1D (based on finite elements), and a Matlab implementation of Van Dam & Feddes 2000 (based on finite differences). We analyse the methods for accuracy, speed and flexibility. Using this model case study, we can clearly show the impact of various root system traits on water uptake efficiency. Furthermore, we can quantify frequent simplifications that are introduced in the modelling step, such as considering a static root system instead of a growing one, or considering a sink term based on root density instead of the full root hydraulic model (Javaux et al. 2008). References Tron, S., Bodner, G., Laio, F., Ridolfi, L., & Leitner, D. (2015). Can diversity in root architecture explain plant water use efficiency? A modeling study. Ecological modelling, 312, 200-210. Van Dam, J. C., & Feddes, R. A. (2000). Numerical simulation of infiltration, evaporation and shallow
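A minimal sketch of the van Genuchten water-retention function mentioned above, which supplies the soil hydraulic properties for the Richards equation. The parameter values in the example are typical textbook numbers for a loam-like soil, not values from the cited study.

```python
def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Volumetric water content at pressure head h (cm; h < 0 means unsaturated).
    theta_r/theta_s: residual/saturated water content; alpha, n: shape parameters."""
    if h >= 0:
        return theta_s                               # saturated soil
    m = 1.0 - 1.0 / n                                # standard Mualem constraint
    se = (1.0 + (alpha * abs(h)) ** n) ** (-m)       # effective saturation in (0, 1)
    return theta_r + (theta_s - theta_r) * se

# Hypothetical loam-like parameters; water content at h = -100 cm.
theta = van_genuchten_theta(-100.0, theta_r=0.078, theta_s=0.43, alpha=0.036, n=1.56)
```

The curve is monotonically decreasing in suction, so a drier soil (more negative h) always returns a lower water content, which the solvers compared above all rely on.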
Efficient Bayesian network modeling of systems
International Nuclear Information System (INIS)
Bensi, Michelle; Kiureghian, Armen Der; Straub, Daniel
2013-01-01
The Bayesian network (BN) is a convenient tool for probabilistic modeling of system performance, particularly when it is of interest to update the reliability of the system or its components in light of observed information. In this paper, BN structures for modeling the performance of systems that are defined in terms of their minimum link or cut sets are investigated. Standard BN structures that define the system node as a child of its constituent components or its minimum link/cut sets lead to converging structures, which are computationally disadvantageous and could severely hamper application of the BN to real systems. A systematic approach to defining an alternative formulation is developed that creates chain-like BN structures that are orders of magnitude more efficient, particularly in terms of computational memory demand. The formulation uses an integer optimization algorithm to identify the most efficient BN structure. Example applications demonstrate the proposed methodology and quantify the gained computational advantage
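A sketch of the system definition such BN models start from: a system is "up" when every component of at least one minimum link set functions. The efficient chain-like BN construction itself is not reproduced here; this only shows the underlying system function, with invented component names.

```python
def system_up(state, min_link_sets):
    """state: dict component -> bool (working?); min_link_sets: iterable of
    component sets. The system works iff some minimum link set is fully working."""
    return any(all(state[c] for c in mls) for mls in min_link_sets)

# Two-path network: the system works via components a and b in series,
# or via component c alone.
mls = [{"a", "b"}, {"c"}]
```

Defining the system node directly as a child of all these sets is the converging structure the paper identifies as computationally disadvantageous; the proposed chain-like formulation encodes the same function with far smaller clique sizes.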
Models for efficient integration of solar energy
DEFF Research Database (Denmark)
Bacher, Peder
the available flexibility in the system. In the present thesis methods related to the operation of solar energy systems and for optimal energy use in buildings are presented. Two approaches for forecasting of solar power based on numerical weather predictions (NWPs) are presented; they are applied to forecast......Efficient operation of energy systems with a substantial amount of renewable energy production is becoming increasingly important. Renewables are dependent on the weather conditions and are therefore by nature volatile and uncontrollable, as opposed to traditional energy production based on combustion..... The "smart grid" is a broad term for the technology for addressing the challenge of operating the grid with a large share of renewables. The "smart" part is formed by technologies that model the properties of the systems and efficiently adapt the load to the volatile energy production, by using...
Efficient 3D scene modeling and mosaicing
Nicosevici, Tudor
2013-01-01
This book proposes a complete pipeline for monocular (single camera) based 3D mapping of terrestrial and underwater environments. The aim is to provide a solution to large-scale scene modeling that is both accurate and efficient. To this end, we have developed a novel Structure from Motion algorithm that increases mapping accuracy by registering camera views directly with the maps. The camera registration uses a dual approach that adapts to the type of environment being mapped. In order to further increase the accuracy of the resulting maps, a new method is presented, allowing detection of images corresponding to the same scene region (crossovers). Crossovers then used in conjunction with global alignment methods in order to highly reduce estimation errors, especially when mapping large areas. Our method is based on Visual Bag of Words paradigm (BoW), offering a more efficient and simpler solution by eliminating the training stage, generally required by state of the art BoW algorithms. Also, towards dev...
Energy Efficiency Model for Induction Furnace
Dey, Asit Kr
2018-01-01
In this paper, a solar induction furnace unit was designed to provide a new solution for the existing AC-powered heating process through a supervisory control and data acquisition (SCADA) system. This unit can be connected directly to the DC system without any internal conversion inside the device. The performance of the new solution is compared with the existing one in terms of power consumption and losses. This work also investigates energy savings, system improvement and a process control model in a foundry induction furnace heating framework supplied by PV solar power. The results are analysed over the long run in terms of energy savings and the integrated process system. The data-acquisition-based solar foundry plant is an extremely multifaceted system that can be run over an almost innumerable range of operating conditions, each characterized by specific energy consumption. Determining ideal operating conditions is a key challenge that requires the involvement of the latest automation technologies, each contributing to allow not only the acquisition, processing, storage, retrieval and visualization of data, but also the implementation of automatic control strategies that can expand the achievement envelope in terms of melting process, safety and energy efficiency.
Effective and efficient model clone detection
DEFF Research Database (Denmark)
Störrle, Harald
2015-01-01
Code clones are a major source of software defects. Thus, it is likely that model clones (i.e., duplicate fragments of models) have a significant negative impact on model quality, and thus, on any software created based on those models, irrespective of whether the software is generated fully...... automatically (“MDD-style”) or hand-crafted following the blueprint defined by the model (“MBSD-style”). Unfortunately, however, model clones are much less well studied than code clones. In this paper, we present a clone detection algorithm for UML domain models. Our approach covers a much greater variety...... of model types than existing approaches while providing high clone detection rates at high speed....
Energy technologies and energy efficiency in economic modelling
DEFF Research Database (Denmark)
Klinge Jacobsen, Henrik
1998-01-01
This paper discusses different approaches to incorporating energy technologies and technological development in energy-economic models. Technological development is a very important issue in long-term energy demand projections and in environmental analyses. Different assumptions on technological ...... of renewable energy and especially wind power will increase the rate of efficiency improvement. A technologically based model in this case indirectly makes the energy efficiency endogenous in the aggregate energy-economy model....... technological development. This paper examines the effect on aggregate energy efficiency of using technological models to describe a number of specific technologies and of incorporating these models in an economic model. Different effects from the technology representation are illustrated. Vintage effects...... illustrates the dependence of average efficiencies and productivity on capacity utilisation rates. In the long run regulation induced by environmental policies are also very important for the improvement of aggregate energy efficiency in the energy supply sector. A Danish policy to increase the share...
Efficient estimation of semiparametric copula models for bivariate survival data
Cheng, Guang
2014-01-01
A semiparametric copula model for bivariate survival data is characterized by a parametric copula model of dependence and nonparametric models of two marginal survival functions. Efficient estimation for the semiparametric copula model has been recently studied for the complete data case. When the survival data are censored, semiparametric efficient estimation has only been considered for some specific copula models such as the Gaussian copulas. In this paper, we obtain the semiparametric efficiency bound and efficient estimation for general semiparametric copula models for possibly censored data. We construct an approximate maximum likelihood estimator by approximating the log baseline hazard functions with spline functions. We show that our estimates of the copula dependence parameter and the survival functions are asymptotically normal and efficient. Simple consistent covariance estimators are also provided. Numerical results are used to illustrate the finite sample performance of the proposed estimators.
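For concreteness, a sketch of one parametric copula family (Clayton) that such semiparametric models can use for the dependence part; the paper itself treats general copulas, with Gaussian copulas as the previously studied special case.

```python
def clayton_copula(u, v, theta):
    """Clayton copula C(u, v) on (0, 1]^2; theta > 0 gives positive dependence.
    The single parameter theta is the 'copula dependence parameter' being estimated."""
    return (u ** (-theta) + v ** (-theta) - 1.0) ** (-1.0 / theta)
```

Two defining properties make handy sanity checks: the uniform-margin boundary condition C(u, 1) = u, and C(u, v) exceeding the independence value u*v under positive dependence.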
EFFICIENT PREDICTIVE MODELLING FOR ARCHAEOLOGICAL RESEARCH
Balla, A.; Pavlogeorgatos, G.; Tsiafakis, D.; Pavlidis, G.
2014-01-01
The study presents a general methodology for designing, developing and implementing predictive modelling for identifying areas of archaeological interest. The methodology is based on documented archaeological data and geographical factors, geospatial analysis and predictive modelling, and has been applied to the identification of possible Macedonian tombs’ locations in Northern Greece. The model was tested extensively and the results were validated using a commonly used predictive gain, which...
Information, complexity and efficiency: The automobile model
Energy Technology Data Exchange (ETDEWEB)
Allenby, B. [Lucent Technologies (United States); Lawrence Livermore National Lab., CA (United States)]
1996-08-08
The new, rapidly evolving field of industrial ecology - the objective, multidisciplinary study of industrial and economic systems and their linkages with fundamental natural systems - provides strong ground for believing that a more environmentally and economically efficient economy will be more information intensive and complex. Information and intellectual capital will be substituted for the more traditional inputs of materials and energy in producing a desirable, yet sustainable, quality of life. While at this point this remains a strong hypothesis, the evolution of the automobile industry can be used to illustrate how such substitution may, in fact, already be occurring in an environmentally and economically critical sector.
Business process model repositories : efficient process retrieval
Yan, Z.
2012-01-01
As organizations increasingly work in process-oriented manner, the number of business process models that they develop and have to maintain increases. As a consequence, it has become common for organizations to have collections of hundreds or even thousands of business process models. When a
Efficient querying of large process model repositories
Jin, Tao; Wang, Jianmin; La Rosa, M.; Hofstede, ter A.H.M.; Wen, Lijie
2013-01-01
Recent years have seen an increased uptake of business process management technology in industries. This has resulted in organizations trying to manage large collections of business process models. One of the challenges facing these organizations concerns the retrieval of models from large business
MODEL TESTING OF LOW PRESSURE HYDRAULIC TURBINE WITH HIGHER EFFICIENCY
Directory of Open Access Journals (Sweden)
V. K. Nedbalsky
2007-01-01
Full Text Available A design of a low-pressure turbine has been developed; it is covered by an invention patent and a useful model patent. Testing of the hydraulic turbine model has been carried out with the model installed on a vertical shaft. The efficiency was equal to 76–78 %, which exceeds the efficiency of the known low-pressure blade turbines.
Modelling household responses to energy efficiency interventions ...
African Journals Online (AJOL)
2010-11-01
... to interventions aimed at reducing energy consumption (specifically the use of ...) ... A system dynamics model of electricity consumption ... to base comparisons on overly detailed quantitative predictions of behaviour.
Maintaining formal models of living guidelines efficiently
Seyfang, Andreas; Martínez-Salvador, Begoña; Serban, Radu; Wittenberg, Jolanda; Miksch, Silvia; Marcos, Mar; Ten Teije, Annette; Rosenbrand, Kitty C J G M
2007-01-01
Translating clinical guidelines into formal models is beneficial in many ways, but expensive. The progress in medical knowledge requires clinical guidelines to be updated at relatively short intervals, leading to the term living guideline. This causes potentially expensive, frequent updates of the
AN EFFICIENT STRUCTURAL REANALYSIS MODEL FOR ...
African Journals Online (AJOL)
be required if complete and exact analysis would be carried out. This paper ... qualities even under significantly large design modifications. A numerical example has been presented to show potential capabilities of the proposed model.
Efficient Modelling Methodology for Reconfigurable Underwater Robots
DEFF Research Database (Denmark)
Nielsen, Mikkel Cornelius; Blanke, Mogens; Schjølberg, Ingrid
2016-01-01
This paper considers the challenge of applying reconfigurable robots in an underwater environment. The main result presented is the development of a model for a system comprised of N, possibly heterogeneous, robots dynamically connected to each other and moving with 6 Degrees of Freedom (DOF). Th...
Efficient Turbulence Modeling for CFD Wake Simulations
DEFF Research Database (Denmark)
van der Laan, Paul
Wind turbine wakes can cause 10-20% annual energy losses in wind farms, and wake turbulence can decrease the lifetime of wind turbine blades. One way of estimating these effects is the use of computational fluid dynamics (CFD) to simulate wind turbine wakes in the atmospheric boundary layer. Since...... this flow is in the high Reynolds number regime, it is mainly dictated by turbulence. As a result, the turbulence modeling in CFD dominates the wake characteristics, especially in Reynolds-averaged Navier-Stokes (RANS). The present work is dedicated to the study and development of RANS-based turbulence models...... verified with a grid dependency study. With respect to the standard k-ε EVM, the k-ε-fp EVM compares better with measurements of the velocity deficit, especially in the near wake, which translates to improved power deficits of the first wind turbines in a row. When the CFD methodology is applied to a large...
An Efficient Virtual Trachea Deformation Model
Directory of Open Access Journals (Sweden)
Cui Tong
2016-01-01
Full Text Available In this paper, we present a virtual tactile model with a physically based skeleton to simulate force and deformation between a rigid tool and a soft organ. When the virtual trachea is handled, a skeleton model suitable for interactive environments is established, which consists of ligament layers, cartilage rings and muscular bars. In this skeleton, the contact force goes through the ligament layer and produces load effects at the joints, which connect the ligament layer and the cartilage rings. Due to the nonlinear shape deformation inside the local neighbourhood of a contact region, the RBF method is applied to modify the result of the linear global shape deformation by adding the nonlinear effect inside. Users are able to handle the virtual trachea, and results from examples with the mechanical properties of the human trachea are given to demonstrate the effectiveness of the approach.
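The RBF correction step described above (a nonlinear residual added on top of a linear global deformation) can be sketched as plain Gaussian-RBF interpolation. The kernel width, function name, and data below are placeholders, not the paper's values:

```python
import numpy as np

def rbf_correction(centres, residuals, query, eps=1.0):
    """Gaussian RBF interpolation of residual (nonlinear) displacements.

    centres:   (n, d) sample points inside the contact region
    residuals: (n,)   residual displacement left after the linear
                      global deformation (hypothetical data)
    query:     (m, d) points where the correction is evaluated
    """
    def kernel(a, b):
        # Pairwise squared distances, then a Gaussian kernel.
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-eps * d2)

    # Solve for the RBF weights so the surface interpolates the residuals.
    weights = np.linalg.solve(kernel(centres, centres), residuals)
    return kernel(query, centres) @ weights
```

By construction the interpolant reproduces the sampled residuals exactly at the centres, which is the property that lets it patch the local nonlinearity without disturbing the global linear solution.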
Efficient Matrix Models for Relational Learning
2009-10-01
base learners and h1:r is the ensemble learner. For example, consider the case where h1, . . . , hr are linear discriminants. The weighted vote of... a multilinear form naturally leads one to consider tensor factorization: e.g., UAV^T is a special case of Tucker decomposition [129] on a 2D-tensor, a... matrix. Our five modeling choices can also be used to differentiate tensor factorizations, but the choices may be subtler for tensors than for
Exploiting partial knowledge for efficient model analysis
Macedo, Nuno; Cunha, Alcino; Pessoa, Eduardo José Dias
2017-01-01
The advancement of constraint solvers and model checkers has enabled the effective analysis of high-level formal specification languages. However, these typically handle a specification in an opaque manner, amalgamating all its constraints in a single monolithic verification task, which often proves to be a performance bottleneck. This paper addresses this issue by proposing a solving strategy that exploits user-provided partial knowledge, namely by assigning symbolic bounds to the problem’s ...
Model calibration for building energy efficiency simulation
International Nuclear Information System (INIS)
Mustafaraj, Giorgio; Marini, Dashamir; Costa, Andrea; Keane, Marcus
2014-01-01
Highlights: • Developing a 3D model relating to building architecture, occupancy and HVAC operation. • Two calibration stages developed, final model providing accurate results. • Using an onsite weather station for generating the weather data file in EnergyPlus. • Predicting thermal behaviour of underfloor heating, heat pump and natural ventilation. • Monthly energy saving opportunities related to the heat pump of 20–27% were identified. - Abstract: This research work deals with an Environmental Research Institute (ERI) building where an underfloor heating system and natural ventilation are the main systems used to maintain comfort conditions throughout 80% of the building areas. Firstly, this work involved developing a 3D model relating to building architecture, occupancy and HVAC operation. Secondly, the calibration methodology, which consists of two levels, was then applied in order to ensure accuracy and reduce the likelihood of errors. To further improve the accuracy of calibration, a historical weather data file related to year 2011 was created from the on-site local weather station of the ERI building. After applying the second level of the calibration process, the values of Mean Bias Error (MBE) and Cumulative Variation of Root Mean Squared Error (CV(RMSE)) on hourly based analysis for heat pump electricity consumption varied within the following ranges: MBE (hourly) from −5.6% to 7.5% and CV(RMSE) (hourly) from 7.3% to 25.1%. Finally, the building was simulated with EnergyPlus to identify further possibilities of energy savings supplied by a water to water heat pump to the underfloor heating system. It was found that electricity consumption savings from the heat pump can vary between 20% and 27% on a monthly basis
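The two calibration statistics quoted above have standard definitions (e.g. in ASHRAE Guideline 14): MBE is the sum of residuals normalised by the sum of measured values, and CV(RMSE) is the RMSE normalised by the measured mean. A minimal sketch, with function names of my own choosing:

```python
import math

def mbe_percent(measured, simulated):
    """Mean Bias Error (%): sum of residuals over sum of measured values."""
    resid = sum(m - s for m, s in zip(measured, simulated))
    return 100.0 * resid / sum(measured)

def cv_rmse_percent(measured, simulated):
    """Coefficient of Variation of the RMSE (%): RMSE over the measured mean."""
    n = len(measured)
    rmse = math.sqrt(sum((m - s) ** 2 for m, s in zip(measured, simulated)) / n)
    return 100.0 * rmse / (sum(measured) / n)
```

Note that a model can have near-zero MBE (positive and negative errors cancel) while still having a large CV(RMSE), which is why calibration criteria use both.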
Frontier models for evaluating environmental efficiency: an overview
Oude Lansink, A.G.J.M.; Wall, A.
2014-01-01
Our aim in this paper is to provide a succinct overview of frontier-based models used to evaluate environmental efficiency, with a special emphasis on agricultural activity. We begin by providing a brief, up-to-date review of the main approaches used to measure environmental efficiency, with
Urban eco-efficiency and system dynamics modelling
Energy Technology Data Exchange (ETDEWEB)
Hradil, P., Email: petr.hradil@vtt.fi
2012-06-15
Assessment of urban development is generally based on static models of economic, social or environmental impacts. More advanced dynamic models have been used mostly for prediction of population and employment changes as well as for other macro-economic issues. This feasibility study was arranged to test the potential of system dynamic modelling in assessing eco-efficiency changes during urban development. (orig.)
Ozonolysis of Model Olefins-Efficiency of Antiozonants
Huntink, N.M.; Datta, Rabin; Talma, Auke; Noordermeer, Jacobus W.M.
2006-01-01
In this study, the efficiency of several potential long lasting antiozonants was studied by ozonolysis of model olefins. 2-Methyl-2-pentene was selected as a model for natural rubber (NR) and 5-phenyl-2-hexene as a model for styrene butadiene rubber (SBR). A comparison was made between the
The effectiveness and efficiency of model driven game design
Dormans, Joris
2012-01-01
In order for techniques from Model Driven Engineering to be accepted at large by the game industry, it is critical that the effectiveness and efficiency of these techniques are proven for game development. There is no lack of game design models, but there is no model that has surfaced as an industry
Directory of Open Access Journals (Sweden)
Michael J. Pelosi
2014-12-01
Full Text Available Development teams and programmers must retain critical information about their work during work intervals and gaps in order to improve future performance when work resumes. Despite time lapses, project managers want to maximize coding efficiency and effectiveness. By developing a mathematically justified, practically useful, and computationally tractable quantitative and cognitive model of learning and memory retention, this study establishes calculations designed to maximize scheduling payoff and optimize developer efficiency and effectiveness.
Evaluating Energy Efficiency Policies with Energy-Economy Models
Energy Technology Data Exchange (ETDEWEB)
Mundaca, Luis; Neij, Lena; Worrell, Ernst; McNeil, Michael A.
2010-08-01
The growing complexities of energy systems, environmental problems and technology markets are driving and testing most energy-economy models to their limits. To further advance bottom-up models from a multidisciplinary energy efficiency policy evaluation perspective, we review and critically analyse bottom-up energy-economy models and corresponding evaluation studies on energy efficiency policies to induce technological change. We use the household sector as a case study. Our analysis focuses on decision frameworks for technology choice, type of evaluation being carried out, treatment of market and behavioural failures, evaluated policy instruments, and key determinants used to mimic policy instruments. Although the review confirms criticism related to energy-economy models (e.g. unrealistic representation of decision-making by consumers when choosing technologies), they provide valuable guidance for policy evaluation related to energy efficiency. Different areas to further advance models remain open, particularly related to modelling issues, techno-economic and environmental aspects, behavioural determinants, and policy considerations.
Modeling of Methods to Control Heat-Consumption Efficiency
Tsynaeva, E. A.; Tsynaeva, A. A.
2016-11-01
In this work, consideration has been given to thermophysical processes in automated heat consumption control systems (AHCCSs) of buildings, flow diagrams of these systems, and mathematical models describing the thermophysical processes during the systems' operation; an analysis of adequacy of the mathematical models has been presented. A comparison has been made of the operating efficiency of the systems and the methods to control the efficiency. It has been determined that the operating efficiency of an AHCCS depends on its diagram and the temperature chart of central quality control (CQC) and also on the temperature of a low-grade heat source for the system with a heat pump.
Structure model of energy efficiency indicators and applications
International Nuclear Information System (INIS)
Wu, Li-Ming; Chen, Bai-Sheng; Bor, Yun-Chang; Wu, Yin-Chin
2007-01-01
For the purposes of energy conservation and environmental protection, the government of Taiwan has instigated long-term policies to continuously encourage and assist industry in improving the efficiency of energy utilization. While multiple actions have led to practical energy saving to a limited extent, no strong evidence of improvement in energy efficiency was observed from the energy efficiency indicators (EEI) system, according to the annual national energy statistics and survey. A structural analysis of EEI is needed in order to understand the role that energy efficiency plays in the EEI system. This work uses the Taylor series expansion to develop a structure model for EEI at the level of the process sector of industry. The model is developed on the premise that the design parameters of the process are used in comparison with the operational parameters for energy differences. The utilization index of production capability and the variation index of energy utilization are formulated in the model to describe the differences between EEIs. Both qualitative and quantitative methods for the analysis of energy efficiency and energy savings are derived from the model. Through structural analysis, the model showed that, while the performance of EEI is proportional to the process utilization index of production capability, it is possible that energy may develop in a direction opposite to that of EEI. This helps to explain, at least in part, the inconsistency between EEI and energy savings. An energy-intensive steel plant in Taiwan was selected to show the application of the model. The energy utilization efficiency of the plant was evaluated and the amount of energy that had been saved or over-used in the production process was estimated. Some insights gained from the model outcomes are helpful to further enhance energy efficiency in the plant
Energy efficiency resource modeling in generation expansion planning
International Nuclear Information System (INIS)
Ghaderi, A.; Parsa Moghaddam, M.; Sheikh-El-Eslami, M.K.
2014-01-01
Energy efficiency plays an important role in mitigating energy security risks and emission problems. In this paper, energy efficiency resources are modeled as efficiency power plants (EPP) to evaluate their impacts on generation expansion planning (GEP). The supply curve of EPP is proposed using the production function of electricity consumption. A decision making framework is also presented to include EPP in GEP problem from an investor's point of view. The revenue of EPP investor is obtained from energy cost reduction of consumers and does not earn any income from electricity market. In each stage of GEP, a bi-level model for operation problem is suggested: the upper-level represents profit maximization of EPP investor and the lower-level corresponds to maximize the social welfare. To solve the bi-level problem, a fixed-point iteration algorithm is used known as diagonalization method. Energy efficiency feed-in tariff is investigated as a regulatory support scheme to encourage the investor. Results pertaining to a case study are simulated and discussed. - Highlights: • An economic model for energy efficiency programs is presented. • A framework is provided to model energy efficiency resources in GEP problem. • FIT is investigated as a regulatory support scheme to encourage the EPP investor. • The capacity expansion is delayed and reduced with considering EPP in GEP. • FIT-II can more effectively increase the energy saving compared to FIT-I
An efficient descriptor model for designing materials for solar cells
Alharbi, Fahhad H.; Rashkeev, Sergey N.; El-Mellouhi, Fedwa; Lüthi, Hans P.; Tabet, Nouar; Kais, Sabre
2015-11-01
An efficient descriptor model for fast screening of potential materials for solar cell applications is presented. It works for both excitonic and non-excitonic solar cell materials, and in addition to the energy gap it includes the absorption spectrum (α(E)) of the material. The charge transport properties of the explored materials are modelled using the characteristic diffusion length (Ld) determined for the respective family of compounds. The presented model surpasses the widely used Scharber model developed for bulk heterojunction solar cells. Using published experimental data, we show that the presented model is more accurate in predicting the achievable efficiencies. To model both excitonic and non-excitonic systems, two different sets of parameters are used to account for the different modes of operation. The analysis of the presented descriptor model clearly shows the benefit of including α(E) and Ld in view of improved screening results.
Neuro-fuzzy modelling of hydro unit efficiency
International Nuclear Information System (INIS)
Iliev, Atanas; Fushtikj, Vangel
2003-01-01
This paper presents a neuro-fuzzy method for modeling of the hydro unit efficiency. The proposed method uses the characteristics of fuzzy systems as universal function approximators, as well as the ability of neural networks to adapt the parameters of the membership functions and the rules in the consequent part of the developed fuzzy system. The developed method is practically applied for modeling of the efficiency of a unit which will be installed in the hydro power plant Kozjak. A comparison of the performance of the derived neuro-fuzzy method with several classical polynomial models is also performed. (Author)
Evaluation of discrete modeling efficiency of asynchronous electric machines
Byczkowska-Lipińska, Liliana; Stakhiv, Petro; Hoholyuk, Oksana; Vasylchyshyn, Ivanna
2011-01-01
In the paper the problem of effective mathematical macromodels in the form of state variables intended for asynchronous motor transient analysis is considered. They were compared with traditional mathematical models of asynchronous motors, including models built into MATLAB/Simulink software, and an analysis of their efficiency was conducted.
Evaluating energy efficiency policies with energy-economy models
Mundaca, L.; Neij, L.; Worrell, E.; McNeil, M.
2010-01-01
The growing complexities of energy systems, environmental problems, and technology markets are driving and testing most energy-economy models to their limits. To further advance bottom-up models from a multidisciplinary energy efficiency policy evaluation perspective, we review and critically
Efficient Modelling and Generation of Markov Automata (extended version)
Timmer, Mark; Katoen, Joost P.; van de Pol, Jan Cornelis; Stoelinga, Mariëlle Ida Antoinette
2012-01-01
This paper introduces a framework for the efficient modelling and generation of Markov automata. It consists of (1) the data-rich process-algebraic language MAPA, allowing concise modelling of systems with nondeterminism, probability and Markovian timing; (2) a restricted form of the language, the
Rigid-beam model of a high-efficiency magnicon
International Nuclear Information System (INIS)
Rees, D.E.; Tallerico, P.J.; Humphries, S.J. Jr.
1993-01-01
The magnicon is a new type of high-efficiency deflection-modulated amplifier developed at the Institute of Nuclear Physics in Novosibirsk, Russia. The prototype pulsed magnicon achieved an output power of 2.4 MW and an efficiency of 73% at 915 MHz. This paper presents the results of a rigid-beam model for a 700-MHz, 2.5-MW 82%-efficient magnicon. The rigid-beam model allows for characterization of the beam dynamics by tracking only a single electron. The magnicon design presented consists of a drive cavity; passive cavities; a pi-mode, coupled-deflection cavity; and an output cavity. It represents an optimized design. The model is fully self-consistent, and this paper presents the details of the model and calculated performance of a 2.5-MW magnicon
An Efficiency Model For Hydrogen Production In A Pressurized Electrolyzer
Energy Technology Data Exchange (ETDEWEB)
Smoglie, Cecilia; Lauretta, Ricardo
2010-09-15
The use of hydrogen as a clean fuel at a worldwide scale requires the development of simple, safe and efficient production and storage technologies. In this work, a methodology is proposed to produce hydrogen and oxygen in a self-pressurized electrolyzer connected to separate containers that store each of these gases. A mathematical model for hydrogen production efficiency is proposed to evaluate how such efficiency is affected by parasitic currents in the electrolytic solution. An experimental set-up and results for an electrolyzer are also presented. Comparison of empirical and analytical results shows good agreement.
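The effect of parasitic (shunt) currents on production efficiency can be sketched with Faraday's law; this toy model, including the current split, is an illustrative assumption and is far simpler than the paper's:

```python
F = 96485.0  # Faraday constant, C/mol

def production_efficiency(i_total, i_parasitic):
    """Current efficiency: the fraction of the total cell current that
    drives H2 evolution; the rest leaks through parasitic paths in the
    shared electrolyte."""
    return (i_total - i_parasitic) / i_total

def h2_rate_mol_per_s(i_total, i_parasitic):
    """Hydrogen production rate via Faraday's law, counting only the
    effective current (2 electrons per H2 molecule)."""
    return (i_total - i_parasitic) / (2.0 * F)
```

For example, a 100 A cell losing 10 A to parasitic currents has a 90% current efficiency, and the H2 rate scales down by the same factor.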
Efficient modeling of vector hysteresis using fuzzy inference systems
International Nuclear Information System (INIS)
Adly, A.A.; Abd-El-Hafiz, S.K.
2008-01-01
Vector hysteresis models have always been regarded as important tools by which multi-dimensional magnetic field-media interactions may be predicted. In the past, considerable efforts have been focused on mathematical modeling methodologies of vector hysteresis. This paper presents an efficient approach based upon fuzzy inference systems for modeling vector hysteresis. Computational efficiency of the proposed approach stems from the fact that the basic non-local memory Preisach-type hysteresis model is approximated by a local memory model. The proposed computationally low-cost methodology can be easily integrated in field calculation packages involving massive multi-dimensional discretizations. Details of the modeling methodology and its experimental testing are presented
Modeling and energy efficiency optimization of belt conveyors
International Nuclear Information System (INIS)
Zhang, Shirong; Xia, Xiaohua
2011-01-01
Highlights: → We take an optimization approach to improve the operation efficiency of belt conveyors. → An analytical energy model, originating from ISO 5048, is proposed. → Off-line and on-line parameter estimation schemes are then investigated. → In a case study, six optimization problems are formulated with solutions in simulation. - Abstract: The improvement of the energy efficiency of belt conveyor systems can be achieved at the equipment and operation levels. Specifically, variable speed control, an equipment-level intervention, is recommended to improve the operation efficiency of belt conveyors. However, current implementations mostly focus on lower-level control loops without operational considerations at the system level. This paper intends to take a model-based optimization approach to improve the efficiency of belt conveyors at the operational level. An analytical energy model, originating from ISO 5048, is first proposed, which lumps all the parameters into four coefficients. Subsequently, both off-line and on-line parameter estimation schemes are applied to identify the new energy model. Simulation results are presented for the estimates of the four coefficients. Finally, optimization is done to achieve the best operation efficiency of belt conveyors under various constraints. Six optimization problems of a typical belt conveyor system are formulated, with solutions in simulation for a case study.
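The abstract describes lumping the ISO 5048 model into four coefficients identified from operating data. As an illustration of how such coefficients can be estimated off-line by least squares, here is a generic linear-in-parameters stand-in; the regressor form and all numbers are assumptions, not the paper's exact expression:

```python
import numpy as np

# Hypothetical operating records: belt speed V (m/s), feed rate T (t/h),
# and measured drive power P (kW). Values are illustrative only.
V = np.array([2.0, 2.5, 3.0, 3.5, 4.0, 4.5])
T = np.array([500.0, 700.0, 800.0, 900.0, 1000.0, 1100.0])
P = np.array([110.0, 150.0, 175.0, 200.0, 228.0, 255.0])

# Regressors for an assumed lumped model P ≈ θ1·V + θ2·T + θ3·V·T + θ4,
# linear in the four coefficients θ1..θ4.
A = np.column_stack([V, T, V * T, np.ones_like(V)])
theta, *_ = np.linalg.lstsq(A, P, rcond=None)
P_fit = A @ theta
```

An on-line variant would update the same four coefficients recursively (e.g. recursive least squares) as new speed/feed/power samples arrive.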
Efficient family-based model checking via variability abstractions
DEFF Research Database (Denmark)
Dimovski, Aleksandar; Al-Sibahi, Ahmad Salim; Brabrand, Claus
2016-01-01
Many software systems are variational: they can be configured to meet diverse sets of requirements. They can produce a (potentially huge) number of related systems, known as products or variants, by systematically reusing common parts. For variational models (variational systems or families...... of related systems), specialized family-based model checking algorithms allow efficient verification of multiple variants, simultaneously, in a single run. These algorithms, implemented in a tool Snip, scale much better than "the brute force" approach, where all individual systems are verified using...... with the abstract model checking of the concrete high-level variational model. This allows the use of Spin with all its accumulated optimizations for efficient verification of variational models without any knowledge about variability. We have implemented the transformations in a prototype tool, and we illustrate......
Modeling Techniques for a Computational Efficient Dynamic Turbofan Engine Model
Directory of Open Access Journals (Sweden)
Rory A. Roberts
2014-01-01
Full Text Available A transient two-stream engine model has been developed. Individual component models developed exclusively in MATLAB/Simulink, including the fan, high pressure compressor, combustor, high pressure turbine, low pressure turbine, plenum volumes, and exit nozzle, have been combined to investigate the behavior of a turbofan two-stream engine. Special attention has been paid to developing transient capabilities throughout the model, enriching the physics model, eliminating algebraic constraints, and reducing simulation time through enabling the use of advanced numerical solvers. Reducing computation time is paramount for conducting future aircraft system-level design trade studies and optimization. The new engine model is simulated for a fuel perturbation and a specified mission while tracking critical parameters. These results, as well as the simulation times, are presented. The new approach significantly reduces the simulation time.
EFFICIENCY AND COST MODELLING OF THERMAL POWER PLANTS
Directory of Open Access Journals (Sweden)
Péter Bihari
2010-01-01
Full Text Available The proper characterization of energy suppliers is one of the most important components in the modelling of the supply/demand relations of the electricity market. Power generation capacity, i.e. power plants, constitutes the supply side of the relation in the electricity market. The supply of power stations develops as the power stations attempt to achieve the greatest profit possible with the given prices and other limitations. The cost of operation and the cost of load increment are thus the most important characteristics of their behaviour on the market. In most electricity market models, however, it is not taken into account that the efficiency of a power station also depends on the level of the load, on the type and age of the power plant, and on environmental considerations. The trade in electricity on the free market cannot rely on models where these essential parameters are omitted. Such an incomplete model could lead to a situation where a particular power station would be run either only at its full capacity or else be entirely deactivated depending on the prices prevailing on the free market. The reality is rather that the marginal cost of power generation might also be described by a function using the efficiency function. The derived marginal cost function gives the supply curve of the power station. The load-level-dependent efficiency function can be used not only for market modelling, but also for determining the pollutant and CO2 emissions of the power station, as well as shedding light on the conditions for successfully entering the market. Based on the measurement data our paper presents mathematical models that might be used for the determination of the load-dependent efficiency functions of coal, oil, or gas fuelled power stations (steam turbine, gas turbine, combined cycle and IC engine based combined heat and power stations). These efficiency functions could also contribute to modelling market conditions and determining the
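The marginal cost derived from a load-dependent efficiency function can be sketched numerically: fuel input is F(p) = p/η(p), and marginal cost is the fuel price times dF/dp. The efficiency curve, its 40% ceiling, and the fuel price below are invented for illustration, not taken from the paper:

```python
import numpy as np

def marginal_cost(p, eta, fuel_price):
    """Numerical marginal cost (currency/MWh) from a load-dependent
    efficiency curve eta(p): fuel input F(p) = p / eta(p),
    marginal cost = fuel_price * dF/dp."""
    F = p / eta(p)
    dFdp = np.gradient(F, p)     # central differences on the load grid
    return fuel_price * dFdp

# Assumed efficiency curve: rises with load toward a 40% maximum.
eta = lambda p: 0.40 * (1.0 - 0.5 * np.exp(-p / 100.0))
p = np.linspace(50.0, 300.0, 251)            # load in MW
mc = marginal_cost(p, eta, fuel_price=30.0)  # fuel at 30 currency/MWh
```

With efficiency improving at higher load, the marginal cost curve falls with load, which is exactly the kind of supply-curve shape a flat-efficiency model would miss.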
Efficient Business Service Consumption by Customization with Variability Modelling
Directory of Open Access Journals (Sweden)
Michael Stollberg
2010-07-01
Full Text Available The establishment of service orientation in industry determines the need for efficient engineering technologies that properly support the whole life cycle of service provision and consumption. A central challenge is adequate support for the efficient employment of complex services in their individual application context. This becomes particularly important for large-scale enterprise technologies where generic services are designed for reuse in several business scenarios. In this article we complement our work regarding Service Variability Modelling presented in a previous publication. There we presented an approach for the customization of services for individual application contexts by creating simplified variants, based on model-driven variability management. This article presents our revised service variability metamodel, new features of the variability tools and an applicability study, which reveals that substantial improvements on the efficiency of standard business service consumption under both usability and economic aspects can be achieved.
Energetics and efficiency of a molecular motor model
International Nuclear Information System (INIS)
Fogedby, Hans C; Svane, Axel
2013-01-01
The energetics and efficiency of a linear molecular motor model proposed by Mogilner et al. are analyzed from an analytical point of view. The model, which is based on protein friction with a track, is described by coupled Langevin equations for the motion in combination with coupled master equations for the ATP hydrolysis. Here the energetics and efficiency of the motor are addressed using a many-body scheme with focus on the efficiency at maximum power (EMP). It is found that the EMP is reduced from about 10% in a heuristic description of the motor to about 1 per mille when incorporating the full motor dynamics, owing to the strong dissipation associated with the motor action. (paper)
Efficient Adoption and Assessment of Multiple Process Improvement Reference Models
Directory of Open Access Journals (Sweden)
Simona Jeners
2013-06-01
Full Text Available A variety of reference models such as CMMI, COBIT or ITIL support IT organizations to improve their processes. These process improvement reference models (IRMs) cover different domains such as IT development, IT services or IT governance but also share some similarities. As there are organizations that address multiple domains and need to coordinate their processes in their improvement, we present MoSaIC, an approach to support organizations to efficiently adopt and conform to multiple IRMs. Our solution realizes a semantic integration of IRMs based on common meta-models. The resulting IRM integration model enables organizations to efficiently implement and assess multiple IRMs and to benefit from synergy effects.
Efficient Use of Preisach Hysteresis Model in Computer Aided Design
Directory of Open Access Journals (Sweden)
IONITA, V.
2013-05-01
Full Text Available The paper presents a practical detailed analysis regarding the use of the classical Preisach hysteresis model, covering all the steps, from measuring the necessary data for the model identification to the implementation in a software code for Computer Aided Design (CAD in Electrical Engineering. An efficient numerical method is proposed and the hysteresis modeling accuracy is tested on magnetic recording materials. The procedure includes the correction of the experimental data, which are used for the hysteresis model identification, taking into account the demagnetizing effect for the sample that is measured in an open-circuit device (a vibrating sample magnetometer.
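For orientation, the classical scalar Preisach model the paper builds on can be sketched as a superposition of relay hysterons (switch-up threshold α ≥ switch-down threshold β), with the output taken as the mean relay state. The uniform triangular hysteron grid and all numbers below are illustrative assumptions, not the paper's identified distribution:

```python
import numpy as np

def preisach(field_history, alphas, betas, states=None):
    """Minimal scalar classical Preisach model: each hysteron is a relay
    that switches to +1 when the field reaches alpha and to -1 when it
    falls to beta; the output is the mean relay state in [-1, 1]."""
    if states is None:
        states = -np.ones_like(alphas)   # start from negative saturation
    outputs = []
    for h in field_history:
        states = np.where(h >= alphas, 1.0, states)
        states = np.where(h <= betas, -1.0, states)
        outputs.append(states.mean())
    return np.array(outputs), states

# Uniform grid of hysterons on the Preisach half-plane alpha >= beta.
a, b = np.meshgrid(np.linspace(-1, 1, 41), np.linspace(-1, 1, 41))
mask = a >= b
alphas, betas = a[mask], b[mask]

# Ascending branch (from negative saturation) and descending branch.
up, _ = preisach(np.linspace(-1.2, 1.2, 50), alphas, betas)
down, _ = preisach(np.linspace(1.2, -1.2, 50), alphas, betas,
                   states=np.ones_like(alphas))
```

The two sweeps trace the major hysteresis loop: at the same field value the ascending branch lies below the descending one, which is the non-local memory effect the model captures.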
Operator-based linearization for efficient modeling of geothermal processes
Khait, M.; Voskov, D.V.
2018-01-01
Numerical simulation is one of the most important tools required for financial and operational management of geothermal reservoirs. The modern geothermal industry is challenged to run large ensembles of numerical models for uncertainty analysis, causing simulation performance to become a critical issue. Geothermal reservoir modeling requires the solution of governing equations describing the conservation of mass and energy. The robust, accurate and computationally efficient implementation of ...
Energetics and efficiency of a molecular motor model
DEFF Research Database (Denmark)
C. Fogedby, Hans; Svane, Axel
2013-01-01
The energetics and efficiency of a linear molecular motor model proposed by Mogilner et al. (Phys. Lett. 237, 297 (1998)) are analyzed from an analytical point of view. The model, which is based on protein friction with a track, is described by coupled Langevin equations for the motion in combination...... when incorporating the full motor dynamics, owing to the strong dissipation associated with the motor action....
Efficient Parallel Statistical Model Checking of Biochemical Networks
Directory of Open Access Journals (Sweden)
Paolo Ballarini
2009-12-01
Full Text Available We consider the problem of verifying stochastic models of biochemical networks against behavioral properties expressed in temporal logic terms. Exact probabilistic verification approaches, such as CSL/PCTL model checking, are undermined by a huge computational demand, which rules them out for most real case studies. Less demanding approaches, such as statistical model checking, estimate the likelihood that a property is satisfied by sampling executions from the stochastic model. We propose a methodology for efficiently estimating the likelihood that an LTL property P holds for a stochastic model of a biochemical network. As with other statistical verification techniques, the proposed methodology uses a stochastic simulation algorithm for generating execution samples; however, three key aspects improve its efficiency. First, sample generation is driven by on-the-fly verification of P, which results in optimal overall simulation time. Second, the confidence interval estimation for the probability that P holds is based on an efficient variant of the Wilson method, which ensures faster convergence. Third, the whole methodology is designed in a parallel fashion, and a prototype software tool has been implemented that performs the sampling/verification process in parallel on an HPC architecture.
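The Wilson interval mentioned in the abstract has a simple closed form. A minimal sketch in Python (illustrative only: the sample counts are made up, and the paper uses an efficient variant of the method rather than this textbook form):

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion.
    In statistical model checking, `successes` would count the sampled
    executions that satisfy the property and `n` the total samples."""
    if n == 0:
        return (0.0, 1.0)
    p_hat = successes / n
    denom = 1.0 + z * z / n
    center = (p_hat + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z * z / (4 * n * n))
    return (center - half, center + half)

# e.g. 820 of 1000 sampled executions satisfy P
lo, hi = wilson_interval(820, 1000)
```

Unlike the plain normal-approximation interval, the Wilson interval stays inside [0, 1] and behaves well for probabilities near 0 or 1, which is why it converges faster in this setting.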
Crop modelling and water use efficiency of protected cucumber
International Nuclear Information System (INIS)
El Moujabber, M.; Atallah, Th.; Darwish, T.
2002-01-01
Crop modelling is considered an essential planning tool. The automation of irrigation scheduling using crop models would contribute to optimising the water and fertiliser use of protected crops. To achieve this purpose, two experiments were carried out. The first aimed at determining water requirements and irrigation scheduling using climatic data. The second established the influence of irrigation interval and fertigation regime on water use efficiency. The results gave a simple model for determining the water requirements of protected cucumber from climatic data: ETc = K * Ep, where K and Ep are calculated using climatic data outside the greenhouse. As for water use efficiency, the second experiment highlighted that high-frequency, continuous feeding is highly recommended for maximising yield. (author)
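The model ETc = K * Ep is a one-line calculation. A sketch in Python (the coefficient and evaporation values below are illustrative, not values from the study):

```python
def cucumber_etc(ep_mm_per_day, k):
    """Crop evapotranspiration ETc = K * Ep for protected cucumber.
    Ep is the evaporation derived from climatic data measured outside
    the greenhouse; K is an empirically fitted crop coefficient.
    The numbers used below are illustrative, not from the study."""
    return k * ep_mm_per_day

# e.g. Ep = 5.0 mm/day and K = 0.8 give ETc = 4.0 mm/day
etc = cucumber_etc(5.0, 0.8)
```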
An efficient energy response model for liquid scintillator detectors
Lebanowski, Logan; Wan, Linyan; Ji, Xiangpan; Wang, Zhe; Chen, Shaomin
2018-05-01
Liquid scintillator detectors are playing an increasingly important role in low-energy neutrino experiments. In this article, we describe a generic energy response model of liquid scintillator detectors that provides energy estimations of sub-percent accuracy. This model fits a minimal set of physically-motivated parameters that capture the essential characteristics of scintillator response and that can naturally account for changes in scintillator over time, helping to avoid associated biases or systematic uncertainties. The model employs a one-step calculation and look-up tables, yielding an immediate estimation of energy and an efficient framework for quantifying systematic uncertainties and correlations.
Building Information Model: advantages, tools and adoption efficiency
Abakumov, R. G.; Naumov, A. E.
2018-03-01
The paper expands on the definition and essence of Building Information Modeling. It describes the content and effects of applying Information Modeling at different stages of a real property item. An analysis of long-term and short-term advantages is given. The authors include an analytical review of the Revit software package in comparison with Autodesk with respect to features, advantages and disadvantages, cost and payback cutoff. A prognostic calculation is given for the efficiency of adopting the Building Information Modeling technology, with examples of its successful adoption in Russia and worldwide.
AN EFFICIENT PATIENT INFLOW PREDICTION MODEL FOR HOSPITAL RESOURCE MANAGEMENT
Directory of Open Access Journals (Sweden)
Kottalanka Srikanth
2017-07-01
Full Text Available There has been increasing demand for improved service provisioning in hospital resource management. The hospital industry works under strict budget constraints while at the same time assuring quality care. To achieve quality care under a budget constraint, an efficient prediction model is required. Recently, various time-series-based prediction models have been proposed to manage hospital resources such as ambulance monitoring, emergency care and so on. These models are not efficient, as they do not consider the nature of the scenario, such as climatic conditions. To address this, artificial intelligence is adopted. The issue with existing prediction models is that training suffers from local optima error, which induces overhead and affects prediction accuracy. To overcome the local minima error, this work presents a patient inflow prediction model based on a resilient backpropagation neural network. Experiments are conducted to evaluate the performance of the proposed model in terms of RMSE and MAPE. The outcome shows that the proposed model reduces RMSE and MAPE compared with an existing backpropagation-based artificial neural network. Overall, the proposed prediction model improves prediction accuracy, which aids in improving the quality of health care management.
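Resilient backpropagation (Rprop) mitigates the local-plateau problem by using only the sign of the gradient together with a per-parameter step size. A minimal one-parameter sketch in Python (the update constants are the commonly quoted defaults; this is not the paper's implementation):

```python
import math

def rprop_step(grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_max=50.0, step_min=1e-6):
    """Adapt one parameter's step size from the sign agreement of the
    current and previous gradients (Rprop rule): grow the step while
    the sign is stable, shrink it after a sign flip."""
    if grad * prev_grad > 0:
        step = min(step * eta_plus, step_max)
    elif grad * prev_grad < 0:
        step = max(step * eta_minus, step_min)
    return step

# Demo: minimize f(w) = (w - 3)^2 using only gradient signs.
w, step, prev_g = 0.0, 0.1, 0.0
for _ in range(100):
    g = 2.0 * (w - 3.0)
    step = rprop_step(g, prev_g, step)
    if g != 0.0:
        w -= math.copysign(step, g)
    prev_g = g
```

Because only the gradient sign is used, the update is insensitive to the vanishing or exploding gradient magnitudes that make plain backpropagation stall in flat regions.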
An Efficient Dynamic Trust Evaluation Model for Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Zhengwang Ye
2017-01-01
Full Text Available Trust evaluation is an effective method to detect malicious nodes and ensure security in wireless sensor networks (WSNs). In this paper, an efficient dynamic trust evaluation model (DTEM) for WSNs is proposed, which implements accurate, efficient, and dynamic trust evaluation by dynamically adjusting the weights of direct and indirect trust and the parameters of the update mechanism. To achieve accurate trust evaluation, the direct trust is calculated considering multitrust, including communication trust, data trust, and energy trust, with a punishment factor and regulating function. The indirect trust is evaluated conditionally using trusted recommendations from a third party. Moreover, the integrated trust is measured by assigning dynamic weights to direct and indirect trust and combining them. Finally, we propose an update mechanism based on a sliding window with an induced ordered weighted averaging operator to enhance flexibility. The parameters and the number of interactive history windows can be adapted dynamically to the actual needs of the network, realizing dynamic updates of the direct trust value. Simulation results indicate that the proposed model is an efficient, dynamic, and attack-resistant trust evaluation model. Compared with existing approaches, it performs better in defending against multiple malicious attacks.
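The integrated-trust step described above is essentially a convex combination with a dynamic weight. A minimal sketch in Python (the weighting rule shown, using the share of first-hand evidence as the weight, is an illustrative stand-in for the model's actual weight adjustment):

```python
def integrated_trust(direct, indirect, w_direct):
    """Combine direct and indirect trust values (each in [0, 1]) with a
    dynamic weight w_direct in [0, 1]; the weight would be raised when
    plenty of first-hand interaction history is available and lowered
    when the node must rely on third-party recommendations."""
    assert 0.0 <= w_direct <= 1.0
    return w_direct * direct + (1.0 - w_direct) * indirect

# e.g. strong first-hand evidence (w = 0.75)
t = integrated_trust(0.9, 0.6, 0.75)  # -> 0.825
```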
Increased Statistical Efficiency in a Lognormal Mean Model
Directory of Open Access Journals (Sweden)
Grant H. Skrepnek
2014-01-01
Full Text Available Within the context of clinical and other scientific research, a substantial need exists for an accurate determination of the point estimate in a lognormal mean model, given that highly skewed data are often present. As such, logarithmic transformations are often advocated to achieve the assumptions of parametric statistical inference. Despite this, existing approaches that utilize only a sample's mean and variance may not necessarily yield the most efficient estimator. The current investigation developed and tested an improved efficient point estimator for a lognormal mean by capturing more complete information via the sample's coefficient of variation. Results of an empirical simulation study across varying sample sizes and population standard deviations indicated relative improvements in efficiency of up to 129.47 percent compared to the usual maximum likelihood estimator and up to 21.33 absolute percentage points above the efficient estimator presented by Shen and colleagues (2006). The relative efficiency of the proposed estimator increased particularly as a function of decreasing sample size and increasing population standard deviation.
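For reference, the "usual" maximum likelihood estimator that the study improves on can be sketched in a few lines of Python; the CV-based estimator itself is not reproduced here, and the simulated sample below is illustrative:

```python
import math
import random

def lognormal_mean_mle(data):
    """Maximum-likelihood estimator of a lognormal mean,
    exp(mu_hat + sigma2_hat / 2), with mu_hat and sigma2_hat the MLEs
    of the mean and variance of the log-transformed data."""
    logs = [math.log(x) for x in data]
    n = len(logs)
    mu = sum(logs) / n
    sigma2 = sum((v - mu) ** 2 for v in logs) / n  # MLE uses n, not n - 1
    return math.exp(mu + sigma2 / 2.0)

random.seed(0)
sample = [random.lognormvariate(0.0, 0.5) for _ in range(5000)]
est = lognormal_mean_mle(sample)  # true mean is exp(0.125), about 1.133
```

The naive arithmetic mean of the raw data is unbiased but inefficient for skewed lognormal data, which is why transformation-based estimators, and refinements such as the CV-based one proposed here, are preferred.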
FIRE BEHAVIOR PREDICTING MODELS EFFICIENCY IN BRAZILIAN COMMERCIAL EUCALYPT PLANTATIONS
Directory of Open Access Journals (Sweden)
Benjamin Leonardo Alves White
2016-12-01
Full Text Available Knowing how a wildfire will behave is extremely important for assisting fire suppression and prevention operations. Since the 1940s, mathematical models to estimate fire behavior have been developed worldwide; however, until now, none of them had their efficiency tested in Brazilian commercial eucalypt plantations or in other vegetation types in the country. This study verifies the accuracy of the Rothermel (1972) fire spread model, the Byram (1959) flame length model, and the fire spread and length equations derived from the McArthur (1962) control burn meters. To meet these objectives, 105 experimental laboratory fires were conducted and their results compared with the values predicted by the models tested. The Rothermel and Byram models predicted better than McArthur's; nevertheless, all of them underestimated the fire behavior aspects evaluated and were statistically different from the experimental data.
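Of the models tested, Byram's flame length relation is the most compact. A sketch in Python (the standard SI form of Byram (1959); the intensity value is illustrative):

```python
def byram_flame_length(intensity_kw_per_m):
    """Byram (1959) flame length model, L = 0.0775 * I**0.46,
    with I the fireline intensity in kW/m and L the flame length in m."""
    return 0.0775 * intensity_kw_per_m ** 0.46

# e.g. a fireline intensity of 500 kW/m
flame_m = byram_flame_length(500.0)
```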
Efficient Neural Network Modeling for Flight and Space Dynamics Simulation
Directory of Open Access Journals (Sweden)
Ayman Hamdy Kassem
2011-01-01
Full Text Available This paper presents an efficient technique for neural network modeling of flight and space dynamics simulation. The technique frees the neural network designer from guessing the size and structure of the required neural network model and helps to minimize the number of neurons. For linear flight/space dynamics systems, the technique can find the network weights and biases directly by solving a system of linear equations, without the need for training. Nonlinear flight dynamic systems can be easily modeled by training their linearized models while keeping the same network structure. The training is fast, as it uses knowledge of the linear system to speed up the training process. The technique was tested on different flight/space dynamic models and showed promising results.
Radially dependent photopeak efficiency model for Si(Li) detectors
Energy Technology Data Exchange (ETDEWEB)
Cohen, D D [Australian Inst. of Nuclear Science and Engineering, Lucas Heights
1980-12-15
A simple five-parameter model for the efficiency of a Si(Li) detector has been developed. It was found necessary to include a radially dependent efficiency even for small detectors. The model is an extension of the pioneering work of Hansen et al., but the correction factors include more up-to-date data and explicit equations for the mass attenuation coefficients over a wide range of photon energies. Four of the five parameters needed are generally supplied by most commercial manufacturers of Si(Li) detectors. ⁵⁴Mn and ²⁴¹Am sources have been used to calibrate a Si(Li) detector to approximately ±3% over the energy range 3-60 keV.
[Experimental evaluation of the spraying disinfection efficiency on dental models].
Zhang, Yi; Fu, Yuan-fei; Xu, Kan
2013-08-01
To evaluate the disinfection effect of spraying a new kind of disinfectant on dental plaster models, germ-free plaster samples smeared with a bacterial compound including Staphylococcus aureus, Escherichia coli, Saccharomyces albicans, Streptococcus mutans and Actinomyces viscosus were sprayed with disinfectant (CaviCide) or glutaraldehyde. In one group (5 minutes later) and another group (15 minutes later), colonies were counted for statistical analysis after sampling, inoculating and culturing, which were used to evaluate disinfection efficiency. ANOVA was performed using the SPSS 12.0 software package. All sample bacteria were eradicated within 5 minutes of spraying the disinfectant (CaviCide), and effective bacteria control was retained after 15 minutes. There was a significant difference between the disinfection efficiency of CaviCide and that of glutaraldehyde. Disinfection of dental models by spraying disinfectant (CaviCide) is quick and effective.
Efficient image duplicated region detection model using sequential block clustering
Czech Academy of Sciences Publication Activity Database
Sekeh, M. A.; Maarof, M. A.; Rohani, M. F.; Mahdian, Babak
2013-01-01
Roč. 10, č. 1 (2013), s. 73-84 ISSN 1742-2876 Institutional support: RVO:67985556 Keywords : Image forensic * Copy–paste forgery * Local block matching Subject RIV: IN - Informatics, Computer Science Impact factor: 0.986, year: 2013 http://library.utia.cas.cz/separaty/2013/ZOI/mahdian-efficient image duplicated region detection model using sequential block clustering.pdf
Efficient Stochastic Inversion Using Adjoint Models and Kernel-PCA
Energy Technology Data Exchange (ETDEWEB)
Thimmisetty, Charanraj A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing; Zhao, Wenju [Florida State Univ., Tallahassee, FL (United States). Dept. of Scientific Computing; Chen, Xiao [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing; Tong, Charles H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing; White, Joshua A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Atmospheric, Earth and Energy Division
2017-10-18
Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the ‘nonlinear’ mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, which is characterized by a tight coupling between the gradient-based Langevin Markov Chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). This approach addresses the ‘curse-of-dimensionality’ via KPCA to identify a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We will demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model.
Policy modeling for energy efficiency improvement in US industry
International Nuclear Information System (INIS)
Worrell, Ernst; Price, Lynn; Ruth, Michael
2001-01-01
We are at the beginning of a process of evaluating and modeling the contribution of policies to improved energy efficiency. Three recent policy studies attempting to assess the impact of energy efficiency policies in the United States are reviewed. The studies represent an important step in the analysis of climate change mitigation strategies. All studies model the estimated policy impact rather than the policy itself. Often the policy impacts are based on assumptions, as the effects of a policy are not certain. Most models incorporate only economic (or price) tools, which recent studies have shown to be insufficient to estimate the impacts, costs and benefits of mitigation strategies. The reviewed studies are a first effort to capture the effects of non-price policies. The studies contribute to a better understanding of the role of policies in improving energy efficiency and mitigating climate change. All policy scenarios result in substantial energy savings compared to the baseline scenario used, as well as substantial net benefits to the U.S. economy.
Energy Technology Data Exchange (ETDEWEB)
Hampf, Benjamin
2011-08-15
In this paper we present a new approach to evaluate the environmental efficiency of decision making units. We propose a model that describes a two-stage process consisting of a production and an end-of-pipe abatement stage with the environmental efficiency being determined by the efficiency of both stages. Taking the dependencies between the two stages into account, we show how nonparametric methods can be used to measure environmental efficiency and to decompose it into production and abatement efficiency. For an empirical illustration we apply our model to an analysis of U.S. power plants.
Towards an efficient multiphysics model for nuclear reactor dynamics
Directory of Open Access Journals (Sweden)
Obaidurrahman K.
2015-01-01
Full Text Available The availability of fast computer resources nowadays has facilitated more in-depth modeling of complex engineering systems involving strong multiphysics interactions. Such multiphysics modeling is an important necessity in nuclear reactor safety studies, where efforts are being made worldwide to combine the knowledge from all associated disciplines in one place to accomplish the most realistic simulation of the phenomena involved. Along these lines, coupled modeling of nuclear reactor neutron kinetics, fuel heat transfer and coolant transport is now regular practice for transient analysis of the reactor core. However, optimizing between modeling accuracy and computational economy has always been a challenging task in ensuring an adequate degree of reliability in such extensive numerical exercises. Complex reactor core modeling involves estimation of the evolving 3-D core thermal state, which in turn demands an expensive multichannel-based detailed core thermal hydraulics model. A novel approach of power-weighted coupling between core neutronics and thermal hydraulics, presented in this work, aims to reduce the bulk of core thermal calculations in core dynamics modeling to a significant extent without compromising computational accuracy. The coupled core model has been validated against a series of international benchmarks. The accuracy and computational efficiency of the proposed multiphysics model are demonstrated by analyzing a reactivity-initiated transient.
Hybrid Building Performance Simulation Models for Industrial Energy Efficiency Applications
Directory of Open Access Journals (Sweden)
Peter Smolek
2018-06-01
Full Text Available In the challenge of achieving environmental sustainability, industrial production plants, as large contributors to the overall energy demand of a country, are prime candidates for applying energy efficiency measures. A modelling approach using cubes is used to decompose a production facility into manageable modules. All aspects of the facility are considered, classified into the building, energy system, production and logistics. This approach leads to specific challenges for building performance simulations since all parts of the facility are highly interconnected. To meet this challenge, models for the building, thermal zones, energy converters and energy grids are presented and the interfaces to the production and logistics equipment are illustrated. The advantages and limitations of the chosen approach are discussed. In an example implementation, the feasibility of the approach and models is shown. Different scenarios are simulated to highlight the models and the results are compared.
Efficient estimation of feedback effects with application to climate models
International Nuclear Information System (INIS)
Cacugi, D.G.; Hall, M.C.G.
1984-01-01
This work presents an efficient method for calculating the sensitivity of a mathematical model's result to feedback. Feedback is defined in terms of an operator acting on the model's dependent variables. The sensitivity to feedback is defined as a functional derivative, and a method is presented to evaluate this derivative using adjoint functions. Typically, this method allows the individual effect of many different feedbacks to be estimated with a total additional computing time comparable to only one recalculation. The effects on a CO₂-doubling experiment of actually incorporating surface albedo and water vapor feedbacks in a radiative-convective model are compared with sensitivities calculated using adjoint functions. These sensitivities predict the actual effects of feedback with at least the correct sign and order of magnitude. It is anticipated that this method of estimating the effect of feedback will be useful for more complex models, where extensive recalculations for each of a variety of different feedbacks are impractical
Business Models, transparency and efficient stock price formation
DEFF Research Database (Denmark)
Nielsen, Christian; Vali, Edward; Hvidberg, Rene
has an impact on a company's price formation. In this respect, we analysed whether those companies that publish a lot of information that may support a business model description tend to have a more efficient price formation. Next, we turned to our sample of companies, and via interview-based case...... studies, we managed to draw conclusions on how to construct a comprehensible business model description. The business model explains how the company intends to compete in its market, and thus it gives an account of the characteristics that make the company unique. The business model constitutes...... the platform from which the company prepares and unfolds its strategy. In order to explain this platform and its particular qualities to external interested parties, the description must provide a clear and explicit account of the main determinants of the company's value creation and explain how...
Efficient model learning methods for actor-critic control.
Grondman, Ivo; Vaandrager, Maarten; Buşoniu, Lucian; Babuska, Robert; Schuitema, Erik
2012-06-01
We propose two new actor-critic algorithms for reinforcement learning. Both algorithms use local linear regression (LLR) to learn approximations of the functions involved. A crucial feature of the algorithms is that they also learn a process model, and this, in combination with LLR, provides an efficient policy update for faster learning. The first algorithm uses a novel model-based update rule for the actor parameters. The second algorithm does not use an explicit actor but learns a reference model which represents a desired behavior, from which desired control actions can be calculated using the inverse of the learned process model. The two novel methods and a standard actor-critic algorithm are applied to the pendulum swing-up problem, in which the novel methods achieve faster learning than the standard algorithm.
Modeling of hybrid vehicle fuel economy and fuel engine efficiency
Wu, Wei
"Near-CV" (i.e., near-conventional vehicle) hybrid vehicles, with an internal combustion engine, and a supplementary storage with low-weight, low-energy but high-power capacity, are analyzed. This design avoids the shortcoming of the "near-EV" and the "dual-mode" hybrid vehicles that need a large energy storage system (in terms of energy capacity and weight). The small storage is used to optimize engine energy management and can provide power when needed. The energy advantage of the "near-CV" design is to reduce reliance on the engine at low power, to enable regenerative braking, and to provide good performance with a small engine. The fuel consumption of internal combustion engines, which might be applied to hybrid vehicles, is analyzed by building simple analytical models that reflect the engines' energy loss characteristics. Both diesel and gasoline engines are modeled. The simple analytical models describe engine fuel consumption at any speed and load point by describing the engine's indicated efficiency and friction. The engine's indicated efficiency and heat loss are described in terms of several easy-to-obtain engine parameters, e.g., compression ratio, displacement, bore and stroke. Engine friction is described in terms of parameters obtained by fitting available fuel measurements on several diesel and spark-ignition engines. The engine models developed are shown to conform closely to experimental fuel consumption and motored friction data. A model of the energy use of "near-CV" hybrid vehicles with different storage mechanism is created, based on simple algebraic description of the components. With powertrain downsizing and hybridization, a "near-CV" hybrid vehicle can obtain a factor of approximately two in overall fuel efficiency (mpg) improvement, without considering reductions in the vehicle load.
Computationally efficient model predictive control algorithms a neural network approach
Ławryńczuk, Maciej
2014-01-01
This book thoroughly discusses computationally efficient (suboptimal) Model Predictive Control (MPC) techniques based on neural models. The subjects treated include: · A few types of suboptimal MPC algorithms in which a linear approximation of the model or of the predicted trajectory is successively calculated on-line and used for prediction. · Implementation details of the MPC algorithms for feedforward perceptron neural models, neural Hammerstein models, neural Wiener models and state-space neural models. · The MPC algorithms based on neural multi-models (inspired by the idea of predictive control). · The MPC algorithms with neural approximation with no on-line linearization. · The MPC algorithms with guaranteed stability and robustness. · Cooperation between the MPC algorithms and set-point optimization. Thanks to linearization (or neural approximation), the presented suboptimal algorithms do not require d...
Efficient solvers for coupled models in respiratory mechanics.
Verdugo, Francesc; Roth, Christian J; Yoshihara, Lena; Wall, Wolfgang A
2017-02-01
We present efficient preconditioners for one of the most physiologically relevant pulmonary models currently available. Our underlying motivation is to enable the efficient simulation of such a lung model on high-performance computing platforms, in order to assess mechanical ventilation strategies and contribute to the design of more protective patient-specific ventilation treatments. The system of linear equations to be solved using the proposed preconditioners is essentially the monolithic system arising in fluid-structure interaction (FSI), extended by additional algebraic constraints. The introduction of these constraints leads to a saddle point problem that cannot be solved with the usual FSI preconditioners available in the literature. The key ingredient in this work is to use the idea of the semi-implicit method for pressure-linked equations (SIMPLE) to get rid of the saddle point structure, resulting in a standard FSI problem that can be treated with available techniques. The numerical examples show that the resulting preconditioners approach the optimal performance of multigrid methods, even though the lung model is a complex multiphysics problem. Moreover, the preconditioners are robust enough to deal with physiologically relevant simulations involving complex real-world patient-specific lung geometries. The same approach is applicable to other challenging biomedical applications where coupling between flow and tissue deformations is modeled with additional algebraic constraints. Copyright © 2016 John Wiley & Sons, Ltd.
Development of multicriteria models to classify energy efficiency alternatives
International Nuclear Information System (INIS)
Neves, Luis Pires; Antunes, Carlos Henggeler; Dias, Luis Candido; Martins, Antonio Gomes
2005-01-01
This paper describes a novel constructive approach to developing decision support models to classify energy efficiency initiatives, including traditional Demand-Side Management and Market Transformation initiatives, overcoming the limitations and drawbacks of Cost-Benefit Analysis. A multicriteria approach based on the ELECTRE-TRI method is used, focusing on four perspectives: an independent Agency with the aim of promoting energy efficiency; distribution-only utilities under a regulated framework; the Regulator; and supply companies in a competitive liberalized market. These perspectives were chosen after a systems analysis of the decision situation regarding the implementation of energy efficiency initiatives, looking at the main roles and power relations, with the purpose of structuring the decision problem by identifying the actors, the decision makers, the decision paradigm, and the relevant criteria. The multicriteria models developed allow different kinds of impacts to be considered while avoiding difficult measurements and unit conversions, owing to the nature of the multicriteria method chosen. The decision is then based on all the significant effects of the initiative, both positive and negative, including ancillary effects often forgotten in cost-benefit analysis. ELECTRE-TRI, like most multicriteria methods, gives the decision maker the ability to control the relevance each impact has on the final decision. The decision support process encompasses a robustness analysis which, together with good documentation of the parameters supplied to the model, should support sound decisions. The models were tested with a set of real-world initiatives and compared with possible decisions based on Cost-Benefit Analysis
FASTSim: A Model to Estimate Vehicle Efficiency, Cost and Performance
Energy Technology Data Exchange (ETDEWEB)
Brooker, A.; Gonder, J.; Wang, L.; Wood, E.; Lopp, S.; Ramroth, L.
2015-05-04
The Future Automotive Systems Technology Simulator (FASTSim) is a high-level advanced vehicle powertrain systems analysis tool supported by the U.S. Department of Energy's Vehicle Technologies Office. FASTSim provides a quick and simple approach to compare powertrains and estimate the impact of technology improvements on light- and heavy-duty vehicle efficiency, performance, cost, and battery life over batches of real-world drive cycles. FASTSim's calculation framework and balance among detail, accuracy, and speed enable it to simulate thousands of driven miles in minutes. The key components and vehicle outputs have been validated by comparing the model outputs to test data for many different vehicles to provide confidence in the results. A graphical user interface makes FASTSim easy and efficient to use. FASTSim is freely available for download from the National Renewable Energy Laboratory's website (see www.nrel.gov/fastsim).
Mathematical modeling of efficient protocols to control glioma growth.
Branco, J R; Ferreira, J A; de Oliveira, Paula
2014-09-01
In this paper we propose a mathematical model to describe the evolution of glioma cells taking into account the viscoelastic properties of brain tissue. The mathematical model is established considering that the glioma cells are of two phenotypes: migratory and proliferative. The evolution of the migratory cells is described by a diffusion-reaction equation of non-Fickian type, derived from a mass conservation law with a non-Fickian migratory mass flux. The evolution of the proliferative cells is described by a reaction equation. A stability analysis that leads to the design of efficient protocols is presented. Numerical simulations that illustrate the behavior of the mathematical model are included.
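A minimal numerical sketch of the two-phenotype structure can be written as a 1D explicit finite-difference scheme. All rates below are hypothetical, the diffusion term is classical Fickian (the paper's model uses a non-Fickian, viscoelastic flux), and the migratory-to-proliferative switching term is an illustrative stand-in for the paper's reaction terms.

```python
import numpy as np

# Two-phenotype glioma sketch: migratory cells m diffuse and convert into
# proliferative cells p, which grow in place. Hypothetical parameters.
D, beta = 0.01, 0.05          # diffusion coefficient, proliferation rate
dx, dt, nx, steps = 0.1, 0.01, 101, 500

m = np.zeros(nx); m[nx // 2] = 1.0   # migratory cells: initial point seed
p = np.zeros(nx)                      # proliferative cells: initially absent

for _ in range(steps):
    lap = (np.roll(m, 1) - 2 * m + np.roll(m, -1)) / dx**2
    lap[0] = lap[-1] = 0.0            # crude no-flux-style treatment of the ends
    switch = 0.01 * m                 # hypothetical migratory -> proliferative switching
    m = m + dt * (D * lap - switch)
    p = p + dt * (beta * p + switch)  # growth plus influx from the migratory pool

total = m.sum() + p.sum()             # total tumour burden grows via proliferation
```

The stability constraint of the explicit scheme (D·dt/dx² ≪ 1/2) is comfortably satisfied by these values; a stability analysis of the full non-Fickian model is exactly what the paper uses to design treatment protocols.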
Models for electricity market efficiency and bidding strategy analysis
Niu, Hui
This dissertation studies models for the analysis of market efficiency and bidding behaviors of market participants in electricity markets. Simulation models are developed to estimate how transmission and operational constraints affect the competitive benchmark and market prices based on submitted bids. This research contributes to the literature in three aspects. First, transmission and operational constraints, which have been neglected in most empirical literature, are considered in the competitive benchmark estimation model. Second, the effects of operational and transmission constraints on market prices are estimated through two models based on the submitted bids of market participants. Third, these models are applied to analyze the efficiency of the Electric Reliability Council of Texas (ERCOT) real-time energy market by simulating its operations for the time period from January 2002 to April 2003. The characteristics and available information for the ERCOT market are considered. In electricity markets, electric firms compete through both spot market bidding and bilateral contract trading. A linear asymmetric supply function equilibrium (SFE) model with transmission constraints is proposed in this dissertation to analyze bidding strategies with forward contracts. The research contributes to the literature in several aspects. First, we combine forward contracts, transmission constraints, and multi-period strategy (an obligation for firms to bid consistently over an extended time horizon such as a day or an hour) into the linear asymmetric supply function equilibrium framework. As an ex-ante model, it can provide qualitative insights into firms' behaviors. Second, the bidding strategies related to Transmission Congestion Rights (TCRs) are discussed by interpreting TCRs as a linear combination of forwards. Third, the model is a general one in the sense that there is no limitation on the number of firms or the scale of the transmission network, which can have
Tool Efficiency Analysis model research in SEMI industry
Directory of Open Access Journals (Sweden)
Lei Ma
2018-01-01
Full Text Available One of the key goals in the SEMI industry is to improve equipment throughput and ensure that equipment production efficiency is maximized. This paper is based on SEMI standards for semiconductor equipment control; it defines the transition rules between different tool states and presents a TEA (Tool Efficiency Analysis) system model that analyses tool performance automatically based on a finite state machine. The system was applied to fab tools and its effectiveness successfully verified, yielding the parameter values used to measure equipment performance as well as suggestions for improvement.
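The finite-state-machine core of such a tool-efficiency analyser can be sketched briefly. The state names and allowed transitions below are an illustrative subset in the spirit of SEMI E10 equipment states, not the standard's full set, and the utilization metric is one simple example of the performance parameters the paper derives.

```python
# Minimal tool-state finite state machine: track time per state, reject
# transitions that the (hypothetical) transition rules do not allow.
ALLOWED = {
    "STANDBY": {"PRODUCTIVE", "SCHEDULED_DOWN", "UNSCHEDULED_DOWN"},
    "PRODUCTIVE": {"STANDBY", "UNSCHEDULED_DOWN"},
    "SCHEDULED_DOWN": {"STANDBY"},
    "UNSCHEDULED_DOWN": {"STANDBY"},
}

class ToolFSM:
    def __init__(self):
        self.state = "STANDBY"
        self.clock = 0.0
        self.time_in = {s: 0.0 for s in ALLOWED}

    def advance(self, new_state, duration):
        """Accumulate `duration` hours in the current state, then transition."""
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.time_in[self.state] += duration
        self.clock += duration
        self.state = new_state

    def utilization(self):
        """Fraction of tracked time spent productive -- one simple efficiency metric."""
        return self.time_in["PRODUCTIVE"] / self.clock if self.clock else 0.0

fsm = ToolFSM()
fsm.advance("PRODUCTIVE", 2.0)        # 2 h standby, then a lot starts
fsm.advance("STANDBY", 6.0)           # 6 h productive
fsm.advance("UNSCHEDULED_DOWN", 2.0)  # 2 h more standby, then a fault
fsm.advance("STANDBY", 2.0)           # 2 h repair
```

Feeding equipment event logs through such a machine yields per-state time totals from which throughput and efficiency figures can be computed automatically.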
An Efficient Null Model for Conformational Fluctuations in Proteins
DEFF Research Database (Denmark)
Harder, Tim Philipp; Borg, Mikael; Bottaro, Sandro
2012-01-01
Protein dynamics play a crucial role in function, catalytic activity, and pathogenesis. Consequently, there is great interest in computational methods that probe the conformational fluctuations of a protein. However, molecular dynamics simulations are computationally costly and therefore are often...... limited to comparatively short timescales. TYPHON is a probabilistic method to explore the conformational space of proteins under the guidance of a sophisticated probabilistic model of local structure and a given set of restraints that represent nonlocal interactions, such as hydrogen bonds or disulfide...... on conformational fluctuations that is in correspondence with experimental measurements. TYPHON provides a flexible, yet computationally efficient, method to explore possible conformational fluctuations in proteins....
Thermal Efficiency Degradation Diagnosis Method Using Regression Model
International Nuclear Information System (INIS)
Jee, Chang Hyun; Heo, Gyun Young; Jang, Seok Won; Lee, In Cheol
2011-01-01
This paper proposes an idea for thermal efficiency degradation diagnosis in turbine cycles, based on turbine cycle simulation under abnormal conditions and a linear regression model. The correlation between the inputs representing degradation conditions (normally unmeasured but intrinsic states) and the simulation outputs (normally measured but superficial states) was analyzed with the linear regression model. The regression models can then inversely infer the intrinsic state associated with a superficial state observed in a power plant. The diagnosis method proposed herein comprises three processes: 1) simulations for degradation conditions to get measured states (referred to as the what-if method), 2) development of the linear model correlating intrinsic and superficial states, and 3) determination of an intrinsic state using the superficial states of the current plant and the linear regression model (referred to as the inverse what-if method). The what-if method generates the outputs for inputs covering various root causes and/or boundary conditions, whereas the inverse what-if method calculates, via the inverse of the regression matrix, the component degradation modes corresponding to the given superficial states. The method suggested in this paper was validated using the turbine cycle model for an operating power plant
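The two-way use of the regression model can be sketched with made-up numbers: forward "what-if" simulation runs map intrinsic degradation states x to measured superficial states y, a linear model y ≈ A·x is fitted, and the "inverse what-if" step recovers x from an observed y with the pseudo-inverse of A. The matrix below is hypothetical and the data are noise-free for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

A_true = np.array([[1.5, 0.2],
                   [0.3, 2.0],
                   [0.8, 0.5]])              # hypothetical sensitivities: 3 measured
                                             # states vs 2 degradation modes

X = rng.uniform(0.0, 1.0, size=(50, 2))      # what-if: sampled degradation conditions
Y = X @ A_true.T                             # simulated superficial (measured) states

C, *_ = np.linalg.lstsq(X, Y, rcond=None)    # linear regression of y on x
A_fit = C.T                                  # recovered sensitivity matrix

y_obs = np.array([0.3, 0.7]) @ A_true.T      # a "plant" observation
x_est = np.linalg.pinv(A_fit) @ y_obs        # inverse what-if: infer degradation state
```

With more measured states than degradation modes, the pseudo-inverse gives the least-squares estimate of the intrinsic state, which is the role the paper assigns to the inverse what-if step.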
Efficient dynamic modeling of manipulators containing closed kinematic loops
Ferretti, Gianni; Rocco, Paolo
An approach to efficiently solve the forward dynamics problem for manipulators containing closed chains is proposed. The two main distinctive features of this approach are: the dynamics of the equivalent open-loop tree structures (any closed loop can in general be modeled by imposing additional kinematic constraints on a suitable tree structure) is computed through an efficient Newton-Euler formulation; and the constraint equations relative to the most commonly adopted closed chains in industrial manipulators are explicitly solved, thus overcoming the redundancy of Lagrange's multipliers method while avoiding the inefficiency due to a numerical solution of the implicit constraint equations. The constraint equations considered for an explicit solution are those imposed by articulated gear mechanisms and planar closed chains (pantograph-type structures). Articulated gear mechanisms are used in all industrial robots to transmit motion from actuators to links, while planar closed chains are usefully employed to increase the stiffness of manipulators and their load capacity, as well as to reduce the kinematic coupling of joint axes. The accuracy and efficiency of the proposed approach are shown through a simulation test.
Contractual Efficiency of PPP Infrastructure Projects: An Incomplete Contract Model
Directory of Open Access Journals (Sweden)
Lei Shi
2018-01-01
Full Text Available This study analyses the contractual efficiency of public-private partnership (PPP) infrastructure projects, with a focus on two financial aspects: the nonrecourse principal and the incompleteness of debt contracts. The nonrecourse principal releases the sponsoring companies from the debt contract when the special purpose vehicle (SPV) established by the sponsoring companies falls into default. Consequently, all obligations under the debt contract are limited to the liability of the SPV following its default. Because the debt contract is incomplete, a renegotiation of an additional loan between the bank and the SPV might occur to enable project continuation or liquidation, which in turn influences the SPV’s ex ante strategies (moral hazard). Considering these two financial features of PPP infrastructure projects, this study develops an incomplete contract model to investigate how the renegotiation triggers ex ante moral hazard and ex post inefficient liquidation. We derive equilibrium strategies under service fees endogenously determined via bidding and examine the effect of equilibrium strategies on contractual efficiency. Finally, we propose an optimal combination of a performance guarantee, the government’s termination right, and a service fee to improve the contractual efficiency of PPP infrastructure projects.
Modelling and analysis of solar cell efficiency distributions
Wasmer, Sven; Greulich, Johannes
2017-08-01
We present an approach to model the distribution of solar cell efficiencies achieved in production lines based on numerical simulations, metamodeling and Monte Carlo simulations. We validate our methodology using the example of an industrially feasible p-type multicrystalline silicon “passivated emitter and rear cell” process. Applying the metamodel, we investigate the impact of each input parameter on the distribution of cell efficiencies in a variance-based sensitivity analysis, identifying the parameters and processes that need to be improved and controlled most accurately. We show that if these could be optimized, the mean cell efficiency of our examined cell process would increase from 17.62% ± 0.41% to 18.48% ± 0.09%. As the method relies on advanced characterization and simulation techniques, we furthermore introduce a simplification that enhances applicability by requiring only two common measurements of finished cells. The presented approaches can be especially helpful for ramping up production, but can also be applied to enhance established manufacturing.
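The metamodel-plus-Monte-Carlo workflow can be sketched with a made-up response surface: sample the process parameter scatter of a production line, push it through a cheap metamodel of cell efficiency, and estimate both the efficiency distribution and a crude variance-based sensitivity. The metamodel coefficients, parameter scatter, and the freeze-one-input sensitivity measure below are all illustrative simplifications (proper Sobol indices would be used in practice).

```python
import numpy as np

rng = np.random.default_rng(42)

def metamodel(x1, x2):
    """Hypothetical quadratic metamodel: efficiency in % vs two normalized process parameters."""
    return 18.0 + 0.6 * x1 + 0.3 * x2 - 0.4 * x1**2

n = 100_000
x1 = rng.normal(0.0, 0.5, n)     # production-line scatter of parameter 1
x2 = rng.normal(0.0, 0.5, n)     # production-line scatter of parameter 2
eta = metamodel(x1, x2)          # Monte Carlo sample of the efficiency distribution

mean, std = eta.mean(), eta.std()

# Crude sensitivity: freeze x2 at its mean and see how much variance x1 alone drives.
var_total = eta.var()
share_x1 = metamodel(x1, 0.0).var() / var_total
```

Here parameter 1 dominates the spread, so it is the one to "improve and control most accurately" in the sense of the abstract; tightening its scatter narrows the simulated efficiency distribution.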
Wang, Hui
2014-01-01
This thesis addresses the efficiency improvement of seismic wave modeling and migration in anisotropic media. This improvement becomes crucial in practice as the process of imaging complex geological structures of the Earth's subsurface requires
Efficiency Of Different Teaching Models In Teaching Of Frisbee Ultimate
Directory of Open Access Journals (Sweden)
Žuffová Zuzana
2015-05-01
Full Text Available The aim of the study was to verify the efficiency of two frisbee ultimate teaching models at 8-year grammar schools relative to age. In the experimental group a game-based model (Teaching Games for Understanding) was used, and in the control group the traditional model based on teaching techniques. Six groups of female students took part in the experiment: experimental group 1 (n=10, age 11.6), experimental group 2 (n=12, age 13.8), experimental group 3 (n=14, age 15.8), control group 1 (n=11, age 11.7), control group 2 (n=10, age 13.8) and control group 3 (n=9, age 15.8). Efficiency of the teaching models was evaluated based on game performance and special knowledge results. Game performance was evaluated with the GPAI (Game Performance Assessment Instrument) method using video recordings. To verify the level of knowledge, we used a knowledge test consisting of questions related to the rules and tactics of frisbee ultimate. For statistical evaluation the Mann-Whitney U-test was used. Game performance assessment and knowledge level indicated higher efficiency of TGfU in general, though mostly statistically insignificant. Experimental groups 1 and 2 were significantly better in the indicator that evaluates the tactical aspect of game performance - decision making (p<0.05). Experimental group 3 was better in the indicator that evaluates skill execution - disc catching. The results showed that the students of the classes taught by the game-based model reached partially better game performance in general. Experimental groups achieved from 79.17% to 80% of correct answers relating to the rules and from 75% to 87.5% of correct answers relating to tactical knowledge in the knowledge test. Control groups achieved from 57.69% to 72.22% of correct answers relating to the rules and from 51.92% to 72.22% of correct answers relating to tactical knowledge in the knowledge test.
A model for efficient management of electrical assets
International Nuclear Information System (INIS)
Alonso Guerreiro, A.
2008-01-01
At the same time that energy demand grows faster than investments in electrical installations, older capacity is reaching the end of its useful life. Running all this capacity without interruptions and maintaining its assets efficiently are the two current key points for power generation, transmission and distribution systems. This paper presents a management model which makes effective asset management possible under strict cost control, includes those key points, is centred on predictive techniques, involves all departments of the organization, and goes beyond treating maintenance as a simple repair or substitution of broken-down units. A model with three basic lines therefore becomes necessary: supply guarantee, quality of service and competitiveness, in order to allow companies to meet the current demands which characterize power supply. (Author) 5 refs
Nanotoxicity modelling and removal efficiencies of ZnONP.
Fikirdeşici Ergen, Şeyda; Üçüncü Tunca, Esra
2018-01-02
The aim of this paper is to investigate the toxic effect of zinc oxide nanoparticles (ZnONPs) and to analyze the removal of ZnONPs in aqueous medium by a consortium consisting of Daphnia magna and Lemna minor. Three separate test groups are formed: L. minor ([Formula: see text]), D. magna ([Formula: see text]), and L. minor + D. magna ([Formula: see text]), and all these test groups are exposed to three different nanoparticle concentrations ([Formula: see text]). Time-dependent, concentration-dependent, and group-dependent removal efficiencies are statistically compared by the non-parametric Mann-Whitney U test and statistically significant differences are observed. The optimum removal values are observed at the highest concentration [Formula: see text] for [Formula: see text], [Formula: see text] for [Formula: see text] and [Formula: see text] for [Formula: see text] and realized at [Formula: see text] for all test groups [Formula: see text]. There are no statistically significant differences in removal at low concentrations [Formula: see text] in terms of groups, but [Formula: see text] test groups are more efficient than [Formula: see text] test groups in removal of ZnONPs at the [Formula: see text] concentration. Regression analysis is also performed for all prediction models. Different models are tested and cubic models show the highest coefficients of determination (R²). In the toxicity models, R² values are obtained in the (0.892, 0.997) interval. A simple solution-phase method is used to synthesize the ZnO nanoparticles, and Dynamic Light Scattering and X-Ray Diffraction (XRD) are used to determine the particle size of the synthesized ZnO nanoparticles.
Efficient Vaccine Distribution Based on a Hybrid Compartmental Model.
Directory of Open Access Journals (Sweden)
Zhiwen Yu
Full Text Available To effectively and efficiently reduce the morbidity and mortality that may be caused by outbreaks of emerging infectious diseases, it is very important for public health agencies to make informed decisions for controlling the spread of the disease. Such decisions must incorporate various kinds of intervention strategies, such as vaccinations, school closures and border restrictions. Recently, researchers have paid increased attention to searching for effective vaccine distribution strategies for reducing the effects of pandemic outbreaks when resources are limited. Most of the existing research work has been focused on how to design an effective age-structured epidemic model and to select a suitable vaccine distribution strategy to prevent the propagation of an infectious virus. Models that evaluate age structure effects are common, but models that additionally evaluate geographical effects are less common. In this paper, we propose a new SEIR (susceptible-exposed-infectious-recovered) model, named the hybrid SEIR-V model (HSEIR-V), which considers not only the dynamics of infection prevalence in several age-specific host populations, but also seeks to characterize the dynamics by which a virus spreads in various geographic districts. Several vaccination strategies such as different kinds of vaccine coverage, different vaccine releasing times and different vaccine deployment methods are incorporated into the HSEIR-V compartmental model. We also design four hybrid vaccination distribution strategies (based on population size, contact pattern matrix, infection rate and infectious risk) for controlling the spread of viral infections. Based on data from the 2009-2010 H1N1 influenza epidemic, we evaluate the effectiveness of our proposed HSEIR-V model and study the effects of different types of human behaviour in responding to epidemics.
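The compartmental backbone that the HSEIR-V model extends with age groups and geographic districts can be sketched as a single-population SEIR model with a constant vaccination rate. All rates below are hypothetical, and forward Euler is used for brevity.

```python
# Single-population SEIR with vaccination (normalized population):
# S -> E at rate beta*S*I, E -> I at rate sigma, I -> R at rate gamma,
# S -> V at vaccination rate nu. Hypothetical parameters (R0 = beta/gamma = 4).
beta, sigma, gamma, nu = 0.4, 0.2, 0.1, 0.01
S, E, I, R, V = 0.99, 0.0, 0.01, 0.0, 0.0
dt, days = 0.1, 200

for _ in range(int(days / dt)):
    new_inf = beta * S * I
    dS = -new_inf - nu * S
    dE = new_inf - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    dV = nu * S
    S += dt * dS; E += dt * dE; I += dt * dI; R += dt * dR; V += dt * dV

total = S + E + I + R + V   # compartment flows conserve the population
```

Raising `nu` or releasing vaccine earlier moves people from S to V before the epidemic peak, which is exactly the lever the paper's vaccine coverage, release time and deployment strategies manipulate, per age group and district.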
An efficient and simplified model for forecasting using SRM
International Nuclear Information System (INIS)
Asif, H.M.; Hyat, M.F.; Ahmad, T.
2014-01-01
Learning from continuous financial systems plays a vital role in enterprise operations. One of the most sophisticated non-parametric supervised learning classifiers, SVM (Support Vector Machines), provides robust and accurate results; however, it may require intense computation and other resources. SRM (Structural Risk Minimization), the principle at the heart of SLT (Statistical Learning Theory), can also be used for model selection. In this paper, we focus on comparing the performance of model estimation using SRM with SVR (Support Vector Regression) for forecasting the retail sales of consumer products. The potential benefits of an accurate sales forecasting technique in businesses are immense. Retail sales forecasting is an integral part of strategic business planning in areas such as sales planning, marketing research, pricing, production planning and scheduling. Performance comparison of support vector regression with model selection using SRM shows results comparable to SVR but in a computationally efficient manner. This research targeted real-life data to draw conclusions after investigating computer-generated datasets for different types of model building. (author)
Electrodynamical Model of Quasi-Efficient Financial Markets
Ilinski, Kirill N.; Stepanenko, Alexander S.
The modelling of financial markets presents a problem which is both theoretically challenging and practically important. The theoretical aspects concern the issue of market efficiency which may even have political implications [1], whilst the practical side of the problem has clear relevance to portfolio management [2] and derivative pricing [3]. Up till now all market models contain "smart money" traders and "noise" traders whose joint activity constitutes the market [4, 5]. On a short time scale this traditional separation does not seem to be realistic, and is hardly acceptable since all high-frequency market participants are professional traders and cannot be separated into "smart" and "noisy." In this paper we present a "microscopic" model with homogeneous quasi-rational behaviour of traders, aiming to describe short time market behaviour. To construct the model we use an analogy between "screening" in quantum electrodynamics and an equilibration process in a market with temporal mispricing [6, 7]. As a result, we obtain the time-dependent distribution function of the returns which is in quantitative agreement with real market data and obeys the anomalous scaling relations recently reported for high-frequency exchange rates [8], the S&P500 [9] and other stock market indices [10, 11].
An Efficient and Simplified Model for Forecasting using SRM
Directory of Open Access Journals (Sweden)
Hafiz Muhammad Shahzad Asif
2014-01-01
Full Text Available Learning from continuous financial systems plays a vital role in enterprise operations. One of the most sophisticated non-parametric supervised learning classifiers, SVM (Support Vector Machines), provides robust and accurate results; however, it may require intense computation and other resources. SRM (Structural Risk Minimization), the principle at the heart of SLT (Statistical Learning Theory), can also be used for model selection. In this paper, we focus on comparing the performance of model estimation using SRM with SVR (Support Vector Regression) for forecasting the retail sales of consumer products. The potential benefits of an accurate sales forecasting technique in businesses are immense. Retail sales forecasting is an integral part of strategic business planning in areas such as sales planning, marketing research, pricing, production planning and scheduling. Performance comparison of support vector regression with model selection using SRM shows results comparable to SVR but in a computationally efficient manner. This research targeted real-life data to draw conclusions after investigating computer-generated datasets for different types of model building
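The structural-risk idea behind SRM-based model selection can be sketched on toy sales data: among trend models of increasing capacity, pick the one minimizing empirical risk plus a complexity penalty that grows with the number of parameters. The data, the polynomial model family and the linear penalty below are illustrative stand-ins for the paper's actual bound, not its formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(48, dtype=float)                          # 48 months of "sales" history
tn = t / t.max()                                        # normalized time, for conditioning
sales = 100.0 + 2.0 * t + 5.0 * rng.standard_normal(48) # linear trend + noise

def structural_risk(degree, lam=5.0):
    """Empirical risk (MSE of a degree-d polynomial fit) plus a crude capacity penalty."""
    coef = np.polyfit(tn, sales, degree)
    resid = sales - np.polyval(coef, tn)
    return float(np.mean(resid**2)) + lam * (degree + 1)

best = min(range(6), key=structural_risk)               # nested model structure S0 ⊂ ... ⊂ S5
```

The nested sequence of polynomial classes mirrors SRM's "structure"; the penalized minimum lands on the low-capacity model that actually generated the trend, rather than the degree-5 fit with the smallest training error.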
Modeling adaptation of carbon use efficiency in microbial communities
Directory of Open Access Journals (Sweden)
Steven D Allison
2014-10-01
Full Text Available In new microbial-biogeochemical models, microbial carbon use efficiency (CUE) is often assumed to decline with increasing temperature. Under this assumption, soil carbon losses under warming are small because microbial biomass declines. Yet there is also empirical evidence that CUE may adapt (i.e. become less sensitive) to warming, thereby mitigating negative effects on microbial biomass. To analyze potential mechanisms of CUE adaptation, I used two theoretical models to implement a tradeoff between microbial uptake rate and CUE. This rate-yield tradeoff is based on thermodynamic principles and suggests that microbes with greater investment in resource acquisition should have lower CUE. Microbial communities or individuals could adapt to warming by reducing investment in enzymes and uptake machinery. Consistent with this idea, a simple analytical model predicted that adaptation can offset 50% of the warming-induced decline in CUE. To assess the ecosystem implications of the rate-yield tradeoff, I quantified CUE adaptation in a spatially-structured simulation model with 100 microbial taxa and 12 soil carbon substrates. This model predicted much lower CUE adaptation, likely due to additional physiological and ecological constraints on microbes. In particular, specific resource acquisition traits are needed to maintain stoichiometric balance, and taxa with high CUE and low enzyme investment rely on low-yield, high-enzyme neighbors to catalyze substrate degradation. In contrast to published microbial models, simulations with greater CUE adaptation also showed greater carbon storage under warming. This pattern occurred because microbial communities with stronger CUE adaptation produced fewer degradative enzymes, despite increases in biomass. Thus the rate-yield tradeoff prevents CUE adaptation from driving ecosystem carbon loss under climate warming.
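The qualitative point can be illustrated with a toy steady-state biomass model. All parameters are hypothetical, and the 50% offset is imposed by assumption (the adapted slope is simply halved) to mirror the abstract's analytical result, not derived from it.

```python
# Toy CUE-warming sketch: biomass steady state of dB/dt = CUE(T) * uptake - turnover * B.
# If CUE declines with temperature, warming shrinks biomass; a community that adapts
# (rate-yield tradeoff: less enzyme/uptake investment, higher yield) offsets part of it.
def cue(temp, adapted, cue_ref=0.31, slope=0.016, t_ref=15.0):
    """CUE falls linearly with warming; adaptation halves the effective slope (assumed)."""
    s = slope * (0.5 if adapted else 1.0)
    return cue_ref - s * (temp - t_ref)

def steady_biomass(temp, adapted, uptake=10.0, turnover=1.0):
    """Steady state B* = CUE * uptake / turnover."""
    return cue(temp, adapted) * uptake / turnover

b_cold = steady_biomass(15.0, adapted=False)        # reference climate
b_warm = steady_biomass(20.0, adapted=False)        # +5 C, no adaptation
b_warm_adapted = steady_biomass(20.0, adapted=True) # +5 C, adapted community
```

Under these assumptions the adapted community recovers exactly half of the warming-induced biomass decline; the paper's spatially-structured simulation predicts a much smaller recovery once stoichiometric and community constraints are added.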
Integer Representations towards Efficient Counting in the Bit Probe Model
DEFF Research Database (Denmark)
Brodal, Gerth Stølting; Greve, Mark; Pandey, Vineet
2011-01-01
Abstract We consider the problem of representing numbers in close to optimal space and supporting increment, decrement, addition and subtraction operations efficiently. We study the problem in the bit probe model and analyse the number of bits read and written to perform the operations, both...... in the worst-case and in the average-case. A counter is space-optimal if it represents any number in the range [0,...,2^n − 1] using exactly n bits. We provide a space-optimal counter which supports increment and decrement operations by reading at most n − 1 bits and writing at most 3 bits in the worst-case....... To the best of our knowledge, this is the first such representation which supports these operations by always reading strictly less than n bits. For redundant counters, where we only need to represent numbers in the range [0,...,L] for some integer L using n bits, we define the efficiency......
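The bit-probe trade-off the abstract addresses can be illustrated with two textbook counters (background illustration only, not the paper's construction): a standard binary counter may have to write all n bits on a single increment (carry ripple), whereas a binary-reflected Gray code counter writes exactly one bit per increment, though deciding which bit still requires reading many bits. The paper's counters go further by bounding the bits read below n.

```python
# Compare worst-case bits *written* per increment for an n-bit binary counter
# versus a binary-reflected Gray code counter.
def to_gray(i):
    """Binary-reflected Gray code of i."""
    return i ^ (i >> 1)

def bits_changed(a, b):
    """Hamming distance between two representations."""
    return bin(a ^ b).count("1")

n = 8
worst_binary = max(bits_changed(i, i + 1) for i in range(2**n - 1))
worst_gray = max(bits_changed(to_gray(i), to_gray(i + 1)) for i in range(2**n - 1))
```

For n = 8, the binary counter's worst increment (127 → 128) flips all 8 bits, while every Gray code increment flips exactly one: writes are cheap, and the remaining question, central to the paper, is how few bits must be read.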
Building an Efficient Model for Afterburn Energy Release
Energy Technology Data Exchange (ETDEWEB)
Alves, S; Kuhl, A; Najjar, F; Tringe, J; McMichael, L; Glascoe, L
2012-02-03
Many explosives will release additional energy after detonation as the detonation products mix with the ambient environment. This additional energy release, referred to as afterburn, is due to combustion of undetonated fuel with ambient oxygen. While the detonation energy release occurs on a time scale of microseconds, the afterburn energy release occurs on a time scale of milliseconds with a potentially varying energy release rate depending upon the local temperature and pressure. This afterburn energy release is not accounted for in typical equations of state, such as the Jones-Wilkins-Lee (JWL) model, used for modeling the detonation of explosives. Here we construct a straightforward and efficient approach, based on experiments and theory, to account for this additional energy release in a way that is tractable for large finite element fluid-structure problems. Barometric calorimeter experiments have been executed in both nitrogen and air environments to investigate the characteristics of afterburn for C-4 and other materials. These tests, which provide pressure time histories, along with theoretical and analytical solutions provide an engineering basis for modeling afterburn with numerical hydrocodes. It is toward this end that we have constructed a modified JWL equation of state to account for afterburn effects on the response of structures to blast. The modified equation of state includes a two phase afterburn energy release to represent variations in the energy release rate and an afterburn energy cutoff to account for partial reaction of the undetonated fuel.
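The idea of augmenting a JWL equation of state with a time-dependent afterburn energy term can be sketched as follows. The JWL constants are commonly quoted literature values for C-4 (treat them as assumptions), and the two-phase exponential release law is a hypothetical stand-in for the paper's experimentally calibrated model.

```python
import math

A, B = 609.77, 12.95            # GPa (commonly quoted C-4 JWL coefficients)
R1, R2, omega = 4.50, 1.40, 0.25

def jwl_pressure(V, E):
    """Standard JWL pressure; V = relative volume, E = internal energy per unit volume (GPa)."""
    return (A * (1 - omega / (R1 * V)) * math.exp(-R1 * V)
            + B * (1 - omega / (R2 * V)) * math.exp(-R2 * V)
            + omega * E / V)

def afterburn_energy(t, E_ab=10.0, tau1=1e-4, tau2=1e-2):
    """Hypothetical two-phase release: a fast and a slow exponential phase, splitting
    a total afterburn energy E_ab -- a stand-in for the calibrated release law."""
    fast = 0.5 * E_ab * (1 - math.exp(-t / tau1))
    slow = 0.5 * E_ab * (1 - math.exp(-t / tau2))
    return fast + slow

def pressure_with_afterburn(V, E0, t):
    """Modified JWL: detonation energy E0 plus the afterburn energy released by time t."""
    return jwl_pressure(V, E0 + afterburn_energy(t))
```

Because the extra energy enters only through the ωE/V term, the modification stays tractable inside a hydrocode time step, which is the engineering point of the paper's approach.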
Efficiency assessment models of higher education institution staff activity
Directory of Open Access Journals (Sweden)
K. A. Dyusekeyev
2016-01-01
Full Text Available The paper substantiates the necessity of improving the university staff incentive system under conditions of competition in the field of higher education, and of developing a separate model for evaluating the effectiveness of department heads. The authors analysed methods for assessing the production function of units and show the advantage of applying methods for assessing the effectiveness of boundary economic structures to the field of higher education. The choice of the data envelopment analysis (DEA) method to solve the problem is justified. A model for evaluating university departments' activity on the basis of the DEA methodology has been developed. On the basis of staff pay systems operating in universities in Russia, Kazakhstan and other countries, the structure of a criteria system for evaluating university staff activity has been designed. To clarify and specify the departments' activity efficiency criteria, a strategic map has been developed that allowed us to determine the input and output parameters of the model. The DEA methodology takes a large number of input and output parameters into account, increases the objectivity of the assessment by excluding experts, and provides interim data to identify the strengths and weaknesses of the evaluated object.
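A degenerate but instructive DEA sketch: with a single input and a single output, the CCR efficiency of each decision-making unit (here, a department) reduces to its output/input ratio normalized by the best observed ratio. The real model in the paper handles many inputs and outputs via linear programming; the departments and numbers below are hypothetical.

```python
# One-input/one-output DEA: efficiency = (output/input) / max(output/input).
departments = {
    "Math":    {"staff": 20, "publications": 60},
    "Physics": {"staff": 25, "publications": 50},
    "CS":      {"staff": 15, "publications": 60},
}

ratios = {d: v["publications"] / v["staff"] for d, v in departments.items()}
best = max(ratios.values())
efficiency = {d: r / best for d, r in ratios.items()}   # 1.0 = on the efficient frontier
```

Units on the frontier score 1.0 and the rest are scored relative to it, with no expert weighting of criteria needed, which is the objectivity argument the abstract makes for DEA.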
Efficient transfer of sensitivity information in multi-component models
International Nuclear Information System (INIS)
Abdel-Khalik, Hany S.; Rabiti, Cristian
2011-01-01
In support of adjoint-based sensitivity analysis, this manuscript presents a new method to efficiently transfer adjoint information between components in a multi-component model, where the output of one component is passed as input to the next component. Often, one is interested in evaluating the sensitivities of the responses calculated by the last component to the inputs of the first component in the overall model. The presented method has two advantages over existing methods, which may be classified into two broad categories: brute-force-type methods and amalgamated-type methods. First, the presented method determines the minimum number of adjoint evaluations for each component, as opposed to the brute-force-type methods, which require full evaluation of all sensitivities for all responses calculated by each component in the overall model and prove computationally prohibitive for realistic problems. Second, the new method treats each component as a black box, as opposed to amalgamated-type methods, which require explicit knowledge of the system of equations associated with each component in order to reach the minimum number of adjoint evaluations. (author)
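The adjoint bookkeeping for two chained components y = f₂(f₁(x)) can be sketched with made-up Jacobians: the sensitivity of a response r = w·y to the first component's inputs is obtained by pulling w back through the components in reverse, one adjoint (Jacobian-transpose) product per component, with each component treated as a black box. This illustrates only the chain-rule structure, not the paper's procedure for minimizing the number of adjoint evaluations.

```python
import numpy as np

J1 = np.array([[2.0, 0.0, 1.0],
               [0.0, 3.0, 0.0]])   # component 1 Jacobian: R^3 -> R^2 (hypothetical)
J2 = np.array([[1.0, 4.0],
               [0.5, 0.0],
               [0.0, 2.0]])        # component 2 Jacobian: R^2 -> R^3 (hypothetical)

w = np.array([1.0, 0.0, 1.0])      # response weights on the final outputs

adj = J1.T @ (J2.T @ w)            # reverse (adjoint) sweep: dr/dx, two small products
fwd = (J2 @ J1).T @ w              # brute-force check: form the full chained Jacobian
```

The adjoint sweep never forms J₂·J₁; for many inputs and few responses this is the cost structure that makes adjoint transfer between components attractive.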
An efficient method for model refinement in diffuse optical tomography
Zirak, A. R.; Khademi, M.
2007-11-01
Diffuse optical tomography (DOT) is a non-linear, ill-posed boundary-value and optimization problem which necessitates regularization. Bayesian methods are also suitable, because the measurement data are sparse and correlated. In such problems, which are solved with iterative methods, the solution space must be kept small for stabilization and better convergence. These constraints lead to an extensive, overdetermined system of equations, so model-retrieval criteria, especially total least squares (TLS), must be used to refine the model error. However, TLS is limited to linear systems, which is not achievable when applying traditional Bayesian methods. This paper presents an efficient method for model refinement using regularized total least squares (RTLS) to treat the linearized DOT problem, with a maximum a posteriori (MAP) estimator and a Tikhonov regularizer. This is done by combining Bayesian and regularization tools as preconditioner matrices, applying them to the equations and then applying RTLS to the resulting linear equations. The preconditioning matrices are guided by patient-specific information as well as a priori knowledge gained from the training set. Simulation results illustrate that the proposed method improves image reconstruction performance and localizes the abnormality well.
Efficient modeling for pulsed activation in inertial fusion energy reactors
International Nuclear Information System (INIS)
Sanz, J.; Yuste, P.; Reyes, S.; Latkowski, J.F.
2000-01-01
First structural wall (FSW) materials in inertial fusion energy (IFE) power reactors will be irradiated at typical repetition rates of 1-10 Hz, for an operation time as long as the total reactor lifetime. The main objective of the present work is to determine whether a continuous-pulsed (CP) approach can be an efficient method for modeling the pulsed activation process under the operating conditions of FSW materials. The accuracy and practicability of this method were investigated both analytically and (for reaction/decay chains of two and three nuclides) by computational simulation. It was found that CP modeling is an accurate and practical method for calculating the neutron activation of FSW materials. Its use is recommended instead of the equivalent steady-state method or exact pulsed modeling. Moreover, the applicability of this method to components of an IFE power plant subject to repetition rates lower than those of the FSW is still being studied. The analytical investigation was performed for 0.05 Hz, which could be typical for the coolant. Conclusions seem to be similar to those obtained for the FSW. However, further work is needed for a final answer
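The single-nuclide case gives a feel for why the CP approximation works when the decay constant is small compared with the repetition rate: integrate dN/dt = P(t) − λN once with the exact square-wave pulse train and once with the pulse-averaged (continuous) source, and compare. The production rate, decay constant and duty cycle below are hypothetical; the paper treats full multi-nuclide chains.

```python
# Exact pulsed irradiation vs. continuous-pulsed (CP) approximation for dN/dt = P(t) - lam*N.
lam = 0.05                   # decay constant (1/s), much smaller than the pulse rate
rep_rate, duty = 5.0, 0.1    # 5 Hz pulses with a 10% duty cycle
P_peak = 1.0 / duty          # peak production chosen so the time-average source is 1.0
T, dt = 100.0, 1e-3          # irradiation time and Euler step

def simulate(pulsed):
    N, t = 0.0, 0.0
    while t < T:
        phase = (t * rep_rate) % 1.0
        P = (P_peak if phase < duty else 0.0) if pulsed else 1.0
        N += dt * (P - lam * N)
        t += dt
    return N

N_pulsed = simulate(True)
N_cp = simulate(False)                      # CP: replace pulses by their average
rel_err = abs(N_pulsed - N_cp) / N_cp
```

Because λ/rep_rate is tiny here, the nuclide inventory barely decays between pulses and the CP result tracks the exact pulsed one to within a small ripple, the regime in which the paper recommends CP modeling over exact pulse-by-pulse calculation.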
Efficient algorithms for multiscale modeling in porous media
Wheeler, Mary F.; Wildey, Tim; Xue, Guangri
2010-01-01
We describe multiscale mortar mixed finite element discretizations for second-order elliptic and nonlinear parabolic equations modeling Darcy flow in porous media. The continuity of flux is imposed via a mortar finite element space on a coarse grid scale, while the equations in the coarse elements (or subdomains) are discretized on a fine grid scale. We discuss the construction of the multiscale mortar basis and extend this concept to nonlinear interface operators. We present a multiscale preconditioning strategy to minimize the computational cost associated with construction of the multiscale mortar basis. We also discuss the use of appropriate quadrature rules and approximation spaces to reduce the saddle point system to a cell-centered pressure scheme. In particular, we focus on a multiscale mortar multipoint flux approximation method for general hexahedral grids and full tensor permeabilities. Numerical results are presented to verify the accuracy and efficiency of these approaches. © 2010 John Wiley & Sons, Ltd.
A model to improve efficiency and effectiveness of safeguards measures
International Nuclear Information System (INIS)
D'Amato, Eduardo; Llacer, Carlos; Vicens, Hugo
2001-01-01
Full text: The main purpose of our current studies is to analyse the measures to be adopted to integrate the traditional safeguards measures with those stated in the Additional Protocol (AP). A simplified nuclear fuel cycle model is considered to draw some conclusions on the application of integrated safeguards measures. This paper includes a briefing describing the historical review that gave birth to the AP and proposes a model to help the control bodies in the decision-making process. In May 1997, the Board of Governors approved the Model Additional Protocol (MAP), which aimed at strengthening the effectiveness and improving the efficiency of safeguards measures. For States under a comprehensive safeguards agreement, the measures adopted provide credible assurance of the absence of undeclared nuclear material and activities. In September 1999, the governments of Argentina and Brazil formally announced in the Board of Governors that both countries would start preliminary consultations on an adapted MAP applied to the Agreement between the Republic of Argentina, the Federative Republic of Brazil, the Brazilian-Argentine Agency for Accounting and Control of Nuclear Materials and the International Atomic Energy Agency for the Application of Safeguards (Quadripartite Agreement/INFCIRC 435). In December 1999, a first draft of the above-mentioned document was provided as a starting point for discussion. During the year 2000 some modifications to the original draft took place. These were the initial steps in the process of reaching adequate conditions for each country to adhere to the AP in the future. With the future AP implementation in mind, the safeguards officers of the Regulatory Body of Argentina (ARN) began to think about the future simultaneous application of the two types of safeguards measures, the traditional and the non-traditional ones, which should converge into an integrated system. By traditional safeguards it is understood quantitative
Siegfried, Robert
2014-01-01
Robert Siegfried presents a framework for efficient agent-based modeling and simulation of complex systems. He compares in detail different approaches for describing the structure and dynamics of agent-based models. Based on this evaluation, the author introduces the "General Reference Model for Agent-based Modeling and Simulation" (GRAMS). Furthermore, he presents parallel and distributed simulation approaches for the execution of agent-based models, from small scale to very large scale. The author shows how agent-based models may be executed by different simulation engines that utilize underlying hardware.
Computationally efficient models of neuromuscular recruitment and mechanics.
Song, D; Raphael, G; Lan, N; Loeb, G E
2008-06-01
We have improved the stability and computational efficiency of a physiologically realistic, virtual muscle (VM 3.*) model (Cheng et al 2000 J. Neurosci. Methods 101 117-30) by a simpler structure of lumped fiber types and a novel recruitment algorithm. In the new version (VM 4.0), the mathematical equations are reformulated into state-space representation and structured into a CMEX S-function in SIMULINK. A continuous recruitment scheme approximates the discrete recruitment of slow and fast motor units under physiological conditions. This makes it possible to predict force output during smooth recruitment and derecruitment without having to simulate explicitly a large number of independently recruited units. We removed the intermediate state variable, effective length (Leff), which had been introduced to model the delayed length dependency of the activation-frequency relationship, but which had little effect and could introduce instability under physiological conditions of use. Both of these changes greatly reduce the number of state variables with little loss of accuracy compared to the original VM. The performance of VM 4.0 was validated by comparison with VM 3.1.5 for both single-muscle force production and a multi-joint task. The improved VM 4.0 model is more suitable for the analysis of neural control of movements and for design of prosthetic systems to restore lost or impaired motor functions. VM 4.0 is available via the internet and includes options to use the original VM model, which remains useful for detailed simulations of single motor unit behavior.
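The continuous recruitment idea, in which each lumped fiber type is ramped in smoothly above its recruitment threshold instead of switching discrete motor units on one by one, can be caricatured in a few lines; the thresholds, force fractions, and linear ramp shape below are illustrative assumptions, not VM 4.0's actual formulation:

```python
def continuous_recruitment(u, thr_slow=0.0, thr_fast=0.5, f_slow=0.3, f_fast=0.7):
    """Toy continuous-recruitment curve: each lumped fiber type contributes
    force that ramps smoothly once the normalized neural drive u (0..1)
    exceeds its recruitment threshold. Slow units recruit first, fast units
    later; all parameter values here are illustrative."""
    def ramp(thr):
        return max(0.0, min(1.0, (u - thr) / (1.0 - thr)))
    return f_slow * ramp(thr_slow) + f_fast * ramp(thr_fast)
```

Because the output varies smoothly with drive, force during recruitment and derecruitment can be predicted without simulating a large population of independently recruited units, which is the computational point made above.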
Replaceable Substructures for Efficient Part-Based Modeling
Liu, Han; Vimont, Ulysse; Wand, Michael; Cani, Marie Paule; Hahmann, Stefanie; Rohmer, Damien; Mitra, Niloy J.
2015-01-01
A popular mode of shape synthesis involves mixing and matching parts from different objects to form a coherent whole. The key challenge is to efficiently synthesize shape variations that are plausible, both locally and globally. A major obstacle is to assemble the objects with local consistency, i.e., all the connections between parts are valid with no dangling open connections. The combinatorial complexity of this problem limits existing methods in geometric and/or topological variations of the synthesized models. In this work, we introduce replaceable substructures as arrangements of parts that can be interchanged while ensuring boundary consistency. The consistency information is extracted from part labels and connections in the original source models. We present a polynomial time algorithm that discovers such substructures by working on a dual of the original shape graph that encodes inter-part connectivity. We demonstrate the algorithm on a range of test examples producing plausible shape variations, both from a geometric and from a topological viewpoint. © 2015 The Author(s) Computer Graphics Forum © 2015 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.
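The boundary-consistency requirement (no dangling open connections after a swap) can be sketched on a labeled shape graph; the helper below is a toy check, not the paper's polynomial-time dual-graph algorithm, and the chair example is invented:

```python
def boundary(connections, parts):
    """Connection labels crossing the boundary of the part set `parts` in a
    labeled shape graph. Two substructures can be interchanged only if their
    boundaries expose matching labels; this is a simplified sketch of that
    consistency condition, not the paper's dual-graph algorithm."""
    parts = set(parts)
    return sorted(label for a, b, label in connections
                  if (a in parts) != (b in parts))

# Toy chair graph: a seat connected to two legs and a back.
conns = [("seat", "leg1", "leg-joint"),
         ("seat", "leg2", "leg-joint"),
         ("seat", "back", "back-joint")]
```

A leg substructure exposes a single leg-joint at its boundary, so any replacement substructure exposing exactly one leg-joint can be swapped in without leaving open connections.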
The model for power efficiency assessment of condensation heating installations
Directory of Open Access Journals (Sweden)
D. Kovalchuk
2017-11-01
Full Text Available Most heating and domestic hot-water systems are based on natural gas boilers. To increase the overall performance of such heating systems, condensation gas boilers were developed and are used. However, even this type of boiler does not use all the energy released by fuel combustion. The main factors lowering the overall performance of condensation gas boilers (CGB) operating in real conditions are considered. The structure of the developed mathematical model for estimating the overall performance of CGBs under real operating conditions is described. Computer experiments evaluating the performance of such a CGB over a heating season were made for the real weather conditions of two regions of Ukraine. Graphic dependences of the temperature conditions and the change of heating-system effectiveness throughout a heating season are given. It was shown that a conventional CGB does not completely use the calorific value of the fuel and is therefore not effective; it was also shown that the efficiency of such boilers changes significantly during a heating season depending on weather conditions and does not reach the greatest possible value. The possibility of increasing the efficiency of the CGB by hydraulically dividing the heating and condensation sections and using a vapor-compression heat pump for deeper cooling of the combustion gases, removing the highest possible amount of thermal energy from them, is considered. A scheme is provided for connecting the heat pump to a heating system with a conventional gas boiler and a separate condensation economizer, allowing the combustion gases to be cooled deeply, below the dew point, and the return heat carrier to be warmed before the boiler input. A technological diagram for year-round use of the heat pump for hot-water heating after the end of the heating season, without gas use, is offered.
DEFF Research Database (Denmark)
Gørgens, Tue; Skeels, Christopher L.; Wurtz, Allan
This paper explores estimation of a class of non-linear dynamic panel data models with additive unobserved individual-specific effects. The models are specified by moment restrictions. The class includes the panel data AR(p) model and panel smooth transition models. We derive an efficient set...... of moment restrictions for estimation and apply the results to estimation of panel smooth transition models with fixed effects, where the transition may be determined endogenously. The performance of the GMM estimator, both in terms of estimation precision and forecasting performance, is examined in a Monte...
A framework for fuzzy model of thermoradiotherapy efficiency
International Nuclear Information System (INIS)
Kosterev, V.V.; Averkin, A.N.
2005-01-01
Full text: The use of hyperthermia as an adjuvant to radiation in the treatment of local and regional disease currently offers the most significant advantages. For processing information on thermoradiotherapy efficiency, it is expedient to use a fuzzy-logic-based decision-support system - a fuzzy system (FS). FSs are widely used in various application areas of control and decision making. Their popularity is due to the following reasons. Firstly, an FS with triangular membership functions is a universal approximator. Secondly, designing an FS does not require an exact model of the process, only qualitative linguistic dependences between the parameters. Thirdly, there are many software and hardware realizations of FSs with very high calculation speed. Fourthly, the accuracy of decisions based on an FS is usually no worse, and sometimes better, than the accuracy of decisions obtained by traditional methods. Moreover, the dependence between input and output variables can be easily expressed in linguistic scales. The goal of this research is to choose data-fusion rule operators suited to the experimental results and taking the uncertainty factor into consideration. Methods of aggregation and data fusion may be used that provide a methodology to extract comprehensible rules from data. Several data fusion algorithms have been developed and applied, individually and in combination, providing users with various levels of informational detail. In reviewing this emerging technology, three basic categories (levels) of data fusion have been identified. These fusion levels are differentiated according to the amount of information they provide. Refs. 2 (author)
An efficient and effective teaching model for ambulatory education.
Regan-Smith, Martha; Young, William W; Keller, Adam M
2002-07-01
Teaching and learning in the ambulatory setting have been described as inefficient, variable, and unpredictable. A model of ambulatory teaching that was piloted in three settings (1973-1981 in a university-affiliated outpatient clinic in Portland, Oregon, 1996-2000 in a community outpatient clinic, and 2000-2001 in an outpatient clinic serving Dartmouth Medical School's teaching hospital) that combines a system of education and a system of patient care is presented. Fully integrating learners into the office practice using creative scheduling, pre-rotation learning, and learner competence certification enabled the learners to provide care in roles traditionally fulfilled by physicians and nurses. Practice redesign made learners active members of the patient care team by involving them in such tasks as patient intake, histories and physicals, patient education, and monitoring of patient progress between visits. So that learners can be active members of the patient care team on the first day of clinic, pre-training is provided by the clerkship or residency so that they are able to competently provide care in the time available. To assure effective education, teaching and learning times are explicitly scheduled by parallel booking of patients for the learner and the preceptor at the same time. In the pilot settings this teaching model maintained or improved preceptor productivity and on-time efficiency compared with these outcomes of traditional scheduling. The time spent alone with patients, in direct observation by preceptors, and for scheduled case discussion was appreciated by learners. Increased satisfaction was enjoyed by learners, teachers, clinic staff, and patients. Barriers to implementation include too few examining rooms, inability to manipulate patient appointment schedules, and learners' not being present in a teaching clinic all the time.
Partial-factor Energy Efficiency Model of Indonesia
Nugroho Fathul; Syaifudin Noor
2018-01-01
This study employs partial-factor energy efficiency to reveal the relationships between energy efficiency and the consumption of both renewable and non-renewable energy in Indonesia. The findings confirm that consumption of non-renewable energy increases the inefficiency of energy consumption. On the other hand, the use of renewable energy increases energy efficiency in Indonesia. As a result, the Government of Indonesia may address this issue by providing more s...
Ma, Xiaojun; Wang, Changxin; Yu, Yuanbo; Li, Yudong; Dong, Biying; Zhang, Xinyu; Niu, Xueqi; Yang, Qian; Chen, Ruimin; Li, Yifan; Gu, Yihan
2018-05-15
Ecological problems are among the core issues restraining China's economic development at present, and they urgently need to be solved properly and effectively. Based on panel data from 30 regions, this paper uses a super-efficiency slack-based measure (SBM) model that incorporates undesirable outputs to calculate ecological efficiency, and then uses traditional and metafrontier Malmquist index methods to study regional change trends and technology gap ratios (TGRs). Finally, Tobit regression and principal component analysis are used to analyse the main factors affecting eco-efficiency and their degree of impact. The results show that about 60% of China's provinces have effective eco-efficiency, and the overall ecological efficiency of China is at an upper-middle level, but there is a serious imbalance among provinces and regions. Ecological efficiency has an obvious spatial cluster effect. There are differences among regional TGR values. Most regions show a downward trend, and the phenomenon of focusing on economic development at the expense of ecological protection still exists. Expansion of opening to the outside world, increases in R&D spending, and improvement of the population urbanization rate have positive effects on eco-efficiency. Blind economic expansion, increases in industrial structure, and the proportion of energy consumption have negative effects on eco-efficiency.
Model-based and model-free “plug-and-play” building energy efficient control
International Nuclear Information System (INIS)
Baldi, Simone; Michailidis, Iakovos; Ravanis, Christos; Kosmatopoulos, Elias B.
2015-01-01
Highlights: • “Plug-and-play” Building Optimization and Control (BOC) driven by building data. • Ability to handle the large-scale and complex nature of the BOC problem. • Adaptation to learn the optimal BOC policy when no building model is available. • Comparisons with rule-based and advanced BOC strategies. • Simulation and real-life experiments in a ten-office building. - Abstract: Considerable research efforts in Building Optimization and Control (BOC) have been directed toward the development of “plug-and-play” BOC systems that can achieve energy efficiency without compromising thermal comfort and without the need of qualified personnel engaged in a tedious and time-consuming manual fine-tuning phase. In this paper, we report on how a recently introduced Parametrized Cognitive Adaptive Optimization – abbreviated as PCAO – can be used toward the design of both model-based and model-free “plug-and-play” BOC systems, with minimum human effort required to accomplish the design. In the model-based case, PCAO assesses the performance of its control strategy via a simulation model of the building dynamics; in the model-free case, PCAO optimizes its control strategy without relying on any model of the building dynamics. Extensive simulation and real-life experiments performed on a 10-office building demonstrate the effectiveness of the PCAO–BOC system in providing significant energy efficiency and improved thermal comfort. The mechanisms embedded within PCAO render it capable of automatically and quickly learning an efficient BOC strategy either in the presence of complex nonlinear simulation models of the building dynamics (model-based) or when no model for the building dynamics is available (model-free). Comparative studies with alternative state-of-the-art BOC systems show the effectiveness of the PCAO–BOC solution
Modeling technical efficiency of inshore fishery using data envelopment analysis
Rahman, Rahayu; Zahid, Zalina; Khairi, Siti Shaliza Mohd; Hussin, Siti Aida Sheikh
2016-10-01
The fishery industry contributes significantly to the economy of Malaysia. This study applied Data Envelopment Analysis to estimate the technical efficiency of fishery in Terengganu, a state on the eastern coast of Peninsular Malaysia, based on multiple outputs, i.e. total fish landing and income of fishermen, with six inputs, i.e. engine power, vessel size, number of trips, number of workers, cost and operation distance. The data were collected by a survey conducted between November and December 2014. The decision making units (DMUs) involved 100 fishermen from 10 fishery areas. The results showed that the technical efficiency in Season I (dry season) and Season II (rainy season) was 90.2% and 66.7% respectively. About 27% of the fishermen were rated as efficient during Season I, while only 13% of the fishermen achieved full efficiency (100%) during Season II. The results also showed that there was a significant difference in efficiency performance between the fishery areas.
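For intuition, DEA technical efficiency reduces, in the single-input/single-output constant-returns case, to each decision making unit's output/input ratio relative to the best observed ratio; the sketch below uses invented numbers and ignores the study's six inputs and two outputs, which require solving a linear program per DMU:

```python
def technical_efficiency(inputs, outputs, k):
    """Constant-returns DEA efficiency for the single-input/single-output
    case: DMU k's output/input ratio relative to the best ratio observed.
    A drastic simplification of the multi-input, multi-output DEA model
    used in the study; the values below are made up."""
    ratios = [y / x for x, y in zip(inputs, outputs)]
    return ratios[k] / max(ratios)

# Three hypothetical boats: fishing effort versus catch.
effort = [2.0, 4.0, 5.0]
catch = [1.0, 4.0, 3.0]
```

A DMU on the efficient frontier scores 1.0; the others score the fraction by which they could scale down inputs while holding outputs fixed.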
Modeling of detective quantum efficiency considering scatter-reduction devices
Energy Technology Data Exchange (ETDEWEB)
Park, Ji Woong; Kim, Dong Woon; Kim, Ho Kyung [Pusan National University, Busan (Korea, Republic of)
2016-05-15
The loss of signal-to-noise ratio (SNR) due to scattered radiation cannot be restored and has thus become a severe issue in digital mammography [1]. Therefore, antiscatter grids are typically used in mammography. The scatter-cleanup performance of various scatter-reduction devices, such as air gaps [2], linear (1D) or cellular (2D) grids [3, 4], and slot-scanning devices [5], has been extensively investigated by many research groups. At the present time, a digital mammography system with slot-scanning geometry is also commercially available [6]. In this study, we theoretically investigate the effect of scattered photons on the detective quantum efficiency (DQE) performance of digital mammography detectors by using the cascaded-systems analysis (CSA) approach. We show a simple DQE formalism describing digital mammography detector systems equipped with scatter-reduction devices by regarding the scattered photons as additive noise sources. The low-frequency drop (LFD) increased with increasing PMMA thickness, and the amount of LFD indicated the corresponding scatter fraction (SF). The estimated SFs were 0.13, 0.21, and 0.29 for PMMA thicknesses of 10, 20, and 30 mm, respectively. While the solid line describing the measured MTF for 0 mm of PMMA was the result of a least-squares regression fit using Eq. (14), the other lines simply resulted from multiplying the fit result (for 0 mm of PMMA) by the (1-SF) estimated from the LFDs in the measured MTFs. The measured spectral noise-power densities over the entire frequency range did not change much with increasing scatter. On the other hand, the calculation results showed that the spectral noise-power densities increased with increasing scatter. This discrepancy may be explained by the fact that the model developed in this study does not account for changes in the x-ray interaction parameters for varying spectral shapes due to beam hardening with increasing PMMA thickness.
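The way the scattered curves above were generated can be sketched as a one-line degradation of the scatter-free MTF; this assumes the simple (1-SF) scaling described in the abstract and ignores any frequency dependence of the scatter point-spread function:

```python
def mtf_with_scatter(mtf_no_scatter, scatter_fraction):
    """Scatter multiplies the measured MTF by (1 - SF) above zero frequency,
    as used above to generate the fitted curves from the 0-mm PMMA fit.
    Sketch only; the frequency dependence of scatter is neglected."""
    return [(1.0 - scatter_fraction) * m for m in mtf_no_scatter]

# Sampled scatter-free MTF degraded with the SF reported for 20 mm PMMA.
degraded = mtf_with_scatter([1.0, 0.8, 0.5], 0.21)
```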
Receipts Assay Monitor: deadtime correction model and efficiency profile
International Nuclear Information System (INIS)
Weingardt, J.J.; Stewart, J.E.
1986-08-01
Experiments were performed at Los Alamos National Laboratory to characterize the operating parameters and flatten the axial efficiency profile of a neutron coincidence counter called the Receipts Assay Monitor (RAM). Optimum electronic settings determined by conventional methods included operating voltage (1680 V) and gate width (64 μs). Also determined were electronic characteristics such as bias and deadtime. Neutronic characteristics determined using a 252Cf neutron source included axial efficiency profiles and axial die-away time profiles. The RAM electronics showed virtually no bias for the coincidence count rate; it was measured as -4.6 x 10^-5 % with a standard deviation of 3.3 x 10^-4 %. Electronic deadtime was measured by two methods. The first method expresses the coincidence-rate deadtime as a linear function of the measured totals rate, and the second method treats the deadtime as a constant. Initially, axial coincidence efficiency profiles yielded normalized efficiencies at the bottom and top of a 17-in. mockup UF6 sample of 68.9% and 40.4%, respectively, with an average relative efficiency across the sample of 86.1%. Because the nature of the measurements performed with the RAM favors a much flatter efficiency profile, 3-mil cadmium sheets were wrapped around the 3He tubes in selected locations to flatten the efficiency profile. Use of the cadmium sheets resulted in relative coincidence efficiencies at the bottom and top of the sample of 82.3% and 57.4%, respectively, with an average relative efficiency of 93.5%.
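The two deadtime treatments mentioned above (a constant deadtime versus a deadtime linear in the totals rate) can be sketched with the standard multiplicative exponential correction used in neutron coincidence counting; the functional form is the conventional one, but the coefficients below are invented, not the RAM's measured values:

```python
import math

def corrected_coincidence_rate(measured, totals, a=0.0, b=0.0):
    """Deadtime-corrected coincidence rate using the common multiplicative
    form R = R_m * exp(delta * T), where T is the totals rate and delta is
    either a constant (b = 0) or a linear function of T, delta = a + b * T,
    mirroring the two treatments compared above. Coefficients a and b are
    detector specific; the values used below are made up."""
    delta = a + b * totals
    return measured * math.exp(delta * totals)
```

With zero coefficients the measured rate is returned unchanged; nonzero coefficients inflate the measured rate to compensate for counts lost during the deadtime window.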
Optimizing lengths of confidence intervals: fourth-order efficiency in location models
Klaassen, C.; Venetiaan, S.
2010-01-01
Under regularity conditions the maximum likelihood estimator of the location parameter in a location model is asymptotically efficient among translation equivariant estimators. Additional regularity conditions warrant third- and even fourth-order efficiency, in the sense that no translation
Modeling Dynamic Systems with Efficient Ensembles of Process-Based Models.
Directory of Open Access Journals (Sweden)
Nikola Simidjievski
Full Text Available Ensembles are a well established machine learning paradigm, leading to accurate and robust models, predominantly applied to predictive modeling tasks. Ensemble models comprise a finite set of diverse predictive models whose combined output is expected to yield an improved predictive performance as compared to an individual model. In this paper, we propose a new method for learning ensembles of process-based models of dynamic systems. The process-based modeling paradigm employs domain-specific knowledge to automatically learn models of dynamic systems from time-series observational data. Previous work has shown that ensembles based on sampling observational data (i.e., bagging and boosting) significantly improve the predictive performance of process-based models. However, this improvement comes at the cost of a substantial increase in the computational time needed for learning. To address this problem, the paper proposes a method that aims at efficiently learning ensembles of process-based models, while maintaining their accurate long-term predictive performance. This is achieved by constructing ensembles by sampling domain-specific knowledge instead of sampling data. We apply the proposed method to, and evaluate its performance on, a set of problems of automated predictive modeling in three lake ecosystems, using a library of process-based knowledge for modeling population dynamics. The experimental results identify the optimal design decisions regarding the learning algorithm. The results also show that the proposed ensembles yield significantly more accurate predictions of population dynamics as compared to individual process-based models. Finally, while their predictive performance is comparable to that of ensembles obtained with the state-of-the-art methods of bagging and boosting, they are substantially more efficient.
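For contrast with the paper's knowledge-library sampling, ordinary data-sampling bagging can be sketched in a few lines; the learner and data below are degenerate placeholders, and the knowledge-sampling variant the paper actually proposes is not reproduced here:

```python
import random

def bagged_prediction(train, learn, predict, x, n_models=10, seed=0):
    """Plain bagging: fit n models on bootstrap resamples of the training
    data and average their predictions. The paper's method instead samples
    the domain-specific knowledge library while keeping the full data set,
    avoiding the cost of repeated learning on resampled data."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_models):
        boot = [train[rng.randrange(len(train))] for _ in train]
        preds.append(predict(learn(boot), x))
    return sum(preds) / n_models

# Degenerate demo: each "model" is just the mean of its bootstrap targets.
train = [(0, 1.0), (0, 3.0)]
learn = lambda data: sum(y for _, y in data) / len(data)
predict = lambda model, x: model
est = bagged_prediction(train, learn, predict, x=0)
```

The cost the paper targets is visible here: bagging calls `learn` once per ensemble member, which for process-based models dominates the total learning time.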
A resource allocation model to support efficient air quality ...
African Journals Online (AJOL)
Efficient implementation of policies and strategies require that ... and source, emissions, air quality and meteorological data reporting.
Efficient anisotropic wavefield extrapolation using effective isotropic models
Alkhalifah, Tariq Ali; Ma, X.; Waheed, Umair bin; Zuberi, Mohammad
2013-01-01
Isotropic wavefield extrapolation is more efficient than anisotropic extrapolation, and this is especially true when the anisotropy of the medium is tilted (from the vertical). We use the kinematics of the wavefield, appropriately represented
Computer-aided modeling framework for efficient model development, analysis and identification
DEFF Research Database (Denmark)
Heitzig, Martina; Sin, Gürkan; Sales Cruz, Mauricio
2011-01-01
Model-based computer aided product-process engineering has attained increased importance in a number of industries, including pharmaceuticals, petrochemicals, fine chemicals, polymers, biotechnology, food, energy, and water. This trend is set to continue due to the substantial benefits computer-aided...... methods introduce. The key prerequisite of computer-aided product-process engineering is however the availability of models of different types, forms, and application modes. The development of the models required for the systems under investigation tends to be a challenging and time-consuming task....... The methodology has been implemented into a computer-aided modeling framework, which combines expert skills, tools, and database connections that are required for the different steps of the model development work-flow with the goal to increase the efficiency of the modeling process. The framework has two main...
A physiological foundation for the nutrition-based efficiency wage model
DEFF Research Database (Denmark)
Dalgaard, Carl-Johan Lars; Strulik, Holger
2011-01-01
Drawing on recent research on allometric scaling and energy consumption, the present paper develops a nutrition-based efficiency wage model from first principles. The biologically micro-founded model allows us to address empirical criticism of the original nutrition-based efficiency wage model...
Efficient modeling of chiral media using SCN-TLM method
Directory of Open Access Journals (Sweden)
Yaich M.I.
2004-01-01
Full Text Available An efficient approach that allows linear bi-isotropic chiral materials to be included in time-domain transmission line matrix (TLM) calculations, by recursively evaluating the convolution of the electric and magnetic fields with the susceptibility functions, is presented. The new technique consists of adding both voltage and current sources in supplementary stubs of the symmetrical condensed node (SCN) of the TLM method. In this article, the details and a complete description of this approach are given. A comparison of the obtained numerical results with those of the literature confirms its validity and efficiency.
Validated biomechanical model for efficiency and speed of rowing.
Pelz, Peter F; Vergé, Angela
2014-10-17
The speed of a competitive rowing crew depends on the number of crew members, their body mass, sex and the type of rowing (sweep rowing or sculling). The time-averaged speed is proportional to the rower's body mass to the 1/36th power, to the number of crew members to the 1/9th power and to the physiological efficiency (accounted for by the rower's sex) to the 1/3rd power. The quality of the rowing shell and propulsion system is captured by one dimensionless parameter that takes the mechanical efficiency, the shape and drag coefficient of the shell and the Froude propulsion efficiency into account. We derive the biomechanical equation for the speed of rowing by two independent methods and further validate it by successfully predicting race times. We derive the theoretical upper limit of the Froude propulsion efficiency for low-viscosity flows. This upper limit is shown to be a function solely of the velocity ratio of blade to boat speed (i.e., it is completely independent of the blade shape), a result that may also be of interest for other repetitive propulsion systems. Copyright © 2014 Elsevier Ltd. All rights reserved.
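The scaling exponents quoted in this abstract can be turned directly into a relative-speed estimate. A minimal sketch (the function name and example values are illustrative assumptions; only the exponents 1/36, 1/9 and 1/3 come from the abstract, and the dimensionless shell-quality prefactor cancels in ratios):

```python
def relative_speed(body_mass_kg, crew_size, efficiency):
    # Time-averaged speed scales as m^(1/36) * n^(1/9) * eta^(1/3)
    # per the abstract; any constant prefactor cancels when comparing crews.
    return body_mass_kg ** (1 / 36) * crew_size ** (1 / 9) * efficiency ** (1 / 3)

# Same rowers in an eight versus a single: only crew size differs,
# so the predicted speed ratio is 8^(1/9).
ratio = relative_speed(90, 8, 0.25) / relative_speed(90, 1, 0.25)
```

By this law an eight is predicted to be roughly 26% faster than a single crewed by the same rowers, since 8^(1/9) ≈ 1.26.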
Modeling the irradiance dependency of the quantum efficiency of photosynthesis
Silsbe, G.M.; Kromkamp, J.C.
2012-01-01
Measures of the quantum efficiency of photosynthesis (phi(PSII)) across an irradiance (E) gradient are an increasingly common physiological assay and alternative to traditional photosynthetic-irradiance (PE) assays. Routinely, the analysis and interpretation of these data are analogous to PE
Modeling Vertical Flow Treatment Wetland Hydraulics to Optimize Treatment Efficiency
2011-03-24
be forced to flow in a 90° serpentine manner back and forth as it moves upward through the wetland (think waiting in line at Disneyland).
Modeling efficient resource allocation patterns for arable crop ...
African Journals Online (AJOL)
optimum plans. These should be complemented with strong financial support, farm advisory services, and an adequate supply of modern inputs at fairly competitive prices to enhance the prospects of the smallholder farmers. Keywords: efficient, resource allocation, optimization, linear programming, gross margin …
A resource allocation model to support efficient air quality ...
African Journals Online (AJOL)
Research into management interventions that create the required enabling environment for growth and development in South Africa is both timely and appropriate. In the research reported in this paper, the authors investigated the level of efficiency of the Air Quality Units within the three spheres of government, viz.
Efficient ECG Signal Compression Using Adaptive Heart Model
National Research Council Canada - National Science Library
Szilagyi, S
2001-01-01
This paper presents an adaptive, heart-model-based electrocardiography (ECG) compression method. After conventional pre-filtering the waves from the signal are localized and the model's parameters are determined...
ESTIMATION OF EFFICIENCY OF THE COMPETITIVE COOPERATION MODEL
Directory of Open Access Journals (Sweden)
Natalia N. Liparteliani
2014-01-01
Full Text Available A competitive cooperation model of regional travel agencies and travel market participants is considered. An evaluation of the model using mathematical and statistical methods was carried out. Relationship marketing provides a travel company with certain economic advantages.
Uncertainty quantification in Rothermel's Model using an efficient sampling method
Edwin Jimenez; M. Yousuff Hussaini; Scott L. Goodrick
2007-01-01
The purpose of the present work is to quantify parametric uncertainty in Rothermel's wildland fire spread model (implemented in software such as BehavePlus3 and FARSITE), which is undoubtedly among the most widely used fire spread models in the United States. This model consists of a nonlinear system of equations that relates environmental variables (input parameter...
Efficient Modelling, Generation and Analysis of Markov Automata
Timmer, Mark
2013-01-01
Quantitative model checking is concerned with the verification of both quantitative and qualitative properties over models incorporating quantitative information. Increases in expressivity of the models involved allow more types of systems to be analysed, but also raise the difficulty of their
Robust and efficient solution procedures for association models
DEFF Research Database (Denmark)
Michelsen, Michael Locht
2006-01-01
Equations of state that incorporate the Wertheim association expression are more difficult to apply than conventional pressure explicit equations, because the association term is implicit and requires solution for an internal set of composition variables. In this work, we analyze the convergence behavior of different solution methods and demonstrate how a simple and efficient, yet globally convergent, procedure for the solution of the equation of state can be formulated.
Directory of Open Access Journals (Sweden)
Jianhuan Huang
2014-01-01
Full Text Available There are two typical subprocesses in bank production—deposit generation and loan generation. Aiming to open the black box of input-output production of banks and provide a comprehensive and accurate assessment of the efficiency of each stage, this paper proposes a two-stage network model with bad outputs and super efficiency (US-NSBM). Empirical comparisons show that the US-NSBM may be promising and practical for taking nonperforming loans into account and being able to rank all samples. Applying it to measure the efficiency of Chinese commercial banks from 2008 to 2012, this paper explores the characteristics of overall and divisional efficiency, as well as their determinants. Some interesting results are discovered. The polarization of efficiency occurs at the bank level and in deposit generation, yet not in loan generation. Five hypotheses work as expected at the bank level, but not all of them are supported at the stage level. Our results extend and complement some earlier empirical publications at the bank level.
Improved quantum efficiency models of CZTSe:Ge nanolayer solar cells with a linear electric field.
Lee, Sanghyun; Price, Kent J; Saucedo, Edgardo; Giraldo, Sergio
2018-02-08
We fabricated and characterized CZTSe:Ge nanolayer solar cells and measured the quantum efficiency of the Ge-doped CZTSe devices. The linear electric field model is developed with the incomplete gamma function of the quantum efficiency as compared to the empirical data at forward bias conditions. This model is characterized with a consistent set of parameters from a series of measurements and the literature. Using the analytical modelling method, the carrier collection profile in the absorber is calculated and closely fitted by the developed mathematical expressions to identify the carrier dynamics during the quantum efficiency measurement of the device. The analytical calculation is compared with the measured quantum efficiency data at various bias conditions.
Cheng, Guang; Zhou, Lan; Huang, Jianhua Z.
2014-01-01
We consider efficient estimation of the Euclidean parameters in generalized partially linear additive models for longitudinal/clustered data when multiple covariates need to be modeled nonparametrically, and propose an estimation procedure based
Wang, Hui
2014-05-01
This thesis addresses the efficiency improvement of seismic wave modeling and migration in anisotropic media. This improvement becomes crucial in practice, as imaging the complex geological structures of the Earth's subsurface requires modeling and migration as building blocks. The challenge comes from two aspects. First, the underlying governing equations for seismic wave propagation in anisotropic media are far more complicated than those for isotropic media and demand higher computational costs to solve. Second, the use of whole prestack seismic data remains a burden, considering its storage volume and the existing wave equation solvers. In this thesis, I develop two approaches to tackle these challenges. In the first part, I adopt the concept of the prestack exploding reflector model to handle the whole prestack data and bridge the data space directly to the image space in a single kernel. I formulate the extrapolation operator in a two-way fashion to remove the restriction on the directions in which waves propagate. I also develop a generic method for phase velocity evaluation within anisotropic media used in this extrapolation kernel. The proposed method provides a tool for generating prestack images without wavefield cross-correlations. In the second part of this thesis, I approximate the anisotropic models using effective isotropic models. The wave phenomena in these effective models match those in the anisotropic models both kinematically and dynamically. I obtain the effective models by equating the eikonal and transport equations of the anisotropic and isotropic models, so the match holds in the high-frequency asymptotic sense. The wavefield extrapolation costs are thus reduced by using isotropic wave equation solvers while the anisotropic effects are maintained. I benchmark the two proposed methods using synthetic datasets. Tests on the anisotropic Marmousi model and the anisotropic BP2007 model demonstrate the applicability of my
Evaluation of the energy efficiency of enzyme fermentation by mechanistic modeling
DEFF Research Database (Denmark)
Albaek, Mads O.; Gernaey, Krist V.; Hansen, Morten S.
2012-01-01
Modeling biotechnological processes is key to obtaining increased productivity and efficiency. Particularly crucial to successful modeling of such systems is the coupling of the physical transport phenomena and the biological activity in one model. We have applied a model for the expression of cellulosic enzymes by the filamentous fungus Trichoderma reesei and found excellent agreement with experimental data. The most influential factor was demonstrated to be viscosity and its influence on mass transfer. Not surprisingly, the biological model is also shown to have high influence on the model… This modeling approach can be used by manufacturers to evaluate the enzyme fermentation process for a range of different process conditions with regard to energy efficiency.
Mathematical modelling as basis for efficient enterprise management
Directory of Open Access Journals (Sweden)
Kalmykova Svetlana
2017-01-01
Full Text Available The choice of the most effective HR-management style at the enterprise is based on modeling various socio-economic situations. The article describes the formalization of the managing processes aimed at the interaction between the allocated management subsystems. Mathematical modelling tools are used to determine the time spent on recruiting personnel for key positions in the management hierarchy.
Tests of control in the Audit Risk Model : Effective? Efficient?
Blokdijk, J.H. (Hans)
2004-01-01
Lately, the Audit Risk Model has been subject to criticism. To gauge its validity, this paper confronts the Audit Risk Model as incorporated in International Standard on Auditing No. 400, with the real life situations faced by auditors in auditing financial statements. This confrontation exposes
Modeling Large Time Series for Efficient Approximate Query Processing
DEFF Research Database (Denmark)
Perera, Kasun S; Hahmann, Martin; Lehner, Wolfgang
2015-01-01
query statistics derived from experiments and when running the system. Our approach can also reduce communication load by exchanging models instead of data. To allow seamless integration of model-based querying into traditional data warehouses, we introduce a SQL compatible query terminology. Our...
Industrial Sector Energy Efficiency Modeling (ISEEM) Framework Documentation
Energy Technology Data Exchange (ETDEWEB)
Karali, Nihan [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Xu, Tengfang [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sathaye, Jayant [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2012-12-12
The goal of this study is to develop a new bottom-up industry-sector energy-modeling framework with an agenda of addressing least-cost regional and global carbon reduction strategies, improving on the capabilities and limitations of the existing models by allowing trading across regions and countries as an alternative.
Efficient probabilistic model checking on general purpose graphic processors
Bosnacki, D.; Edelkamp, S.; Sulewski, D.; Pasareanu, C.S.
2009-01-01
We present algorithms for parallel probabilistic model checking on general purpose graphic processing units (GPGPUs). For this purpose we exploit the fact that some of the basic algorithms for probabilistic model checking rely on matrix vector multiplication. Since this kind of linear algebraic
An efficient visual saliency detection model based on Ripplet transform
Indian Academy of Sciences (India)
A Diana Andrushia
human visual attention models is still not well investigated. Keywords: Ripplet transform; visual saliency model; Receiver Operating Characteristics (ROC).
Real-time probabilistic covariance tracking with efficient model update.
Wu, Yi; Cheng, Jian; Wang, Jinqiao; Lu, Hanqing; Wang, Jun; Ling, Haibin; Blasch, Erik; Bai, Li
2012-05-01
The recently proposed covariance region descriptor has been proven robust and versatile for a modest computational cost. The covariance matrix enables efficient fusion of different types of features, where the spatial and statistical properties, as well as their correlation, are characterized. The similarity between two covariance descriptors is measured on Riemannian manifolds. Based on the same metric but with a probabilistic framework, we propose a novel tracking approach on Riemannian manifolds with a novel incremental covariance tensor learning (ICTL). To address the appearance variations, ICTL incrementally learns a low-dimensional covariance tensor representation and efficiently adapts online to appearance changes of the target with only O(1) computational complexity, resulting in a real-time performance. The covariance-based representation and the ICTL are then combined with the particle filter framework to allow better handling of background clutter, as well as the temporary occlusions. We test the proposed probabilistic ICTL tracker on numerous benchmark sequences involving different types of challenges including occlusions and variations in illumination, scale, and pose. The proposed approach demonstrates excellent real-time performance, both qualitatively and quantitatively, in comparison with several previously proposed trackers.
Economic efficiency versus social equality? The U.S. liberal model versus the European social model.
Navarro, Vicente; Schmitt, John
2005-01-01
This article begins by challenging the widely held view in neoliberal discourse that there is a necessary trade-off between higher economic efficiency and the reduction of inequalities: the article empirically shows that the liberal, U.S. model has been less efficient economically (slower economic growth, higher unemployment) than the social model in existence in the European Union and in the majority of its member states. Based on the data presented, the authors criticize the adoption of features of the liberal model (such as deregulation of labor markets and reduction of public social expenditures) by some European governments. The second section analyzes the causes of the slowdown of economic growth and the increase of unemployment in the European Union—that is, the application of monetarist and neoliberal policies in the institutional frame of the European Union, including the Stability Pact, the objectives and modus operandi of the European Central Bank, and the very limited resources available to the European Commission for stimulating and distributive functions. The third section details the reasons for these developments, including (besides historical considerations) the enormous influence of financial capital in the E.U. institutions and the very limited democracy. Proposals for change are included.
Management Index Systems and Energy Efficiency Diagnosis Model for Power Plant: Cases in China
Directory of Open Access Journals (Sweden)
Jing-Min Wang
2016-01-01
Full Text Available In recent years, the energy efficiency of thermal power plants has contributed largely to that of the industry. A thorough understanding of the influencing factors, as well as the establishment of a scientific and comprehensive diagnosis model, plays a key role in the operational efficiency and competitiveness of a thermal power plant. Drawing on domestic and international research on energy efficiency management, and based on the Cloud model and the data envelopment analysis (DEA) model, a qualitative and quantitative index system and a comprehensive diagnostic model (CDM) are constructed. To test the rationality and usability of the CDM, case studies of large-scale Chinese thermal power plants have been conducted. In these cases, the CDM captures such qualitative factors as technology, management, and so forth. The results show that, compared with a conventional model that only considers production running parameters, the CDM adapts better to reality. It can provide entities with efficient instruments for energy efficiency diagnosis.
Environmental efficiency analysis of power industry in China based on an entropy SBM model
International Nuclear Information System (INIS)
Zhou, Yan; Xing, Xinpeng; Fang, Kuangnan; Liang, Dapeng; Xu, Chunlin
2013-01-01
In order to assess the environmental efficiency of power industry in China, this paper first proposes a new non-radial DEA approach by integrating the entropy weight and the SBM model. This will improve the assessment reliability and reasonableness. Using the model, this study then evaluates the environmental efficiency of the Chinese power industry at the provincial level during 2005–2010. The results show a marked difference in environmental efficiency of the power industry among Chinese provinces. Although the annual, average, environmental efficiency level fluctuates, there is an increasing trend. The Tobit regression analysis reveals the innovation ability of enterprises, the proportion of electricity generated by coal-fired plants and the generation capacity have a significantly positive effect on environmental efficiency. However the waste fees levied on waste discharge and investment in industrial pollutant treatment are negatively associated with environmental efficiency. - Highlights: ► We assess the environmental efficiency of power industry in China by E-SBM model. ► Environmental efficiency of power industry is different among provinces. ► Efficiency stays at a higher level in the eastern and the western area. ► Proportion of coal-fired plants has a positive effect on the efficiency. ► Waste fees and the investment have a negative effect on the efficiency
Functional Testing Protocols for Commercial Building Efficiency Baseline Modeling Software
Energy Technology Data Exchange (ETDEWEB)
Jump, David; Price, Phillip N.; Granderson, Jessica; Sohn, Michael
2013-09-06
This document describes procedures for testing and validating the accuracy of proprietary baseline energy modeling software in predicting energy use over the period of interest, such as a month or a year. The procedures are designed according to the methodology used for public-domain baselining software in another LBNL report that was (like the present report) prepared for Pacific Gas and Electric Company: "Commercial Building Energy Baseline Modeling Software: Performance Metrics and Method Testing with Open Source Models and Implications for Proprietary Software Testing Protocols" (referred to here as the "Model Analysis Report"). The test procedure focuses on the quality of the software's predictions rather than on the specific algorithms used to predict energy use. In this way the software vendor is not required to divulge or share proprietary information about how their software works, while enabling stakeholders to assess its performance.
Molecular Simulation towards Efficient and Representative Subsurface Reservoirs Modeling
Kadoura, Ahmad Salim
2016-01-01
This dissertation focuses on the application of Monte Carlo (MC) molecular simulation and Molecular Dynamics (MD) in modeling thermodynamics and flow of subsurface reservoir fluids. At first, MC molecular simulation is proposed as a promising method
Efficient Delivery of Scalable Video Using a Streaming Class Model
Directory of Open Access Journals (Sweden)
Jason J. Quinlan
2018-03-01
Full Text Available When we couple the rise in video streaming with the growing number of portable devices (smart phones, tablets, laptops), we see an ever-increasing demand for high-definition video online while on the move. Wireless networks are inherently characterised by restricted shared bandwidth and relatively high error loss rates, thus presenting a challenge for the efficient delivery of high quality video. Additionally, mobile devices can support/demand a range of video resolutions and qualities. This demand for mobile streaming highlights the need for adaptive video streaming schemes that can adjust to available bandwidth and heterogeneity, and can provide graceful changes in video quality, all while respecting viewing satisfaction. In this context, the use of well-known scalable/layered media streaming techniques, commonly known as scalable video coding (SVC), is an attractive solution. SVC encodes a number of video quality levels within a single media stream. This has been shown to be an especially effective and efficient solution, but it fares badly in the presence of datagram losses. While multiple description coding (MDC) can reduce the effects of packet loss on scalable video delivery, the increased delivery cost is counterproductive for constrained networks. This situation is accentuated in cases where only the lower quality level is required. In this paper, we assess these issues and propose a new approach called Streaming Classes (SC), through which we can define a key set of quality levels, each of which can be delivered in a self-contained manner. This facilitates efficient delivery, yielding reduced transmission byte-cost for devices requiring lower quality, relative to MDC and Adaptive Layer Distribution (ALD) (42% and 76% respective reduction for layer 2), while also maintaining high levels of consistent quality. We also illustrate how a selective packetisation technique can further reduce the effects of packet loss on viewable quality by
Effective Elliptic Models for Efficient Wavefield Extrapolation in Anisotropic Media
Waheed, Umair bin
2014-05-01
Wavefield extrapolation operator for elliptically anisotropic media offers significant cost reduction compared to that of transversely isotropic media (TI), especially when the medium exhibits tilt in the symmetry axis (TTI). However, elliptical anisotropy does not provide accurate focusing for TI media. Therefore, we develop effective elliptically anisotropic models that correctly capture the kinematic behavior of the TTI wavefield. Specifically, we use an iterative elliptically anisotropic eikonal solver that provides the accurate traveltimes for a TI model. The resultant coefficients of the elliptical eikonal provide the effective models. These effective models allow us to use the cheaper wavefield extrapolation operator for elliptic media to obtain approximate wavefield solutions for TTI media. Despite the fact that the effective elliptic models are obtained by kinematic matching using high-frequency asymptotic, the resulting wavefield contains most of the critical wavefield components, including the frequency dependency and caustics, if present, with reasonable accuracy. The methodology developed here offers a much better cost versus accuracy tradeoff for wavefield computations in TTI media, considering the cost prohibitive nature of the problem. We demonstrate the applicability of the proposed approach on the BP TTI model.
Effective Elliptic Models for Efficient Wavefield Extrapolation in Anisotropic Media
Waheed, Umair bin; Alkhalifah, Tariq Ali
2014-01-01
Wavefield extrapolation operator for elliptically anisotropic media offers significant cost reduction compared to that of transversely isotropic media (TI), especially when the medium exhibits tilt in the symmetry axis (TTI). However, elliptical anisotropy does not provide accurate focusing for TI media. Therefore, we develop effective elliptically anisotropic models that correctly capture the kinematic behavior of the TTI wavefield. Specifically, we use an iterative elliptically anisotropic eikonal solver that provides the accurate traveltimes for a TI model. The resultant coefficients of the elliptical eikonal provide the effective models. These effective models allow us to use the cheaper wavefield extrapolation operator for elliptic media to obtain approximate wavefield solutions for TTI media. Despite the fact that the effective elliptic models are obtained by kinematic matching using high-frequency asymptotic, the resulting wavefield contains most of the critical wavefield components, including the frequency dependency and caustics, if present, with reasonable accuracy. The methodology developed here offers a much better cost versus accuracy tradeoff for wavefield computations in TTI media, considering the cost prohibitive nature of the problem. We demonstrate the applicability of the proposed approach on the BP TTI model.
Inviscid Wall-Modeled Large Eddy Simulations for Improved Efficiency
Aikens, Kurt; Craft, Kyle; Redman, Andrew
2015-11-01
The accuracy of an inviscid flow assumption for wall-modeled large eddy simulations (LES) is examined because of its ability to reduce simulation costs. This assumption is not generally applicable for wall-bounded flows due to the high velocity gradients found near walls. In wall-modeled LES, however, neither the viscous near-wall region nor the viscous length scales in the outer flow are resolved. Therefore, the viscous terms in the Navier-Stokes equations have little impact on the resolved flowfield. Zero-pressure-gradient flat plate boundary layer results are presented for both viscous and inviscid simulations using a wall model developed previously. The results are very similar and compare favorably to those from another wall model methodology and to experimental data. Furthermore, the inviscid assumption reduces simulation costs by about 25% and 39% for supersonic and subsonic flows, respectively. Future research directions are discussed, as are preliminary efforts to extend the wall model to include the effects of unresolved wall roughness. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575. Computational resources on TACC Stampede were provided under XSEDE allocation ENG150001.
An efficient background modeling approach based on vehicle detection
Wang, Jia-yan; Song, Li-mei; Xi, Jiang-tao; Guo, Qing-hua
2015-10-01
The existing Gaussian Mixture Model (GMM), which is widely used in vehicle detection, is inefficient at detecting the foreground during the modeling phase, because it needs quite a long time to blend shadows into the background. In order to overcome this problem, an improved method is proposed in this paper. First of all, each frame is divided into several areas (A, B, C, and D), where the areas are determined by the frequency and scale of vehicle traffic. For each area, a different learning rate for the weight, mean, and variance is applied to accelerate the elimination of shadows. At the same time, the number of Gaussian distributions is adapted per area to decrease the total number of distributions and save memory effectively. With this method, a different threshold value and a different number of Gaussian distributions are adopted for each area. The results show that the learning speed and model accuracy of the proposed algorithm surpass the traditional GMM. By roughly the 50th frame, interference from vehicles has been largely eliminated; the number of model distributions is only 35% to 43% of the standard, and the per-frame processing speed is approximately 20% higher than the standard. The proposed algorithm performs well in terms of shadow elimination and processing speed for vehicle detection, and can promote the development of intelligent transportation, which is also meaningful for other background modeling methods.
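A single-pixel caricature of the region-dependent learning-rate idea reads as follows (a hedged sketch: the per-region rates and the single-Gaussian update are illustrative assumptions; the paper maintains a full Gaussian mixture per pixel with separate rates for weight, mean, and variance):

```python
def update_background(mean, var, pixel, alpha):
    # Standard exponential (running) update of one background Gaussian:
    # a larger learning rate alpha absorbs shadows into the background faster.
    new_mean = (1 - alpha) * mean + alpha * pixel
    diff = pixel - new_mean
    new_var = (1 - alpha) * var + alpha * diff * diff
    return new_mean, new_var

# Hypothetical per-region learning rates: busy areas (A) adapt faster
# than rarely travelled areas (D), mirroring the A/B/C/D split above.
ALPHA = {"A": 0.05, "B": 0.03, "C": 0.02, "D": 0.01}

def adapt(region, mean, var, pixel):
    return update_background(mean, var, pixel, ALPHA[region])
```

Feeding the same shadowed pixel value repeatedly, the "A" model converges toward it noticeably faster than the "D" model, which is the mechanism claimed to speed up shadow elimination.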
System convergence in transport models: algorithms efficiency and output uncertainty
DEFF Research Database (Denmark)
Rich, Jeppe; Nielsen, Otto Anker
2015-01-01
of this paper is to analyse convergence performance for the external loop and to illustrate how an improper linkage between the converging parts can lead to substantial uncertainty in the final output. Although this loop is crucial for the performance of large-scale transport models it has not been analysed much in the literature. The paper first investigates several variants of the Method of Successive Averages (MSA) by simulation experiments on a toy network. It is found that the simulation experiments produce support for a weighted MSA approach. The weighted MSA approach is then analysed at large scale in the Danish National Transport Model (DNTM). It is revealed that system convergence requires that either demand or supply is without random noise, but not both. In that case, if MSA is applied to the model output with random noise, it will converge effectively as the random effects are gradually dampened…
Computationally efficient statistical differential equation modeling using homogenization
Hooten, Mevin B.; Garlick, Martha J.; Powell, James A.
2013-01-01
Statistical models using partial differential equations (PDEs) to describe dynamically evolving natural systems are appearing in the scientific literature with some regularity in recent years. Often such studies seek to characterize the dynamics of temporal or spatio-temporal phenomena such as invasive species, consumer-resource interactions, community evolution, and resource selection. Specifically, in the spatial setting, data are often available at varying spatial and temporal scales. Additionally, the necessary numerical integration of a PDE may be computationally infeasible over the spatial support of interest. We present an approach to impose computationally advantageous changes of support in statistical implementations of PDE models and demonstrate its utility through simulation using a form of PDE known as “ecological diffusion.” We also apply a statistical ecological diffusion model to a data set involving the spread of mountain pine beetle (Dendroctonus ponderosae) in Idaho, USA.
Computationally efficient thermal-mechanical modelling of selective laser melting
Yang, Yabin; Ayas, Can
2017-10-01
Selective laser melting (SLM) is a powder-based additive manufacturing (AM) method used to produce high-density metal parts with complex topology. However, part distortions and the accompanying residual stresses deteriorate the mechanical reliability of SLM products. Modelling of the SLM process is anticipated to be instrumental for understanding and predicting the development of the residual stress field during the build process. However, SLM process modelling requires determination of the heat transients within the part being built, which is coupled to a mechanical boundary value problem to calculate the displacement and residual stress fields. Thermal models associated with SLM are typically complex and computationally demanding. In this paper, we present a simple semi-analytical thermal-mechanical model, developed for SLM, that represents the effect of laser scanning vectors with line heat sources. The temperature field within the part being built is attained by superposition of the temperature field associated with line heat sources in a semi-infinite medium and a complementary temperature field which accounts for the actual boundary conditions. An analytical solution for a line heat source in a semi-infinite medium is first described, followed by the numerical procedure used for finding the complementary temperature field. This analytical description of the line heat sources is able to capture the steep temperature gradients in the vicinity of the laser spot, which is typically tens of micrometres across. In turn, the semi-analytical thermal model allows for a relatively coarse discretisation of the complementary temperature field. The temperature history determined is used to calculate the thermal strain induced in the SLM part. Finally, a mechanical model governed by an elastic-plastic constitutive rule with isotropic hardening is used to predict the residual stresses.
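The superposition idea can be illustrated with its simplest ingredient: an instantaneous line heat source plus its mirror image, which enforces an insulated (zero-flux) free surface in a semi-infinite body by the method of images (a sketch under assumptions: the material constants are illustrative, and the paper's numerically determined complementary field for the actual boundary conditions is omitted):

```python
import math

def line_source_dT(q_line, r2, t, k=30.0, rho_c=3.6e6):
    # Temperature rise from an instantaneous line heat source releasing
    # q_line J per metre of line in an infinite medium, evaluated at
    # squared distance r2 from the line, time t after release.
    # k: conductivity [W/(m K)], rho_c: volumetric heat capacity [J/(m^3 K)].
    alpha = k / rho_c  # thermal diffusivity [m^2/s]
    return (q_line / (rho_c * 4.0 * math.pi * alpha * t)
            * math.exp(-r2 / (4.0 * alpha * t)))

def semi_infinite_dT(q_line, x, z, zs, t):
    # Insulated free surface at z = 0: superpose the real source at depth
    # zs with its mirror image at -zs, so the heat flux across z = 0 vanishes.
    real = line_source_dT(q_line, x * x + (z - zs) ** 2, t)
    image = line_source_dT(q_line, x * x + (z + zs) ** 2, t)
    return real + image
```

Summing one such contribution per laser scanning vector, plus a complementary field for the true part geometry, gives the kind of semi-analytical temperature history described in the abstract.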
Efficient Lattice-Based Signcryption in Standard Model
Directory of Open Access Journals (Sweden)
Jianhua Yan
2013-01-01
Full Text Available Signcryption is a cryptographic primitive that can perform digital signature and public-key encryption simultaneously at a significantly reduced cost. This advantage makes it highly useful in many applications. However, most existing signcryption schemes are seriously challenged by the rise of quantum computing. As an interesting stepping stone for the post-quantum cryptographic community, two lattice-based signcryption schemes were proposed recently, but both were proved secure only in the random oracle model. The main contribution of this paper is therefore a new lattice-based signcryption scheme that can be proved secure in the standard model.
More efficient evolutionary strategies for model calibration with watershed model for demonstration
Baggett, J. S.; Skahill, B. E.
2008-12-01
Evolutionary strategies allow automatic calibration of more complex models than traditional gradient-based approaches, but they are more computationally intensive. We present several efficiency enhancements for evolution strategies, many of which are not new, but which in combination have been shown to dramatically decrease the number of model runs required for calibration on synthetic problems. To reduce the number of expensive model runs we employ a surrogate objective function for an adaptively determined fraction of the population at each generation (Kern et al., 2006). We demonstrate improvements to the adaptive ranking strategy that increase its efficiency while sacrificing little reliability, and further reduce the number of model runs required in densely sampled parts of parameter space. Furthermore, we include a gradient individual in each generation; it is usually not selected when the search is in a global phase or when the derivatives are poorly approximated, but when selected near a smooth local minimum it can dramatically increase convergence speed (Tahk et al., 2007). Finally, the selection of the gradient individual is used to adapt the size of the population near local minima. We show, by incorporating these enhancements into the Covariance Matrix Adaptation Evolution Strategy (CMAES; Hansen, 2006), that their synergetic effect is greater than the sum of their individual parts. This hybrid evolutionary strategy exploits smooth structure when it is present but degrades, at worst, to an ordinary evolutionary strategy if smoothness is not present. Calibration of 2D-3D synthetic models with the modified CMAES requires approximately 10%-25% of the model runs of ordinary CMAES. A preliminary demonstration of this hybrid strategy will be shown for watershed model calibration problems. Hansen, N. (2006). The CMA Evolution Strategy: A Comparing Review. In J.A. Lozano, P. Larrañaga, I. Inza and E. Bengoetxea (Eds.), Towards a new evolutionary computation. Advances in estimation of
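The surrogate-assisted idea can be illustrated with a minimal evolution strategy: each generation, a cheap quadratic surrogate fitted to all past true evaluations pre-ranks the offspring, and only the most promising fraction is run through the expensive model. This is a simplified sketch of the mechanism on a toy objective (a plain (μ, λ)-ES, not the modified CMAES of the abstract; all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive(x):
    """Stand-in for a costly model run (e.g. one watershed simulation)."""
    return float(np.sum(x**2))

def fit_surrogate(X, y):
    # cheap separable quadratic surrogate: y ~ a + b.x + c.(x*x)
    F = np.hstack([np.ones((len(X), 1)), X, X**2])
    coef, *_ = np.linalg.lstsq(F, y, rcond=None)
    return coef

def eval_surrogate(coef, X):
    return np.hstack([np.ones((len(X), 1)), X, X**2]) @ coef

def es_with_surrogate(dim=4, mu=5, lam=20, true_frac=0.4, gens=40, sigma=0.5):
    parents = rng.normal(size=(mu, dim))
    X = parents.copy()
    y = np.array([expensive(p) for p in parents])
    n_evals = mu
    for _ in range(gens):
        offspring = (parents[rng.integers(0, mu, lam)]
                     + sigma * rng.normal(size=(lam, dim)))
        # pre-rank all offspring with the surrogate ...
        guess = eval_surrogate(fit_surrogate(X, y), offspring)
        n_true = max(mu, int(true_frac * lam))
        promising = np.argsort(guess)[:n_true]
        # ... but spend expensive evaluations only on the promising ones
        y_true = np.array([expensive(o) for o in offspring[promising]])
        n_evals += n_true
        keep = np.argsort(y_true)[:mu]
        parents = offspring[promising][keep]
        X = np.vstack([X, offspring[promising]])
        y = np.concatenate([y, y_true])
        sigma *= 0.95                       # simple step-size decay
    return parents[0], n_evals

x_best, n_evals = es_with_surrogate()
```

The fraction of offspring given true evaluations would be adapted from the surrogate's ranking reliability in the scheme of Kern et al.; here it is held fixed for brevity.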
Pengfei, ZHANG; Ling, ZHANG; Zhenwei, WU; Zong, XU; Wei, GAO; Liang, WANG; Qingquan, YANG; Jichan, XU; Jianbin, LIU; Hao, QU; Yong, LIU; Juan, HUANG; Chengrui, WU; Yumei, HOU; Zhao, JIN; J, D. ELDER; Houyang, GUO
2018-04-01
Modeling with OEDGE was carried out to assess the initial and long-term plasma contamination efficiency of Ar puffing from different divertor locations, i.e. the inner divertor, the outer divertor and the dome, in the EAST superconducting tokamak for typical ohmic plasma conditions. It was found that the initial Ar contamination efficiency is dependent on the local plasma conditions at the different gas puff locations. However, it quickly approaches a similar steady state value for Ar recycling efficiency >0.9. OEDGE modeling shows that the final equilibrium Ar contamination efficiency is significantly lower for the more closed lower divertor than that for the upper divertor.
An empirical investigation of the efficiency effects of integrated care models in Switzerland
Directory of Open Access Journals (Sweden)
Oliver Reich
2012-01-01
Full Text Available Introduction: This study investigates the efficiency gains of integrated care models in Switzerland, since these models are regarded as cost containment options in national social health insurance. These plans generate much lower average health care expenditure than the basic insurance plan. The question is, however, to what extent these total savings are due to the effects of selection and efficiency. Methods: The empirical analysis is based on data from 399,274 Swiss residents who were continuously enrolled in compulsory health insurance with the Helsana Group, the largest health insurer in Switzerland, covering the years 2006 to 2009. In order to evaluate the efficiency of the different integrated care models, we apply an econometric approach with a mixed-effects model. Results: Our estimations indicate that the efficiency effects of integrated care models on health care expenditure are significant. However, the different insurance plans vary, revealing the following efficiency gains per model: contracted capitated model 21.2%, contracted non-capitated model 15.5% and telemedicine model 3.7%. The remaining 8.5%, 5.6% and 22.5% respectively of the variation in total health care expenditure can be attributed to the effects of selection. Conclusions: Integrated care models have the potential to improve care for patients with chronic diseases and concurrently have a positive impact on health care expenditure. We suggest that policy makers improve the incentives for patients with chronic diseases within the existing regulations, providing further potential for cost-efficient medical care.
Herding, minority game, market clearing and efficient markets in a simple spin model framework
Kristoufek, Ladislav; Vosvrda, Miloslav
2018-01-01
We present a novel approach to the financial Ising model. Most studies use the model to find settings which generate returns closely mimicking financial stylized facts such as fat tails, volatility clustering and persistence. We tackle the model's utility from the other side and look for the combination of parameters which yields the return dynamics of an efficient market in the sense of the efficient market hypothesis. Working with the Ising model, we are able to present readily interpretable results, as the model is based on only two parameters. Apart from the results of our simulation study, we offer a new interpretation of the Ising model parameters via inverse temperature and entropy. We show that market frictions (up to a certain level) and herding behavior of market participants do not work against market efficiency; on the contrary, they are needed for markets to be efficient.
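A toy version of such a spin-market simulation can be written in a few lines: spins herd with their neighbours (ferromagnetic coupling, with strength set by the inverse temperature) while a global minority-game term leans against the crowd, and "returns" are read off as changes in magnetisation. The lattice size, couplings and return definition below are illustrative choices, not the authors' exact specification:

```python
import numpy as np

rng = np.random.default_rng(1)

def ising_market(n=16, beta=0.8, alpha=4.0, sweeps=200):
    """n x n spin lattice; spin +1 = buy, -1 = sell.
    beta: inverse temperature (herding strength);
    alpha: minority-game coupling to the mean opinion (a friction term)."""
    s = rng.choice([-1, 1], size=(n, n))
    mags = np.empty(sweeps)
    for t in range(sweeps):
        for _ in range(n * n):               # one Metropolis sweep
            i, j = rng.integers(0, n, size=2)
            nb = (s[(i+1) % n, j] + s[(i-1) % n, j]
                  + s[i, (j+1) % n] + s[i, (j-1) % n])
            h = nb - alpha * s.mean()        # herd locally, contrarian globally
            dE = 2.0 * s[i, j] * h
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                s[i, j] *= -1
        mags[t] = s.mean()
    return np.diff(mags)                     # magnetisation change as return proxy

returns = ising_market()
```

Scanning beta and alpha and testing the resulting series for autocorrelation and fat tails is then the kind of parameter sweep the abstract describes.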
Efficiency of Motivation Development Models for Hygienic Skills
Directory of Open Access Journals (Sweden)
Alexander V. Tscymbalystov
2017-09-01
Full Text Available The combined influence of family and state plays an important role in the development of an individual. This study evaluates the effectiveness of models for developing oral hygiene skills among children living in families (n = 218) and children under state care (n = 229). Groups were formed among the children who took part in the study: preschoolers of 5-7 years, schoolchildren of 8-11 years and adolescents of 12-15 years. During the initial examination, the hygienic status of the oral cavity was evaluated before and after tooth brushing. Subgroups were then formed in each age group according to three models of hygienic skills training: (1) a computer presentation lesson; (2) one of the students acting as a demonstrator of the skill; (3) individual training by a hygienist. During the next 48 hours the children did not take hygienic measures. The children were then invited to a control session to demonstrate the acquired oral care skills and to evaluate the effectiveness of each model for developing individual oral hygiene skills. During the control examination, the hygienic status was determined before and after tooth cleaning, which allowed us to determine regimens of hygienic measures for children of different social status and the effectiveness of the hygiene training models.
Islamic vs. conventional banks : Business models, efficiency and stability
Beck, T.H.L.; Demirgüc-Kunt, A.; Merrouche, O.
2013-01-01
How different are Islamic banks from conventional banks? Does the recent crisis justify a closer look at the Sharia-compliant business model for banking? When comparing conventional and Islamic banks, controlling for time-variant country-fixed effects, we find few significant differences in business
Operator-based linearization for efficient modeling of geothermal processes
Khait, M.; Voskov, D.V.
2018-01-01
Numerical simulation is one of the most important tools required for financial and operational management of geothermal reservoirs. The modern geothermal industry is challenged to run large ensembles of numerical models for uncertainty analysis, causing simulation performance to become a critical
Efficient estimation of an additive quantile regression model
Cheng, Y.; de Gooijer, J.G.; Zerom, D.
2009-01-01
In this paper two kernel-based nonparametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a viable alternative to the method of De Gooijer and Zerom (2003). By
Efficient estimation of an additive quantile regression model
Cheng, Y.; de Gooijer, J.G.; Zerom, D.
2010-01-01
In this paper two kernel-based nonparametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a viable alternative to the method of De Gooijer and Zerom (2003). By
Efficient estimation of an additive quantile regression model
Cheng, Y.; de Gooijer, J.G.; Zerom, D.
2011-01-01
In this paper, two non-parametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a more viable alternative to existing kernel-based approaches. The second estimator
A model for store handling : potential for efficiency improvement
Zelst, van S.M.; Donselaar, van K.H.; Woensel, van T.; Broekmeulen, R.A.C.M.; Fransoo, J.C.
2005-01-01
In retail stores, handling of products typically forms the largest share of the operational costs. The handling activities are mainly the stacking of the products on the shelves. While the impact of these costs on the profitability of a store is substantial, there are no models available of the
Efficient Beam-Type Structural Modeling of Rotor Blades
DEFF Research Database (Denmark)
Couturier, Philippe; Krenk, Steen
2015-01-01
The present paper presents two recently developed numerical formulations which enable accurate representation of the static and dynamic behaviour of wind turbine rotor blades using little modeling and computational effort. The first development consists of an intuitive method to extract fully...... by application to a composite section with bend-twist coupling and a real wind turbine blade....
Computationally efficient thermal-mechanical modelling of selective laser melting
Yang, Y.; Ayas, C.; Brabazon, Dermot; Naher, Sumsun; Ul Ahad, Inam
2017-01-01
The Selective laser melting (SLM) is a powder based additive manufacturing (AM) method to produce high density metal parts with complex topology. However, part distortions and accompanying residual stresses deteriorates the mechanical reliability of SLM products. Modelling of the SLM process is
Efficient Proof Engines for Bounded Model Checking of Hybrid Systems
DEFF Research Database (Denmark)
Fränzle, Martin; Herde, Christian
2005-01-01
In this paper we present HySat, a new bounded model checker for linear hybrid systems, incorporating a tight integration of a DPLL-based pseudo-Boolean SAT solver and a linear programming routine as core engine. In contrast to related tools like MathSAT, ICS, or CVC, our tool exploits all...
Practice What You Preach: Microfinance Business Models and Operational Efficiency
Bos, J.W.B.; Millone, M.M.
The microfinance sector has room for pure for-profit microfinance institutions (MFIs), non-profit organizations, and “social” for-profit firms that aim to pursue a double bottom line. Depending on their business model, these institutions target different types of borrowers, change the size of their
Practice what you preach: Microfinance business models and operational efficiency
Bos, J.W.B.; Millone, M.M.
2013-01-01
The microfinance sector is an example of a sector in which firms with different business models coexist. Next to pure for-profit microfinance institutions (MFIs), the sector has room for non-profit organizations, and includes 'social' for-profit firms that aim to maximize a double bottom line and
An Empirical Study of Efficiency and Accuracy of Probabilistic Graphical Models
DEFF Research Database (Denmark)
Nielsen, Jens Dalgaard; Jaeger, Manfred
2006-01-01
In this paper we compare Naïve Bayes (NB) models, general Bayes Net (BN) models and Probabilistic Decision Graph (PDG) models w.r.t. accuracy and efficiency. As the basis for our analysis we use graphs of size vs. likelihood that show the theoretical capabilities of the models. We also measure...
Efficient Model Selection for Sparse Least-Square SVMs
Directory of Open Access Journals (Sweden)
Xiao-Lei Xia
2013-01-01
Full Text Available The Forward Least-Squares Approximation (FLSA) SVM is a newly emerged Least-Squares SVM (LS-SVM) whose solution is extremely sparse. The algorithm uses the number of support vectors as the regularization parameter and ensures the linear independence of the support vectors which span the solution. This paper proposes a variant of the FLSA-SVM, namely the Reduced FLSA-SVM (RFLSA-SVM), which has reduced computational complexity and memory requirements. The strategy of "contexts inheritance" is introduced to improve the efficiency of tuning the regularization parameter for both the FLSA-SVM and the RFLSA-SVM algorithms. Experimental results on benchmark datasets show that, compared to the SVM and a number of its variants, RFLSA-SVM solutions contain fewer support vectors while maintaining competitive generalization ability. With respect to the time cost of tuning the regularization parameter, the RFLSA-SVM algorithm was empirically demonstrated to be the fastest, compared to the FLSA-SVM, LS-SVM, and SVM algorithms.
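For context, ordinary LS-SVM training reduces to a single dense linear system, which is exactly what makes its solution non-sparse and motivates FLSA-style variants. A minimal sketch of the dense baseline (RBF kernel; the toy data and hyperparameter values are arbitrary):

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def lssvm_fit(X, y, C=10.0, gamma=0.5):
    """Solve the LS-SVM KKT system  [[0, 1^T], [1, K + I/C]] [b; a] = [0; y]."""
    n = len(X)
    K = rbf_kernel(X, X, gamma)
    A = np.vstack([np.hstack([[0.0], np.ones(n)]),
                   np.hstack([np.ones((n, 1)), K + np.eye(n) / C])])
    sol = np.linalg.solve(A, np.hstack([[0.0], y]))
    return sol[0], sol[1:]                    # bias b, dense coefficients a

def lssvm_predict(Xnew, Xtr, b, a, gamma=0.5):
    return rbf_kernel(Xnew, Xtr, gamma) @ a + b

# XOR toy problem: every coefficient is non-zero, i.e. the solution is dense
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1., 1., 1., -1.])
b, a = lssvm_fit(X, y)
pred = lssvm_predict(X, X, b, a)
```

The FLSA/RFLSA algorithms of the abstract replace this one-shot dense solve with a greedy forward selection of linearly independent support vectors, which is what yields the sparse solution.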
Alternative Approaches to Technical Efficiency Estimation in the Stochastic Frontier Model
Acquah, H. de-Graft; Onumah, E. E.
2014-01-01
Estimating the stochastic frontier model and calculating the technical efficiency of decision making units are of great importance in applied production economics. This paper estimates technical efficiency from the stochastic frontier model using the Jondrow and the Battese and Coelli approaches. In order to compare the alternative methods, simulated data with sample sizes of 60 and 200 are generated from a stochastic frontier model commonly applied to agricultural firms. The simulated data are employed to co...
International Nuclear Information System (INIS)
Fang, Xiande; Xu, Yu
2011-01-01
The empirical model of turbine efficiency is necessary for control- and/or diagnosis-oriented simulation and useful for the simulation and analysis of the dynamic performance of turbine equipment and systems, such as air cycle refrigeration systems, power plants, turbine engines, and turbochargers. Existing empirical models of turbine efficiency are insufficient because no suitable form is available for air cycle refrigeration turbines. This work performs a critical review of empirical models (called mean value models in some literature) of turbine efficiency and develops an empirical model in the desired form for air cycle refrigeration, the dominant cooling approach in aircraft environmental control systems. The Taylor series and regression analysis are used to build the model: the Taylor series is used to expand functions containing the polytropic exponent, and regression analysis to finalize the model. Measured data from a turbocharger turbine and two air cycle refrigeration turbines are used for the regression analysis. The proposed model is compact and able to represent the turbine efficiency map. Its predictions agree with the measured data very well, with a corrected coefficient of determination R_c² ≥ 0.96 and a mean absolute percentage deviation of 1.19% for the three turbines. -- Highlights: → Performed a critical review of empirical models of turbine efficiency. → Developed an empirical model in the desired form for air cycle refrigeration, using the Taylor expansion and regression analysis. → Verified the method for developing the empirical model. → Verified the model.
Deformation data modeling through numerical models: an efficient method for tracking magma transport
Charco, M.; Gonzalez, P. J.; Galán del Sastre, P.
2017-12-01
Nowadays, multivariate collected data and robust physical models at volcano observatories are becoming crucial for providing effective volcano monitoring. Nevertheless, forecasting volcanic eruptions is notoriously difficult. Within this frame, one of the most promising methods to evaluate volcano hazard is the use of surface ground deformation, and in the last decades many developments in the field of deformation modeling have been achieved. In particular, numerical modeling allows realistic media features such as topography and crustal heterogeneities to be included, although it is still very time consuming to solve the inverse problem for near-real-time interpretations. Here, we present a method that can be efficiently used to estimate the location and evolution of magmatic sources based on real-time surface deformation data and Finite Element (FE) models. Generally, the search for the best-fitting magmatic (point) source(s) is conducted over an array of 3-D locations extending below a predefined volume region, and the Green functions for all the array components have to be precomputed. We propose a FE model for the pre-computation of Green functions in a mechanically heterogeneous domain, which eventually leads to a better description of the status of the volcanic area. The number of Green functions is reduced here to the number of observation points by using their reciprocity relationship. We present and test this methodology with an optimization method based on a Genetic Algorithm. Following synthetic and sensitivity tests to estimate the uncertainty of the model parameters, we apply the tool to magma tracking during the 2007 Kilauea volcano intrusion and eruption. We show how data inversion with numerical models can speed up the estimation of source parameters for a given volcano showing signs of unrest.
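The precomputed-Green-function search can be sketched with a simple grid search in place of the Genetic Algorithm and the standard Mogi point source in a homogeneous half-space in place of the FE Green functions; by linearity in the source strength, the best volume change at each candidate node is a one-line least squares. All station positions, source parameters and noise levels below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(5)

def mogi_uz(xs, ys, x0, y0, d, dV, nu=0.25):
    """Vertical surface displacement of a Mogi point source (volume change
    dV at depth d in a homogeneous elastic half-space, Poisson ratio nu)."""
    r2 = (xs - x0) ** 2 + (ys - y0) ** 2
    return (1.0 - nu) / np.pi * dV * d / (d * d + r2) ** 1.5

# 20 stations and synthetic "observed" displacements from a hidden source
xs = rng.uniform(-5e3, 5e3, 20)
ys = rng.uniform(-5e3, 5e3, 20)
obs = mogi_uz(xs, ys, 1.2e3, -0.8e3, 2.0e3, 1e7) + rng.normal(0.0, 1e-5, 20)

# grid search over candidate nodes with unit-dV Green functions; by linearity
# the optimal dV at each node is a closed-form least-squares scaling
best = None
for x0 in np.linspace(-3e3, 3e3, 25):
    for y0 in np.linspace(-3e3, 3e3, 25):
        for d in np.linspace(1e3, 4e3, 13):
            g = mogi_uz(xs, ys, x0, y0, d, 1.0)   # precomputable Green function
            dV = (g @ obs) / (g @ g)
            misfit = float(np.sum((obs - dV * g) ** 2))
            if best is None or misfit < best[0]:
                best = (misfit, x0, y0, d, dV)
```

Reciprocity enters because the Green functions only need to be computed once per observation point rather than once per candidate node, which is what makes the precomputation affordable in the FE setting.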
Management Model for efficient quality control in new buildings
Directory of Open Access Journals (Sweden)
C. E. Rodríguez-Jiménez
2017-09-01
Full Text Available The management of quality control in building processes in Spain is usually set up at different levels of demand. This work seeks a reference model for comparing the quality control of the building process of a specific product (a building) and evaluating its warranty level. To this end, we draw on specialized sources and carefully reviewed 153 real Quality Control cases using a multi-judgment method. Different techniques are applied to obtain an impartial valuation of the input parameters through the Delphi method (a 17-expert survey), whose matrix treatment with the Fuzzy-QFD tool condenses numerical references through a weighted distribution of the selected functions and their corresponding conditioning factors. The resulting model (M153) provides a quality-control reference against which to judge whether expectations of quality are met.
Efficient Transdermal Delivery of Benfotiamine in an Animal Model
Varadi, Gyula; Zhu, Zhen; G. Carter, Stephen
2015-01-01
We designed a transdermal system to serve as a delivery platform for benfotiamine utilizing the attributes of passive penetration enhancing molecules to penetrate through the outer layers of skin combined with the advance of incorporating various peripherally-acting vasodilators to enhance drug uptake. Benfotiamine, incorporated into this transdermal formulation, was applied to skin in an animal model in order to determine the ability to deliver this thiamine pro-drug effectively to the sub-...
Directory of Open Access Journals (Sweden)
Qing Yang
2015-04-01
Full Text Available In order to realize green economic and social development, pave a pathway towards green regional development in China, and develop effective scientific policy to assist in building green cities and countries, it is necessary to put forward a relatively accurate, scientific and concise green assessment method. This research uses the CCR (Charnes-Cooper-Rhodes) Data Envelopment Analysis (DEA) model to obtain the green development frontier surface based on annual cross-section data for 31 regions from 2008-2012. Furthermore, in order to rank the regions whose assessment values equal 1 in the CCR model, we chose the super-efficiency DEA model for further sorting. Meanwhile, based on the five-year panel data, the green development efficiency changes of the 31 regions are captured by the Malmquist index. Finally, the study assesses the reasons for regional differences; analyzing and discussing the results may point to a superior green development pathway for China.
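The input-oriented CCR model solves, for each decision-making unit (DMU), a small linear program: minimise θ such that a non-negative combination of all DMUs uses no more than θ times the unit's inputs while producing at least its outputs. A minimal sketch with toy data (one input, one output; the figures are made up for illustration):

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of DMU j0.
    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs).
    Decision variables: [theta, lambda_1 .. lambda_n]."""
    n, m = X.shape
    _, s = Y.shape
    c = np.zeros(n + 1)
    c[0] = 1.0                                # minimise theta
    # inputs:  sum_j lam_j x_ij - theta * x_i,j0 <= 0
    A_in = np.hstack([-X[j0][:, None], X.T])
    # outputs: -sum_j lam_j y_rj <= -y_r,j0
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[j0]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun

X = np.array([[1.0], [2.0], [4.0]])   # inputs of three toy regions
Y = np.array([[1.0], [1.0], [2.0]])   # outputs
eff = [ccr_efficiency(X, Y, j) for j in range(3)]
```

Efficient units score exactly 1, which is why the abstract needs the super-efficiency variant (the unit under evaluation is removed from the reference set, allowing scores above 1) to break ties among frontier regions.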
Model-based efficiency evaluation of combine harvester traction drives
Directory of Open Access Journals (Sweden)
Steffen Häberle
2015-08-01
Full Text Available As part of this research the drive train of combine harvesters is investigated in detail, with a focus on load and power distribution, energy consumption and usage distribution, explicitly explored on two test machines. Based on the lessons learned during field operations, the energy saving potential in the traction train of combine harvesters can now be quantified in model-based studies. Beyond that, the virtual machine trial provides an opportunity to compare innovative drivetrain architectures and control solutions under reproducible conditions. As a result, an evaluation method is presented and generically used to draw comparisons under locally representative operating conditions.
Personalization of models with many model parameters: an efficient sensitivity analysis approach.
Donders, W P; Huberts, W; van de Vosse, F N; Delhaas, T
2015-10-01
Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of individual input parameters or their interactions, are considered the gold standard. The variance portions are called the Sobol sensitivity indices and can be estimated by a Monte Carlo (MC) approach (e.g., Saltelli's method [1]) or by employing a metamodel (e.g., the (generalized) polynomial chaos expansion (gPCE) [2, 3]). All these methods require a large number of model evaluations when estimating the Sobol sensitivity indices for models with many parameters [4]. To reduce the computational cost, we introduce a two-step approach. In the first step, a subset of important parameters is identified for each output of interest using the screening method of Morris [5]. In the second step, a quantitative variance-based sensitivity analysis is performed using gPCE. Efficient sampling strategies are introduced to minimize the number of model runs required to obtain the sensitivity indices for models considering multiple outputs. The approach is tested using a model that was developed for predicting post-operative flows after creation of a vascular access for renal failure patients. We compare the sensitivity indices obtained with the novel two-step approach with those obtained from a reference analysis that applies Saltelli's MC method. The two-step approach was found to yield accurate estimates of the sensitivity indices at two orders of magnitude lower computational cost. Copyright © 2015 John Wiley & Sons, Ltd.
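The first step, Morris screening, is cheap to sketch: sample base points, perturb one parameter at a time, and average the absolute elementary effects; parameters with negligible μ* are dropped before the expensive variance-based step. The radial one-at-a-time variant below works on unit-cube parameters with a toy model (the quantitative gPCE step is omitted):

```python
import numpy as np

rng = np.random.default_rng(42)

def morris_mu_star(f, k, r=40, delta=0.2):
    """Mean absolute elementary effect mu* per parameter, on the unit cube.
    Radial one-at-a-time design: r base points, k+1 model runs each."""
    ee = np.zeros((r, k))
    for t in range(r):
        x = rng.random(k) * (1.0 - delta)   # leave room for the +delta step
        fx = f(x)
        for i in range(k):
            xp = x.copy()
            xp[i] += delta
            ee[t, i] = (f(xp) - fx) / delta
    return np.abs(ee).mean(axis=0)

# toy model: one strong, one weak, one inactive parameter
def model(x):
    return 10.0 * x[0] + 0.5 * x[1] ** 2

mu_star = morris_mu_star(model, k=3)
```

With r base points this costs r(k + 1) model runs, versus the thousands typically needed to estimate Sobol indices directly, which is the economy the two-step approach exploits.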
Ma, Xiaojun; Liu, Yan; Wei, Xiaoxue; Li, Yifan; Zheng, Mengchen; Li, Yudong; Cheng, Chaochao; Wu, Yumei; Liu, Zhaonan; Yu, Yuanbo
2017-08-01
Nowadays, environment problem has become the international hot issue. Experts and scholars pay more and more attention to the energy efficiency. Unlike most studies, which analyze the changes of TFEE in inter-provincial or regional cities, TFEE is calculated with the ratio of target energy value and actual energy input based on data in cities of prefecture levels, which would be more accurate. Many researches regard TFP as TFEE to do analysis from the provincial perspective. This paper is intended to calculate more reliably by super efficiency DEA, observe the changes of TFEE, and analyze its relation with TFP, and it proves that TFP is not equal to TFEE. Additionally, the internal influences of the TFEE are obtained via the Malmquist index decomposition. The external influences of the TFFE are analyzed afterward based on the Tobit models. Analysis results demonstrate that Heilongjiang has the highest TFEE followed by Jilin, and Liaoning has the lowest TFEE. Eventually, some policy suggestions are proposed for the influences of energy efficiency and study results.
IS CAPM AN EFFICIENT MODEL? ADVANCED VERSUS EMERGING MARKETS
Directory of Open Access Journals (Sweden)
Iulian IHNATOV
2015-10-01
Full Text Available CAPM is one of the financial models most widely used by investors all over the world for analyzing the correlation between risk and return, and is considered a milestone in the financial literature. However, in recent years it has been criticized for the unrealistic assumptions on which it is based and for the fact that the expected returns it forecasts are wrong. The aim of this paper is to test CAPM statistically for a set of shares listed on the New York Stock Exchange, Nasdaq, Warsaw Stock Exchange and Bucharest Stock Exchange (developed markets vs. emerging markets) and to compare the expected returns resulting from CAPM with the actual returns. Thereby, we intend to verify whether the model holds for the Central and Eastern European capital markets, mostly dominated by Poland, and whether the Polish and Romanian stock market indices may faithfully represent market portfolios. Moreover, we compare the results for Poland and Romania. The analysis confirms that the CAPM is statistically verified for all three capital markets, but it fails to correctly forecast expected returns. This means that investors may make poor investment decisions, incurring large losses.
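The core statistical test behind such studies is an OLS regression of excess asset returns on excess market returns, with the slope giving the CAPM beta. A minimal sketch with synthetic data (the return series and parameter values are illustrative, not taken from the exchanges studied):

```python
import numpy as np

rng = np.random.default_rng(7)

def capm_fit(r_asset, r_market, r_free=0.0):
    """OLS fit of the CAPM regression  R_i - R_f = alpha + beta (R_m - R_f)."""
    x = r_market - r_free
    y = r_asset - r_free
    X = np.column_stack([np.ones_like(x), x])
    (alpha, beta), *_ = np.linalg.lstsq(X, y, rcond=None)
    return alpha, beta

# synthetic weekly returns with a known beta of 1.3 and a small alpha
r_m = rng.normal(0.002, 0.02, size=500)
r_a = 0.001 + 1.3 * r_m + rng.normal(0.0, 0.005, size=500)
alpha, beta = capm_fit(r_a, r_m)
```

Testing CAPM then amounts to checking whether the fitted alpha is statistically indistinguishable from zero, and comparing beta-implied expected returns with realized ones, which is where the model fails in the paper's data.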
Efficient modeling of photonic crystals with local Hermite polynomials
International Nuclear Information System (INIS)
Boucher, C. R.; Li, Zehao; Albrecht, J. D.; Ram-Mohan, L. R.
2014-01-01
Developing compact algorithms for accurate electrodynamic calculations with minimal computational cost is an active area of research given the increasing complexity in the design of electromagnetic composite structures such as photonic crystals, metamaterials, optical interconnects, and on-chip routing. We show that electric and magnetic (EM) fields can be calculated using scalar Hermite interpolation polynomials as the numerical basis functions without having to invoke edge-based vector finite elements to suppress spurious solutions or to satisfy boundary conditions. This approach offers several fundamental advantages as evidenced through band structure solutions for periodic systems and through waveguide analysis. Compared with reciprocal space (plane wave expansion) methods for periodic systems, advantages are shown in computational costs, the ability to capture spatial complexity in the dielectric distributions, the demonstration of numerical convergence with scaling, and variational eigenfunctions free of numerical artifacts that arise from mixed-order real space basis sets or the inherent aberrations from transforming reciprocal space solutions of finite expansions. The photonic band structure of a simple crystal is used as a benchmark comparison and the ability to capture the effects of spatially complex dielectric distributions is treated using a complex pattern with highly irregular features that would stress spatial transform limits. This general method is applicable to a broad class of physical systems, e.g., to semiconducting lasers which require simultaneous modeling of transitions in quantum wells or dots together with EM cavity calculations, to modeling plasmonic structures in the presence of EM field emissions, and to on-chip propagation within monolithic integrated circuits
Efficient Transdermal Delivery of Benfotiamine in an Animal Model
Directory of Open Access Journals (Sweden)
Gyula Varadi
2015-01-01
Full Text Available We designed a transdermal system to serve as a delivery platform for benfotiamine, utilizing the attributes of passive penetration-enhancing molecules to penetrate the outer layers of skin, combined with various peripherally-acting vasodilators to enhance drug uptake. Benfotiamine, incorporated into this transdermal formulation, was applied to skin in an animal model in order to determine the ability to deliver this thiamine pro-drug effectively to the sub-epithelial layers. In this proof-of-concept study in guinea pigs, we found that a single topical application of either a solubilized form of benfotiamine (15 mg) or a microcrystalline suspension form (25 mg) resulted in considerable increases of the dephosphorylated benfotiamine (S-benzoylthiamine) in the skin tissue as well as significant increases in the thiamine and thiamine phosphate pools compared to control animals. The ~8000x increase in thiamine and the increases in its phosphorylated derivatives in the epidermis and dermis of the test animals strongly indicate that the topical benfotiamine treatment achieves the desired outcome: an intracellular increase of the activating cofactor pool for the transketolase enzyme, which is implicated in the pathophysiology of diabetic neuropathy.
LEADERSHIP MODELS AND EFFICIENCY IN DECISION CRISIS SITUATIONS, DURING DISASTERS
Directory of Open Access Journals (Sweden)
JAIME RIQUELME CASTAÑEDA
2017-09-01
Full Text Available This article explains how effective leadership is exercised by a team during a decision crisis in the context of a disaster. From a process approach, we analyze variables such as flexibility, value congruence, rationality, politicization, and quality of design. To that end, we carried out field work with information obtained from the three emergency headquarters deployed by the Chilean Armed Forces in response to the 8.8-magnitude earthquake of February 27th, 2010. The data are analyzed using econometric techniques. The results suggest that original ideas and rigorous analysis are the keys to securing the quality of the decision. They also reveal that efficient operations in a disaster require a strong presence of vision, mission, and inspiration built on a solid, pre-existing base of goals and motivations. Finally, the findings support the relationship between leadership styles and efficiency in crisis decision-making during disasters, and open a space for building a theoretical model of decision-making.
Energy efficiency optimisation for distillation column using artificial neural network models
International Nuclear Information System (INIS)
Osuolale, Funmilayo N.; Zhang, Jie
2016-01-01
This paper presents a neural network based strategy for the modelling and optimisation of energy efficiency in distillation columns incorporating the second law of thermodynamics. Real-time optimisation of distillation columns based on mechanistic models is often infeasible due to the effort of model development and the large computational cost of evaluating mechanistic models. This issue can be addressed by using neural network models, which can be developed quickly from process operation data. The computation time for neural network model evaluation is very short, making such models ideal for real-time optimisation. Bootstrap aggregated neural networks are used in this study for enhanced model accuracy and reliability. Aspen HYSYS is used for the simulation of the distillation systems. Neural network models for exergy efficiency and product compositions are developed from simulated process operation data and are used to maximise exergy efficiency while satisfying product quality constraints. Applications to the binary systems of methanol-water and benzene-toluene separation culminate in reductions of utility consumption of 8.2% and 28.2% respectively. Application to multi-component separation columns also demonstrates the effectiveness of the proposed method, with a 32.4% improvement in exergy efficiency. - Highlights: • Neural networks can accurately model exergy efficiency in distillation columns. • Bootstrap aggregated neural networks offer improved model prediction accuracy. • Improved exergy efficiency is obtained through model based optimisation. • Reductions of utility consumption by 8.2% and 28.2% were achieved for binary systems. • The exergy efficiency for multi-component distillation is increased by 32.4%.
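As a sketch of the bootstrap aggregation (bagging) idea used in the abstract above, the snippet below bags simple least-squares surrogates, standing in for the paper's neural networks, over resampled copies of toy process data; the toy efficiency function and all numeric values are illustrative assumptions, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy process data: "exergy efficiency" as an assumed function of two inputs
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = 0.6 + 0.3 * X[:, 0] - 0.2 * X[:, 1] ** 2 + rng.normal(0, 0.01, 200)

def fit_quadratic(Xb, yb):
    # Least-squares surrogate on quadratic features (stand-in for one network)
    A = np.column_stack([np.ones(len(Xb)), Xb, Xb ** 2])
    coef, *_ = np.linalg.lstsq(A, yb, rcond=None)
    return coef

def predict(coef, Xq):
    A = np.column_stack([np.ones(len(Xq)), Xq, Xq ** 2])
    return A @ coef

# Bootstrap aggregation: fit each surrogate on a resampled data set,
# then average the predictions; the spread gives a reliability estimate.
models = []
for _ in range(20):
    idx = rng.integers(0, len(X), len(X))
    models.append(fit_quadratic(X[idx], y[idx]))

Xq = np.array([[0.5, 0.5]])
preds = np.array([predict(c, Xq)[0] for c in models])
mean, spread = preds.mean(), preds.std()
```

Averaging over bootstrap replicas is what gives the "enhanced model accuracy and reliability" mentioned in the abstract: single-model fitting noise is reduced, and the ensemble spread flags regions where the surrogate is untrustworthy.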
International Nuclear Information System (INIS)
Zheng, Lixing; Deng, Jianqiang
2017-01-01
Highlights: • An ejector distributed-parameter model is developed to study ejector efficiencies. • Feasible component and total efficiency correlations of the ejector are established. • The new efficiency correlations are applied to obtain the dynamic characteristics of the EERC. • A more suitable fixed efficiency value can be determined by the proposed correlations. - Abstract: In this study we combine experimental measurement data and a theoretical model of the ejector to determine CO2 ejector component efficiencies, including the motive nozzle, suction chamber, mixing section and diffuser efficiencies, as well as the total ejector efficiency. The ejector is modeled using the distributed-parameter method: the flow passage is divided into a number of elements and the governing equations are formulated from the differential equations of mass, momentum and energy conservation. The ejector efficiencies are investigated under different ejector geometric parameters and operational conditions, and the corresponding empirical correlations are established. Moreover, the correlations are incorporated into a transient model of the transcritical CO2 ejector expansion refrigeration cycle (EERC), and dynamic simulations are performed based on variable component efficiencies and on fixed values. The motive nozzle, suction chamber, mixing section and diffuser efficiencies vary from 0.74 to 0.89, 0.86 to 0.96, 0.73 to 0.9 and 0.75 to 0.95 under the studied conditions, respectively. The responses of the suction flow pressure and discharge pressure differ noticeably between the variable efficiencies and the fixed efficiencies taken from previous studies, whereas when the fixed values are determined by the presented correlations, the responses are basically the same.
2012-09-01
The final report for the Model Orlando Regionally Efficient Travel Management Coordination Center (MORE TMCC) presents the details of the 2-year process of the partial deployment of the original MORE TMCC design created in Phase I of this project...
Energy efficiency model for small/medium geothermal heat pump systems
Directory of Open Access Journals (Sweden)
Staiger Robert
2015-06-01
Full Text Available Heating application efficiency is a crucial point for saving energy and reducing greenhouse gas emissions. Today, the EU legal framework clearly defines how heating systems should perform, how buildings should be designed in an energy-efficient manner and how renewable energy sources should be used. Using heat pumps (HP) as an alternative "Renewable Energy System" could be one solution for increasing efficiency, using less energy, reducing energy dependency and reducing greenhouse gas emissions. This scientific article takes a closer look at the different efficiency dependencies of such geothermal HP (GHP) systems for domestic buildings (small/medium HP). Manufacturers of HP appliances must document the efficiency, the so-called COP (Coefficient of Performance), in the EU under certain standards. In the technical datasheets of HP appliances, the COP gives a clear indication of the performance quality of an HP device. However, the COP of an HP appliance and the efficiency of a working HP system can differ significantly. For this reason, an annual efficiency statistic named the "Seasonal Performance Factor" (SPF) has been defined to give an overall efficiency for comparing HP systems. With this indicator, conclusions can be drawn from installation, economic, environmental, performance and risk points of view. A technical and economic HP model shows the dependence of energy efficiency problems in HP systems. To reduce the complexity of the HP model, only the factors important for efficiency dependencies are used. Both dynamic and static situations of HPs and their efficiency are considered. The latest data from field tests of HP systems and the practical experience of the last 10 years are compared with one of the latest simulation programs by means of two practical geothermal HP system calculations. The gathered empirical data allow for a better estimate of the HP system efficiency, their
International Nuclear Information System (INIS)
Chiu, Yung-Ho; Lee, Jen-Hui; Lu, Ching-Cheng; Shyu, Ming-Kuang; Luo, Zhengying
2012-01-01
This study develops a hybrid meta-frontier DEA model in which inputs are distinguished into radial inputs, which change proportionally, and non-radial inputs, which change non-proportionally, in order to measure the technical efficiency and technology gap ratios (TGR) of four different regions: Asia, Africa, America and Europe. This paper selects 87 countries that were members of the World Energy Council from 2005 to 2007. The input variables are industry and population, while the output variables are gross domestic product (GDP) and the amount of fossil-fuel CO2 emissions. The results show that countries' efficiency rankings within their own regions exhibit considerable volatility. In view of the technology gap ratio, Europe is the most efficient region, while over the same period Asia has lower efficiency than the other regions. Furthermore, regions with higher industry (or GDP) might not have higher efficiency from 2005 to 2007, and higher CO2 emissions or population also might not mean lower efficiency. In addition, Brazil is not an OECD member, yet it is more efficient than the OECD members among the emerging countries. OECD countries are more efficient than non-OECD countries, and Europe controls CO2 emissions better than Asia. If non-OECD or Asian countries are to reach the best efficiency score, they should try to control CO2 emissions. - Highlights: ► A new meta-frontier model for evaluating efficiency and technology gap ratios. ► Higher CO2 emissions might not mean lower efficiency than other regions, as in Europe. ► Asia's output and CO2 emissions increased simultaneously while its efficiency fell. ► Non-OECD or Asian countries should control CO2 emissions to reach the best efficiency score.
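The abstract above does not reproduce its model's equations, but the standard input-oriented CCR DEA building block that meta-frontier models extend can be sketched as a small linear program per decision-making unit; the four hypothetical units and their single input/output values below are illustrative, not the paper's country data:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y, o):
    """Input-oriented CCR DEA efficiency of unit o (minimal sketch).

    min theta  s.t.  sum_j lam_j x_ij <= theta * x_io  (each input i)
                     sum_j lam_j y_rj >= y_ro          (each output r)
                     lam >= 0
    """
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                 # minimise theta
    A_ub, b_ub = [], []
    for i in range(X.shape[1]):                 # input constraints
        A_ub.append(np.r_[-X[o, i], X[:, i]])
        b_ub.append(0.0)
    for r in range(Y.shape[1]):                 # output constraints
        A_ub.append(np.r_[0.0, -Y[:, r]])
        b_ub.append(-Y[o, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

# Hypothetical units: one input (e.g. population), one output (e.g. GDP)
X = np.array([[2.0], [4.0], [3.0], [5.0]])
Y = np.array([[2.0], [4.0], [2.0], [3.0]])
scores = [round(dea_ccr_efficiency(X, Y, o), 3) for o in range(4)]
```

A meta-frontier analysis would run such programs twice, once against each region's own frontier and once against the pooled frontier, and report the ratio of the two scores as the technology gap ratio.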
Molecular Simulation towards Efficient and Representative Subsurface Reservoirs Modeling
Kadoura, Ahmad
2016-09-01
This dissertation focuses on the application of Monte Carlo (MC) molecular simulation and Molecular Dynamics (MD) in modeling the thermodynamics and flow of subsurface reservoir fluids. First, MC molecular simulation is proposed as a promising method to replace correlations and equations of state in subsurface flow simulators. In order to accelerate MC simulations, a set of early rejection schemes (conservative, hybrid, and non-conservative), in addition to extrapolation methods through reweighting and reconstruction of pre-generated MC Markov chains, were developed. Furthermore, an extensive study was conducted to investigate sorption and transport processes of methane, carbon dioxide, water, and their mixtures in the inorganic part of shale using both MC and MD simulations. These simulations covered a wide range of thermodynamic conditions, pore sizes, and fluid compositions, shedding light on several interesting findings, for example, the possibility of adsorbing more carbon dioxide at higher preadsorbed water concentrations and relatively large basal spacings. The dissertation is divided into four chapters. The first chapter is the introductory part, where a brief background on molecular simulation and the motivations are given. The second chapter discusses the theoretical aspects and methodology of the proposed MC speed-up techniques, in addition to the corresponding results, leading to the successful multi-scale simulation of the compressible single-phase flow scenario. In Chapter 3, the results of our extensive study on shale gas at laboratory conditions are reported. In the fourth and last chapter, we end the dissertation with a few concluding remarks highlighting the key findings and summarizing future directions.
A Traction Control Strategy with an Efficiency Model in a Distributed Driving Electric Vehicle
Lin, Cheng; Cheng, Xingqun
2014-01-01
Both active safety and fuel economy are important issues for vehicles. This paper focuses on a traction control strategy with an efficiency model in a distributed driving electric vehicle. In emergency situations, a sliding mode control algorithm was employed to achieve anti-slip control by keeping the wheels' slip ratios below 20%. For general longitudinal driving cases, an efficiency model aiming at improving fuel economy was built through an offline optimization stream within the tw...
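A minimal sketch of the slip-ratio logic described above, with a crude torque cut-off standing in for the actual sliding mode controller; the 20% threshold comes from the abstract, everything else (function names, speeds, torque values) is an illustrative assumption:

```python
def slip_ratio(v_wheel, v_vehicle):
    # Longitudinal slip during traction: (wheel speed - vehicle speed) / wheel speed
    return (v_wheel - v_vehicle) / max(v_wheel, 1e-6)

def traction_torque(t_request, v_wheel, v_vehicle, slip_max=0.2):
    # Crude stand-in for the sliding-mode law: pass the torque request
    # while slip stays below the 20% threshold, cut it otherwise
    return t_request if slip_ratio(v_wheel, v_vehicle) <= slip_max else 0.0

t_ok = traction_torque(100.0, 11.0, 10.0)    # slip ~9%: request passes
t_cut = traction_torque(100.0, 14.0, 10.0)   # slip ~29%: request cut
```

A real sliding mode controller would drive the slip error to a sliding surface continuously rather than switching the torque off, but the controlled variable and threshold are the same.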
Four-shell atomic model to compute the counting efficiency of electron-capture nuclides
International Nuclear Information System (INIS)
Grau Malonda, A.; Fernandez Martinez, A.
1985-01-01
The present paper develops a four-shell atomic model in order to obtain the detection efficiency in liquid scintillation counting. Mathematical expressions are given to calculate the probabilities of the 229 different atomic rearrangements as well as the corresponding effective energies. This new model will permit the study of the influence of the different parameters upon the counting efficiency for nuclides of high atomic number. (Author) 7 refs
International Nuclear Information System (INIS)
Kulikovsky, A.A.
2001-01-01
The efficiency of streamer corona depends on a number of factors such as electrode geometry, voltage pulse parameters, gas pressure, etc. In the past 5 years, two-dimensional models of streamers in nonuniform fields in air have been developed. These models make it possible to simulate streamer dynamics and the generation of species, and to investigate the influence of external parameters on species production. In this work, the influence of the Laplacian field on the efficiency of radical generation is investigated
Weng Siew, Lam; Kah Fai, Liew; Weng Hoe, Lam
2018-04-01
Financial ratio and risk are important financial indicators for evaluating the financial performance or efficiency of companies. Therefore, the financial ratio and risk factor need to be taken into consideration when evaluating the efficiency of companies with a Data Envelopment Analysis (DEA) model. In a DEA model, the efficiency of a company is measured as the ratio of sum-weighted outputs to sum-weighted inputs. The objective of this paper is to propose a DEA model that incorporates the financial ratio and risk factor in evaluating and comparing the efficiency of the financial companies in Malaysia. In this study, the listed financial companies in Malaysia from 2004 until 2015 are investigated. The results of this study show that AFFIN, ALLIANZ, APEX, BURSA, HLCAP, HLFG, INSAS, LPI, MNRB, OSK, PBBANK, RCECAP and TA are ranked as efficient companies. This implies that these efficient companies have utilized their resources or inputs optimally to generate the maximum outputs. This study is significant because it helps to identify the efficient financial companies as well as determine the optimal input and output weights that maximize the efficiency of financial companies in Malaysia.
A Fuzzy Logic Model to Classify Design Efficiency of Nursing Unit Floors
Directory of Open Access Journals (Sweden)
Tuğçe KAZANASMAZ
2010-01-01
Full Text Available This study was conducted to determine classifications for the planimetric design efficiency of certain public hospitals by developing a fuzzy logic algorithm. Utilizing primary areas and circulation areas from nursing unit floor plans, the study employed triangular membership functions for the fuzzy subsets. The input variables of primary areas per bed and circulation areas per bed were fuzzified in this model. The relationship between input variables and output variable of design efficiency were displayed as a result of fuzzy rules. To test existing nursing unit floors, efficiency output values were obtained and efficiency classes were constructed by this model in accordance with general norms, guidelines and previous studies. The classification of efficiency resulted from the comparison of hospitals.
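The triangular membership functions mentioned in the abstract above can be sketched in a few lines; the fuzzy subsets, breakpoints (m² per bed) and rules below are illustrative assumptions, not the paper's calibrated values:

```python
def tri(x, a, b, c):
    # Triangular membership function: 0 outside [a, c], peak of 1 at b
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical fuzzy subsets for the two fuzzified inputs (area per bed)
primary = {"low": (10, 15, 20), "high": (15, 25, 35)}
circulation = {"low": (5, 8, 11), "high": (8, 14, 20)}

def efficiency_degree(p_area, c_area):
    # Two Mamdani-style rules, strength taken as the min of the antecedents:
    #   efficient   if primary is low  AND circulation is low
    #   inefficient if primary is high AND circulation is high
    eff = min(tri(p_area, *primary["low"]), tri(c_area, *circulation["low"]))
    ineff = min(tri(p_area, *primary["high"]), tri(c_area, *circulation["high"]))
    return {"efficient": eff, "inefficient": ineff}

deg = efficiency_degree(14.0, 8.5)
```

The efficiency class of a floor plan would then be read off the rule with the highest firing strength, or from a defuzzified output value as in the paper.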
A method to identify energy efficiency measures for factory systems based on qualitative modeling
Krones, Manuela
2017-01-01
Manuela Krones develops a method that supports factory planners in generating energy-efficient planning solutions. The method provides qualitative description concepts for factory planning tasks and energy efficiency knowledge as well as an algorithm-based linkage between these measures and the respective planning tasks. Its application is guided by a procedure model which allows a general applicability in the manufacturing sector. The results contain energy efficiency measures that are suitable for a specific planning task and reveal the roles of various actors for the measures’ implementation. Contents Driving Concerns for and Barriers against Energy Efficiency Approaches to Increase Energy Efficiency in Factories Socio-Technical Description of Factory Planning Tasks Description of Energy Efficiency Measures Case Studies on Welding Processes and Logistics Systems Target Groups Lecturers and Students of Industrial Engineering, Production Engineering, Environmental Engineering, Mechanical Engineering Practi...
The evaluation model of the enterprise energy efficiency based on DPSR.
Wei, Jin-Yu; Zhao, Xiao-Yu; Sun, Xue-Shan
2017-05-08
Reasonable evaluation of enterprise energy efficiency is important work for reducing energy consumption. In this paper, an effective energy efficiency evaluation index system is proposed based on DPSR (Driving forces-Pressure-State-Response), taking the actual situation of enterprises into consideration. This index system, which covers multi-dimensional indexes of enterprise energy efficiency, can reveal the complete causal chain: the "driving forces" and "pressure" behind the enterprise energy efficiency "state" caused by the internal and external environment, and the ultimate enterprise energy-saving "response" measures. Furthermore, the ANP (Analytic Network Process) and a cloud model are used to calculate the weight of each index and evaluate the energy efficiency level. The analysis of BL Company verifies the feasibility of this index system and provides an effective way to improve energy efficiency.
Mathematics model of filtration efficiency of moisture separator for nuclear reactors
International Nuclear Information System (INIS)
Zhang Zhenzhong; Jiang Feng; Huang Yunfeng
2010-01-01
In order to study the filtration mechanism of the moisture separator for water droplets of 5∼10 μm, this paper sets up a physical model. The mixed meshes can be classified into three types: standard meshes, bur meshes and middle meshes. For all fibers of the wire meshes and the vertical fibers of the standard mixed meshes, a Kuwabara flow field is used to track the particles, obtain the inertial impaction efficiency and then calculate the total filtration efficiency of the meshes. For the other fibers, an around-flat flow field is added to the Kuwabara flow field to calculate the efficiency. Lastly, the total efficiency of the moisture separator, obtained from the equation for the filtration efficiency of filters in series, is compared with the experimental data. The result shows that, under the standard condition, the calculated value is consistent with the experimental efficiency data. (authors)
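The filters-in-series equation referred to above follows from multiplying the penetrations of the individual stages: a droplet is collected unless it escapes every stage. A minimal sketch with illustrative stage efficiencies (the numbers are assumptions, not the paper's measured values):

```python
def series_efficiency(stage_etas):
    # Total collection efficiency of filter stages in series:
    # penetrations (1 - eta) multiply, so eta_total = 1 - prod(1 - eta_k)
    penetration = 1.0
    for eta in stage_etas:
        penetration *= (1.0 - eta)
    return 1.0 - penetration

# Illustrative per-stage efficiencies, e.g. standard, bur and middle meshes
total = series_efficiency([0.60, 0.45, 0.30])
```

Note that the total efficiency always exceeds that of the best single stage, which is why a modest per-mesh impaction efficiency can still yield a high separator efficiency overall.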
Mathematical modelling of a steam boiler room to research thermal efficiency
International Nuclear Information System (INIS)
Bujak, J.
2008-01-01
This paper introduces a mathematical model of a boiler room to research its thermal efficiency. The model is regarded as an open thermodynamic system exchanging mass, energy, and heat with the atmosphere. On those grounds, the mass and energy balances were calculated. Here I show several possibilities concerning how this model may be applied. Test results for the coefficient of thermal efficiency were compared with a real object, i.e. the steam boiler room of the Provincial Hospital in Wloclawek (Poland). The tests were carried out over 18 months. The results obtained in the boiler room were used for verification of the mathematical model
International Nuclear Information System (INIS)
Hall, M.L.; Davis, A.B.
2005-01-01
Accurate modeling of radiative energy transport through cloudy atmospheres is necessary for both climate modeling with GCMs (Global Climate Models) and remote sensing. Previous modeling efforts have taken advantage of extreme aspect ratios (cells that are very wide horizontally) by assuming a 1-D treatment vertically - the Independent Column Approximation (ICA). Recent attempts to resolve radiation transport through the clouds have drastically changed the aspect ratios of the cells, moving them closer to unity, such that the ICA model is no longer valid. We aim to provide a higher-fidelity atmospheric radiation transport model which increases accuracy while maintaining efficiency. To that end, this paper describes the development of an efficient 3-D-capable radiation code that can be easily integrated into cloud resolving models as an alternative to the resident 1-D model. Applications to test cases from the Intercomparison of 3-D Radiation Codes (I3RC) protocol are shown
Practical Validation of Economic Efficiency Modelling Method for Multi-Boiler Heating System
Directory of Open Access Journals (Sweden)
Aleksejs Jurenoks
2017-12-01
Full Text Available In present-day conditions, information technology is frequently associated with the modelling process, using computer technology as well as information networks. Statistical modelling is one of the most widespread methods of research into economic systems. The selection of methods for modelling economic systems depends on a great number of conditions of the system under study. Modelling is frequently associated with a factor of uncertainty (or risk) whose description goes beyond the confines of traditional statistical modelling, which in turn complicates the modelling process. This article describes the modelling process for assessing the economic efficiency of a multi-boiler adaptive heating system in real time, which allows dynamic changes in the operating scenarios of system service installations while enhancing the economic efficiency of the system in consideration.
The composite supply chain efficiency model: A case study of the Sishen-Saldanha supply chain
Directory of Open Access Journals (Sweden)
Leila L. Goedhals-Gerber
2016-01-01
Full Text Available As South Africa strives to be a major force in global markets, it is essential that South African supply chains achieve and maintain a competitive advantage. One approach to achieving this is to ensure that South African supply chains maximise their levels of efficiency. Consequently, the efficiency levels of South Africa’s supply chains must be evaluated. The objective of this article is to propose a model that can assist South African industries in becoming internationally competitive by providing them with a tool for evaluating their levels of efficiency both as individual firms and as components in an overall supply chain. The Composite Supply Chain Efficiency Model (CSCEM) was developed to measure supply chain efficiency across supply chains using variables identified as problem areas experienced by South African supply chains. The CSCEM is tested in this article using the Sishen-Saldanha iron ore supply chain as a case study. The results indicate that all three links or nodes along the Sishen-Saldanha iron ore supply chain performed well. The average efficiency of the rail leg was 97.34%, while the average efficiencies of the mine and the port were 97% and 95.44%, respectively. The results also show that the CSCEM can be used by South African firms to measure their levels of supply chain efficiency. This article concludes with the benefits of the CSCEM.
Energy efficiency of selected OECD countries: A slacks based model with undesirable outputs
International Nuclear Information System (INIS)
Apergis, Nicholas; Aye, Goodness C.; Barros, Carlos Pestana; Gupta, Rangan; Wanke, Peter
2015-01-01
This paper presents an efficiency assessment of selected OECD countries using a Slacks Based Model with undesirable or bad outputs (SBM-Undesirable). In this research, SBM-Undesirable is first used in a two-stage approach to assess the relative efficiency of OECD countries using the indicators most frequently adopted in the energy efficiency literature. In the second stage, GLMM-MCMC methods are combined with the SBM-Undesirable results as part of an attempt to produce a model of energy performance with effective predictive ability. The results reveal different impacts of contextual variables, such as economic blocks and the capital-labor ratio, on energy efficiency levels. - Highlights: • We analyze the energy efficiency of selected OECD countries. • SBM-Undesirable and MCMC-GLMM are combined for this purpose. • We find that efficiency levels are high but declining over time. • Analysis with contextual variables shows varying efficiency levels across groups. • Capital-intensive countries are more energy efficient than labor-intensive countries.
Kaya, Mine; Hajimirza, Shima
2018-05-25
This paper uses surrogate modeling for very fast design of thin film solar cells with improved solar-to-electricity conversion efficiency. We demonstrate that the wavelength-specific optical absorptivity of a thin film multi-layered amorphous-silicon-based solar cell can be modeled accurately with Neural Networks and can be efficiently approximated as a function of cell geometry and wavelength. Consequently, the external quantum efficiency can be computed by averaging surrogate absorption and carrier recombination contributions over the entire irradiance spectrum in an efficient way. Using this framework, we optimize a multi-layer structure consisting of ITO front coating, metallic back-reflector and oxide layers for achieving maximum efficiency. Our required computation time for an entire model fitting and optimization is 5 to 20 times less than the best previous optimization results based on direct Finite Difference Time Domain (FDTD) simulations, therefore proving the value of surrogate modeling. The resulting optimization solution suggests at least 50% improvement in the external quantum efficiency compared to bare silicon, and 25% improvement compared to a random design.
Is the Langevin phase equation an efficient model for oscillating neurons?
Ota, Keisuke; Tsunoda, Takamasa; Omori, Toshiaki; Watanabe, Shigeo; Miyakawa, Hiroyoshi; Okada, Masato; Aonishi, Toru
2009-12-01
The Langevin phase model is an important canonical model for capturing coherent oscillations of neural populations. However, little attention has been given to verifying its applicability. In this paper, we demonstrate that the Langevin phase equation is an efficient model for neural oscillators by using the machine learning method in two steps: (a) Learning of the Langevin phase model. We estimated the parameters of the Langevin phase equation, i.e., a phase response curve and the intensity of white noise from physiological data measured in the hippocampal CA1 pyramidal neurons. (b) Test of the estimated model. We verified whether a Fokker-Planck equation derived from the Langevin phase equation with the estimated parameters could capture the stochastic oscillatory behavior of the same neurons disturbed by periodic perturbations. The estimated model could predict the neural behavior, so we can say that the Langevin phase equation is an efficient model for oscillating neurons.
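A minimal sketch of simulating a Langevin phase equation of the kind described above, using Euler-Maruyama integration of dφ = (ω + Z(φ)I(t)) dt + σ Z(φ) dW; the sinusoidal phase response curve, intrinsic frequency, perturbation and noise intensity are assumptions for illustration, not the parameters estimated from the CA1 neurons:

```python
import numpy as np

rng = np.random.default_rng(1)

omega = 2 * np.pi * 5.0                          # assumed 5 Hz intrinsic frequency
sigma = 0.5                                      # assumed white-noise intensity
Z = lambda phi: 1.0 + np.cos(phi)                # assumed phase response curve
I = lambda t: 0.2 * np.sin(2 * np.pi * 5.0 * t)  # assumed periodic perturbation

dt, T = 1e-3, 2.0
n = int(T / dt)
phi = np.empty(n + 1)
phi[0] = 0.0
for k in range(n):
    # Euler-Maruyama step: deterministic drift plus scaled Wiener increment
    t = k * dt
    drift = omega + Z(phi[k]) * I(t)
    phi[k + 1] = phi[k] + drift * dt + sigma * Z(phi[k]) * np.sqrt(dt) * rng.normal()

cycles = phi[-1] / (2 * np.pi)                   # roughly 10 cycles in 2 s at 5 Hz
```

Fitting the model to data, as in step (a) of the abstract, amounts to estimating Z and σ from measured phase trajectories; step (b) then compares the phase density of such simulations (or the associated Fokker-Planck solution) against the recorded neurons.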
Is the Langevin phase equation an efficient model for oscillating neurons?
International Nuclear Information System (INIS)
Ota, Keisuke; Tsunoda, Takamasa; Aonishi, Toru; Omori, Toshiaki; Okada, Masato; Watanabe, Shigeo; Miyakawa, Hiroyoshi
2009-01-01
The Langevin phase model is an important canonical model for capturing coherent oscillations of neural populations. However, little attention has been given to verifying its applicability. In this paper, we demonstrate that the Langevin phase equation is an efficient model for neural oscillators by using the machine learning method in two steps: (a) Learning of the Langevin phase model. We estimated the parameters of the Langevin phase equation, i.e., a phase response curve and the intensity of white noise from physiological data measured in the hippocampal CA1 pyramidal neurons. (b) Test of the estimated model. We verified whether a Fokker-Planck equation derived from the Langevin phase equation with the estimated parameters could capture the stochastic oscillatory behavior of the same neurons disturbed by periodic perturbations. The estimated model could predict the neural behavior, so we can say that the Langevin phase equation is an efficient model for oscillating neurons.
A Bioeconomic Foundation for the Nutrition-based Efficiency Wage Model
DEFF Research Database (Denmark)
Dalgaard, Carl-Johan Lars; Strulik, Holger
. By extending the model with respect to heterogeneity in worker body size and a physiologically founded impact of body size on productivity, we demonstrate that the nutrition-based efficiency wage model is compatible with the empirical regularity that taller workers simultaneously earn higher wages and are less...
Efficient predictive model-based and fuzzy control for green urban mobility
Jamshidnejad, A.
2017-01-01
In this thesis, we develop efficient predictive model-based control approaches, including model predictive control (MPC) and model-based fuzzy control, for application in urban traffic networks with the aim of reducing a combination of the total time spent by the vehicles within the network and the
An Efficient Constraint Boundary Sampling Method for Sequential RBDO Using Kriging Surrogate Model
Energy Technology Data Exchange (ETDEWEB)
Kim, Jihoon; Jang, Junyong; Kim, Shinyu; Lee, Tae Hee [Hanyang Univ., Seoul (Korea, Republic of); Cho, Sugil; Kim, Hyung Woo; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Busan (Korea, Republic of)
2016-06-15
Reliability-based design optimization (RBDO) requires a high computational cost owing to its reliability analysis. A surrogate model is introduced to reduce this computational cost. In surrogate-model-based RBDO, the accuracy of the reliability depends on the accuracy of the surrogate model of the constraint boundaries. In earlier research, constraint boundary sampling (CBS) was proposed to approximate the boundaries of constraints accurately by locating sample points on them. However, because CBS uses sample points on all constraint boundaries, it creates superfluous sample points. In this paper, efficient constraint boundary sampling (ECBS) is proposed to enhance the efficiency of CBS. ECBS uses the statistical information of a kriging surrogate model to locate sample points on or near the RBDO solution. The efficiency of ECBS is verified with mathematical examples.
An Application on Merton Model in the Non-efficient Market
Feng, Yanan; Xiao, Qingxian
The Merton Model is one of the famous credit risk models. This model presumes that the only source of uncertainty in equity prices is the firm's net asset value. But this market condition holds only when the market is efficient, which is often ignored in modern research. Moreover, the original Merton Model is based on the assumptions that in the event of default absolute priority holds, renegotiation is not permitted and liquidation of the firm is costless; furthermore, in the Merton Model and most of its modified versions the default boundary is assumed to be constant, which does not correspond to reality. These factors can reduce the predictive power of the model. In this paper, we have extended some of the assumptions underlying the original model; the result is essentially a modification of Merton's model. In a non-efficient market, we use stock data to analyze this model. The result shows that the modified model can evaluate credit risk well in a non-efficient market.
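For context, the classical Merton setup that the paper modifies can be sketched in a few lines: equity is treated as a call option on the firm's assets, and the risk-neutral default probability is N(-d2) with the usual Black-Scholes d2. All parameter values below are illustrative assumptions:

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def merton_default_probability(V, D, r, sigma_v, T):
    # Merton: default at horizon T if asset value V_T < debt face value D,
    # with V following geometric Brownian motion under the risk-neutral measure
    d2 = (log(V / D) + (r - 0.5 * sigma_v ** 2) * T) / (sigma_v * sqrt(T))
    return norm_cdf(-d2)

def merton_equity_value(V, D, r, sigma_v, T):
    # Equity as a European call on firm assets with strike D
    d2 = (log(V / D) + (r - 0.5 * sigma_v ** 2) * T) / (sigma_v * sqrt(T))
    d1 = d2 + sigma_v * sqrt(T)
    return V * norm_cdf(d1) - D * exp(-r * T) * norm_cdf(d2)

pd = merton_default_probability(V=120.0, D=100.0, r=0.03, sigma_v=0.25, T=1.0)
E = merton_equity_value(V=120.0, D=100.0, r=0.03, sigma_v=0.25, T=1.0)
```

The paper's modifications (non-constant default boundary, relaxed absolute priority, costly liquidation) change the barrier and payoff in this setup rather than the underlying option-pricing machinery.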
Design and modeling of an SJ infrared solar cell approaching upper limit of theoretical efficiency
Sahoo, G. S.; Mishra, G. P.
2018-01-01
Recent trends in photovoltaics focus on approaching the conversion efficiency limit to make cells more cost effective. To achieve this, we have to move beyond the golden era of the silicon cell towards the III-V compound semiconductors, to exploit advantages such as bandgap engineering by alloying these compounds. In this work we have used the low-bandgap material GaSb and designed a single junction (SJ) cell with a conversion efficiency of 32.98%. The SILVACO ATLAS TCAD simulator has been used to simulate the proposed model using both the Ray Tracing and Transfer Matrix Methods (under 1 sun and 1000 suns of the AM1.5G spectrum). Detailed analyses of the photogeneration rate, spectral response, developed potential, external quantum efficiency (EQE), internal quantum efficiency (IQE), short-circuit current density (JSC), open-circuit voltage (VOC), fill factor (FF) and conversion efficiency (η) are discussed. The obtained results are compared with previously reported SJ solar cell results.
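The conversion efficiency quoted above follows from the standard relation η = JSC · VOC · FF / Pin, with Pin = 100 mW/cm² for the AM1.5G spectrum at 1 sun. The JSC, VOC and FF values below are illustrative assumptions, not the paper's simulated results:

```python
def conversion_efficiency(jsc_ma_cm2, voc_v, ff, pin_mw_cm2=100.0):
    # eta (%) = Jsc [mA/cm^2] * Voc [V] * FF / Pin [mW/cm^2] * 100
    # (mA/cm^2 * V = mW/cm^2, so units cancel against Pin)
    pout = jsc_ma_cm2 * voc_v * ff
    return 100.0 * pout / pin_mw_cm2

# Illustrative operating point for a low-bandgap single-junction cell
eta = conversion_efficiency(jsc_ma_cm2=55.0, voc_v=0.75, ff=0.80)
```

Under concentration (e.g. the 1000 sun case), Pin scales with the concentration factor while VOC grows logarithmically, which is why concentrator cells can exceed their 1 sun efficiency.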
Khan, Sahubar Ali Mohd. Nadhar; Ramli, Razamin; Baten, M. D. Azizul
2017-11-01
In recent years eco-efficiency, which considers the effect of the production process on the environment when determining the efficiency of firms, has gained traction and a lot of attention. Rice farming is one such production process, typically producing two types of outputs: the economically desirable and the environmentally undesirable. In efficiency analysis, these undesirable outputs cannot be ignored and need to be included in the model to obtain an actual estimation of the firm's efficiency. Numerous approaches have been used in the data envelopment analysis (DEA) literature to account for undesirable outputs, of which the directional distance function (DDF) approach is the most widely used, as it allows for a simultaneous increase in desirable outputs and reduction of undesirable outputs. Additionally, slack-based DDF DEA approaches consider the output shortfalls and input excesses in determining efficiency. When data uncertainty is present, a deterministic DEA model is not suitable, as the effects of uncertain data will not be considered. In this case, the interval data approach has been found suitable to account for data uncertainty, as it is much simpler to model and needs less information regarding the underlying data distribution and membership function. The proposed model uses an enhanced DEA model which is based on the DDF approach and incorporates a slack-based measure to determine efficiency in the presence of undesirable factors and data uncertainty. The interval data approach was used to estimate the values of inputs, undesirable outputs and desirable outputs. Two separate slack-based interval DEA models were constructed for the optimistic and pessimistic scenarios. The developed model was used to determine the efficiency of rice farmers from Kepala Batas, Kedah. The obtained results were later compared to those obtained using a deterministic DDF DEA model. The study found that 15 out of 30 farmers are efficient in all cases. It
Directory of Open Access Journals (Sweden)
Andreas Diomedes Soteriades
2015-10-01
Full Text Available Applying holistic indicators to assess dairy farm efficiency is essential for sustainable milk production. Data Envelopment Analysis (DEA) has been instrumental for the calculation of such indicators. However, ‘additive’ DEA models have rarely been used in dairy research. This study presents an additive model known as the slacks-based measure (SBM) of efficiency and its advantages over the DEA models used in most past dairy studies. First, SBM incorporates undesirable outputs as actual outputs of the production process. Second, it identifies the main production factors causing inefficiency. Third, these factors can be ‘priced’ to estimate the cost of inefficiency. The value of SBM for efficiency analyses was demonstrated with a comparison of four contrasting dairy management systems in terms of technical and environmental efficiency. These systems were part of a multiple-year breeding and feeding systems experiment (two genetic lines: select vs. control; and two feeding strategies: high forage vs. low forage, where the latter involved a higher proportion of concentrated feeds) where detailed data were collected to strict protocols. The select genetic herd was more technically and environmentally efficient than the control herd, regardless of feeding strategy. However, the efficiency performance of the select herd was more volatile from year to year than that of the control herd. Overall, technical and environmental efficiency were strongly and positively correlated, suggesting that when technically efficient, the four systems were also efficient in terms of undesirable output reduction. Detailed data such as those used in this study are increasingly becoming available for commercial herds through precision farming. Therefore, the methods presented in this study are growing in importance.
Worm gear efficiency model considering misalignment in electric power steering systems
Directory of Open Access Journals (Sweden)
S. H. Kim
2018-05-01
Full Text Available This study proposes a worm gear efficiency model considering misalignment in electric power steering systems. A worm gear is used in Column-type Electric Power Steering (C-EPS) systems, and an Anti-Rattle Spring (ARS) is employed in C-EPS systems in order to prevent rattling when the vehicle goes over a bumpy road. The ARS plays the role of preventing rattling by applying preload to one end of the worm shaft, but it also generates undesirable friction by causing misalignment of the worm shaft. In order to propose the worm gear efficiency model considering misalignment, geometrical and tribological analyses were performed in this study. For the geometrical analysis, the normal load on the gear teeth was calculated using the output torque, pitch diameter of the worm wheel, lead angle and normal pressure angle, and this normal load was converted to normal pressure at the contact point. Contact points between the tooth flanks of the worm and worm wheel were obtained by mathematically analyzing the geometry, and Hertz's theory was employed in order to calculate the contact area at the contact point. Finally, the misalignment caused by the ARS was also incorporated into the geometry. Friction coefficients between the tooth flanks were also investigated in this study. A pin-on-disk type tribometer was set up to measure friction coefficients, and friction coefficients at all conditions were measured by the tribometer. In order to validate the worm gear efficiency model, a worm gear was prepared and the efficiency of the worm gear was predicted by the model. As the final procedure of the study, a worm gear efficiency measurement system was set up, the efficiency of the worm gear was measured, and the results were compared with the predicted results. The efficiency considering misalignment gives more accurate results than the efficiency without misalignment.
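The dependence of worm drive efficiency on friction described above can be illustrated with the standard textbook formula for a worm driving its wheel. This is not the paper's misalignment model; the angles and friction coefficients below are invented, and misalignment enters only indirectly as extra friction.

```python
from math import cos, radians, tan

def worm_drive_efficiency(lead_angle_deg, pressure_angle_deg, mu):
    """Textbook efficiency of a worm drive (worm driving the wheel):
        eta = (cos(phi_n) - mu*tan(lambda)) / (cos(phi_n) + mu/tan(lambda))
    where lambda is the lead angle, phi_n the normal pressure angle and
    mu the sliding friction coefficient between the tooth flanks."""
    lam = radians(lead_angle_deg)
    phi = radians(pressure_angle_deg)
    return (cos(phi) - mu * tan(lam)) / (cos(phi) + mu / tan(lam))

# Efficiency falls as friction rises, e.g. when misalignment adds friction:
for mu in (0.03, 0.06, 0.09):
    print(mu, round(worm_drive_efficiency(10.0, 20.0, mu), 3))
```

This monotone drop with the friction coefficient is why the measured friction map, rather than a single nominal value, matters for the model's accuracy.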
Studies of heating efficiencies and models of RF-sheaths for the JET antennae
International Nuclear Information System (INIS)
Hedin, J.
1996-02-01
A theoretical model for the appearance of RF sheaths is developed to see if this can explain the expected lower heating efficiencies of the new A2 antennae at JET. The equations are solved numerically. A general method for evaluation of the experimental data on the heating efficiencies of the new antennae at JET is developed and applied to discharges with and without the bumpy limiter on the D antennae. 8 refs, 26 figs
Modelling and design of high efficiency radiation tolerant indium phosphide space solar cells
International Nuclear Information System (INIS)
Goradia, C.; Geier, J.V.; Weinberg, I.
1987-01-01
Using a fairly comprehensive model, the authors did a parametric variation study of the InP shallow homojunction solar cell with a view to determining the maximum realistically achievable efficiency and an optimum design that would yield this efficiency. Their calculations show that with good quality epitaxial material, a BOL efficiency of about 20.3% at 1 AM0, 25°C may be possible. The design parameters of the near-optimum cell are given. Also presented are the expected radiation damage of the performance parameters by 1 MeV electrons and a possible explanation of the high radiation tolerance of InP solar cells
Evaluation of the energy efficiency of enzyme fermentation by mechanistic modeling.
Albaek, Mads O; Gernaey, Krist V; Hansen, Morten S; Stocks, Stuart M
2012-04-01
Modeling biotechnological processes is key to obtaining increased productivity and efficiency. Particularly crucial to successful modeling of such systems is the coupling of the physical transport phenomena and the biological activity in one model. We have applied a model for the expression of cellulosic enzymes by the filamentous fungus Trichoderma reesei and found excellent agreement with experimental data. The most influential factor was demonstrated to be viscosity and its influence on mass transfer. Not surprisingly, the biological model is also shown to have high influence on the model prediction. At different rates of agitation and aeration as well as headspace pressure, we can predict the energy efficiency of oxygen transfer, a key process parameter for economical production of industrial enzymes. An inverse relationship between the productivity and energy efficiency of the process was found. This modeling approach can be used by manufacturers to evaluate the enzyme fermentation process for a range of different process conditions with regard to energy efficiency. Copyright © 2011 Wiley Periodicals, Inc.
Min, Ari; Park, Chang Gi; Scott, Linda D
2016-05-23
Data envelopment analysis (DEA) is an advantageous non-parametric technique for evaluating relative efficiency of performance. This article describes use of DEA to estimate technical efficiency of nursing care and demonstrates the benefits of using multilevel modeling to identify characteristics of efficient facilities in the second stage of analysis. Data were drawn from LTCFocUS.org, a secondary database including nursing home data from the Online Survey Certification and Reporting System and Minimum Data Set. In this example, 2,267 non-hospital-based nursing homes were evaluated. Use of DEA with nurse staffing levels as inputs and quality of care as outputs allowed estimation of the relative technical efficiency of nursing care in these facilities. In the second stage, multilevel modeling was applied to identify organizational factors contributing to technical efficiency. Use of multilevel modeling avoided biased estimation of findings for nested data and provided comprehensive information on differences in technical efficiency among counties and states. © The Author(s) 2016.
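The first-stage DEA step described above can be sketched as an input-oriented CCR envelopment linear program. The staffing/quality numbers below are invented placeholders, not LTCFocUS.org data, and the second-stage multilevel model is not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input_efficiency(X, Y, unit):
    """Input-oriented CCR efficiency of DMU `unit` via the envelopment LP:
        min theta  s.t.  X @ lam <= theta * x_unit,  Y @ lam >= y_unit,  lam >= 0
    X has shape (n_inputs, n_dmus); Y has shape (n_outputs, n_dmus)."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(n + 1)
    c[0] = 1.0                                   # minimise theta
    A_in = np.hstack([-X[:, [unit]], X])         # X lam - theta * x_unit <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])    # -Y lam <= -y_unit
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[:, unit]])
    bounds = [(0, None)] * (n + 1)               # theta >= 0, lam >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

# Hypothetical data: one input (nurse staffing hours) and one output
# (quality score) for three facilities.
X = np.array([[2.0, 4.0, 8.0]])
Y = np.array([[1.0, 2.0, 3.0]])
effs = [dea_ccr_input_efficiency(X, Y, j) for j in range(3)]
```

A score of 1 marks a facility on the efficient frontier; scores below 1 give the proportional input reduction a best-practice peer combination would achieve.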
Efficient surrogate models for reliability analysis of systems with multiple failure modes
International Nuclear Information System (INIS)
Bichon, Barron J.; McFarland, John M.; Mahadevan, Sankaran
2011-01-01
Despite many advances in the field of computational reliability analysis, the efficient estimation of the reliability of a system with multiple failure modes remains a persistent challenge. Various sampling and analytical methods are available, but they typically require accepting a tradeoff between accuracy and computational efficiency. In this work, a surrogate-based approach is presented that simultaneously addresses the issues of accuracy, efficiency, and unimportant failure modes. The method is based on the creation of Gaussian process surrogate models that are required to be locally accurate only in the regions of the component limit states that contribute to system failure. This approach to constructing surrogate models is demonstrated to be both an efficient and accurate method for system-level reliability analysis. - Highlights: → Extends efficient global reliability analysis to systems with multiple failure modes. → Constructs locally accurate Gaussian process models of each response. → Highly efficient and accurate method for assessing system reliability. → Effectiveness is demonstrated on several test problems from the literature.
Investigating market efficiency through a forecasting model based on differential equations
de Resende, Charlene C.; Pereira, Adriano C. M.; Cardoso, Rodrigo T. N.; de Magalhães, A. R. Bosco
2017-05-01
A new differential equation based model for stock price trend forecasting is proposed as a tool to investigate efficiency in an emerging market. Its predictive power was shown statistically to be higher than that of a completely random model, signaling the presence of arbitrage opportunities. Conditions for accuracy to be enhanced are investigated, and the application of the model as part of a trading strategy is discussed.
Numerical flow simulation and efficiency prediction for axial turbines by advanced turbulence models
International Nuclear Information System (INIS)
Jošt, D; Škerlavaj, A; Lipej, A
2012-01-01
Numerical prediction of the efficiency of a 6-blade Kaplan turbine is presented. At first, the results of steady-state analysis performed with different turbulence models for different operating regimes are compared to the measurements. For small and optimal angles of the runner blades the efficiency was quite accurately predicted, but for the maximal blade angle the discrepancy between calculated and measured values was quite large. With transient analysis, especially when the Scale Adaptive Simulation Shear Stress Transport (SAS SST) model with zonal Large Eddy Simulation (ZLES) in the draft tube was used, the efficiency prediction was significantly improved. The improvement was seen at all operating points, but it was largest for maximal discharge. The reason was better flow simulation in the draft tube. Details of the turbulent structure in the draft tube obtained by SST, SAS SST and SAS SST with ZLES are illustrated in order to explain the reasons for the differences in flow energy losses obtained with the different turbulence models.
Numerical flow simulation and efficiency prediction for axial turbines by advanced turbulence models
Jošt, D.; Škerlavaj, A.; Lipej, A.
2012-11-01
Numerical prediction of the efficiency of a 6-blade Kaplan turbine is presented. At first, the results of steady-state analysis performed with different turbulence models for different operating regimes are compared to the measurements. For small and optimal angles of the runner blades the efficiency was quite accurately predicted, but for the maximal blade angle the discrepancy between calculated and measured values was quite large. With transient analysis, especially when the Scale Adaptive Simulation Shear Stress Transport (SAS SST) model with zonal Large Eddy Simulation (ZLES) in the draft tube was used, the efficiency prediction was significantly improved. The improvement was seen at all operating points, but it was largest for maximal discharge. The reason was better flow simulation in the draft tube. Details of the turbulent structure in the draft tube obtained by SST, SAS SST and SAS SST with ZLES are illustrated in order to explain the reasons for the differences in flow energy losses obtained with the different turbulence models.
A traction control strategy with an efficiency model in a distributed driving electric vehicle.
Lin, Cheng; Cheng, Xingqun
2014-01-01
Both active safety and fuel economy are important issues for vehicles. This paper focuses on a traction control strategy with an efficiency model in a distributed driving electric vehicle. In emergency situations, a sliding mode control algorithm was employed to achieve antislip control by keeping the wheels' slip ratios below 20%. For general longitudinal driving cases, an efficiency model aimed at improving fuel economy was built through an offline optimization stream within the two-dimensional design space composed of the acceleration pedal signal and the vehicle speed. The sliding mode control strategy for joint roads and the efficiency model for typical drive cycles were simulated. Simulation results show that the proposed driving control approach has the potential to be applied to different road surfaces. It keeps the wheels' slip ratios within the stable zone and improves fuel economy on the premise of tracking the driver's intention.
A Traction Control Strategy with an Efficiency Model in a Distributed Driving Electric Vehicle
Lin, Cheng
2014-01-01
Both active safety and fuel economy are important issues for vehicles. This paper focuses on a traction control strategy with an efficiency model in a distributed driving electric vehicle. In emergency situations, a sliding mode control algorithm was employed to achieve antislip control by keeping the wheels' slip ratios below 20%. For general longitudinal driving cases, an efficiency model aimed at improving fuel economy was built through an offline optimization stream within the two-dimensional design space composed of the acceleration pedal signal and the vehicle speed. The sliding mode control strategy for joint roads and the efficiency model for typical drive cycles were simulated. Simulation results show that the proposed driving control approach has the potential to be applied to different road surfaces. It keeps the wheels' slip ratios within the stable zone and improves fuel economy on the premise of tracking the driver's intention. PMID:25197697
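The slip-ratio quantity that the antislip controller above regulates, and the idea of cutting torque once slip exceeds the 20% stability limit, can be sketched as follows. The proportional cut is a crude stand-in for the paper's sliding-mode law, and the gain is an invented placeholder.

```python
def slip_ratio(wheel_omega, vehicle_speed, wheel_radius):
    """Longitudinal slip ratio during traction: (omega*r - v) / (omega*r)."""
    v_wheel = wheel_omega * wheel_radius
    if v_wheel <= 0.0:
        return 0.0
    return (v_wheel - vehicle_speed) / v_wheel

def antislip_torque(torque_demand, slip, slip_limit=0.2, gain=500.0):
    """Reduce motor torque in proportion to the slip excess over the 20%
    stability limit. A simple proportional stand-in for the sliding-mode
    controller; gain and units are hypothetical."""
    excess = max(0.0, slip - slip_limit)
    return max(0.0, torque_demand - gain * excess)
```

Below the limit the driver's torque demand passes through unchanged, so the controller only intervenes when a wheel starts to spin up.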
Efficient Actor-Critic Algorithm with Hierarchical Model Learning and Planning
Fu, QiMing
2016-01-01
To improve the convergence rate and the sample efficiency, two efficient learning methods, AC-HMLP and RAC-HMLP (AC-HMLP with ℓ2-regularization), are proposed by combining the actor-critic algorithm with hierarchical model learning and planning. The hierarchical models, consisting of the local and the global models, which are learned at the same time as the value function and the policy, are approximated by local linear regression (LLR) and linear function approximation (LFA), respectively. Both the local model and the global model are applied to generate samples for planning; the former is used only if the state-prediction error does not surpass the threshold at each time step, while the latter is utilized at the end of each episode. The purpose of using both models is to improve the sample efficiency and accelerate the convergence rate of the whole algorithm through fully utilizing the local and global information. Experimentally, AC-HMLP and RAC-HMLP are compared with three representative algorithms on two Reinforcement Learning (RL) benchmark problems. The results demonstrate that they perform best in terms of convergence rate and sample efficiency. PMID:27795704
Directory of Open Access Journals (Sweden)
Faramarz eFaghihi
2015-03-01
Full Text Available Information processing in the hippocampus begins by transferring spiking activity of the Entorhinal Cortex (EC) into the Dentate Gyrus (DG). The activity pattern in the EC is separated by the DG such that it plays an important role in hippocampal functions including memory. The structural and physiological parameters of these neural networks enable the hippocampus to be efficient in encoding the large number of inputs that animals receive and process in their lifetime. The neural encoding capacity of the DG depends on its single neurons' encoding and pattern separation efficiency. In this study, encoding by the DG is modelled such that single neuron and pattern separation efficiency are measured using simulations of different parameter values. For this purpose, a probabilistic model of single neuron efficiency is presented to study the role of structural and physiological parameters. The known neuron numbers of the EC and the DG are used to construct a neural network with the electrophysiological features of neurons in the DG. Separated inputs, as activated neurons in the EC with different firing probabilities, are presented to the DG. For different connectivity rates between the EC and DG, the pattern separation efficiency of the DG is measured. The results show that in the absence of feedback inhibition on the DG neurons, the DG demonstrates low separation efficiency and high firing frequency. Feedback inhibition can increase separation efficiency while resulting in very low single-neuron encoding efficiency in the DG and very low firing frequency of neurons in the DG (sparse spiking). This work presents a mechanistic explanation for experimental observations in the hippocampus, in combination with theoretical measures. Moreover, the model predicts a critical role for impaired inhibitory neurons in schizophrenia, where deficiency in pattern separation of the DG has been observed.
Faghihi, Faramarz; Moustafa, Ahmed A.
2015-01-01
Information processing in the hippocampus begins by transferring spiking activity of the entorhinal cortex (EC) into the dentate gyrus (DG). The activity pattern in the EC is separated by the DG such that it plays an important role in hippocampal functions including memory. The structural and physiological parameters of these neural networks enable the hippocampus to be efficient in encoding the large number of inputs that animals receive and process in their lifetime. The neural encoding capacity of the DG depends on its single neurons' encoding and pattern separation efficiency. In this study, encoding by the DG is modeled such that single neuron and pattern separation efficiency are measured using simulations of different parameter values. For this purpose, a probabilistic model of single neuron efficiency is presented to study the role of structural and physiological parameters. The known neuron numbers of the EC and the DG are used to construct a neural network with the electrophysiological features of granule cells of the DG. Separated inputs, as activated neurons in the EC with different firing probabilities, are presented to the DG. For different connectivity rates between the EC and DG, the pattern separation efficiency of the DG is measured. The results show that in the absence of feedback inhibition on the DG neurons, the DG demonstrates low separation efficiency and high firing frequency. Feedback inhibition can increase separation efficiency while resulting in very low single-neuron encoding efficiency in the DG and very low firing frequency of neurons in the DG (sparse spiking). This work presents a mechanistic explanation for experimental observations in the hippocampus, in combination with theoretical measures. Moreover, the model predicts a critical role for impaired inhibitory neurons in schizophrenia, where deficiency in pattern separation of the DG has been observed. PMID:25859189
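The qualitative effect of feedback inhibition reported above (sparser DG firing as inhibition grows) can be reproduced in a deliberately tiny threshold-unit toy. This is not the paper's probabilistic model: the network sizes, connection probability and the additive treatment of inhibition are all invented for illustration.

```python
import random

def dg_active_fraction(inhibition, n_ec=200, n_dg=500, p_conn=0.15, seed=7):
    """Toy EC->DG network: random binary connectivity and threshold units.
    Global feedback inhibition is modelled as an additive rise in the firing
    threshold. Returns the fraction of DG cells firing for one EC pattern."""
    rng = random.Random(seed)
    conn = [[pre for pre in range(n_ec) if rng.random() < p_conn]
            for _ in range(n_dg)]
    active = set(rng.sample(range(n_ec), 40))   # 20% of EC cells fire
    threshold = 40 * p_conn + inhibition        # mean drive + inhibitory shift
    fired = sum(1 for inputs in conn
                if sum(pre in active for pre in inputs) > threshold)
    return fired / n_dg
```

Raising `inhibition` pushes the threshold above the mean synaptic drive, so progressively fewer DG units fire, which is the sparse-spiking regime the abstract associates with higher separation efficiency.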
Complex networks-based energy-efficient evolution model for wireless sensor networks
Energy Technology Data Exchange (ETDEWEB)
Zhu Hailin [Beijing Key Laboratory of Intelligent Telecommunications Software and Multimedia, Beijing University of Posts and Telecommunications, P.O. Box 106, Beijing 100876 (China)], E-mail: zhuhailin19@gmail.com; Luo Hong [Beijing Key Laboratory of Intelligent Telecommunications Software and Multimedia, Beijing University of Posts and Telecommunications, P.O. Box 106, Beijing 100876 (China); Peng Haipeng; Li Lixiang; Luo Qun [Information Secure Center, State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, P.O. Box 145, Beijing 100876 (China)
2009-08-30
Based on complex networks theory, we present two self-organized energy-efficient models for wireless sensor networks in this paper. The first model constructs the wireless sensor network according to the connectivity and remaining energy of each sensor node; thus it can produce scale-free networks, which are tolerant of random errors. In the second model, we not only consider the remaining energy, but also introduce a constraint on the links of each node. This model can make the energy consumption of the whole network more balanced. Finally, we present numerical experiments on the two models.
Complex networks-based energy-efficient evolution model for wireless sensor networks
International Nuclear Information System (INIS)
Zhu Hailin; Luo Hong; Peng Haipeng; Li Lixiang; Luo Qun
2009-01-01
Based on complex networks theory, we present two self-organized energy-efficient models for wireless sensor networks in this paper. The first model constructs the wireless sensor network according to the connectivity and remaining energy of each sensor node; thus it can produce scale-free networks, which are tolerant of random errors. In the second model, we not only consider the remaining energy, but also introduce a constraint on the links of each node. This model can make the energy consumption of the whole network more balanced. Finally, we present numerical experiments on the two models.
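The growth rule described above, preferential attachment weighted by both connectivity and remaining energy, plus the second model's per-node link constraint, can be sketched as follows. The weighting `degree * energy`, the energy range and the degree cap are illustrative assumptions, not the paper's exact equations.

```python
import random

def grow_energy_aware_network(n_nodes, m_links=2, max_degree=8, seed=1):
    """Grow a sensor network where a new node attaches to existing nodes with
    probability proportional to (degree * remaining_energy); max_degree caps
    the links per node, mimicking the second model's link constraint."""
    rng = random.Random(seed)
    energy = {0: rng.uniform(0.5, 1.0), 1: rng.uniform(0.5, 1.0)}
    degree = {0: 1, 1: 1}
    edges = [(0, 1)]
    for new in range(2, n_nodes):
        energy[new] = rng.uniform(0.5, 1.0)
        degree[new] = 0
        candidates = [i for i in range(new) if degree[i] < max_degree]
        weights = [degree[i] * energy[i] for i in candidates]
        targets = set()
        while len(targets) < min(m_links, len(candidates)):
            targets.add(rng.choices(candidates, weights=weights)[0])
        for t in targets:
            edges.append((new, t))
            degree[t] += 1
            degree[new] += 1
    return edges, degree
```

Without the cap this reduces to energy-weighted preferential attachment (the first, scale-free model); the cap spreads load so no hub drains its battery first.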
Generalized Efficient Inference on Factor Models with Long-Range Dependence
DEFF Research Database (Denmark)
Ergemen, Yunus Emre
A dynamic factor model is considered that contains stochastic time trends allowing for stationary and nonstationary long-range dependence. The model nests standard I(0) and I(1) behaviour smoothly in common factors and residuals, removing the necessity of a priori unit-root and stationarity testing... Short-memory dynamics are allowed in the common factor structure and a possibly heteroskedastic error term. In the estimation, a generalized version of the principal components (PC) approach is proposed to achieve efficiency. Asymptotics for efficient common factor and factor loading as well as long...
The Efficiency of Split Panel Designs in an Analysis of Variance Model
Wang, Wei-Guo; Liu, Hai-Jun
2016-01-01
We consider split panel design efficiency in analysis of variance models, that is, the determination of the optimal proportion of the cross-section series in all samples, to minimize the variances of the parametric best linear unbiased estimators of linear combinations. An orthogonal matrix is constructed to obtain a manageable expression of the variances. On this basis, we derive a theorem for analyzing split panel design efficiency irrespective of the interest and budget parameters. Additionally, the relative efficiency of an estimator based on the split panel to an estimator based on a pure panel or a pure cross-section is presented. The analysis shows that the gains from the split panel can be quite substantial. We further consider the efficiency of split panel design given a budget, and transform it into a constrained nonlinear integer program. Specifically, an efficient algorithm is designed to solve the constrained nonlinear integer program. Moreover, we combine one-at-a-time designs and factorial designs to illustrate the algorithm's efficiency with an empirical example concerning monthly consumer expenditure on food in 1985 in the Netherlands, and the efficient ranges of the algorithm parameters are given to ensure a good solution. PMID:27163447
The Super‑efficiency Model and its Use for Ranking and Identification of Outliers
Directory of Open Access Journals (Sweden)
Kristína Kočišová
2017-01-01
Full Text Available This paper employs a non-radial and non-oriented super-efficiency SBM model under the assumption of a variable return to scale to analyse the performance of twenty-two Czech and Slovak domestic commercial banks in 2015. The banks were ranked according to the asset-oriented and profit-oriented intermediation approaches. We pooled the cross-country data and used them to define a common best-practice efficiency frontier. This allowed us to focus on determining relative differences in efficiency across banks. The average efficiency was evaluated separately on the “national” and “international” level. Based on the results of the analysis, it can be seen that in the Slovak banking sector the level of super-efficiency was lower compared to Czech banks. Also, the number of super-efficient banks was lower in the case of Slovakia under both approaches. A boxplot analysis was used to determine the outliers in the dataset. The results suggest that the exclusion of outliers led to better statistical characteristics of the estimated efficiency.
A model for improving energy efficiency in industrial motor system using multicriteria analysis
International Nuclear Information System (INIS)
Herrero Sola, Antonio Vanderley; Mota, Caroline Maria de Miranda; Kovaleski, Joao Luiz
2011-01-01
In recent years, several policies have been proposed by governments and global institutions in order to improve the efficient use of energy in industries worldwide. However, projects in industrial motor systems require a new approach, mainly in the decision-making area, considering the organizational barriers to energy efficiency. Despite their wide application elsewhere, multicriteria methods have remained unexplored in industrial motor systems until now. This paper proposes a multicriteria model using the PROMETHEE II method, with the aim of ranking alternatives for induction motor replacement. A comparative analysis of the model, applied to a Brazilian industry, has shown that multicriteria analysis presents better performance on energy saving as well as return on investment than a single criterion. The paper strongly recommends the dissemination of multicriteria decision aiding as a policy to support decision makers in industries and to improve energy efficiency in electric motor systems. - Highlights: → The lack of a decision model for industrial motor systems is the main motivation of the research. → A multicriteria model based on the PROMETHEE method is proposed with the aim of supporting decision makers in industries. → The model can contribute to overcoming some barriers within industries, improving energy efficiency in industrial motor systems.
A model for improving energy efficiency in industrial motor system using multicriteria analysis
Energy Technology Data Exchange (ETDEWEB)
Herrero Sola, Antonio Vanderley, E-mail: sola@utfpr.edu.br [Federal University of Technology, Parana, Brazil (UTFPR)-Campus Ponta Grossa, Av. Monteiro Lobato, Km 4, CEP: 84016-210 (Brazil); Mota, Caroline Maria de Miranda, E-mail: carolmm@ufpe.br [Federal University of Pernambuco, Cx. Postal 7462, CEP 50630-970, Recife (Brazil); Kovaleski, Joao Luiz [Federal University of Technology, Parana, Brazil (UTFPR)-Campus Ponta Grossa, Av. Monteiro Lobato, Km 4, CEP: 84016-210 (Brazil)
2011-06-15
In recent years, several policies have been proposed by governments and global institutions in order to improve the efficient use of energy in industries worldwide. However, projects in industrial motor systems require a new approach, mainly in the decision-making area, considering the organizational barriers to energy efficiency. Despite their wide application elsewhere, multicriteria methods have remained unexplored in industrial motor systems until now. This paper proposes a multicriteria model using the PROMETHEE II method, with the aim of ranking alternatives for induction motor replacement. A comparative analysis of the model, applied to a Brazilian industry, has shown that multicriteria analysis presents better performance on energy saving as well as return on investment than a single criterion. The paper strongly recommends the dissemination of multicriteria decision aiding as a policy to support decision makers in industries and to improve energy efficiency in electric motor systems. - Highlights: → The lack of a decision model for industrial motor systems is the main motivation of the research. → A multicriteria model based on the PROMETHEE method is proposed with the aim of supporting decision makers in industries. → The model can contribute to overcoming some barriers within industries, improving energy efficiency in industrial motor systems.
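The PROMETHEE II ranking used above can be sketched with the simplest ('usual') preference function. The three candidate motors, their criteria values and the weights below are invented for illustration, not the Brazilian case-study data.

```python
def promethee_ii(matrix, weights, maximize):
    """PROMETHEE II net outranking flows with the 'usual' preference
    function (preference = criterion weight if strictly better, else 0).
    Rows of `matrix` are alternatives, columns are criteria."""
    n = len(matrix)
    def pref(a, b):
        total = 0.0
        for j, w in enumerate(weights):
            diff = matrix[a][j] - matrix[b][j]
            if not maximize[j]:
                diff = -diff
            if diff > 0:
                total += w
        return total
    phi = []
    for a in range(n):
        plus = sum(pref(a, b) for b in range(n) if b != a) / (n - 1)
        minus = sum(pref(b, a) for b in range(n) if b != a) / (n - 1)
        phi.append(plus - minus)   # net flow: positive dominates
    return phi

# Invented example: three replacement motors scored on efficiency (max),
# purchase cost (min) and payback time (min); weights are hypothetical.
phi = promethee_ii(
    [[0.92, 1200.0, 3.1],
     [0.90,  900.0, 2.5],
     [0.95, 1500.0, 4.0]],
    weights=[0.6, 0.25, 0.15],
    maximize=[True, False, False])
best = max(range(len(phi)), key=lambda i: phi[i])
```

Sorting alternatives by net flow gives the complete PROMETHEE II ranking; the full method also allows richer preference functions with indifference and preference thresholds per criterion.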
International Nuclear Information System (INIS)
Vanneste, Johan; Bush, John A.; Hickenbottom, Kerri L.; Marks, Christopher A.; Jassby, David
2017-01-01
Development and selection of membranes for membrane distillation (MD) could be accelerated if all performance-determining characteristics of the membrane could be obtained during MD operation, without the need to resort to specialized or cumbersome porosity or thermal conductivity measurement techniques. By redefining the thermal efficiency, the Schofield method could be adapted to describe the flux without prior knowledge of membrane porosity, thickness, or thermal conductivity. A total of 17 commercially available membranes were analyzed in terms of flux and thermal efficiency to assess their suitability for application in MD. The thermal-efficiency-based model described the flux with an average %RMSE of 4.5%, which was in the same range as the standard deviation of the measured flux. The redefinition of the thermal efficiency also enabled MD to be used as a novel thermal conductivity measurement device for thin porous hydrophobic films that cannot be measured with the conventional laser flash diffusivity technique.
Wu, Hao; Ihme, Matthias
2017-11-01
The modeling of turbulent combustion requires the consideration of different physico-chemical processes, involving a vast range of time and length scales as well as a large number of scalar quantities. To reduce the computational complexity, various combustion models have been developed. Many of them can be abstracted using a lower-dimensional manifold representation. A key issue in using such lower-dimensional combustion models is assessing whether a particular combustion model is adequate for representing a certain flame configuration. The Pareto-efficient combustion (PEC) modeling framework was developed to perform dynamic combustion model adaptation based on various existing manifold models. In this work, the PEC model is applied to a turbulent flame simulation, in which a computationally efficient flamelet-based combustion model is used together with a high-fidelity finite-rate chemistry model. The combination of these two models achieves high accuracy in predicting pollutant species at a relatively low computational cost. The relevant numerical methods and parallelization techniques are also discussed in this work.
A Computationally-Efficient Numerical Model to Characterize the Noise Behavior of Metal-Framed Walls
Directory of Open Access Journals (Sweden)
Arun Arjunan
2015-08-01
Full Text Available Architects, designers, and engineers are making great efforts to design acoustically-efficient metal-framed walls, minimizing acoustic bridging. Therefore, efficient simulation models to predict the acoustic insulation complying with ISO 10140 are needed at a design stage. In order to achieve this, a numerical model consisting of two fluid-filled reverberation chambers, partitioned using a metal-framed wall, is to be simulated at one-third-octave bands. This produces a large simulation model consisting of several millions of nodes and elements. Therefore, efficient meshing procedures are necessary to obtain better solution times and to effectively utilise computational resources. Such models should also demonstrate effective Fluid-Structure Interaction (FSI) along with acoustic-fluid coupling to simulate a realistic scenario. In this contribution, the development of a finite element frequency-dependent mesh model that can characterize the sound insulation of metal-framed walls is presented. Preliminary results on the application of the proposed model to study the geometric contribution of stud frames on the overall acoustic performance of metal-framed walls are also presented. It is considered that the presented numerical model can be used to effectively visualize the noise behaviour of advanced materials and multi-material structures.
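The frequency-dependent mesh idea can be sketched by bounding the acoustic element size per one-third-octave band with the common elements-per-wavelength rule of thumb; the speed of sound and the factor of six below are generic assumptions, not values from the paper:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C (assumed)
ELEMENTS_PER_WAVELENGTH = 6  # common rule of thumb; the paper's exact ratio is not stated

def max_element_size(frequency_hz):
    """Upper bound on acoustic element size for a given band centre frequency."""
    wavelength = SPEED_OF_SOUND / frequency_hz
    return wavelength / ELEMENTS_PER_WAVELENGTH

# One-third-octave band centres (Hz) spanning a typical ISO 10140 range
for f in [100, 200, 400, 800, 1600, 3150]:
    print(f"{f:>5} Hz -> max element {1000 * max_element_size(f):.1f} mm")
```

Meshing each band to its own bound, rather than the finest band's bound everywhere, is what keeps the node count tractable.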
Jaszczur, Marek; Teneta, Janusz; Styszko, Katarzyna; Hassan, Qusay; Burzyńska, Paulina; Marcinek, Ewelina; Łopian, Natalia
2018-04-20
The maximisation of the efficiency of the photovoltaic system is crucial in order to increase the competitiveness of this technology. Unfortunately, several environmental factors, in addition to many alterable and unalterable factors, can significantly influence the performance of the PV system. Some of the environmental factors that depend on the site have to do with dust, soiling and pollutants. In this study, conducted in the city centre of Kraków, Poland, characterised by high pollution and low wind speed, the focus is on the evaluation of the degradation of efficiency of polycrystalline photovoltaic modules due to natural dust deposition. The experimental results demonstrated that the dust-related efficiency loss gradually increased with the deposited mass and follows an exponential trend. The maximum dust deposition density observed for rainless exposure periods of 1 week exceeded 300 mg/m², and the resulting efficiency loss was about 2.1%. It was observed that efficiency loss is not only mass-dependent but also depends on the dust properties. A small positive effect on module performance of a tiny dust layer, which slightly increases surface roughness, was also observed. The results obtained enable the development of a reliable model for the degradation of PV module efficiency caused by dust deposition. The novelty lies in the model itself: it is easy to apply, depends on the dust mass, is valid for low and moderate naturally deposited dust concentrations (up to 1-5 g/m², representative of many geographical regions), and is applicable to the majority of cases met in urban and non-urban polluted areas. It can be used to evaluate the dust-deposition-related derating factor (efficiency loss), which is much sought after by system designers and by tools used for computer modelling and system malfunction detection.
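A hedged sketch of the kind of mass-dependent exponential loss model the study points toward; the parameters delta_max and k below are hypothetical placeholders, not the fitted values from the paper:

```python
import math

def efficiency_loss(dust_mass_g_m2, delta_max=0.08, k=0.4):
    """Saturating exponential loss vs. deposited dust mass (g/m^2).
    delta_max (asymptotic loss) and k (rate) are hypothetical fit
    parameters, not values reported in the study."""
    return delta_max * (1.0 - math.exp(-k * dust_mass_g_m2))

for m in [0.3, 1.0, 5.0]:
    print(f"{m} g/m^2 -> loss {100 * efficiency_loss(m):.2f}%")
```

The saturating form captures the observed behaviour: loss grows with mass but flattens once the dust layer shades the cell almost uniformly.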
A CAD model for energy efficient offshore structures for desalination and energy generation
Directory of Open Access Journals (Sweden)
R. Rahul Dev,
2016-09-01
Full Text Available This paper presents a 'Computer Aided Design' (CAD) model for energy-efficient design of offshore structures. In the CAD model, preliminary dimensions and geometric details of an offshore structure (i.e., a semi-submersible) are optimized to achieve a favorable range of motion and so reduce the energy consumed by the 'Dynamic Positioning System' (DPS). The presented model allows the designer to select the configuration satisfying the user requirements through the integration of Computer Aided Design (CAD) and Computational Fluid Dynamics (CFD). The integration of CAD with CFD computes a hydrodynamically and energy-efficient hull form. Our results show that the implementation of the present model results in a design that can serve the user-specified requirements with less cost and energy consumption.
Pop, Florin; Dobre, Ciprian; Mocanu, Bogdan-Costel; Citoteanu, Oana-Maria; Xhafa, Fatos
2016-11-01
Managing the large dimensions of data processed in distributed systems that are formed by datacentres and mobile devices has become a challenging issue with an important impact on the end-user. Therefore, the management process of such systems can be achieved efficiently by using uniform overlay networks, interconnected through secure and efficient routing protocols. The aim of this article is to advance our previous work with a novel trust model based on a reputation metric that actively uses the social links between users and the model of interaction between them. We present and evaluate an adaptive model for the trust management in structured overlay networks, based on a Mobile Cloud architecture and considering a honeycomb overlay. Such a model can be useful for supporting advanced mobile market-share e-Commerce platforms, where users collaborate and exchange reliable information about, for example, products of interest and supporting ad-hoc business campaigns.
DEFF Research Database (Denmark)
Wu, Xiaocui; Ju, Weimin; Zhou, Yanlian
2015-01-01
The reliable simulation of gross primary productivity (GPP) at various spatial and temporal scales is of significance to quantifying the net exchange of carbon between terrestrial ecosystems and the atmosphere. This study aimed to verify the ability of a nonlinear two-leaf model (TL-LUEn), a linear...... two-leaf model (TL-LUE), and a big-leaf light use efficiency model (MOD17) to simulate GPP at half-hourly, daily and 8-day scales using GPP derived from 58 eddy-covariance flux sites in Asia, Europe and North America as benchmarks. Model evaluation showed that the overall performance of TL...
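The big-leaf light use efficiency approach (MOD17-style) that the study benchmarks can be sketched as follows; eps_max and the ramp bounds are illustrative assumptions, not the MODIS biome-property-table values:

```python
def ramp(x, lo, hi):
    """Linear ramp scalar clipped to [0, 1] (MOD17-style attenuation)."""
    if hi == lo:
        return 1.0
    return min(1.0, max(0.0, (x - lo) / (hi - lo)))

def gpp_big_leaf(par, fpar, tmin, vpd, eps_max=0.001):
    """Big-leaf LUE GPP = eps_max * f(Tmin) * f(VPD) * FPAR * PAR.
    eps_max and the ramp bounds below are illustrative, not the
    MOD17 biome-table values."""
    t_scalar = ramp(tmin, -8.0, 10.0)          # cold-temperature limitation
    v_scalar = 1.0 - ramp(vpd, 650.0, 4000.0)  # high-VPD limitation (Pa)
    return eps_max * t_scalar * v_scalar * fpar * par
```

The two-leaf models evaluated in the study refine this by treating sunlit and shaded leaves separately, but the multiplicative LUE core is the same.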
Efficient Multi-Valued Bounded Model Checking for LTL over Quasi-Boolean Algebras
Andrade, Jefferson O.; Kameyama, Yukiyoshi
2012-01-01
Multi-valued Model Checking extends classical, two-valued model checking to multi-valued logic such as Quasi-Boolean logic. The added expressivity is useful in dealing with such concepts as incompleteness and uncertainty in target systems, while it comes with the cost of time and space. Chechik and others proposed an efficient reduction from multi-valued model checking problems to two-valued ones, but to the authors' knowledge, no study was done for multi-valued bounded model checking. In thi...
Efficient Analysis of Systems Biology Markup Language Models of Cellular Populations Using Arrays.
Watanabe, Leandro; Myers, Chris J
2016-08-19
The Systems Biology Markup Language (SBML) has been widely used for modeling biological systems. Although SBML has been successful in representing a wide variety of biochemical models, the core standard lacks the structure for representing large complex regular systems in a standard way, such as whole-cell and cellular population models. These models require a large number of variables to represent certain aspects of these types of models, such as the chromosome in the whole-cell model and the many identical cell models in a cellular population. While SBML core is not designed to handle these types of models efficiently, the proposed SBML arrays package can represent such regular structures more easily. However, in order to take full advantage of the package, analysis needs to be aware of the arrays structure. When expanding the array constructs within a model, some of the advantages of using arrays are lost. This paper describes a more efficient way to simulate arrayed models. To illustrate the proposed method, this paper uses a population of repressilator and genetic toggle switch circuits as examples. Results show that there are memory benefits using this approach with a modest cost in runtime.
Modeling low cost hybrid tandem photovoltaics with the potential for efficiencies exceeding 20%
Beiley, Zach M.; McGehee, Michael D.
2012-01-01
, that can be printed on top of one of a variety of more traditional inorganic solar cells. Our modeling shows that an organic solar cell may be added on top of a commercial CIGS cell to improve its efficiency from 15.1% to 21.4%, thereby reducing the cost
Robust and efficient parameter estimation in dynamic models of biological systems.
Gábor, Attila; Banga, Julio R
2015-10-29
Dynamic modelling provides a systematic framework to understand function in biological systems. Parameter estimation in nonlinear dynamic models remains a very challenging inverse problem due to its nonconvexity and ill-conditioning. Associated issues like overfitting and local solutions are usually not properly addressed in the systems biology literature despite their importance. Here we present a method for robust and efficient parameter estimation which uses two main strategies to surmount the aforementioned difficulties: (i) efficient global optimization to deal with nonconvexity, and (ii) proper regularization methods to handle ill-conditioning. In the case of regularization, we present a detailed critical comparison of methods and guidelines for properly tuning them. Further, we show how regularized estimations ensure the best trade-offs between bias and variance, reducing overfitting, and allowing the incorporation of prior knowledge in a systematic way. We illustrate the performance of the presented method with seven case studies of different nature and increasing complexity, considering several scenarios of data availability, measurement noise and prior knowledge. We show how our method ensures improved estimations with faster and more stable convergence. We also show how the calibrated models are more generalizable. Finally, we give a set of simple guidelines to apply this strategy to a wide variety of calibration problems. Here we provide a parameter estimation strategy which combines efficient global optimization with a regularization scheme. This method is able to calibrate dynamic models in an efficient and robust way, effectively fighting overfitting and allowing the incorporation of prior information.
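As a toy stand-in for the regularised estimation strategy described above (the paper combines efficient global optimisation with regularisation; here, just a simple L2-penalised least-squares fit by gradient descent):

```python
def fit_ridge(xs, ys, lam=0.01, lr=0.01, steps=5000):
    """Fit y ~ a*x + b with an L2 penalty lam*(a^2 + b^2), minimised by
    plain gradient descent. A toy illustration of how regularisation
    enters the objective, not the paper's global-optimisation method."""
    a = b = 0.0
    n = len(xs)
    for _ in range(steps):
        ga = gb = 0.0
        for x, y in zip(xs, ys):
            r = a * x + b - y  # residual
            ga += 2 * r * x / n
            gb += 2 * r / n
        a -= lr * (ga + 2 * lam * a)  # data gradient + penalty gradient
        b -= lr * (gb + 2 * lam * b)
    return a, b
```

The penalty shrinks the estimates slightly toward zero, trading a little bias for lower variance, which is exactly the bias-variance trade-off the abstract describes.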
Semiparametric Gaussian copula models : Geometry and efficient rank-based estimation
Segers, J.; van den Akker, R.; Werker, B.J.M.
2014-01-01
We propose, for multivariate Gaussian copula models with unknown margins and structured correlation matrices, a rank-based, semiparametrically efficient estimator for the Euclidean copula parameter. This estimator is defined as a one-step update of a rank-based pilot estimator in the direction of
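The rank-based construction can be illustrated with a normal-scores (van der Waerden) correlation estimate, a simplified sketch of one ingredient of such rank-based estimation (ties are not handled):

```python
from statistics import NormalDist, fmean

def normal_scores_correlation(x, y):
    """Rank-based estimate of a Gaussian-copula correlation: map ranks to
    normal scores with the inverse normal CDF, then take the Pearson
    correlation of the scores. Ties are not handled in this sketch."""
    nd = NormalDist()
    n = len(x)

    def scores(v):
        order = sorted(range(n), key=lambda i: v[i])
        s = [0.0] * n
        for rank, i in enumerate(order, start=1):
            s[i] = nd.inv_cdf(rank / (n + 1))  # rank -> normal score
        return s

    sx, sy = scores(x), scores(y)
    mx, my = fmean(sx), fmean(sy)
    num = sum((a - mx) * (b - my) for a, b in zip(sx, sy))
    den = (sum((a - mx) ** 2 for a in sx) * sum((b - my) ** 2 for b in sy)) ** 0.5
    return num / den
```

Because only ranks enter, the estimate is invariant to the unknown marginal transformations, which is the point of the semiparametric setup.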
An efficient fluid–structure interaction model for optimizing twistable flapping wings
Wang, Q.; Goosen, J.F.L.; van Keulen, A.
2017-01-01
Spanwise twist can dominate the deformation of flapping wings and alters the aerodynamic performance and power efficiency of flapping wings by changing the local angle of attack. Traditional Fluid–Structure Interaction (FSI) models, based on Computational Structural Dynamics (CSD) and
Design, characterization and modelling of high efficient solar powered lighting systems
DEFF Research Database (Denmark)
Svane, Frederik; Nymann, Peter; Poulsen, Peter Behrensdorff
2016-01-01
This paper discusses some of the major challenges in the development of L2L (Light-2-Light) products. It’s the lack of efficient converter electronics, modelling tools for dimensioning and furthermore, characterization facilities to support the successful development of the products. We report...
2013-06-11
... Balanced Fund, Compass EMP Multi-Asset Growth Fund, Compass EMP Alternative Strategies Fund, Compass EMP Balanced Volatility Weighted Fund, Compass EMP Growth Volatility Weighted Fund, and Compass EMP... Efficient Model Portfolios, LLC and Compass EMP Funds Trust; Notice of Application June 4, 2013. AGENCY...
A hybrid version of swan for fast and efficient practical wave modelling
M. Genseberger (Menno); J. Donners
2016-01-01
In the Netherlands, for coastal and inland water applications, wave modelling with SWAN has become a main ingredient. However, computational times are relatively high. Therefore we investigated the parallel efficiency of the current MPI and OpenMP versions of SWAN. The MPI version is
Navigational efficiency in a biased and correlated random walk model of individual animal movement.
Bailey, Joseph D; Wallis, Jamie; Codling, Edward A
2018-01-01
Understanding how an individual animal is able to navigate through its environment is a key question in movement ecology that can give insight into observed movement patterns and the mechanisms behind them. Efficiency of navigation is important for behavioral processes at a range of different spatio-temporal scales, including foraging and migration. Random walk models provide a standard framework for modeling individual animal movement and navigation. Here we consider a vector-weighted biased and correlated random walk (BCRW) model for directed movement (taxis), where external navigation cues are balanced with forward persistence. We derive a mathematical approximation of the expected navigational efficiency for any BCRW of this form and confirm the model predictions using simulations. We demonstrate how the navigational efficiency is related to the weighting given to forward persistence and external navigation cues, and highlight the counter-intuitive result that for low (but realistic) levels of error on forward persistence, a higher navigational efficiency is achieved by giving more weighting to this indirect navigation cue rather than direct navigational cues. We discuss and interpret the relevance of these results for understanding animal movement and navigation strategies. © 2017 by the Ecological Society of America.
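A minimal simulation sketch of a vector-weighted BCRW and its navigational efficiency; the Gaussian angular noise and the efficiency definition (net goal-axis displacement per unit path length) are simplifying assumptions, not the paper's exact formulation:

```python
import math
import random

def bcrw_efficiency(w=0.5, sigma=0.3, steps=1000, seed=1):
    """Each heading is the angle of a weighted vector sum of a noisy
    goal-direction cue (weight w) and a noisy previous-heading cue
    (weight 1 - w); both noises are Gaussian with s.d. sigma (assumed).
    Returns net displacement along the goal axis per unit path length."""
    rng = random.Random(seed)
    theta = 0.0          # current heading; goal lies along +x
    x = y = 0.0
    for _ in range(steps):
        persist = theta + rng.gauss(0.0, sigma)  # forward-persistence cue
        bias = rng.gauss(0.0, sigma)             # external navigation cue
        vx = w * math.cos(bias) + (1 - w) * math.cos(persist)
        vy = w * math.sin(bias) + (1 - w) * math.sin(persist)
        theta = math.atan2(vy, vx)
        x += math.cos(theta)
        y += math.sin(theta)
    return x / steps

print(bcrw_efficiency(w=0.3), bcrw_efficiency(w=0.9))
```

Sweeping w in such a simulation is one way to reproduce the paper's comparison of weightings between direct and indirect cues.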
Compilation Of An Econometric Human Resource Efficiency Model For Project Management Best Practices
G. van Zyl; P. Venier
2006-01-01
The aim of the paper is to introduce a human resource efficiency model in order to rank the most important human resource driving forces for project management best practices. The results of the model will demonstrate how the human resource component of project management acts as the primary function to enhance organizational performance, codified through improved logical end-state programmes, work ethics and process contributions. Given the hypothesis that project management best practices i...
Hutton, C.; Wagener, T.; Freer, J. E.; Duffy, C.; Han, D.
2015-12-01
Distributed models offer the potential to resolve catchment systems in more detail, and therefore simulate the hydrological impacts of spatial changes in catchment forcing (e.g. landscape change). Such models may contain a large number of model parameters which are computationally expensive to calibrate. Even when calibration is possible, insufficient data can result in model parameter and structural equifinality. In order to help reduce the space of feasible models and supplement traditional outlet discharge calibration data, semi-quantitative information (e.g. knowledge of relative groundwater levels), may also be used to identify behavioural models when applied to constrain spatially distributed predictions of states and fluxes. The challenge is to combine these different sources of information together to identify a behavioural region of state-space, and efficiently search a large, complex parameter space to identify behavioural parameter sets that produce predictions that fall within this behavioural region. Here we present a methodology to incorporate different sources of data to efficiently calibrate distributed catchment models. Metrics of model performance may be derived from multiple sources of data (e.g. perceptual understanding and measured or regionalised hydrologic signatures). For each metric, an interval or inequality is used to define the behaviour of the catchment system, accounting for data uncertainties. These intervals are then combined to produce a hyper-volume in state space. The state space is then recast as a multi-objective optimisation problem, and the Borg MOEA is applied to first find, and then populate the hyper-volume, thereby identifying acceptable model parameter sets. We apply the methodology to calibrate the PIHM model at Plynlimon, UK by incorporating perceptual and hydrologic data into the calibration problem. Furthermore, we explore how to improve calibration efficiency through search initialisation from shorter model runs.
Spatial extrapolation of light use efficiency model parameters to predict gross primary production
Directory of Open Access Journals (Sweden)
Karsten Schulz
2011-12-01
Full Text Available To capture the spatial and temporal variability of the gross primary production as a key component of the global carbon cycle, the light use efficiency modeling approach in combination with remote sensing data has shown to be well suited. Typically, the model parameters, such as the maximum light use efficiency, are either set to a universal constant or to land class dependent values stored in look-up tables. In this study, we employ the machine learning technique support vector regression to explicitly relate the model parameters of a light use efficiency model calibrated at several FLUXNET sites to site-specific characteristics obtained by meteorological measurements, ecological estimations and remote sensing data. A feature selection algorithm extracts the relevant site characteristics in a cross-validation, and leads to an individual set of characteristic attributes for each parameter. With this set of attributes, the model parameters can be estimated at sites where a parameter calibration is not possible due to the absence of eddy covariance flux measurement data. This will finally allow a spatially continuous model application. The performance of the spatial extrapolation scheme is evaluated with a cross-validation approach, which shows the methodology to be well suited to recapture the variability of gross primary production across the study sites.
Cheng, Guang
2014-02-01
We consider efficient estimation of the Euclidean parameters in generalized partially linear additive models for longitudinal/clustered data when multiple covariates need to be modeled nonparametrically, and propose an estimation procedure based on a spline approximation of the nonparametric part of the model and the generalized estimating equations (GEE). Although the model in consideration is natural and useful in many practical applications, the literature on this model is very limited because of challenges in dealing with dependent data for nonparametric additive models. We show that the proposed estimators are consistent and asymptotically normal even if the covariance structure is misspecified. An explicit consistent estimate of the asymptotic variance is also provided. Moreover, we derive the semiparametric efficiency score and information bound under general moment conditions. By showing that our estimators achieve the semiparametric information bound, we effectively establish their efficiency in a stronger sense than what is typically considered for GEE. The derivation of our asymptotic results relies heavily on the empirical processes tools that we develop for the longitudinal/clustered data. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2014 ISI/BS.
The relative efficiency of Iranian's rural traffic police: a three-stage DEA model.
Rahimi, Habibollah; Soori, Hamid; Nazari, Seyed Saeed Hashemi; Motevalian, Seyed Abbas; Azar, Adel; Momeni, Eskandar; Javartani, Mehdi
2017-10-13
Road traffic injuries (RTIs) as a health problem compel governments to implement different interventions. Target achievement in this area requires effective and efficient measures. Efficiency evaluation of the traffic police, as one of the responsible administrators, is necessary for resource management. Therefore, this study was conducted to measure the efficiency of Iran's rural traffic police. This was an ecological study. To obtain a pure efficiency score, a three-stage DEA model was applied with seven input and three output variables. At the first stage, a crude efficiency score was measured with the BCC-O model. Next, to extract the effects of socioeconomic, demographic, traffic-count and road-infrastructure environmental variables and of statistical noise, a Stochastic Frontier Analysis (SFA) model was applied and the output values were modified to reflect similar environmental and statistical-noise conditions. Then, the pure efficiency score was measured using the modified outputs and the BCC-O model. In total, the efficiency scores of 198 police stations from 24 of 31 provinces were measured. The annual means (standard deviations) of damage, injury and fatal accidents were 247.7 (258.4), 184.9 (176.9), and 28.7 (19.5), respectively. Input averages were 5.9 (3.0) patrol teams, 0.5% (0.2) manpower proportion, 7.5 (2.9) patrol cars, 0.5 (1.3) motorcycles, 77,279.1 (46,794.7) penalties, 90.9 (2.8) cultural and educational activity score, and 0.7 (2.4) speed cameras. The SFA model showed non-significant differences between police station performances, with most of the differences attributed to the environment and random error. One-way main roads, byroads, traffic count and the number of households owning a motorcycle had significant positive relations with the inefficiency score; the length of freeway/highway and the literacy rate had significant negative relations. The pure efficiency score had a mean of 0.95 and SD of 0.09. Iran's traffic police has potential opportunity to reduce
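DEA efficiency scoring can be illustrated in the single-input, single-output case, where the CCR ratio form needs no linear programming; this is a toy stand-in for the multi-input BCC-O and SFA pipeline used in the study:

```python
def ccr_efficiency(inputs, outputs):
    """Single-input/single-output CCR-style efficiency: each decision-making
    unit's output/input ratio relative to the best observed ratio.
    A toy stand-in for the multi-input BCC-O linear programmes in the study."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical stations: patrol teams as input, prevented-accident score as output
print(ccr_efficiency([5.0, 10.0, 8.0], [20.0, 30.0, 40.0]))
```

A unit scoring 1.0 lies on the efficient frontier; scores below 1.0 measure how far output falls short of the frontier at the same input level.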
An Efficient Algorithm for Modelling Duration in Hidden Markov Models, with a Dramatic Application
DEFF Research Database (Denmark)
Hauberg, Søren; Sloth, Jakob
2008-01-01
For many years, the hidden Markov model (HMM) has been one of the most popular tools for analysing sequential data. One frequently used special case is the left-right model, in which the order of the hidden states is known. If knowledge of the duration of a state is available it is not possible...... to represent it explicitly with an HMM. Methods for modelling duration with HMM's do exist (Rabiner in Proc. IEEE 77(2):257---286, [1989]), but they come at the price of increased computational complexity. Here we present an efficient and robust algorithm for modelling duration in HMM's, and this algorithm...
DEFF Research Database (Denmark)
Li, Chunjian; Andersen, Søren Vang
2007-01-01
We propose two blind system identification methods that exploit the underlying dynamics of non-Gaussian signals. The two signal models to be identified are: an Auto-Regressive (AR) model driven by a discrete-state Hidden Markov process, and the same model whose output is perturbed by white Gaussi...... outputs. The signal models are general and suitable to numerous important signals, such as speech signals and base-band communication signals. Applications to speech analysis and blind channel equalization are given to exemplify the efficiency of the new methods....
Fay, Lindsey; Carll-White, Allison; Schadler, Aric; Isaacs, Kathy B; Real, Kevin
2017-10-01
The focus of this research was to analyze the impact of decentralized and centralized hospital design layouts on the delivery of efficient care and the resultant level of caregiver satisfaction. An interdisciplinary team conducted a multiphased pre- and postoccupancy evaluation of a cardiovascular service line in an academic hospital that moved from a centralized to decentralized model. This study examined the impact of walkability, room usage, allocation of time, and visibility to better understand efficiency in the care environment. A mixed-methods data collection approach was utilized, which included pedometer measurements of staff walking distances, room usage data, time studies in patient rooms and nurses' stations, visibility counts, and staff questionnaires yielding qualitative and quantitative results. Overall, the data comparing the centralized and decentralized models yielded mixed results. This study's centralized design was rated significantly higher in its ability to support teamwork and efficient patient care with decreased staff walking distances. The decentralized unit design was found to positively influence proximity to patients in a larger design footprint and contribute to increased visits to and time spent in patient rooms. Among the factors contributing to caregiver efficiency and satisfaction are nursing station design, an integrated team approach, and the overall physical layout of the space on walkability, allocation of caregiver time, and visibility. However, unit design alone does not solely impact efficiency, suggesting that designers must consider the broader implications of a culture of care and processes.
Directory of Open Access Journals (Sweden)
Ma Zheshu
2009-01-01
Full Text Available Indirectly- or externally-fired gas-turbines (IFGT or EFGT) are a novel technology under development for small and medium scale combined power and heat supplies, in combination with micro gas turbine technologies, mainly for the utilization of the waste heat from the turbine in a recuperative process and for the possibility of burning biomass or 'dirty' fuel, by employing a high temperature heat exchanger to avoid the combustion gases passing through the turbine. In this paper, by assuming that all fluid friction losses in the compressor and turbine are quantified by a corresponding isentropic efficiency and all global irreversibilities in the high temperature heat exchanger are taken into account by an effective efficiency, a one-dimensional model including power output and cycle efficiency formulation is derived for a class of real IFGT cycles. To illustrate and analyze the effect of operational parameters on IFGT efficiency, detailed numerical analysis and figures are produced. The results, summarized in figures, show that IFGT cycles are most efficient in low compression ratio ranges (3.0-6.0) and are fit for low power output circumstances integrating with micro gas turbine technology. The model derived can be used to analyze and forecast the performance of real IFGT configurations.
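A hedged sketch of an externally-fired Brayton-type cycle with isentropic efficiencies and a heat-exchanger effectiveness, not the paper's exact one-dimensional model; all temperatures are normalised by the compressor inlet temperature, and the default parameter values are assumptions:

```python
def efgt_efficiency(r_p, t_ratio=4.0, eta_c=0.85, eta_t=0.88, eps_hx=0.85, gamma=1.4):
    """Thermal efficiency of an idealised externally-fired gas-turbine cycle.
    Compressor/turbine irreversibility enters via isentropic efficiencies,
    the high-temperature heat exchanger via an effectiveness eps_hx.
    t_ratio is the heat-source temperature over the inlet temperature T1."""
    k = (gamma - 1.0) / gamma
    t1 = 1.0
    # compression with isentropic efficiency
    t2 = t1 * (1.0 + (r_p ** k - 1.0) / eta_c)
    # heat delivered through the external heat exchanger
    q_in = eps_hx * (t_ratio - t2)
    t3 = t2 + q_in  # turbine inlet temperature
    # expansion with isentropic efficiency
    t4 = t3 * (1.0 - eta_t * (1.0 - r_p ** (-k)))
    w_net = (t3 - t4) - (t2 - t1)
    return w_net / q_in if q_in > 0 else 0.0

for r_p in [3.0, 4.5, 6.0]:
    print(f"r_p = {r_p}: efficiency = {efgt_efficiency(r_p):.3f}")
```

Sweeping r_p in this way mirrors the paper's parametric figures, though the quantitative optimum depends on the loss parameters chosen.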
A theoretical model for prediction of deposition efficiency in cold spraying
International Nuclear Information System (INIS)
Li Changjiu; Li Wenya; Wang Yuyue; Yang Guanjun; Fukanuma, H.
2005-01-01
The deposition behavior of a spray particle stream with a particle size distribution was theoretically examined for cold spraying in terms of deposition efficiency as a function of particle parameters and spray angle. The theoretical relation was established between the deposition efficiency and spray angle. The experiments were conducted by measuring deposition efficiency at different driving gas conditions and different spray angles using gas-atomized copper powder. It was found that the theoretically estimated results agreed reasonably well with the experimental ones. Based on the theoretical model and experimental results, it was revealed that the distribution of particle velocity resulting from the particle size distribution significantly influences the deposition efficiency in cold spraying. It was necessary for the majority of particles to achieve a velocity higher than the critical velocity in order to improve the deposition efficiency. The normal component of particle velocity contributed to the deposition of the particle under the off-normal spray condition. The deposition efficiency of sprayed particles decreased owing to the decrease of the normal velocity component as spray was performed at an off-normal angle
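The core idea, that only particles whose substrate-normal velocity component exceeds the critical velocity deposit, can be sketched as follows; this is a simplified discrete-particle reading of the model, whereas the paper integrates over a continuous particle-size distribution:

```python
import math

def deposition_efficiency(velocities, masses, v_crit, spray_angle_deg=90.0):
    """Mass fraction of particles whose velocity component normal to the
    substrate meets the critical velocity. 90 degrees = normal incidence.
    A simplified discrete sketch of the paper's model."""
    normal = math.sin(math.radians(spray_angle_deg))
    deposited = sum(m for v, m in zip(velocities, masses) if v * normal >= v_crit)
    return deposited / sum(masses)

# Hypothetical particle stream (m/s, arbitrary mass units), v_crit = 550 m/s
print(deposition_efficiency([600.0, 500.0, 400.0], [1.0, 1.0, 2.0], 550.0))
```

Tilting the spray reduces the normal component v*sin(angle), so efficiency drops at off-normal angles even when the speed distribution is unchanged, matching the trend reported above.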
Evaluation model of wind energy resources and utilization efficiency of wind farm
Ma, Jie
2018-04-01
Due to the large amount of abandoned wind energy in wind farms, the establishment of a wind farm evaluation model is particularly important for the future development of wind farms. In this study, considering the wind farm's wind energy situation, a Wind Energy Resource Model (WERM) and a Wind Energy Utilization Efficiency Model (WEUEM) are established to conduct a comprehensive assessment of the wind farm. The Wind Energy Resource Model (WERM) contains average wind speed, average wind power density and turbulence intensity, which together assess wind energy resources. Based on our model, combined with the actual measurement data of a wind farm, we calculate the indicators, and the results are in line with the actual situation. The future development of the wind farm can be planned based on this result. Thus, the proposed approach to establishing a wind farm assessment model has application value.
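The WERM indicators named above can be sketched directly from a wind speed time series; the air density is an assumed standard value, not one from the study:

```python
from statistics import fmean, pstdev

RHO_AIR = 1.225  # kg/m^3, standard sea-level air density (assumed)

def wind_power_density(speeds):
    """Mean available wind power per unit rotor area: 0.5 * rho * <v^3>."""
    return 0.5 * RHO_AIR * fmean(v ** 3 for v in speeds)

def turbulence_intensity(speeds):
    """Standard deviation of wind speed over its mean."""
    return pstdev(speeds) / fmean(speeds)

speeds = [4.0, 6.0, 8.0]  # hypothetical measurements, m/s
print(wind_power_density(speeds), turbulence_intensity(speeds))
```

Note that power density uses the mean of the cubed speeds, not the cube of the mean speed; using the latter understates the resource whenever the wind is variable.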
Model of Efficiency Assessment of Regulation in the Banking Sector
Directory of Open Access Journals (Sweden)
Irina V. Larionova
2014-01-01
Full Text Available In this article, the modern system of regulation of the national banking sector is examined; in the author's view, it needs theoretical grounding, structuring, and a clear account of what efficient functioning means. Regulation is considered on a systemic basis, as a set of elements and a mechanism of their interaction, formed with regard to the target reference points of regulation. It is emphasized that regulation contains a contradiction: achieving financial stability of the banking sector, as a rule, constrains economic growth. The need to develop theoretical ideas about the efficiency of banking-sector regulation is especially relevant in light of recent events: the revocation of commercial banks' licences to conduct banking activity, the high cost of credit resources for economic agents, and the banking sector's insignificant contribution to economic growth. The author proposes criteria for the efficiency of banking-sector regulation: functional, operational, and socio-economic efficiency. Functional efficiency reflects the ability of each subsystem of regulation to carry out the functions prescribed by law. Operational efficiency describes the appropriateness of the costs that regulation imposes on the regulator and on commercial banks. Finally, socio-economic efficiency concerns the degree to which the banking sector meets the needs of the national economy, and the responsibility of banking business to society. For each criterion of efficiency of banking-sector regulation, a set of quantitative and qualitative indicators is offered, allowing a corresponding assessment of the working model of crediting. The aggregated expert assessment of the Russian system of regulation of the banking sector
Numerical modelling of high efficiency InAs/GaAs intermediate band solar cell
Imran, Ali; Jiang, Jianliang; Eric, Debora; Yousaf, Muhammad
2018-01-01
Quantum Dot (QD) intermediate band solar cells (IBSC) are among the most attractive candidates for the next generation of photovoltaic applications. In this paper, a theoretical model of an InAs/GaAs device is proposed, in which we calculate the effect of variation in the thickness of the intrinsic and IB layers on the efficiency of the solar cell using detailed balance theory. IB energies have been optimized for different IB layer thicknesses. A maximum efficiency of 46.6% is calculated for the IB material under maximum optical concentration.
Amin, Majdi Talal
Currently, there is no integrated dynamic simulation program for an energy efficient greenhouse coupled with an aquaponic system. This research is intended to promote the thermal management of greenhouses in order to provide sustainable food production with the lowest possible energy use and material waste. A brief introduction of greenhouses, passive houses, energy efficiency, renewable energy systems, and their applications are included for ready reference. An experimental working scaled-down energy-efficient greenhouse was built to verify and calibrate the results of a dynamic simulation model made using TRNSYS software. However, TRNSYS requires the aid of Google SketchUp to develop 3D building geometry. The simulation model was built following the passive house standard as closely as possible. The new simulation model was then utilized to design an actual greenhouse with Aquaponics. It was demonstrated that the passive house standard can be applied to improve upon conventional greenhouse performance, and that it is adaptable to different climates. The energy-efficient greenhouse provides the required thermal environment for fish and plant growth, while eliminating the need for conventional cooling and heating systems.
A resource allocation model to support efficient air quality management in South Africa
Directory of Open Access Journals (Sweden)
U Govender
2009-06-01
Full Text Available Research into management interventions that create the required enabling environment for growth and development in South Africa is both timely and appropriate. In the research reported in this paper, the authors investigated the level of efficiency of the Air Quality Units within the three spheres of government, viz. the National, Provincial, and Local Departments of Environmental Management in South Africa, with a view to developing a resource allocation model. The inputs to the model were calculated from the actual man-hours spent on twelve selected activities relating to project management, knowledge management and change management. The outputs assessed were aligned to the requirements of the mandates of these Departments. Several models were explored using multiple regression and stepwise techniques. The model that best explained the efficiency of the organisations from the input data was selected. Logistic regression analysis was identified as the most appropriate tool. This model is used to predict the required resources per Air Quality Unit in the different spheres of government in an attempt to support and empower the air quality regime to achieve improved output efficiency.
Energy Technology Data Exchange (ETDEWEB)
Hu, Rui, E-mail: rhu@anl.gov; Yu, Yiqi
2016-11-15
Highlights: • Developed a computationally efficient method for full-core conjugate heat transfer modeling of sodium fast reactors. • Applied fully-coupled JFNK solution scheme to avoid the operator-splitting errors. • The accuracy and efficiency of the method is confirmed with a 7-assembly test problem. • The effects of different spatial discretization schemes are investigated and compared to the RANS-based CFD simulations. - Abstract: For efficient and accurate temperature predictions of sodium fast reactor structures, a 3-D full-core conjugate heat transfer modeling capability is developed for an advanced system analysis tool, SAM. The hexagon lattice core is modeled with 1-D parallel channels representing the subassembly flow, and 2-D duct walls and inter-assembly gaps. The six sides of the hexagon duct wall and near-wall coolant region are modeled separately to account for different temperatures and heat transfer between coolant flow and each side of the duct wall. The Jacobian Free Newton Krylov (JFNK) solution method is applied to solve the fluid and solid field simultaneously in a fully coupled fashion. The 3-D full-core conjugate heat transfer modeling capability in SAM has been demonstrated by a verification test problem with 7 fuel assemblies in a hexagon lattice layout. Additionally, the SAM simulation results are compared with RANS-based CFD simulations. Very good agreements have been achieved between the results of the two approaches.
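The "fully coupled" point of the JFNK scheme above is that the fluid and solid residual equations are driven to zero simultaneously, avoiding operator-splitting error. A toy sketch with an invented two-equation fluid/solid energy balance (not SAM's actual equations), solved by Newton iteration with a finite-difference Jacobian in the same matrix-free spirit:

```python
# Hypothetical coupled fluid/solid energy balance (illustrative only):
#   fluid: m_cp*(Tf - T_in) - hA*(Ts - Tf) = 0
#   solid: q_gen            - hA*(Ts - Tf) = 0
m_cp, hA, T_in, q_gen = 5.0, 2.0, 600.0, 50.0

def residual(u):
    Tf, Ts = u
    return [m_cp * (Tf - T_in) - hA * (Ts - Tf),
            q_gen - hA * (Ts - Tf)]

def newton_coupled(u, tol=1e-10, eps=1e-6):
    # Solve both fields at once (no splitting), Newton with FD Jacobian.
    for _ in range(50):
        F = residual(u)
        if max(abs(f) for f in F) < tol:
            break
        J = [[0.0, 0.0], [0.0, 0.0]]
        for j in range(2):                       # finite-difference 2x2 Jacobian
            up = list(u)
            up[j] += eps
            Fp = residual(up)
            for i in range(2):
                J[i][j] = (Fp[i] - F[i]) / eps
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        du0 = (-F[0] * J[1][1] + F[1] * J[0][1]) / det   # Cramer's rule for J du = -F
        du1 = (-F[1] * J[0][0] + F[0] * J[1][0]) / det
        u = [u[0] + du0, u[1] + du1]
    return u

Tf, Ts = newton_coupled([600.0, 650.0])
```

For these made-up coefficients the balance gives Tf = 610 and Ts = 635; a real JFNK solver would replace the explicit Jacobian with Jacobian-vector products inside a Krylov method such as GMRES.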
Modeling of non-linear CHP efficiency curves in distributed energy systems
DEFF Research Database (Denmark)
Milan, Christian; Stadler, Michael; Cardoso, Gonçalo
2015-01-01
Distributed energy resources gain an increased importance in commercial and industrial building design. Combined heat and power (CHP) units are considered as one of the key technologies for cost and emission reduction in buildings. In order to make optimal decisions on investment and operation...... for these technologies, detailed system models are needed. These models are often formulated as linear programming problems to keep computational costs and complexity in a reasonable range. However, CHP systems involve variations of the efficiency for large nameplate capacity ranges and in case of part load operation......, which can be even of non-linear nature. Since considering these characteristics would turn the models into non-linear problems, in most cases only constant efficiencies are assumed. This paper proposes possible solutions to address this issue. For a mixed integer linear programming problem two...
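The core difficulty described above is that a nonlinear part-load efficiency curve must be embedded in a linear program. A common remedy is piecewise linearization of the fuel-input curve between breakpoints; a minimal sketch with an invented CHP efficiency curve (the breakpoints and coefficients are assumptions, not from the paper):

```python
def efficiency(load_frac):
    # Hypothetical part-load electrical efficiency: drops off at part load.
    return 0.38 * (1 - 0.45 * (1 - load_frac) ** 2)

def fuel_input(p_out, p_max):
    # exact (nonlinear) fuel use for electrical output p_out
    return p_out / efficiency(p_out / p_max)

# Piecewise-linear approximation of the fuel curve, as a MILP would see it
breakpoints = [0.2, 0.4, 0.6, 0.8, 1.0]   # allowed part-load fractions

def fuel_input_pwl(p_out, p_max):
    x = p_out / p_max
    for a, b in zip(breakpoints, breakpoints[1:]):
        if a <= x <= b:
            fa = fuel_input(a * p_max, p_max)
            fb = fuel_input(b * p_max, p_max)
            w = (x - a) / (b - a)
            return (1 - w) * fa + w * fb
    raise ValueError("load outside modelled range")

p_max = 200.0  # kW, assumed nameplate capacity
err = max(abs(fuel_input_pwl(p, p_max) - fuel_input(p, p_max)) / fuel_input(p, p_max)
          for p in [40 + i for i in range(161)])   # 40..200 kW
```

With these five breakpoints the worst-case relative error stays within a couple of percent, which is the usual trade-off between MILP size and accuracy.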
A Power Efficient Exaflop Computer Design for Global Cloud System Resolving Climate Models.
Wehner, M. F.; Oliker, L.; Shalf, J.
2008-12-01
Exascale computers would allow routine ensemble modeling of the global climate system at the cloud system resolving scale. Power and cost requirements of traditional architecture systems are likely to delay such capability for many years. We present an alternative route to the exascale using embedded processor technology to design a system optimized for ultra high resolution climate modeling. These power efficient processors, used in consumer electronic devices such as mobile phones, portable music players, cameras, etc., can be tailored to the specific needs of scientific computing. We project that a system capable of integrating a kilometer scale climate model a thousand times faster than real time could be designed and built in a five year time scale for US$75M with a power consumption of 3MW. This is cheaper, more power efficient and sooner than any other existing technology.
Guo, Hainan; Zhao, Yang; Niu, Tie; Tsui, Kwok-Leung
2017-01-01
The Hospital Authority (HA) is a statutory body managing all the public hospitals and institutes in Hong Kong (HK). In recent decades, the Hong Kong Hospital Authority (HKHA) has been making efforts to improve healthcare services, but there still exist problems like unfair resource allocation and poor management, as reported by the Hong Kong medical legislative committee. One critical consequence of these problems is the low healthcare efficiency of hospitals, leading to low satisfaction among patients. Moreover, HKHA also suffers from the conflict between limited resources and growing demand. An effective evaluation of HA is important for resource planning and healthcare decision making. In this paper, we propose a two-phase method to evaluate HA efficiency with the aim of reducing healthcare expenditure and improving healthcare service. Specifically, in Phase I, we measure the HKHA efficiency changes from 2000 to 2013 by applying a novel DEA-Malmquist index with undesirable factors. In Phase II, we further explore the impact of some exogenous factors (e.g., population density) on HKHA efficiency by a Tobit regression model. Empirical results show that there are significant differences between the efficiencies of different hospitals and clusters. In particular, it is found that public hospitals serving richer districts have relatively lower efficiency. To a certain extent, this reflects the socioeconomic reality in HK that people with better economic conditions prefer to receive higher quality service from private hospitals.
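Full DEA requires solving a linear program per decision-making unit, but the special case of one aggregate input and one output reduces to a ratio comparison, which is enough to show what a CCR efficiency score means. A minimal sketch with invented hospital data (not the HKHA figures):

```python
# Hypothetical units with one aggregate input (cost) and one output (cases)
hospitals = {
    "A": {"input": 100.0, "output": 90.0},
    "B": {"input": 120.0, "output": 132.0},
    "C": {"input": 80.0,  "output": 60.0},
}

def ccr_efficiency(units):
    # With a single input and single output, the CCR-DEA score of each unit
    # is its output/input ratio divided by the best observed ratio.
    ratios = {k: v["output"] / v["input"] for k, v in units.items()}
    best = max(ratios.values())
    return {k: r / best for k, r in ratios.items()}

scores = ccr_efficiency(hospitals)   # unit B defines the efficient frontier
```

Multi-input/multi-output DEA (and the Malmquist index built on it) generalizes this by letting each unit choose the input/output weights most favourable to itself, subject to no unit scoring above 1.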
Directory of Open Access Journals (Sweden)
Morteza Rahmani
2017-03-01
Full Text Available Supplier selection in the supply chain, a multi-criteria decision making problem containing both qualitative and quantitative criteria, is one of the main factors in a successful supply chain. To this purpose, Toloo and Nalchigar (2011) proposed an integrated data envelopment analysis (DEA) model to find the most efficient (best) supplier by considering imprecise data. In this paper, it is shown that their model randomly selects an efficient supplier as the most efficient, and therefore cannot find the most efficient supplier correctly. We also explain some other problems with this model and propose a modified model to resolve the drawbacks. The proposed model finds the most efficient supplier under imprecise data by solving only one mixed integer linear program. In addition, a new algorithm is proposed for determining and ranking the other efficient suppliers. The efficiency of the proposed approach is illustrated with imprecise data for 18 suppliers.
International Nuclear Information System (INIS)
Han, In-Su; Park, Sang-Kyun; Chung, Chang-Bock
2016-01-01
Highlights: • A proton exchange membrane fuel cell system is operationally optimized. • A constrained optimization problem is formulated to maximize fuel cell efficiency. • Empirical and semi-empirical models for most system components are developed. • Sensitivity analysis is performed to elucidate the effects of major operating variables. • The optimization results are verified by comparison with actual operation data. - Abstract: This paper presents an operation optimization method and demonstrates its application to a proton exchange membrane fuel cell system. A constrained optimization problem was formulated to maximize the efficiency of a fuel cell system by incorporating practical models derived from actual operations of the system. Empirical and semi-empirical models for most of the system components were developed based on artificial neural networks and semi-empirical equations. Prior to system optimizations, the developed models were validated by comparing simulation results with the measured ones. Moreover, sensitivity analyses were performed to elucidate the effects of major operating variables on the system efficiency under practical operating constraints. Then, the optimal operating conditions were sought at various system power loads. The optimization results revealed that the efficiency gaps between the worst and best operation conditions of the system could reach 1.2–5.5% depending on the power output range. To verify the optimization results, the optimal operating conditions were applied to the fuel cell system, and the measured results were compared with the expected optimal values. The discrepancies between the measured and expected values were found to be trivial, indicating that the proposed operation optimization method was quite successful for a substantial increase in the efficiency of the fuel cell system.
Kolotii, Andrii; Kussul, Nataliia; Skakun, Sergii; Shelestov, Andrii; Ostapenko, Vadim; Oliinyk, Tamara
2015-04-01
Efficient and timely crop monitoring and yield forecasting are important tasks for ensuring stability and sustainable economic development [1]. As winter crops play a prominent role in the agriculture of Ukraine, the main focus of this study is on winter wheat. Our previous research [2, 3] showed that using biophysical parameters of crops such as FAPAR (derived from the Geoland-2 portal for SPOT Vegetation data) is far more efficient for crop yield forecasting than NDVI derived from MODIS data, for the data available. In the current work, the efficiency of using biophysical parameters such as LAI, FAPAR and FCOVER (derived from SPOT Vegetation and PROBA-V data at a resolution of 1 km, and simulated within the WOFOST model) and the NDVI product (derived from MODIS) for winter wheat monitoring and yield forecasting is estimated. The SPIRITS tool developed by JRC is used as part of the crop monitoring workflow (vegetation anomaly detection, vegetation index and product analysis) and for yield forecasting. Statistics extraction is done for land-cover maps created at SRI within the FP-7 SIGMA project. The efficiency of using satellite-based biophysical products and those modelled with WOFOST is estimated. [1] N. Kussul, S. Skakun, A. Shelestov, O. Kussul, "Sensor Web approach to Flood Monitoring and Risk Assessment", in: IGARSS 2013, 21-26 July 2013, Melbourne, Australia, pp. 815-818. [2] F. Kogan, N. Kussul, T. Adamenko, S. Skakun, O. Kravchenko, O. Kryvobok, A. Shelestov, A. Kolotii, O. Kussul, and A. Lavrenyuk, "Winter wheat yield forecasting in Ukraine based on Earth observation, meteorological data and biophysical models," International Journal of Applied Earth Observation and Geoinformation, vol. 23, pp. 192-203, 2013. [3] Kussul O., Kussul N., Skakun S., Kravchenko O., Shelestov A., Kolotii A., "Assessment of relative efficiency of using MODIS data to winter wheat yield forecasting in Ukraine", in: IGARSS 2013, 21-26 July 2013, Melbourne, Australia, pp. 3235-3238.
Paprotny, D.; Morales Napoles, O.; Jonkman, S.N.
2017-01-01
Flood hazard is currently being researched on continental and global scales, using models of increasing complexity. In this paper we investigate a different, simplified approach, which combines statistical and physical models in place of conventional rainfall-run-off models to carry out flood
Niedermeier, Dennis; Ervens, Barbara; Clauss, Tina; Voigtländer, Jens; Wex, Heike; Hartmann, Susan; Stratmann, Frank
2014-01-01
In a recent study, the Soccer ball model (SBM) was introduced for modeling and/or parameterizing heterogeneous ice nucleation processes. The model applies classical nucleation theory. It allows for a consistent description of both apparently singular and stochastic ice nucleation behavior, by distributing contact angles over the nucleation sites of a particle population assuming a Gaussian probability density function. The original SBM utilizes the Monte Carlo technique, which hampers its usage in atmospheric models, as fairly time-consuming calculations must be performed to obtain statistically significant results. Thus, we have developed a simplified and computationally more efficient version of the SBM. We successfully used the new SBM to parameterize experimental nucleation data of, e.g., bacterial ice nucleation. Both SBMs give identical results; however, the new model is computationally less expensive as confirmed by cloud parcel simulations. Therefore, it is a suitable tool for describing heterogeneous ice nucleation processes in atmospheric models.
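The efficiency gain described above (replacing Monte Carlo sampling of contact angles with a direct evaluation over the Gaussian distribution) can be sketched with a toy nucleation-rate function. The rate expression and all parameters below are invented for illustration; the real SBM uses classical nucleation theory:

```python
import math
import random

def rate(theta):
    # Hypothetical nucleation rate J(theta) [1/s]: smaller contact angle,
    # larger rate. Stand-in for the CNT expression used by the SBM.
    return 5.0 * math.exp(-8.0 * theta)

def frozen_prob(theta, t=1.0):
    # probability that a droplet with contact angle theta froze by time t
    return 1.0 - math.exp(-rate(theta) * t)

mu, sigma = 1.0, 0.2   # assumed Gaussian contact-angle distribution (rad)

# Monte Carlo version (original SBM approach): sample many droplets
random.seed(0)
n_mc = 200_000
mc = sum(frozen_prob(random.gauss(mu, sigma)) for _ in range(n_mc)) / n_mc

# Simplified version: one quadrature over the Gaussian pdf, no sampling
def gauss_pdf(x):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

n, lo, hi = 4000, mu - 6 * sigma, mu + 6 * sigma
h = (hi - lo) / n
quad = sum(frozen_prob(lo + (i + 0.5) * h) * gauss_pdf(lo + (i + 0.5) * h)
           for i in range(n)) * h
```

Both routes give the same ensemble frozen fraction, but the quadrature needs a few thousand deterministic function evaluations instead of statistically noisy sampling, which is what makes the simplified SBM attractive inside atmospheric models.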
Directory of Open Access Journals (Sweden)
Cao Shi
2017-01-01
Full Text Available Due to the shortcomings of the power transmission currently used in ultrasound-assisted machining, and the varying transfer efficiency caused by the parameters of the electromagnetic converter, this paper proposes an analysis model of a new non-contact power transmission device with more stable output and higher transmission efficiency. Using Maxwell finite element analysis software, the paper then studies how the transfer efficiency of the new non-contact transformer varies, and compares the new type with the traditional type by setting the boundary conditions of the non-contact power supply device. Finally, in combination with the practical application, relevant requirements are put forward that have reference value for actual machining.
Besstremyannaya, Galina
2011-09-01
The paper explores the link between managerial performance and cost efficiency of 617 Japanese general local public hospitals in 1999-2007. Treating managerial performance as unobservable heterogeneity, the paper employs a panel data stochastic cost frontier model with latent classes. Financial parameters associated with better managerial performance are found to be positively significant in explaining the probability of belonging to the more efficient latent class. The analysis of latent class membership was consistent with the conjecture that unobservable technological heterogeneity reflected in the existence of the latent classes is related to managerial performance. The findings may support the cause for raising efficiency of Japanese local public hospitals by enhancing the quality of management. Copyright © 2011 John Wiley & Sons, Ltd.
An efficient soil water balance model based on hybrid numerical and statistical methods
Mao, Wei; Yang, Jinzhong; Zhu, Yan; Ye, Ming; Liu, Zhao; Wu, Jingwei
2018-04-01
Most soil water balance models only consider downward soil water movement driven by gravitational potential, and thus cannot simulate upward soil water movement driven by evapotranspiration especially in agricultural areas. In addition, the models cannot be used for simulating soil water movement in heterogeneous soils, and usually require many empirical parameters. To resolve these problems, this study derives a new one-dimensional water balance model for simulating both downward and upward soil water movement in heterogeneous unsaturated zones. The new model is based on a hybrid of numerical and statistical methods, and only requires four physical parameters. The model uses three governing equations to consider three terms that impact soil water movement, including the advective term driven by gravitational potential, the source/sink term driven by external forces (e.g., evapotranspiration), and the diffusive term driven by matric potential. The three governing equations are solved separately by using the hybrid numerical and statistical methods (e.g., linear regression method) that consider soil heterogeneity. The four soil hydraulic parameters required by the new models are as follows: saturated hydraulic conductivity, saturated water content, field capacity, and residual water content. The strength and weakness of the new model are evaluated by using two published studies, three hypothetical examples and a real-world application. The evaluation is performed by comparing the simulation results of the new model with corresponding results presented in the published studies, obtained using HYDRUS-1D and observation data. The evaluation indicates that the new model is accurate and efficient for simulating upward soil water flow in heterogeneous soils with complex boundary conditions. The new model is used for evaluating different drainage functions, and the square drainage function and the power drainage function are recommended. Computational efficiency of the new
Directory of Open Access Journals (Sweden)
Hossein Jafari Mansoorian
2017-01-01
Full Text Available Background & Aims of the Study: A feed-forward artificial neural network (FFANN) was developed to predict the efficiency of total petroleum hydrocarbon (TPH) removal from contaminated soil, using a soil washing process with Tween 80. The main objective of this study was to assess the performance of the developed FFANN model for estimating TPH removal. Materials and Methods: Several independent regressors, including pH, shaking speed, surfactant concentration and contact time, were used to describe the removal of TPH as a dependent variable in the FFANN model. Approximately 85% of the data set observations were used for training the model and the remaining 15% for testing. The performance of the model was compared with linear regression and assessed using the Root Mean Square Error (RMSE) as a goodness-of-fit measure. Results: For the prediction of TPH removal efficiency, an FFANN model with a three-layer 4-3-1 structure and a learning rate of 0.01 showed the best predictive results. The RMSE and R2 for the training and testing steps of the model were 2.596, 0.966, 10.70 and 0.78, respectively. Conclusion: About 80% of the variability in TPH removal efficiency can be described by the assessed regressors in the developed model. Thus, optimizing the soil washing process with respect to shaking speed, contact time, surfactant concentration and pH can improve TPH removal performance from polluted soils. The results of this study could be the basis for the application of FFANN to the assessment of soil washing processes and the control of petroleum hydrocarbon emissions into the environment.
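A 4-3-1 feed-forward network of the kind described (four inputs, one tanh hidden layer of three units, one linear output) is small enough to write out by hand. A minimal sketch trained by per-sample gradient descent on a synthetic stand-in relation (the target function and all data below are invented, not the study's soil-washing measurements):

```python
import math
import random

random.seed(1)

def target(x):
    # Invented smooth relation between 4 scaled inputs (pH, speed,
    # surfactant, time) and a removal-efficiency-like response.
    return 0.2 + 0.5 * x[2] + 0.2 * x[3] - 0.1 * (x[0] - 0.5) ** 2

xs = [[random.random() for _ in range(4)] for _ in range(80)]
data = [(x, target(x)) for x in xs]

# 4-3-1 network: tanh hidden layer, linear output
W1 = [[random.uniform(-0.5, 0.5) for _ in range(4)] for _ in range(3)]
b1 = [0.0] * 3
W2 = [random.uniform(-0.5, 0.5) for _ in range(3)]
b2 = 0.0

def forward(x):
    h = [math.tanh(sum(W1[j][i] * x[i] for i in range(4)) + b1[j]) for j in range(3)]
    return sum(W2[j] * h[j] for j in range(3)) + b2, h

def rmse():
    return math.sqrt(sum((forward(x)[0] - y) ** 2 for x, y in data) / len(data))

lr = 0.05
before = rmse()
for _ in range(300):                         # per-sample gradient descent
    for x, y in data:
        out, h = forward(x)
        err = out - y                        # d(MSE/2)/d(out)
        for j in range(3):
            grad_h = err * W2[j] * (1 - h[j] ** 2)   # backprop through tanh
            W2[j] -= lr * err * h[j]
            for i in range(4):
                W1[j][i] -= lr * grad_h * x[i]
            b1[j] -= lr * grad_h
        b2 -= lr * err
after = rmse()
```

On this nearly linear synthetic target the tiny network fits easily; the study's learning rate of 0.01 and train/test split would slot into the same loop.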
Modelling the effects of transport policy levers on fuel efficiency and national fuel consumption
International Nuclear Information System (INIS)
Kirby, H.R.; Hutton, B.; McQuaid, R.W.; Napier Univ., Edinburgh; Raeside, R.; Napier Univ., Edinburgh; Zhang, Xiayoan; Napier Univ., Edinburgh
2000-01-01
The paper provides an overview of the main features of a Vehicle Market Model (VMM) which estimates changes to vehicle stock/kilometrage, fuel consumed and CO2 emitted. It is disaggregated into four basic vehicle types. The model includes: the trends in fuel consumption of new cars, including the role of fuel price; a sub-model to estimate the fuel consumption of vehicles on roads characterised by user-defined driving cycle regimes; procedures that reflect the distribution of traffic across different area/road types; and the ability to vary the speed (or driving cycle) from one year to another, or as a result of traffic growth. The most significant variable influencing the fuel consumption of vehicles was consumption in the previous year, followed by dummy variables related to engine size, the time trend (a proxy for technological improvements), and then fuel price. Indeed, the effect of fuel price on car fuel efficiency was observed to be insignificant (at the 95% level) in two of the three versions of the model, and the size of the fuel price term was also the smallest. This suggests that the effectiveness of using fuel prices as a direct policy tool to reduce fuel consumption may be limited. Fuel prices may have significant indirect impacts (such as influencing people to purchase more fuel-efficient cars and vehicle manufacturers to invest in developing fuel-efficient technology), as may other factors such as the threat of legislation. (Author)
Directory of Open Access Journals (Sweden)
Davood Shishebori
2013-01-01
Full Text Available Nowadays, the efficient design of medical service systems plays a critical role in improving the performance and efficiency of medical services provided by governments. Accordingly, health care planners, especially in countries with a system based on a National Health Service (NHS), try to make decisions on where to locate and how to organize medical services under the conditions prevailing in different residential areas, so as to improve the geographic equity of access in the delivery of medical services while accounting for efficiency and cost, especially in crucial situations. Optimally locating such services, and suitably allocating demands to them, can therefore enhance the performance and responsiveness of the medical service system. In this paper, a multiobjective mixed integer nonlinear programming model is proposed to decide the locations of new medical service centers, the link roads that should be constructed or improved, and the urban residential centers covered by these medical service centers and link roads, under an investment budget constraint, in order to both minimize the total transportation cost of the overall system and minimize the total failure cost (i.e., maximize the system reliability of medical service centers under unforeseen situations). Then, the proposed model is linearized by suitable techniques. Moreover, a practical case study is presented in detail to illustrate the application of the proposed mathematical model. Finally, a sensitivity analysis is done to provide insight into the behavior of the proposed model in response to changes in key parameters of the problem.
SModelS v1.1 user manual: Improving simplified model constraints with efficiency maps
Ambrogi, Federico; Kraml, Sabine; Kulkarni, Suchita; Laa, Ursula; Lessa, Andre; Magerl, Veronika; Sonneveld, Jory; Traub, Michael; Waltenberger, Wolfgang
2018-06-01
SModelS is an automatized tool for the interpretation of simplified model results from the LHC. It allows one to decompose models of new physics obeying a Z2 symmetry into simplified model components, and to compare these against a large database of experimental results. The first release of SModelS, v1.0, used only cross section upper limit maps provided by the experimental collaborations. In this new release, v1.1, we extend the functionality of SModelS to efficiency maps. This increases the constraining power of the software, as efficiency maps make it possible to combine contributions to the same signal region from different simplified models. Other new features of version 1.1 include likelihood and χ2 calculations, extended information on topology coverage, an extended database of experimental results, and major speed upgrades for both the code and the database. We describe in detail the concepts and procedures used in SModelS v1.1, explaining in particular how upper limits and efficiency map results are dealt with in parallel. Detailed instructions for code usage are also provided.
Daigle, Matthew John; Goebel, Kai Frank
2010-01-01
Model-based prognostics captures system knowledge in the form of physics-based models of components, and how they fail, in order to obtain accurate predictions of end of life (EOL). EOL is predicted based on the estimated current state distribution of a component and expected profiles of future usage. In general, this requires simulations of the component using the underlying models. In this paper, we develop a simulation-based prediction methodology that achieves computational efficiency by performing only the minimal number of simulations needed in order to accurately approximate the mean and variance of the complete EOL distribution. This is performed through the use of the unscented transform, which predicts the means and covariances of a distribution passed through a nonlinear transformation. In this case, the EOL simulation acts as that nonlinear transformation. In this paper, we review the unscented transform, and describe how this concept is applied to efficient EOL prediction. As a case study, we develop a physics-based model of a solenoid valve, and perform simulation experiments to demonstrate improved computational efficiency without sacrificing prediction accuracy.
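The unscented transform described above replaces many Monte Carlo simulations with 2n+1 deterministic "sigma point" simulations. A minimal one-dimensional sketch, with a quadratic stand-in for the EOL simulation (the nonlinear function and the scaling parameters α, β, κ are illustrative choices; β is set to 0 here, which happens to make this quadratic example exact):

```python
import math
import random

def unscented_moments(mean, var, f, alpha=1.0, beta=0.0, kappa=2.0):
    # 1-D unscented transform: 2n+1 = 3 sigma points for n = 1
    n = 1
    lam = alpha ** 2 * (n + kappa) - n
    spread = math.sqrt((n + lam) * var)
    pts = [mean, mean + spread, mean - spread]
    wm = [lam / (n + lam), 1 / (2 * (n + lam)), 1 / (2 * (n + lam))]
    wc = [lam / (n + lam) + (1 - alpha ** 2 + beta)] + wm[1:]
    ys = [f(p) for p in pts]
    m = sum(w * y for w, y in zip(wm, ys))
    v = sum(w * (y - m) ** 2 for w, y in zip(wc, ys))
    return m, v

# Nonlinear "EOL simulation" stand-in: f(x) = x^2, input x ~ N(0, 1)
m, v = unscented_moments(0.0, 1.0, lambda x: x * x)

# Monte Carlo reference: many more function evaluations for the same answer
random.seed(0)
samples = [random.gauss(0.0, 1.0) ** 2 for _ in range(200_000)]
mc_m = sum(samples) / len(samples)
mc_v = sum((s - mc_m) ** 2 for s in samples) / len(samples)
```

Three evaluations recover the mean (1) and variance (2) that the 200,000-sample Monte Carlo run only approximates, which is exactly the efficiency argument made for EOL prediction.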
Modeling Energy Efficiency As A Green Logistics Component In Vehicle Assembly Line
Oumer, Abduaziz; Mekbib Atnaw, Samson; Kie Cheng, Jack; Singh, Lakveer
2016-11-01
This paper uses System Dynamics (SD) simulation to investigate the concept of green logistics, in terms of energy efficiency, in the automotive industry. Car manufacturing is considered one of the most energy-intensive industries. An efficient decision-making model is proposed that captures the impacts of strategic decisions on energy consumption and environmental sustainability. The sources of energy considered in this research are electricity and fuel, the two main types of energy used in a typical vehicle assembly plant. The model depicts performance measurement for process-specific energy measures of the painting, welding, and assembling processes. SD is the chosen simulation method, and the main green logistics issues considered are Carbon Dioxide (CO2) emissions and energy utilization. The model will assist decision makers in acquiring an in-depth understanding of the relationships between high-level planning and low-level operational activities and their effects on production, environmental impacts, and associated costs. The results of the SD model signify the existence of positive trade-offs between green energy-efficiency practices and the reduction of CO2 emissions.
Modeling light use efficiency in a subtropical mangrove forest equipped with CO2 eddy covariance
Directory of Open Access Journals (Sweden)
J. G. Barr
2013-03-01
Full Text Available Despite the importance of mangrove ecosystems in the global carbon budget, the relationships between environmental drivers and carbon dynamics in these forests remain poorly understood. This limited understanding is partly a result of the challenges associated with in situ flux studies. Tower-based CO2 eddy covariance (EC) systems are installed in only a few mangrove forests worldwide, and the longest EC record from the Florida Everglades contains less than 9 years of observations. A primary goal of the present study was to develop a methodology to estimate canopy-scale photosynthetic light use efficiency in this forest. These tower-based observations represent a basis for associating CO2 fluxes with canopy light use properties, and thus provide the means for utilizing satellite-based reflectance data for larger scale investigations. We present a model for mangrove canopy light use efficiency utilizing the enhanced vegetation index (EVI) derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) that is capable of predicting changes in mangrove forest CO2 fluxes caused by a hurricane disturbance and changes in regional environmental conditions, including temperature and salinity. Model parameters are solved for in a Bayesian framework. The model structure requires estimates of ecosystem respiration (RE), and we present the first ever tower-based estimates of mangrove forest RE derived from nighttime CO2 fluxes. Our investigation is also the first to show the effects of salinity on mangrove forest CO2 uptake, which declines 5% per 10 parts per thousand (ppt) increase in salinity. Light use efficiency in this forest declines with increasing daily photosynthetic active radiation, which is an important departure from the assumption of constant light use efficiency typically applied in satellite-driven models. The model developed here provides a framework for estimating CO2 uptake by these forests from reflectance data and
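The basic light-use-efficiency (LUE) structure behind such models is GPP = ε · fAPAR · PAR, with ε modified by environmental stressors. A toy sketch wiring in the salinity response reported above (about 5% decline per 10 ppt); the EVI-to-fAPAR mapping and ε_max value are invented placeholders, and the real model also varies ε with daily PAR:

```python
def gpp(par, evi, salinity_ppt, eps_max=1.5):
    # Hypothetical linear EVI -> fAPAR proxy, clipped to [0, 1]
    fapar = max(0.0, min(1.0, 1.2 * evi - 0.05))
    # Salinity stress: ~5% decline in light use efficiency per 10 ppt
    eps = eps_max * (1 - 0.05 * salinity_ppt / 10.0)
    return eps * fapar * par   # units follow eps_max (e.g., g C per MJ PAR)

fresh = gpp(par=10.0, evi=0.5, salinity_ppt=0.0)
salty = gpp(par=10.0, evi=0.5, salinity_ppt=20.0)   # 10% lower uptake
```

Satellite-driven products typically assume a constant ε; the abstract's point is that both salinity and high daily PAR make ε variable in this forest.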
Directory of Open Access Journals (Sweden)
Bryan Howell
Full Text Available Spinal cord stimulation (SCS is an alternative or adjunct therapy to treat chronic pain, a prevalent and clinically challenging condition. Although SCS has substantial clinical success, the therapy is still prone to failures, including lead breakage, lead migration, and poor pain relief. The goal of this study was to develop a computational model of SCS and use the model to compare activation of neural elements during intradural and extradural electrode placement. We constructed five patient-specific models of SCS. Stimulation thresholds predicted by the model were compared to stimulation thresholds measured intraoperatively, and we used these models to quantify the efficiency and selectivity of intradural and extradural SCS. Intradural placement dramatically increased stimulation efficiency and reduced the power required to stimulate the dorsal columns by more than 90%. Intradural placement also increased selectivity, allowing activation of a greater proportion of dorsal column fibers before spread of activation to dorsal root fibers, as well as more selective activation of individual dermatomes at different lateral deviations from the midline. Further, the results suggest that current electrode designs used for extradural SCS are not optimal for intradural SCS, and a novel azimuthal tripolar design increased stimulation selectivity, even beyond that achieved with an intradural paddle array. Increased stimulation efficiency is expected to increase the battery life of implantable pulse generators, increase the recharge interval of rechargeable implantable pulse generators, and potentially reduce stimulator volume. The greater selectivity of intradural stimulation may improve the success rate of SCS by mitigating the sensitivity of pain relief to malpositioning of the electrode. The outcome of this effort is a better quantitative understanding of how intradural electrode placement can potentially increase the selectivity and efficiency of SCS
STEPS: efficient simulation of stochastic reaction–diffusion models in realistic morphologies
Directory of Open Access Journals (Sweden)
Hepburn Iain
2012-05-01
Full Text Available Abstract Background: Models of cellular molecular systems are built from components such as biochemical reactions (including interactions between ligands and membrane-bound proteins), conformational changes, and active and passive transport. A discrete, stochastic description of the kinetics is often essential to capture the behavior of the system accurately. Where spatial effects play a prominent role, the complex morphology of cells may have to be represented, along with aspects such as chemical localization and diffusion. This high level of detail makes efficiency a particularly important consideration for software that is designed to simulate such systems. Results: We describe STEPS, a stochastic reaction–diffusion simulator developed with an emphasis on simulating biochemical signaling pathways accurately and efficiently. STEPS supports all the above-mentioned features, and well-validated support for SBML allows many existing biochemical models to be imported reliably. Complex boundaries can be represented accurately in externally generated 3D tetrahedral meshes imported by STEPS. The powerful Python interface facilitates model construction and simulation control. STEPS implements the composition and rejection method, a variation of the Gillespie SSA, supporting diffusion between tetrahedral elements within an efficient search and update engine. Additional support for well-mixed conditions and for deterministic model solution is implemented. Solver accuracy is confirmed with an original and extensive validation set consisting of isolated reaction, diffusion and reaction–diffusion systems. Accuracy imposes upper and lower limits on tetrahedron sizes, which are described in detail. By comparing to Smoldyn, we show how the voxel-based approach in STEPS is often faster than particle-based methods, with increasing advantage in larger systems, and by comparing to MesoRD we show the efficiency of the STEPS implementation. Conclusion: STEPS simulates
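The Gillespie SSA mentioned above can be illustrated with the plain direct method on a single well-mixed reversible isomerization A ↔ B (STEPS itself uses the composition-and-rejection variant and adds diffusion between tetrahedral voxels; rate constants and counts here are arbitrary):

```python
import random

def gillespie(nA, nB, k_f, k_b, t_end, rng):
    # Direct-method SSA for A <-> B with mass-action propensities.
    t = 0.0
    while True:
        a1, a2 = k_f * nA, k_b * nB       # reaction propensities
        a0 = a1 + a2
        if a0 == 0.0:
            return nA, nB
        t += rng.expovariate(a0)          # exponential time to next event
        if t > t_end:
            return nA, nB
        if rng.random() * a0 < a1:        # pick which reaction fired
            nA, nB = nA - 1, nB + 1
        else:
            nA, nB = nA + 1, nB - 1

rng = random.Random(42)
runs = [gillespie(500, 0, 1.0, 1.0, 4.0, rng) for _ in range(200)]
mean_A = sum(a for a, _ in runs) / len(runs)
```

With equal forward and backward rates the ensemble relaxes to nA ≈ nB ≈ 250, and every trajectory conserves the total molecule count, which is a handy correctness check for any SSA implementation.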
Measuring China’s regional energy and carbon emission efficiency with DEA models: A survey
International Nuclear Information System (INIS)
Meng, Fanyi; Su, Bin; Thomson, Elspeth; Zhou, Dequn; Zhou, P.
2016-01-01
Highlights: • China’s regional efficiency studies using data envelopment analysis are reviewed. • The main features of 46 studies published in 2006–2015 are summarized. • Six models are compared from the perspective of methodology and empirical results. • Empirical study of China’s 30 regional efficiency assessment in 1995–2012 is presented. - Abstract: The use of data envelopment analysis (DEA) in China’s regional energy efficiency and carbon emission efficiency (EE&CE) assessment has received increasing attention in recent years. This paper conducted a comprehensive survey of empirical studies published in 2006–2015 on China’s regional EE&CE assessment using DEA-type models. The main features used in previous studies were identified, and then the methodological framework for deriving the EE&CE indicators as well as six widely used DEA models were introduced. These DEA models were compared and applied to measure China’s regional EE&CE in 30 provinces/regions between 1995 and 2012. The empirical study indicates that China’s regional EE&CE remained stable in the 9th Five Year Plan (1996–2000), then decreased in the 10th Five Year Plan (2001–2005), and increased slightly in the 11th Five Year Plan (2006–2010). The east region of China had the highest EE&CE while the central area had the lowest. By way of conclusion, some useful points relating to model selection are summarized from both methodological and empirical aspects.
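The radial DEA models surveyed in work like this reduce to one small linear program per decision-making unit. A minimal sketch of the input-oriented CCR model using `scipy.optimize.linprog` (illustrative only; the data, orientations and model variants compared in the survey differ):

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y):
    """Input-oriented CCR DEA efficiency scores under constant returns to scale.
    X: (n_dmu, n_inputs) input matrix, Y: (n_dmu, n_outputs) output matrix."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    n = X.shape[0]
    scores = []
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]                # minimize theta
        # inputs:  X.T @ lam - theta * x_o <= 0
        A_in = np.c_[-X[o][:, None], X.T]
        # outputs: -Y.T @ lam <= -y_o  (i.e. Y.T @ lam >= y_o)
        A_out = np.c_[np.zeros((Y.shape[1], 1)), -Y.T]
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(X.shape[1]), -Y[o]],
                      bounds=[(0, None)] * (n + 1), method="highs")
        scores.append(res.x[0])
    return np.array(scores)

# Hypothetical data: two inputs (energy, labor), one output, three regions.
# Region 2 uses exactly twice region 0's inputs for the same output.
X = [[2.0, 3.0], [4.0, 2.0], [4.0, 6.0]]
Y = [[1.0], [1.0], [1.0]]
eff = dea_ccr_efficiency(X, Y)
```

A dominated unit such as region 2 here scores 0.5: it could radially contract both inputs by half and still produce its output, which is exactly the "ratio of total outputs to total inputs" intuition behind these efficiency measures.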
Efficient view based 3-D object retrieval using Hidden Markov Model
Jain, Yogendra Kumar; Singh, Roshan Kumar
2013-12-01
Recent research effort has been dedicated to view-based 3-D object retrieval because of the highly discriminative properties of 3-D objects and their multi-view representation. State-of-the-art methods depend heavily on their own camera array settings for capturing views of a 3-D object and use the complex Zernike descriptor and HAC for representative view selection, which limits their practical application and makes retrieval inefficient. Therefore, an efficient and effective algorithm is required for 3-D object retrieval. To move toward a general framework for efficient 3-D object retrieval that is independent of the camera array setting and avoids representative view selection, we propose an Efficient View-Based 3-D Object Retrieval (EVBOR) method using a Hidden Markov Model (HMM). In this framework, each object is represented by an independent set of views, meaning that views can be captured from any direction without any camera array restriction. The views (including query views) are clustered to generate view clusters, which are then used to build the query model with an HMM. In our proposed method, the HMM is used in two ways: in training (HMM estimation) and in retrieval (HMM decoding). The query model is trained using these view clusters, and retrieval works by combining the query model with the HMM. The proposed approach removes the static camera array setting for view capture and can be applied to any 3-D object database to retrieve objects efficiently and effectively. Experimental results demonstrate that the proposed scheme performs better than existing methods. [Figure not available: see fulltext.]
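Scoring a trained HMM against a sequence of query view-cluster labels comes down to the forward algorithm. The sketch below is a generic scaled forward pass with toy numbers, not the EVBOR implementation; the state count, transition and emission probabilities are hypothetical:

```python
import math

def hmm_log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm for a discrete HMM: returns log P(obs | model).
    In a view-based retrieval setting, obs would be the sequence of view-cluster
    labels and each database object would carry its own trained (pi, A, B)."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    log_p = 0.0
    for t in range(1, len(obs) + 1):
        s = sum(alpha)
        log_p += math.log(s)                  # accumulate scaling factors
        alpha = [a / s for a in alpha]        # rescale to avoid underflow
        if t == len(obs):
            break
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][obs[t]]
                 for j in range(n)]
    return log_p

# Toy 2-state model over 2 observable view clusters (hypothetical numbers)
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
ll = hmm_log_likelihood([0, 0, 1], pi, A, B)
```

Retrieval would then rank database objects by this log-likelihood of the query's view-cluster sequence under each object's model.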
Modeling energy efficiency to improve air quality and health effects of China’s cement industry
International Nuclear Information System (INIS)
Zhang, Shaohui; Worrell, Ernst; Crijns-Graus, Wina; Krol, Maarten; Bruine, Marco de; Geng, Guangpo; Wagner, Fabian; Cofala, Janusz
2016-01-01
Highlights: • An integrated model was used to model the co-benefits for China’s cement industry. • PM_2_._5 would decrease by 2–4% by 2030 through improved energy efficiency. • 10,000 premature deaths would be avoided per year relative to the baseline scenario. • Total benefits are about two times higher than the energy efficiency costs. - Abstract: Actions to reduce the combustion of fossil fuels often decrease GHG emissions as well as air pollutants and bring multiple benefits for improvement of energy efficiency, climate change, and air quality associated with human health benefits. China’s cement industry is the second largest energy consumer and a key emitter of CO_2 and air pollutants; it accounts for 7% of China’s total energy consumption, 15% of CO_2 emissions, and 14% of PM_2_._5 emissions, respectively. In this study, a state-of-the-art modeling framework is developed that comprises a number of different methods and tools within the same platform (i.e. provincial energy conservation supply curves, the Greenhouse Gases and Air Pollution Interactions and Synergies model, ArcGIS, the global chemistry Transport Model version 5, and Health Impact Assessment) to assess the potential for energy savings and emission mitigation of CO_2 and PM_2_._5, as well as the health impacts of pollution arising from China’s cement industry. The results show significant heterogeneity across provinces in terms of the potential for PM_2_._5 emission reduction and PM_2_._5 concentration, as well as health impacts caused by PM_2_._5. Implementation of selected energy efficiency measures would decrease total PM_2_._5 emissions by 2% (range: 1–4%) in 2020 and 4% (range: 2–8%) by 2030, compared to the baseline scenario. The reduction potential of provincial annual PM_2_._5 concentrations ranges from 0.03% to 2.21% by 2030 when compared to the baseline scenario. 10,000 premature deaths are avoided by 2020 and 2030 respectively relative to the baseline scenario. The
Hasan, Md Zobaer; Kamil, Anton Abdulbasah; Mustafa, Adli; Baten, Md Azizul
2012-01-01
The stock market is considered essential for economic growth and expected to contribute to improved productivity. An efficient pricing mechanism of the stock market can be a driving force for channeling savings into profitable investments and thus facilitating optimal allocation of capital. This study investigated the technical efficiency of selected groups of companies of the Bangladesh stock market, the Dhaka Stock Exchange (DSE), using the stochastic frontier production function approach. For this, the authors considered the Cobb-Douglas stochastic frontier in which the technical inefficiency effects are defined by a model with two distributional assumptions. Truncated normal and half-normal distributions were used in the model and both time-variant and time-invariant inefficiency effects were estimated. The results reveal that technical efficiency decreased gradually over the reference period and that the truncated normal distribution is preferable to the half-normal distribution for technical inefficiency effects. The value of technical efficiency was high for the investment group and low for the bank group, as compared with other groups in the DSE market for both distributions in a time-varying environment, whereas it was high for the investment group but low for the ceramic group as compared with other groups in the DSE market for both distributions in a time-invariant situation.
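The frontier idea behind this analysis can be previewed with the much simpler corrected-OLS (COLS) deterministic frontier, which shifts an OLS Cobb-Douglas fit to envelop the data; the paper's actual model adds a stochastic noise term and truncated-normal or half-normal inefficiency distributions, so this is only a sketch with synthetic data:

```python
import numpy as np

def cols_efficiency(log_inputs, log_output):
    """Corrected-OLS sketch of a deterministic Cobb-Douglas frontier:
    fit OLS in logs, shift the line up by the largest residual, and read
    technical efficiency as exp(residual - max residual), so the best
    firm scores exactly 1.0."""
    Xb = np.c_[np.ones(len(log_inputs)), log_inputs]   # add intercept
    beta, *_ = np.linalg.lstsq(Xb, log_output, rcond=None)
    resid = log_output - Xb @ beta
    return np.exp(resid - resid.max())

# Synthetic firms: one log-input, Cobb-Douglas elasticity 0.7 plus noise
rng = np.random.default_rng(1)
logK = np.linspace(0.0, 2.0, 30).reshape(-1, 1)
logY = 0.3 + 0.7 * logK[:, 0] + rng.normal(scale=0.05, size=30)
eff = cols_efficiency(logK, logY)
```

Every score lies in (0, 1], with 1.0 on the frontier; the stochastic frontier refines this by attributing part of each residual to random noise rather than inefficiency.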
Directory of Open Access Journals (Sweden)
Md Zobaer Hasan
Full Text Available The stock market is considered essential for economic growth and expected to contribute to improved productivity. An efficient pricing mechanism of the stock market can be a driving force for channeling savings into profitable investments and thus facilitating optimal allocation of capital. This study investigated the technical efficiency of selected groups of companies of the Bangladesh stock market, the Dhaka Stock Exchange (DSE), using the stochastic frontier production function approach. For this, the authors considered the Cobb-Douglas stochastic frontier in which the technical inefficiency effects are defined by a model with two distributional assumptions. Truncated normal and half-normal distributions were used in the model and both time-variant and time-invariant inefficiency effects were estimated. The results reveal that technical efficiency decreased gradually over the reference period and that the truncated normal distribution is preferable to the half-normal distribution for technical inefficiency effects. The value of technical efficiency was high for the investment group and low for the bank group, as compared with other groups in the DSE market for both distributions in a time-varying environment, whereas it was high for the investment group but low for the ceramic group as compared with other groups in the DSE market for both distributions in a time-invariant situation.
Efficient scatter model for simulation of ultrasound images from computed tomography data
D'Amato, J. P.; Lo Vercio, L.; Rubi, P.; Fernandez Vera, E.; Barbuzza, R.; Del Fresno, M.; Larrabide, I.
2015-12-01
Background and motivation: Real-time ultrasound simulation refers to the process of computationally creating fully synthetic ultrasound images instantly. Due to the high value of specialized low cost training for healthcare professionals, there is a growing interest in the use of this technology and the development of high fidelity systems that simulate the acquisition of echographic images. The objective is to create an efficient and reproducible simulator that can run either on notebooks or desktops using low cost devices. Materials and methods: We present an interactive ultrasound simulator based on CT data. This simulator is based on ray-casting and provides real-time interaction capabilities. The simulation of scattering that is coherent with the transducer position in real time is also introduced. Such noise is produced using a simplified model of multiplicative noise and convolution with point spread functions (PSF) tailored for this purpose. Results: The generation of scattering maps was revised for improved computational efficiency. This allowed a more efficient simulation of coherent scattering in the synthetic echographic images while providing highly realistic results. We describe some quality and performance metrics to validate these results, where a performance of up to 55 fps was achieved. Conclusion: The proposed technique for real-time scattering modeling provides realistic yet computationally efficient scatter distributions. The error between the original image and the simulated scattering image was compared for the proposed method and the state-of-the-art, showing negligible differences in its distribution.
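The simplified scatter model described (multiplicative noise followed by convolution with a PSF) can be sketched as below; the Rayleigh noise model, PSF coefficients and image size are illustrative assumptions, not the simulator's actual parameters:

```python
import numpy as np

def speckle(echogenicity, psf, seed=0):
    """Sketch of a multiplicative-noise-plus-PSF scatter model: draw
    Rayleigh-distributed multiplicative noise on a tissue echogenicity map,
    then convolve with a small point spread function (direct 2D convolution,
    edge padding)."""
    rng = np.random.default_rng(seed)
    noisy = echogenicity * rng.rayleigh(scale=1.0, size=echogenicity.shape)
    pad_r, pad_c = psf.shape[0] // 2, psf.shape[1] // 2
    padded = np.pad(noisy, ((pad_r, pad_r), (pad_c, pad_c)), mode="edge")
    out = np.zeros_like(noisy)
    for i in range(out.shape[0]):          # direct convolution; fine for a 3x3 PSF
        for j in range(out.shape[1]):
            out[i, j] = np.sum(padded[i:i + psf.shape[0],
                                      j:j + psf.shape[1]] * psf[::-1, ::-1])
    return out

tissue = np.ones((16, 16)) * 0.5                       # homogeneous region
psf = np.outer([0.25, 0.5, 0.25], [0.25, 0.5, 0.25])   # separable smoothing PSF
img = speckle(tissue, psf)
```

In a real simulator the PSF would be tailored to the transducer and the noise map kept coherent with the probe position, as the abstract describes.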
Gas dynamic design of the pipe line compressor with 90% efficiency. Model test approval
Galerkin, Y.; Rekstin, A.; Soldatova, K.
2015-08-01
Gas dynamic design of the 32 MW pipeline compressor was made for PAO SMPO (Sumy, Ukraine). The technical specification requires a compressor efficiency of 90%. The customer proposed a favorable scheme: a single-stage design with a console impeller and axial inlet. The authors used the standard optimization methodology for 2D impellers. The original methodology of internal scroll profiling was used to minimize efficiency losses. The radically improved 5th version of the Universal modeling method computer programs was used for precise calculation of expected performances. The customer performed model tests at 1:2 scale. The tests confirmed the calculated parameters at the design point (maximum efficiency of 90%) and over the whole range of flow rates. As far as the authors know, no other compressor has achieved such efficiency. The principles and methods of the gas-dynamic design are presented below. The data for the 32 MW compressor were presented by the customer in their report at the 16th International Compressor Conference (September 2014, Saint Petersburg) and later transferred to the authors.
Xu, Zhiqiang
2017-02-16
Attributed graph clustering, also known as community detection on attributed graphs, has attracted much interest recently due to the ubiquity of attributed graphs in real life. Many algorithms have been proposed for this problem, which are either distance based or model based. However, model selection in attributed graph clustering has not been well addressed; that is, most existing algorithms assume the cluster number to be known a priori. In this paper, we propose two efficient approaches for attributed graph clustering with automatic model selection. The first approach is a popular Bayesian nonparametric method, while the second approach is an asymptotic method based on a recently proposed model selection criterion, the factorized information criterion. Experimental results on both synthetic and real datasets demonstrate that our approaches for attributed graph clustering with automatic model selection significantly outperform the state-of-the-art algorithm.
Xu, Zhiqiang; Cheng, James; Xiao, Xiaokui; Fujimaki, Ryohei; Muraoka, Yusuke
2017-01-01
Attributed graph clustering, also known as community detection on attributed graphs, has attracted much interest recently due to the ubiquity of attributed graphs in real life. Many algorithms have been proposed for this problem, which are either distance based or model based. However, model selection in attributed graph clustering has not been well addressed; that is, most existing algorithms assume the cluster number to be known a priori. In this paper, we propose two efficient approaches for attributed graph clustering with automatic model selection. The first approach is a popular Bayesian nonparametric method, while the second approach is an asymptotic method based on a recently proposed model selection criterion, the factorized information criterion. Experimental results on both synthetic and real datasets demonstrate that our approaches for attributed graph clustering with automatic model selection significantly outperform the state-of-the-art algorithm.
Optimization of ultrasonic array inspections using an efficient hybrid model and real crack shapes
Energy Technology Data Exchange (ETDEWEB)
Felice, Maria V., E-mail: maria.felice@bristol.ac.uk [Department of Mechanical Engineering, University of Bristol, Bristol, U.K. and NDE Laboratory, Rolls-Royce plc., Bristol (United Kingdom); Velichko, Alexander, E-mail: p.wilcox@bristol.ac.uk; Wilcox, Paul D., E-mail: p.wilcox@bristol.ac.uk [Department of Mechanical Engineering, University of Bristol, Bristol (United Kingdom); Barden, Tim; Dunhill, Tony [NDE Laboratory, Rolls-Royce plc., Bristol (United Kingdom)
2015-03-31
Models which simulate the interaction of ultrasound with cracks can be used to optimize ultrasonic array inspections, but this approach can be time-consuming. To overcome this issue an efficient hybrid model is implemented which includes a finite element method that requires only a single layer of elements around the crack shape. Scattering matrices are used to capture the scattering behavior of the individual cracks, and a discussion of the angular degrees of freedom of elastodynamic scatterers is included. Real crack shapes are obtained from X-ray Computed Tomography images of cracked parts and these shapes are input into the hybrid model. The effect of using real crack shapes instead of straight notch shapes is demonstrated. An array optimization methodology which incorporates the hybrid model, an approximate single-scattering relative noise model and the real crack shapes is then described.
Seaman, Shaun R; Hughes, Rachael A
2018-06-01
Estimating the parameters of a regression model of interest is complicated by missing data on the variables in that model. Multiple imputation is commonly used to handle these missing data. Joint model multiple imputation and full-conditional specification multiple imputation are known to yield imputed data with the same asymptotic distribution when the conditional models of full-conditional specification are compatible with that joint model. We show that this asymptotic equivalence of imputation distributions does not imply that joint model multiple imputation and full-conditional specification multiple imputation will also yield asymptotically equally efficient inference about the parameters of the model of interest, nor that they will be equally robust to misspecification of the joint model. When the conditional models used by full-conditional specification multiple imputation are linear, logistic and multinomial regressions, these are compatible with a restricted general location joint model. We show that multiple imputation using the restricted general location joint model can be substantially more asymptotically efficient than full-conditional specification multiple imputation, but this typically requires very strong associations between variables. When associations are weaker, the efficiency gain is small. Moreover, full-conditional specification multiple imputation is shown to be potentially much more robust than joint model multiple imputation using the restricted general location model to misspecification of that model when there is substantial missingness in the outcome variable.
Modelling household responses to energy efficiency interventions via system dynamics and survey data
Directory of Open Access Journals (Sweden)
S Davis
2010-12-01
Full Text Available An application of building a system dynamics model of the way households might respond to interventions aimed at reducing energy consumption (specifically the use of electricity) is described in this paper. A literature review of past research is used to build an initial integrated model of household consumption, and this model is used to generate a small number of research hypotheses about how households possessing different characteristics might react to various types of interventions. These hypotheses are tested using data gathered from an efficiency intervention conducted in a town in the South African Western Cape in which households were able to exchange regular light bulbs for more efficient compact fluorescent lamp light bulbs. Our experiences are (a) that a system dynamics approach proved useful in advancing a non-traditional point of view for which, for historical and economic reasons, data were not abundantly available; (b) that, in areas where traditional models are heavily quantitative, some scepticism towards a system dynamics model may be expected; and (c) that a statistical comparison of model results by means of empirical data may be an effective tool in reducing such scepticism.
An efficient numerical progressive diagonalization scheme for the quantum Rabi model revisited
International Nuclear Information System (INIS)
Pan, Feng; Bao, Lina; Dai, Lianrong; Draayer, Jerry P
2017-01-01
An efficient numerical progressive diagonalization scheme for the quantum Rabi model is revisited. The advantage of the scheme lies in the fact that the quantum Rabi model can be solved almost exactly by using a scheme that involves only a finite set of one-variable polynomial equations. The scheme is especially efficient for a specified eigenstate of the model, for example, the ground state. Some low-lying level energies of the model for several sets of parameters are calculated, of which one set of results is compared to that obtained from Braak’s exact solution proposed recently. It is shown that the derivative of the entanglement measure, defined in terms of the reduced von Neumann entropy, with respect to the coupling parameter does reach a maximum near the critical point deduced from the classical limit of the Dicke model, which may provide a probe of the critical point of the crossover in finite quantum many-body systems, such as the quantum Rabi model. (paper)
Efficient 3D frequency response modeling with spectral accuracy by the rapid expansion method
Chu, Chunlei
2012-07-01
Frequency responses of seismic wave propagation can be obtained either by directly solving the frequency domain wave equations or by transforming the time domain wavefields using the Fourier transform. The former approach requires solving systems of linear equations, which becomes progressively difficult to tackle for larger scale models and for higher frequency components. In contrast, the latter approach can be efficiently implemented using explicit time integration methods in conjunction with running summations as the computation progresses. Commonly used explicit time integration methods correspond to truncated Taylor series approximations that can cause significant errors for large time steps. The rapid expansion method (REM) uses the Chebyshev expansion and offers an optimal solution to the second-order-in-time wave equations. When applying the Fourier transform to the time domain wavefield solution computed by the REM, we can derive a frequency response modeling formula that has the same form as the original time domain REM equation but with different summation coefficients. In particular, the summation coefficients for the frequency response modeling formula correspond to the Fourier transform of those for the time domain modeling equation. As a result, we can directly compute frequency responses from the Chebyshev expansion polynomials rather than from the time domain wavefield snapshots as other time domain frequency response modeling methods do. When combined with the pseudospectral method in space, this new frequency response modeling method can produce spectrally accurate results with high efficiency. © 2012 Society of Exploration Geophysicists.
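The time-domain route that the abstract contrasts with the REM, a running Fourier summation accumulated during explicit time stepping, can be sketched as below. This toy uses second-order finite differences on a 1D line with an illustrative source, not the Chebyshev/REM operator or a pseudospectral discretization:

```python
import numpy as np

def frequency_response_1d(nx=200, nt=2000, dx=5.0, dt=0.001, c=2000.0, freq=10.0):
    """Running-summation frequency response of a 1D second-order FD wavefield:
    the DFT at one target frequency is accumulated as time stepping proceeds,
    so no snapshots need to be stored. Grid, velocity and source parameters
    here are illustrative (CFL number c*dt/dx = 0.4, stable)."""
    u_prev = np.zeros(nx)
    u = np.zeros(nx)
    spectrum = np.zeros(nx, dtype=complex)
    r2 = (c * dt / dx) ** 2
    for it in range(nt):
        t = it * dt
        u_next = 2 * u - u_prev
        u_next[1:-1] += r2 * (u[2:] - 2 * u[1:-1] + u[:-2])
        # Gaussian-pulse source injected at the center of the line
        u_next[nx // 2] += np.exp(-((t - 0.1) * 40.0) ** 2) * dt ** 2
        # running summation: accumulate the DFT at the target frequency
        spectrum += u_next * np.exp(-2j * np.pi * freq * t) * dt
        u_prev, u = u, u_next
    return spectrum

spec = frequency_response_1d()   # complex response at 10 Hz along the line
```

The REM version replaces the Taylor-like time stepping with a Chebyshev expansion and, as the abstract notes, transfers the Fourier transform onto the expansion coefficients themselves.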
Energy-Efficiency Retrofits in Small-Scale Multifamily Rental Housing: A Business Model
DeChambeau, Brian
The goal of this thesis is to develop a real estate investment model that creates a financial incentive for property owners to perform energy efficiency retrofits in small multifamily rental housing in southern New England. The medium for this argument is a business plan that is backed by a review of the literature and input from industry experts. In addition to industry expertise, the research covers four main areas: the context of green building, efficient building technologies, precedent programs, and the Providence, RI real estate market for the business plan. The thesis concludes that the proposed model can improve the profitability of real estate investment in small multifamily rental properties, though the extent to which this is possible depends partially on utility-run incentive programs and the capital available to invest in retrofit measures.
Weng Hoe, Lam; Jinn, Lim Shun; Weng Siew, Lam; Hai, Tey Kim
2018-04-01
In Malaysia, the construction sector plays an essential part in driving the development of the Malaysian economy. The construction industry is an economic investment and its relationship with economic development is well posited. However, the efficiency of the construction sector companies listed on the Kuala Lumpur Stock Exchange (KLSE) has not been actively studied by past researchers using the Data Envelopment Analysis (DEA) model. Hence, the purpose of this study is to examine the financial performance of the listed construction sector companies in Malaysia in 2015. The results of this study show that the efficiency of construction sector companies can be obtained by using the DEA model through ratio analysis, defined as the ratio of total outputs to total inputs. This study is significant because the inefficient companies are identified for potential improvement.
Data-driven modeling and real-time distributed control for energy efficient manufacturing systems
International Nuclear Information System (INIS)
Zou, Jing; Chang, Qing; Arinez, Jorge; Xiao, Guoxian
2017-01-01
As manufacturers face the challenges of increasing global competition and energy saving requirements, it is imperative to seek out opportunities to reduce energy waste and overall cost. In this paper, a novel data-driven stochastic manufacturing system modeling method is proposed to identify and predict energy saving opportunities and their impact on production. A real-time distributed feedback production control policy, which integrates the current and predicted system performance, is established to improve the overall profit and energy efficiency. A case study is presented to demonstrate the effectiveness of the proposed control policy. - Highlights: • A data-driven stochastic manufacturing system model is proposed. • Real-time system performance and energy saving opportunity identification method is developed. • Prediction method for future potential system performance and energy saving opportunity is developed. • A real-time distributed feedback control policy is established to improve energy efficiency and overall system profit.
An Empirical LTE Smartphone Power Model with a View to Energy Efficiency Evolution
DEFF Research Database (Denmark)
Lauridsen, Mads; Noël, Laurent; Sørensen, Troels Bundgaard
2014-01-01
Smartphone users struggle with short battery life, and this affects their device satisfaction level and usage of the network. To evaluate how chipset manufacturers and mobile network operators can improve the battery life, we propose a Long Term Evolution (LTE) smartphone power model. The idea ... measurements made on state-of-the-art LTE smartphones. Discontinuous Reception (DRX) sleep mode is also modeled, because it is one of the most effective methods to improve smartphone battery life. Energy efficiency has generally improved with each Radio Access Technology (RAT) generation, and to see this evolution, we compare the energy efficiency of the latest LTE devices with devices based on Enhanced Data rates for GSM Evolution (EDGE), High Speed Packet Access (HSPA), and Wi-Fi. With further generations of RAT systems we expect further improvements. To this end, we discuss the new LTE features, Carrier...
A relativistic model of electron cyclotron current drive efficiency in tokamak plasmas
Directory of Open Access Journals (Sweden)
Lin-Liu Y.R.
2012-09-01
Full Text Available A fully relativistic model of electron cyclotron current drive (ECCD) efficiency based on the adjoint function techniques is considered. Numerical calculations of the current drive efficiency in a tokamak using the variational approach are performed. A fully relativistic extension of the variational principle with the modified basis functions for the Spitzer function with momentum conservation in electron-electron collisions is described in general tokamak geometry. The model developed generalizes that of Marushchenko (N.B. Marushchenko et al., Fusion Sci. & Tech., 2009): it is extended for arbitrary temperatures, covers exactly the asymptotic limit for u ≫ 1 when Z → ∞, and is suitable for ray-tracing calculations.
An adaptive grid to improve the efficiency and accuracy of modelling underwater noise from shipping
Trigg, Leah; Chen, Feng; Shapiro, Georgy; Ingram, Simon; Embling, Clare
2017-04-01
Underwater noise from shipping is becoming a significant concern and has been listed as a pollutant under Descriptor 11 of the Marine Strategy Framework Directive. Underwater noise models are an essential tool to assess and predict noise levels for regulatory procedures such as environmental impact assessments and ship noise monitoring. There are generally two approaches to noise modelling. The first is based on simplified energy flux models, assuming either spherical or cylindrical propagation of sound energy. These models are very quick but they ignore important water column and seabed properties, and produce significant errors in areas subject to temperature stratification (Shapiro et al., 2014). The second type of model (e.g. ray-tracing and parabolic equation) is based on an advanced physical representation of sound propagation. However, these acoustic propagation models are computationally expensive to execute. Shipping noise modelling requires spatial discretization in order to group noise sources together using a grid. A uniform grid size is often selected to achieve either the greatest efficiency (i.e. speed of computations) or the greatest accuracy. In contrast, this work aims to produce efficient and accurate noise level predictions by presenting an adaptive grid where cell size varies with distance from the receiver. The spatial range over which a certain cell size is suitable was determined by calculating the distance from the receiver at which propagation loss becomes uniform across a grid cell. The computational efficiency and accuracy of the resulting adaptive grid was tested by comparing it to uniform 1 km and 5 km grids. These represent an accurate and a computationally efficient grid, respectively. For a case study of the Celtic Sea, an application of the adaptive grid over an area of 160×160 km reduced the number of model executions required from 25600 for a 1 km grid to 5356 in December and to between 5056 and 13132 in August, which
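The core of the adaptive grid is a cell size that grows with range from the receiver. A minimal sketch with hypothetical band edges (the abstract gives the 1 km and 5 km extremes but not the actual switching distances, so the bands below are assumptions):

```python
def adaptive_cell_size(distance_km,
                       bands=((5.0, 1.0), (25.0, 2.5), (float("inf"), 5.0))):
    """Pick a grid cell size (km) that grows with distance from the receiver.
    bands: (max_range_km, cell_size_km) pairs, in increasing range order;
    the band edges here are illustrative, not the study's calibrated values."""
    for max_range, cell in bands:
        if distance_km <= max_range:
            return cell
    return bands[-1][1]

# Fine 1 km cells near the receiver, coarse 5 km cells far away
sizes = [adaptive_cell_size(d) for d in (2.0, 10.0, 100.0)]  # → [1.0, 2.5, 5.0]
```

In the study the switch points are chosen where propagation loss becomes uniform across a cell, so accuracy near the receiver is preserved while distant sources are aggregated cheaply.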
Soft Sensors: Chemoinformatic Model for Efficient Control and Operation in Chemical Plants.
Funatsu, Kimito
2016-12-01
A soft sensor is a statistical model that serves as an essential tool for controlling pharmaceutical, chemical and industrial plants. I introduce soft sensors, their roles, applications and problems, and research examples such as adaptive soft sensors, database monitoring and efficient process control. The use of soft sensors enables chemical industrial plants to be operated more effectively and stably. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
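The idea can be illustrated with a minimal linear soft sensor: regress a hard-to-measure quality variable on easily measured process variables. Industrial soft sensors typically use PLS or the adaptive/recursive variants mentioned above, so this is only a sketch on synthetic plant data:

```python
import numpy as np

def fit_soft_sensor(X, y):
    """Fit a linear soft sensor by least squares: X holds easily measured
    process variables (e.g. temperatures, flows), y the lab-measured quality
    variable the sensor should infer online."""
    Xb = np.c_[np.ones(len(X)), X]                 # add intercept column
    coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return coef

def predict(coef, X):
    """Online inference of the quality variable from process measurements."""
    return np.c_[np.ones(len(X)), X] @ coef

# Synthetic plant data: two process variables predict product quality
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 1.0 + 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.01, size=200)
coef = fit_soft_sensor(X, y)
```

An adaptive soft sensor of the kind discussed in the article would refit or recursively update `coef` as the process drifts, rather than fitting once.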
Model-Based Analysis and Efficient Operation of a Glucose Isomerization Reactor Plant
DEFF Research Database (Denmark)
Papadakis, Emmanouil; Madsen, Ulrich; Pedersen, Sven
2015-01-01
efficiency. The objective of this study is the application of the developed framework to an industrial case study of a glucose isomerization (GI) reactor plant that is part of a corn refinery, with the aim of improving the productivity of the process. Therefore, a multi-scale reactor model is developed for use as a building block for the GI reactor plant simulation. An optimal operation strategy is proposed on the basis of the simulation results...
International Nuclear Information System (INIS)
Casas Galiano, G.; Grau Malonda, A.
1994-01-01
An intelligent computer program has been developed to obtain the mathematical formulae to compute the probabilities and reduced energies of the different atomic rearrangement pathways following electron-capture decay. Creation and annihilation operators for Auger and X processes have been introduced. Taking into account the symmetries associated with each process, 262 different pathways were obtained. This model allows us to determine the influence of M-electron capture on the counting efficiency when the atomic number of the nuclide is high
Herding, minority game, market clearing and efficient markets in a simple spin model framework
Czech Academy of Sciences Publication Activity Database
Krištoufek, Ladislav; Vošvrda, Miloslav
2017-01-01
Vol. 54, No. 1 (2017), p. 148-155. ISSN 1007-5704 R&D Projects: GA ČR(CZ) GBP402/12/G097 EU Projects: European Commission(XE) 612955 - FINMAP Institutional support: RVO:67985556 Keywords: Ising model * Efficient market hypothesis * Monte Carlo simulation Subject RIV: AH - Economics OBOR OECD: Applied Economics, Econometrics Impact factor: 2.784, year: 2016 http://library.utia.cas.cz/separaty/2017/E/kristoufek-0474986.pdf
Fuzzy-DEA model for measuring the efficiency of transport quality
Directory of Open Access Journals (Sweden)
Dragan S. Pamučar
2011-10-01
Full Text Available Data envelopment analysis (DEA) is becoming increasingly important as a tool for evaluating and improving the performance of manufacturing and service operations. It has been extensively applied in performance evaluation and benchmarking of schools, hospitals, bank branches, production plants, etc. DEA enables mathematical programming for implicit evaluation of the ratio between a number of input and output performance parameters. The result is quantification of the efficiency of business opportunities, providing insight into flaws at the level of top management. Levels of efficiency determined under the same parameters make this analytical process objective and allow for the application of best practices based on the assessment of overall efficiency. This paper presents a fuzzy-DEA model for evaluating the effectiveness of urban and suburban public transport (USPT). The fuzzy-DEA model provides insight into the current transport quality provided by USPT and proposes improvements to inefficient systems up to the level of the best possible standards. Such quantification makes long-term stability of USPT possible. Since most of the acquired data are characterized by a high degree of imprecision, subjectivity and uncertainty, fuzzy logic was used to represent them. Fuzzy linguistic descriptors are given in the output parameters of the DEA model. In this way, fuzzy logic enables the exploitation of the tolerance that exists in the imprecision, uncertainty and partial accuracy of the acquired research results.
Pincus, R.; Mlawer, E. J.
2017-12-01
Radiation is a key process in numerical models of the atmosphere. The problem is well understood, and the parameterization of radiation has seen relatively few conceptual advances in the past 15 years. It is nonetheless often the single most expensive component of all physical parameterizations, despite being computed less frequently than other terms. This combination of cost and maturity suggests value in a single radiation parameterization that could be shared across models; devoting effort to a single parameterization might allow fine tuning for efficiency. The challenge lies in coupling this parameterization to many disparate representations of clouds and aerosols. This talk will describe RRTMGP, a new radiation parameterization that seeks to balance efficiency and flexibility. This balance is struck by isolating computational tasks in "kernels" that expose as much fine-grained parallelism as possible. These have simple interfaces and are interoperable across programming languages, so that they might be replaced by alternative implementations in domain-specific languages. Coupling to the host model makes use of object-oriented features of Fortran 2003, minimizing branching within the kernels and the amount of data that must be transferred. We will show accuracy and efficiency results for a globally representative set of atmospheric profiles using a relatively high-resolution spectral discretization.
Directory of Open Access Journals (Sweden)
Hakan Gunes
2016-12-01
Full Text Available This study aims to investigate the cost efficiency of Turkish commercial banks over the restructuring period of the Turkish banking system, which coincides with the 2008 global financial crisis and the 2010 European sovereign debt crisis. To this end, within the stochastic frontier framework, we employ the true fixed effects model, where the unobserved bank heterogeneity is integrated into the inefficiency distribution at the mean level. To select the cost function with the most appropriate inefficiency correlates, we first adopt a search algorithm and then utilize the model averaging approach to verify that our results are not exposed to model selection bias. Overall, our empirical results reveal that the cost efficiencies of Turkish banks have improved over time, with the effects of the 2008 and 2010 crises remaining rather limited. Furthermore, not only the cost efficiency scores but also the impacts of the crises on those scores appear to vary with bank size and ownership structure, in accordance with much of the existing literature.
Armstrong, Jeffrey B.; Simon, Donald L.
2012-01-01
Self-tuning aircraft engine models can be applied for control and health management applications. The self-tuning feature of these models minimizes the mismatch between any given engine and the underlying engineering model describing an engine family. This paper provides details of the construction of a self-tuning engine model centered on a piecewise linear Kalman filter design. Starting from a nonlinear transient aerothermal model, a piecewise linear representation is first extracted. The linearization procedure creates a database of trim vectors and state-space matrices that are subsequently scheduled for interpolation based on engine operating point. A series of steady-state Kalman gains can next be constructed from a reduced-order form of the piecewise linear model. Reduction of the piecewise linear model to an observable dimension with respect to available sensed engine measurements can be achieved using either a subset or an optimal linear combination of "health" parameters, which describe engine performance. The resulting piecewise linear Kalman filter is then implemented for faster-than-real-time processing of sensed engine measurements, generating outputs appropriate for trending engine performance, estimating both measured and unmeasured parameters for control purposes, and performing on-board gas-path fault diagnostics. Computational efficiency is achieved by designing multidimensional interpolation algorithms that exploit the shared scheduling of multiple trim vectors and system matrices. An example application illustrates the accuracy of a self-tuning piecewise linear Kalman filter model when applied to a nonlinear turbofan engine simulation. Additional discussions focus on the issue of transient response accuracy and the advantages of a piecewise linear Kalman filter in the context of validation and verification. The techniques described provide a framework for constructing efficient self-tuning aircraft engine models from complex nonlinear
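The scheduling idea behind a piecewise linear Kalman filter can be sketched with a scalar toy system: linearized dynamics and precomputed steady-state gains are stored in a table, interpolated by operating point, and the interpolated gain corrects the one-step prediction. All coefficients and operating points below are hypothetical, not from the paper's turbofan model.

```python
import bisect

ops   = [0.0, 0.5, 1.0]      # scheduling parameter (e.g. power setting)
a_tab = [0.90, 0.80, 0.70]   # scheduled state-transition coefficients
k_tab = [0.40, 0.45, 0.50]   # scheduled steady-state Kalman gains

def interp(table, op):
    # piecewise linear interpolation of a scheduled quantity
    i = min(bisect.bisect_right(ops, op), len(ops) - 1)
    if i == 0:
        return table[0]
    t = (op - ops[i - 1]) / (ops[i] - ops[i - 1])
    return table[i - 1] + t * (table[i] - table[i - 1])

def step(xhat, y, op):
    a, k = interp(a_tab, op), interp(k_tab, op)
    xpred = a * xhat                 # one-step prediction
    return xpred + k * (y - xpred)   # measurement update (C = 1)

xhat = 0.0
for y, op in [(1.0, 0.25), (1.1, 0.25), (1.2, 0.75)]:
    xhat = step(xhat, y, op)
```

In the full model the scalars become trim vectors and state-space matrices sharing one multidimensional interpolation, which is where the computational efficiency comes from.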
Compilation Of An Econometric Human Resource Efficiency Model For Project Management Best Practices
Directory of Open Access Journals (Sweden)
G. van Zyl
2006-11-01
Full Text Available The aim of the paper is to introduce a human resource efficiency model in order to rank the most important human resource driving forces for project management best practices. The results of the model will demonstrate how the human resource component of project management acts as the primary function to enhance organizational performance, codified through improved logical end-state programmes, work ethics and process contributions. Given the hypothesis that project management best practices involve significant human resource and organizational changes, one would reasonably expect this process to influence and resonate throughout all the dimensions of an organisation.
Directory of Open Access Journals (Sweden)
Liang Tang
2010-01-01
Full Text Available A mathematical model for M/G/1-type queueing networks with multiple user applications and limited resources is established. The goal is to develop a dynamic distributed algorithm for this model, which supports all data traffic as efficiently as possible and makes optimally fair decisions about how to minimize the network performance cost. An online policy gradient optimization algorithm based on a single sample path is provided to avoid suffering from a “curse of dimensionality”. The asymptotic convergence properties of this algorithm are proved. Numerical examples provide valuable insights for bridging mathematical theory with engineering practice.
Energy Technology Data Exchange (ETDEWEB)
Lumb, Matthew P. [The George Washington University, 2121 I Street NW, Washington, DC 20037 (United States); Naval Research Laboratory, Washington, DC 20375 (United States); Steiner, Myles A.; Geisz, John F. [National Renewable Energy Laboratory, Golden, Colorado 80401 (United States); Walters, Robert J. [Naval Research Laboratory, Washington, DC 20375 (United States)
2014-11-21
The analytical drift-diffusion formalism is able to accurately simulate a wide range of solar cell architectures and was recently extended to include those with back surface reflectors. However, as solar cells approach the limits of material quality, photon recycling effects become increasingly important in predicting the behavior of these cells. In particular, the minority carrier diffusion length is significantly affected by the photon recycling, with consequences for the solar cell performance. In this paper, we outline an approach to account for photon recycling in the analytical Hovel model and compare analytical model predictions to GaAs-based experimental devices operating close to the fundamental efficiency limit.
Model based design of efficient power take-off systems for wave energy converters
DEFF Research Database (Denmark)
Hansen, Rico Hjerm; Andersen, Torben Ole; Pedersen, Henrik C.
2011-01-01
The Power Take-Off (PTO) is the core of a Wave Energy Converter (WEC), being the technology that converts wave-induced oscillations from mechanical energy to electricity. The induced oscillations are characterized by being slow, with varying frequency and amplitude. As a result, fluid power is often...... an essential part of the PTO, being the only technology with the required force densities. The focus of this paper is to show the achievable efficiency of a PTO system based on a conventional hydrostatic transmission topology. The design is performed using a model based approach. Generic component models...
An Efficient Hybrid DSMC/MD Algorithm for Accurate Modeling of Micro Gas Flows
Liang, Tengfei
2013-01-01
Aiming at simulating micro gas flows with accurate boundary conditions, an efficient hybrid algorithm is developed by combining the molecular dynamics (MD) method with the direct simulation Monte Carlo (DSMC) method. The efficiency comes from the fact that the MD method is applied only within the gas-wall interaction layer, characterized by the cut-off distance of the gas-solid interaction potential, to resolve accurately the gas-wall interaction process, while the DSMC method is employed in the remaining portion of the flow field to efficiently simulate rarefied gas transport outside the gas-wall interaction layer. A unique feature of the present scheme is that the coupling between the two methods is realized by matching the molecular velocity distribution function at the DSMC/MD interface, so there is no need for one-to-one mapping between an MD gas molecule and a DSMC simulation particle. Further improvement in efficiency is achieved by taking advantage of gas rarefaction inside the gas-wall interaction layer and by employing the "smart-wall model" proposed by Barisik et al. The developed hybrid algorithm is validated on two classical benchmarks, namely the 1-D Fourier thermal problem and the Couette shear flow problem. Both the accuracy and efficiency of the hybrid algorithm are discussed. As an application, the hybrid algorithm is employed to simulate the thermal transpiration coefficient in the free-molecule regime for a system with an atomically smooth surface. The result is utilized to validate the coefficients calculated from the pure DSMC simulation with the Maxwell and Cercignani-Lampis gas-wall interaction models. © 2014 Global-Science Press.
A Calibrated Lumped Element Model for the Prediction of PSJ Actuator Efficiency Performance
Directory of Open Access Journals (Sweden)
Matteo Chiatto
2018-03-01
Full Text Available Among the various active flow control techniques, Plasma Synthetic Jet (PSJ) actuators, or Sparkjets, represent a very promising technology, especially because of their high velocities and short response times. A practical tool, employed for design and manufacturing purposes, is the definition of a low-order, lumped element model (LEM), which is able to predict the dynamic response of the actuator relatively quickly and with reasonable fidelity and accuracy. After a brief description of an innovative lumped model, this work presents an experimental investigation of a home-designed and manufactured PSJ actuator at different frequencies and energy discharges. Particular attention has been paid to the design of the power supply system. A purpose-built Pitot tube allowed the measurement of velocity profiles along the jet radial direction for various energy discharges, as well as the tuning of the lumped model against experimental data, with the total device efficiency taken as a fitting parameter. The best-fit value not only contains information on the actual device efficiency, but also absorbs some modeling and experimental uncertainties, related in part to the measurement technique used.
Efficient Multi-Valued Bounded Model Checking for LTL over Quasi-Boolean Algebras
Andrade, Jefferson O.; Kameyama, Yukiyoshi
Multi-valued Model Checking extends classical, two-valued model checking to multi-valued logic such as Quasi-Boolean logic. The added expressivity is useful in dealing with such concepts as incompleteness and uncertainty in target systems, while it comes with the cost of time and space. Chechik and others proposed an efficient reduction from multi-valued model checking problems to two-valued ones, but to the authors' knowledge, no study was done for multi-valued bounded model checking. In this paper, we propose a novel, efficient algorithm for multi-valued bounded model checking. A notable feature of our algorithm is that it is not based on reduction of multi-values into two-values; instead, it generates a single formula which represents multi-valuedness by a suitable encoding, and asks a standard SAT solver to check its satisfiability. Our experimental results show a significant improvement in the number of variables and clauses and also in execution time compared with the reduction-based one.
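The unrolling that bounded model checking performs can be sketched in the classical two-valued setting with a brute-force search instead of a SAT solver: unroll the transition relation up to a bound k and ask whether a state violating an invariant is reachable. The paper's contribution is the multi-valued analogue of this unrolling, encoded as a single SAT instance; the tiny counter system below is invented for illustration.

```python
def bmc(init, trans, bad, k):
    # breadth-first unrolling of the transition relation up to depth k
    frontier = set(init)
    for depth in range(k + 1):
        if any(bad(s) for s in frontier):
            return depth               # counterexample found at this depth
        frontier = {t for s in frontier for t in trans(s)}
    return None                        # property holds up to bound k

# 3-bit counter that wraps around; "bad" = counter reaches 6
depth = bmc(init={0},
            trans=lambda s: {(s + 1) % 8},
            bad=lambda s: s == 6,
            k=10)                      # a counterexample exists at depth 6
```

A SAT-based BMC replaces the explicit frontier with Boolean state variables copied k times, which is what makes the multi-valued encoding in the paper non-trivial.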
Optimization model for school transportation design based on economic and social efficiency
Energy Technology Data Exchange (ETDEWEB)
Heddebaut, O.; Ciommo, F. di
2016-07-01
The purpose of this paper is to design a model that suggests new planning proposals for school transport, so that greater operational efficiency is achieved. It is a multi-objective optimization problem that minimizes both the cost of busing and the total travel time of all students. The foundation of the model is the planning of bus routes under changes in school starting times, so that the buses are able to perform more than one route. The methodology is based on the School Bus Routing Problem: routes from different schools within a given time window are connected and, within the restrictions of the problem, the system costs are minimized. The proposed model is programmed to be applicable to any generic case. This is a multi-objective problem with several possible solutions, depending on the weight assigned to each of the variables involved: the economic versus the social point of view. The proposed model is therefore helpful for planning school transportation policy, supporting decision making under conditions of economic and social efficiency. The model has been applied to some schools located in an area of Cantabria (Spain), resulting in 71 possible optimal options that reduce the cost of school transport by between 2.7% and 35.1% with respect to the current school transport routes, with different school start times and minimum travel time for students. (Author)
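The economic-versus-social trade-off can be sketched with a weighted-sum scalarization: each candidate plan has a transport cost and a total student travel time, and sweeping the weight w moves between the economic and the social optimum. The plans and numbers below are hypothetical, not the paper's Cantabria data.

```python
plans = {                    # name: (cost per day, total travel time in min)
    "current":     (1000.0, 3000.0),
    "staggered-A": (850.0, 3200.0),   # buses chained across two schools
    "staggered-B": (700.0, 3600.0),
}

def best_plan(w):
    # normalize each objective by its worst value, then combine with weight w
    cmax = max(c for c, t in plans.values())
    tmax = max(t for c, t in plans.values())
    score = lambda p: w * plans[p][0] / cmax + (1 - w) * plans[p][1] / tmax
    return min(plans, key=score)

economic = best_plan(1.0)   # pure cost minimization
social   = best_plan(0.0)   # pure travel-time minimization
```

Sweeping w over a grid enumerates the supported Pareto-optimal plans, which is one way a set such as the paper's 71 optimal options can arise.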
International Nuclear Information System (INIS)
Sewsynker-Sukai, Yeshona; Faloye, Funmilayo; Kana, Evariste Bosco Gueguim
2016-01-01
In view of the looming energy crisis as a result of depleting fossil fuel resources and environmental concerns from greenhouse gas emissions, the need for sustainable energy sources has secured global attention. Research is currently focused towards renewable sources of energy due to their availability and environmental friendliness. Biofuel production like other bioprocesses is controlled by several process parameters including pH, temperature and substrate concentration; however, the improvement of biofuel production requires a robust process model that accurately relates the effect of input variables to the process output. Artificial neural networks (ANNs) have emerged as a tool for modelling complex, non-linear processes. ANNs are applied in the prediction of various processes; they are useful for virtual experimentations and can potentially enhance bioprocess research and development. In this study, recent findings on the application of ANN for the modelling and optimization of biohydrogen, biogas, biodiesel, microbial fuel cell technology and bioethanol are reviewed. In addition, comparative studies on the modelling efficiency of ANN and other techniques such as the response surface methodology are briefly discussed. The review highlights the efficiency of ANNs as a modelling and optimization tool in biofuel process development
Hydrothermal modeling for the efficient design of thermal loading in a nuclear waste repository
International Nuclear Information System (INIS)
Cho, Won-Jin; Kim, Jin-Seop; Choi, Heui-Joo
2014-01-01
Highlights: • Three-dimensional hydrothermal modeling of a HLW repository is performed. • The model reduces the peak temperature in the repository by about 10 °C. • Decreasing the tunnel distance is more efficient for improving the disposal density. • The EDZ surrounding the deposition hole increases the peak temperature. • The peak temperature for the double-layer repository remains below the limit. - Abstract: A thermal analysis of a geological repository for nuclear waste using a three-dimensional hydrothermal model is performed. The hydrothermal model reduces the maximum peak temperature in the repository by about 10 °C compared to the heat conduction model with constant thermal conductivities. Decreasing the tunnel distance is more efficient than decreasing the deposition hole spacing for improving the disposal density under a given thermal load. The annular excavation damaged zone surrounding the deposition hole has a considerable effect on the peak temperature. The possibility of a double-layer repository is analyzed from the viewpoint of the thermal constraints of the repository. The maximum peak temperature for the double-layer repository is slightly higher than that for the single-layer repository, but remains below the temperature limit
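The heat-conduction baseline that the hydrothermal model is compared against can be sketched in its simplest form: 1D transient conduction with constant properties, solved by explicit finite differences. This is a minimal sketch with invented numbers; the paper's model is 3D and additionally couples groundwater flow.

```python
def conduct_1d(n=21, alpha=1e-6, dx=0.05, dt=600.0, steps=500,
               t_hot=100.0, t0=20.0):
    # explicit FTCS scheme; stability requires r = alpha*dt/dx^2 <= 0.5
    r = alpha * dt / dx ** 2
    assert r <= 0.5
    T = [t0] * n
    T[0] = t_hot                    # fixed-temperature (Dirichlet) ends
    for _ in range(steps):
        Tn = T[:]
        for i in range(1, n - 1):
            Tn[i] = T[i] + r * (T[i + 1] - 2 * T[i] + T[i - 1])
        T = Tn
    return T

T = conduct_1d()
# the temperature profile decays monotonically away from the heated end
```

The same stencil in 3D, with temperature-dependent (or flow-dependent) conductivity, is the kind of computation whose result the 10 °C peak-temperature reduction refers to.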
International Nuclear Information System (INIS)
Arabi, Behrouz; Munisamy, Susila; Emrouznejad, Ali; Shadman, Foroogh
2014-01-01
Measuring variations in efficiency and its extension, eco-efficiency, during a restructuring period in different industries has always been a point of interest for regulators and policy makers. This paper assesses the impacts of restructuring of procurement in the Iranian power industry on the performance of power plants. We introduce a new slacks-based model for Malmquist–Luenberger (ML) Index measurement and apply it to the power plants to calculate the efficiency, eco-efficiency, and technological changes over the 8-year period (2003–2010) of restructuring in the power industry. The results reveal that although the restructuring had different effects on the individual power plants, the overall growth in the eco-efficiency of the sector was mainly due to advances in pure technology. We also assess the correlation between efficiency and eco-efficiency of the power plants, which indicates a close relationship between these two measures, thus lending support to the incorporation of environmental factors in efficiency analysis. - Highlights: • We introduce a new slacks-based model incorporating bad outputs to measure eco-efficiency. • The eco-efficiency change of power plants is measured over a restructuring period. • An improvement in eco-efficiency is revealed. • A close relationship between efficiency and eco-efficiency is shown
Efficient and robust estimation for longitudinal mixed models for binary data
DEFF Research Database (Denmark)
Holst, René
2009-01-01
This paper proposes a longitudinal mixed model for binary data. The model extends the classical Poisson trick, in which a binomial regression is fitted by switching to a Poisson framework. A recent estimating equations method for generalized linear longitudinal mixed models, called GEEP, is used...... as a vehicle for fitting the conditional Poisson regressions, given a latent process of serial correlated Tweedie variables. The regression parameters are estimated using a quasi-score method, whereas the dispersion and correlation parameters are estimated by use of bias-corrected Pearson-type estimating...... equations, using second moments only. Random effects are predicted by BLUPs. The method provides a computationally efficient and robust approach to the estimation of longitudinal clustered binary data and accommodates linear and non-linear models. A simulation study is used for validation and finally...
Takács, Gergely
2012-01-01
Real-time model predictive controller (MPC) implementation in active vibration control (AVC) is often rendered difficult by fast sampling speeds and extensive actuator-deformation asymmetry. If the control of lightly damped mechanical structures is assumed, the region of attraction containing the set of allowable initial conditions requires a large prediction horizon, making the already computationally demanding on-line process even more complex. Model Predictive Vibration Control provides insight into the predictive control of lightly damped vibrating structures by exploring computationally efficient algorithms which are capable of low frequency vibration control with guaranteed stability and constraint feasibility. In addition to a theoretical primer on active vibration damping and model predictive control, Model Predictive Vibration Control provides a guide through the necessary steps in understanding the founding ideas of predictive control applied in AVC such as: · the implementation of ...
Directory of Open Access Journals (Sweden)
Sie Long Kek
2015-01-01
Full Text Available A computational approach is proposed for solving the discrete time nonlinear stochastic optimal control problem. Our aim is to obtain the optimal output solution of the original optimal control problem through solving the simplified model-based optimal control problem iteratively. In our approach, the adjusted parameters are introduced into the model used such that the differences between the real system and the model used can be computed. Particularly, system optimization and parameter estimation are integrated interactively. On the other hand, the output is measured from the real plant and is fed back into the parameter estimation problem to establish a matching scheme. During the calculation procedure, the iterative solution is updated in order to approximate the true optimal solution of the original optimal control problem despite model-reality differences. For illustration, a wastewater treatment problem is studied and the results show the efficiency of the approach proposed.
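The integrated system-optimization / parameter-estimation loop can be sketched with a scalar toy problem: a simplified model with an adjustable parameter is repeatedly (i) corrected so that its output matches the measured plant output at the current input, and (ii) re-optimized. The plant, model and target below are invented; the paper applies the same idea to a wastewater treatment problem.

```python
def plant(u):                 # the "real" system, unknown to the optimizer
    return 2.0 * u + 0.3 * u * u

def model(u, a):              # simplified model with adjusted parameter a
    return 2.0 * u + a

def optimize(a, target=4.0):
    # exact minimizer of (model(u, a) - target)^2 for the linear model
    return (target - a) / 2.0

u, a = 0.0, 0.0
for _ in range(30):
    a = plant(u) - 2.0 * u    # parameter estimation: match outputs at u
    u = optimize(a)           # system optimization on the corrected model
# at convergence the *plant* output hits the target despite model mismatch
```

The fixed point of this iteration satisfies plant(u) = target exactly, even though the model never learns the plant's quadratic term; that is the model-reality-difference principle the abstract describes.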
International Nuclear Information System (INIS)
Uyterlinde, M.A.; Rijkers, F.A.M.
1999-12-01
The main objective of the energy conservation model REDUCE (Reduction of Energy Demand by Utilization of Conservation of Energy) is the evaluation of the effectiveness of economic, financial, institutional, and regulatory measures for improving the rational use of energy in end-use sectors. This report presents the results of additional model development activities, partly based on the first experiences in a previous project. Energy efficiency indicators have been added as an extra tool for output analysis in REDUCE. The methodology is described and some examples are given. The model has also been extended with a method for modelling the effect of technical development on production costs, by means of an experience curve. Finally, the report provides a user's guide, describing in more detail the input data specification as well as all menus and buttons. 19 refs
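The experience-curve mechanism mentioned above has a standard closed form: each doubling of cumulative production multiplies unit cost by the progress ratio. The numbers below are illustrative, not REDUCE model inputs.

```python
import math

def unit_cost(c0, cumulative, progress_ratio):
    # experience curve: C(n) = C0 * n ** log2(progress_ratio),
    # so C(2n) = progress_ratio * C(n) for any n
    return c0 * cumulative ** math.log2(progress_ratio)

c1 = unit_cost(100.0, 1, 0.8)   # initial unit cost
c2 = unit_cost(100.0, 2, 0.8)   # one doubling -> 20% cheaper
c4 = unit_cost(100.0, 4, 0.8)   # two doublings
```

A progress ratio of 0.8 ("80% learning curve") is a common assumption for maturing energy technologies; the model's role is to feed cumulative deployment into such a curve.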
Energy Efficiency Modelling of Residential Air Source Heat Pump Water Heater
Directory of Open Access Journals (Sweden)
Cong Toan Tran
2016-03-01
Full Text Available The heat pump water heater is one of the most energy-efficient technologies for heating water for household use. The present work proposes a simplified model of the coefficient of performance and examines its predictive capability. The model is based on polynomial functions whose variables are temperatures and whose coefficients are derived from the Australian standard test data using regression techniques. The model makes it possible to estimate the coefficient of performance of the same heat pump water heater under other test standards (i.e. the US, Japanese, European and Korean standards). The resulting estimates over a heat-up phase and a full test cycle including a draw-off pattern are in close agreement with the measured data. The model thus allows manufacturers to avoid carrying out physical tests for some standards and to reduce product cost. The limitations of the proposed methodology are also discussed.
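The approach can be sketched as a least-squares fit of COP against the relevant temperatures, then evaluating the fitted polynomial at another standard's test temperatures. Below, a minimal linear-in-coefficients model COP = b0 + b1*T_air + b2*T_water is fitted via the normal equations; the test points and COP values are invented for illustration, not Australian standard data.

```python
def fit_lsq(X, y):
    # solve the normal equations (X^T X) b = X^T y by Gaussian elimination
    n = len(X[0])
    A = [[sum(X[k][i] * X[k][j] for k in range(len(X))) for j in range(n)]
         for i in range(n)]
    b = [sum(X[k][i] * y[k] for k in range(len(X))) for i in range(n)]
    for i in range(n):                      # forward elimination
        p = A[i][i]
        for j in range(i + 1, n):
            f = A[j][i] / p
            A[j] = [a - f * c for a, c in zip(A[j], A[i])]
            b[j] -= f * b[i]
    coef = [0.0] * n
    for i in reversed(range(n)):            # back substitution
        s = sum(A[i][j] * coef[j] for j in range(i + 1, n))
        coef[i] = (b[i] - s) / A[i][i]
    return coef

# (T_air, T_water, measured COP) points, e.g. from one standard's tests
data = [(7, 35, 3.20), (7, 45, 2.70), (20, 35, 3.98),
        (20, 45, 3.48), (2, 45, 2.40)]
X = [[1.0, ta, tw] for ta, tw, _ in data]
y = [cop for _, _, cop in data]
b0, b1, b2 = fit_lsq(X, y)
predict = lambda ta, tw: b0 + b1 * ta + b2 * tw   # other standard's points
```

Higher-degree polynomial terms are added the same way, as extra columns of X.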
Allen, R J; Rieger, T R; Musante, C J
2016-03-01
Quantitative systems pharmacology models mechanistically describe a biological system and the effect of drug treatment on system behavior. Because these models rarely are identifiable from the available data, the uncertainty in physiological parameters may be sampled to create alternative parameterizations of the model, sometimes termed "virtual patients." In order to reproduce the statistics of a clinical population, virtual patients are often weighted to form a virtual population that reflects the baseline characteristics of the clinical cohort. Here we introduce a novel technique to efficiently generate virtual patients and, from this ensemble, demonstrate how to select a virtual population that matches the observed data without the need for weighting. This approach improves confidence in model predictions by mitigating the risk that spurious virtual patients become overrepresented in virtual populations.
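The selection idea can be sketched with acceptance sampling: virtual patients are drawn from a broad "plausible" distribution, then each is accepted with probability proportional to the target (observed) density, so the accepted set matches the clinical statistics without per-patient weights. The biomarker, distributions and numbers are illustrative assumptions, not the paper's method details.

```python
import math
import random

random.seed(1)

def target_pdf(x, mu=5.6, sd=0.4):
    # unnormalized density of the observed clinical distribution
    return math.exp(-0.5 * ((x - mu) / sd) ** 2)

lo, hi, pmax = 4.0, 8.0, 1.0      # plausible biomarker range, density bound
virtual_patients = [random.uniform(lo, hi) for _ in range(20000)]
population = [x for x in virtual_patients
              if random.random() < target_pdf(x) / pmax]  # accept/reject

mean = sum(population) / len(population)   # matches the clinical mean
```

Because acceptance replaces weighting, no single virtual patient can dominate the population, which is the overrepresentation risk the abstract mentions.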
Efficient Symmetry Reduction and the Use of State Symmetries for Symbolic Model Checking
Directory of Open Access Journals (Sweden)
Christian Appold
2010-06-01
Full Text Available One technique to reduce the state-space explosion problem in temporal logic model checking is symmetry reduction. The combination of symmetry reduction and symbolic model checking using BDDs long suffered from the prohibitively large BDD for the orbit relation. Dynamic symmetry reduction calculates representatives of equivalence classes of states dynamically and thus avoids the construction of the orbit relation. In this paper, we present a new efficient model checking algorithm based on dynamic symmetry reduction. Our experiments show that the algorithm is very fast and allows the verification of larger systems. We additionally implemented the use of state symmetries for symbolic symmetry reduction. To our knowledge, we are the first to investigate state symmetries in combination with BDD-based symbolic model checking.
Reliable and efficient solution of genome-scale models of Metabolism and macromolecular Expression
DEFF Research Database (Denmark)
Ma, Ding; Yang, Laurence; Fleming, Ronan M. T.
2017-01-01
Constraint-Based Reconstruction and Analysis (COBRA) is currently the only methodology that permits integrated modeling of Metabolism and macromolecular Expression (ME) at genome-scale. Linear optimization computes steady-state flux solutions to ME models, but flux values are spread over many orders of magnitude. Data values also have greatly varying magnitudes. Standard double-precision solvers may return inaccurate solutions or report that no solution exists. Exact simplex solvers based on rational arithmetic require a near-optimal warm start to be practical on large problems (current ME models have 70,000 constraints and variables and will grow larger). We have developed a quadruple-precision version of our linear and nonlinear optimizer MINOS, and a solution procedure (DQQ) involving Double and Quad MINOS that achieves reliability and efficiency for ME models and other challenging...
An Efficient Technique for Bayesian Modelling of Family Data Using the BUGS software
Directory of Open Access Journals (Sweden)
Harold T Bae
2014-11-01
Full Text Available Linear mixed models have become a popular tool to analyze continuous data from family-based designs by using random effects that model the correlation of subjects from the same family. However, mixed models for family data are challenging to implement with the BUGS (Bayesian inference Using Gibbs Sampling software because of the high-dimensional covariance matrix of the random effects. This paper describes an efficient parameterization that utilizes the singular value decomposition of the covariance matrix of random effects, includes the BUGS code for such implementation, and extends the parameterization to generalized linear mixed models. The implementation is evaluated using simulated data and an example from a large family-based study is presented with a comparison to other existing methods.
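The reparameterization can be sketched numerically for the smallest case: a 2x2 exchangeable family covariance Sigma = [[1, 0.8], [0.8, 1]] has eigenvalues 1.8 and 0.2 with orthonormal eigenvectors (1,1)/sqrt(2) and (1,-1)/sqrt(2). Sampling independent effects in that rotated basis and mapping back reproduces the correlated family effects, which is what makes the Gibbs updates cheap. This is an illustrative Monte Carlo check with invented numbers, not BUGS code.

```python
import math
import random

random.seed(7)
l1, l2 = 1.8, 0.2          # eigenvalues (singular values) of Sigma
s = 1 / math.sqrt(2)       # entries of the orthonormal eigenvectors

def family_effects():
    # independent effects in the rotated (decorrelated) basis ...
    u1 = random.gauss(0, math.sqrt(l1))
    u2 = random.gauss(0, math.sqrt(l2))
    # ... rotated back: b = U u gives the correlated family effects
    return (s * u1 + s * u2, s * u1 - s * u2)

pairs = [family_effects() for _ in range(200000)]
var1 = sum(b1 * b1 for b1, _ in pairs) / len(pairs)   # ~ 1.0
cov = sum(b1 * b2 for b1, b2 in pairs) / len(pairs)   # ~ 0.8
```

For an n-member pedigree the same trick uses the full singular value decomposition of the covariance matrix, replacing one n-dimensional multivariate normal prior by n univariate ones.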
Efficient Modeling of Laser-Plasma Accelerators with INF&RNO
Energy Technology Data Exchange (ETDEWEB)
Benedetti, C.; Schroeder, C. B.; Esarey, E.; Geddes, C. G. R.; Leemans, W. P.
2010-06-01
The numerical modeling code INF&RNO (INtegrated Fluid & paRticle simulatioN cOde, pronounced "inferno") is presented. INF&RNO is an efficient 2D cylindrical code to model the interaction of a short laser pulse with an underdense plasma. The code is based on an envelope model for the laser, while either a PIC or a fluid description can be used for the plasma. The effect of the laser pulse on the plasma is modeled with the time-averaged ponderomotive force. These and other features allow for a speedup of 2-4 orders of magnitude compared to standard full PIC simulations while still retaining physical fidelity. The code has been benchmarked against analytical solutions and 3D PIC simulations, and here a set of validation tests together with a discussion of the performance is presented.
Crude oil market efficiency and modeling. Insights from the multiscaling autocorrelation pattern
International Nuclear Information System (INIS)
Alvarez-Ramirez, Jose; Alvarez, Jesus; Solis, Ricardo
2010-01-01
Empirical research on market inefficiencies focuses on the detection of autocorrelations in price time series. In the case of crude oil markets, statistical support is claimed for weak efficiency over a wide range of time-scales. However, the results are still controversial, since theoretical arguments point to deviations from efficiency as prices tend to revert towards an equilibrium path. This paper studies the efficiency of crude oil markets by using lagged detrended fluctuation analysis (DFA) to detect delay effects in price autocorrelations, quantified in terms of a multiscaling Hurst exponent (i.e., autocorrelations are dependent on the time scale). Results based on spot price data for the period 1986-2009 indicate important deviations from efficiency associated with lagged autocorrelations, so imposing the random walk for crude oil prices has pronounced costs for forecasting. Evidence in favor of price reversion to a continuously evolving mean underscores the importance of adequately incorporating delay effects and multiscaling behavior in the modeling of crude oil price dynamics. (author)
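The core DFA computation can be sketched as follows: integrate the mean-centered series into a profile, detrend it linearly within boxes of size n, and read the scaling exponent off the slope of log F(n) versus log n. This is a minimal unlagged DFA-1 sketch on simulated white-noise "returns" (where the exponent should be near 0.5, the weak-efficiency benchmark); the paper's lagged, multiscaling variant extends this.

```python
import math
import random

def dfa_exponent(x, box_sizes):
    # profile: cumulative sum of the mean-centered series
    m = sum(x) / len(x)
    y, c = [], 0.0
    for v in x:
        c += v - m
        y.append(c)
    logs = []
    for n in box_sizes:
        f2, nbox = 0.0, len(y) // n
        for b in range(nbox):
            seg = y[b * n:(b + 1) * n]
            t = range(n)
            # least-squares line fit for local detrending
            tm, sm = (n - 1) / 2, sum(seg) / n
            beta = (sum((ti - tm) * (si - sm) for ti, si in zip(t, seg))
                    / sum((ti - tm) ** 2 for ti in t))
            alpha = sm - beta * tm
            f2 += sum((si - (alpha + beta * ti)) ** 2
                      for ti, si in zip(t, seg)) / n
        logs.append((math.log(n), math.log(math.sqrt(f2 / nbox))))
    # the DFA exponent is the slope of log F(n) against log n
    lx = [a for a, _ in logs]
    ly = [b for _, b in logs]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    return (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
            / sum((a - mx) ** 2 for a in lx))

random.seed(0)
returns = [random.gauss(0, 1) for _ in range(4000)]
h = dfa_exponent(returns, [8, 16, 32, 64, 128])  # near 0.5 for white noise
```

Persistent (trending) series push the exponent above 0.5 and mean-reverting ones below it; the lagged variant in the paper applies this scale by scale to delayed autocorrelations.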
Crude oil market efficiency and modeling. Insights from the multiscaling autocorrelation pattern
Energy Technology Data Exchange (ETDEWEB)
Alvarez-Ramirez, Jose [Departamento de Ingenieria de Procesos e Hidraulica, Universidad Autonoma Metropolitana-Iztapalapa, Apartado Postal 55-534, Mexico D.F., 09340 (Mexico); Departamento de Economia, Universidad Autonoma Metropolitana-Iztapalapa, Apartado Postal 55-534, Mexico D.F., 09340 (Mexico); Alvarez, Jesus [Departamento de Ingenieria de Procesos e Hidraulica, Universidad Autonoma Metropolitana-Iztapalapa, Apartado Postal 55-534, Mexico D.F., 09340 (Mexico); Solis, Ricardo [Departamento de Economia, Universidad Autonoma Metropolitana-Iztapalapa, Apartado Postal 55-534, Mexico D.F., 09340 (Mexico)
2010-09-15
Empirical research on market inefficiencies focuses on the detection of autocorrelations in price time series. In the case of crude oil markets, statistical support is claimed for weak efficiency over a wide range of time-scales. However, the results are still controversial, since theoretical arguments point to deviations from efficiency as prices tend to revert towards an equilibrium path. This paper studies the efficiency of crude oil markets by using lagged detrended fluctuation analysis (DFA) to detect delay effects in price autocorrelations, quantified in terms of a multiscaling Hurst exponent (i.e., autocorrelations are dependent on the time scale). Results based on spot price data for the period 1986-2009 indicate important deviations from efficiency associated with lagged autocorrelations, so imposing the random walk for crude oil prices has pronounced costs for forecasting. Evidence in favor of price reversion to a continuously evolving mean underscores the importance of adequately incorporating delay effects and multiscaling behavior in the modeling of crude oil price dynamics. (author)
A tool for efficient, model-independent management optimization under uncertainty
White, Jeremy; Fienen, Michael N.; Barlow, Paul M.; Welter, Dave E.
2018-01-01
To fill a need for risk-based environmental management optimization, we have developed PESTPP-OPT, a model-independent tool for resource management optimization under uncertainty. PESTPP-OPT solves a sequential linear programming (SLP) problem and optionally implements efficient, “on-the-fly” (without user intervention) first-order, second-moment (FOSM) uncertainty techniques to estimate model-derived constraint uncertainty. Combined with a user-specified risk value, the constraint uncertainty estimates are used to form chance constraints for the SLP solution process, so that any optimal solution includes contributions from model input and observation uncertainty. In this way, the modeling analysis yields a “single answer” that accounts for uncertainty. PESTPP-OPT uses the familiar PEST/PEST++ model interface protocols, which makes it widely applicable to many modeling analyses. The use of PESTPP-OPT is demonstrated with a synthetic, integrated surface-water/groundwater model. The function and implications of chance constraints for this synthetic model are discussed.
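The chance-constraint construction can be illustrated with a minimal FOSM sketch. All numbers (sensitivities, parameter covariance, bound, risk value) are invented for illustration and do not reflect PESTPP-OPT's interface: the constraint's variance is propagated from parameter uncertainty, and the deterministic bound is tightened by the corresponding standard-normal quantile.

```python
import numpy as np
from statistics import NormalDist

# FOSM propagation: var(g) ~= J C J^T for a model-derived constraint g(p),
# with J = dg/dp at the current solution and C the parameter covariance.
J = np.array([0.8, -0.3])           # hypothetical sensitivities
C = np.diag([0.04, 0.09])           # hypothetical parameter covariance
sigma = float(np.sqrt(J @ C @ J))   # constraint standard deviation

risk = 0.95                         # user-specified reliability level
z = NormalDist().inv_cdf(risk)      # one-sided standard-normal quantile
g_limit = 10.0                      # deterministic constraint bound
chance_limit = g_limit - z * sigma  # bound tightened into a chance constraint
# The SLP then enforces g(p) <= chance_limit instead of g_limit, so the
# optimum is feasible with (approximately) the requested probability.
```

Raising the risk value tightens the bound further, trading optimality for reliability.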
Stochastic modeling of soundtrack for efficient segmentation and indexing of video
Naphade, Milind R.; Huang, Thomas S.
1999-12-01
Tools for efficient and intelligent management of digital content are essential for digital video data management. An extremely challenging research area in this context is that of multimedia analysis and understanding. The capabilities of audio analysis in particular for video data management are yet to be fully exploited. We present a novel scheme for indexing and segmentation of video by analyzing the audio track. This analysis is then applied to the segmentation and indexing of movies. We build models for some interesting events in the motion picture soundtrack. The models built include music, human speech and silence. We propose the use of hidden Markov models to model the dynamics of the soundtrack and detect audio events. Using these models we segment and index the soundtrack. A practical problem in motion picture soundtracks is that the audio in the track is of a composite nature, corresponding to the mixing of sounds from different sources. Speech in the foreground and music in the background are common examples. The coexistence of multiple individual audio sources forces us to model such events explicitly. Experiments reveal that explicit modeling gives better results than modeling individual audio events separately.
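The HMM decoding behind this kind of audio-event labelling can be sketched with a minimal Viterbi decoder. The three states and the toy emission/transition probabilities below are illustrative, not models trained on soundtrack features:

```python
import numpy as np

def viterbi(log_emis, log_trans, log_init):
    """Most likely state sequence given per-frame emission log-likelihoods."""
    T, S = log_emis.shape
    delta = log_init + log_emis[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans      # scores[i, j]: state i -> j
        back[t] = np.argmax(scores, axis=0)
        delta = scores[back[t], np.arange(S)] + log_emis[t]
    path = [int(np.argmax(delta))]
    for t in range(T - 1, 0, -1):                # backtrack the best path
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy soundtrack labeller: states 0 = silence, 1 = speech, 2 = music.
log_emis = np.log(np.array([[0.90, 0.05, 0.05],
                            [0.80, 0.10, 0.10],
                            [0.10, 0.80, 0.10],
                            [0.05, 0.90, 0.05]]))
log_trans = np.log(np.full((3, 3), 0.1) + 0.7 * np.eye(3))  # sticky states
log_init = np.full(3, np.log(1 / 3))
path = viterbi(log_emis, log_trans, log_init)
# path segments the four frames into a silence run followed by a speech run
```

In a real system the emission scores would come from per-state Gaussian mixtures over audio features, and composite events (speech over music) would get their own explicit states, as the abstract argues.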
Liu, Y.; Pau, G. S. H.; Finsterle, S.
2015-12-01
Parameter inversion involves inferring model parameter values from sparse observations of some observables. To infer the posterior probability distributions of the parameters, Markov chain Monte Carlo (MCMC) methods are typically used. However, the large number of forward simulations needed and the limited computational resources available restrict the complexity of the hydrological model we can use in these methods. In view of this, we studied the implicit sampling (IS) method, an efficient importance sampling technique that generates samples in the high-probability region of the posterior distribution and thus reduces the number of forward simulations that need to be run. For a pilot-point inversion of a heterogeneous permeability field based on a synthetic ponded infiltration experiment simulated with TOUGH2 (a subsurface modeling code), we showed that IS with a linear map provides an accurate Bayesian description of the parameterized permeability field at the pilot points with only approximately 500 forward simulations. We further studied the use of surrogate models to improve the computational efficiency of parameter inversion. We implemented two reduced-order models (ROMs) for the TOUGH2 forward model. One is based on polynomial chaos expansion (PCE), whose coefficients are obtained using the sparse Bayesian learning technique to mitigate the "curse of dimensionality" of the PCE terms. The other is Gaussian process regression (GPR), for which different covariance, likelihood and inference models are considered. Preliminary results indicate that ROMs constructed based on the prior parameter space perform poorly. It is thus impractical to replace the hydrological model by a ROM directly in an MCMC method. However, the IS method can work with a ROM constructed for parameters in the close vicinity of the maximum a posteriori probability (MAP) estimate. We will discuss the accuracy and computational efficiency of using ROMs in the implicit sampling procedure.
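The implicit-sampling idea can be sketched on a one-parameter toy problem (the negative log-posterior, proposal width and sample count below are all illustrative, not the TOUGH2 inversion): draw from a Gaussian proposal centred on the MAP estimate and attach importance weights, so every sample already lands in the high-probability region instead of random-walking toward it.

```python
import numpy as np

def neg_log_post(p):
    """Toy negative log-posterior with its minimum (MAP) at p = 2."""
    return 0.5 * ((p - 2.0) / 0.3) ** 2 + 0.1 * (p - 2.0) ** 4

grid = np.linspace(0.0, 4.0, 4001)
p_map = float(grid[np.argmin(neg_log_post(grid))])   # MAP estimate

# Gaussian proposal centred on the MAP; importance weights correct for
# the mismatch between the proposal and the true posterior.
rng = np.random.default_rng(3)
draws = p_map + 0.3 * rng.standard_normal(500)
logw = -neg_log_post(draws) + 0.5 * ((draws - p_map) / 0.3) ** 2
w = np.exp(logw - logw.max())
post_mean = float(np.sum(w * draws) / np.sum(w))     # weighted posterior mean
```

Each weight evaluation above stands in for one forward simulation; this is where a ROM built near the MAP, as the abstract proposes, would be substituted.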
Personalization of models with many model parameters : an efficient sensitivity analysis approach
Donders, W.P.; Huberts, W.; van de Vosse, F.N.; Delhaas, T.
2015-01-01
Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of
Service-Aware Clustering: An Energy-Efficient Model for the Internet-of-Things.
Bagula, Antoine; Abidoye, Ademola Philip; Zodi, Guy-Alain Lusilao
2015-12-23
Current generation wireless sensor routing algorithms and protocols have been designed based on a myopic routing approach, where the motes are assumed to have the same sensing and communication capabilities. Myopic routing is not a natural fit for the IoT, as it may lead to energy imbalance and subsequent short-lived sensor networks, routing the sensor readings over the most service-intensive sensor nodes, while leaving the least active nodes idle. This paper revisits the issue of energy efficiency in sensor networks to propose a clustering model where sensor devices' service delivery is mapped into an energy awareness model, used to design a clustering algorithm that finds service-aware clustering (SAC) configurations in IoT settings. The performance evaluation reveals the relative energy efficiency of the proposed SAC algorithm compared to related routing algorithms in terms of energy consumption, the sensor nodes' life span and its traffic engineering efficiency in terms of throughput and delay. These include the well-known low energy adaptive clustering hierarchy (LEACH) and LEACH-centralized (LEACH-C) algorithms, as well as the most recent algorithms, such as DECSA and MOCRN.
Service-Aware Clustering: An Energy-Efficient Model for the Internet-of-Things
Directory of Open Access Journals (Sweden)
Antoine Bagula
2015-12-01
Full Text Available Current generation wireless sensor routing algorithms and protocols have been designed based on a myopic routing approach, where the motes are assumed to have the same sensing and communication capabilities. Myopic routing is not a natural fit for the IoT, as it may lead to energy imbalance and subsequent short-lived sensor networks, routing the sensor readings over the most service-intensive sensor nodes, while leaving the least active nodes idle. This paper revisits the issue of energy efficiency in sensor networks to propose a clustering model where sensor devices’ service delivery is mapped into an energy awareness model, used to design a clustering algorithm that finds service-aware clustering (SAC) configurations in IoT settings. The performance evaluation reveals the relative energy efficiency of the proposed SAC algorithm compared to related routing algorithms in terms of energy consumption, the sensor nodes’ life span and its traffic engineering efficiency in terms of throughput and delay. These include the well-known low energy adaptive clustering hierarchy (LEACH) and LEACH-centralized (LEACH-C) algorithms, as well as the most recent algorithms, such as DECSA and MOCRN.
A model for the design and evaluation of energy efficient northern housing
Energy Technology Data Exchange (ETDEWEB)
Semple, B. [Canada Mortgage and Housing Corp., Ottawa, ON (Canada); Dragnea, M. [Nunavut Housing Corp., Iqaluit, NU (Canada)
2007-07-01
The Nunavut Housing Corporation (NHC) is responsible for improving the housing and living conditions in communities throughout Nunavut. In response to the current housing shortage, NHC recently initiated a joint project with Canada Mortgage and Housing (CMHC) to develop a sustainable solution to housing in the Arctic. The result was a new energy efficient row house design called the NHC 5-Plex which has become a model for a culturally responsible housing design that meets the needs of Inuit families. This paper presented the design and development process undertaken for this project. It also described how the integration of new construction materials and products designed for cold climates, strategic project management, and cooperation between the NHC and CMHC were used to address the goals of energy efficiency and affordability for Arctic housing. Energy conservation calculations were presented along with the comparative and cost analysis used for the project. The energy modelling of the building performance was undertaken through a technical and design review of windows and insulating materials for energy efficiency, flame ratings, combustibility, sound transmission and water permeability. The heating and ventilation strategies were also evaluated in an effort to address issues of indoor air quality. It was concluded that in order to address the challenge of rising energy costs, the R value in the NHC 5-plex should be increased and the energy savings should be monitored. 3 refs., 6 figs.
An Efficient Interactive Model for On-Demand Sensing-As-A-Services of Sensor-Cloud
Directory of Open Access Journals (Sweden)
Thanh Dinh
2016-06-01
Full Text Available This paper proposes an efficient interactive model that enables the sensor-cloud to provide on-demand sensing services for multiple applications with different requirements at the same time. The interactive model is designed for both the cloud and sensor nodes to optimize the resource consumption of physical sensors, as well as the bandwidth consumption of sensing traffic. In the model, the sensor-cloud plays a key role in aggregating application requests to minimize the workloads required for constrained physical nodes while guaranteeing that the requirements of all applications are satisfied. Physical sensor nodes perform their sensing under the guidance of the sensor-cloud. Based on the interactions with the sensor-cloud, physical sensor nodes adapt their scheduling accordingly to minimize their energy consumption. Comprehensive experimental results show that our proposed system achieves a significant improvement in terms of the energy consumption of physical sensors, the bandwidth consumption from the sink node to the sensor-cloud, the packet delivery latency, reliability and scalability, compared to current approaches. Based on the obtained results, we discuss the economic benefits and how the proposed system enables a win-win model in the sensor-cloud.
Can diversity in root architecture explain plant water use efficiency? A modeling study.
Tron, Stefania; Bodner, Gernot; Laio, Francesco; Ridolfi, Luca; Leitner, Daniel
2015-09-24
Drought stress is a dominant constraint to crop production. Breeding crops with root systems adapted for effective uptake of water represents a novel strategy to increase crop drought resistance. Due to the complex interaction between root traits and the high diversity of hydrological conditions, modeling provides important information for trait-based selection. In this work we use a root architecture model combined with a soil-hydrological model to analyze whether there is a root system ideotype of general adaptation to drought, or whether the water uptake efficiency of a root system is a function of the specific hydrological conditions. This was done by modeling transpiration of 48 root architectures in 16 drought scenarios with distinct soil textures, rainfall distributions, and initial soil moisture availability. We find that the efficiency in water uptake of a root architecture is strictly dependent on the hydrological scenario. Even dense and deep root systems are not superior in water uptake under all hydrological scenarios. Our results demonstrate that a mere architectural description is insufficient to find root systems of optimum functionality. We find that in environments with sufficient rainfall before the growing season, root depth represents the key trait for the exploration of stored water, especially in fine soils. Root density, instead, especially near the soil surface, becomes the most relevant trait for exploiting soil moisture when plant water supply is mainly provided by rainfall events during root system development. We therefore conclude that trait-based root breeding has to consider root systems with specific adaptation to the hydrology of the target environment.
The development of furrower model blade to paddlewheel aerator for improving aeration efficiency
Bahri, Samsul; Praeko Agus Setiawan, Radite; Hermawan, Wawan; Zairin Junior, Muhammad
2018-05-01
The success of intensive aquaculture is strongly influenced by the farmers' ability to counteract the deterioration of water quality, the main problem being low dissolved oxygen, which is remedied through aeration. The aerator most widely used in pond farming is the paddlewheel aerator, owing to its effective aeration mechanism and usable driving power. However, this aerator still has low aeration performance, so the operational cost of aeration in aquaculture remains high. Until now, efforts to improve aeration performance have relied on two-dimensional blade designs; these do not give optimal results, because the power requirement for aeration grows in direct proportion to the increase of the aeration rate. The aim of this research is to develop a three-dimensional furrowed-blade model. The furrowed blades were designed with 1.6 cm diameter holes, a 45º vertical blade angle and a 30º horizontal angle. The furrowed blades performed optimally at a submerged depth of 9 cm, with an electrical power consumption of 567.54 W and a splash coverage volume of 4.322 m3. The standard aeration efficiency is 2.72 kg O2 kWh-1. The furrowed-blade model can thus improve the aeration efficiency of the paddlewheel aerator.
Ariyarit, Atthaphon; Sugiura, Masahiko; Tanabe, Yasutada; Kanazaki, Masahiro
2018-06-01
A multi-fidelity optimization technique by an efficient global optimization process using a hybrid surrogate model is investigated for solving real-world design problems. The model constructs the local deviation using the kriging method and the global model using a radial basis function. The expected improvement is computed to decide additional samples that can improve the model. The approach was first investigated by solving mathematical test problems. The results were compared with optimization results from an ordinary kriging method and a co-kriging method, and the proposed method produced the best solution. The proposed method was also applied to aerodynamic design optimization of helicopter blades to obtain the maximum blade efficiency. The optimal shape obtained by the proposed method achieved performance almost equivalent to that obtained using the high-fidelity, evaluation-based single-fidelity optimization. Comparing all three methods, the proposed method required the lowest total number of high-fidelity evaluation runs to obtain a converged solution.
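The expected-improvement criterion used above to pick additional samples can be sketched as follows (for minimization, given a surrogate's predictive mean and standard deviation; the candidate points are illustrative):

```python
import numpy as np
from statistics import NormalDist

def expected_improvement(mu, sigma, f_best):
    """EI (for minimization) under a Gaussian predictive distribution."""
    nd = NormalDist()
    ei = np.zeros_like(mu)
    for k, (m, s) in enumerate(zip(mu, sigma)):
        if s > 0:
            z = (f_best - m) / s
            ei[k] = (f_best - m) * nd.cdf(z) + s * nd.pdf(z)
    return ei                       # zero-variance points give EI = 0

mu = np.array([1.0, 0.5, 1.0])      # surrogate means at candidate points
sd = np.array([1.0, 1.0, 0.0])      # predictive standard deviations
ei = expected_improvement(mu, sd, f_best=1.0)
```

In the hybrid surrogate of the paper, `mu` would come from the radial-basis global model plus the kriging local deviation, and `sd` from the kriging variance; the candidate maximizing EI is evaluated with the high-fidelity code and added to the sample set.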
Riaz, Faisal; Niazi, Muaz A
2017-01-01
This paper presents the concept of a social autonomous agent to conceptualize Autonomous Vehicles (AVs) that interact with other AVs using social manners similar to human behavior. The presented AVs also have the capability of predicting intentions (mentalizing) and copying the actions of each other (mirroring). The Exploratory Agent Based Modeling (EABM) level of the Cognitive Agent Based Computing (CABC) framework has been utilized to design the proposed social agent. Furthermore, to emulate the functionality of the mentalizing and mirroring modules of the proposed social agent, a tailored mathematical model of Richardson's arms race model has also been presented. The performance of the proposed social agent has been validated at two levels: first, it has been simulated using NetLogo, a standard agent-based modeling tool, and second, at a practical level, using a prototype AV. The simulation results confirmed that the proposed social agent-based collision avoidance strategy is 78.52% more efficient than a random walk based collision avoidance strategy in congested flock-like topologies, while the practical results confirmed that the proposed scheme can avoid rear-end and lateral collisions with an efficiency of 99.876%, compared with the existing state-of-the-art IEEE 802.11n-based mirroring neuron-based collision avoidance scheme.
Niazi, Muaz A.
2017-01-01
This paper presents the concept of a social autonomous agent to conceptualize Autonomous Vehicles (AVs) that interact with other AVs using social manners similar to human behavior. The presented AVs also have the capability of predicting intentions (mentalizing) and copying the actions of each other (mirroring). The Exploratory Agent Based Modeling (EABM) level of the Cognitive Agent Based Computing (CABC) framework has been utilized to design the proposed social agent. Furthermore, to emulate the functionality of the mentalizing and mirroring modules of the proposed social agent, a tailored mathematical model of Richardson's arms race model has also been presented. The performance of the proposed social agent has been validated at two levels: first, it has been simulated using NetLogo, a standard agent-based modeling tool, and second, at a practical level, using a prototype AV. The simulation results confirmed that the proposed social agent-based collision avoidance strategy is 78.52% more efficient than a random walk based collision avoidance strategy in congested flock-like topologies, while the practical results confirmed that the proposed scheme can avoid rear-end and lateral collisions with an efficiency of 99.876%, compared with the existing state-of-the-art IEEE 802.11n-based mirroring neuron-based collision avoidance scheme. PMID:29040294
Directory of Open Access Journals (Sweden)
Faisal Riaz
Full Text Available This paper presents the concept of a social autonomous agent to conceptualize Autonomous Vehicles (AVs) that interact with other AVs using social manners similar to human behavior. The presented AVs also have the capability of predicting intentions (mentalizing) and copying the actions of each other (mirroring). The Exploratory Agent Based Modeling (EABM) level of the Cognitive Agent Based Computing (CABC) framework has been utilized to design the proposed social agent. Furthermore, to emulate the functionality of the mentalizing and mirroring modules of the proposed social agent, a tailored mathematical model of Richardson's arms race model has also been presented. The performance of the proposed social agent has been validated at two levels: first, it has been simulated using NetLogo, a standard agent-based modeling tool, and second, at a practical level, using a prototype AV. The simulation results confirmed that the proposed social agent-based collision avoidance strategy is 78.52% more efficient than a random walk based collision avoidance strategy in congested flock-like topologies, while the practical results confirmed that the proposed scheme can avoid rear-end and lateral collisions with an efficiency of 99.876%, compared with the existing state-of-the-art IEEE 802.11n-based mirroring neuron-based collision avoidance scheme.
The importance of radiation for semiempirical water-use efficiency models
Boese, Sven; Jung, Martin; Carvalhais, Nuno; Reichstein, Markus
2017-06-01
Water-use efficiency (WUE) is a fundamental property for the coupling of carbon and water cycles in plants and ecosystems. Existing model formulations predicting this variable differ in the type of response of WUE to the atmospheric vapor pressure deficit of water (VPD). We tested a representative WUE model on the ecosystem scale at 110 eddy covariance sites of the FLUXNET initiative by predicting evapotranspiration (ET) based on gross primary productivity (GPP) and VPD. We found that introducing an intercept term in the formulation increases model performance considerably, indicating that an additional factor needs to be considered. We demonstrate that this intercept term varies seasonally and we subsequently associate it with radiation. Replacing the constant intercept term with a linear function of global radiation was found to further improve model predictions of ET. Our new semiempirical ecosystem WUE formulation indicates that, averaged over all sites, this radiation term accounts for up to half (39-47 %) of transpiration. These empirical findings challenge the current understanding of water-use efficiency on the ecosystem scale.
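One plausible reading of the proposed formulation, a water-use-efficiency term plus a radiation-dependent intercept, can be sketched as a least-squares fit. The functional form ET = a*GPP*sqrt(VPD) + b*Rg and all data below are assumptions for illustration, not the paper's exact model or FLUXNET data:

```python
import numpy as np

# Synthetic half-hourly-style data following the assumed form.
rng = np.random.default_rng(1)
gpp = rng.uniform(1.0, 10.0, 200)        # gross primary productivity
vpd = rng.uniform(0.2, 3.0, 200)         # vapor pressure deficit
rg = rng.uniform(50.0, 800.0, 200)       # global radiation
et = 0.4 * gpp * np.sqrt(vpd) + 0.002 * rg + rng.normal(0.0, 0.01, 200)

# Least-squares fit recovers the WUE slope and the radiation "intercept".
X = np.column_stack([gpp * np.sqrt(vpd), rg])
coef, *_ = np.linalg.lstsq(X, et, rcond=None)
a_fit, b_fit = coef
```

Comparing such a fit against one with a constant intercept (a column of ones instead of `rg`) is the kind of model comparison the abstract describes.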
Energy Technology Data Exchange (ETDEWEB)
Wan, Hui; Rasch, Philip J.; Zhang, Kai; Qian, Yun; Yan, Huiping; Zhao, Chun
2014-09-08
This paper explores the feasibility of an experimentation strategy for investigating sensitivities in fast components of atmospheric general circulation models. The basic idea is to replace the traditional serial-in-time long-term climate integrations by representative ensembles of shorter simulations. The key advantage of the proposed method lies in its efficiency: since fewer days of simulation are needed, the computational cost is less, and because individual realizations are independent and can be integrated simultaneously, the new dimension of parallelism can dramatically reduce the turnaround time in benchmark tests, sensitivities studies, and model tuning exercises. The strategy is not appropriate for exploring sensitivity of all model features, but it is very effective in many situations. Two examples are presented using the Community Atmosphere Model version 5. The first example demonstrates that the method is capable of characterizing the model cloud and precipitation sensitivity to time step length. A nudging technique is also applied to an additional set of simulations to help understand the contribution of physics-dynamics interaction to the detected time step sensitivity. In the second example, multiple empirical parameters related to cloud microphysics and aerosol lifecycle are perturbed simultaneously in order to explore which parameters have the largest impact on the simulated global mean top-of-atmosphere radiation balance. Results show that in both examples, short ensembles are able to correctly reproduce the main signals of model sensitivities revealed by traditional long-term climate simulations for fast processes in the climate system. The efficiency of the ensemble method makes it particularly useful for the development of high-resolution, costly and complex climate models.
An Efficient Explicit-time Description Method for Timed Model Checking
Directory of Open Access Journals (Sweden)
Hao Wang
2009-12-01
Full Text Available Timed model checking, the method to formally verify real-time systems, is attracting increasing attention from both the model checking community and the real-time community. Explicit-time description methods verify real-time systems using general model constructs found in standard un-timed model checkers. Lamport proposed an explicit-time description method using a clock-ticking process (Tick) to simulate the passage of time, together with a group of global variables to model time requirements. Two methods, the Sync-based Explicit-time Description Method using rendezvous synchronization steps and the Semaphore-based Explicit-time Description Method using only one global variable, were proposed; they both achieve better modularity than Lamport's method in modeling real-time systems. In contrast to timed automata based model checkers like UPPAAL, explicit-time description methods can access and store the current time instant for future calculations, which is necessary for many real-time systems, especially those with pre-emptive scheduling. However, the Tick process in the above three methods increments the time by one unit in each tick; the state spaces therefore grow relatively fast as the time parameters increase, a problem when the system's time period is relatively long. In this paper, we propose a more efficient method which enables the Tick process to leap multiple time units in one tick. Preliminary experimental results in a high performance computing environment show that this new method significantly reduces the state space and improves both the time and memory efficiency.
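The state-space saving from leaping ticks can be illustrated with a toy count. The deadlines and horizon are made up; a real explicit-time model tracks timer variables inside the model checker, not a Python list:

```python
# Unit-increment Tick vs. a leaping Tick. A leaping Tick jumps straight
# to the next relevant deadline, so the clock visits far fewer values.
def leap_states(deadlines, horizon):
    """Clock values a leaping Tick visits within the horizon."""
    return [d for d in sorted(set(deadlines)) if d <= horizon]

deadlines = [50, 200, 750]                 # hypothetical timer deadlines
horizon = 1000
unit_count = horizon                       # one clock state per time unit
leap_count = len(leap_states(deadlines, horizon))
```

With three deadlines over a horizon of 1000 units, the clock contributes 3 values instead of 1000, which is the reduction the paper exploits; the current time instant remains available at every visited state.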
Utilizing Visual Effects Software for Efficient and Flexible Isostatic Adjustment Modelling
Meldgaard, A.; Nielsen, L.; Iaffaldano, G.
2017-12-01
The isostatic adjustment signal generated by transient ice sheet loading is an important indicator of past ice sheet extent and the rheological constitution of the interior of the Earth. Finite element modelling has proved to be a very useful tool in these studies. We present a simple numerical model for 3D visco elastic Earth deformation and a new approach to the design of such models utilizing visual effects software designed for the film and game industry. The software package Houdini offers an assortment of optimized tools and libraries which greatly facilitate the creation of efficient numerical algorithms. In particular, we make use of Houdini's procedural work flow, the SIMD programming language VEX, Houdini's sparse matrix creation and inversion libraries, an inbuilt tetrahedralizer for grid creation, and the user interface, which facilitates effortless manipulation of 3D geometry. We mitigate many of the time consuming steps associated with the authoring of efficient algorithms from scratch while still keeping the flexibility that may be lost with the use of commercial dedicated finite element programs. We test the efficiency of the algorithm by comparing simulation times with off-the-shelf solutions from the Abaqus software package. The algorithm is tailored for the study of local isostatic adjustment patterns, in close vicinity to present ice sheet margins. In particular, we wish to examine possible causes for the considerable spatial differences in the uplift magnitude which are apparent from field observations in these areas. Such features, with spatial scales of tens of kilometres, are not resolvable with current global isostatic adjustment models, and may require the inclusion of local topographic features. We use the presented algorithm to study a near field area where field observations are abundant, namely, Disko Bay in West Greenland with the intention of constraining Earth parameters and ice thickness. In addition, we assess how local
International Nuclear Information System (INIS)
Su, Yan; Chan, Lai-Cheong; Shu, Lianjie; Tsui, Kwok-Leung
2012-01-01
Highlights: ► We develop online prediction models for solar photovoltaic system performance. ► The proposed prediction models are simple but reasonably accurate. ► The maximum monthly average minutely efficiency varies between 10.81% and 12.63%. ► The average efficiency tends to be slightly higher in winter months. - Abstract: This paper develops new real-time prediction models for the output power and energy efficiency of solar photovoltaic (PV) systems. These models were validated using measured data from a grid-connected solar PV system in Macau. Both time frames based on yearly average and monthly average are considered. It is shown that the prediction model for the yearly/monthly average of the minutely output power fits the measured data very well, with a high value of R². The online prediction model for system efficiency is based on the ratio of the predicted output power to the predicted solar irradiance. This ratio model is shown to fit the intermediate phase (9 am to 4 pm) very well, but it is not accurate for the growth and decay phases, where the system efficiency is near zero. However, it still serves a useful purpose for practitioners, as most PV systems work in the most efficient manner over this period. It is shown that the maximum monthly average minutely efficiency varies over a small range of 10.81% to 12.63% in different months, with slightly higher efficiency in winter months.
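The ratio model can be sketched as follows. The array area, efficiency and Gaussian irradiance curve are invented stand-ins for the fitted yearly/monthly prediction models: within the 9 am to 4 pm window the ratio of predicted power to predicted irradiance times area is well behaved, whereas outside it the near-zero denominator would make the estimate unstable, as the abstract notes.

```python
import numpy as np

area, eff = 10.0, 0.115                    # hypothetical array area (m^2) and efficiency
minutes = np.arange(9 * 60, 16 * 60)       # the 9 am to 4 pm window
noon = 12.5 * 60.0
g_pred = 900.0 * np.exp(-(((minutes - noon) / 180.0) ** 2))  # W/m^2 stand-in curve
p_pred = eff * area * g_pred               # predicted output power, W
eta = p_pred / (g_pred * area)             # ratio model: predicted efficiency
```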
Efficient n-gram, Skipgram and Flexgram Modelling with Colibri Core
Directory of Open Access Journals (Sweden)
Maarten van Gompel
2016-08-01
Full Text Available Counting n-grams lies at the core of any frequentist corpus analysis and is often considered a trivial matter. Going beyond consecutive n-grams to patterns such as skipgrams and flexgrams increases the demand for efficient solutions. The need to operate on big corpus data does so even more. Lossless compression and non-trivial algorithms are needed to lower the memory demands, yet retain good speed. Colibri Core is software for the efficient computation and querying of n-grams, skipgrams and flexgrams from corpus data. The resulting pattern models can be analysed and compared in various ways. The software offers a programming library for C++ and Python, as well as command-line tools.
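Colibri Core itself is compressed C++/Python software; the pattern types it counts can nevertheless be illustrated with a naive Python sketch (the `{*}` gap marker is an ad-hoc notation for this example, not Colibri Core's own):

```python
from collections import Counter

def ngrams(tokens, n):
    """Consecutive n-grams as tuples."""
    return zip(*(tokens[i:] for i in range(n)))

def skipgrams3(tokens):
    """Trigram patterns with the middle token gapped, e.g. ('to', '{*}', 'or')."""
    return ((a, '{*}', c) for a, _, c in ngrams(tokens, 3))

tokens = "to be or not to be".split()
bigram_counts = Counter(ngrams(tokens, 2))        # ('to', 'be') occurs twice
skipgram_counts = Counter(skipgrams3(tokens))
```

Colibri Core generalizes this to arbitrary gap configurations (flexgrams) and avoids the memory cost of storing raw token tuples by encoding patterns in a compressed binary form.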
Efficient solution of ordinary differential equations modeling electrical activity in cardiac cells.
Sundnes, J; Lines, G T; Tveito, A
2001-08-01
The contraction of the heart is preceded and caused by a cellular electro-chemical reaction, causing an electrical field to be generated. Performing realistic computer simulations of this process involves solving a set of partial differential equations, as well as a large number of ordinary differential equations (ODEs) characterizing the reactive behavior of the cardiac tissue. Experiments have shown that the solution of the ODEs contributes significantly to the total work of a simulation, and there is thus a strong need for efficient solution methods for this part of the problem. This paper presents how an efficient implicit Runge-Kutta method may be adapted to solve a complicated cardiac cell model consisting of 31 ODEs, and how this solver may be coupled to a set of PDE solvers to provide complete simulations of the electrical activity.
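The benefit of an implicit scheme on stiff kinetics can be shown on a scalar test equation. The 31-ODE cell model and the implicit Runge-Kutta scheme of the paper are replaced here by backward Euler on a made-up problem:

```python
import math

# Backward (implicit) Euler on the stiff test problem
#   y' = -lam * (y - cos t),   y(0) = 0,   lam = 1000.
# Explicit Euler is only stable for h < 2/lam = 0.002 here, while the
# implicit update is stable for any h; because this problem is linear,
# each implicit step is solved in closed form rather than by Newton.
lam, h, T = 1000.0, 0.05, 1.0
t, y = 0.0, 0.0
while t < T - 1e-12:
    t += h
    y = (y + h * lam * math.cos(t)) / (1.0 + h * lam)
# y now tracks the slow solution y ~ cos t despite the large step.
```

For a nonlinear 31-component cell model the same idea applies, but each implicit stage requires a Newton iteration with the model's Jacobian, which is exactly where an efficient adapted solver pays off.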
COMPARATIVE EFFICIENCIES STUDY OF SLOT MODEL AND MOUSE MODEL IN PRESSURISED PIPE FLOW
Directory of Open Access Journals (Sweden)
Saroj K. Pandit
2014-01-01
Full Text Available The flow in sewers is unsteady and variable between free-surface and full-pipe pressurized flow. Sewers are designed on the basis of free surface (gravity) flow; however, they may carry pressurized flow. The Preissmann slot concept is a widely used numerical approach to unsteady free surface-pressurized flow, as it provides the advantage of treating both regimes as a single type of free surface flow. The slot concept uses the Saint-Venant equations as the basic equations for one-dimensional unsteady free surface flow. This paper includes two different numerical models using the Saint-Venant equations. In the first model, the Saint-Venant equations of continuity and momentum are solved by the Method of Characteristics and presented in forms for direct substitution into FORTRAN programming for numerical analysis. The MOUSE model carries out computation of unsteady flows founded on an implicit, finite difference numerical solution of the basic one-dimensional Saint-Venant equations of free surface flow. The simulation results are compared to analyze the nature and degree of errors for further improvement.
Directory of Open Access Journals (Sweden)
Gimazov Ruslan
2018-01-01
Full Text Available The paper considers the issue of supplying autonomous robots with solar batteries. The low efficiency of modern solar batteries is a critical issue for the whole renewable energy industry. The urgency of improving the energy efficiency of solar batteries supplying a robotic system stems from the task of maximizing autonomous operation time. Several methods to improve the energy efficiency of solar batteries exist; the use of an MPPT charge controller is one of them. MPPT technology allows increasing the power generated by the solar battery by 15 – 30%. The most common MPPT algorithm is the perturbation and observation algorithm. This algorithm has several disadvantages, such as power fluctuation and a fixed maximum power point tracking time. These problems can be solved by using a sufficiently accurate predictive and adaptive algorithm. In order to improve the efficiency of solar batteries, an autonomous power supply system was developed, which included an intelligent MPPT charge controller with a fuzzy logic-based perturbation and observation algorithm. To study the implementation of the fuzzy logic apparatus in the MPPT algorithm, we developed, in the Matlab/Simulink environment, a simulation model of the system, including the solar battery, MPPT controller, accumulator and load. The simulation results established that the use of MPPT technology increased energy production by 23%; introducing the fuzzy logic algorithm into the MPPT controller greatly increased the speed of maximum power point tracking and neutralized the voltage fluctuations, which in turn reduced the power underproduction by 2%.
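The baseline perturb-and-observe loop that the fuzzy controller improves upon can be sketched as follows. The quadratic power-voltage curve and the step size are illustrative; a real PV curve and the fuzzy step-size adaptation are not modeled:

```python
def perturb_and_observe(power, v0, step=0.1, iters=200):
    """Fixed-step P&O: keep perturbing the operating voltage in the
    direction that increased power; reverse when power drops."""
    v, direction = v0, 1.0
    p_prev = power(v)
    for _ in range(iters):
        v += direction * step
        p = power(v)
        if p < p_prev:
            direction = -direction   # got worse: perturb the other way
        p_prev = p
    return v

# Toy power-voltage curve with its maximum power point at 17 V.
mpp_v = perturb_and_observe(lambda v: 100.0 - (v - 17.0) ** 2, v0=12.0)
```

Note how the fixed step makes the operating point oscillate around the maximum rather than settle on it; that oscillation, and the fixed tracking time, are exactly the drawbacks the fuzzy logic variant addresses by adapting the step.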
Oracle Efficient Variable Selection in Random and Fixed Effects Panel Data Models
DEFF Research Database (Denmark)
Kock, Anders Bredahl
This paper generalizes the results for the Bridge estimator of Huang et al. (2008) to linear random and fixed effects panel data models which are allowed to grow in both dimensions. In particular, we show that the Bridge estimator is oracle efficient. It can correctly distinguish between relevant...... and irrelevant variables and the asymptotic distribution of the estimators of the coefficients of the relevant variables is the same as if only these had been included in the model, i.e. as if an oracle had revealed the true model prior to estimation. In the case of more explanatory variables than observations......, we prove that the Marginal Bridge estimator can asymptotically correctly distinguish between relevant and irrelevant explanatory variables. We do this without restricting the dependence between covariates and without assuming sub-Gaussianity of the error terms, thereby generalizing the results...
Efficiently Synchronized Spread-Spectrum Audio Watermarking with Improved Psychoacoustic Model
Directory of Open Access Journals (Sweden)
Xing He
2008-01-01
Full Text Available This paper presents an audio watermarking scheme which is based on an efficiently synchronized spread-spectrum technique and a new psychoacoustic model computed using the discrete wavelet packet transform. The psychoacoustic model takes advantage of the multiresolution analysis of a wavelet transform, which closely approximates the standard critical band partition. The goal of this model is to include an accurate time-frequency analysis and to calculate both the frequency and temporal masking thresholds directly in the wavelet domain. Experimental results show that this watermarking scheme can successfully embed watermarks into digital audio without introducing audible distortion. Several common watermark attacks were applied and the results indicate that the method is very robust to those attacks.
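The spread-spectrum embedding and correlation detection underlying such schemes can be sketched as follows (a bare-bones illustration; in the paper, the embedding strength would be shaped per subband by the wavelet-domain psychoacoustic masking thresholds rather than by a fixed `alpha`, and the synchronization machinery is omitted here):

```python
import random

def embed_ss(signal, bit, key, alpha=0.01):
    """Additive spread-spectrum embedding: add +/- alpha times a
    key-seeded pseudo-noise (PN) sequence to the host samples."""
    rng = random.Random(key)
    pn = [rng.choice((-1.0, 1.0)) for _ in signal]
    s = 1.0 if bit else -1.0
    return [x + s * alpha * p for x, p in zip(signal, pn)]

def detect_ss(signal, key):
    """Blind correlation detector: regenerate the PN sequence from the
    key and return the sign of the correlation as the decoded bit."""
    rng = random.Random(key)
    pn = [rng.choice((-1.0, 1.0)) for _ in signal]
    corr = sum(x * p for x, p in zip(signal, pn))
    return corr > 0
```

Robustness to attacks comes from the PN sequence's length: the correlation grows linearly with the number of samples while uncorrelated distortion grows only with its square root.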
Reliability and Efficiency of Generalized Rumor Spreading Model on Complex Social Networks
International Nuclear Information System (INIS)
Naimi, Yaghoob; Naimi, Mohammad
2013-01-01
We introduce a generalized rumor spreading model and investigate some of its properties on different complex social networks. Unlike previous rumor models, in which both the spreader-spreader (SS) and the spreader-stifler (SR) interactions have the same rate α, we define α(1) and α(2) for SS and SR interactions, respectively. The effect of varying α(1) and α(2) on the final density of stiflers is investigated. Furthermore, the influence of the topological structure of the network on rumor spreading is studied by analyzing the behavior of several global parameters such as reliability and efficiency. Our results show that while networks with homogeneous connectivity patterns reach a higher reliability, scale-free topologies need less time to reach a steady state with respect to the rumor. (interdisciplinary physics and related areas of science and technology)
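A mean-field version of such a generalized model, with separate SS and SR stifling rates, can be integrated directly (a homogeneous-network sketch with illustrative rate names; the paper's scale-free case requires degree-resolved equations):

```python
def rumor_dynamics(lam, a1, a2, k=6.0, dt=0.001, t_end=50.0):
    """Mean-field rumor model on a homogeneous network of mean degree k.
    Ignorants (i) become spreaders (s) at rate lam per contact;
    spreader-spreader contacts stifle at rate a1 and spreader-stifler
    contacts at rate a2 (the generalization: a1 != a2).
    Returns the final density of stiflers (r)."""
    i, s, r = 0.999, 0.001, 0.0
    t = 0.0
    while t < t_end and s > 1e-9:
        di = -lam * k * i * s
        ds = lam * k * i * s - a1 * k * s * s - a2 * k * s * r
        dr = a1 * k * s * s + a2 * k * s * r
        i, s, r = i + dt * di, s + dt * ds, r + dt * dr  # forward Euler
        t += dt
    return r
```

With a1 = a2 this reduces to the standard single-α model, whose final stifler density is known to be well below 1 even when everyone would hear the rumor under pure epidemic spreading.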
An efficient binomial model-based measure for sequence comparison and its application.
Liu, Xiaoqing; Dai, Qi; Li, Lihua; He, Zerong
2011-04-01
Sequence comparison is one of the major tasks in bioinformatics, and can serve as evidence of structural and functional conservation, as well as of evolutionary relations. There are several similarity/dissimilarity measures for sequence comparison, but challenges remain. This paper presents a binomial model-based measure to analyze biological sequences. With the help of a random indicator, the occurrence of a word at any position of a sequence can be regarded as a Bernoulli random variable, and the distribution of the sum of word occurrences is well known to be binomial. Using a recursive formula, we computed the binomial probability of the word count and proposed a binomial model-based measure based on the relative entropy. The proposed measure was tested in extensive experiments, including classification of HEV genotypes and phylogenetic analysis, and further compared with alignment-based and alignment-free measures. The results demonstrate that the proposed binomial model-based measure is more efficient.
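The core computation, binomial word-count probabilities obtained recursively plus a relative-entropy comparison, can be sketched as follows (a simplified illustration assuming i.i.d. letter composition, not the authors' exact estimator):

```python
import math

def word_count_binomial(seq, word):
    """Treat the occurrence of `word` at each position as a Bernoulli
    trial, so the count over all n positions is Binomial(n, p); p is
    estimated from letter frequencies (an i.i.d. simplification).
    Returns [P(X = 0), ..., P(X = n)] via the recursion
    P(k) = P(k-1) * (n-k+1)/k * p/(1-p)."""
    n = len(seq) - len(word) + 1
    freq = {a: seq.count(a) / len(seq) for a in set(seq)}
    p = math.prod(freq.get(a, 0.0) for a in word)
    probs = [(1 - p) ** n]
    for k in range(1, n + 1):
        probs.append(probs[-1] * (n - k + 1) / k * p / (1 - p))
    return probs

def relative_entropy(P, Q, eps=1e-12):
    """Kullback-Leibler divergence between two word-count distributions,
    usable as a dissimilarity measure between sequences."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(P, Q))
```

A full measure would aggregate the divergence over all words of a fixed length and symmetrize it before feeding it into clustering or tree-building.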
Shikata, Masahito; Ezura, Hiroshi
2016-01-01
Tomato is a model plant for fruit development, a unique feature that classical model plants such as Arabidopsis and rice do not have. The tomato genome was sequenced in 2012 and tomato is becoming very popular as an alternative system for plant research. Among many varieties of tomato, Micro-Tom has been recognized as a model cultivar for tomato research because it shares some key advantages with Arabidopsis including its small size, short life cycle, and capacity to grow under fluorescent lights at a high density. Mutants and transgenic plants are essential materials for functional genomics research, and therefore, the availability of mutant resources and methods for genetic transformation are key tools to facilitate tomato research. Here, we introduce the Micro-Tom mutant database "TOMATOMA" and an efficient transformation protocol for Micro-Tom.
Some safe and sensible shortcuts for efficiently upscaled updates of existing elevation models.
Knudsen, Thomas; Aasbjerg Nielsen, Allan
2013-04-01
The Danish national elevation model, DK-DEM, was introduced in 2009 and is based on LiDAR data collected in the time frame 2005-2007. Hence, DK-DEM is aging, and it is time to consider how to integrate new data with the current model in a way that improves the representation of new landscape features, while still preserving the overall (very high) quality of the model. In LiDAR terms, 2005 is equivalent to some time between the palaeolithic and the neolithic. So evidently, when (and if) an update project is launched, we may expect some notable improvements due to the technical and scientific developments from the last half decade. To estimate the magnitude of these potential improvements, and to devise efficient and effective ways of integrating the new and old data, we currently carry out a number of case studies based on comparisons between the current terrain model (with a ground sample distance, GSD, of 1.6 m), and a number of new high resolution point clouds (10-70 points/m2). Not knowing anything about the terms of a potential update project, we consider multiple scenarios ranging from business as usual: A new model with the same GSD, but improved precision, to aggressive upscaling: A new model with 4 times better GSD, i.e. a 16-fold increase in the amount of data. Especially in the latter case speeding up the gridding process is important. Luckily recent results from one of our case studies reveal that for very high resolution data in smooth terrain (which is the common case in Denmark), using local mean (LM) as grid value estimator is only negligibly worse than using the theoretically "best" estimator, i.e. ordinary kriging (OK) with rigorous modelling of the semivariogram. The bias in a leave one out cross validation differs on the micrometer level, while the RMSE differs on the 0.1 mm level. This is fortunate, since a LM estimator can be implemented in plain stream mode, letting the points from the unstructured point cloud (i.e. no TIN generation) stream
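The stream-mode local-mean gridding mentioned at the end can be sketched in a few lines; because only a running sum and count per cell are needed, the unstructured point cloud never has to be held in memory or triangulated (cell indexing by the ground sample distance is this illustration's assumption):

```python
from collections import defaultdict

def stream_local_mean(points, gsd):
    """Stream-mode local-mean (LM) gridding: accumulate sum and count
    per grid cell while (x, y, z) points stream by, then emit the mean
    elevation per cell.  No TIN generation, O(1) work per point."""
    acc = defaultdict(lambda: [0.0, 0])   # cell -> [sum_z, n]
    for x, y, z in points:
        cell = (int(x // gsd), int(y // gsd))
        acc[cell][0] += z
        acc[cell][1] += 1
    return {cell: s / n for cell, (s, n) in acc.items()}
```

This is the estimator the case study found to be only negligibly worse than ordinary kriging for very high resolution data in smooth terrain.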
Efficient Simulation Modeling of an Integrated High-Level-Waste Processing Complex
International Nuclear Information System (INIS)
Gregory, Michael V.; Paul, Pran K.
2000-01-01
An integrated computational tool named the Production Planning Model (ProdMod) has been developed to simulate the operation of the entire high-level-waste complex (HLW) at the Savannah River Site (SRS) over its full life cycle. ProdMod is used to guide SRS management in operating the waste complex in an economically efficient and environmentally sound manner. SRS HLW operations are modeled using coupled algebraic equations. The dynamic nature of plant processes is modeled in the form of a linear construct in which the time dependence is implicit. Batch processes are modeled in discrete event-space, while continuous processes are modeled in time-space. The ProdMod methodology maps between event-space and time-space such that the inherent mathematical discontinuities in batch process simulation are avoided without sacrificing any of the necessary detail in the batch recipe steps. Modeling the processes separately in event- and time-space using linear constructs, and then coupling the two spaces, has accelerated the speed of simulation compared to a typical dynamic simulation. The ProdMod simulator models have been validated against operating data and other computer codes. Case studies have demonstrated the usefulness of the ProdMod simulator in developing strategies that demonstrate significant cost savings in operating the SRS HLW complex and in verifying the feasibility of newly proposed processes
Dimov, I.; Georgieva, R.; Todorov, V.; Ostromsky, Tz.
2017-10-01
Reliability of large-scale mathematical models is an important issue when such models are used to support decision makers. Sensitivity analysis of model outputs to variation or natural uncertainties of model inputs is crucial for improving the reliability of mathematical models. A comprehensive experimental study of Monte Carlo algorithms based on Sobol sequences for multidimensional numerical integration has been carried out. A comparison with Latin hypercube sampling and a particular quasi-Monte Carlo lattice rule based on generalized Fibonacci numbers is presented. The algorithms have been successfully applied to compute global Sobol sensitivity measures corresponding to the influence of several input parameters (six chemical reaction rates and four different groups of pollutants) on the concentrations of important air pollutants. The concentration values were generated by the Unified Danish Eulerian Model. The sensitivity study was carried out for the areas of several European cities with different geographical locations. The numerical tests show that the stochastic algorithms under consideration are efficient for multidimensional integration, and especially for computing sensitivity indices of small value. This is crucial, since even small indices may need to be estimated accurately in order to achieve a more reliable picture of the influence of the inputs and a more reliable interpretation of the mathematical model results.
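A pick-and-freeze Monte Carlo estimator of first-order Sobol indices can be sketched as follows (plain pseudo-random sampling for brevity; the study's point is precisely that Sobol quasi-random sequences and Fibonacci lattice rules converge faster than this baseline):

```python
import random

def first_order_sobol(f, dim, n=20000, seed=1):
    """Pick-and-freeze estimate of first-order Sobol indices S_i for a
    model f on [0,1]^dim.  Two independent sample matrices A and B are
    drawn; for each input i, B is re-evaluated with column i 'frozen'
    to A's values, giving S_i = Cov(f(A), f(AB_i)) / Var(f)."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    fA = [f(a) for a in A]
    mu = sum(fA) / n
    var = sum(y * y for y in fA) / n - mu * mu
    S = []
    for i in range(dim):
        ABi = [b[:i] + [a[i]] + b[i + 1:] for a, b in zip(A, B)]
        fAB = [f(x) for x in ABi]
        cov = sum(ya * yab for ya, yab in zip(fA, fAB)) / n - mu * mu
        S.append(cov / var)
    return S
```

For an additive model such as f(x) = x1 + 2·x2, the exact indices are 0.2 and 0.8; small indices are where the estimator's variance hurts most, which motivates the quasi-Monte Carlo alternatives.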
Madani, N.; Kimball, J. S.; Running, S. W.
2014-12-01
Remote sensing based light use efficiency (LUE) models, including the MODIS (MODerate resolution Imaging Spectroradiometer) MOD17 algorithm, are commonly used for regional estimation and monitoring of vegetation gross primary production (GPP) and photosynthetic carbon (CO2) uptake. A common model assumption is that plants in a biome matrix operate at their photosynthetic capacity under optimal climatic conditions. A prescribed biome maximum light use efficiency parameter defines the maximum photosynthetic carbon conversion rate under prevailing climate conditions and is a large source of model uncertainty. Here, we used tower (FLUXNET) eddy covariance measurement based carbon flux data for estimating optimal LUE (LUEopt) over a North American domain. LUEopt was first estimated using tower observed daily carbon fluxes, meteorology and satellite (MODIS) observed fraction of photosynthetically active radiation (FPAR). LUEopt was then spatially interpolated over the domain using empirical models derived from independent geospatial data including global plant traits, surface soil moisture, terrain aspect, land cover type and percent tree cover. The derived LUEopt maps were then used as primary inputs to the MOD17 LUE algorithm for regional GPP estimation; these results were evaluated against tower observations and alternate MOD17 GPP estimates determined using biome-specific LUEopt constants. Estimated LUEopt shows large spatial variability within and among different land cover classes, as indicated from a sparse North American tower network. Leaf nitrogen content and soil moisture are two important factors explaining LUEopt spatial variability. GPP estimated from spatially explicit LUEopt inputs shows significantly improved model accuracy against independent tower observations (R2 = 0.76; mean RMSE …). Plant trait information can explain spatial heterogeneity in LUEopt, leading to improved GPP estimates from satellite-based LUE models.
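The LUE model structure being calibrated can be stated in a few lines (a schematic of the MOD17-style formulation; the parameter names here are illustrative, and the environmental scalars are simplified to two multipliers in the 0-1 range):

```python
def mod17_gpp(lue_opt, fpar, par, tmin_scalar, vpd_scalar):
    """MOD17-style light-use-efficiency GPP estimate (g C m-2 d-1):
        GPP = LUEopt * f(Tmin) * f(VPD) * FPAR * PAR
    where LUEopt (g C per MJ) is the optimal light use efficiency the
    study maps spatially, FPAR is the fraction of absorbed PAR (0-1),
    PAR is incident photosynthetically active radiation (MJ m-2 d-1),
    and the two scalars down-regulate LUEopt under cold or dry air."""
    eps = lue_opt * tmin_scalar * vpd_scalar   # realized LUE
    return eps * fpar * par
```

The study's contribution sits entirely in `lue_opt`: replacing a biome-wide constant with a spatially explicit map while the rest of the algorithm is unchanged.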
An efficient model to improve the performance of platelet inventory of the blood banks
Directory of Open Access Journals (Sweden)
Annista Wijayanayake
2017-06-01
Full Text Available Platelet transfusions are vital for the prevention of fatal hemorrhage; therefore, a stable inventory of platelets is required for the efficient and effective delivery of services in all hospitals and medical centers. However, over the past decades the requirement for platelets has been continuously increasing, while the number of potential donors is decreasing. Moreover, due to the very short life span of just five days, a large volume of platelets expires on the shelves, causing unnecessary shortages of platelets. Furthermore, it is very costly and difficult to obtain platelets from another blood bank on short notice, and these unexpected shortages put patients' lives at risk. This study addresses these issues by developing an efficient blood inventory management model that reduces platelet shortages and wastage while reducing the related inventory costs. Currently, blood banks manage platelet inventory according to their own instincts, which results in shortages and wastage. As a solution, we propose a model to manage the daily supply of platelets by forecasting the daily demand. Three different algorithms were developed using lower-bound, average and upper-bound values and tested to find the optimal solution that best fits platelet inventory management. These models were tested using data for 60 days obtained from two different levels of blood banks in Sri Lanka, namely a General Hospital blood bank and a Base Hospital blood bank. In General hospitals, the demand for blood components including platelets is very high compared to Base hospitals. The study produced two different inventory management models for the two different types of blood banks. The model that best fits the General Hospital blood bank, where demand is high, was able to reduce shortages by 46.74%, wastage by 89.82% and the total inventory level by 39.10% and, the model that
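The kind of forecast-driven inventory policy evaluated in such a study can be sketched as follows (a simplified order-up-to policy with FIFO issuing and five-day expiry; the paper's three algorithms differ in how the forecast level is set, e.g. lower-bound, average or upper-bound demand):

```python
def simulate_platelets(demand, forecast, shelf_life=5):
    """Order-up-to policy driven by a daily demand forecast.
    Stock is a list of (age, units); each day the stock ages, expired
    units (age >= shelf_life) are discarded as wastage, an order tops
    the inventory up to the forecast, and demand is issued oldest-first
    (FIFO).  Returns (total shortage, total wastage) over the horizon."""
    stock = []
    shortage = wastage = 0
    for day, d in enumerate(demand):
        stock = [(a + 1, u) for a, u in stock]           # age the stock
        wastage += sum(u for a, u in stock if a >= shelf_life)
        stock = [(a, u) for a, u in stock if a < shelf_life]
        on_hand = sum(u for _, u in stock)
        order = max(0, forecast[day] - on_hand)          # order up to forecast
        if order:
            stock.append((0, order))
        need = d
        remaining = []
        for a, u in sorted(stock, reverse=True):         # oldest first
            take = min(u, need)
            need -= take
            if u - take:
                remaining.append((a, u - take))
        stock = remaining
        shortage += need                                 # unmet demand
    return shortage, wastage
```

Running such a simulation against 60 days of hospital data is how the shortage/wastage reductions reported in the abstract would be quantified.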
Energy Technology Data Exchange (ETDEWEB)
Gonzalez, Daniel; Rojas, Leorlen; Rosales, Jesus; Castro, Landy; Gamez, Abel; Brayner, Carlos, E-mail: danielgonro@gmail.com [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil); Garcia, Lazaro; Garcia, Carlos; Torre, Raciel de la, E-mail: lgarcia@instec.cu [Instituto Superior de Tecnologias y Ciencias Aplicadas (InSTEC), La Habana (Cuba); Sanchez, Danny [Universidade Estadual de Santa Cruz (UESC), Ilheus, BA (Brazil)
2015-07-01
High temperature electrolysis coupled to a very high temperature reactor (VHTR) is one of the most promising methods for hydrogen production using a nuclear reactor as the primary heat source. However, there are no reports in the scientific literature of a test facility that allows evaluation of the efficiency of the process and of the other physical parameters that must be taken into consideration for its accurate application in the hydrogen economy as a massive production method. Given this lack of experimental facilities, mathematical models are among the most used tools to study this process and its flowsheets, in which the electrolyzer is the most important component because of its complexity and importance in the process. A computational fluid dynamics (CFD) model for the evaluation and optimization of the electrolyzer of a high temperature electrolysis hydrogen production flowsheet was developed using ANSYS FLUENT®. The electrolyzer's operational and design parameters will be optimized in order to obtain the maximum hydrogen production and the highest efficiency in the module. This optimized electrolyzer model will be incorporated into a chemical process simulation (CPS) code to study the overall high temperature flowsheet coupled to a high temperature accelerator driven system (ADS), which offers advantages in the transmutation of spent fuel. (author)
International Nuclear Information System (INIS)
Gonzalez, Daniel; Rojas, Leorlen; Rosales, Jesus; Castro, Landy; Gamez, Abel; Brayner, Carlos; Garcia, Lazaro; Garcia, Carlos; Torre, Raciel de la; Sanchez, Danny
2015-01-01
High temperature electrolysis coupled to a very high temperature reactor (VHTR) is one of the most promising methods for hydrogen production using a nuclear reactor as the primary heat source. However, there are no reports in the scientific literature of a test facility that allows evaluation of the efficiency of the process and of the other physical parameters that must be taken into consideration for its accurate application in the hydrogen economy as a massive production method. Given this lack of experimental facilities, mathematical models are among the most used tools to study this process and its flowsheets, in which the electrolyzer is the most important component because of its complexity and importance in the process. A computational fluid dynamics (CFD) model for the evaluation and optimization of the electrolyzer of a high temperature electrolysis hydrogen production flowsheet was developed using ANSYS FLUENT®. The electrolyzer's operational and design parameters will be optimized in order to obtain the maximum hydrogen production and the highest efficiency in the module. This optimized electrolyzer model will be incorporated into a chemical process simulation (CPS) code to study the overall high temperature flowsheet coupled to a high temperature accelerator driven system (ADS), which offers advantages in the transmutation of spent fuel. (author)
Efficient and robust model-to-image alignment using 3D scale-invariant features.
Toews, Matthew; Wells, William M
2013-04-01
This paper presents feature-based alignment (FBA), a general method for efficient and robust model-to-image alignment. Volumetric images, e.g. CT scans of the human body, are modeled probabilistically as a collage of 3D scale-invariant image features within a normalized reference space. Features are incorporated as a latent random variable and marginalized out in computing a maximum a posteriori alignment solution. The model is learned from features extracted in pre-aligned training images, then fit to features extracted from a new image to identify a globally optimal locally linear alignment solution. Novel techniques are presented for determining local feature orientation and efficiently encoding feature intensity in 3D. Experiments involving difficult magnetic resonance (MR) images of the human brain demonstrate FBA achieves alignment accuracy similar to widely-used registration methods, while requiring a fraction of the memory and computation resources and offering a more robust, globally optimal solution. Experiments on CT human body scans demonstrate FBA as an effective system for automatic human body alignment where other alignment methods break down. Copyright © 2012 Elsevier B.V. All rights reserved.
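The "globally optimal locally linear alignment" of matched feature locations can be given a stripped-down flavor with a closed-form least-squares fit (scale plus translation only; the real FBA method also estimates rotation and marginalizes over feature correspondences probabilistically):

```python
def fit_translation_scale(model_pts, image_pts):
    """Closed-form least-squares fit of q = s*p + t between matched 3D
    feature locations (model_pts -> image_pts).  Centroids give t once
    the scale s is known; s comes from the ratio of centered
    cross- and auto-correlations."""
    n = len(model_pts)
    cm = [sum(p[d] for p in model_pts) / n for d in range(3)]
    ci = [sum(q[d] for q in image_pts) / n for d in range(3)]
    num = den = 0.0
    for p, q in zip(model_pts, image_pts):
        for d in range(3):
            num += (p[d] - cm[d]) * (q[d] - ci[d])
            den += (p[d] - cm[d]) ** 2
    s = num / den
    t = [ci[d] - s * cm[d] for d in range(3)]
    return s, t
```

Because scale-invariant features carry their own scale and orientation, a single reliable correspondence already constrains such a similarity transform, which is what makes the approach robust when intensity-based registration breaks down.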
Modeling the evolution of channel shape: Balancing computational efficiency with hydraulic fidelity
Wobus, C.W.; Kean, J.W.; Tucker, G.E.; Anderson, R. Scott
2008-01-01
The cross-sectional shape of a natural river channel controls the capacity of the system to carry water off a landscape, to convey sediment derived from hillslopes, and to erode its bed and banks. Numerical models that describe the response of a landscape to changes in climate or tectonics therefore require formulations that can accommodate evolution of channel cross-sectional geometry. However, fully two-dimensional (2-D) flow models are too computationally expensive to implement in large-scale landscape evolution models, while available simple empirical relationships between width and discharge do not adequately capture the dynamics of channel adjustment. We have developed a simplified 2-D numerical model of channel evolution in a cohesive, detachment-limited substrate subject to steady, unidirectional flow. Erosion is assumed to be proportional to boundary shear stress, which is calculated using an approximation of the flow field in which log-velocity profiles are assumed to apply along vectors that are perpendicular to the local channel bed. Model predictions of the velocity structure, peak boundary shear stress, and equilibrium channel shape compare well with predictions of a more sophisticated but more computationally demanding ray-isovel model. For example, the mean velocities computed by the two models are consistent to within ~3%, and the predicted peak shear stress is consistent to within ~7%. Furthermore, the shear stress distributions predicted by our model compare favorably with available laboratory measurements for prescribed channel shapes. A modification to our simplified code in which the flow includes a high-velocity core allows the model to be extended to estimate shear stress distributions in channels with large width-to-depth ratios. Our model is efficient enough to incorporate into large-scale landscape evolution codes and can be used to examine how channels adjust both cross-sectional shape and slope in response to tectonic and climatic
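The shear-stress closure described above rests on two standard ingredients, the depth-slope product and the law of the wall, which can be written down directly (constants are for clear water; applying the log profile along bed-normal vectors is the model's simplifying assumption, and `z0` is an illustrative roughness length):

```python
import math

RHO = 1000.0   # water density [kg/m^3]
G = 9.81       # gravitational acceleration [m/s^2]
KAPPA = 0.41   # von Karman constant

def boundary_shear(h, S):
    """Depth-slope product for boundary shear stress:
    tau = rho * g * h * S  [Pa], with flow depth h and slope S."""
    return RHO * G * h * S

def log_velocity(z, h, S, z0):
    """Law-of-the-wall velocity u(z) = (u*/kappa) * ln(z/z0), with the
    shear velocity u* = sqrt(tau/rho) from the depth-slope product;
    in the simplified model this profile is applied along vectors
    normal to the local bed."""
    u_star = math.sqrt(boundary_shear(h, S) / RHO)
    return u_star / KAPPA * math.log(z / z0)
```

Integrating erosion rate proportional to the local `boundary_shear` around the wetted perimeter, time step after time step, is what evolves the cross-section toward its equilibrium shape.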
International Nuclear Information System (INIS)
Comodi, Gabriele; Cioccolanti, Luca; Renzi, Massimiliano
2014-01-01
This study investigates the potential of energy efficiency, renewables, and micro-cogeneration to reduce household consumption in a medium Italian town and analyses the scope for municipal local policies. The study also investigates the effects of tourist flows on town's energy consumption by modelling energy scenarios for permanent and summer homes. Two long-term energy scenarios (to 2030) were modelled using the MarkAL-TIMES generator model: BAU (business as usual), which is the reference scenario, and EHS (exemplary household sector), which involves targets of penetration for renewables and micro-cogeneration. The analysis demonstrated the critical role of end-use energy efficiency in curbing residential consumption. Cogeneration and renewables (PV (photovoltaic) and solar thermal panels) were proven to be valuable solutions to reduce the energetic and environmental burden of the household sector (−20% in 2030). Because most of household energy demand is ascribable to space-heating or hot water production, this study finds that micro-CHP technologies with lower power-to-heat ratios (mainly, Stirling engines and microturbines) show a higher diffusion, as do solar thermal devices. The spread of micro-cogeneration implies a global reduction of primary energy but involves the internalisation of the primary energy, and consequently CO 2 emissions, previously consumed in a centralised power plant within the municipality boundaries. - Highlights: • Energy consumption in permanent homes can be reduced by 20% in 2030. • High efficiency appliances have different effect according to their market penetration. • Use of electrical heat pumps shift consumption from natural gas to electricity. • Micro-CHP entails a global reduction of energy consumption but greater local emissions. • The main CHP technologies entering the residential market are Stirling and μ-turbines
Hasegawa, Takanori; Nagasaki, Masao; Yamaguchi, Rui; Imoto, Seiya; Miyano, Satoru
2014-07-01
Recently, several biological simulation models of, e.g., gene regulatory networks and metabolic pathways, have been constructed based on existing knowledge of biomolecular reactions, e.g., DNA-protein and protein-protein interactions. However, since these do not always contain all necessary molecules and reactions, their simulation results can be inconsistent with observational data. Therefore, improvements in such simulation models are urgently required. A previously reported method created multiple candidate simulation models by partially modifying existing models. However, this approach was computationally costly and could not handle a large number of candidates that are required to find models whose simulation results are highly consistent with the data. In order to overcome the problem, we focused on the fact that the qualitative dynamics of simulation models are highly similar if they share a certain amount of regulatory structures. This indicates that better fitting candidates tend to share the basic regulatory structure of the best fitting candidate, which can best predict the data among candidates. Thus, instead of evaluating all candidates, we propose an efficient explorative method that can selectively and sequentially evaluate candidates based on the similarity of their regulatory structures. Furthermore, in estimating the parameter values of a candidate, e.g., synthesis and degradation rates of mRNA, for the data, those of the previously evaluated candidates can be utilized. The method is applied here to the pharmacogenomic pathways for corticosteroids in rats, using time-series microarray expression data. In the performance test, we succeeded in obtaining more than 80% of consistent solutions within 15% of the computational time as compared to the comprehensive evaluation. Then, we applied this approach to 142 literature-recorded simulation models of corticosteroid-induced genes, and consequently selected 134 newly constructed better models. The
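The selective, similarity-ordered evaluation strategy can be sketched generically (`similarity` and `evaluate` are placeholders for the paper's regulatory-structure similarity measure and the costly fit-to-data scoring of a candidate simulation model):

```python
def selective_search(candidates, similarity, evaluate, budget):
    """Greedy explorative search: instead of evaluating every candidate,
    repeatedly evaluate the structurally most similar un-evaluated
    candidate to the current best, exploiting the observation that
    well-fitting candidates tend to share regulatory structure.
    `evaluate` returns a fit score (higher is better)."""
    best = candidates[0]
    best_score = evaluate(best)
    remaining = set(candidates[1:])
    for _ in range(budget):
        if not remaining:
            break
        nxt = max(remaining, key=lambda c: similarity(best, c))
        remaining.discard(nxt)
        score = evaluate(nxt)            # the expensive step being rationed
        if score > best_score:
            best, best_score = nxt, score
    return best, best_score
```

In the paper's setting, the savings compound because parameter estimates from already-evaluated, structurally similar candidates warm-start the fitting of the next one.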
Investigations of the efficiency of enzyme production technologies using modelling tools
DEFF Research Database (Denmark)
Albæk, Mads Orla; Gernaey, Krist; Hansen, Morten Skov
Growing markets and new innovative applications of industrial enzymes leads to increased interest in efficient production of these products. Most industrial enzymes are currently produced in traditional stirred tank reactors in submerged fed batch culture. The limiting parameter in such processes...... fermentations of the filamentous fungus Trichoderma reesei in 550litre pilot scale stirred tank reactors for a range of process conditions. Based on the experimental data a process model has been created, which satisfactory simulates the effect of the changing process conditions: Aeration rate, agitation speed...
International Nuclear Information System (INIS)
Galiano, G.; Grau, A.
1994-01-01
An intelligent computer program has been developed to obtain the mathematical formulae needed to compute the probabilities and reduced energies of the different atomic rearrangement pathways following electron-capture decay. Creation and annihilation operators for Auger and X-ray processes have been introduced. Taking into account the symmetries associated with each process, 262 different pathways were obtained. This model allows us to determine the influence of M-electron capture on the counting efficiency when the atomic number of the nuclide is high. (Author)
Kim, Dongmin; Lee, Myong-In; Jeong, Su-Jong; Im, Jungho; Cha, Dong Hyun; Lee, Sanggyun
2017-12-01
This study compares historical simulations of the terrestrial carbon cycle produced by 10 Earth System Models (ESMs) that participated in the fifth phase of the Coupled Model Intercomparison Project (CMIP5). Using MODIS satellite estimates, this study validates the simulation of gross primary production (GPP), net primary production (NPP), and carbon use efficiency (CUE), which depend on plant function types (PFTs). The models show noticeable deficiencies compared to the MODIS data in the simulation of the spatial patterns of GPP and NPP and large differences among the simulations, although the multi-model ensemble (MME) mean provides a realistic global mean value and spatial distributions. The larger model spreads in GPP and NPP compared to those of surface temperature and precipitation suggest that the differences among simulations in terms of the terrestrial carbon cycle are largely due to uncertainties in the parameterization of terrestrial carbon fluxes by vegetation. The models also exhibit large spatial differences in their simulated CUE values and at locations where the dominant PFT changes, primarily due to differences in the parameterizations. While the MME-simulated CUE values show a strong dependence on surface temperatures, the observed CUE values from MODIS show greater complexity, as well as non-linear sensitivity. This leads to the overall underestimation of CUE using most of the PFTs incorporated into current ESMs. The results of this comparison suggest that more careful and extensive validation is needed to improve the terrestrial carbon cycle in terms of ecosystem-level processes.
A branch-heterogeneous model of protein evolution for efficient inference of ancestral sequences.
Groussin, M; Boussau, B; Gouy, M
2013-07-01
Most models of nucleotide or amino acid substitution used in phylogenetic studies assume that the evolutionary process has been homogeneous across lineages and that composition of nucleotides or amino acids has remained the same throughout the tree. These oversimplified assumptions are refuted by the observation that compositional variability characterizes extant biological sequences. Branch-heterogeneous models of protein evolution that account for compositional variability have been developed, but are not yet in common use because of the large number of parameters required, leading to high computational costs and potential overparameterization. Here, we present a new branch-nonhomogeneous and nonstationary model of protein evolution that captures more accurately the high complexity of sequence evolution. This model, henceforth called Correspondence and likelihood analysis (COaLA), makes use of a correspondence analysis to reduce the number of parameters to be optimized through maximum likelihood, focusing on most of the compositional variation observed in the data. The model was thoroughly tested on both simulated and biological data sets to show its high performance in terms of data fitting and CPU time. COaLA efficiently estimates ancestral amino acid frequencies and sequences, making it relevant for studies aiming at reconstructing and resurrecting ancestral amino acid sequences. Finally, we applied COaLA on a concatenate of universal amino acid sequences to confirm previous results obtained with a nonhomogeneous Bayesian model regarding the early pattern of adaptation to optimal growth temperature, supporting the mesophilic nature of the Last Universal Common Ancestor.
Energy Technology Data Exchange (ETDEWEB)
Privette, J.L.
1994-12-31
The angular distribution of radiation scattered by the earth surface contains information on the structural and optical properties of the surface. Potentially, this information may be retrieved through the inversion of surface bidirectional reflectance distribution function (BRDF) models. This report details the limitations and efficient application of BRDF model inversions using data from ground- and satellite-based sensors. A turbid medium BRDF model, based on the discrete ordinates solution to the transport equation, was used to quantify the sensitivity of top-of-canopy reflectance to vegetation and soil parameters. Results were used to define parameter sets for inversions. Using synthetic reflectance values, the invertibility of the model was investigated for different optimization algorithms, surface and sampling conditions. Inversions were also conducted with field data from a ground-based radiometer. First, a soil BRDF model was inverted for different soil and sampling conditions. A condition-invariant solution was determined and used as the lower boundary condition in canopy model inversions. Finally, a scheme was developed to improve the speed and accuracy of inversions.
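Model inversion here means fitting BRDF parameters to observed directional reflectances; the idea can be illustrated with a brute-force least-squares search (the report compares proper optimization algorithms, so this grid search is only a minimal stand-in, and the cosine model in the test is a toy):

```python
def invert_brdf(model, angles, observed, param_grid):
    """Brute-force inversion: return the parameter set minimizing the
    sum of squared differences between modeled and observed
    reflectances over the sampled view/illumination angles."""
    best, best_err = None, float("inf")
    for params in param_grid:
        err = sum((model(a, params) - r) ** 2
                  for a, r in zip(angles, observed))
        if err < best_err:
            best, best_err = params, err
    return best
```

The report's findings, which parameters are retrievable and under which sampling conditions, correspond to how sharp or flat this error surface is along each parameter axis.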
Energy Technology Data Exchange (ETDEWEB)
Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Castro, Joseph Pete Jr. (; .); Giunta, Anthony Andrew
2006-01-01
Many engineering application problems use optimization algorithms in conjunction with numerical simulators to search for solutions. The formulation of relevant objective functions and constraints dictate possible optimization algorithms. Often, a gradient-based approach is not possible since objective functions and constraints can be nonlinear, nonconvex, non-differentiable, or even discontinuous and the simulations involved can be computationally expensive. Moreover, computational efficiency and accuracy are desirable and also influence the choice of solution method. With the advent and increasing availability of massively parallel computers, computational speed has increased tremendously. Unfortunately, the numerical and model complexities of many problems still demand significant computational resources. Moreover, in optimization, these expenses can be a limiting factor since obtaining solutions often requires the completion of numerous computationally intensive simulations. Therefore, we propose a multifidelity optimization algorithm (MFO) designed to improve the computational efficiency of an optimization method for a wide range of applications. In developing the MFO algorithm, we take advantage of the interactions between multifidelity models to develop a dynamic and computational time saving optimization algorithm. First, a direct search method is applied to the high fidelity model over a reduced design space. In conjunction with this search, a specialized oracle is employed to map the design space of this high fidelity model to that of a computationally cheaper low fidelity model using space mapping techniques. Then, in the low fidelity space, an optimum is obtained using gradient-based or non-gradient-based optimization, and it is mapped back to the high fidelity space. In this paper, we describe the theory and implementation details of our MFO algorithm. We also demonstrate our MFO method on some example problems and on two applications: earth penetrators and
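The space-mapping step described above can be sketched in one dimension. This is a minimal illustration of the idea, not the MFO implementation: at the current design, an input shift is found that aligns the cheap model with the expensive one (matched here at two nearby points), the cheap model is minimized, and the optimum is mapped back through the shift. Both model functions are toy stand-ins.

```python
# Minimal 1-D space-mapping sketch (illustrative only, not the MFO code):
# align the low-fidelity model to the high-fidelity one, optimize the cheap
# model, and map the optimum back.

def f_hi(x):               # "expensive" model (toy stand-in)
    return (x - 2.0) ** 2

def f_lo(x):               # "cheap" low-fidelity model, systematically biased
    return (x - 2.5) ** 2

def argmin_on_grid(f, lo, hi, n=4001):
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    return min(xs, key=f)

x = 0.0                                        # starting design
for _ in range(5):
    # Space mapping: shift p aligning f_lo with f_hi near x (two match points
    # make the mapping unambiguous).
    p = argmin_on_grid(lambda q: abs(f_lo(x + q) - f_hi(x))
                       + abs(f_lo(x + 0.1 + q) - f_hi(x + 0.1)), -1.0, 1.0)
    z = argmin_on_grid(f_lo, -1.0, 5.0)        # optimize the cheap model
    x = z - p                                  # map the optimum back
```

With these quadratics the mapping is an exact shift of 0.5, so the loop lands on the high-fidelity optimum at x = 2 while only ever minimizing the cheap model.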
Directory of Open Access Journals (Sweden)
H. Wan
2014-09-01
Full Text Available This paper explores the feasibility of an experimentation strategy for investigating sensitivities in fast components of atmospheric general circulation models. The basic idea is to replace the traditional serial-in-time long-term climate integrations by representative ensembles of shorter simulations. The key advantage of the proposed method lies in its efficiency: since fewer days of simulation are needed, the computational cost is lower, and because individual realizations are independent and can be integrated simultaneously, the new dimension of parallelism can dramatically reduce the turnaround time in benchmark tests, sensitivity studies, and model tuning exercises. The strategy is not appropriate for exploring sensitivity of all model features, but it is very effective in many situations. Two examples are presented using the Community Atmosphere Model, version 5. In the first example, the method is used to characterize sensitivities of the simulated clouds to time-step length. Results show that 3-day ensembles of 20 to 50 members are sufficient to reproduce the main signals revealed by traditional 5-year simulations. A nudging technique is applied to an additional set of simulations to help understand the contribution of physics–dynamics interaction to the detected time-step sensitivity. In the second example, multiple empirical parameters related to cloud microphysics and aerosol life cycle are perturbed simultaneously in order to find out which parameters have the largest impact on the simulated global mean top-of-atmosphere radiation balance. It turns out that 12-member ensembles of 10-day simulations are able to reveal the same sensitivities as seen in 4-year simulations performed in a previous study. In both cases, the ensemble method reduces the total computational time by a factor of about 15, and the turnaround time by a factor of several hundred. The efficiency of the method makes it particularly useful for the development of
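The short-ensemble idea can be demonstrated on a toy stochastic model. The sketch below is an illustration of the strategy, not CAM5: the sensitivity of a noisy model's mean state to a parameter is estimated once from a long integration and once from an ensemble of short runs with perturbed initial states; the AR(1) process and all constants are hypothetical stand-ins.

```python
import random

# Toy illustration of the short-ensemble strategy (hypothetical AR(1) stand-in
# for a climate model): compare the parameter sensitivity estimated from one
# long run with that from an ensemble of short, independently initialized runs.

def simulate(mu, steps, x0, rng, a=0.9, sigma=0.5):
    """AR(1) 'model': relaxes toward mu with autocorrelation a plus noise."""
    x, out = x0, []
    for _ in range(steps):
        x = a * x + (1 - a) * mu + sigma * (1 - a) * rng.gauss(0, 1)
        out.append(x)
    return out

def long_run_mean(mu, rng):
    traj = simulate(mu, 20000, mu, rng)          # one long "climate" run
    return sum(traj) / len(traj)

def ensemble_mean(mu, rng, members=50, steps=30):
    vals = []                                    # many short, parallel runs
    for _ in range(members):
        traj = simulate(mu, steps, mu + rng.gauss(0, 0.2), rng)
        vals.extend(traj)
    return sum(vals) / len(vals)

rng = random.Random(42)
# Parameter sensitivity: change in mean state between mu=1.0 and mu=1.5.
sens_long = long_run_mean(1.5, rng) - long_run_mean(1.0, rng)
sens_ens = ensemble_mean(1.5, rng) - ensemble_mean(1.0, rng)
```

Both estimators recover the same sensitivity (0.5 here), while the ensemble uses far fewer simulated steps and every member could run in parallel, which is the efficiency argument made in the abstract.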
Energy Technology Data Exchange (ETDEWEB)
Papadopoulos, Alessandro Vittorio, E-mail: alessandro.papadopoulos@control.lth.se [Lund University, Department of Automatic Control (Sweden); Leva, Alberto, E-mail: alberto.leva@polimi.it [Politecnico di Milano, Dipartimento di Elettronica, Informazione e Bioingegneria (Italy)
2015-06-15
The presence of different time scales in a dynamic model significantly hampers the efficiency of its simulation. In multibody systems this is particularly relevant, as the mentioned time scales may be very different, due, for example, to the coexistence of mechanical components controlled by electronic drive units, and may also appear in conjunction with significant nonlinearities. This paper proposes a systematic technique, based on the principles of dynamic decoupling, to partition a model based on the time scales that are relevant for the particular simulation studies to be performed and as transparently as possible for the user. In accordance with said purpose, peculiar to the technique is its neat separation into two parts: a structural analysis of the model, which is general with respect to any possible simulation scenario, and a subsequent decoupled integration, which can conversely be (easily) tailored to the study at hand. Also, since the technique does not aim at reducing but rather at partitioning the model, the state space and the physical interpretation of the dynamic variables are inherently preserved. Moreover, the proposed analysis allows us to define some novel indices relative to the separability of the system, thereby extending the idea of “stiffness” in a way that is particularly suited to its use for the improvement of simulation efficiency, be the envisaged integration scheme monolithic, parallel, or even based on cosimulation. Finally, thanks to the way the analysis phase is conceived, the technique is naturally applicable to both linear and nonlinear models. The paper contains a methodological presentation of the proposed technique, which is related to alternatives available in the literature so as to highlight the peculiarities just sketched, and some application examples illustrating the achieved advantages and motivating the major design choice from an operational viewpoint.
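The benefit of decoupled integration can be shown on a minimal fast–slow system. The sketch below is a generic multirate illustration, not the paper's technique in full: the stiff fast variable is advanced with a small inner step while the slow variable is latched, and the slow variable advances with a large macro step; a monolithic small-step integration serves as the reference. The system and step sizes are toy choices.

```python
# Minimal multirate (decoupled) integration sketch: slow variable x, stiff
# fast variable y. Illustrative of dynamic decoupling, not the paper's method.

def monolithic(t_end=1.0, dt=1e-4):
    """Reference: integrate both variables with the small step."""
    x, y = 1.0, 0.0
    for _ in range(int(round(t_end / dt))):
        x, y = x + dt * (-2 * x + y), y + dt * (-100 * (y - x))
    return x, y

def multirate(t_end=1.0, macro=1e-2, micro=1e-4):
    """Partitioned: many micro steps for y per macro step of x."""
    x, y = 1.0, 0.0
    for _ in range(int(round(t_end / macro))):
        for _ in range(int(round(macro / micro))):
            y += micro * (-100 * (y - x))   # fast subsystem, x latched
        x += macro * (-2 * x + y)           # slow subsystem, one large step
    return x, y

x_ref, y_ref = monolithic()
x_mr, y_mr = multirate()
```

The slow variable takes 100 times fewer steps, yet the multirate result stays close to the monolithic reference, which is the efficiency trade-off that separability indices are meant to quantify.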
Schoolmaster, Donald; Stagg, Camille L.
2018-01-01
A trade-off between competitive ability and stress tolerance has been hypothesized and empirically supported to explain the zonation of species across stress gradients for a number of systems. Since stress often reduces plant productivity, one might expect a pattern of decreasing productivity across the zones of the stress gradient. However, this pattern is often not observed in coastal wetlands that show patterns of zonation along a salinity gradient. To address the potentially complex relationship between stress, zonation, and productivity in coastal wetlands, we developed a model of plant biomass as a function of resource competition and salinity stress. Analysis of the model confirms the conventional wisdom that a trade-off between competitive ability and stress tolerance is a necessary condition for zonation. It also suggests that a negative relationship between salinity and production can be overcome if (1) the supply of the limiting resource increases with greater salinity stress or (2) nutrient use efficiency increases with increasing salinity. We fit the equilibrium solution of the dynamic model to data from Louisiana coastal wetlands to test its ability to explain patterns of production across the landscape gradient and derive predictions that could be tested with independent data. We found support for a number of the model predictions, including patterns of decreasing competitive ability and increasing nutrient use efficiency across a gradient from freshwater to saline wetlands. In addition to providing a quantitative framework to support the mechanistic hypotheses of zonation, these results suggest that this simple model is a useful platform to further build upon, simulate and test mechanistic hypotheses of more complex patterns and phenomena in coastal wetlands.
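The second mechanism identified above (rising nutrient use efficiency offsetting salinity stress) can be sketched with toy functional forms. The code below is not the fitted model of the study; every function and constant is a hypothetical illustration of the qualitative argument.

```python
# Toy equilibrium-biomass sketch (hypothetical forms, not the study's fitted
# model): biomass = NUE * resource uptake * salinity-stress survival. It
# illustrates how a negative salinity effect on production can be offset if
# nutrient use efficiency (NUE) increases with salinity.

def biomass(salinity, nue_slope, supply=10.0):
    stress = 1.0 / (1.0 + 0.1 * salinity)       # salinity reduces growth
    nue = 1.0 + nue_slope * salinity            # NUE may rise with salinity
    uptake = supply / (1.0 + supply)            # saturating resource uptake
    return nue * uptake * stress

salinities = [0, 5, 10, 20, 30]
fixed_nue = [biomass(s, 0.0) for s in salinities]     # production declines
rising_nue = [biomass(s, 0.12) for s in salinities]   # decline is offset
```

With constant NUE, production falls monotonically along the salinity gradient; with NUE rising slightly faster than the stress term, the gradient in production disappears or even reverses, matching the flat productivity patterns noted in the abstract.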
Transport Modeling Analysis to Test the Efficiency of Fish Markets in Oman
Directory of Open Access Journals (Sweden)
Khamis S. Al-Abri
2009-01-01
Full Text Available Oman’s fish exports have shown an increasing trend while supplies to the domestic market have declined, despite increased domestic demand caused by population growth and income. This study hypothesized that declining fish supplies to domestic markets were due to inefficiency of the transport function of the fish marketing system in Oman. The hypothesis was tested by comparing the observed prices of several fish species at several markets with optimal prices. The optimal prices were estimated by the dual of a fish transport cost-minimizing linear programming model. Primary data on market prices and transportation costs and quantities transported were gathered through a survey of a sample of fish transporters. The quantity demanded at market sites was estimated using secondary data. The analysis indicated that the differences between the observed prices and the estimated optimal prices were not significantly different, showing that the transport function of fish markets in Oman is efficient. This implies that the increasing trend of fish exports vis-à-vis the decreasing trend of supplies to domestic markets is rational and will continue. This may not be considered to be equitable but it is efficient and may have long-term implications for national food security and have an adverse impact on the nutritional and health status of the rural poor population. Policy makers may have to recognize the trade-off between the efficiency and equity implications of the fish markets in Oman and make policy decisions accordingly in order to ensure national food security.
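The dual-price test described above can be illustrated on a tiny transport instance. The numbers below are invented, not the Omani data: the cost-minimizing flows are found (this 2x2 balanced instance has one degree of freedom, so a scan suffices where a real study would call an LP solver), and the dual prices at the markets are recovered from the basic cells; efficiency is judged by comparing observed prices with these optimal prices.

```python
# Illustrative transport-LP sketch (hypothetical instance, not the study's
# data): minimize shipping cost, then recover the dual (shadow) prices at the
# markets, i.e. the "optimal prices" the study compares with observed prices.

cost = [[2.0, 3.0],    # cost[i][j]: landing site i -> market j
        [4.0, 1.0]]
supply = [30.0, 20.0]
demand = [20.0, 30.0]  # balanced: total supply == total demand

# Feasible flows: x11=t, x12=30-t, x21=20-t, x22=t, with 0 <= t <= 20.
def total_cost(t):
    x = [[t, 30 - t], [20 - t, t]]
    return sum(cost[i][j] * x[i][j] for i in range(2) for j in range(2))

t_star = min((i / 100.0 * 20 for i in range(101)), key=total_cost)
x_star = [[t_star, 30 - t_star], [20 - t_star, t_star]]

# Dual prices from the basic (positive-flow) cells: u_i + v_j = cost[i][j],
# with u_0 fixed to 0. The v_j are the optimal market prices.
u = [0.0, None]
v = [cost[0][0] - u[0], cost[0][1] - u[0]]
u[1] = cost[1][1] - v[1]
```

Here the optimal cost is 90 and the shadow prices at the two markets are 2 and 3; observed market prices close to these values would indicate an efficient transport function, which is the study's finding.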
Particle capture efficiency in a multi-wire model for high gradient magnetic separation
Eisenträger, Almut
2014-07-21
High gradient magnetic separation (HGMS) is an efficient way to remove magnetic and paramagnetic particles, such as heavy metals, from waste water. As the suspension flows through a magnetized filter mesh, high magnetic gradients around the wires attract and capture the particles removing them from the fluid. We model such a system by considering the motion of a paramagnetic tracer particle through a periodic array of magnetized cylinders. We show that there is a critical Mason number (ratio of viscous to magnetic forces) below which the particle is captured irrespective of its initial position in the array. Above this threshold, particle capture is only partially successful and depends on the particle's entry position. We determine the relationship between the critical Mason number and the system geometry using numerical and asymptotic calculations. If a capture efficiency below 100% is sufficient, our results demonstrate how operating the HGMS system above the critical Mason number but with multiple separation cycles may increase efficiency. © 2014 AIP Publishing LLC.
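The capture-threshold behavior can be sketched with a single-wire toy model. The code below is illustrative only, not the paper's multi-wire model: an overdamped tracer in a uniform background flow past one magnetized wire, with a 1/r^3 attraction whose strength scales like the inverse Mason number; all constants and the starting position are toy choices.

```python
from math import hypot

# Toy capture sketch (single wire, hypothetical constants, not the paper's
# model): at low Mason number magnetic attraction dominates and the tracer is
# captured; at high Mason number viscous drag dominates and it is swept past.

def trajectory_captured(mason, start=(-5.0, 0.5), capture_r=0.2):
    x, y = start
    dt = 1e-3
    for _ in range(200000):
        r = hypot(x, y)
        if r < capture_r:
            return True            # pulled onto the wire: captured
        if x > 5.0:
            return False           # swept past the wire: escaped
        # velocity = background flow + magnetic drift toward the wire
        vx = 1.0 - (x / r) / (mason * r ** 3)
        vy = -(y / r) / (mason * r ** 3)
        speed = hypot(vx, vy)
        h = min(dt, 0.05 / speed)  # cap displacement so capture isn't skipped
        x, y = x + h * vx, y + h * vy
    return False

low_mason_captured = trajectory_captured(0.01)   # magnetic forces dominate
high_mason_captured = trajectory_captured(100.0) # viscous drag dominates
```

Scanning `mason` between these extremes for a set of entry positions would trace out the critical Mason number and the partial-capture regime described in the abstract.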
Efficient Use of Video for 3d Modelling of Cultural Heritage Objects
Alsadik, B.; Gerke, M.; Vosselman, G.
2015-03-01
Currently, there is a rapid development in the techniques of the automated image based modelling (IBM), especially in advanced structure-from-motion (SFM) and dense image matching methods, and camera technology. One possibility is to use video imaging to create 3D reality based models of cultural heritage architectures and monuments. Practically, video imaging is much easier to apply when compared to still image shooting in IBM techniques because the latter needs a thorough planning and proficiency. However, one is faced with mainly three problems when video image sequences are used for highly detailed modelling and dimensional survey of cultural heritage objects. These problems are: the low resolution of video images, the need to process a large number of short baseline video images and blur effects due to camera shake on a significant number of images. In this research, the feasibility of using video images for efficient 3D modelling is investigated. A method is developed to find the minimal significant number of video images in terms of object coverage and blur effect. This reduction in video images is convenient to decrease the processing time and to create a reliable textured 3D model compared with models produced by still imaging. Two experiments for modelling a building and a monument are tested using a video image resolution of 1920×1080 pixels. Internal and external validations of the produced models are applied to find out the final predicted accuracy and the model level of details. Related to the object complexity and video imaging resolution, the tests show an achievable average accuracy between 1 - 5 cm when using video imaging, which is suitable for visualization, virtual museums and low detailed documentation.
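One ingredient of the frame-reduction step above, discarding blurred frames, can be sketched with a standard sharpness score. The implementation below is a hypothetical illustration, not the authors' method: frames are scored by the variance of their discrete Laplacian and the sharpest frame is kept; plain 2-D lists stand in for grayscale video frames, and the box filter emulates camera-shake blur.

```python
# Sharpness-based frame selection sketch (hypothetical, not the authors'
# pipeline): variance of the discrete Laplacian is a standard blur measure;
# blurred frames score low and can be dropped from the video sequence.

def laplacian_variance(img):
    """Variance of the 5-point discrete Laplacian; low values indicate blur."""
    vals = []
    for i in range(1, len(img) - 1):
        for j in range(1, len(img[0]) - 1):
            lap = (img[i - 1][j] + img[i + 1][j] + img[i][j - 1]
                   + img[i][j + 1] - 4 * img[i][j])
            vals.append(lap)
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def box_blur(img):
    """3x3 box filter, emulating camera-shake blur on a frame."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i][j] = sum(img[a][b] for a in (i - 1, i, i + 1)
                            for b in (j - 1, j, j + 1)) / 9.0
    return out

def pick_sharpest(frames):
    return max(range(len(frames)), key=lambda k: laplacian_variance(frames[k]))

sharp = [[(i + j) % 2 for j in range(12)] for i in range(12)]  # checkerboard
frames = [box_blur(sharp), sharp, box_blur(box_blur(sharp))]
best = pick_sharpest(frames)   # index of the sharpest frame
```

Applying such a score per window of consecutive frames, together with an overlap criterion, yields the reduced set of sharp, well-distributed frames the abstract describes.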
EFFICIENT USE OF VIDEO FOR 3D MODELLING OF CULTURAL HERITAGE OBJECTS
Directory of Open Access Journals (Sweden)
B. Alsadik
2015-03-01
Full Text Available Currently, there is a rapid development in the techniques of the automated image based modelling (IBM), especially in advanced structure-from-motion (SFM) and dense image matching methods, and camera technology. One possibility is to use video imaging to create 3D reality based models of cultural heritage architectures and monuments. Practically, video imaging is much easier to apply when compared to still image shooting in IBM techniques because the latter needs a thorough planning and proficiency. However, one is faced with mainly three problems when video image sequences are used for highly detailed modelling and dimensional survey of cultural heritage objects. These problems are: the low resolution of video images, the need to process a large number of short baseline video images and blur effects due to camera shake on a significant number of images. In this research, the feasibility of using video images for efficient 3D modelling is investigated. A method is developed to find the minimal significant number of video images in terms of object coverage and blur effect. This reduction in video images is convenient to decrease the processing time and to create a reliable textured 3D model compared with models produced by still imaging. Two experiments for modelling a building and a monument are tested using a video image resolution of 1920×1080 pixels. Internal and external validations of the produced models are applied to find out the final predicted accuracy and the model level of details. Related to the object complexity and video imaging resolution, the tests show an achievable average accuracy between 1 – 5 cm when using video imaging, which is suitable for visualization, virtual museums and low detailed documentation.
Efficient estimation of the robustness region of biological models with oscillatory behavior.
Directory of Open Access Journals (Sweden)
Mochamad Apri
Full Text Available Robustness is an essential feature of biological systems, and any mathematical model that describes such a system should reflect this feature. In particular, persistence of oscillatory behavior is an important issue. A benchmark model for this phenomenon is the Laub-Loomis model, a nonlinear model for cAMP oscillations in Dictyostelium discoideum. This model captures the most important features of biomolecular networks oscillating at constant frequencies. Nevertheless, the robustness of its oscillatory behavior is not yet fully understood. Given a system that exhibits oscillating behavior for some set of parameters, the central question of robustness is how far the parameters may be changed, such that the qualitative behavior does not change. The determination of such a "robustness region" in parameter space is an intricate task. If the number of parameters is high, it may also be time consuming. In the literature, several methods are proposed that partially tackle this problem. For example, some methods only detect particular bifurcations, or only find a relatively small box-shaped estimate for an irregularly shaped robustness region. Here, we present an approach that is much more general, and is especially designed to be efficient for systems with a large number of parameters. As an illustration, we apply the method first to a well understood low-dimensional system, the Rosenzweig-MacArthur model. This is a predator-prey model featuring satiation of the predator. It has only two parameters and its bifurcation diagram is available in the literature. We find a good agreement with the existing knowledge about this model. When we apply the new method to the high dimensional Laub-Loomis model, we obtain a much larger robustness region than reported earlier in the literature. This clearly demonstrates the power of our method. From the results, we conclude that the underlying biological system is much more robust than was realized until now.
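The notion of a robustness region can be made concrete on a system where oscillation has a closed-form criterion. The sketch below is illustrative only, not the authors' algorithm: for the damped oscillator x'' + c x' + k x = 0, oscillations persist while c^2 < 4k, and a flood fill over a parameter grid grows the connected oscillatory region around a nominal parameter point, which is the same question the paper answers for the Laub-Loomis model.

```python
# Robustness-region scan sketch (toy system with an analytic oscillation
# criterion; the paper's method handles high-dimensional ODE models instead).

def oscillates(c, k):
    return c * c < 4.0 * k      # complex eigenvalues => damped oscillation

# Discretize parameter space: c in [0, 4], k in [0, 4].
n = 81
cs = [4.0 * i / (n - 1) for i in range(n)]
ks = [4.0 * j / (n - 1) for j in range(n)]

def robustness_region(i0, j0):
    """Grid flood fill of oscillatory points connected to the nominal one."""
    seen, stack = set(), [(i0, j0)]
    while stack:
        i, j = stack.pop()
        if (i, j) in seen or not (0 <= i < n and 0 <= j < n):
            continue
        if not oscillates(cs[i], ks[j]):
            continue
        seen.add((i, j))
        stack.extend([(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)])
    return seen

nominal = (10, 40)              # c = 0.5, k = 2.0: clearly oscillatory
region = robustness_region(*nominal)
frac = len(region) / (n * n)    # fraction of the scanned box that is robust
```

For a realistic model the per-point test is a simulation or bifurcation check rather than a formula, and the curse of dimensionality is exactly why the paper's more efficient exploration strategy matters.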
Efficient occupancy model-fitting for extensive citizen-science data
Morgan, Byron J. T.; Freeman, Stephen N.; Ridout, Martin S.; Brereton, Tom M.; Fox, Richard; Powney, Gary D.; Roy, David B.
2017-01-01
Appropriate large-scale citizen-science data present important new opportunities for biodiversity modelling, due in part to the wide spatial coverage of information. Recently proposed occupancy modelling approaches naturally incorporate random effects in order to account for annual variation in the composition of sites surveyed. In turn this leads to Bayesian analysis and model fitting, which are typically extremely time consuming. Motivated by presence-only records of occurrence from the UK Butterflies for the New Millennium data base, we present an alternative approach, in which site variation is described in a standard way through logistic regression on relevant environmental covariates. This allows efficient occupancy model-fitting using classical inference, which is easily achieved using standard computers. This is especially important when models need to be fitted each year, typically for many different species, as with British butterflies for example. Using both real and simulated data we demonstrate that the two approaches, with and without random effects, can result in similar conclusions regarding trends. There are many advantages to classical model-fitting, including the ability to compare a range of alternative models, identify appropriate covariates and assess model fit, using standard tools of maximum likelihood. In addition, modelling in terms of covariates provides opportunities for understanding the ecological processes that are in operation. We show that there is even greater potential; the classical approach allows us to construct regional indices simply, which indicate how changes in occupancy typically vary over a species’ range. In addition we are also able to construct dynamic occupancy maps, which provide a novel, modern tool for examining temporal changes in species distribution. These new developments may be applied to a wide range of taxa, and are valuable at a time of climate change. They also have the potential to motivate citizen
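The classical-inference core of the approach, logistic regression of occupancy on an environmental covariate, can be sketched as follows. This is a minimal illustration, not the authors' full occupancy model (which additionally handles detection structure and many species): occupancy data are simulated and the Bernoulli likelihood is maximized by gradient ascent.

```python
import math, random

# Minimal occupancy-by-logistic-regression sketch (simulated data, simple
# gradient ascent; the paper's models are richer but rest on this machinery).

rng = random.Random(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Simulate: occupancy probability rises with covariate x
# (true slope 1.5, intercept -0.5).
xs = [rng.uniform(-2, 2) for _ in range(2000)]
ys = [1 if rng.random() < sigmoid(1.5 * x - 0.5) else 0 for x in xs]

def fit_logistic(xs, ys, lr=0.1, epochs=300):
    """Maximize the average Bernoulli log-likelihood by gradient ascent."""
    b0, b1 = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = sum(y - sigmoid(b0 + b1 * x) for x, y in zip(xs, ys)) / n
        g1 = sum((y - sigmoid(b0 + b1 * x)) * x for x, y in zip(xs, ys)) / n
        b0, b1 = b0 + lr * g0, b1 + lr * g1
    return b0, b1

b0_hat, b1_hat = fit_logistic(xs, ys)
# Occupancy trend: predicted probability at the covariate extremes.
p_low, p_high = sigmoid(b0_hat + b1_hat * -2), sigmoid(b0_hat + b1_hat * 2)
```

Because this is plain maximum likelihood, standard tools (AIC comparisons, residual checks, covariate selection) apply directly, which is the advantage over random-effects Bayesian fitting argued in the abstract.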
A hybrid model for the computationally-efficient simulation of the cerebellar granular layer
Directory of Open Access Journals (Sweden)
Anna Cattani
2016-04-01
Full Text Available The aim of the present paper is to efficiently describe the membrane potential dynamics of neural populations formed by species having a high density difference in specific brain areas. We propose a hybrid model whose main ingredients are a conductance-based model (ODE system and its continuous counterpart (PDE system obtained through a limit process in which the number of neurons confined in a bounded region of the brain tissue is sent to infinity. Specifically, in the discrete model, each cell is described by a set of time-dependent variables, whereas in the continuum model, cells are grouped into populations that are described by a set of continuous variables. Communications between populations, which translate into interactions among the discrete and the continuous models, are the essence of the hybrid model we present here. The cerebellum and cerebellum-like structures show in their granular layer a large difference in the relative density of neuronal species making them a natural testing ground for our hybrid model. By reconstructing the ensemble activity of the cerebellar granular layer network and by comparing our results to a more realistic computational network, we demonstrate that our description of the network activity, even though it is not biophysically detailed, is still capable of reproducing salient features of neural network dynamics. Our modeling approach yields a significant computational cost reduction by increasing the simulation speed at least 270 times. The hybrid model reproduces interesting dynamics such as local microcircuit synchronization, traveling waves, center-surround and time-windowing.
International Nuclear Information System (INIS)
Pantic, Lana S.; Pavlović, Tomislav M.; Milosavljević, Dragana D.; Radonjic, Ivana S.; Radovic, Miodrag K.; Sazhko, Galina
2016-01-01
Five different models for calculating solar module temperature, output power and efficiency for sunny days with different solar radiation intensities and ambient temperatures are assessed in this paper. Thereafter, modeled values are compared to the experimentally obtained values for the horizontal solar module in Nis, Serbia. The criterion for determining the best model was based on the statistical analysis and the agreement between the calculated and the experimental values. The calculated values of solar module temperature are in good agreement with the experimentally obtained ones, with some variations over and under the measured values. The best agreement between calculated and experimentally obtained values was for summer months with high solar radiation intensity. The nonlinear model for calculating the output power is much better than the linear model and at the same time better predicts the total electrical energy generated by the solar module during the day. The nonlinear model for calculating the solar module efficiency predicts an efficiency higher than the STC (Standard Test Conditions) value of solar module efficiency for all conditions, while the linear model predicts the solar module efficiency very well. This paper provides a simple and efficient guideline to estimate relevant parameters of a monocrystalline silicon solar module under the moderate-continental climate conditions. - Highlights: • Linear model for solar module temperature gives accurate predictions for August. • The nonlinear model better predicts the solar module power than the linear model. • For calculating solar module power for Nis we propose the nonlinear model. • For calculating solar module efficiency for Nis we propose adoption of the linear model. • The adopted models can be used for calculations throughout the year.
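The kinds of models being compared can be written down in their standard textbook forms. The snippet below uses generic expressions (a linear NOCT cell-temperature model and linear temperature-derating power and efficiency models), not necessarily the exact variants assessed in the paper, and the coefficient values are typical datasheet numbers assumed for illustration.

```python
# Generic PV module models (standard forms with assumed typical coefficients,
# not necessarily the paper's exact variants).

NOCT = 45.0        # nominal operating cell temperature, deg C (assumed)
P_STC = 250.0      # module power at Standard Test Conditions, W (assumed)
GAMMA = -0.004     # power temperature coefficient, 1/K (assumed)

def module_temp(t_ambient, irradiance):
    """Linear NOCT model: cell temperature rises with irradiance (W/m^2)."""
    return t_ambient + (NOCT - 20.0) / 800.0 * irradiance

def power_linear(t_ambient, irradiance):
    """Output power scaled by irradiance with linear temperature derating."""
    t_cell = module_temp(t_ambient, irradiance)
    return P_STC * (irradiance / 1000.0) * (1.0 + GAMMA * (t_cell - 25.0))

def efficiency_linear(t_ambient, irradiance, eta_stc=0.153):
    """Module efficiency with the same linear temperature derating."""
    t_cell = module_temp(t_ambient, irradiance)
    return eta_stc * (1.0 + GAMMA * (t_cell - 25.0))

# A summer noon case: hot module, efficiency below its STC value.
t_cell = module_temp(30.0, 900.0)
p_out = power_linear(30.0, 900.0)
eta = efficiency_linear(30.0, 900.0)
```

In this example the cell runs about 28 K above ambient and both power and efficiency fall below their STC values, which is the qualitative behavior the measured data are checked against.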
Directory of Open Access Journals (Sweden)
A. S. Laskin
2015-01-01
Full Text Available The article presents the results of a numerical investigation of kinetic energy (KE) loss and blading efficiency of a single-stage axial turbine under different operating conditions, characterized by the ratio u/C0. The calculations are performed by stationary (Stage) and nonstationary (Transient) methods using ANSYS CFX. The novelty of this work is that numerical simulations of both steady and unsteady flows in a turbine stage are conducted, and the results are used to determine the KE losses, both separately for the elements of the flow path and as totals, as well as the stage efficiency. The results obtained are compared with the efficiency calculated according to one-dimensional theory. To solve these problems, a model of an axial turbine stage with D/l = 13 was selected, with rotor and stator blade profiles of constant cross-section, similar to those tested in an inverted turbine when = 0.3. The degree of reactivity is ρ = 0.27, and the rotor speed was varied within the range 1000 ÷ 1800 rev/min. The results obtained allow us to draw the following conclusions: 1. The level of averaged coefficients of total KE losses in the range from 0.48 to 0.75 is from 18% to 21% when calculating by the Stage method and from 21% to 25% by the Transient one. 2. The level of averaged coefficients of KE losses with the output speed in the specified range is from 9% to 13%, and almost the same when calculating by the Stage and Transient methods. 3. The levels of averaged coefficients of KE loss in blade tips (relative to the differential enthalpies per stage) change in the range: from 4% to 3% (Stage) and remain equal to 5% (Transient); from 5% to 6% (Stage) and from 6% to 8% (Transient). 4. The coefficients of KE losses in the blade tips of the GV and RB are higher in calculations of the model stage using the Transient method than the Stage one, respectively, by 1.5 ÷ 2.5% and 4 ÷ 5% of the absolute values. These are values to characterize the KE
Amir, Sahar Z.
2017-06-09
A Hybrid Embedded Fracture (HEF) model was developed to reduce various computational costs while maintaining physical accuracy (Amir and Sun, 2016). HEF splits the computations into fine scale and coarse scale. The fine scale solves analytically for the matrix-fracture flux exchange parameter. The coarse scale solves for the properties of the entire system. In the literature, fractures were assumed to be either vertical or horizontal for simplification (Warren and Root, 1963), and the matrix-fracture flux exchange parameter was given a few equations built on that assumption (Kazemi, 1968; Lemonnier and Bourbiaux, 2010). However, such simplified cases do not apply directly to actual random fracture shapes, directions, orientations, etc. This paper shows that the HEF fine scale analytic solution (Amir and Sun, 2016) generates the flux exchange parameter found in the literature for the vertical and horizontal fracture cases. For other fracture cases, the flux exchange parameter changes according to the angle, slope, direction, etc. This conclusion arises from the analysis of both the Discrete Fracture Network (DFN) and the HEF schemes. The behavior of both schemes is analyzed under identical fracture conditions and the results are shown and discussed. Then, a generalization is illustrated for any slightly compressible single-phase fluid within fractured porous media and its results are discussed.
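The literature flux-exchange parameter referred to here can be sketched numerically using the classical Kazemi-type shape factor (the paper's HEF model derives this parameter analytically instead of assuming it). All grid spacings and fluid values below are hypothetical.

```python
# Matrix-fracture flux-exchange sketch using the classical Kazemi-type shape
# factor from the cited literature (illustrative values; HEF replaces this
# assumed form with an analytic fine-scale solution).

def kazemi_shape_factor(lx, ly, lz):
    """Kazemi-type shape factor for fracture spacings lx, ly, lz (m^-2)."""
    return 4.0 * (1.0 / lx ** 2 + 1.0 / ly ** 2 + 1.0 / lz ** 2)

def matrix_fracture_flux(sigma, perm, visc, p_matrix, p_fracture):
    """Transfer rate per unit bulk volume: q = sigma * (k/mu) * (pm - pf)."""
    return sigma * perm / visc * (p_matrix - p_fracture)

# A cell cut only by vertical fractures (no fracture set in z, so an
# infinite spacing drops that term):
sigma_vert = kazemi_shape_factor(1.0, 1.0, float("inf"))
q = matrix_fracture_flux(sigma_vert, perm=1e-15, visc=1e-3,
                         p_matrix=2.0e7, p_fracture=1.9e7)
```

For inclined fractures no such closed form applies directly, which is exactly the gap the HEF analytic fine-scale solution addresses.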
Amir, Sahar Z.; Chen, Huangxin; Sun, Shuyu
2017-01-01
A Hybrid Embedded Fracture (HEF) model was developed to reduce various computational costs while maintaining physical accuracy (Amir and Sun, 2016). HEF splits the computations into fine scale and coarse scale. The fine scale solves analytically for the matrix-fracture flux exchange parameter. The coarse scale solves for the properties of the entire system. In the literature, fractures were assumed to be either vertical or horizontal for simplification (Warren and Root, 1963), and the matrix-fracture flux exchange parameter was given a few equations built on that assumption (Kazemi, 1968; Lemonnier and Bourbiaux, 2010). However, such simplified cases do not apply directly to actual random fracture shapes, directions, orientations, etc. This paper shows that the HEF fine scale analytic solution (Amir and Sun, 2016) generates the flux exchange parameter found in the literature for the vertical and horizontal fracture cases. For other fracture cases, the flux exchange parameter changes according to the angle, slope, direction, etc. This conclusion arises from the analysis of both the Discrete Fracture Network (DFN) and the HEF schemes. The behavior of both schemes is analyzed under identical fracture conditions and the results are shown and discussed. Then, a generalization is illustrated for any slightly compressible single-phase fluid within fractured porous media and its results are discussed.
Energy efficiency and integrated resource planning - lessons drawn from the Californian model
International Nuclear Information System (INIS)
Baudry, P.
2008-01-01
The principle of integrated resource planning (IRP) is to consider, on the same level, investments which aim to produce energy and those which enable energy requirements to be reduced. According to this principle, the energy efficiency programmes, which help to reduce energy demand and CO2 emissions, are considered as an economically appreciated resource. The costs and gains of this resource are evaluated and compared to those relating to energy production. California has adopted an IRP since 1990 and ranks energy efficiency highest among the available energy resources, since economic evaluations show that the cost of realizing a saving of one kWh is lower than that which corresponds to its production. Yet this energy policy model has not spread widely across the world, for several reasons. Firstly, a reliable economic appraisal of energy savings presupposes resolving the great uncertainties linked to the measurement of energy savings, which stem in particular from the different possible options for the choice of the baseline reference. This lack of interest in IRP in Europe can also be explained by an institutional context of energy market liberalization which does not promote this type of regulation, as well as by the concern of making energy supply security the policies' top priority. Lastly, the remuneration of economic players investing in the energy efficiency programmes is an indispensable condition for its quantitative recognition in national investment planning. In France, the process of multi-annual investment programming is a mechanism which could lead to energy efficiency being included as a resource with economically appreciated investments. (author)
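The economic comparison at the heart of IRP can be sketched as a back-of-the-envelope calculation: the "cost of saved energy" of an efficiency programme is annualized and ranked against the cost of generating one kWh. All figures below are hypothetical illustrations, not Californian data.

```python
# IRP ranking sketch (hypothetical figures): annualize an efficiency
# investment into a cost per kWh saved and compare it with generation cost.

def capital_recovery_factor(rate, years):
    """Annualizes an upfront investment over its lifetime."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def cost_of_saved_energy(capex, annual_kwh_saved, rate=0.05, years=15):
    """Currency units per kWh saved: the quantity IRP ranks against supply."""
    return capex * capital_recovery_factor(rate, years) / annual_kwh_saved

# Hypothetical programme: 1 million spent, 2.5 GWh saved per year for 15 years.
cse = cost_of_saved_energy(1.0e6, 2.5e6)
generation_cost = 0.08   # assumed cost per kWh of the marginal supply option
efficiency_ranks_first = cse < generation_cost
```

The baseline-reference uncertainty discussed above enters through `annual_kwh_saved`: changing the assumed counterfactual consumption directly rescales the cost of saved energy and can flip the ranking.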
Ajayi, Saheed O; Oyedele, Lukumon O
2018-05-01
Despite the understanding that construction waste is caused by activities ranging over all stages of the project delivery process, research efforts have been concentrated on the design and construction stages, while the possibility of reducing waste through the materials procurement process is widely neglected. This study aims at exploring and confirming strategies for achieving waste-efficient materials procurement in construction activities. The study employs a sequential exploratory mixed method approach as its methodological framework, using focus group discussion, statistical analysis and structural equation modelling. The study suggests that for materials procurement to enhance waste minimisation in construction projects, the procurement process would be characterised by four features. These include suppliers' commitment to low waste measures, low waste purchase management, effective materials delivery management and waste-efficient Bill of Quantity, all of which have significant impacts on waste minimisation. This implies that commitment of materials suppliers to such measures as take back scheme and flexibility in supplying small materials quantity, among others, are expected of materials procurement. While low waste purchase management stipulates the need for such measures as reduced packaging and consideration of pre-assembled/pre-cut materials, efficient delivery management entails effective delivery and storage system as well as adequate protection of materials during the delivery process, among others. Waste-efficient specification and bill of quantity, on the other hand, requires accurate materials take-off and ordering of materials based on accurately prepared design documents and bill of quantity. Findings of this study could assist in understanding a set of measures that should be taken during materials procurement process, thereby corroborating waste management practices at other stages of project delivery process. Copyright © 2018. Published by Elsevier Ltd.
Directory of Open Access Journals (Sweden)
Wolfgang Witteveen
2014-01-01
Full Text Available The mechanical response of multilayer sheet structures, such as leaf springs or car bodies, is largely determined by the nonlinear contact and friction forces between the sheets involved. Conventional computational approaches based on classical reduction techniques or the direct finite element approach have an inefficient balance between computational time and accuracy. In the present contribution, the method of trial vector derivatives is applied and extended in order to obtain a priori trial vectors for the model reduction which are suitable for determining the nonlinearities in the joints of the reduced system. Findings show that the result quality in terms of displacements and contact forces is comparable to the direct finite element method, but the computational effort is extremely low due to the model order reduction. Two numerical studies are presented to underline the method’s accuracy and efficiency. In conclusion, this approach is discussed with respect to the existing body of literature.
Shi, Wenwu; Pinto, Brian
2017-12-01
Melting and holding molten metals within crucibles accounts for a large portion of total energy demand in the resource-intensive nonferrous foundry industry. Multivariate mathematical modeling aided by detailed material characterization and advancements in crucible technologies can make a significant impact in the areas of cost-efficiency and carbon footprint reduction. Key thermal properties such as conductivity and specific heat capacity were studied to understand their influence on crucible furnace energy consumption during melting and holding processes. The effects of conductivity on thermal stresses and longevity of crucibles were also evaluated. With this information, accurate theoretical models using finite element analysis were developed to study total energy consumption and melting time. By applying these findings to recent crucible developments, considerable improvements in field performance were reported and documented as case studies in applications such as aluminum melting and holding.
Efficient system modeling for a small animal PET scanner with tapered DOI detectors
International Nuclear Information System (INIS)
Zhang, Mengxi; Zhou, Jian; Yang, Yongfeng; Qi, Jinyi; Rodríguez-Villafuerte, Mercedes
2016-01-01
A prototype small animal positron emission tomography (PET) scanner for mouse brain imaging has been developed at UC Davis. The new scanner uses tapered detector arrays with depth of interaction (DOI) measurement. In this paper, we present an efficient system model for the tapered PET scanner using matrix factorization and a virtual scanner geometry. The factored system matrix mainly consists of two components: a sinogram blurring matrix and a geometrical matrix. The geometrical matrix is based on a virtual scanner geometry. The sinogram blurring matrix is estimated by matrix factorization. We investigate the performance of different virtual scanner geometries. Both simulation studies and real-data experiments are performed in the fully 3D mode to study the image quality under different system models. The results indicate that the proposed matrix factorization can maintain image quality while substantially reducing the image reconstruction time and system matrix storage cost. The proposed method can also be applied to other PET scanners with DOI measurement. (paper)
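The benefit of the factored model described above can be sketched as follows: the full system matrix A is never formed, only its factors are applied in sequence. Matrix sizes, sparsity, and the random factors here are illustrative stand-ins, not the actual scanner's blurring or geometric matrices.

```python
import numpy as np

# Hypothetical dimensions for illustration (not the real scanner's).
n_vox, n_lor, n_geo = 500, 800, 800

rng = np.random.default_rng(0)
# Sparse geometric matrix based on a (virtual) scanner geometry.
G = rng.random((n_geo, n_vox)) * (rng.random((n_geo, n_vox)) < 0.01)
# Sinogram blurring matrix (near-identity stand-in for the estimated blur).
B = np.eye(n_lor, n_geo) + 0.1 * rng.random((n_lor, n_geo))

def forward_project(x):
    """Apply the factored system model A @ x = B @ (G @ x) without forming A."""
    return B @ (G @ x)

x = rng.random(n_vox)
y_factored = forward_project(x)
y_dense = (B @ G) @ x  # reference: explicitly formed system matrix
```

Applying the factors separately keeps storage at the cost of B plus the sparse G rather than the dense product, which is the source of the reported savings.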
Modelling efficient innovative work: integration of economic and social psychological approaches
Directory of Open Access Journals (Sweden)
Babanova Yulia
2017-01-01
Full Text Available The article addresses the relevance of integrating economic and social psychological approaches to enhance the efficiency of innovation management. The content, features and specifics of the modelling methods within each approach are described, and options for integration are considered. The economic approach lies in the generation of an integrated matrix concept for managing the innovative development of an enterprise in line with the stages of innovative work, and in the use of an integrated vector method for evaluating an enterprise's level of innovative development. The social psychological approach lies in the development of a system of psychodiagnostic indexes of activity resources within the scope of a psychological innovation audit of enterprise management, and in the development of modelling methods for the balance of activity trends. Modelling of activity resources is based on a system of equations accounting for the interaction type of the psychodiagnostic indexes. Integration of the two approaches spans a methodological level, a level of empirical studies, and modelling methods. Options are suggested for integrating the economic and psychological approaches to analyse the available material and non-material resources of enterprises’ innovative work and to forecast an optimal development path based on the implemented modelling methods.
Programming strategy for efficient modeling of dynamics in a population of heterogeneous cells.
Hald, Bjørn Olav; Garkier Hendriksen, Morten; Sørensen, Preben Graae
2013-05-15
Heterogeneity is a ubiquitous property of biological systems. Even in a genetically identical population of a single cell type, cell-to-cell differences are observed. Although the functional behavior of a given population is generally robust, the consequences of heterogeneity are fairly unpredictable. In heterogeneous populations, synchronization of events becomes a cardinal problem, particularly for phase coherence in oscillating systems. The present article presents a novel strategy for construction of large-scale simulation programs of heterogeneous biological entities. The strategy is designed to be tractable, to handle heterogeneity and to handle computational cost issues simultaneously, primarily by writing a generator of the 'model to be simulated'. We apply the strategy to model glycolytic oscillations among thousands of yeast cells coupled through the extracellular medium. The usefulness is illustrated through (i) benchmarking, showing an almost linear relationship between model size and run time, and (ii) analysis of the resulting simulations, showing that contrary to the experimental situation, synchronous oscillations are surprisingly hard to achieve, underpinning the need for tools to study heterogeneity. Thus, we present an efficient strategy to model the biological heterogeneity neglected by ordinary mean-field models. This tool is well suited to facilitate the elucidation of the physiologically vital problem of synchronization. The complete Python code is available as Supplementary Information. bjornhald@gmail.com or pgs@kiku.dk Supplementary data are available at Bioinformatics online.
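The generator idea above can be sketched as follows: a function builds the right-hand side of one coupled ODE system for a whole heterogeneous population, drawing per-cell parameters once at construction time. The two-variable toy oscillator and the mean-field coupling are assumed stand-ins for the actual yeast glycolysis kinetics, which the paper does not reproduce here.

```python
import numpy as np

def build_population_model(n_cells, coupling=0.5, seed=1):
    """Generate the RHS of a coupled ODE system for a heterogeneous population.

    Each 'cell' is a toy two-variable oscillator (a stand-in for real
    glycolytic kinetics) with its own randomly drawn rate constant, coupled
    through a shared extracellular field (the population mean of one species).
    """
    rng = np.random.default_rng(seed)
    k = rng.normal(1.0, 0.1, n_cells)  # heterogeneous per-cell rates

    def rhs(state):
        x, y = state[:n_cells], state[n_cells:]
        medium = x.mean()  # extracellular coupling variable
        dx = k * y - coupling * (x - medium)
        dy = -k * x
        return np.concatenate([dx, dy])

    return rhs

rhs = build_population_model(1000)
state = np.ones(2000)
for _ in range(100):  # simple explicit Euler steps, dt = 0.01
    state = state + 0.01 * rhs(state)
```

Because the generator emits one flat vectorized RHS, the cost per step grows roughly linearly with population size, consistent with the benchmarking claim in the abstract.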
Directory of Open Access Journals (Sweden)
Soohyung Joo
2011-12-01
Full Text Available Purpose – This paper aimed to develop a usability evaluation model and an associated survey tool in the context of academic libraries. This study proposed not only a usability evaluation model but also a practical survey tool tailored to academic library websites. Design/methodology – A usability evaluation model was developed for academic library websites based on a literature review and expert consultation. The authors then verified the reliability and validity of the usability evaluation model empirically using survey data from actual users. Statistical analyses, such as descriptive statistics, internal consistency tests, and factor analysis, were applied to ensure both the reliability and validity of the usability evaluation tool. Findings – From the document analysis and expert consultation, this study identified eighteen measurement items to survey the three constructs of usability (effectiveness, efficiency, and learnability) in academic library websites. The evaluation tool was then validated with regard to data distribution, reliability, and validity. The empirical examination based on 147 actual user responses showed that the survey tool suggested herein is acceptable for assessing academic library website usability. Originality/Value – This research is one of the few studies to produce a practical survey tool for evaluating library website usability. The usability model and corresponding survey tool would be useful for librarians and library administrators in academic libraries who plan to conduct a usability evaluation involving large samples.
Zhang, Li; Chen, Jiasheng; Gao, Chunming; Liu, Chuanmiao; Xu, Kuihua
2018-03-16
Hepatocellular carcinoma (HCC) is a leading cause of cancer-related death worldwide. Early diagnosis of HCC greatly improves the chance of long-term disease-free survival; however, HCC is usually difficult to diagnose at an early stage. The aim of this study was to create a prediction model to diagnose HCC based on gene expression programming (GEP). GEP is an evolutionary algorithm and a domain-independent problem-solving technique. Clinical data show that six serum biomarkers, including gamma-glutamyl transferase, C-reactive protein, carcinoembryonic antigen, alpha-fetoprotein, carbohydrate antigen 153, and carbohydrate antigen 199, are related to HCC characteristics. In this study, the prediction of HCC was made based on these six biomarkers (195 HCC patients and 215 non-HCC controls) by setting up optimal joint models with GEP. The GEP model correctly discriminated 353 out of 410 subjects, with accuracies of 86.28% (283/328) and 85.37% (70/82) for the training and test sets, respectively. Compared to the results from the support vector machine, the artificial neural network, and the multilayer perceptron, GEP showed a better outcome. The results suggested that GEP modeling is a promising and excellent tool for the diagnosis of hepatocellular carcinoma, and it could be widely used in HCC auxiliary diagnosis. Graphical abstract The process to establish an efficient model for auxiliary diagnosis of hepatocellular carcinoma.
International Nuclear Information System (INIS)
Wu, Qiong-Li; Cournède, Paul-Henry; Mathieu, Amélie
2012-01-01
Global sensitivity analysis has a key role to play in the design and parameterisation of functional–structural plant growth models, which combine the description of plant structural development (organogenesis and geometry) and functional growth (biomass accumulation and allocation). In this study we are particularly interested in Sobol's method, which decomposes the variance of the output of interest into terms due to individual parameters and to interactions between parameters. Such information is crucial for systems with potentially high levels of non-linearity and interactions between processes, like plant growth. However, the computation of Sobol's indices relies on Monte Carlo sampling and re-sampling, whose cost can be very high, especially when model evaluation is also expensive, as for tree models. In this paper, we therefore propose a new method to compute Sobol's indices, inspired by Homma–Saltelli, which slightly improves their use of model evaluations, and we then derive, for this generic type of computational method, an estimator of the error of the sensitivity indices with respect to the sampling size. This allows detailed control of the balance between accuracy and computing time. Numerical tests on a simple non-linear model are convincing, and the method is finally applied to a functional–structural model of tree growth, GreenLab, whose particularity is the strong level of interaction between plant functioning and organogenesis. - Highlights: ► We study global sensitivity analysis in the context of functional–structural plant modelling. ► A new estimator based on the Homma–Saltelli method is proposed to compute Sobol indices, based on a more balanced re-sampling strategy. ► The estimation accuracy of sensitivity indices for a class of Sobol estimators can be controlled by error analysis. ► The proposed algorithm is implemented efficiently to compute Sobol indices for a complex tree growth model.
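The pick-freeze Monte Carlo scheme underlying such estimators can be sketched as follows. This is a simplified Saltelli-type estimator, not the paper's improved variant; the additive test model and its known indices are chosen for illustration.

```python
import numpy as np

def sobol_first_order(f, d, n, seed=0):
    """Pick-freeze Monte Carlo estimator of first-order Sobol indices.

    Two independent sample matrices A and B are drawn, plus d hybrid
    matrices AB_i in which column i of A is replaced by column i of B.
    S_i is estimated as E[f(B) * (f(AB_i) - f(A))] / Var(f).
    """
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]  # "freeze" all columns except i
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

# Additive test model f(x1, x2) = x1 + 2*x2 on U(0,1)^2:
# Var = 1/12 + 4/12, so the exact indices are S = (0.2, 0.8).
S = sobol_first_order(lambda x: x[:, 0] + 2.0 * x[:, 1], d=2, n=100_000)
```

The cost is n*(d + 2) model evaluations, which is exactly the budget the paper's method and error estimator aim to use more efficiently.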
Lu Hualiang
2006-01-01
We applied a two-stage value chain model to investigate the effects of input application and occasional transaction costs on vegetable marketing chain efficiencies using a farm household-level data set. In the first stage, production efficiencies were evaluated with respect to the combination of resource endowments, capital and managerial inputs, and production techniques; in the second stage, marketing technical efficiencies were determined under the marketing value of the vegetables for th...
Directory of Open Access Journals (Sweden)
Ramachandran Rakesh
2017-01-01
In this paper, the design and implementation of an ultra-high-efficiency isolated bi-directional DC-DC converter utilizing GaN devices are presented, together with loss modelling of the GaN converter. The converter has achieved a maximum measured efficiency of 98.8% in both directions of power flow, using the same power components. A hardware prototype of the converter, along with the measured efficiency curve, is also presented.
Boano, F; Rizzo, A; Samsó, R; García, J; Revelli, R; Ridolfi, L
2018-01-15
The average organic and hydraulic loads that Constructed Wetlands (CWs) receive are key parameters for their adequate long-term functioning. However, over their lifespan they will inevitably be subject to either episodic or sustained overloadings. Although the consequences of sustained overloading are well known (e.g., clogging), the threshold of overloads that these systems can tolerate is difficult to determine. Moreover, the mechanisms that might sustain the buffering capacity (i.e., the reduction of peaks in nutrient load) during overloads are not well understood. The aim of this work is to evaluate the effect of sudden but sustained organic and hydraulic overloads on the general functioning of CWs. To that end, the mathematical model BIO_PORE was used to simulate five different scenarios, based on the features and operating conditions of a pilot CW system: a control simulation representing the average loads; two simulations representing +10% and +30% sustained organic overloads; one simulation representing a sustained +30% hydraulic overload; and one simulation with sustained organic and hydraulic overloads of +15% each. Different model outputs (e.g., total bacterial biomass and its spatial distribution, effluent concentrations) were compared among the simulations to evaluate the effects of such operational changes. Results reveal that overloads cause a temporary decrease in removal efficiency before the microbial biomass adapts to the new conditions and COD removal efficiency recovers. Increasing organic overloads cause stronger temporary decreases in COD removal efficiency than increasing hydraulic loads. The pace at which clogging develops increases by 10% for each 10% increase in the organic load. Copyright © 2017 Elsevier B.V. All rights reserved.
Directory of Open Access Journals (Sweden)
Belošević Srđan V.
2016-01-01
Full Text Available Pulverized coal-fired power plants should provide higher efficiency of energy conversion, flexibility in terms of boiler loads and fuel characteristics, and emission reduction of pollutants like nitrogen oxides. Modification of the combustion process is a cost-effective technology for NOx control. For optimization of complex processes, such as turbulent reactive flow in coal-fired furnaces, mathematical modeling is regularly used. The NOx emission reduction by combustion modifications in the 350 MWe Kostolac B boiler furnace, tangentially fired by pulverized Serbian lignite, is investigated in the paper. Numerical experiments were done by an in-house developed three-dimensional differential comprehensive combustion code, with a fuel- and thermal-NO formation/destruction reactions model. The code was developed to be easily used by engineering staff for process analysis in boiler units. A broad range of operating conditions was examined, such as fuel and preheated air distribution over the burners and tiers, operation mode of the burners, grinding fineness and quality of coal, boiler loads, cold air ingress, recirculation of flue gases, water-wall ash deposition, and the combined effect of different parameters. The predictions show that a NOx emission reduction of up to 30% can be achieved by proper combustion organization in the case-study furnace, with flame position control. The impact of combustion modifications on boiler operation was evaluated by boiler thermal calculations, suggesting that the facility should be controlled within narrow limits of the operating parameters. Such a comprehensive approach to pollutant control enables the evaluation of alternative solutions to achieve efficient and low-emission operation of utility boiler units. [Projekat Ministarstva nauke Republike Srbije, br. TR-33018: Increase in energy and ecology efficiency of processes in pulverized coal-fired furnace and optimization of utility steam boiler air preheater by using in
Directory of Open Access Journals (Sweden)
Jinping Sun
2017-01-01
Full Text Available The multiple hypothesis tracker (MHT) is currently the preferred method for addressing the data association problem in multitarget tracking (MTT) applications. MHT seeks the most likely global hypothesis by enumerating all possible associations over time, which is equal to calculating the maximum a posteriori (MAP) estimate over the report data. Despite being a well-studied method, MHT remains challenging mostly because of the computational complexity of data association. In this paper, we describe an efficient method for solving the data association problem using graphical model approaches. The proposed method uses the graph representation to model the global hypothesis formation and subsequently applies an efficient message passing algorithm to obtain the MAP solution. Specifically, the graph representation of the data association problem is formulated as a maximum weight independent set problem (MWISP), which translates the best global hypothesis formation into finding the maximum weight independent set on the graph. Then, a max-product belief propagation (MPBP) inference algorithm is applied to seek the most likely global hypotheses, with the purpose of avoiding a brute-force hypothesis enumeration procedure. The simulation results show that the proposed MPBP-MHT method can achieve better tracking performance than other algorithms in challenging tracking situations.
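The MWISP formulation above can be sketched on a toy instance: nodes are candidate track hypotheses, weights their likelihood scores, and edges connect incompatible hypotheses (those sharing a measurement). This sketch uses exact enumeration, which is viable only at toy scale; it is precisely the brute force that the paper's max-product belief propagation replaces.

```python
from itertools import combinations

def max_weight_independent_set(weights, edges):
    """Exact maximum-weight independent set by enumeration (toy scale only).

    weights: list of node weights (hypothesis scores).
    edges: pairs of conflicting nodes (hypotheses sharing a measurement).
    Returns the selected node tuple and its total weight.
    """
    n = len(weights)
    edge_set = {frozenset(e) for e in edges}
    best, best_w = (), 0.0
    for r in range(1, n + 1):
        for subset in combinations(range(n), r):
            # Skip subsets containing two conflicting hypotheses.
            if any(frozenset(p) in edge_set for p in combinations(subset, 2)):
                continue
            w = sum(weights[i] for i in subset)
            if w > best_w:
                best, best_w = subset, w
    return best, best_w

# Four hypotheses; 0-1 and 1-2 conflict (they share a measurement).
sel, w = max_weight_independent_set([3.0, 5.0, 4.0, 1.0], [(0, 1), (1, 2)])
```

Here the best global hypothesis keeps nodes 0, 2 and 3 (total weight 8.0), rejecting the individually strongest node 1 because it conflicts with two others, which is the essence of global hypothesis selection.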
Large-scale building energy efficiency retrofit: Concept, model and control
International Nuclear Information System (INIS)
Wu, Zhou; Wang, Bo; Xia, Xiaohua
2016-01-01
BEER (Building energy efficiency retrofit) projects are initiated in many nations and regions over the world. Existing studies of BEER focus on modeling and planning based on one building and one year period of retrofitting, which cannot be applied to certain large BEER projects with multiple buildings and multi-year retrofit. In this paper, the large-scale BEER problem is defined in a general TBT (time-building-technology) framework, which fits essential requirements of real-world projects. The large-scale BEER is newly studied in the control approach rather than the optimization approach commonly used before. Optimal control is proposed to design optimal retrofitting strategy in terms of maximal energy savings and maximal NPV (net present value). The designed strategy is dynamically changing on dimensions of time, building and technology. The TBT framework and the optimal control approach are verified in a large BEER project, and results indicate that promising performance of energy and cost savings can be achieved in the general TBT framework. - Highlights: • Energy efficiency retrofit of many buildings is studied. • A TBT (time-building-technology) framework is proposed. • The control system of the large-scale BEER is modeled. • The optimal retrofitting strategy is obtained.
Validation of radiological efficiency model applied for the crops/soils contaminated by radiocaesium
International Nuclear Information System (INIS)
Montero, M.; Vazquez, C.; Moraleda, M.; Claver, F.
2000-01-01
The differences in radiological efficiency observed when the same agrochemical interventions are applied to a range of agricultural scenarios contaminated by long-lived radionuclides have led radioecological studies to quantify the influence of local characteristics on soil-to-plant transfer. In the framework of decision support systems for post-accident environmental restoration, a semi-mechanistic approach has been developed to estimate the soil-to-plant transfer factor from the major properties underlying the bioavailability of radiocaesium in soils and the absorption capacity of the crop. The model describes, for each soil texture class, the effects of time and K status on the transfer of radiocaesium to plants. The approach allows estimation of the actual and the minimum available transfer, and calculation of the optimum amendment warranting the maximum radiological efficiency for a specific soil-crop combination. The parameterisation and validation of the model from a database providing information about experimental transfer studies for a collection of soil-crop combinations are shown. (Author) 4 refs
A numerically efficient damping model for acoustic resonances in microfluidic cavities
Energy Technology Data Exchange (ETDEWEB)
Hahn, P., E-mail: hahnp@ethz.ch; Dual, J. [Institute of Mechanical Systems (IMES), Department of Mechanical and Process Engineering, ETH Zurich, Tannenstrasse 3, CH-8092 Zurich (Switzerland)
2015-06-15
Bulk acoustic wave devices are typically operated in a resonant state to achieve enhanced acoustic amplitudes and high acoustofluidic forces for the manipulation of microparticles. Among other loss mechanisms related to the structural parts of acoustofluidic devices, damping in the fluidic cavity is a crucial factor that limits the attainable acoustic amplitudes. In the analytical part of this study, we quantify all relevant loss mechanisms related to the fluid inside acoustofluidic micro-devices. Subsequently, a numerical analysis of the time-harmonic visco-acoustic and thermo-visco-acoustic equations is carried out to verify the analytical results for 2D and 3D examples. The damping results are fitted into the framework of classical linear acoustics to set up a numerically efficient device model. For this purpose, all damping effects are combined into an acoustofluidic loss factor. Since some components of the acoustofluidic loss factor depend on the acoustic mode shape in the fluid cavity, we propose a two-step simulation procedure. In the first step, the loss factors are deduced from the simulated mode shape. Subsequently, a second simulation is invoked, taking all losses into account. Owing to its computational efficiency, the presented numerical device model is of great relevance for the simulation of acoustofluidic particle manipulation by means of acoustic radiation forces or acoustic streaming. For the first time, accurate 3D simulations of realistic micro-devices for the quantitative prediction of pressure amplitudes and the related acoustofluidic forces become feasible.
Modeling the efficiency of a magnetic needle for collecting magnetic cells
International Nuclear Information System (INIS)
Butler, Kimberly S; Lovato, Debbie M; Larson, Richard S; Adolphi, Natalie L; Bryant, H C; Flynn, Edward R
2014-01-01
As new magnetic nanoparticle-based technologies are developed and new target cells are identified, there is a critical need to understand the features important for magnetic isolation of specific cells in fluids, an increasingly important tool in disease research and diagnosis. To investigate magnetic cell collection, cell-sized spherical microparticles, coated with superparamagnetic nanoparticles, were suspended in (1) glycerine–water solutions, chosen to approximate the range of viscosities of bone marrow, and (2) water in which 3, 5, 10 and 100% of the total suspended microspheres are coated with magnetic nanoparticles, to model collection of rare magnetic nanoparticle-coated cells from a mixture of cells in a fluid. The magnetic microspheres were collected on a magnetic needle, and we demonstrate that the collection efficiency versus time can be modeled using a simple, heuristically-derived function, with three physically-significant parameters. The function enables experimentally-obtained collection efficiencies to be scaled to extract the effective drag of the suspending medium. The results of this analysis demonstrate that the effective drag scales linearly with fluid viscosity, as expected. Surprisingly, increasing the number of non-magnetic microspheres in the suspending fluid increases the collection of magnetic microspheres, corresponding to a decrease in the effective drag of the medium. (paper)
Modeling the efficiency of a magnetic needle for collecting magnetic cells
Butler, Kimberly S.; Adolphi, Natalie L.; Bryant, H. C.; Lovato, Debbie M.; Larson, Richard S.; Flynn, Edward R.
2014-07-01
As new magnetic nanoparticle-based technologies are developed and new target cells are identified, there is a critical need to understand the features important for magnetic isolation of specific cells in fluids, an increasingly important tool in disease research and diagnosis. To investigate magnetic cell collection, cell-sized spherical microparticles, coated with superparamagnetic nanoparticles, were suspended in (1) glycerine-water solutions, chosen to approximate the range of viscosities of bone marrow, and (2) water in which 3, 5, 10 and 100% of the total suspended microspheres are coated with magnetic nanoparticles, to model collection of rare magnetic nanoparticle-coated cells from a mixture of cells in a fluid. The magnetic microspheres were collected on a magnetic needle, and we demonstrate that the collection efficiency versus time can be modeled using a simple, heuristically-derived function, with three physically-significant parameters. The function enables experimentally-obtained collection efficiencies to be scaled to extract the effective drag of the suspending medium. The results of this analysis demonstrate that the effective drag scales linearly with fluid viscosity, as expected. Surprisingly, increasing the number of non-magnetic microspheres in the suspending fluid increases the collection of magnetic microspheres, corresponding to a decrease in the effective drag of the medium.
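Fitting a three-parameter efficiency-versus-time curve, as described above, can be sketched as follows. The stretched-exponential form used here (plateau e_inf, time constant tau, shape p) is an assumed stand-in, not the paper's actual heuristic function, and the synthetic "measurements" are generated, not experimental.

```python
import numpy as np

def collection_efficiency(t, e_inf, tau, p):
    """Hypothetical three-parameter saturating model of collection efficiency."""
    return e_inf * (1.0 - np.exp(-(t / tau) ** p))

# Synthetic "experiment": known parameters plus small measurement noise.
rng = np.random.default_rng(2)
t = np.linspace(0.5, 60.0, 60)
y = collection_efficiency(t, 0.9, 12.0, 1.2) + rng.normal(0, 0.003, t.size)

# Fit: estimate the plateau from the tail, then recover tau and p from a
# log-log line, since ln(-ln(1 - y/e_inf)) = p*ln(t) - p*ln(tau).
e_inf = y[t > 45].mean()                    # late-time plateau estimate
mask = y / e_inf < 0.9                      # early points, where the transform is stable
z = np.log(-np.log(1.0 - y[mask] / e_inf))
p_fit, b = np.polyfit(np.log(t[mask]), z, 1)
tau_fit = np.exp(-b / p_fit)
```

In the paper's analysis, a parameter playing the role of tau is the quantity that scales with the effective drag of the suspending medium, which is how viscosity is extracted from measured curves.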
Energy efficient model based algorithm for control of building HVAC systems.
Kirubakaran, V; Sahu, Chinmay; Radhakrishnan, T K; Sivakumaran, N
2015-11-01
Energy efficient designs are receiving increasing attention in various fields of engineering. Heating, ventilation and air conditioning (HVAC) control system designs involve improved energy usage with an acceptable relaxation in thermal comfort. In this paper, real-time data from a building HVAC system provided by BuildingLAB is considered. A resistor-capacitor (RC) framework representing the thermal dynamics of the building is estimated using a particle swarm optimization (PSO) algorithm. With objective costs of thermal comfort (deviation of room temperature from the required temperature) and an energy measure (Ecm), an explicit MPC design for this building model is executed based on a state-space representation of the supply water temperature (input)/room temperature (output) dynamics. The controllers are subjected to servo tracking, and an external disturbance (ambient temperature) is provided from the real-time data during closed-loop control. The control strategies are ported to a PIC32mx series microcontroller platform. The building model is implemented in MATLAB, and hardware-in-the-loop (HIL) testing of the strategies is executed over a USB port. Results indicate that, compared to traditional proportional-integral (PI) controllers, the explicit MPCs improve both energy efficiency and thermal comfort significantly. Copyright © 2015 Elsevier Inc. All rights reserved.
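The RC thermal framework mentioned above can be sketched with a single-node model: one thermal resistance to ambient and one lumped capacitance. The parameter values and the simple proportional heater below are illustrative assumptions, not the PSO-estimated BuildingLAB model or its explicit MPC.

```python
import numpy as np

def simulate_room(hours, setpoint, t_ambient, R=2.0, C=5.0, kp=4.0, dt=0.1):
    """Single-node resistor-capacitor (RC) thermal model of one room.

    Dynamics: dT/dt = (T_amb - T)/(R*C) + Q/C, integrated by explicit Euler,
    with a simple proportional heater Q = kp*(setpoint - T), clamped at 0
    (heating only). R, C, kp, dt are illustrative values.
    """
    n = int(hours / dt)
    T = np.empty(n)
    T[0] = t_ambient  # room starts at ambient temperature
    for k in range(n - 1):
        Q = max(kp * (setpoint - T[k]), 0.0)  # heater power, no cooling
        T[k + 1] = T[k] + dt * ((t_ambient - T[k]) / (R * C) + Q / C)
    return T

T = simulate_room(hours=24, setpoint=21.0, t_ambient=5.0)
```

With proportional control alone the room settles below the setpoint (steady-state offset), which is one reason model-based designs such as explicit MPC outperform simple PI/P loops on comfort.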
Lu, Huijie; Peng, Zhangli
2017-11-01
Our goal is to develop a high-efficiency multiscale modeling method to predict the stress and deformation of cells, including red blood cells (RBCs) and circulating tumor cells (CTCs), during their interactions with microenvironments in microcirculation and microfluidic devices. More than 1 billion people worldwide suffer from RBC diseases, e.g. anemia, sickle cell disease, and malaria. The mechanical properties of RBCs are changed in these diseases due to alterations in molecular structure, which is not only important for understanding the disease pathology but also provides an opportunity for diagnostics. On the other hand, the mechanical properties of cancer cells are also altered compared to healthy cells. This can lead to an acquired ability to cross narrow capillary networks and endothelial gaps, which is crucial for metastasis, the leading cause of cancer mortality. Therefore, it is important to predict the deformation and stress of RBCs and CTCs in microcirculation. We are developing a high-efficiency multiscale model of cell-fluid interaction to study these two topics.
Akhtar, Mahmuda; Hannan, M A; Begum, R A; Basri, Hassan; Scavino, Edgar
2017-03-01
Waste collection is an important part of waste management that involves different issues, including environmental, economic, and social, among others. Waste collection optimization can reduce the waste collection budget and environmental emissions by reducing the collection route distance. This paper presents a modified Backtracking Search Algorithm (BSA) in capacitated vehicle routing problem (CVRP) models with the smart bin concept to find the best optimized waste collection route solutions. The objective function minimizes the sum of the waste collection route distances. The study introduces the concept of the threshold waste level (TWL) of waste bins to reduce the number of bins to be emptied by finding an optimal range, thus minimizing the distance. A scheduling model is also introduced to compare the feasibility of the proposed model with that of the conventional collection system in terms of travel distance, collected waste, fuel consumption, fuel cost, efficiency and CO2 emissions. The optimal TWL was found to be between 70% and 75% of the fill level of waste collection nodes and had the maximum tightness value for different problem cases. The results obtained for four days show a 36.80% distance reduction for 91.40% of the total waste collection, which eventually increases the average waste collection efficiency by 36.78% and reduces the fuel consumption, fuel cost and CO2 emissions by 50%, 47.77% and 44.68%, respectively. Thus, the proposed optimization model can be considered a viable tool for optimizing waste collection routes to reduce economic costs and environmental impacts. Copyright © 2017 Elsevier Ltd. All rights reserved.
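The TWL idea above can be sketched as follows: smart bins report fill levels, only bins at or above the threshold are visited, and the route shrinks accordingly. The greedy nearest-neighbour router and the coordinates/fill levels are illustrative stand-ins for the paper's modified BSA and CVRP formulation.

```python
import math

def route_distance(depot, stops):
    """Total distance of a greedy nearest-neighbour collection route,
    starting and ending at the depot."""
    pos, remaining, dist = depot, list(stops), 0.0
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(pos, p))
        dist += math.dist(pos, nxt)
        remaining.remove(nxt)
        pos = nxt
    return dist + math.dist(pos, depot)  # return leg to the depot

# Smart bins: (coordinates, reported fill level). Values are illustrative.
bins = [((2, 1), 0.90), ((5, 4), 0.30), ((1, 6), 0.80),
        ((7, 2), 0.72), ((4, 7), 0.10)]
twl = 0.7  # threshold waste level: only bins at/above this are emptied

full = route_distance((0, 0), [p for p, _ in bins])
trimmed = route_distance((0, 0), [p for p, f in bins if f >= twl])
```

Skipping below-threshold bins trades a small amount of uncollected waste for a shorter route, which is the distance/collection balance the paper tunes by searching for the optimal TWL range.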
Lea, Amanda J.
2015-01-01
Identifying sources of variation in DNA methylation levels is important for understanding gene regulation. Recently, bisulfite sequencing has become a popular tool for investigating DNA methylation levels. However, modeling bisulfite sequencing data is complicated by dramatic variation in coverage across sites and individual samples, and by the computational challenges of controlling for genetic covariance in count data. To address these challenges, we present a binomial mixed model and an efficient, sampling-based algorithm (MACAU: Mixed model association for count data via data augmentation) for approximate parameter estimation and p-value computation. This framework allows us to simultaneously account for both the over-dispersed, count-based nature of bisulfite sequencing data and genetic relatedness among individuals. Using simulations and two real data sets (whole genome bisulfite sequencing (WGBS) data from Arabidopsis thaliana and reduced representation bisulfite sequencing (RRBS) data from baboons), we show that our method provides well-calibrated test statistics in the presence of population structure. Further, it improves power to detect differentially methylated sites: in the RRBS data set, MACAU detected 1.6-fold more age-associated CpG sites than a beta-binomial model (the next best approach). Changes in these sites are consistent with known age-related shifts in DNA methylation levels, and are enriched near genes that are differentially expressed with age in the same population. Taken together, our results indicate that MACAU is an efficient, effective tool for analyzing bisulfite sequencing data, with particular salience to analyses of structured populations. MACAU is freely available at www.xzlab.org/software.html. PMID:26599596
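The over-dispersion that motivates the binomial mixed model above can be illustrated with simulated counts. The beta-distributed per-individual methylation rates below are an assumed generating process mimicking between-individual variation; genetic relatedness itself, which MACAU also models, is omitted from this toy sketch.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated methylated-read counts at one CpG site across 200 individuals.
# Coverage varies per sample, and each individual's true methylation rate
# is drawn from a beta distribution, producing extra-binomial variation.
coverage = rng.integers(5, 50, size=200)
rates = rng.beta(8.0, 2.0, size=200)       # individual methylation levels
meth = rng.binomial(coverage, rates)       # observed methylated reads

# Compare observed residual variance to what a plain binomial predicts.
p_hat = meth.sum() / coverage.sum()
binom_var = (coverage * p_hat * (1 - p_hat)).mean()  # binomial expectation
obs_var = np.var(meth - coverage * p_hat)            # observed residual variance
```

The observed variance greatly exceeds the binomial expectation, which is why a plain binomial test is miscalibrated on such data and why beta-binomial or binomial mixed models are preferred.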
An efficient mathematical model for air-breathing PEM fuel cells
International Nuclear Information System (INIS)
Ismail, M.S.; Ingham, D.B.; Hughes, K.J.; Ma, L.; Pourkashanian, M.
2014-01-01
Graphical abstract: The effects of the ambient humidity on the performance of air-breathing PEM fuel cells become more pronounced as the ambient temperature increases. The polarisation curves have been generated using the in-house developed MATLAB® application, Polarisation Curve Generator, which is available in the supplementary data. - Highlights: • An efficient mathematical model has been developed for an air-breathing PEM fuel cell. • The fuel cell performance is significantly over-predicted if the Joule and entropic heats are neglected. • The fuel cell performance is highly sensitive to the state of water at the thermodynamic equilibrium. • The cell potential dictates the favourable ambient conditions for the fuel cell. - Abstract: A simple and efficient mathematical model for air-breathing proton exchange membrane (PEM) fuel cells has been built. One of the major objectives of this study is to investigate the effects of the Joule and entropic heat sources, which are often neglected, on the performance of air-breathing PEM fuel cells. It is found that the fuel cell performance is significantly over-predicted if one or both of these heat sources is not incorporated into the model. Also, it is found that the performance of the fuel cell is highly sensitive to the assumed state of water at thermodynamic equilibrium, as both the entropic heat and the Nernst potential increase considerably if water is assumed to be produced in liquid form rather than in vapour form. Further, the heat of condensation is shown to be small and therefore, under single-phase modelling, has a negligible effect on the performance of the fuel cell. Finally, the favourable ambient conditions depend on the operating cell potential. At intermediate cell potentials, a mild ambient temperature and low humidity are favoured to maintain high membrane conductivity and mitigate water flooding. At low cell potentials, low ambient temperature and high humidity are favoured to
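A polarisation curve of the kind mentioned above is built by subtracting loss terms from the open-circuit potential. The sketch below uses the standard textbook decomposition (Tafel activation, ohmic and concentration losses) with made-up constants; it is not the paper's model or its MATLAB® tool.

```python
import math

# Generic polarisation-curve sketch for a PEM fuel cell. All parameter
# values (E_oc, i0, Tafel slope b, area resistance R, limiting current
# i_lim) are illustrative assumptions, not the paper's fitted values.

def cell_voltage(i, E_oc=1.0, i0=1e-4, b=0.06, R=0.2, i_lim=1.2):
    """i: current density [A/cm^2] with 0 < i < i_lim."""
    act = b * math.log10(i / i0)               # activation (Tafel) loss
    ohm = R * i                                # ohmic loss
    conc = -0.05 * math.log(1.0 - i / i_lim)   # concentration loss
    return E_oc - act - ohm - conc

for i in (0.1, 0.5, 1.0):
    print(f"{i:.1f} A/cm^2 -> {cell_voltage(i):.3f} V")
```

Dropping a heat-source term in a coupled thermal model shifts the temperature field and hence these loss parameters, which is how neglecting Joule and entropic heat over-predicts performance.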
International Nuclear Information System (INIS)
Han, Yongming; Geng, Zhiqiang; Zhu, Qunxiong; Qu, Yixin
2015-01-01
DEA (data envelopment analysis) has been widely used for the efficiency analysis of industrial production processes. However, the conventional DEA model cannot readily rank the pros and cons of multiple DMUs (decision-making units). The DEACM (DEA cross-model) can distinguish the pros and cons of the effective DMUs, but it is unable to take the effect of data uncertainty into account. This paper proposes an efficiency analysis method based on FDEACM (fuzzy DEA cross-model) with fuzzy data. The proposed method has better objectivity and resolving power for decision-making. First we obtain the minimum, median and maximum values of the multi-criteria ethylene energy consumption data by data fuzzification. On the basis of the multi-criteria fuzzy data, the benchmark of effective production situations and the improvement directions for the ineffective ethylene plants under different production data configurations are obtained by the FDEACM. The experimental results show that the proposed method can improve ethylene production conditions and guide the efficiency of energy utilization during the ethylene production process. - Highlights: • This paper proposes an efficiency analysis method based on FDEACM (fuzzy DEA cross-model) with data fuzzification. • The proposed method is more efficient and accurate than other methods. • We obtain an energy efficiency analysis framework and process based on FDEACM in the ethylene production industry. • The proposed method is valid and efficient in improving energy efficiency in the ethylene plants
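The data-fuzzification step described above reduces each criterion's crisp measurements to a (minimum, median, maximum) triple, i.e. a triangular fuzzy number. A minimal sketch, with invented readings:

```python
import statistics

# Sketch of data fuzzification: replace crisp measurements of one criterion
# by the triangular fuzzy number (min, median, max). Values are hypothetical.

def fuzzify(values):
    return (min(values), statistics.median(values), max(values))

energy_use = [512.0, 498.0, 530.0, 505.0, 521.0]  # made-up consumption data
print(fuzzify(energy_use))  # -> (498.0, 512.0, 530.0)
```

The FDEACM then evaluates each DMU at these three data configurations rather than at a single crisp value.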
International Nuclear Information System (INIS)
Klein, K.M.; Park, C.; Yang, S.; Morris, S.; Do, V.; Tasch, F.
1992-01-01
We have developed a new computationally efficient two-dimensional model for boron implantation into single-crystal silicon. The new model is based on the dual Pearson semi-empirical implant depth profile model and the UT-MARLOWE Monte Carlo boron ion implantation model. It can predict, with very high computational efficiency, two-dimensional as-implanted boron profiles as a function of energy, dose, tilt angle, rotation angle, masking edge orientation, and masking edge thickness
Dodov, B.
2017-12-01
Stochastic simulation of realistic and statistically robust patterns of Tropical Cyclone (TC) induced precipitation is a challenging task. It is even more challenging in a catastrophe modeling context, where tens of thousands of typhoon seasons need to be simulated in order to provide a complete view of flood risk. Ultimately, one could run a coupled global climate model and regional Numerical Weather Prediction (NWP) model, but this approach is not feasible in the catastrophe modeling context and, most importantly, may not provide TC track patterns consistent with observations. Rather, we propose to leverage NWP output for the observed TC precipitation patterns (in terms of downscaled reanalysis 1979-2015) collected on a Lagrangian frame along the historical TC tracks and reduced to the leading spatial principal components of the data. The reduced data from all TCs are then grouped according to timing, storm evolution stage (developing, mature, dissipating, ETC transitioning) and central pressure, and used to build a dictionary of stationary (within a group) and non-stationary (for transitions between groups) covariance models. Provided that the stochastic storm tracks with all the parameters describing the TC evolution are already simulated, a sequence of conditional samples from the covariance models, chosen according to the TC characteristics at a given moment in time, is concatenated, producing a continuous non-stationary precipitation pattern in a Lagrangian framework. The simulated precipitation for each event is finally distributed along the stochastic TC track and blended with a non-TC background precipitation using a data assimilation technique. The proposed framework provides a means of efficient simulation (10000 seasons simulated in a couple of days) and robust typhoon precipitation patterns consistent with the observed regional climate and visually indistinguishable from high resolution NWP output. The framework is used to simulate a catalog of 10000 typhoon
AN INTEGRATED MODELING FRAMEWORK FOR ENVIRONMENTALLY EFFICIENT CAR OWNERSHIP AND TRIP BALANCE
Directory of Open Access Journals (Sweden)
Tao FENG
2008-01-01
Full Text Available Urban transport emissions generated by automobile trips are greatly responsible for atmospheric pollution in both developed and developing countries. To meet the long-term target of sustainable development, it is important to specify the feasible level of car ownership and travel demand from environmental considerations. This research proposes an integrated modeling framework for the optimal construction of a comprehensive transportation system under environmental constraints. The modeling system combines multiple essential models and is formulated as a bi-level programming problem. In the upper level, the maximization of both total car ownership and the total number of trips by private and public travel modes is set as the objective function, subject to the constraint that the total emission levels in each zone do not exceed the corresponding environmental capacities. Maximizing the total trips by private and public travel modes allows policy makers to take into account trip balance, meeting both the mobility levels required by travelers and the goals of an environmentally friendly transportation system. The lower-level problem is a combined trip distribution and assignment model incorporating travelers' route choice behavior. A logit-type aggregate modal split model is established to connect the two levels. As the solution method for the integrated model, a genetic algorithm is applied. A case study is conducted using road network data and person-trip (PT) data collected in Dalian city, China. The analysis results showed that the amount of environmentally efficient car ownership and the number of trips by different travel modes could be obtained simultaneously when considering the zonal control of environmental capacity within the framework of the proposed integrated model. The observed car ownership in zones could be increased or decreased towards the macroscopic optimization
Examining the cost efficiency of Chinese hydroelectric companies using a finite mixture model
International Nuclear Information System (INIS)
Barros, Carlos Pestana; Chen, Zhongfei; Managi, Shunsuke; Antunes, Olinda Sequeira
2013-01-01
This paper evaluates the operational activities of Chinese hydroelectric power companies over the period 2000–2010 using a finite mixture model that controls for unobserved heterogeneity. In so doing, a stochastic frontier latent class model, which allows for the existence of different technologies, is adopted to estimate cost frontiers. This procedure not only enables us to identify different groups among the hydro-power companies analysed, but also permits the analysis of their cost efficiency. The main result is that three groups are identified in the sample, each equipped with different technologies, suggesting that distinct business strategies need to be adapted to the characteristics of China's hydro-power companies. Some managerial implications are developed. - Highlights: ► This paper evaluates the operational activities of Chinese hydroelectric power companies. ► This study uses data from 2000 to 2010 in a finite mixture model. ► The model procedure identifies different groups among the Chinese hydro-power companies analysed. ► Three groups are identified in the sample, each equipped with completely different “technologies”. ► This suggests that distinct business strategies need to be adapted to the characteristics of the hydro-power companies
International Nuclear Information System (INIS)
Kapuria, S; Yaqoob Yasin, M
2013-01-01
In this work, we present an electromechanically coupled efficient layerwise finite element model for the static response of piezoelectric laminated composite and sandwich plates, considering the nonlinear behavior of piezoelectric materials under strong electric field. The nonlinear model is developed consistently using a variational principle, considering a rotationally invariant second order nonlinear constitutive relationship, and full electromechanical coupling. In the piezoelectric layer, the electric potential is approximated to have a quadratic variation across the thickness, as observed from exact three dimensional solutions, and the equipotential condition of electroded piezoelectric surfaces is modeled using the novel concept of an electric node. The results predicted by the nonlinear model compare very well with the experimental data available in the literature. The effect of the piezoelectric nonlinearity on the static response and deflection/stress control is studied for piezoelectric bimorph as well as hybrid laminated plates with isotropic, angle-ply composite and sandwich substrates. For high electric fields, the difference between the nonlinear and linear predictions is large, and cannot be neglected. The error in the prediction of the smeared counterpart of the present theory with the same number of primary displacement unknowns is also examined. (paper)
The Efficiency of a Hybrid Flapping Wing Structure—A Theoretical Model Experimentally Verified
Directory of Open Access Journals (Sweden)
Yuval Keren
2016-07-01
To propel a lightweight structure, a hybrid wing structure was designed; the wing's geometry resembled a rotor blade, and its flexibility resembled an insect's flapping wing. The wing was designed to be flexible in twist and spanwise rigid, thus maintaining the aeroelastic advantages of a flexible wing. The use of a relatively “thick” airfoil enabled a higher strength-to-weight ratio by increasing the wing's moment of inertia. The optimal design was based on a simplified quasi-steady inviscid mathematical model that approximately resembles the aerodynamic and inertial behavior of the flapping wing. A flapping mechanism that imitates the insects' flapping pattern was designed and manufactured, and a set of experiments for various parameters was performed. The simplified analytical model was updated according to the test results, compensating for the viscid increase of drag and decrease of lift that were neglected in the simplified calculations. The propelling efficiency of the hovering wing at various design parameters was calculated using the updated model. It was further validated by testing a smaller wing flapping at a higher frequency. Good and consistent test results were obtained in line with the updated model, yielding a simple yet accurate tool for flapping wing design.
Cunha-Filho, A. G.; Briend, Y. P. J.; de Lima, A. M. G.; Donadon, M. V.
2018-05-01
The flutter boundary prediction of complex aeroelastic systems is not an easy task. In some cases, these analyses may become prohibitive due to the high computational cost and time associated with the large number of degrees of freedom of the aeroelastic models, particularly when the aeroelastic model incorporates a control strategy with the aim of suppressing the flutter phenomenon, such as the use of viscoelastic treatments. In this situation, the use of a model reduction method is essential. However, the construction of a modal reduction basis for aeroviscoelastic systems is still a challenge, owing to the inherent frequency- and temperature-dependent behavior of the viscoelastic materials. Thus, the main contribution intended for the present study is to propose an efficient and accurate iterative enriched Ritz basis to deal with aeroviscoelastic systems. The main features and capabilities of the proposed model reduction method are illustrated in the prediction of flutter boundary for a thin three-layer sandwich flat panel and a typical aeronautical stiffened panel, both under supersonic flow.
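The generic projection step behind any Ritz-basis reduction, including the enriched basis proposed above, is to project the full system matrices onto a few basis vectors. The sketch below shows that step for a toy 4-DOF stiffness matrix with an invented basis; it is the standard Galerkin projection, not the paper's iterative enrichment procedure.

```python
# Sketch of projection-based model reduction: K_r = V^T K V shrinks the
# system solved at each flutter-analysis step. Matrices and basis are toy
# values for illustration only.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def reduce_operator(K, V):
    """Galerkin-reduced operator K_r = V^T K V."""
    return matmul(transpose(V), matmul(K, V))

# 4-DOF tridiagonal stiffness matrix and a 2-vector Ritz basis (columns of V).
K = [[2, -1, 0, 0], [-1, 2, -1, 0], [0, -1, 2, -1], [0, 0, -1, 2]]
V = [[1, 1], [1, 2], [1, 2], [1, 1]]

K_r = reduce_operator(K, V)
print(K_r)  # a 2x2 system replaces the 4x4 one
```

For viscoelastic systems the difficulty is that K depends on frequency and temperature, so a fixed basis V may miss behavior; the enrichment iterations address exactly that.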
A cervid vocal fold model suggests greater glottal efficiency in calling at high frequencies.
Directory of Open Access Journals (Sweden)
Ingo R Titze
2010-08-01
Male Rocky Mountain elk (Cervus elaphus nelsoni) produce loud, high fundamental frequency bugles during the mating season, in contrast to male European Red Deer (Cervus elaphus scoticus), who produce loud, low fundamental frequency roaring calls. A critical step in understanding vocal communication is to relate sound complexity to anatomy and physiology in a causal manner. Experimentation at the sound source, often difficult in vivo in mammals, is simulated here by a finite element model of the larynx and a wave propagation model of the vocal tract, both based on the morphology and biomechanics of the elk. The model can produce a wide range of fundamental frequencies. Low fundamental frequencies require low vocal fold strain, but large lung pressure and large glottal flow if sound intensity level is to exceed 70 dB at 10 m distance. A high-frequency bugle requires both large muscular effort (to strain the vocal ligament) and high lung pressure (to overcome phonation threshold pressure), but at least 10 dB more intensity level can be achieved. Glottal efficiency, the ratio of radiated sound power to aerodynamic power at the glottis, is higher in elk, suggesting an advantage of high-pitched signaling. This advantage rests on two aspects: first, the lower airflow required for aerodynamic power and, second, an acoustic radiation advantage at higher frequencies. Both signal types are used by the respective males during the mating season and probably serve as honest signals. The two signal types relate differently to physical qualities of the sender. The low-frequency sound (Red Deer call) relates to overall body size via a strong relationship between acoustic parameters and the size of vocal organs and body size. The high-frequency bugle may signal muscular strength and endurance, via a 'vocalizing at the edge' mechanism, for which efficiency is critical.
An Efficient Reduced-Order Model for the Nonlinear Dynamics of Carbon Nanotubes
Xu, Tiantian
2014-08-17
Because of the inherent nonlinearities in the behavior of CNTs when excited by electrostatic forces, modeling and simulating their behavior is challenging. The complicated form of the electrostatic force describing the interaction of their cylindrical shape, forming upper electrodes, with lower electrodes poses serious computational challenges. This presents an obstacle to applying the nonlinear dynamics tools that are typically used to analyze complicated nonlinear systems, such as shooting, continuation, and integrity analysis techniques. This work presents an attempt to resolve this issue. We present an investigation of the nonlinear dynamics of carbon nanotubes when actuated by large electrostatic forces. We expand the complicated form of the electrostatic force into a sufficient number of terms of its Taylor series. Comparing the expanded form of the electrostatic force to the exact form, we find that at least twenty terms are needed to capture accurately the strong nonlinear form of the force over the full range of motion. We then utilize this form along with an Euler–Bernoulli beam model to study the static and dynamic behavior of CNTs. The geometric nonlinearity and the nonlinear electrostatic force are considered. An efficient reduced-order model (ROM) based on the Galerkin method is developed and utilized to simulate the static and dynamic responses of the CNTs. The use of the new expanded form of the electrostatic force avoids the cumbersome evaluation of the spatial integrals involving the electrostatic force during the modal projection procedure in the Galerkin method, which would otherwise need to be done at every time step. Hence, the new method proves to be much more efficient computationally.
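The "how many Taylor terms are enough" question can be illustrated with a simpler electrostatic force law. The sketch below uses the parallel-plate form 1/(1-x)^2 as a stand-in for the paper's cylindrical-CNT expression (x is deflection normalised by the gap): a handful of terms is poor at large deflection, while a longer expansion tracks the exact force closely.

```python
# Stand-in electrostatic force (parallel-plate, not the cylindrical-CNT form)
# to illustrate truncated Taylor expansion accuracy over the range of motion.

def force_exact(x):
    return 1.0 / (1.0 - x) ** 2

def force_taylor(x, n_terms):
    # 1/(1-x)^2 = sum_{k>=0} (k+1) x^k for |x| < 1
    return sum((k + 1) * x ** k for k in range(n_terms))

x = 0.6  # a fairly large deflection
for n in (5, 20):
    print(n, force_taylor(x, n), force_exact(x))
```

A polynomial force makes the Galerkin spatial integrals products of mode-shape powers, which can be precomputed once instead of re-evaluated at every time step.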
Wolfs, Vincent; Willems, Patrick
2015-04-01
Water managers rely increasingly on mathematical simulation models that represent individual parts of the water system, such as the river, sewer system or waste water treatment plant. The current evolution towards integral water management requires the integration of these distinct components, leading to an increased model scale and scope. Besides this growing model complexity, certain applications have gained interest and importance, such as uncertainty and sensitivity analyses, auto-calibration of models and real time control. All these applications share the need for models with a very limited calculation time, either for performing a large number of simulations, or for a long term simulation followed by a statistical post-processing of the results. The commonly applied detailed models that solve (part of) the de Saint-Venant equations are infeasible for these applications and for such integrated modelling for several reasons, chief among them excessive simulation times and the inability to couple submodels built in different software environments. Instead, practitioners must use simplified models for these purposes. These models are characterized by empirical relationships and sacrifice model detail and accuracy for increased computational efficiency. The presented research discusses the development of a flexible integral modelling platform that complies with the following three key requirements: (1) include a modelling approach for water quantity predictions for rivers, floodplains, sewer systems and rainfall runoff routing that requires a minimal calculation time; (2) allow a fast and semi-automatic model configuration, thereby making maximum use of data from existing detailed models and measurements; (3) have a calculation scheme based on open source code to allow for future extensions or the coupling with other models. First, a novel and flexible modular modelling approach based on the storage cell concept was developed. This approach divides each
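The storage-cell idea mentioned above can be sketched in its simplest form: each reach becomes one cell whose outflow is a function of its storage, stepped explicitly. The linear-reservoir relation and the inflow pulse below are assumptions for illustration, not the authors' formulation, but they show the characteristic attenuation and delay that such conceptual models reproduce at a fraction of the cost of the de Saint-Venant equations.

```python
# Minimal storage-cell sketch: a linear reservoir with outflow = storage / k,
# advanced with an explicit Euler step. All values are illustrative.

def step(storage, inflow, k=3.0, dt=1.0):
    outflow = storage / k
    storage += dt * (inflow - outflow)
    return storage, outflow

hydrograph = [0, 5, 10, 5, 0, 0, 0, 0]   # made-up inflow pulse [m^3/s]
S, out = 0.0, []
for q_in in hydrograph:
    S, q_out = step(S, q_in)
    out.append(round(q_out, 3))
print(out)  # attenuated, delayed version of the inflow pulse
```

Chaining many such cells, with calibrated storage-outflow relations per cell, is what allows a whole river-sewer system to run in near real time.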
International Nuclear Information System (INIS)
Cerezo Davila, Carlos; Reinhart, Christoph F.; Bemis, Jamie L.
2016-01-01
City governments and energy utilities are increasingly focusing on the development of energy efficiency strategies for buildings as a key component in emission reduction plans and energy supply strategies. To support these diverse needs, a new generation of Urban Building Energy Models (UBEM) is currently being developed and validated to estimate citywide hourly energy demands at the building level. However, in order for cities to rely on UBEMs, effective model generation and maintenance workflows are needed based on existing urban data structures. Within this context, the authors collaborated with the Boston Redevelopment Authority to develop a citywide UBEM based on official GIS datasets and a custom building archetype library. Energy models for 83,541 buildings were generated and assigned one of 52 use/age archetypes, within the CAD modelling environment Rhinoceros3D. The buildings were then simulated using the US DOE EnergyPlus simulation program, and results for buildings of the same archetype were crosschecked against data from the US national energy consumption surveys. A district-level intervention combining photovoltaics with demand side management is presented to demonstrate the ability of UBEM to provide actionable information. Lack of widely available archetype templates and metered energy data, were identified as key barriers within existing workflows that may impede cities from effectively applying UBEM to guide energy policy. - Highlights: • Data requirements for Urban Building Energy Models are reviewed. • A workflow for UBEM generation from available GIS datasets is developed. • A citywide demand simulation model for Boston is generated and tested. • Limitations for UBEM in current urban data systems are identified and discussed. • Model application for energy management policy is shown in an urban PV scenario.
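The archetype-assignment step of the UBEM workflow amounts to mapping each GIS building record to a use/age archetype key that selects a simulation template. The archetype names, age bands and record fields below are invented for illustration; Boston's library used 52 use/age archetypes.

```python
# Hypothetical sketch of UBEM archetype assignment from GIS attributes.
# Archetype names and the 1950 age cutoff are invented, not the Boston library.

ARCHETYPES = {
    ("residential", "pre1950"): "RES_OLD",
    ("residential", "post1950"): "RES_NEW",
    ("office", "pre1950"): "OFF_OLD",
    ("office", "post1950"): "OFF_NEW",
}

def assign_archetype(record):
    band = "pre1950" if record["year_built"] < 1950 else "post1950"
    return ARCHETYPES[(record["use"], band)]

gis_rows = [
    {"id": 1, "use": "residential", "year_built": 1924},
    {"id": 2, "use": "office", "year_built": 1988},
]
print([assign_archetype(r) for r in gis_rows])  # -> ['RES_OLD', 'OFF_NEW']
```

Each archetype then carries the constructions, schedules and loads that the building-level EnergyPlus model inherits.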
Modeling the frequency-dependent detective quantum efficiency of photon-counting x-ray detectors.
Stierstorfer, Karl
2018-01-01
To find a simple model for the frequency-dependent detective quantum efficiency (DQE) of photon-counting detectors in the low flux limit. Formulas for the spatial cross-talk, the noise power spectrum and the DQE of a photon-counting detector working at a given threshold are derived. The parameters are probabilities for types of events, such as a single count in the central pixel, double counts in the central pixel and a neighboring pixel, or a single count in a neighboring pixel only. These probabilities can be derived in a simple model by extensive use of Monte Carlo techniques: the Monte Carlo x-ray propagation program MOCASSIM is used to simulate the energy deposition from the x-rays in the detector material. A simple charge cloud model using Gaussian clouds of fixed width is used for the propagation of the electric charge generated by the primary interactions. Both stages are combined in a Monte Carlo simulation randomizing the location of impact, which finally produces the required probabilities. The parameters of the charge cloud model are fitted to the spectral response to a polychromatic spectrum measured with our prototype detector. Based on the Monte Carlo model, the DQE of photon-counting detectors as a function of spatial frequency is calculated for various pixel sizes, photon energies, and thresholds. The frequency-dependent DQE of a photon-counting detector in the low flux limit can be described with an equation containing only a small set of probabilities as input. Estimates for the probabilities can be derived from a simple model of the detector physics. © 2017 American Association of Physicists in Medicine.
International Nuclear Information System (INIS)
Hillary, Jason; Walsh, Ed; Shah, Amip; Zhou, Rongliang; Walsh, Pat
2017-01-01
Improving building energy efficiency is of paramount importance due to the large proportion of energy consumed by thermal operations. Consequently, simulating a building's environment has gained popularity for assessing thermal comfort and design. The extended timeframes and large physical scales involved necessitate compact modelling approaches. The accuracy of such simulations is of chief concern, yet there is little guidance offered on achieving accurate solutions whilst mitigating prohibitive computational costs. Therefore, the present study addresses this deficit by providing clear guidance on discretisation levels required for achieving accurate but computationally inexpensive models. This is achieved by comparing numerical models of varying discretisation levels to benchmark analytical solutions with prediction accuracy assessed and reported in terms of governing dimensionless parameters, Biot and Fourier numbers, to ensure generality of findings. Furthermore, spatial and temporal discretisation errors are separated and assessed independently. Contour plots are presented to intuitively determine the optimal discretisation levels and time-steps required to achieve accurate thermal response predictions. Simulations derived from these contour plots were tested against various building conditions with excellent agreement observed throughout. Additionally, various scenarios are highlighted where the classical single lumped capacitance model can be applied for Biot numbers much greater than 0.1 without reducing accuracy. - Highlights: • Addressing the problems of inadequate discretisation within building energy models. • Accuracy of numerical models assessed against analytical solutions. • Fourier and Biot numbers used to provide generality of results for any material. • Contour plots offer intuitive way to interpret results for manual discretisation. • Results show proposed technique promising for automation of discretisation process.
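The dimensionless bookkeeping used above is easy to make concrete. The sketch below computes the Biot and Fourier numbers for a wall layer and evaluates the classical single lumped-capacitance response T(t) = T_inf + (T0 - T_inf)·exp(-Bi·Fo); the material and boundary values are illustrative, not from the paper.

```python
import math

# Biot/Fourier numbers and the lumped-capacitance transient for a wall layer.
# Material properties (brick-like) and convection coefficient are assumptions.

def biot(h, L, k):
    return h * L / k                 # surface convection vs internal conduction

def fourier(alpha, t, L):
    return alpha * t / L ** 2        # dimensionless time

def lumped_T(t, T0, T_inf, h, L, k, rho, cp):
    alpha = k / (rho * cp)
    Bi, Fo = biot(h, L, k), fourier(alpha, t, L)
    return T_inf + (T0 - T_inf) * math.exp(-Bi * Fo)

args = dict(T0=30.0, T_inf=10.0, h=8.0, L=0.1, k=0.7, rho=1900.0, cp=840.0)
print(f"Bi = {biot(8.0, 0.1, 0.7):.2f}")  # > 0.1: lumping nominally suspect
print(lumped_T(3600.0, **args))           # wall temperature after one hour
```

Expressing accuracy contours in (Bi, Fo) is what makes the paper's discretisation guidance material-independent.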
Pipeline for Efficient Mapping of Transcription Factor Binding Sites and Comparison of Their Models
Ba alawi, Wail
2011-06-01
The control of genes in every living organism is based on the activities of transcription factor (TF) proteins. These TFs interact with DNA by binding to TF binding sites (TFBSs), creating conditions for genes to activate. Of the approximately 1500 TFs in humans, TFBSs have been experimentally derived for fewer than 300 TFs, and only in generally limited portions of the genome. To associate TFs with the genes they control, we need to know whether a TF has the potential to interact with the control region of a gene. For this we need models of TFBS families. The existing models are not sufficiently accurate, or they are too complex for use by ordinary biologists. To remove some of the deficiencies of these models, in this study we developed a pipeline through which we achieved the following: 1. Through a comparative performance analysis, we identified the best models with optimized thresholds among four different types of models of TFBS families. 2. Using the best models, we mapped TFBSs to the human genome in an efficient way. The study shows that a new scoring function used with TFBS models based on the position weight matrix of dinucleotides with remote dependency results in better accuracy than the other three types of TFBS models. The speed of mapping has been improved by developing parallelized code, showing a significant speed-up of 4x when going from 1 CPU to 8 CPUs. To verify whether the predicted TFBSs are more accurate than what can be expected with the conventional models, we identified the most frequent pairs of TFBSs (for TFs E4F1 and ATF6) that appear close to each other (within a distance of 200 nucleotides) across the human genome. We show, unexpectedly, that the genes closest to multiple pairs of E4F1/ATF6 binding sites have a co-expression of over 90%. This indirectly supports our hypothesis that the TFBS models we use are more accurate and also suggests that the E4F1/ATF6 pair is exerting the
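Scoring with a dinucleotide position weight matrix, the model family favoured above, works on overlapping base pairs rather than single bases, capturing neighbour dependencies. The tiny matrix below is invented for illustration; it is not a real E4F1 or ATF6 model, and the scoring function is a plain log-odds sum, not the paper's new function.

```python
# Sketch of dinucleotide-PWM scoring: a 4-mer site is scored over its three
# overlapping dinucleotides. Scores and the background penalty are invented.

DIPWM = [
    {"AC": 1.2, "AG": 0.1, "CA": -0.5, "GT": 0.8},
    {"CG": 1.0, "GT": 0.6, "AC": -0.2, "TA": -1.0},
    {"GT": 1.5, "TA": 0.2, "CG": -0.3, "AC": 0.0},
]
BACKGROUND = -2.0  # score for any dinucleotide absent from the model

def score(site):
    assert len(site) == len(DIPWM) + 1
    return sum(DIPWM[i].get(site[i:i + 2], BACKGROUND)
               for i in range(len(DIPWM)))

print(score("ACGT"))  # 1.2 + 1.0 + 1.5
print(score("TTTT"))  # all background dinucleotides
```

Mapping then slides this scorer along the genome and reports positions above an optimized threshold, which is the part the parallelized code accelerates.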
Stochastic Boolean networks: An efficient approach to modeling gene regulatory networks
Directory of Open Access Journals (Sweden)
Liang Jinghang
2012-08-01
Background: Various computational models have been of interest due to their use in the modelling of gene regulatory networks (GRNs). As a logical model, probabilistic Boolean networks (PBNs) consider molecular and genetic noise, so the study of PBNs provides significant insights into the understanding of the dynamics of GRNs. This will ultimately lead to advances in developing therapeutic methods that intervene in the process of disease development and progression. The applications of PBNs, however, are hindered by the complexities involved in the computation of the state transition matrix and the steady-state distribution of a PBN. For a PBN with n genes and N Boolean networks, the complexity of computing the state transition matrix is O(nN·2^(2n)), or O(nN·2^n) for a sparse matrix. Results: This paper presents a novel implementation of PBNs based on the notions of stochastic logic and stochastic computation. This stochastic implementation of a PBN is referred to as a stochastic Boolean network (SBN). An SBN provides an accurate and efficient simulation of a PBN without and with random gene perturbation. The state transition matrix is computed in an SBN with a complexity of O(nL·2^n), where L is a factor related to the stochastic sequence length. Since the minimum sequence length required for a given evaluation accuracy increases approximately polynomially with the number of genes n, while the number of Boolean networks N usually increases exponentially with n, L is typically smaller than N, especially in a network with a large number of genes. Hence, the computational efficiency of an SBN is primarily limited by the number of genes, but not directly by the total possible number of Boolean networks. Furthermore, a time-frame expanded SBN enables an efficient analysis of the steady-state distribution of a PBN. These findings are supported by the simulation results of a simplified p53 network, several randomly generated networks and a
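The stochastic-computation encoding that SBNs build on is simple to demonstrate: a probability is represented as the fraction of 1s in a random bit stream, and a logic gate then operates on probabilities directly (an AND gate multiplies the probabilities of independent streams). This sketch shows only that encoding trick, not the full SBN machinery.

```python
import random

# Stochastic-logic sketch: probabilities as random bit streams; an AND gate
# on independent streams estimates the product of their probabilities.
random.seed(7)

L = 100_000  # stochastic sequence length: longer -> more accurate estimate

def stream(p, n=L):
    return [random.random() < p for _ in range(n)]

a, b = stream(0.8), stream(0.5)
and_out = [x and y for x, y in zip(a, b)]
p_est = sum(and_out) / L
print(p_est)  # close to 0.8 * 0.5 = 0.4
```

The sequence length L plays exactly the role described in the complexity bound O(nL·2^n): accuracy is bought with longer streams rather than with an enumeration over all N constituent Boolean networks.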
Directory of Open Access Journals (Sweden)
Andrej Ficko
2015-03-01
Underuse of nonindustrial private forests in developed countries has been interpreted mostly as a consequence of the prevailing noncommodity objectives of their owners. Recent empirical studies have indicated a correlation between the harvesting behavior of forest owners and a specific conceptualization of appropriate forest management described as "nonintervention" or "hands-off" management. We aimed to fill the large gap in knowledge of social representations of forest management in Europe and provide the first rigorous elicitation of forest owners' representations in Europe. We conducted 3099 telephone interviews with randomly selected forest owners in Slovenia, asking them whether they thought they managed their forest efficiently, what the possible reasons for underuse were, and what they understood by forest management. Building on social representations theory and applying a series of structural equation models, we tested the existence of three latent constructs of forest management and estimated whether and how much these constructs correlated with the perception of resource efficiency. Forest owners conceptualized forest management as a mixture of maintenance, ecosystem-centered and economics-centered management. None of the representations had a strong association with the perception of resource efficiency, nor could any be considered a factor preventing forest owners from cutting more. The underuse of wood resources was mostly due to biophysical constraints in the environment and not a deep-seated philosophical objection to harvesting. The difference between our findings and other empirical studies is primarily explained by historical differences in forestland ownership in different parts of Europe and the United States, the rising number of nonresidential owners, alternative lifestyles, and environmental protectionism, but also by our high methodological rigor in testing the relationships between the constructs
Oakey, Zack B; Jensen, Jason D; Zaugg, Brian E; Radmall, Bryce R; Pettey, Jeff H; Olson, Randall J
2013-08-01
To validate a porcine lens model by comparing density and ultrasound (US) with known human standards using the Infiniti Ozil with Intelligent Phacoemulsification (torsional), Whitestar Signature Micropulse (longitudinal), and Ellips FX (transversal) modalities. Department of Ophthalmology and Visual Sciences, John A. Moran Eye Center, University of Utah, Salt Lake City, Utah, USA. Experimental study. Lens nuclei were formalin soaked in hour-based intervals and divided into 2.0 mm cubes. Density was characterized by crushing experiments and compared with known human measures. Efficiency and chatter were examined. The mean weight to cut thickness in half ranged from 16.9 ± 5.5 (SD) g in the 0-hour group to 121.3 ± 47.5 g in the 4-hour group. Lenses in the 2-hour group (mean 70.2 ± 19.1 g) best matched human density (P=.215). The mean efficiency ranged from 0.432 ± 0.178 seconds to 9.111 ± 2.925 seconds; chatter ranged from zero to 1.85 ± 1.927 bounces. No significant difference was detected when comparing the 2-hour formalin group with human lenses in torsional and transversal US. There was no significant difference between transversal and torsional modalities, consistent with human studies. Although longitudinal (6 milliseconds on, 12 milliseconds off) was significantly more efficient at 50% power than at 25%, there was no significant difference compared with transversal or torsional US. Animal lenses soaked for 2 hours in formalin were most comparable to human lenses. Longitudinal US may be an acceptable alternative to torsional and transversal US. Copyright © 2013 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Oh, K. S.; Schutt-Aine, J.
1995-01-01
Modeling of interconnects and associated discontinuities has, with the recent advances in high-speed digital circuits, gained considerable interest over the last decade, although the theoretical bases for analyzing these structures were well-established as early as the 1960s. Ongoing research at the present time is focused on devising methods which can be applied to more general geometries than the ones considered in earlier days while, at the same time, improving the computational efficiency and accuracy of these methods. In this thesis, numerically efficient methods to compute the transmission line parameters of a multiconductor system and the equivalent capacitances of various strip discontinuities are presented based on the quasi-static approximation. The presented techniques are applicable to conductors embedded in an arbitrary number of dielectric layers with two possible locations of ground planes at the top and bottom of the dielectric layers. The cross-sections of conductors can be arbitrary as long as they can be described with polygons. An integral equation approach in conjunction with the collocation method is used in the presented methods. A closed-form Green's function is derived based on weighted real images, avoiding the nested infinite summations of the exact Green's function and making it numerically more efficient. All elements associated with the moment matrix are computed using the closed-form formulas. Various numerical examples are considered to verify the presented methods, and a comparison of the computed results with other published results showed good agreement.
How efficiently do corn- and soybean-based cropping systems use water? A systems modeling analysis.
Dietzel, Ranae; Liebman, Matt; Ewing, Robert; Helmers, Matt; Horton, Robert; Jarchow, Meghann; Archontoulis, Sotirios
2016-02-01
Agricultural systems are being challenged to decrease water use and increase production while climate becomes more variable and the world's population grows. Low water use efficiency is traditionally characterized by high water use relative to low grain production and usually occurs under dry conditions. However, when a cropping system fails to take advantage of available water during wet conditions, this is also an inefficiency and is often detrimental to the environment. Here, we provide a systems-level definition of water use efficiency (sWUE) that addresses both production and environmental quality goals through incorporating all major system water losses (evapotranspiration, drainage, and runoff). We extensively calibrated and tested the Agricultural Production Systems sIMulator (APSIM) using 6 years of continuous crop and soil measurements in corn- and soybean-based cropping systems in central Iowa, USA. We then used the model to determine water use, loss, and grain production in each system and calculated sWUE in years that experienced drought, flood, or historically average precipitation. Systems water use efficiency was found to be greatest during years with average precipitation. Simulation analysis using 28 years of historical precipitation data, plus the same dataset with ± 15% variation in daily precipitation, showed that in this region, 430 mm of seasonal (planting to harvesting) rainfall resulted in the optimum sWUE for corn, and 317 mm for soybean. Above these precipitation levels, the corn and soybean yields did not increase further, but the water loss from the system via runoff and drainage increased substantially, leading to a high likelihood of soil, nutrient, and pesticide movement from the field to waterways. As the Midwestern United States is predicted to experience more frequent drought and flood, inefficiency of cropping systems water use will also increase. This work provides a framework to concurrently evaluate production and environmental quality goals.
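The systems-level definition above lends itself to a one-line calculation: grain produced per unit of total system water loss. The sketch below is illustrative only; the function name, units, and the numbers are assumptions, and the paper's exact sWUE formulation may differ in detail.

```python
def systems_wue(grain_yield_kg_ha, et_mm, drainage_mm, runoff_mm):
    """Systems-level water use efficiency (sketch): grain yield divided by
    the sum of all major system water losses named in the abstract."""
    total_water_loss_mm = et_mm + drainage_mm + runoff_mm
    return grain_yield_kg_ha / total_water_loss_mm

# Hypothetical corn-year numbers, chosen only to show the units (kg ha^-1 mm^-1)
print(round(systems_wue(10000, 450, 80, 30), 2))  # 17.86
```

Note how a wet year with large drainage and runoff lowers sWUE even if yield is unchanged, which is exactly the inefficiency the definition is meant to capture.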
Applied Distributed Model Predictive Control for Energy Efficient Buildings and Ramp Metering
Koehler, Sarah Muraoka
Industrial large-scale control problems present an interesting algorithmic design challenge. A number of controllers must cooperate in real-time on a network of embedded hardware with limited computing power in order to maximize system efficiency while respecting constraints and despite communication delays. Model predictive control (MPC) can automatically synthesize a centralized controller which optimizes an objective function subject to a system model, constraints, and predictions of disturbance. Unfortunately, the computations required by model predictive controllers for large-scale systems often limit its industrial implementation only to medium-scale slow processes. Distributed model predictive control (DMPC) enters the picture as a way to decentralize a large-scale model predictive control problem. The main idea of DMPC is to split the computations required by the MPC problem amongst distributed processors that can compute in parallel and communicate iteratively to find a solution. Some popularly proposed solutions are distributed optimization algorithms such as dual decomposition and the alternating direction method of multipliers (ADMM). However, these algorithms ignore two practical challenges: substantial communication delays present in control systems and also problem non-convexity. This thesis presents two novel and practically effective DMPC algorithms. The first DMPC algorithm is based on a primal-dual active-set method which achieves fast convergence, making it suitable for large-scale control applications which have a large communication delay across its communication network. In particular, this algorithm is suited for MPC problems with a quadratic cost, linear dynamics, forecasted demand, and box constraints. We measure the performance of this algorithm and show that it significantly outperforms both dual decomposition and ADMM in the presence of communication delay. The second DMPC algorithm is based on an inexact interior point method which is
Econometric models for distinguishing between market-driven and publicly-funded energy efficiency
International Nuclear Information System (INIS)
Horowitz, Marvin J.
2005-01-01
Central to the problem of estimating energy program benefits is the necessity to differentiate between changes in energy use that would have occurred in the absence of public programs versus declines in energy use that would not have occurred but for public programs. The former changes are often referred to as naturally occurring or market-driven effects. They occur due to a combination of one or more independent variables, such as changes in prices, incomes, weather, and technology. For a rigorous, scientifically valid program evaluation, it is essential to first control for these variables before making statistical inferences related to public program effects. This paper describes the economic and statistical issues surrounding quantitative studies of energy use, energy efficiency, and public programs. To illustrate the strengths and weaknesses of different impact evaluation approaches, this paper describes three new studies related to electricity use in the U.S. commercial buildings sector. Specification and estimation of time series and cross section econometric models are discussed, as are their capabilities for obtaining long-run estimates of the net impacts of energy efficiency programs
Aerodynamic efficiency of flapping flight: analysis of a two-stroke model.
Wang, Z Jane
2008-01-01
To seek the simplest efficient flapping wing motions and understand their relation to steady flight, a two-stroke model in the quasi-steady limit was analyzed. It was found that a family of two-stroke flapping motions have aerodynamic efficiency close to, but slightly lower than, the optimal steady flight. These two-stroke motions share two common features: the downstroke is a gliding motion and the upstroke has an angle of attack close to the optimal of the steady flight of the same wing. With the reduced number of parameters, the aerodynamic cost function in the parameter space can be visualized. This was examined for wings of different lift and drag characteristics at Reynolds numbers between 10² and 10⁶. The iso-surfaces of the cost function have a tube-like structure, implying that the solution is insensitive to a specific direction in the parameter space. Related questions in insect flight that motivated this work are discussed.
Survey on efficient linear solvers for porous media flow models on recent hardware architectures
International Nuclear Information System (INIS)
Anciaux-Sedrakian, Ani; Gratien, Jean-Marc; Guignon, Thomas; Gottschling, Peter
2014-01-01
In the past few years, High Performance Computing (HPC) technologies led to General Purpose Processing on Graphics Processing Units (GPGPU) and many-core architectures. These emerging technologies offer massive processing units and are of interest for porous media flow simulators, such as those used for CO₂ geological sequestration or Enhanced Oil Recovery (EOR) simulation. However, the crucial point is: are current algorithms and software able to use these new technologies efficiently? The resolution of large sparse linear systems, which are often ill-conditioned, constitutes the most CPU-consuming part of such simulators. This paper proposes a survey of various solver and preconditioner algorithms and analyzes their efficiency and performance on these distinct architectures. Furthermore, it proposes a novel approach based on a hybrid programming model for both GPU and many-core clusters. The proposed optimization techniques are validated through a Krylov subspace solver, BiCGStab, and preconditioners such as ILU0 on GPU, multi-core and many-core architectures, on various large real study cases in EOR simulation. (authors)
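The survey's baseline configuration, a Krylov solver such as BiCGStab wrapped around an ILU(0)-style preconditioner, can be sketched with SciPy on a toy sparse system. The tridiagonal matrix below is a stand-in for illustration, not a reservoir Jacobian from the paper's study cases:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Small 1D Laplacian-like sparse system (stand-in for a simulator matrix)
n = 100
A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# ILU with no extra fill (ILU(0)-style) used as a preconditioner for BiCGStab
ilu = spla.spilu(A, fill_factor=1.0, drop_tol=0.0)
M = spla.LinearOperator((n, n), matvec=ilu.solve)
x, info = spla.bicgstab(A, b, M=M)
print(info == 0 and np.allclose(A @ x, b, atol=1e-4))  # True
```

On GPUs and many-core clusters the same algorithmic skeleton applies, but, as the paper discusses, the sparse matrix-vector products and the triangular solves of the preconditioner must be mapped onto the hardware's parallelism.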
Directory of Open Access Journals (Sweden)
Jeanette Janaina Jaber Lucato
2009-06-01
OBJECTIVES: To evaluate and compare the efficiency of humidification in available heat and moisture exchanger models under conditions of varying tidal volume, respiratory rate, and flow rate. INTRODUCTION: Inspired gases are routinely preconditioned by heat and moisture exchangers to provide a heat and water content similar to that provided normally by the nose and upper airways. The absolute humidity of air retrieved from and returned to the ventilated patient is an important measurable outcome of the heat and moisture exchangers' humidifying performance. METHODS: Eight different heat and moisture exchangers were studied using a respiratory system analog. The system included a heated chamber (acrylic glass, maintained at 37°C), a preserved swine lung, a hygrometer, circuitry and a ventilator. Humidity and temperature levels were measured using eight distinct interposed heat and moisture exchangers given different tidal volumes, respiratory frequencies and flow-rate conditions. Recovery of absolute humidity (%RAH) was calculated for each setting. RESULTS: Increasing tidal volumes led to a reduction in %RAH for all heat and moisture exchangers while no significant effect was demonstrated in the context of varying respiratory rate or inspiratory flow. CONCLUSIONS: Our data indicate that heat and moisture exchangers are more efficient when used with low tidal volume ventilation. The roles of flow and respiratory rate were of lesser importance, suggesting that their adjustment has a less significant effect on the performance of heat and moisture exchangers.
Modeling low cost hybrid tandem photovoltaics with the potential for efficiencies exceeding 20%
Beiley, Zach M.
2012-01-01
It is estimated that for photovoltaics to reach grid parity around the planet, they must be made with costs under $0.50 per Wp and must also achieve power conversion efficiencies above 20% in order to keep installation costs down. In this work we explore a novel solar cell architecture, a hybrid tandem photovoltaic (HTPV), and show that it is capable of meeting these targets. HTPV is composed of an inexpensive and low temperature processed solar cell, such as an organic or dye-sensitized solar cell, that can be printed on top of one of a variety of more traditional inorganic solar cells. Our modeling shows that an organic solar cell may be added on top of a commercial CIGS cell to improve its efficiency from 15.1% to 21.4%, thereby reducing the cost of the modules by ∼15% to 20% and the cost of installation by up to 30%. This suggests that HTPV is a promising option for producing solar power that matches the cost of existing grid energy. © 2012 The Royal Society of Chemistry.
International Nuclear Information System (INIS)
Rogge, Nicky; De Jaeger, Simon
2012-01-01
Highlights: ► Complexity in local waste management calls for more in depth efficiency analysis. ► Shared-input Data Envelopment Analysis can provide a solution. ► Considerable room for the Flemish municipalities to improve their cost efficiency. - Abstract: This paper proposes an adjusted "shared-input" version of the popular efficiency measurement technique Data Envelopment Analysis (DEA) that enables evaluating municipality waste collection and processing performances in settings in which one input (waste costs) is shared among treatment efforts of multiple municipal solid waste fractions. The main advantage of this version of DEA is that it not only provides an estimate of the municipalities' overall cost efficiency but also estimates of the municipalities' cost efficiency in the treatment of the different fractions of municipal solid waste (MSW). To illustrate the practical usefulness of the shared-input DEA model, we apply the model to data on 293 municipalities in Flanders, Belgium, for the year 2008.
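A plain input-oriented CCR DEA model, the starting point that the paper's shared-input variant extends, can be sketched as a linear program per decision-making unit. The three-town data below are hypothetical, and the paper's model additionally splits the shared cost input across MSW fractions:

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, j0):
    """Input-oriented CCR DEA efficiency of unit j0: minimize theta such that
    a convex cone of peer units produces at least y0 using at most theta*x0.
    (Plain DEA sketch; the paper's shared-input variant is more elaborate.)"""
    n = X.shape[1]
    c = np.r_[1.0, np.zeros(n)]                     # minimize theta
    A_in = np.c_[-X[:, [j0]], X]                    # X @ lam <= theta * x0
    A_out = np.c_[np.zeros((Y.shape[0], 1)), -Y]    # Y @ lam >= y0
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(X.shape[0]), -Y[:, j0]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

X = np.array([[2.0, 4.0, 4.0]])   # waste costs (shared input), 3 hypothetical towns
Y = np.array([[2.0, 2.0, 4.0]])   # tonnes of MSW treated (output)
effs = [round(dea_efficiency(X, Y, j), 2) for j in range(3)]
print(effs)  # [1.0, 0.5, 1.0]
```

Towns 0 and 2 sit on the efficient frontier; town 1 could in principle treat the same waste at half its cost, which is the kind of cost-inefficiency signal the Flemish study reports.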
International Nuclear Information System (INIS)
Horowitz, Marvin J.; Bertoldi, Paolo
2015-01-01
This study is an impact analysis of European Union (EU) energy efficiency policy that employs both top-down energy consumption data and bottom-up energy efficiency statistics or indicators. As such, it may be considered a contribution to the effort called for in the EU's 2006 Energy Services Directive (ESD) to develop a harmonized calculation model. Although this study does not estimate the realized savings from individual policy measures, it does provide estimates of realized energy savings for energy efficiency policy measures in aggregate. Using fixed effects panel models, the annual cumulative savings in 2011 of combined household and manufacturing sector electricity and natural gas usage attributed to EU energy efficiency policies since 2000 is estimated to be 1136 PJ; the savings attributed to energy efficiency policies since 2006 is estimated to be 807 PJ, or the equivalent of 5.6% of 2011 EU energy consumption. As well as its contribution to energy efficiency policy analysis, this study adds to the development of methods that can improve the quality of information provided by standardized energy efficiency and sustainable resource indexes. - Highlights: • Impact analysis of European Union energy efficiency policy. • Harmonization of top-down energy consumption and bottom-up energy efficiency indicators. • Fixed effects models for Member States for household and manufacturing sectors and combined electricity and natural gas usage. • EU energy efficiency policies since 2000 are estimated to have saved 1136 Petajoules. • Energy savings attributed to energy efficiency policies since 2006 are 5.6 percent of 2011 combined electricity and natural gas usage.
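The within (fixed effects) estimator underlying such panel models can be illustrated on synthetic data. Everything below is an assumption for illustration: the variable names, the single-regressor setup, and the numbers are not the study's Member State data, which combines top-down consumption with bottom-up indicators:

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_years = 10, 8
unit_effects = rng.normal(0, 5, n_units)            # unobserved heterogeneity
x = rng.normal(10, 2, (n_units, n_years))           # e.g. policy intensity index
beta_true = -0.8                                    # true effect on energy use
y = beta_true * x + unit_effects[:, None] + rng.normal(0, 0.1, x.shape)

# Within (fixed effects) estimator: demean within each unit, then OLS slope.
# Demeaning removes the time-invariant unit effects exactly.
x_d = x - x.mean(axis=1, keepdims=True)
y_d = y - y.mean(axis=1, keepdims=True)
beta_hat = (x_d * y_d).sum() / (x_d ** 2).sum()
print(abs(beta_hat - beta_true) < 0.05)  # True
```

The point of the demeaning step is that persistent country-level differences (climate, industrial structure) drop out, so the slope is identified from within-country variation over time, as in the fixed effects models the study estimates.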
International Nuclear Information System (INIS)
Jackson, Jerry
2010-01-01
Electric utilities and regulators face difficult challenges evaluating new energy efficiency and smart grid programs prompted, in large part, by recent state and federal mandates and financial incentives. It is increasingly difficult to separate electricity use impacts of individual utility programs from the impacts of increasingly stringent appliance and building efficiency standards, increasing electricity prices, appliance manufacturer efficiency improvements, energy program interactions and other factors. This study reviews traditional approaches used to evaluate electric utility energy efficiency and smart-grid programs and presents an agent-based end-use modeling approach that resolves many of the shortcomings of traditional approaches. Data for a representative sample of utility customers in a Midwestern US utility are used to evaluate energy efficiency and smart grid program targets over a fifteen-year horizon. Model analysis indicates that a combination of the two least stringent efficiency and smart grid program scenarios provides peak hour reductions one-third greater than the most stringent smart grid program suggesting that reductions in peak demand requirements are more feasible when both efficiency and smart grid programs are considered together. Suggestions on transitioning from traditional end-use models to agent-based end-use models are provided.
Energy Technology Data Exchange (ETDEWEB)
Adly, A.A., E-mail: adlyamr@gmail.com [Electrical Power and Machines Dept., Faculty of Engineering, Cairo University, Giza 12613 (Egypt); Abd-El-Hafiz, S.K. [Engineering Mathematics Department, Faculty of Engineering, Cairo University, Giza 12613 (Egypt)
2017-07-15
Highlights: • An approach to simulate hysteresis while taking shape anisotropy into consideration. • Utilizing the ensemble of triangular sub-region hysteresis models in field computation. • A novel tool capable of carrying out field computation while keeping track of hysteresis losses. • The approach may be extended to 3D tetrahedral sub-volumes. - Abstract: Field computation in media exhibiting hysteresis is crucial to a variety of applications such as magnetic recording processes and accurate determination of core losses in power devices. Recently, Hopfield neural networks (HNN) have been successfully configured to construct scalar and vector hysteresis models. This paper presents an efficient hysteresis modeling methodology and its implementation in field computation applications. The methodology is based on the application of the integral equation approach on discretized triangular magnetic sub-regions. Within every triangular sub-region, hysteresis properties are realized using a 3-node HNN. Details of the approach and sample computation results are given in the paper.
Marker encoded fringe projection profilometry for efficient 3D model acquisition.
Budianto, B; Lun, P K D; Hsung, Tai-Chiu
2014-11-01
This paper presents a novel marker encoded fringe projection profilometry (FPP) scheme for efficient 3-dimensional (3D) model acquisition. Traditional FPP schemes can introduce large errors to the reconstructed 3D model when the target object has an abruptly changing height profile. In the proposed scheme, markers are encoded in the projected fringe pattern to resolve the ambiguities in the fringe images caused by that problem. Using the analytic complex wavelet transform, the marker cue information can be extracted from the fringe image and used to restore the order of the fringes. A series of simulations and experiments have been carried out to verify the proposed scheme. They show that the proposed method can greatly improve the accuracy over traditional FPP schemes when reconstructing the 3D model of objects with an abruptly changing height profile. Since the scheme works directly in our recently proposed complex wavelet FPP framework, it inherits the framework's properties and can likewise be used in real-time applications for color objects.
Efficient parallel implementation of active appearance model fitting algorithm on GPU.
Wang, Jinwei; Ma, Xirong; Zhu, Yuanping; Sun, Jizhou
2014-01-01
The active appearance model (AAM) is one of the most powerful model-based object detecting and tracking methods which has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine-grained parallelism in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on Nvidia's GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experiment results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures.
Efficient Parallel Implementation of Active Appearance Model Fitting Algorithm on GPU
Directory of Open Access Journals (Sweden)
Jinwei Wang
2014-01-01
The active appearance model (AAM) is one of the most powerful model-based object detecting and tracking methods which has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine-grained parallelism in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on Nvidia's GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experiment results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures.
A mathematical model of capacious and efficient memory that survives trauma
Srivastava, Vipin; Edwards, S. F.
2004-02-01
The brain's memory system stores information without any apparent capacity constraint, recalls stored information efficiently, and is robust against lesions. Existing models of memory do not fully account for all these features. The model due to Hopfield (Proc. Natl. Acad. Sci. USA 79 (1982) 2554), based on Hebbian learning (The Organization of Behaviour, Wiley, New York, 1949), shows an early saturation of memory, with retrieval becoming slow and unreliable before collapsing at this limit. Our hypothesis (Physica A 276 (2000) 352) that the brain might store orthogonalized information improved the situation in many ways but was still constrained in that the information to be stored had to be linearly independent, i.e., signals that could be expressed as linear combinations of others had to be excluded. Here we present a model that attempts to address the problem quite comprehensively against the above attributes of the brain. We demonstrate that if the brain devolves incoming signals in analogy with Fourier analysis, the noise created by interference of stored signals diminishes systematically (which yields prompt retrieval) and, most importantly, the memory can withstand partial damage to the brain.
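The orthogonalized-storage hypothesis the authors build on can be illustrated with a toy Gram-Schmidt routine. This is a minimal sketch of why orthogonal memories do not interfere on recall, not the authors' actual Fourier-like devolution scheme:

```python
import numpy as np

def orthogonalize(patterns):
    """Gram-Schmidt orthogonalization of stored patterns (toy sketch of
    orthogonalized storage; not the paper's devolution scheme)."""
    basis = []
    for p in patterns:
        v = p.astype(float)
        for b in basis:
            v = v - (v @ b) * b          # remove overlap with earlier memories
        norm = np.linalg.norm(v)
        if norm > 1e-10:                 # skip linearly dependent inputs
            basis.append(v / norm)
    return np.array(basis)

pats = np.array([[1, 1, 0, 0], [1, 0, 1, 0], [0, 1, 1, 1]])
B = orthogonalize(pats)
# Stored memories no longer interfere: pairwise overlaps are ~0
print(np.allclose(B @ B.T, np.eye(len(B)), atol=1e-8))  # True
```

The `if norm > 1e-10` branch is exactly the constraint the abstract mentions: a signal that is a linear combination of earlier ones contributes nothing new and would have to be excluded, which is the limitation the paper's model sets out to remove.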
The European Model Company Act: How to choose an efficient regulatory approach?
DEFF Research Database (Denmark)
Cleff, Evelyne Beatrix
) on the organization of company laws reflect an interesting paradigm shift. Whereas, previously company law was primarily focused on preventing abuse, there is now a trend towards legislation that promote commerce and satisfy the needs of business. This means that the goal of economic efficiency is having...... an increasing influence on the framing of company legislation, such as the choice between mandatory or default rules. This article introduces the project "European Company Law and the choice of Regulatory Method" which is carried out in collaboration with the European Model Company Act Group. The project aims...... to analyze the appropriateness of different regulatory methods which are available to achieve the regulatory goals. ...
Monte Carlo based geometrical model for efficiency calculation of an n-type HPGe detector
Energy Technology Data Exchange (ETDEWEB)
Padilla Cabal, Fatima, E-mail: fpadilla@instec.c [Instituto Superior de Tecnologias y Ciencias Aplicadas, 'Quinta de los Molinos' Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba); Lopez-Pino, Neivy; Luis Bernal-Castillo, Jose; Martinez-Palenzuela, Yisel; Aguilar-Mena, Jimmy; D'Alessandro, Katia; Arbelo, Yuniesky; Corrales, Yasser; Diaz, Oscar [Instituto Superior de Tecnologias y Ciencias Aplicadas, 'Quinta de los Molinos' Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba)
2010-12-15
A procedure to optimize the geometrical model of an n-type detector is described. Sixteen lines from seven point sources (²⁴¹Am, ¹³³Ba, ²²Na, ⁶⁰Co, ⁵⁷Co, ¹³⁷Cs and ¹⁵²Eu) placed at three different source-to-detector distances (10, 20 and 30 cm) were used to calibrate a low-background gamma spectrometer between 26 and 1408 keV. Direct Monte Carlo techniques using the MCNPX 2.6 and GEANT4 9.2 codes, and a semi-empirical procedure were performed to obtain theoretical efficiency curves. Since discrepancies were found between experimental and calculated data using the manufacturer parameters of the detector, a detailed study of the crystal dimensions and the geometrical configuration was carried out. The relative deviation from experimental data decreases from a mean value of 18% to 4% after the parameters were optimized.
Efficient response spectrum analysis of a reactor using Model Order Reduction
International Nuclear Information System (INIS)
Oh, Jin Ho; Choi, Jin Bok; Ryu, Jeong Soo
2012-01-01
A response spectrum analysis (RSA) has been widely used to evaluate the structural integrity of various structural components in the nuclear industry. However, solving the large and complex structural systems numerically using the RSA requires a considerable amount of computational resources and time. To overcome this problem, this paper proposes an RSA based on the model order reduction (MOR) technique, achieved by applying a projection from a higher order to a lower order space using Krylov subspaces generated by the Arnoldi algorithm. The dynamic characteristics of the final reduced system are almost identical with those of the full system, by matching the moments of the reduced system with those of the full system up to the required nth order. It is remarkably efficient in terms of computation time and does not require a global system. Numerical examples demonstrate that the proposed method saves computational costs effectively, and provides a reduced system framework that predicts the accurate responses of a global system.
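A minimal Arnoldi-based projection of the kind described (build a Krylov basis, project to a small system, solve, lift back) can be sketched as follows. The matrix, right-hand side, and orders are illustrative assumptions, not the reactor model:

```python
import numpy as np

def arnoldi(A, b, m):
    """Arnoldi iteration: orthonormal basis V of the Krylov subspace
    span{b, Ab, ..., A^(m-1) b} (minimal MOR sketch, not the paper's code)."""
    n = len(b)
    V = np.zeros((n, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m - 1):
        w = A @ V[:, j]
        for i in range(j + 1):               # modified Gram-Schmidt step
            w -= (w @ V[:, i]) * V[:, i]
        V[:, j + 1] = w / np.linalg.norm(w)
    return V

rng = np.random.default_rng(1)
n, m = 50, 15                                # full order 50, reduced order 15
A = np.diag(np.arange(1.0, n + 1)) + 0.01 * rng.normal(size=(n, n))
b = rng.normal(size=n)

V = arnoldi(A, b, m)
Ar, br = V.T @ A @ V, V.T @ b                # small reduced-order system
x_red = V @ np.linalg.solve(Ar, br)          # solve reduced, lift back up
x_full = np.linalg.solve(A, b)
rel_err = np.linalg.norm(x_red - x_full) / np.linalg.norm(x_full)
print(rel_err < 0.1)
```

The payoff mirrors the abstract's claim: the dense solve shrinks from order n to order m, while the projected system retains the dominant response of the full one.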
Jafar-Zanjani, Samad; Cheng, Jierong; Mosallaei, Hossein
2016-04-10
An efficient auxiliary differential equation method for incorporating 2D inhomogeneous dispersive impedance sheets in the finite-difference time-domain solver is presented. The proposed method can successfully solve optical problems of current interest involving 2D sheets. It eliminates the need for ultrafine meshing in the thickness direction, resulting in a significant reduction of computation time and memory requirements. We apply the method to characterize a novel broad-beam leaky-wave antenna created by cascading three sinusoidally modulated reactance surfaces and also to study the effect of curvature on the radiation characteristic of a conformal impedance sheet holographic antenna. Considerable improvement in the simulation time based on our technique in comparison with the traditional volumetric model is reported. Both applications are of great interest in the field of antennas and 2D sheets.
Analysis of the coupling efficiency of a tapered space receiver with a calculus mathematical model
Hu, Qinggui; Mu, Yining
2018-03-01
We establish a calculus-based mathematical model to study the coupling characteristics of tapered optical fibers in a space communications system and obtain the coupling efficiency equation. The equation was then solved numerically using MATLAB. A sample was fabricated by the mature flame-brush technique, and the experiment was performed; the results were in accordance with the theoretical analysis. This shows that the theoretical analysis was correct and indicates that a tapered structure can improve tolerance to misalignment. Project supported by the National Natural Science Foundation of China (grant no. 61275080); 2017 Jilin Province Science and Technology Development Plan, Science and Technology Innovation Fund for Small and Medium Enterprises (20170308029HJ); 'thirteen five' science and technology research project of the Department of Education of Jilin 2016 (16JK009).
Efficient Out of Core Sorting Algorithms for the Parallel Disks Model.
Kundeti, Vamsi; Rajasekaran, Sanguthevar
2011-11-01
In this paper we present efficient algorithms for sorting on the Parallel Disks Model (PDM). Numerous asymptotically optimal algorithms have been proposed in the literature. However, many of these merge-based algorithms have large underlying constants in the time bounds, because they suffer from a lack of read parallelism on the PDM. The irregular consumption of the runs during the merge affects the read parallelism and contributes to the increased sorting time. In this paper we first introduce a novel idea called dirty sequence accumulation that improves the read parallelism. Second, we show analytically that this idea can reduce the number of parallel I/Os required to sort the input to close to the lower bound of [Formula: see text]. We experimentally verify our dirty sequence idea with the standard R-way merge and show that it can reduce the number of parallel I/Os needed to sort on the PDM significantly.
Efficient Analysis of Structures with Rotatable Elements Using Model Order Reduction
Directory of Open Access Journals (Sweden)
G. Fotyga
2016-04-01
This paper presents a novel full-wave technique that allows for fast 3D finite element analysis of waveguide structures containing rotatable tuning elements of arbitrary shape. Rotating these elements changes the resonant frequencies of the structure, which can be exploited in the tuning process to obtain the S-characteristics desired for the device. For fast computation of the response as the tuning elements are rotated, the 3D finite element method is supported by multilevel model-order reduction, orthogonal projection at the boundaries of macromodels, and an operation called macromodel cloning. All time-consuming steps are performed only once, in the preparatory stage. In the tuning stage, only small parts of the domain are updated, by means of a special meshing technique; in effect, the tuning process is performed extremely rapidly. The results of numerical experiments confirm the efficiency and validity of the proposed method.
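The macromodel and cloning machinery is specific to the paper, but the core of any projection-based model-order reduction step is a Galerkin projection of the full system onto a low-dimensional orthonormal basis. A minimal sketch, using a random basis where the paper would use the multilevel reduction of each macromodel:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 200, 10                        # full and reduced model orders
A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))  # full system matrix
b = rng.standard_normal(n)            # excitation vector

# Orthonormal basis V spanning the reduced subspace (random here,
# purely for illustration; a real ROM basis captures the dominant
# behavior of the full model)
V, _ = np.linalg.qr(rng.standard_normal((n, r)))

# Galerkin projection: r x r reduced system instead of n x n
Ar = V.T @ A @ V
br = V.T @ b

x_rom = V @ np.linalg.solve(Ar, br)   # solve small system, lift back
x_full = np.linalg.solve(A, b)        # reference full-order solve
```

The payoff mirrors the paper's tuning stage: once V is built in a preparatory step, each new configuration requires only the cheap r-dimensional solve.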
International Nuclear Information System (INIS)
Lecuyer, Oskar; Bibas, Ruben
2012-01-01
In addition to the existing Climate and Energy package, the European Union (EU) plans to include a binding target to reduce energy consumption. We analyze the rationales the EU invokes to justify such an overlap and develop a minimal common framework to study the interactions arising from combining instruments that reduce emissions, promote renewable energy (RE) production, and reduce energy demand through energy efficiency (EE) investments. We find that although all instruments tend to reduce GHG emissions, and although a carbon price also tends to give the right incentives for RE and EE, combining more than one instrument leads to significant antagonisms with respect to major objectives of the policy package. The model shows and quantifies, within a single framework, the antagonistic effects of the joint promotion of RE and EE, as well as the effects of this joint promotion on the ETS permit price, the wholesale market price, and energy production levels. (authors)
Energy Technology Data Exchange (ETDEWEB)
Sprague, Michael A.; Stickel, Jonathan J.; Sitaraman, Hariswaran; Crawford, Nathan C.; Fischer, Paul F.
2017-04-11
Designing processing equipment for the mixing of settling suspensions is a challenging problem. Achieving low-cost mixing is especially difficult for the application of slowly reacting suspended solids because the cost of impeller power consumption becomes quite high due to the long reaction times (batch mode) or due to large-volume reactors (continuous mode). Further, the usual scale-up metrics for mixing, e.g., constant tip speed and constant power per volume, do not apply well for mixing of suspensions. As an alternative, computational fluid dynamics (CFD) can be useful for analyzing mixing at multiple scales and determining appropriate mixer designs and operating parameters. We developed a mixture model to describe the hydrodynamics of a settling cellulose suspension. The suspension motion is represented as a single velocity field in a computationally efficient Eulerian framework. The solids are represented by a scalar volume-fraction field that undergoes transport due to particle diffusion, settling, fluid advection, and shear stress. A settling model and a viscosity model, both functions of volume fraction, were selected to fit experimental settling and viscosity data, respectively. Simulations were performed with the open-source Nek5000 CFD program, which is based on the high-order spectral-finite-element method. Simulations were performed for the cellulose suspension undergoing mixing in a laboratory-scale vane mixer. The settled-bed heights predicted by the simulations were in semi-quantitative agreement with experimental observations. Further, the simulation results were in quantitative agreement with experimentally obtained torque and mixing-rate data, including a characteristic torque bifurcation. In future work, we plan to couple this CFD model with a reaction-kinetics model for the enzymatic digestion of cellulose, allowing us to predict enzymatic digestion performance for various mixing intensities and novel reactor designs.
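The paper's settling and viscosity closures were fit to experimental data; a common hindered-settling closure of the kind described (a function of solids volume fraction alone) is the Richardson-Zaki form, sketched here with an illustrative exponent rather than the paper's fitted one:

```python
def settling_flux(phi, v0, n=4.65):
    """Richardson-Zaki hindered-settling closure: solids flux
    phi * v_s, with settling speed v_s = v0 * (1 - phi)**n that
    decreases as the suspension gets crowded. v0 is the isolated
    (Stokes) particle settling speed; n = 4.65 is the classic
    low-Reynolds-number exponent, used here for illustration."""
    return phi * v0 * (1.0 - phi) ** n

# a dilute suspension settles at nearly v0 per particle; a crowded
# one is strongly hindered
speed_dilute = settling_flux(0.01, v0=1.0e-4) / 0.01
speed_dense = settling_flux(0.30, v0=1.0e-4) / 0.30
```

A closure of this shape is what feeds the settling term of the scalar volume-fraction transport equation in a mixture model like the one described.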
Loss Model and Efficiency Analysis of Tram Auxiliary Converter Based on a SiC Device
Directory of Open Access Journals (Sweden)
Hao Liu
2017-12-01
Currently, the auxiliary converter in the auxiliary power supply system of a modern tram uses Si IGBTs as its switching devices; with the 1700 V/225 A SiC MOSFET module commercially available from Cree, an auxiliary converter using all-SiC devices is now possible. A SiC auxiliary converter prototype is developed in this study. The authors derive the loss-calculation formulas for the SiC auxiliary converter from the system topology and operating principle, so that the loss of each part of the system can be calculated from the device datasheet. The static and dynamic characteristics of the SiC MOSFET module used in the system are then tested, which helps to fully characterize the performance of the SiC devices and provides data for establishing the PLECS loss simulation model. A PLECS loss simulation model is then set up according to the actual circuit parameters; it can simulate the actual operating conditions of the auxiliary converter system and calculate the loss of each switching device. Finally, the loss of the SiC auxiliary converter prototype is measured, and comparison shows that the loss-calculation theory and the PLECS loss simulation model are valid; thermal images of the system corroborate the conclusions about the loss distribution to some extent. Both methods have the advantages of few variables and fast calculation for high-power applications, and the loss models may aid in optimizing the switching frequency and improving the efficiency of the system.
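The abstract omits the loss formulas themselves. A generic per-device estimate of the kind derivable from a datasheet combines conduction loss (from R_DS(on)) with switching loss (switching energies scaled from the datasheet test point). The linear energy scaling and every number below are illustrative assumptions, not values from the Cree datasheet or the paper:

```python
def mosfet_losses(i_rms, i_avg, v_dc, f_sw, r_dson, e_on, e_off,
                  v_ref, i_ref):
    """Rough per-device loss estimate (watts) from datasheet figures.
    e_on/e_off are switching energies measured at (v_ref, i_ref);
    here they are scaled linearly with blocking voltage and switched
    current, a common first-order approximation."""
    p_cond = i_rms ** 2 * r_dson                       # conduction loss
    scale = (v_dc / v_ref) * (i_avg / i_ref)           # operating point
    p_sw = f_sw * (e_on + e_off) * scale               # switching loss
    return p_cond + p_sw

# illustrative operating point: 20 W conduction + 10 W switching
p_total = mosfet_losses(i_rms=50, i_avg=40, v_dc=750, f_sw=10e3,
                        r_dson=8e-3, e_on=2e-3, e_off=1e-3,
                        v_ref=900, i_ref=100)
```

Summing such per-device terms over the topology is what both the hand calculation and the PLECS model automate, and it makes the switching-frequency/efficiency trade-off explicit: p_sw grows linearly with f_sw.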
Liu, Nan; D'Aunno, Thomas
2012-04-01
To develop simple stylized models for evaluating the productivity and cost-efficiencies of different practice models to involve nurse practitioners (NPs) in primary care, and in particular to generate insights on what affects the performance of these models and how. The productivity of a practice model is defined as the maximum number of patients that can be accounted for by the model under a given timeliness-to-care requirement; cost-efficiency is measured by the corresponding annual cost per patient in that model. Appropriate queueing analysis is conducted to generate formulas and values for these two performance measures. Model parameters for the analysis are extracted from the previous literature and survey reports. Sensitivity analysis is conducted to investigate the model performance under different scenarios and to verify the robustness of findings. Employing an NP, whose salary is usually lower than a primary care physician, may not be cost-efficient, in particular when the NP's capacity is underutilized. Besides provider service rates, workload allocation among providers is one of the most important determinants for the cost-efficiency of a practice model involving NPs. Capacity pooling among providers could be a helpful strategy to improve efficiency in care delivery. The productivity and cost-efficiency of a practice model depend heavily on how providers organize their work and a variety of other factors related to the practice environment. Queueing theory provides useful tools to take into account these factors in making strategic decisions on staffing and panel size selection for a practice model. © Health Research and Educational Trust.
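The specific queueing formulas used in the paper are not reproduced in the abstract. As an illustration of the kind of analysis involved, the basic M/M/1 model relates a provider's demand and service capacity to time-in-system via Little's law; the arrival and service rates below are hypothetical:

```python
def mm1_metrics(arrival_rate, service_rate):
    """Steady-state M/M/1 metrics: utilization rho, mean number of
    patients in the system L = rho / (1 - rho), and mean time in
    system W = L / lambda (Little's law)."""
    rho = arrival_rate / service_rate
    if rho >= 1.0:
        raise ValueError("unstable: demand exceeds capacity")
    L = rho / (1.0 - rho)
    W = L / arrival_rate
    return rho, L, W

# e.g. 3 patients/hour arriving, provider capacity 4/hour:
# 75% utilized, 3 patients in system, 1 hour in system on average
rho, L, W = mm1_metrics(3.0, 4.0)
```

The steep growth of W as rho approaches 1 is exactly why workload allocation between a physician and an underutilized NP dominates the cost-efficiency results, and why pooling capacity across providers helps.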
King, Michael W
2017-11-01
Despite the U.S. substantially outspending peer high income nations with almost 18% of GDP dedicated to health care, on any number of statistical measurements from life expectancy to birth rates to chronic disease,[1] the U.S. achieves inferior health outcomes. In short, Americans receive a very disappointing return on investment on their health care dollars, causing economic and social strain.[2] Accordingly, the debates rage on: what is the top driver of health care spending? Among the culprits: poor communication and coordination among disparate providers, paperwork required by payors and regulations, well-intentioned physicians overprescribing treatments, drugs and devices, outright fraud and abuse, and medical malpractice litigation. Fundamentally, what is the best way to reduce U.S. health care spending, while improving the patient experience of care in terms of quality and satisfaction, and driving better patient health outcomes? Mergers, partnerships, and consolidation in the health care industry, new care delivery models like Accountable Care Organizations and integrated care systems, bundled payments, information technology, innovation through new drugs and new medical devices, or some combination of the foregoing? More importantly, recent ambitious reform efforts fall short of a cohesive approach, leaving fundamental internal inconsistencies across divergent arms of the federal government, raising the issue of whether the U.S. health care system can drive sufficient efficiencies within the current health care and antitrust regulatory environments. While debate rages on Capitol Hill over "repeal and replace," only limited attention has been directed toward reforming the current "fee-for-service" model pursuant to which providers are paid for volume of care rather than quality or outcomes. Indeed, both the Patient Protection and Affordable Care Act ("ACA")[3] and proposals for its replacement focus primarily on the reach and cost of providing coverage for
An Efficient Bundle Adjustment Model Based on Parallax Parametrization for Environmental Monitoring
Chen, R.; Sun, Y. Y.; Lei, Y.
2017-12-01
With the rapid development of Unmanned Aircraft Systems (UAS), more and more research fields have been successfully equipped with this mature technology, among them environmental monitoring. One difficult task is how to acquire the accurate position of a ground object in order to reconstruct the scene more accurately. To handle this problem, we combine the bundle adjustment method from photogrammetry with parallax parametrization from computer vision to create a new method called APCP (aerial polar-coordinate photogrammetry). One impressive advantage of this method over the traditional one is that a 3-dimensional point in space is represented by three angles (elevation angle, azimuth angle, and parallax angle) rather than by XYZ values. As the basis for APCP, bundle adjustment can be used to optimize the UAS sensors' poses accurately and reconstruct 3D models of the environment, thus serving as the criterion of accurate position for monitoring. To verify the effectiveness of the proposed method, we test on several UAV datasets obtained by non-metric digital cameras with large attitude angles, and we find that our method achieves 1-2 times better efficiency, with no loss of accuracy, compared with traditional ones. The classical nonlinear optimization of the bundle adjustment model based on rectangular coordinates depends strongly on the initial values, making it unable to converge fast or to a stable state. In contrast, because APCP represents points in space with angles, it can deal with quite complex UAS monitoring conditions, including sequential images focusing on one object with zero parallax angle. In brief, this paper presents the parametrization of 3D feature points based on APCP and derives a full bundle adjustment model and the corresponding nonlinear optimization problems based on this method. In addition, we analyze the influence of convergence and
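A minimal sketch of the parallax-angle idea, using our own simplified geometry rather than the paper's exact APCP formulation: a point is encoded by its bearing from a main anchor (elevation and azimuth) plus the parallax angle subtended at the point by a second anchor, and the sine rule in the anchor-point triangle recovers the depth. Note that this naive closed-form recovery divides by sin(omega) and so breaks down at zero parallax, which is precisely the degenerate case the angular parametrization is designed to keep well-posed inside the optimization:

```python
import numpy as np

def bearing(elev, azim):
    """Unit direction from elevation and azimuth angles (radians)."""
    return np.array([np.cos(elev) * np.cos(azim),
                     np.cos(elev) * np.sin(azim),
                     np.sin(elev)])

def point_from_parallax(t_main, t_assoc, elev, azim, omega):
    """Recover a 3D point from (elevation, azimuth, parallax).
    In the triangle (main anchor, associated anchor, point), omega
    is the angle at the point and alpha the angle at the main
    anchor, so the sine rule gives the depth along the bearing:
    depth = |baseline| * sin(alpha + omega) / sin(omega)."""
    n = bearing(elev, azim)                  # ray from main anchor
    b = t_assoc - t_main                     # anchor baseline
    blen = np.linalg.norm(b)
    alpha = np.arccos(np.clip(np.dot(n, b) / blen, -1.0, 1.0))
    depth = blen * np.sin(alpha + omega) / np.sin(omega)
    return t_main + depth * n

# round trip for a known point (anchors at the origin and (1,0,0))
t_m, t_a = np.zeros(3), np.array([1.0, 0.0, 0.0])
p = np.array([0.5, 1.0, 0.0])
ray = (p - t_m) / np.linalg.norm(p - t_m)
elev, azim = np.arcsin(ray[2]), np.arctan2(ray[1], ray[0])
u, v = t_m - p, t_a - p
omega = np.arccos(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
p_rec = point_from_parallax(t_m, t_a, elev, azim, omega)  # ~ [0.5, 1, 0]
```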
Brovelli, A.; Robinson, C. E.; Barry, D. A.; Gerhard, J.
2009-12-01
Enhanced reductive dechlorination is a viable technology for in situ remediation of chlorinated solvent DNAPL source areas. Although in recent years increased understanding of this technology has led to more rapid dechlorination rates, complete dechlorination can be hindered by unfavorable conditions. Hydrochloric acid produced from dechlorination and organic acids generated from electron donor fermentation can lead to significant groundwater acidification. Adverse pH conditions can inhibit the activity of dehalogenating microorganisms and thus slow or stall the remediation process. The extent of acidification likely to occur at a contaminated site depends on a number of factors including (1) the extent of dechlorination, (2) the pH-sensitivity of dechlorinating bacteria, and (3) the geochemical composition of the soil and water, in particular the soil’s natural buffering capacity. The substantial mass of solvents available for dechlorination when treating DNAPL source zones means that these applications are particularly susceptible to acidification. In this study a reactive transport biogeochemical model was developed to investigate the chemical and physical parameters that control the build-up of acidity and the subsequent remediation efficiency. The model accounts for the site water chemistry, mineral precipitation and dissolution kinetics, electron donor fermentation, gas phase formation, competing electron-accepting processes (e.g., sulfate and iron reduction) and the sensitivity of microbial processes to pH. Confidence in the model was achieved by simulating a well-documented field study, for which the 2-D field-scale model was able to reproduce long-term variations of pH and the concurrent build-up of reaction products. Sensitivity analyses indicated that groundwater flow can reduce acidity build-up when the rate of advection is comparable to or larger than the rate of dechlorination. The extent of pH change is highly dependent on the presence of
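A back-of-the-envelope illustration of why DNAPL source zones are acidification-prone (our own worked example, not from the paper): complete hydrogenolysis of PCE (C2Cl4) to ethene releases four moles of HCl per mole of PCE, so even a modest dechlorinated mass drives pH sharply down in poorly buffered water:

```python
import math

def ph_unbuffered(pce_mol_per_l):
    """pH after complete dechlorination of PCE to ethene in water
    with no buffering: each mole of PCE releases 4 mol H+ (as HCl).
    Ignores activity corrections and the ~1e-7 M autoionization
    floor, so it is only meaningful when 4*C >> 1e-7 M."""
    h_plus = 4.0 * pce_mol_per_l
    return -math.log10(h_plus)

# 1 mM PCE fully dechlorinated -> 4 mM H+ -> pH ~ 2.4
ph = ph_unbuffered(1.0e-3)
```

A drop of this magnitude is far below the tolerance of dehalogenating microbial communities, which is why the soil's buffering capacity is listed among the controlling factors.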
Analysis of financing efficiency of big data industry in Guizhou province based on DEA models
Li, Chenggang; Pan, Kang; Luo, Cong
2018-03-01
Taking 20 listed enterprises of the big data industry in Guizhou province as samples, this paper uses the DEA method to evaluate the financing efficiency of the big data industry in Guizhou province. The results show that the pure technical efficiency of big data enterprises in Guizhou province is high, with a mean value of 0.925. The mean scale efficiency is 0.749, and the mean comprehensive efficiency is 0.693, so the overall financing efficiency is low. Based on these results, this paper puts forward policy recommendations for improving the financing efficiency of the big data industry in Guizhou.
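The DEA computation behind such scores solves a linear program per decision-making unit; in the single-input, single-output special case the CCR (constant returns to scale) efficiency collapses to each unit's output/input ratio normalized by the best ratio in the sample, which is easy to sketch. The firms and figures below are hypothetical, not the paper's data:

```python
def ccr_efficiency(inputs, outputs):
    """CCR (constant-returns-to-scale) DEA efficiency for the
    single-input, single-output case: each unit's output/input
    ratio divided by the best ratio, so the frontier unit scores
    exactly 1.0 and the rest score in (0, 1)."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# hypothetical firms: capital raised (input) vs. revenue (output)
eff = ccr_efficiency(inputs=[10, 20, 40], outputs=[5, 15, 20])
# ratios 0.5, 0.75, 0.5 -> efficiencies ~0.667, 1.0, ~0.667
```

With multiple inputs and outputs the same idea becomes an LP that searches for the most favorable weights per unit, which is what produces the separate technical, scale, and comprehensive scores reported in the abstract.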