WorldWideScience

Sample records for modeling shows efficiencies

  1. Skeletal Muscle Differentiation on a Chip Shows Human Donor Mesoangioblasts' Efficiency in Restoring Dystrophin in a Duchenne Muscular Dystrophy Model.

    Science.gov (United States)

    Serena, Elena; Zatti, Susi; Zoso, Alice; Lo Verso, Francesca; Tedesco, F Saverio; Cossu, Giulio; Elvassore, Nicola

    2016-12-01

    Restoration of the protein dystrophin on the muscle membrane is the goal of many research lines aimed at curing Duchenne muscular dystrophy (DMD). Results of ongoing preclinical and clinical trials suggest that partial restoration of dystrophin might be sufficient to significantly reduce muscle damage. Different myogenic progenitors are candidates for cell therapy of muscular dystrophies, but only satellite cells and pericytes have already entered clinical experimentation. This study aimed to provide in vitro quantitative evidence of the ability of mesoangioblasts to restore dystrophin, in terms of protein accumulation and distribution, within myotubes derived from DMD patients, using a microengineered model. We designed an ad hoc experimental strategy to miniaturize on a chip the standard process of muscle regeneration, independent of variables such as inflammation and fibrosis. It is based on the coculture, at different ratios, of human dystrophin-positive myogenic progenitors and dystrophin-negative myoblasts on a substrate with muscle-like physiological stiffness and cell micropatterns. Results showed that both healthy myoblasts and mesoangioblasts restored dystrophin expression in DMD myotubes. However, mesoangioblasts showed unexpected efficiency with respect to myoblasts in dystrophin production in terms of the amount of protein produced (40% vs. 15%) and length of the dystrophin membrane domain (210-240 µm vs. 40-70 µm). These results show that our microscaled in vitro model of human DMD skeletal muscle validated previous in vivo preclinical work and may be used to predict the efficacy of new methods aimed at enhancing dystrophin accumulation and distribution before they are tested in vivo, reducing the time, costs, and variability of clinical experimentation.

  2. Skeletal Muscle Differentiation on a Chip Shows Human Donor Mesoangioblasts’ Efficiency in Restoring Dystrophin in a Duchenne Muscular Dystrophy Model

    Science.gov (United States)

    Serena, Elena; Zatti, Susi; Zoso, Alice; Lo Verso, Francesca; Tedesco, F. Saverio; Cossu, Giulio

    2016-01-01

    Restoration of the protein dystrophin on the muscle membrane is the goal of many research lines aimed at curing Duchenne muscular dystrophy (DMD). Results of ongoing preclinical and clinical trials suggest that partial restoration of dystrophin might be sufficient to significantly reduce muscle damage. Different myogenic progenitors are candidates for cell therapy of muscular dystrophies, but only satellite cells and pericytes have already entered clinical experimentation. This study aimed to provide in vitro quantitative evidence of the ability of mesoangioblasts to restore dystrophin, in terms of protein accumulation and distribution, within myotubes derived from DMD patients, using a microengineered model. We designed an ad hoc experimental strategy to miniaturize on a chip the standard process of muscle regeneration, independent of variables such as inflammation and fibrosis. It is based on the coculture, at different ratios, of human dystrophin-positive myogenic progenitors and dystrophin-negative myoblasts on a substrate with muscle-like physiological stiffness and cell micropatterns. Results showed that both healthy myoblasts and mesoangioblasts restored dystrophin expression in DMD myotubes. However, mesoangioblasts showed unexpected efficiency with respect to myoblasts in dystrophin production in terms of the amount of protein produced (40% vs. 15%) and length of the dystrophin membrane domain (210–240 µm vs. 40–70 µm). These results show that our microscaled in vitro model of human DMD skeletal muscle validated previous in vivo preclinical work and may be used to predict the efficacy of new methods aimed at enhancing dystrophin accumulation and distribution before they are tested in vivo, reducing the time, costs, and variability of clinical experimentation.

  3. Bioavailability of particulate metal to zebra mussels: Biodynamic modelling shows that assimilation efficiencies are site-specific

    Energy Technology Data Exchange (ETDEWEB)

    Bourgeault, Adeline, E-mail: bourgeault@ensil.unilim.fr [Cemagref, Unité de Recherche Hydrosystèmes et Bioprocédés, 1 rue Pierre-Gilles de Gennes, 92761 Antony (France); FIRE, FR-3020, 4 place Jussieu, 75005 Paris (France); Gourlay-Francé, Catherine, E-mail: catherine.gourlay@cemagref.fr [Cemagref, Unité de Recherche Hydrosystèmes et Bioprocédés, 1 rue Pierre-Gilles de Gennes, 92761 Antony (France); FIRE, FR-3020, 4 place Jussieu, 75005 Paris (France); Priadi, Cindy, E-mail: cindy.priadi@eng.ui.ac.id [LSCE/IPSL CEA-CNRS-UVSQ, Avenue de la Terrasse, 91198 Gif-sur-Yvette (France); Ayrault, Sophie, E-mail: Sophie.Ayrault@lsce.ipsl.fr [LSCE/IPSL CEA-CNRS-UVSQ, Avenue de la Terrasse, 91198 Gif-sur-Yvette (France); Tusseau-Vuillemin, Marie-Hélène, E-mail: Marie-helene.tusseau@ifremer.fr [IFREMER Technopolis 40, 155 rue Jean-Jacques Rousseau, 92138 Issy-Les-Moulineaux (France)

    2011-12-15

    This study investigates the ability of the biodynamic model to predict the trophic bioaccumulation of cadmium (Cd), chromium (Cr), copper (Cu), nickel (Ni) and zinc (Zn) in a freshwater bivalve. Zebra mussels were transplanted to three sites along the Seine River (France) and collected monthly for 11 months. Measurements of the metal body burdens in mussels were compared with the predictions from the biodynamic model. The exchangeable fraction of metal particles did not account for the bioavailability of particulate metals, since it did not capture the differences between sites. The assimilation efficiency (AE) parameter is necessary to take into account biotic factors influencing particulate metal bioavailability. The biodynamic model, applied with AEs from the literature, overestimated the measured concentrations in zebra mussels, the extent of overestimation being site-specific. Therefore, an original methodology was proposed for in situ AE measurements for each site and metal. Highlights: the exchangeable fraction of metal particles did not account for the bioavailability of particulate metals; site-specific biodynamic parameters are needed; field-determined AEs provide a good fit between the biodynamic model predictions and bioaccumulation measurements. The interpretation of metal bioaccumulation in transplanted zebra mussels with biodynamic modelling highlights the need for site-specific assimilation efficiencies of particulate metals.

  4. Bioavailability of particulate metal to zebra mussels: biodynamic modelling shows that assimilation efficiencies are site-specific.

    Science.gov (United States)

    Bourgeault, Adeline; Gourlay-Francé, Catherine; Priadi, Cindy; Ayrault, Sophie; Tusseau-Vuillemin, Marie-Hélène

    2011-12-01

    This study investigates the ability of the biodynamic model to predict the trophic bioaccumulation of cadmium (Cd), chromium (Cr), copper (Cu), nickel (Ni) and zinc (Zn) in a freshwater bivalve. Zebra mussels were transplanted to three sites along the Seine River (France) and collected monthly for 11 months. Measurements of the metal body burdens in mussels were compared with the predictions from the biodynamic model. The exchangeable fraction of metal particles did not account for the bioavailability of particulate metals, since it did not capture the differences between sites. The assimilation efficiency (AE) parameter is necessary to take into account biotic factors influencing particulate metal bioavailability. The biodynamic model, applied with AEs from the literature, overestimated the measured concentrations in zebra mussels, the extent of overestimation being site-specific. Therefore, an original methodology was proposed for in situ AE measurements for each site and metal.
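
    For context, the biodynamic bioaccumulation models discussed in the two records above are conventionally written as a mass balance of uptake and loss terms; a minimal sketch of that standard form is given below. The symbols follow common usage and are an assumption, not the papers' exact parameterization.

        % Generic biodynamic bioaccumulation model (standard form, illustrative only)
        \frac{dC}{dt} = k_u\,C_w + AE \cdot IR \cdot C_f - (k_e + g)\,C
        % C  : metal concentration in the mussel       k_u: uptake rate constant from water
        % C_w: dissolved metal concentration           AE : assimilation efficiency of ingested particles
        % IR : ingestion rate                          C_f: metal concentration in the food/particles
        % k_e: efflux rate constant                    g  : growth dilution rate

    At steady state this gives C_ss = (k_u C_w + AE·IR·C_f)/(k_e + g), which is why an AE taken from the literature that is too high translates directly into overestimated predicted body burdens, the pattern reported above.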

  5. An efficiency correction model

    NARCIS (Netherlands)

    Francke, M.K.; de Vos, A.F.

    2009-01-01

    We analyze a dataset containing costs and outputs of 67 American local exchange carriers in a period of 11 years. This data has been used to judge the efficiency of BT and KPN using static stochastic frontier models. We show that these models are dynamically misspecified. As an alternative we
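
    As background for the record above (whose abstract is cut off by the source), the static stochastic frontier specification it refers to is the textbook one; a cost-frontier sketch is given below, with notation that is assumed rather than taken from the paper.

        % Static stochastic cost frontier (textbook form, illustrative only)
        \ln C_{it} = x_{it}^{\top}\beta + v_{it} + u_{it}, \qquad v_{it} \sim N(0, \sigma_v^2), \quad u_{it} \ge 0
        % C_it: observed cost of carrier i in year t;  x_it: outputs and input prices;
        % v_it: symmetric noise;  u_it: one-sided inefficiency term.
        % "Dynamically misspecified" here roughly means that treating v and u as i.i.d. over time
        % ignores the panel's time-series structure, which is the criticism raised above.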

  6. Information Systems Efficiency Model

    Directory of Open Access Journals (Sweden)

    Milos Koch

    2017-07-01

    This contribution discusses the basic concept of creating a new model for the efficiency and effectiveness assessment of company information systems. The present trends in this field are taken into account, and the attributes of measuring the optimal solutions for a company’s ICT (implementation, functionality, service, innovations, safety, relationships, costs, etc.) are retained. The proposal of a new model of assessment comes from our experience with formerly implemented and employed methods, methods which we have modified over time and adapted to companies’ needs but also to the needs of our research that has been done through the ZEFIS portal. The most noteworthy of them is the HOS method that we have discussed in a number of forums. Its main feature is the fact that it respects the complexity of an information system in correlation with the balanced state of its individual parts.

  7. Efficiently adapting graphical models for selectivity estimation

    DEFF Research Database (Denmark)

    Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.

    2013-01-01

    in estimation accuracy. We show how to efficiently construct such a graphical model from the database using only two-way join queries, and we show how to perform selectivity estimation in a highly efficient manner. We integrate our algorithms into the PostgreSQL DBMS. Experimental results indicate...

  8. SPAR Model Structural Efficiencies

    Energy Technology Data Exchange (ETDEWEB)

    John Schroeder; Dan Henry

    2013-04-01

    The Nuclear Regulatory Commission (NRC) and the Electric Power Research Institute (EPRI) are supporting initiatives aimed at improving the quality of probabilistic risk assessments (PRAs). Included in these initiatives is the resolution of key technical issues that have been judged to have the most significant influence on the baseline core damage frequency of the NRC’s Standardized Plant Analysis Risk (SPAR) models and licensee PRA models. Previous work addressed issues associated with support system initiating event analysis and loss of off-site power/station blackout analysis. The key technical issues were: development of a standard methodology and implementation of support system initiating events; treatment of loss of offsite power; and development of a standard approach for emergency core cooling following containment failure. Some of the related issues were not fully resolved, and this project continues the effort to resolve outstanding issues. The work scope was intended to include substantial collaboration with EPRI; however, EPRI has had other higher-priority initiatives to support. Therefore this project has addressed SPAR modeling issues. The issues addressed are SPAR model transparency; common cause failure modeling deficiencies and approaches; AC and DC modeling deficiencies and approaches; and instrumentation and control system modeling deficiencies and approaches.

  9. Flight Test Maneuvers for Efficient Aerodynamic Modeling

    Science.gov (United States)

    Morelli, Eugene A.

    2011-01-01

    Novel flight test maneuvers for efficient aerodynamic modeling were developed and demonstrated in flight. Orthogonal optimized multi-sine inputs were applied to aircraft control surfaces to excite aircraft dynamic response in all six degrees of freedom simultaneously while keeping the aircraft close to chosen reference flight conditions. Each maneuver was designed for a specific modeling task that cannot be adequately or efficiently accomplished using conventional flight test maneuvers. All of the new maneuvers were first described and explained, then demonstrated on a subscale jet transport aircraft in flight. Real-time and post-flight modeling results obtained using equation-error parameter estimation in the frequency domain were used to show the effectiveness and efficiency of the new maneuvers, as well as the quality of the aerodynamic models that can be identified from the resultant flight data.
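
    The orthogonal optimized multi-sine inputs mentioned above are typically built from harmonically related sinusoids, with a disjoint set of harmonics assigned to each control surface; the generic form is sketched below as an illustration of the technique, not the exact inputs flown in these tests.

        % Orthogonal multisine input applied to control surface j (generic form)
        u_j(t) = \sum_{k \in K_j} A_k \sin\!\left(\frac{2\pi k t}{T} + \phi_k\right)
        % T  : maneuver duration, so every component is a harmonic of the fundamental 1/T
        % K_j: harmonic indices assigned to surface j (disjoint sets give mutual orthogonality)
        % A_k: amplitudes; the phases phi_k are optimized to minimize the relative peak factor,
        %      which keeps the aircraft close to the reference flight condition while exciting all axes.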

  10. Gut Transcriptome Analysis Shows Different Food Utilization Efficiency by the Grasshopper Oedaleous asiaticus (Orthoptera: Acrididae).

    Science.gov (United States)

    Huang, Xunbing; McNeill, Mark Richard; Ma, Jingchuan; Qin, Xinghu; Tu, Xiongbing; Cao, Guangchun; Wang, Guangjun; Nong, Xiangqun; Zhang, Zehua

    2017-08-01

    Oedaleus asiaticus B. Bienko is a persistent pest occurring in north Asian grasslands. We found that O. asiaticus feeding on Stipa krylovii Roshev. had higher approximate digestibility (AD), efficiency of conversion of ingested food (ECI), and efficiency of conversion of digested food (ECD), compared with cohorts feeding on Leymus chinensis (Trin.) Tzvel, Artemisia frigida Willd., or Cleistogenes squarrosa (Trin.) Keng. Although this indicated high food utilization efficiency for S. krylovii, the physiological processes and molecular mechanisms underlying these biological observations are not well understood. Transcriptome analysis was used to examine how gene expression levels in the O. asiaticus gut are altered by feeding on the four plant species. Nymphs (fifth-instar females) that fed on S. krylovii had the largest variation in gene expression profiles, with a total of 88 genes significantly upregulated compared with those feeding on the other three plants, mainly including genes for protein, carbohydrate, and lipid digestion. GO and KEGG enrichment also showed that feeding on S. krylovii could upregulate nutrition digestion-related molecular functions, biological processes, and pathways. These changes in transcript levels indicate that the activation of digestive enzymes and metabolic pathways can explain the high food utilization efficiency of S. krylovii by O. asiaticus.
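
    The three food-utilization indices named in this record (AD, ECI, ECD) are the standard Waldbauer nutritional indices; their usual definitions are sketched below, on the assumption that the paper follows these conventional forms.

        % Standard Waldbauer nutritional indices (conventional definitions)
        AD = \frac{I - F}{I}, \qquad ECI = \frac{B}{I}, \qquad ECD = \frac{B}{I - F}
        % I: dry mass of food ingested, F: dry mass of frass (egesta),
        % B: dry biomass gained by the insect over the feeding period.
        % Note ECI = AD x ECD, so a diet can raise ECI through digestibility, conversion efficiency, or both.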

  11. System with embedded drug release and nanoparticle degradation sensor showing efficient rifampicin delivery into macrophages.

    Science.gov (United States)

    Trousil, Jiří; Filippov, Sergey K; Hrubý, Martin; Mazel, Tomáš; Syrová, Zdeňka; Cmarko, Dušan; Svidenská, Silvie; Matějková, Jana; Kováčik, Lubomír; Porsch, Bedřich; Konefał, Rafał; Lund, Reidar; Nyström, Bo; Raška, Ivan; Štěpánek, Petr

    2017-01-01

    We have developed a biodegradable, biocompatible system for the delivery of the antituberculotic antibiotic rifampicin with a built-in drug release and nanoparticle degradation fluorescence sensor. Polymer nanoparticles based on poly(ethylene oxide) monomethyl ether-block-poly(ε-caprolactone) were noncovalently loaded with rifampicin, a combination that, to the best of our knowledge, was not previously described in the literature and which showed significant benefits. The nanoparticles contain a Förster resonance energy transfer (FRET) system that allows real-time assessment of drug release not only in vitro, but also in living macrophages where the mycobacteria typically reside as hard-to-kill intracellular parasites. The fluorophore also enables in situ monitoring of the enzymatic nanoparticle degradation in the macrophages. We show that the nanoparticles are efficiently taken up by macrophages, where they are very quickly associated with the lysosomal compartment. After drug release, the nanoparticles in the macrophages are enzymatically degraded, with a half-life of 88 ± 11 min.

  12. Efficient Turbulence Modeling for CFD Wake Simulations

    DEFF Research Database (Denmark)

    van der Laan, Paul

    , that can accurately and efficiently simulate wind turbine wakes. The linear k-ε eddy viscosity model (EVM) is a popular turbulence model in RANS; however, it underpredicts the velocity wake deficit and cannot predict the anisotropic Reynolds-stresses in the wake. In the current work, nonlinear eddy...... viscosity models (NLEVM) are applied to wind turbine wakes. NLEVMs can model anisotropic turbulence through a nonlinear stress-strain relation, and they can improve the velocity deficit by the use of a variable eddy viscosity coefficient, that delays the wake recovery. Unfortunately, all tested NLEVMs show...... numerically unstable behavior for fine grids, which inhibits a grid dependency study for numerical verification. Therefore, a simpler EVM is proposed, labeled as the k-ε - fp EVM, that has a linear stress-strain relation, but still has a variable eddy viscosity coefficient. The k-ε - fp EVM is numerically...

  13. Modelling water uptake efficiency of root systems

    Science.gov (United States)

    Leitner, Daniel; Tron, Stefania; Schröder, Natalie; Bodner, Gernot; Javaux, Mathieu; Vanderborght, Jan; Vereecken, Harry; Schnepf, Andrea

    2016-04-01

    Water uptake is crucial for plant productivity. Trait-based breeding for more water-efficient crops will enable sustainable agricultural management under specific pedoclimatic conditions and can increase the drought resistance of plants. Mathematical modelling can be used to find suitable root system traits for better water uptake efficiency, defined as the amount of water taken up per unit of root biomass. This approach requires long simulation times and a large number of simulation runs, since we test different root systems under different pedoclimatic conditions. In this work, we model water movement by the 1-dimensional Richards equation with the soil hydraulic properties described according to the van Genuchten model. Climatic conditions serve as the upper boundary condition. The root system grows during the simulation period and water uptake is calculated via a sink term (after Tron et al. 2015). The goal of this work is to compare different free software tools based on different numerical schemes to solve the model. We compare implementations using DUMUX (based on finite volumes), Hydrus 1D (based on finite elements), and a Matlab implementation of Van Dam & Feddes (2000) (based on finite differences). We analyse the methods for accuracy, speed and flexibility. Using this model case study, we can clearly show the impact of various root system traits on water uptake efficiency. Furthermore, we can quantify frequent simplifications that are introduced in the modelling step, like considering a static root system instead of a growing one, or considering a sink term based on root density instead of considering the full root hydraulic model (Javaux et al. 2008). References Tron, S., Bodner, G., Laio, F., Ridolfi, L., & Leitner, D. (2015). Can diversity in root architecture explain plant water use efficiency? A modeling study. Ecological modelling, 312, 200-210. Van Dam, J. C., & Feddes, R. A. (2000). Numerical simulation of infiltration, evaporation and shallow
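
    The governing equations named in this record are standard; a minimal sketch of the 1-dimensional Richards equation with a root-water-uptake sink and the van Genuchten retention curve is given below. These are the generic textbook forms, not the specific implementations compared in the study.

        % 1-D Richards equation with a sink term for root water uptake (generic form)
        \frac{\partial \theta}{\partial t} = \frac{\partial}{\partial z}\left[K(h)\left(\frac{\partial h}{\partial z} + 1\right)\right] - S(z,t)
        % van Genuchten water retention curve:
        \theta(h) = \theta_r + \frac{\theta_s - \theta_r}{\left[1 + (\alpha\,|h|)^n\right]^{m}}, \qquad m = 1 - \tfrac{1}{n}
        % theta: volumetric water content; h: pressure head; K(h): unsaturated hydraulic conductivity;
        % S(z,t): sink term for root water uptake (the part that differs between root-system scenarios);
        % theta_r, theta_s, alpha, n: van Genuchten parameters of the soil.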

  14. Showing that the race model inequality is not violated

    DEFF Research Database (Denmark)

    Gondan, Matthias; Riehl, Verena; Blurton, Steven Paul

    2012-01-01

    When participants are asked to respond in the same way to stimuli from different sources (e.g., auditory and visual), responses are often observed to be substantially faster when both stimuli are presented simultaneously (redundancy gain). Different models account for this effect, the two most
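
    For reference, the race model inequality in this title is Miller's bound on redundant-target response times; its standard statement is sketched below (textbook form, not quoted from the paper).

        % Miller's race model inequality (standard form)
        P(RT \le t \mid AV) \;\le\; P(RT \le t \mid A) + P(RT \le t \mid V) \qquad \text{for all } t
        % RT: response time; A, V: unimodal auditory and visual stimuli; AV: redundant bimodal stimulus.
        % Race models predict that the inequality holds for every t, so showing it is not violated
        % supports a race (statistical facilitation) account rather than coactivation.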

  15. Finishing pigs that are divergent in feed efficiency show small differences in intestinal functionality and structure.

    Directory of Open Access Journals (Sweden)

    Barbara U Metzler-Zebeli

    Full Text Available Controversial information is available regarding the feed efficiency-related variation in intestinal size, structure and functionality in pigs. The present objective was therefore to investigate the differences in visceral organ size, intestinal morphology, mucosal enzyme activity, intestinal integrity and related gene expression in low and high RFI pigs which were reared at three different geographical locations (Austria, AT; Northern Ireland, NI; Republic of Ireland, ROI using similar protocols. Pigs (n = 369 were ranked for their RFI between days 42 and 91 postweaning and low and high RFI pigs (n = 16 from AT, n = 24 from NI, and n = 60 from ROI were selected. Pigs were sacrificed and sampled on ~day 110 of life. In general, RFI-related variation in intestinal size, structure and function was small. Some energy saving mechanisms and enhanced digestive and absorptive capacity were indicated in low versus high RFI pigs by shorter crypts, higher duodenal lactase and maltase activity and greater mucosal permeability (P < 0.05, but differences were mainly seen in pigs from AT and to a lesser degree in pigs from ROI. Additionally, low RFI pigs from AT had more goblet cells in duodenum but fewer in jejunum compared to high RFI pigs (P < 0.05. Together with the lower expression of TLR4 and TNFA in low versus high RFI pigs from AT and ROI (P < 0.05, these results might indicate differences in the innate immune response between low and high RFI pigs. Results demonstrated that the variation in the size of visceral organs and intestinal structure and functionality was greater between geographic location (local environmental factors than between RFI ranks of pigs. In conclusion, present results support previous findings that the intestinal size, structure and functionality do not significantly contribute to variation in RFI of pigs.

  16. Efficient estimation of semiparametric copula models for bivariate survival data

    KAUST Repository

    Cheng, Guang

    2014-01-01

    A semiparametric copula model for bivariate survival data is characterized by a parametric copula model of dependence and nonparametric models of two marginal survival functions. Efficient estimation for the semiparametric copula model has been recently studied for the complete data case. When the survival data are censored, semiparametric efficient estimation has only been considered for some specific copula models such as the Gaussian copulas. In this paper, we obtain the semiparametric efficiency bound and efficient estimation for general semiparametric copula models for possibly censored data. We construct an approximate maximum likelihood estimator by approximating the log baseline hazard functions with spline functions. We show that our estimates of the copula dependence parameter and the survival functions are asymptotically normal and efficient. Simple consistent covariance estimators are also provided. Numerical results are used to illustrate the finite sample performance of the proposed estimators.
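
    For readers unfamiliar with the setup, a semiparametric survival copula of the kind estimated above can be written in the following generic form; C_theta and the margins S_1, S_2 are placeholders in standard notation, not symbols taken from the paper.

        % Semiparametric copula model for bivariate survival data (generic form)
        S(t_1, t_2) = P(T_1 > t_1,\, T_2 > t_2) = C_{\theta}\big(S_1(t_1), S_2(t_2)\big)
        % C_theta : parametric copula capturing dependence (theta is the dependence parameter)
        % S_1, S_2: unspecified (nonparametric) marginal survival functions
        % The estimator described above approximates the log baseline hazards of the margins with
        % splines and maximizes the resulting approximate likelihood under right censoring.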

  17. Modeling Fuel Efficiency: MPG or GPHM?

    Science.gov (United States)

    Bartkovich, Kevin G.

    2013-01-01

    The standard for measuring fuel efficiency in the U.S. has been miles per gallon (mpg). However, the Environmental Protection Agency's (EPA) switch in rating fuel efficiency from miles per gallon to gallons per hundred miles with the 2013 model-year cars leads to interesting and relevant mathematics with real-world connections. By modeling…
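
    The arithmetic behind the mpg-versus-gallons-per-hundred-miles comparison is simple but worth making explicit; the short sketch below (illustrative helper functions, not code from the article) shows why equal mpg gains save very different amounts of fuel.

        # Illustrative conversion between mpg and gallons per hundred miles (GPHM).
        def mpg_to_gphm(mpg: float) -> float:
            """Gallons needed to drive 100 miles at the given fuel economy."""
            return 100.0 / mpg

        def fuel_saved_per_100mi(mpg_old: float, mpg_new: float) -> float:
            """Gallons saved per 100 miles when upgrading from mpg_old to mpg_new."""
            return mpg_to_gphm(mpg_old) - mpg_to_gphm(mpg_new)

        if __name__ == "__main__":
            # The same 10-mpg improvement is worth far more at the low end of the scale:
            print(fuel_saved_per_100mi(15, 25))  # about 2.67 gallons saved per 100 miles
            print(fuel_saved_per_100mi(40, 50))  # about 0.50 gallons saved per 100 miles

    Because fuel consumption is proportional to 1/mpg, gallons per hundred miles is the linear quantity, which is the pedagogical point of the switch described above.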

  18. BOREAS TE-17 Production Efficiency Model Images

    Data.gov (United States)

    National Aeronautics and Space Administration — A BOREAS version of the Global Production Efficiency Model(www.inform.umd.edu/glopem) was developed by TE-17 to generate maps of gross and net primary production,...

  19. Modelling fluidized catalytic cracking unit stripper efficiency

    OpenAIRE

    García-Dopico M.; García A.

    2015-01-01

    This paper presents our modelling of an FCCU stripper, following our earlier research. The model evaluates stripper efficiency against the most important variables: pressure, temperature, residence time and steam flow. Few models in the literature model the stripper, and those that do usually consider only one variable. Nevertheless, there is general agreement on the importance of the stripper in the overall process, and the fact that there are few models maybe i...

  20. Geometrical efficiency in computerized tomography: generalized model

    International Nuclear Information System (INIS)

    Costa, P.R.; Robilotta, C.C.

    1992-01-01

    A simplified model for producing sensitivity and exposure profiles in a computerized tomographic system was recently developed, allowing the forecast of profile behaviour at the rotation center of the system. The generalization of this model to an arbitrary point of the image plane was described, and the geometrical efficiency could then be evaluated. (C.G.C.)

  1. Efficient experimental designs for sigmoidal growth models

    OpenAIRE

    Dette, Holger; Pepelyshev, Andrey

    2005-01-01

    For the Weibull- and Richards-regression model robust designs are determined by maximizing a minimum of D- or D1-efficiencies, taken over a certain range of the non-linear parameters. It is demonstrated that the derived designs yield a satisfactory solution of the optimal design problem for this type of model in the sense that these designs are efficient and robust with respect to misspecification of the unknown parameters. Moreover, the designs can also be used for testing the postulated for...

  2. Modeling of alpha mass-efficiency curve

    International Nuclear Information System (INIS)

    Semkow, T.M.; Jeter, H.W.; Parsa, B.; Parekh, P.P.; Haines, D.K.; Bari, A.

    2005-01-01

    We present a model for the efficiency of a detector counting gross α radioactivity from both thin and thick samples, corresponding to low and high sample masses in the counting planchette. The model includes self-absorption of α particles in the sample, energy loss in the absorber, range straggling, as well as detector edge effects. The surface roughness of the sample is treated in terms of fractal geometry. The model reveals a linear dependence of the detector efficiency on the sample mass for low masses, as well as a power-law dependence for high masses. It is, therefore, named the linear-power-law (LPL) model. In addition, we consider an empirical power-law (EPL) curve and an exponential (EXP) curve. A comparison is made of the LPL, EPL, and EXP fits to the experimental α mass-efficiency data from gas-proportional detectors for selected radionuclides: 238U, 230Th, 239Pu, 241Am, and 244Cm. Based on this comparison, we recommend working equations for fitting mass-efficiency data. Measurement of α radioactivity from a thick sample can determine the fractal dimension of its surface.
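
    The abstract above describes the qualitative shape of the mass-efficiency curve (linear at low mass, power-law at high mass) without giving the equations; a generic illustrative form consistent with that description is sketched below. The parameterization is an assumption made for illustration, not the authors' published LPL equations.

        % Illustrative mass-efficiency behaviour (not the published LPL equations)
        \varepsilon(m) \approx
        \begin{cases}
          \varepsilon_0\,(1 - a\,m), & \text{low mass (thin sample, mild self-absorption)}\\
          b\,m^{-p},                 & \text{high mass (thick sample, only a surface layer contributes)}
        \end{cases}
        % epsilon_0: thin-sample efficiency; a, b, p: fit parameters.
        % For an ideally flat, infinitely thick sample p -> 1; deviations from that ideal are
        % presumably where the surface-roughness (fractal) treatment mentioned above enters.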

  3. Efficient Iris Localization via Optimization Model

    Directory of Open Access Journals (Sweden)

    Qi Wang

    2017-01-01

    Iris localization is one of the most important processes in iris recognition. Because of different kinds of noise in iris images, the localization result may be wrong. Besides this, the localization process is time-consuming. To solve these problems, this paper develops an efficient iris localization algorithm via an optimization model. Firstly, the localization problem is modeled as an optimization problem. Then, SIFT features are selected to represent the characteristic information of the iris outer boundary and eyelid for localization, and the SDM (Supervised Descent Method) algorithm is employed to solve for the final points of the outer boundary and eyelids. Finally, IRLS (Iterative Reweighted Least-Squares) is used to obtain the parameters of the outer boundary and the upper and lower eyelids. Experimental results indicate that the proposed algorithm is efficient and effective.

  4. Validation of RNAi Silencing Efficiency Using Gene Array Data shows 18.5% Failure Rate across 429 Independent Experiments

    Directory of Open Access Journals (Sweden)

    Gyöngyi Munkácsy

    2016-01-01

    No independent cross-validation of success rate for studies utilizing small interfering RNA (siRNA) for gene silencing has been completed before. To assess the influence of experimental parameters like cell line, transfection technique, validation method, and type of control, we have to validate these in a large set of studies. We utilized gene chip data published for siRNA experiments to assess success rate and to compare methods used in these experiments. We searched NCBI GEO for samples with whole transcriptome analysis before and after gene silencing and evaluated the efficiency for the target and off-target genes using the array-based expression data. The Wilcoxon signed-rank test was used to assess silencing efficacy, and Kruskal–Wallis tests and Spearman rank correlation were used to evaluate study parameters. Altogether, 1,643 samples representing 429 experiments published in 207 studies were evaluated. The fold change (FC) of down-regulation of the target gene was above 0.7 in 18.5% and above 0.5 in 38.7% of experiments. Silencing efficiency was lowest in MCF7 and highest in SW480 cells (FC = 0.59 and FC = 0.30, respectively, P = 9.3E−06). Studies utilizing Western blot for validation performed better than those with quantitative polymerase chain reaction (qPCR) or microarray (FC = 0.43, FC = 0.47, and FC = 0.55, respectively, P = 2.8E−04). There was no correlation between type of control, transfection method, publication year, and silencing efficiency. Although gene silencing is a robust feature successfully cross-validated in the majority of experiments, efficiency remained insufficient in a significant proportion of studies. Selection of the cell line model and validation method had the highest influence on silencing proficiency.
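
    To make the evaluation metric concrete, the sketch below computes a per-experiment knockdown fold change and a Wilcoxon signed-rank test in the spirit of the record above. The function name and the toy arrays are hypothetical; this is not the authors' pipeline.

        # Illustrative knockdown check: fold change of the target gene plus a
        # Wilcoxon signed-rank test across replicate probes (toy data, not the study's pipeline).
        import numpy as np
        from scipy.stats import wilcoxon

        def knockdown_fold_change(before: np.ndarray, after: np.ndarray) -> float:
            """FC = mean expression after silencing / mean expression before silencing."""
            return float(np.mean(after) / np.mean(before))

        # Hypothetical replicate measurements of the target gene (linear-scale expression).
        before = np.array([1020.0, 980.0, 1050.0, 1001.0, 995.0])
        after = np.array([430.0, 460.0, 400.0, 445.0, 420.0])

        fc = knockdown_fold_change(before, after)
        stat, p_value = wilcoxon(before, after)  # paired, nonparametric test

        # Using the threshold quoted in the abstract: FC > 0.7 counted as failed silencing.
        print(f"FC = {fc:.2f}, p = {p_value:.3f}, failed = {fc > 0.7}")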

  5. An Efficiency Model For Hydrogen Production In A Pressurized Electrolyzer

    Energy Technology Data Exchange (ETDEWEB)

    Smoglie, Cecilia; Lauretta, Ricardo

    2010-09-15

    The use of hydrogen as a clean fuel at a worldwide scale requires the development of simple, safe and efficient production and storage technologies. In this work, a methodology is proposed to produce hydrogen and oxygen in a self-pressurized electrolyzer connected to separate containers that store each of these gases. A mathematical model for hydrogen production efficiency is proposed to evaluate how such efficiency is affected by parasitic currents in the electrolytic solution. The experimental set-up and results for an electrolyzer are also presented. Comparison of empirical and analytical results shows good agreement.
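
    As a point of reference for the efficiency model described above, hydrogen production in an electrolyzer is bounded by Faraday's law, and parasitic (shunt) currents reduce the share of the cell current that actually produces gas; the generic relationship is sketched below (standard electrochemistry, not the paper's specific model).

        % Faradaic hydrogen production with a generic parasitic-current correction
        \dot{n}_{H_2} = \eta_F\,\frac{I}{zF}, \qquad z = 2,\; F = 96485\ \mathrm{C\,mol^{-1}}, \qquad \eta_F \approx \frac{I - I_{\text{parasitic}}}{I}
        % n_H2       : molar hydrogen production rate      I: total cell current
        % I_parasitic: shunt currents through the electrolyte that produce no gas
        % The paper models how such parasitic currents arise in the electrolytic solution; the ratio
        % above is only the generic bookkeeping identity such a loss term enters.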

  6. Characterization and mutational analysis of a nicotinamide mononucleotide deamidase from Agrobacterium tumefaciens showing high thermal stability and catalytic efficiency.

    Directory of Open Access Journals (Sweden)

    Ana Belén Martínez-Moñino

    NAD+ has emerged as a crucial element in both bioenergetic and signaling pathways since it acts as a key regulator of cellular and organismal homeostasis. Among the enzymes involved in its recycling, nicotinamide mononucleotide (NMN) deamidase is one of the key players in the bacterial pyridine nucleotide cycle, where it catalyzes the conversion of NMN into nicotinic acid mononucleotide (NaMN), which is later converted to NAD+ in the Preiss-Handler pathway. The biochemical characteristics of bacterial NMN deamidases have been poorly studied, although they have been investigated in some firmicutes, gamma-proteobacteria and actinobacteria. In this study, we present the first characterization of an NMN deamidase from an alphaproteobacterium, Agrobacterium tumefaciens (AtCinA). The enzyme was active over a broad pH range, with an optimum at pH 7.5. Moreover, the enzyme was quite stable at neutral pH, maintaining 55% of its activity after 14 days. Surprisingly, AtCinA showed the highest optimal (80°C) and melting (85°C) temperatures described for an NMN deamidase. The above-described characteristics, together with its high catalytic efficiency, make AtCinA a promising biocatalyst for the production of pure NaMN. In addition, six mutants (C32A, S48A, Y58F, Y58A, T105A and R145A) were designed to study their involvement in substrate binding, and two (S31A and K63A) to determine their contribution to the catalysis. However, only four mutants (C32A, S48A, Y58F and T105A) showed activity, although with reduced catalytic efficiency. These results, combined with a thermal and structural analysis, reinforce the Ser/Lys catalytic dyad mechanism as the most plausible among those proposed.

  7. Separating environmental efficiency into production and abatement efficiency. A nonparametric model with application to U.S. power plants

    Energy Technology Data Exchange (ETDEWEB)

    Hampf, Benjamin

    2011-08-15

    In this paper we present a new approach to evaluate the environmental efficiency of decision making units. We propose a model that describes a two-stage process consisting of a production and an end-of-pipe abatement stage with the environmental efficiency being determined by the efficiency of both stages. Taking the dependencies between the two stages into account, we show how nonparametric methods can be used to measure environmental efficiency and to decompose it into production and abatement efficiency. For an empirical illustration we apply our model to an analysis of U.S. power plants.

  8. Chiral crystal of a C2v-symmetric 1,3-diazaazulene derivative showing efficient optical second harmonic generation

    KAUST Repository

    Ma, Xiaohua

    2011-03-01

    Achiral nonlinear optical (NLO) chromophores 1,3-diazaazulene derivatives, 2-(4′-aminophenyl)-6-nitro-1,3-diazaazulene (APNA) and 2-(4′-N,N-diphenylaminophenyl)-6-nitro-1,3-diazaazulene (DPAPNA), were synthesized with high yield. Despite the moderate static first hyperpolarizabilities (β0) for both APNA [(136 ± 5) × 10^-30 esu] and DPAPNA [(263 ± 20) × 10^-30 esu], only the APNA crystal shows a powder efficiency of second harmonic generation (SHG) of 23 times that of urea. It is shown that the APNA crystallization, driven cooperatively by the strong H-bonding network and the dipolar electrostatic interactions, falls into the noncentrosymmetric P212121 space group, and that the helical supramolecular assembly is solely responsible for the efficient SHG response. To the contrary, the DPAPNA crystal with the centrosymmetric P-1 space group is packed with antiparallel dimers, and is therefore completely SHG-inactive. 1,3-Diazaazulene derivatives are suggested to be potent building blocks for SHG-active chiral crystals, which offer high thermal stability, excellent near-infrared transparency and a high degree of design flexibility. Optical crystals based on 1,3-diazaazulene derivatives are reported as the first example of an organic nonlinear optical crystal whose second harmonic generation activity is found to originate solely from the chirality of their helical supramolecular orientation. The strong H-bond network forming between adjacent chromophores is found to act cooperatively with dipolar electrostatic interactions in driving the chiral crystallization of this material.

  9. Satellite-based terrestrial production efficiency modeling

    Directory of Open Access Journals (Sweden)

    Obersteiner Michael

    2009-09-01

    Production efficiency models (PEMs) are based on the theory of light use efficiency (LUE) which states that a relatively constant relationship exists between photosynthetic carbon uptake and radiation receipt at the canopy level. Challenges remain however in the application of the PEM methodology to global net primary productivity (NPP) monitoring. The objectives of this review are as follows: 1) to describe the general functioning of six PEMs (CASA; GLO-PEM; TURC; C-Fix; MOD17; and BEAMS) identified in the literature; 2) to review each model to determine potential improvements to the general PEM methodology; 3) to review the related literature on satellite-based gross primary productivity (GPP) and NPP modeling for additional possibilities for improvement; and 4) based on this review, propose items for coordinated research. This review noted a number of possibilities for improvement to the general PEM architecture - ranging from LUE to meteorological and satellite-based inputs. Current PEMs tend to treat the globe similarly in terms of physiological and meteorological factors, often ignoring unique regional aspects. Each of the existing PEMs has developed unique methods to estimate NPP and the combination of the most successful of these could lead to improvements. It may be beneficial to develop regional PEMs that can be combined under a global framework. The results of this review suggest the creation of a hybrid PEM could bring about a significant enhancement to the PEM methodology and thus terrestrial carbon flux modeling. Key items topping the PEM research agenda identified in this review include the following: LUE should not be assumed constant, but should vary by plant functional type (PFT) or photosynthetic pathway; evidence is mounting that PEMs should consider incorporating diffuse radiation; continue to pursue relationships between satellite-derived variables and LUE, GPP and autotrophic respiration (Ra); there is an urgent need for

  10. Satellite-based terrestrial production efficiency modeling.

    Science.gov (United States)

    McCallum, Ian; Wagner, Wolfgang; Schmullius, Christiane; Shvidenko, Anatoly; Obersteiner, Michael; Fritz, Steffen; Nilsson, Sten

    2009-09-18

    Production efficiency models (PEMs) are based on the theory of light use efficiency (LUE) which states that a relatively constant relationship exists between photosynthetic carbon uptake and radiation receipt at the canopy level. Challenges remain however in the application of the PEM methodology to global net primary productivity (NPP) monitoring. The objectives of this review are as follows: 1) to describe the general functioning of six PEMs (CASA; GLO-PEM; TURC; C-Fix; MOD17; and BEAMS) identified in the literature; 2) to review each model to determine potential improvements to the general PEM methodology; 3) to review the related literature on satellite-based gross primary productivity (GPP) and NPP modeling for additional possibilities for improvement; and 4) based on this review, propose items for coordinated research. This review noted a number of possibilities for improvement to the general PEM architecture - ranging from LUE to meteorological and satellite-based inputs. Current PEMs tend to treat the globe similarly in terms of physiological and meteorological factors, often ignoring unique regional aspects. Each of the existing PEMs has developed unique methods to estimate NPP and the combination of the most successful of these could lead to improvements. It may be beneficial to develop regional PEMs that can be combined under a global framework. The results of this review suggest the creation of a hybrid PEM could bring about a significant enhancement to the PEM methodology and thus terrestrial carbon flux modeling. Key items topping the PEM research agenda identified in this review include the following: LUE should not be assumed constant, but should vary by plant functional type (PFT) or photosynthetic pathway; evidence is mounting that PEMs should consider incorporating diffuse radiation; continue to pursue relationships between satellite-derived variables and LUE, GPP and autotrophic respiration (Ra); there is an urgent need for satellite-based biomass
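
    The light-use-efficiency logic shared by the PEMs surveyed in the two records above can be summarized in one line; the sketch below is the generic formulation, with symbols following common usage rather than any single model in the review.

        % Generic production efficiency (light use efficiency) model
        GPP = \varepsilon \cdot fAPAR \cdot PAR, \qquad NPP = GPP - R_a
        % PAR    : incident photosynthetically active radiation
        % fAPAR  : fraction of PAR absorbed by the canopy (typically from satellite vegetation indices)
        % epsilon: light use efficiency, usually a maximum value down-regulated by temperature and
        %          moisture scalars; the review argues it should also vary by plant functional type.
        % R_a    : autotrophic respiration.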

  11. Two tropical conifers show strong growth and water-use efficiency responses to altered CO2 concentration.

    Science.gov (United States)

    Dalling, James W; Cernusak, Lucas A; Winter, Klaus; Aranda, Jorge; Garcia, Milton; Virgo, Aurelio; Cheesman, Alexander W; Baresch, Andres; Jaramillo, Carlos; Turner, Benjamin L

    2016-11-01

    Conifers dominated wet lowland tropical forests 100 million years ago (MYA). With a few exceptions in the Podocarpaceae and Araucariaceae, conifers are now absent from this biome. This shift to angiosperm dominance also coincided with a large decline in atmospheric CO2 concentration (ca). We compared growth and physiological performance of two lowland tropical angiosperms and conifers at ca levels representing pre-industrial (280 ppm), ambient (400 ppm) and Eocene (800 ppm) conditions to explore how differences in ca affect the growth and water-use efficiency (WUE) of seedlings from these groups. Two conifers (Araucaria heterophylla and Podocarpus guatemalensis) and two angiosperm trees (Tabebuia rosea and Chrysophyllum cainito) were grown in climate-controlled glasshouses in Panama. Growth, photosynthetic rates, nutrient uptake, and nutrient use and water-use efficiencies were measured. Podocarpus seedlings showed a stronger (66 %) increase in relative growth rate with increasing ca relative to Araucaria (19 %) and the angiosperms (no growth enhancement). The response of Podocarpus is consistent with expectations for species with conservative growth traits and low mesophyll diffusion conductance. While previous work has shown limited stomatal response of conifers to ca, we found that the two conifers had significantly greater increases in leaf and whole-plant WUE than the angiosperms, reflecting increased photosynthetic rate and reduced stomatal conductance. Foliar nitrogen isotope ratios (δ15N) and soil nitrate concentrations indicated a preference in Podocarpus for ammonium over nitrate, which may impact nitrogen uptake relative to nitrate assimilators under high ca. Significance: Podocarps colonized tropical forests after angiosperms achieved dominance and are now restricted to infertile soils. Although limited to a single species, our data suggest that higher ca may have been favourable for podocarp colonization of tropical South America 60
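
    Water-use efficiency in this record is used in its standard gas-exchange sense; the usual leaf-level definitions are sketched below (conventional definitions, assumed rather than quoted from the paper).

        % Conventional leaf-level water-use efficiency definitions
        WUE = \frac{A}{E} \quad \text{(instantaneous)}, \qquad WUE_i = \frac{A}{g_s} \quad \text{(intrinsic)}
        % A  : net CO2 assimilation rate;  E : transpiration rate;  g_s : stomatal conductance.
        % Rising CO2 tends to increase A while lowering g_s, which is consistent with the large
        % leaf- and whole-plant WUE gains reported above for the two conifers.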

  12. Efficient 3D scene modeling and mosaicing

    CERN Document Server

    Nicosevici, Tudor

    2013-01-01

    This book proposes a complete pipeline for monocular (single camera) based 3D mapping of terrestrial and underwater environments. The aim is to provide a solution to large-scale scene modeling that is both accurate and efficient. To this end, we have developed a novel Structure from Motion algorithm that increases mapping accuracy by registering camera views directly with the maps. The camera registration uses a dual approach that adapts to the type of environment being mapped.   In order to further increase the accuracy of the resulting maps, a new method is presented, allowing detection of images corresponding to the same scene region (crossovers). Crossovers then used in conjunction with global alignment methods in order to highly reduce estimation errors, especially when mapping large areas. Our method is based on Visual Bag of Words paradigm (BoW), offering a more efficient and simpler solution by eliminating the training stage, generally required by state of the art BoW algorithms.   Also, towards dev...

  13. Interactions to the fifth trophic level: secondary and tertiary parasitoid wasps show extraordinary efficiency in utilizing host resources

    NARCIS (Netherlands)

    Harvey, J.A.; Wagenaar, R.; Bezemer, T.M.

    2009-01-01

    1. Parasitoid wasps are highly efficient organisms at utilizing and assimilating limited resources from their hosts. This study explores interactions over three trophic levels, from the third (primary parasitoid) to the fourth (secondary parasitoid) and terminating in the fifth (tertiary

  14. Efficient Parallel Algorithms for Landscape Evolution Modelling

    Science.gov (United States)

    Moresi, L. N.; Mather, B.; Beucher, R.

    2017-12-01

    Landscape erosion and the deposition of sediments by river systems are strongly controlled by topography, rainfall patterns, and the susceptibility of the basement to the action of running water. It is well understood that each of these processes depends on the other, for example: topography results from active tectonic processes; deformation, metamorphosis and exhumation alter the competence of the basement; rainfall patterns depend on topography; uplift and subsidence in response to tectonic stress can be amplified by erosion and sediment deposition. We typically gain understanding of such coupled systems through forward models which capture the essential interactions of the various components and attempt to parameterise those parts of the individual system that are unresolvable at the scale of the interaction. Here we address the problem of predicting erosion and deposition rates at a continental scale with a resolution of tens to hundreds of metres in a dynamic, Lagrangian framework. This is a typical requirement for a code to interface with a mantle / lithosphere dynamics model and demands an efficient, unstructured, parallel implementation. We address this through a very general algorithm that treats all parts of the landscape evolution equations in sparse-matrix form, including those for stream-flow accumulation, dam-filling and catchment determination. This gives us considerable flexibility in developing unstructured, parallel code, and in creating a modular package that can be configured by users to work at different temporal and spatial scales, but also has potential advantages in treating the non-linear parts of the problem in a general manner.

  15. AN EFFICIENT PATIENT INFLOW PREDICTION MODEL FOR HOSPITAL RESOURCE MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Kottalanka Srikanth

    2017-07-01

    There has been increasing demand for improving service provisioning in hospital resource management. Hospitals work with strict budget constraints while at the same time assuring quality care. To achieve quality care under budget constraints, an efficient prediction model is required. Recently, various time-series-based prediction models have been proposed to manage hospital resources such as ambulance monitoring, emergency care and so on. These models are not efficient as they do not consider the nature of the scenario, such as climate conditions. To address this, artificial intelligence is adopted. The issue with existing prediction approaches is that the training suffers from local optima error, which induces overhead and affects prediction accuracy. To overcome the local minima error, this work presents a patient inflow prediction model adopting a resilient backpropagation neural network. Experiments are conducted to evaluate the performance of the proposed model in terms of RMSE and MAPE. The outcome shows the proposed model reduces RMSE and MAPE compared with the existing backpropagation-based artificial neural network. The overall outcomes show the proposed prediction model improves the accuracy of prediction, which aids in improving the quality of health care management.
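
    The two error metrics used for evaluation above are standard; the helper functions below show how they are typically computed (illustrative code with toy arrays, not the paper's evaluation script).

        # Standard definitions of the two error metrics named in the abstract (illustrative only).
        import numpy as np

        def rmse(actual: np.ndarray, predicted: np.ndarray) -> float:
            """Root mean squared error."""
            return float(np.sqrt(np.mean((actual - predicted) ** 2)))

        def mape(actual: np.ndarray, predicted: np.ndarray) -> float:
            """Mean absolute percentage error (requires non-zero actual values)."""
            return float(np.mean(np.abs((actual - predicted) / actual)) * 100.0)

        # Toy daily patient-inflow counts versus model predictions.
        actual = np.array([120.0, 135.0, 128.0, 150.0, 142.0])
        predicted = np.array([118.0, 140.0, 125.0, 147.0, 139.0])
        print(f"RMSE = {rmse(actual, predicted):.2f}, MAPE = {mape(actual, predicted):.2f}%")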

  16. Energy Efficiency Model for Induction Furnace

    Science.gov (United States)

    Dey, Asit Kr

    2018-01-01

    In this paper, a solar induction furnace unit was designed to provide a new solution for the existing AC-power-consuming heating process through a supervisory control and data acquisition (SCADA) system. This unit can be connected directly to the DC system without any internal conversion inside the device. The performance of the new system solution is compared with the existing one in terms of power consumption and losses. This work also investigated energy savings, system improvement and a process control model in a foundry induction furnace heating framework corresponding to a PV solar power supply. The results are analysed over the long run in terms of energy savings and the integrated process system. The data-acquisition-based solar foundry plant is an extremely multifaceted system that can be run over an almost innumerable range of operating conditions, each characterized by specific energy consumption. Determining ideal operating conditions is a key challenge that requires the involvement of the latest automation technologies, each one contributing to allow not only the acquisition, processing, storage, retrieval and visualization of data, but also the implementation of automatic control strategies that can expand the achievement envelope in terms of the melting process, safety and energy efficiency.

  17. Inertia may limit efficiency of slow flapping flight, but mayflies show a strategy for reducing the power requirements of loiter

    International Nuclear Information System (INIS)

    Usherwood, James R

    2009-01-01

    Predictions from aerodynamic theory often match biological observations very poorly. Many insects and several bird species habitually hover, frequently flying at low advance ratios. Taking helicopter-based aerodynamic theory, wings functioning predominantly for hovering, even for quite small insects, should operate at low angles of attack. However, insect wings operate at very high angles of attack during hovering; reduction in angle of attack should result in considerable energetic savings. Here, I consider the possibility that selection of kinematics is constrained from being aerodynamically optimal due to the inertial power requirements of flapping. Potential increases in aerodynamic efficiency with lower angles of attack during hovering may be outweighed by increases in inertial power due to the associated increases in flapping frequency. For simple hovering, traditional rotary-winged helicopter-like micro air vehicles would be more efficient than their flapping biomimetic counterparts. However, flapping may confer advantages in terms of top speed and manoeuvrability. If flapping-winged micro air vehicles are required to hover or loiter more efficiently, dragonflies and mayflies suggest biomimetic solutions

  18. Dendritic polyglycerols with oligoamine shells show low toxicity and high siRNA transfection efficiency in vitro.

    Science.gov (United States)

    Fischer, Wiebke; Calderón, Marcelo; Schulz, Andrea; Andreou, Ioanna; Weber, Martin; Haag, Rainer

    2010-10-20

    RNA interference provides great opportunities for treating diseases ranging from genetic disorders to infection and cancer. The successful application of small interfering RNA (siRNA) in cells with high transfection efficiency and low cytotoxicity is, however, a major challenge in gene-mediated therapy. Several pH-responsive core-shell architectures have been designed that contain a nitrogen shell motif and a polyglycerol core, which has been prepared by a two-step protocol involving the activation of primary and secondary hydroxyl groups by phenyl chloroformate and amine substitution. Each polymer was analyzed by particle size and ζ potential measurements, whereas the respective polyplex formation was determined by ethidium bromide displacement assay, atomic force microscopy (AFM), and surface charge analysis. The in vitro gene silencing properties of the different polymers were evaluated by using a human epithelial carcinoma cell (HeLaS3) line with different proteins (Lamin, CDC2, MAPK2). Polyplexes yielded knockdown efficiencies similar to HiPerFect controls, with comparably low cytotoxicity. Therefore, these efficient and highly biocompatible dendritic polyamines are promising candidates for siRNA delivery in vivo.

  19. The temperate Burkholderia phage AP3 of the Peduovirinae shows efficient antimicrobial activity against B. cenocepacia of the IIIA lineage.

    Science.gov (United States)

    Roszniowski, Bartosz; Latka, Agnieszka; Maciejewska, Barbara; Vandenheuvel, Dieter; Olszak, Tomasz; Briers, Yves; Holt, Giles S; Valvano, Miguel A; Lavigne, Rob; Smith, Darren L; Drulis-Kawa, Zuzanna

    2017-02-01

    Burkholderia phage AP3 (vB_BceM_AP3) is a temperate virus of the Myoviridae and the Peduovirinae subfamily (P2likevirus genus). This phage specifically infects multidrug-resistant clinical Burkholderia cenocepacia lineage IIIA strains commonly isolated from cystic fibrosis patients. AP3 exhibits high pairwise nucleotide identity (61.7 %) to Burkholderia phage KS5, specific to the same B. cenocepacia host, and has 46.7-49.5 % identity to phages infecting other species of Burkholderia. The lysis cassette of these related phages has a similar organization (putative antiholin, putative holin, endolysin, and spanins) and shows 29-98 % homology between specific lysis genes, in contrast to Enterobacteria phage P2, the hallmark phage of this genus. The AP3 and KS5 lysis genes have conserved locations and high amino acid sequence similarity. The AP3 bacteriophage particles remain infective up to 5 h at pH 4-10 and are stable at 60 °C for 30 min, but are sensitive to chloroform, with no remaining infective particles after 24 h of treatment. AP3 lysogeny can occur by stable genomic integration and by pseudo-lysogeny. The lysogenic bacterial mutants did not exhibit any significant changes in virulence compared to the wild-type host strain when tested in the Galleria mellonella wax moth model. Moreover, AP3 treatment of larvae infected with B. cenocepacia revealed a significant increase in larval survival, suggesting that the AP3 phage is a promising potent agent against bacteria belonging to the most common B. cenocepacia IIIA lineage strains.

  20. Rifalazil and derivative compounds show potent efficacy in a mouse model of H. pylori colonization.

    Science.gov (United States)

    Rothstein, David M; Mullin, Steve; Sirokman, Klari; Söndergaard, Karen L; Johnson, Starrla; Gwathmey, Judith K; van Duzer, John; Murphy, Christopher K

    2008-08-01

    The rifamycin rifalazil (RFZ) and derivative compounds (NCEs) were efficacious in a mouse model of Helicobacter pylori colonization. Select NCEs were more active in vitro and showed greater efficacy than RFZ. A systemic component contributes to efficacy.

  1. Adolescents with greater mental toughness show higher sleep efficiency, more deep sleep and fewer awakenings after sleep onset.

    Science.gov (United States)

    Brand, Serge; Gerber, Markus; Kalak, Nadeem; Kirov, Roumen; Lemola, Sakari; Clough, Peter J; Pühse, Uwe; Holsboer-Trachsler, Edith

    2014-01-01

    Mental toughness (MT) is understood as the display of confidence, commitment, challenge, and control. Mental toughness is associated with resilience against stress. However, research has not yet focused on the relation between MT and objective sleep. The aim of the present study was therefore to explore the extent to which greater MT is associated with objectively assessed sleep among adolescents. A total of 92 adolescents (35% females; mean age, 18.92 years) completed the Mental Toughness Questionnaire. Participants were split into groups of high and low mental toughness. Objective sleep was recorded via sleep electroencephalograms and subjective sleep was assessed via a questionnaire. Compared with participants with low MT, participants with high MT had higher sleep efficiency, a lower number of awakenings after sleep onset, less light sleep, and more deep sleep. They also reported lower daytime sleepiness. Adolescents reporting higher MT also had objectively better sleep, as recorded via sleep electroencephalograms. A bidirectional association between MT and sleep seems likely; therefore, among adolescents, improving sleep should increase MT, and improving MT should improve sleep.
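
    Sleep efficiency as reported in this record is conventionally defined from the overnight recording; the standard definition is sketched below (assumed standard usage, not a formula quoted from the paper).

        % Conventional definition of sleep efficiency from polysomnography / sleep EEG
        \text{Sleep efficiency (\%)} = \frac{\text{total sleep time}}{\text{time in bed}} \times 100
        % Awakenings after sleep onset reduce total sleep time while time in bed stays fixed, which is
        % why fewer awakenings and more deep sleep go together with higher efficiency in the results above.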

  2. Modeling and simulation of complex systems: a framework for efficient agent-based modeling and simulation

    CERN Document Server

    Siegfried, Robert

    2014-01-01

    Robert Siegfried presents a framework for efficient agent-based modeling and simulation of complex systems. He compares different approaches for describing the structure and dynamics of agent-based models in detail. Based on this evaluation, the author introduces the "General Reference Model for Agent-based Modeling and Simulation" (GRAMS). Furthermore, he presents parallel and distributed simulation approaches for the execution of agent-based models - from small scale to very large scale. The author shows how agent-based models may be executed by different simulation engines that utilize underlying hard

  3. A novel halophilic lipase, LipBL, showing high efficiency in the production of eicosapentaenoic acid (EPA).

    Directory of Open Access Journals (Sweden)

    Dolores Pérez

    BACKGROUND: Among extremophiles, halophiles are defined as microorganisms adapted to live and thrive in diverse extreme saline environments. These extremophilic microorganisms constitute the source of a number of hydrolases with great biotechnological applications. The interest in using extremozymes from halophiles in industrial applications lies in their resistance to organic solvents and extreme temperatures. Marinobacter lipolyticus SM19 is a moderately halophilic bacterium, isolated previously from a saline habitat in southern Spain, showing lipolytic activity. METHODS AND FINDINGS: A lipolytic enzyme from the halophilic bacterium Marinobacter lipolyticus SM19 was isolated. This enzyme, designated LipBL, was expressed in Escherichia coli. LipBL is a protein of 404 amino acids with a molecular mass of 45.3 kDa and high identity to class C β-lactamases. LipBL was purified and biochemically characterized. The temperature for its maximal activity was 80°C and the pH optimum determined at 25°C was 7.0, showing optimal activity without sodium chloride, while maintaining 20% activity in a wide range of NaCl concentrations. This enzyme exhibited high activity against short-medium length acyl chain substrates, although it also hydrolyzes olive oil and fish oil. The fish oil hydrolysis using LipBL results in an enrichment of free eicosapentaenoic acid (EPA), but not docosahexaenoic acid (DHA), relative to its levels present in fish oil. To improve its stability for use in industrial processes, LipBL was immobilized on different supports. The derivatives immobilized on CNBr-activated Sepharose were highly selective towards the release of EPA versus DHA. The enzyme is also active towards different chiral and prochiral esters. Exposure of LipBL to buffer-solvent mixtures showed that the enzyme had remarkable activity and stability in all organic solvents tested. CONCLUSIONS: In this study we isolated, purified, biochemically characterized and immobilized a

  4. A novel halophilic lipase, LipBL, showing high efficiency in the production of eicosapentaenoic acid (EPA).

    Science.gov (United States)

    Pérez, Dolores; Martín, Sara; Fernández-Lorente, Gloria; Filice, Marco; Guisán, José Manuel; Ventosa, Antonio; García, María Teresa; Mellado, Encarnación

    2011-01-01

    Among extremophiles, halophiles are defined as microorganisms adapted to live and thrive in diverse extreme saline environments. These extremophilic microorganisms constitute the source of a number of hydrolases with great biotechnological applications. The interest in using extremozymes from halophiles in industrial applications lies in their resistance to organic solvents and extreme temperatures. Marinobacter lipolyticus SM19 is a moderately halophilic bacterium, isolated previously from a saline habitat in southern Spain, showing lipolytic activity. A lipolytic enzyme from the halophilic bacterium Marinobacter lipolyticus SM19 was isolated. This enzyme, designated LipBL, was expressed in Escherichia coli. LipBL is a protein of 404 amino acids with a molecular mass of 45.3 kDa and high identity to class C β-lactamases. LipBL was purified and biochemically characterized. The temperature for its maximal activity was 80°C and the pH optimum determined at 25°C was 7.0, showing optimal activity without sodium chloride, while maintaining 20% activity in a wide range of NaCl concentrations. This enzyme exhibited high activity against short-medium length acyl chain substrates, although it also hydrolyzes olive oil and fish oil. The fish oil hydrolysis using LipBL results in an enrichment of free eicosapentaenoic acid (EPA), but not docosahexaenoic acid (DHA), relative to its levels present in fish oil. To improve its stability for use in industrial processes, LipBL was immobilized on different supports. The derivatives immobilized on CNBr-activated Sepharose were highly selective towards the release of EPA versus DHA. The enzyme is also active towards different chiral and prochiral esters. Exposure of LipBL to buffer-solvent mixtures showed that the enzyme had remarkable activity and stability in all organic solvents tested. In this study we isolated, purified, biochemically characterized and immobilized a lipolytic enzyme from a halophilic bacterium M. lipolyticus

  5. Effective and efficient model clone detection

    DEFF Research Database (Denmark)

    Störrle, Harald

    2015-01-01

    Code clones are a major source of software defects. Thus, it is likely that model clones (i.e., duplicate fragments of models) have a significant negative impact on model quality, and thus, on any software created based on those models, irrespective of whether the software is generated fully automatically (“MDD-style”) or hand-crafted following the blueprint defined by the model (“MBSD-style”). Unfortunately, however, model clones are much less well studied than code clones. In this paper, we present a clone detection algorithm for UML domain models. Our approach covers a much greater variety of model types than existing approaches while providing high clone detection rates at high speed.

  6. Business Models, transparency and efficient stock price formation

    DEFF Research Database (Denmark)

    Nielsen, Christian; Vali, Edward; Hvidberg, Rene

    and the lack of growth of competitors. This is a problem, because the company is deprived of having its own direct influence on its share price, which often leads to hasty short-term decisions in order to meet the expectations of the market and to benefit its shareholders in the short term. On the basis of this, our hypothesis is that if it is possible to improve, simplify and define the way a company communicates its business model to the market, then it must be possible for the company to create a more efficient price formation of its share. To begin with, we decided to investigate whether transparency ... the operational and tactical strategies complement each other. This brings us to the following definition of a business model: A business model is a representation of the company's concept. The concept shows in what way the company is trying to establish a unique identity in the market in comparison to its...

  7. Strong and Nonspecific Synergistic Antibacterial Efficiency of Antibiotics Combined with Silver Nanoparticles at Very Low Concentrations Showing No Cytotoxic Effect.

    Science.gov (United States)

    Panáček, Aleš; Smékalová, Monika; Kilianová, Martina; Prucek, Robert; Bogdanová, Kateřina; Večeřová, Renata; Kolář, Milan; Havrdová, Markéta; Płaza, Grażyna Anna; Chojniak, Joanna; Zbořil, Radek; Kvítek, Libor

    2015-12-28

    The resistance of bacteria towards traditional antibiotics currently constitutes one of the most important health care issues with serious negative impacts in practice. Overcoming this issue can be achieved by using antibacterial agents with multimode antibacterial action. Silver nanoparticles (AgNPs) are one of the well-known antibacterial substances showing such multimode antibacterial action. Therefore, AgNPs are suitable candidates for use in combinations with traditional antibiotics in order to improve their antibacterial action. In this work, a systematic study quantifying the synergistic effects of antibiotics with different modes of action and different chemical structures in combination with AgNPs against Escherichia coli, Pseudomonas aeruginosa and Staphylococcus aureus was performed. Employing the microdilution method as more suitable and reliable than the disc diffusion method, strong synergistic effects were shown for all tested antibiotics combined with AgNPs at very low concentrations of both antibiotics and AgNPs. No trends were observed for synergistic effects of antibiotics with different modes of action and different chemical structures in combination with AgNPs, indicating non-specific synergistic effects. Moreover, a very low amount of silver is needed for effective antibacterial action of the antibiotics, which represents an important finding for potential medical applications due to the negligible cytotoxic effect of AgNPs towards human cells at these concentration levels.

  8. Strong and Nonspecific Synergistic Antibacterial Efficiency of Antibiotics Combined with Silver Nanoparticles at Very Low Concentrations Showing No Cytotoxic Effect

    Directory of Open Access Journals (Sweden)

    Aleš Panáček

    2015-12-01

    The resistance of bacteria towards traditional antibiotics currently constitutes one of the most important health care issues with serious negative impacts in practice. Overcoming this issue can be achieved by using antibacterial agents with multimode antibacterial action. Silver nanoparticles (AgNPs) are one of the well-known antibacterial substances showing such multimode antibacterial action. Therefore, AgNPs are suitable candidates for use in combinations with traditional antibiotics in order to improve their antibacterial action. In this work, a systematic study quantifying the synergistic effects of antibiotics with different modes of action and different chemical structures in combination with AgNPs against Escherichia coli, Pseudomonas aeruginosa and Staphylococcus aureus was performed. Employing the microdilution method as more suitable and reliable than the disc diffusion method, strong synergistic effects were shown for all tested antibiotics combined with AgNPs at very low concentrations of both antibiotics and AgNPs. No trends were observed for synergistic effects of antibiotics with different modes of action and different chemical structures in combination with AgNPs, indicating non-specific synergistic effects. Moreover, a very low amount of silver is needed for effective antibacterial action of the antibiotics, which represents an important finding for potential medical applications due to the negligible cytotoxic effect of AgNPs towards human cells at these concentration levels.

  9. Efficient solvers for coupled models in respiratory mechanics.

    Science.gov (United States)

    Verdugo, Francesc; Roth, Christian J; Yoshihara, Lena; Wall, Wolfgang A

    2017-02-01

    We present efficient preconditioners for one of the most physiologically relevant pulmonary models currently available. Our underlying motivation is to enable the efficient simulation of such a lung model on high-performance computing platforms in order to assess mechanical ventilation strategies and to contribute to the design of more protective patient-specific ventilation treatments. The system of linear equations to be solved using the proposed preconditioners is essentially the monolithic system arising in fluid-structure interaction (FSI) extended by additional algebraic constraints. The introduction of these constraints leads to a saddle point problem that cannot be solved with usual FSI preconditioners available in the literature. The key ingredient in this work is to use the idea of the semi-implicit method for pressure-linked equations (SIMPLE) for getting rid of the saddle point structure, resulting in a standard FSI problem that can be treated with available techniques. The numerical examples show that the resulting preconditioners approach the optimal performance of multigrid methods, even though the lung model is a complex multiphysics problem. Moreover, the preconditioners are robust enough to deal with physiologically relevant simulations involving complex real-world patient-specific lung geometries. The same approach is applicable to other challenging biomedical applications where coupling between flow and tissue deformations is modeled with additional algebraic constraints. Copyright © 2016 John Wiley & Sons, Ltd.

  10. Classifying Multi-Model Wheat Yield Impact Response Surfaces Showing Sensitivity to Temperature and Precipitation Change

    Science.gov (United States)

    Fronzek, Stefan; Pirttioja, Nina; Carter, Timothy R.; Bindi, Marco; Hoffmann, Holger; Palosuo, Taru; Ruiz-Ramos, Margarita; Tao, Fulu; Trnka, Miroslav; Acutis, Marco

    2017-01-01

    Crop growth simulation models can differ greatly in their treatment of key processes and hence in their response to environmental conditions. Here, we used an ensemble of 26 process-based wheat models applied at sites across a European transect to compare their sensitivity to changes in temperature (−2 to +9 °C) and precipitation (−50 to +50%). Model results were analysed by plotting them as impact response surfaces (IRSs), classifying the IRS patterns of individual model simulations, describing these classes and analysing factors that may explain the major differences in model responses. The model ensemble was used to simulate yields of winter and spring wheat at four sites in Finland, Germany and Spain. Results were plotted as IRSs that show changes in yields relative to the baseline with respect to temperature and precipitation. IRSs of 30-year means and selected extreme years were classified using two approaches describing their pattern. The expert diagnostic approach (EDA) combines two aspects of IRS patterns: location of the maximum yield (nine classes) and strength of the yield response with respect to climate (four classes), resulting in a total of 36 combined classes defined using criteria pre-specified by experts. The statistical diagnostic approach (SDA) groups IRSs by comparing their pattern and magnitude, without attempting to interpret these features. It applies a hierarchical clustering method, grouping response patterns using a distance metric that combines the spatial correlation and Euclidean distance between IRS pairs. The two approaches were used to investigate whether different patterns of yield response could be related to different properties of the crop models, specifically their genealogy, calibration and process description. Although no single model property across a large model ensemble was found to explain the integrated yield response to temperature and precipitation perturbations, the
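
    The statistical diagnostic approach described above can be sketched in a few lines: each impact response surface is flattened to a vector, pairwise distances combine spatial correlation with Euclidean distance, and hierarchical clustering groups similar response patterns. The equal weighting of the two distance terms, the grid size and the number of groups below are illustrative assumptions, not values taken from the study.

```python
# Sketch of an SDA-style grouping of impact response surfaces (IRSs), assuming
# each IRS is a yield-change grid over a temperature x precipitation lattice.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def irs_distance(a, b):
    """Combine (1 - Pearson correlation) of two surfaces with a scaled
    Euclidean distance between them (equal weights assumed)."""
    a, b = a.ravel(), b.ravel()
    corr = np.corrcoef(a, b)[0, 1]
    eucl = np.linalg.norm(a - b) / np.sqrt(a.size)
    return (1.0 - corr) + eucl

def cluster_irs(surfaces, n_groups=4):
    """surfaces: list of 2-D arrays (one IRS per model)."""
    n = len(surfaces)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = irs_distance(surfaces[i], surfaces[j])
    z = linkage(squareform(dist), method="average")
    return fcluster(z, t=n_groups, criterion="maxclust")

# Example: 26 synthetic model surfaces on a 12 x 11 perturbation grid.
rng = np.random.default_rng(0)
surfaces = [rng.normal(size=(12, 11)) for _ in range(26)]
print(cluster_irs(surfaces))
```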

  11. Assessment of Energy Efficient and Model Based Control

    Science.gov (United States)

    2017-06-15

    ARL-TR-8042 ● JUNE 2017 ● US Army Research Laboratory: Assessment of Energy-Efficient and Model-Based Control, by Craig Lennon.

  12. Information, complexity and efficiency: The automobile model

    Energy Technology Data Exchange (ETDEWEB)

    Allenby, B. [Lucent Technologies (United States); Lawrence Livermore National Lab., CA (United States)]

    1996-08-08

    The new, rapidly evolving field of industrial ecology - the objective, multidisciplinary study of industrial and economic systems and their linkages with fundamental natural systems - provides strong ground for believing that a more environmentally and economically efficient economy will be more information intensive and complex. Information and intellectual capital will be substituted for the more traditional inputs of materials and energy in producing a desirable, yet sustainable, quality of life. While at this point this remains a strong hypothesis, the evolution of the automobile industry can be used to illustrate how such substitution may, in fact, already be occurring in an environmentally and economically critical sector.

  13. A physics-explicit model of bacterial conjugation shows the stabilizing role of the conjugative junction

    OpenAIRE

    Pastuszak, Jakub; Waclaw, Bartlomiej

    2017-01-01

    Conjugation is a process in which bacteria exchange DNA through a physical connection (conjugative junction) between mating cells. Despite its significance for processes such as the spread of antibiotic resistance, the role of physical forces in conjugation is poorly understood. Here we use computer models to show that the conjugative junction not only serves as a link to transfer the DNA but it also mechanically stabilises the mating pair which significantly increases the conjugation rate. W...

  14. A model for efficient management of electrical assets

    International Nuclear Information System (INIS)

    Alonso Guerreiro, A.

    2008-01-01

    While energy demand grows faster than investment in electrical installations, the older capacity is reaching the end of its useful life. Running all of that capacity without interruptions and maintaining its assets efficiently are the two current key points for power generation, transmission and distribution systems. This paper presents a management model that enables effective asset management with strict cost control and that addresses those key points: it is centred on predictive techniques, involves all departments of the organization, and goes beyond considering maintenance as the simple repair or replacement of broken-down units. Therefore, a model with three basic lines is needed, namely supply guarantee, service quality and competitiveness, in order to allow companies to meet the current demands that characterize power supply. (Author) 5 refs

  15. Microarray profiling shows distinct differences between primary tumors and commonly used preclinical models in hepatocellular carcinoma

    International Nuclear Information System (INIS)

    Wang, Weining; Iyer, N. Gopalakrishna; Tay, Hsien Ts’ung; Wu, Yonghui; Lim, Tony K. H.; Zheng, Lin; Song, In Chin; Kwoh, Chee Keong; Huynh, Hung; Tan, Patrick O. B.; Chow, Pierce K. H.

    2015-01-01

    Despite advances in therapeutics, outcomes for hepatocellular carcinoma (HCC) remain poor and there is an urgent need for efficacious systemic therapy. Unfortunately, drugs that are successful in preclinical studies often fail in the clinical setting, and we hypothesize that this is due to functional differences between primary tumors and commonly used preclinical models. In this study, we attempt to answer this question by comparing tumor morphology and gene expression profiles between primary tumors, xenografts and HCC cell lines. HepG2 cells and tumor cells from patient tumor explants were injected subcutaneously (ectopically) into the flank and orthotopically into the liver parenchyma of Mus musculus SCID mice. The mice were euthanized after two weeks. RNA was extracted from the tumors, and gene expression profiling was performed using the GeneChip Human Genome U133 Plus 2.0. Principal component analyses (PCA) and construction of dendrograms were conducted using the Partek Genomics Suite. PCA showed that the commonly used HepG2 cell line model and its xenograft counterparts were vastly different from all fresh primary tumors. Expression profiles of primary tumors were also significantly divergent from their counterpart patient-derived xenograft (PDX) models, regardless of the site of implantation. Xenografts from the same primary tumors were more likely to cluster together regardless of site of implantation, although heat maps showed distinct differences in gene expression profiles between orthotopic and ectopic models. The data presented here challenge the utility of routinely used preclinical models. Models using HepG2 were vastly different from primary tumors and PDXs, suggesting that this is not clinically representative. Surprisingly, site of implantation (orthotopic versus ectopic) resulted in limited impact on gene expression profiles, and in both scenarios xenografts differed significantly from the original primary tumors, challenging the long

  16. Model to evaluate the technical efficiency of university units

    Directory of Open Access Journals (Sweden)

    Marlon Soliman

    2014-06-01

    In higher education institutions, technical efficiency has been measured by several indicators that, when used separately, do not lead to an effective conclusion about the administrative reality of these institutions. Therefore, this paper proposes a model to evaluate the technical efficiency of the university units of a higher education institution (HEI) from the perspectives of Teaching, Research and Extension. The model was conceived according to the assumptions of Data Envelopment Analysis (DEA), using the output-oriented CCR model, based on the identification of relevant variables for the context addressed. The model was applied to evaluate the efficiency of nine academic units of the Federal University of Santa Maria (UFSM), obtaining as a result the efficiency of each unit as well as recommendations for the units considered inefficient. At the end of this study, it was verified that it is possible to measure the efficiency of the various units and consequently establish goals for improvement based on the methodology used.
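
    For readers unfamiliar with the method, the envelopment form of the CCR model mentioned above can be solved as one small linear program per academic unit. The sketch below assumes that "product oriented" means output-oriented, and the input/output data are invented placeholders rather than the UFSM figures.

```python
# Minimal sketch of an output-oriented CCR DEA model, one LP per decision-making
# unit (DMU), solved with scipy's linear programming routine.
import numpy as np
from scipy.optimize import linprog

def ccr_output_oriented(X, Y):
    """X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs).
    Returns phi per DMU; phi = 1 means the unit is efficient, phi > 1 means its
    outputs could be expanded proportionally."""
    n = X.shape[0]
    scores = []
    for o in range(n):
        # Decision variables: [phi, lambda_1, ..., lambda_n]; maximize phi.
        c = np.r_[-1.0, np.zeros(n)]
        A_ub, b_ub = [], []
        for i in range(X.shape[1]):            # inputs: sum(lambda * x) <= x_o
            A_ub.append(np.r_[0.0, X[:, i]])
            b_ub.append(X[o, i])
        for r in range(Y.shape[1]):            # outputs: phi * y_o <= sum(lambda * y)
            A_ub.append(np.r_[Y[o, r], -Y[:, r]])
            b_ub.append(0.0)
        res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      bounds=[(0, None)] * (n + 1), method="highs")
        scores.append(res.x[0])
    return np.array(scores)

# Nine hypothetical academic units: two inputs (staff, budget) and two outputs
# (teaching output, research output).
X = np.array([[40, 1.2], [55, 2.0], [30, 0.9], [70, 2.5], [45, 1.5],
              [60, 2.2], [35, 1.0], [50, 1.8], [65, 2.4]])
Y = np.array([[300, 20], [420, 35], [250, 18], [480, 30], [350, 28],
              [400, 25], [280, 22], [390, 33], [430, 29]])
print(ccr_output_oriented(X, Y))
```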

  17. Models for efficient integration of solar energy

    DEFF Research Database (Denmark)

    Bacher, Peder

    the available flexibility in the system. In the present thesis, methods related to the operation of solar energy systems and to optimal energy use in buildings are presented. Two approaches for forecasting of solar power based on numerical weather predictions (NWPs) are presented; they are applied to forecast the power output from PV and solar thermal collector systems. The first approach is based on a developed statistical clear-sky model, which is used for estimating the clear-sky output solely based on observations of the output. This enables local effects such as shading from trees to be taken into account. The second approach to solar power forecasting is based on conditional parametric modelling. It is well suited for forecasting of solar thermal power, since it can be made non-linear in the inputs. The approach is also extended to a probabilistic solar power forecasting model. The statistical clear...

  18. Efficient Modelling Methodology for Reconfigurable Underwater Robots

    DEFF Research Database (Denmark)

    Nielsen, Mikkel Cornelius; Blanke, Mogens; Schjølberg, Ingrid

    2016-01-01

    This paper considers the challenge of applying reconfigurable robots in an underwater environment. The main result presented is the development of a model for a system comprised of N, possibly heterogeneous, robots dynamically connected to each other and moving with 6 Degrees of Freedom (DOF). This paper presents an application of the Udwadia-Kalaba Equation for modelling the Reconfigurable Underwater Robots. The constraints developed to enforce the rigid connection between robots in the system are derived through restrictions on relative distances and orientations. To avoid singularities in the orientation and, thereby, allow the robots to undertake any relative configuration, the attitude is represented in Euler parameters.
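
    The Udwadia-Kalaba equation referred to above gives the constrained accelerations in closed form from the unconstrained dynamics and the constraint matrix. A minimal numerical sketch follows, using a toy two-mass system rather than the paper's underwater robot model.

```python
# Hedged sketch of the Udwadia-Kalaba constrained acceleration:
#   qdd = a + M^(-1/2) (A M^(-1/2))^+ (b - A a),
# where a = M^(-1) Q is the unconstrained acceleration and A qdd = b encodes
# the constraints. The mass matrix and constraint below are toy values.
import numpy as np
from scipy.linalg import sqrtm

def udwadia_kalaba(M, Q, A, b):
    a = np.linalg.solve(M, Q)                      # unconstrained acceleration
    M_inv_sqrt = np.linalg.inv(sqrtm(M).real)
    correction = M_inv_sqrt @ np.linalg.pinv(A @ M_inv_sqrt) @ (b - A @ a)
    return a + correction

# Two unit masses on a line, rigidly linked: x2dd - x1dd = 0.
M = np.diag([1.0, 1.0])
Q = np.array([2.0, -1.0])                          # applied forces
A = np.array([[-1.0, 1.0]])
b = np.array([0.0])
print(udwadia_kalaba(M, Q, A, b))                  # both accelerate at 0.5
```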

  19. Efficient modelling of a modular multilevel converter

    DEFF Research Database (Denmark)

    El-Khatib, Walid Ziad; Holbøll, Joachim; Rasmussen, Tonny Wederberg

    2013-01-01

    Looking at the near future, we see that offshore wind penetration into the electrical grid will continue increasing rapidly. Until very recently, the trend has been to place offshore wind farms close to shore, within reach for transmission using HVAC cables, but for larger distances HVDC ... are calculated for the converter. Time-domain simulations on an MMC HVDC test system are performed in the PSCAD/EMTDC software environment based on the new model. The results demonstrate that the modeled MMC-HVDC system with or without a converter transformer is able to operate under specific fault conditions.

  20. Maintaining formal models of living guidelines efficiently

    NARCIS (Netherlands)

    Seyfang, Andreas; Martínez-Salvador, Begoña; Serban, Radu; Wittenberg, Jolanda; Miksch, Silvia; Marcos, Mar; Ten Teije, Annette; Rosenbrand, Kitty C J G M

    2007-01-01

    Translating clinical guidelines into formal models is beneficial in many ways, but expensive. The progress in medical knowledge requires clinical guidelines to be updated at relatively short intervals, leading to the term living guideline. This causes potentially expensive, frequent updates of the

  1. Porcine Esophageal Submucosal Gland Culture Model Shows Capacity for Proliferation and Differentiation

    Directory of Open Access Journals (Sweden)

    Richard J. von Furstenberg

    2017-11-01

    Background & Aims: Although cells comprising esophageal submucosal glands (ESMGs) represent a potential progenitor cell niche, new models are needed to understand their capacity to proliferate and differentiate. By histologic appearance, ESMGs have been associated with both overlying normal squamous epithelium and columnar epithelium. Our aim was to assess ESMG proliferation and differentiation in a 3-dimensional culture model. Methods: We evaluated proliferation in human ESMGs from normal and diseased tissue by proliferating cell nuclear antigen immunohistochemistry. Next, we compared 5-ethynyl-2′-deoxyuridine labeling in porcine ESMGs in vivo before and after esophageal injury with a novel in vitro porcine organoid ESMG model. Microarray analysis of ESMGs in culture was compared with squamous epithelium and fresh ESMGs. Results: Marked proliferation was observed in human ESMGs of diseased tissue. This activated ESMG state was recapitulated after esophageal injury in an in vivo porcine model, in which ESMGs assumed a ductal appearance with increased proliferation compared with control. Isolated and cultured porcine ESMGs produced buds with actively cycling cells and passaged to form epidermal growth factor–dependent spheroids. These spheroids were highly proliferative and were passaged multiple times. Two phenotypes of spheroids were identified: solid squamous (P63+) and hollow/ductal (cytokeratin 7+). Microarray analysis showed spheroids to be distinct from parent ESMGs and enriched for columnar transcripts. Conclusions: Our results suggest that the activated ESMG state, seen in both human disease and our porcine model, may provide a source of cells to repopulate damaged epithelium in a normal manner (squamous) or abnormally (columnar epithelium). This culture model will allow the evaluation of factors that drive ESMGs in the regeneration of injured epithelium. The raw microarray data have been uploaded to the National Center for

  2. Porcine Esophageal Submucosal Gland Culture Model Shows Capacity for Proliferation and Differentiation.

    Science.gov (United States)

    von Furstenberg, Richard J; Li, Joy; Stolarchuk, Christina; Feder, Rachel; Campbell, Alexa; Kruger, Leandi; Gonzalez, Liara M; Blikslager, Anthony T; Cardona, Diana M; McCall, Shannon J; Henning, Susan J; Garman, Katherine S

    2017-11-01

    Although cells comprising esophageal submucosal glands (ESMGs) represent a potential progenitor cell niche, new models are needed to understand their capacity to proliferate and differentiate. By histologic appearance, ESMGs have been associated with both overlying normal squamous epithelium and columnar epithelium. Our aim was to assess ESMG proliferation and differentiation in a 3-dimensional culture model. We evaluated proliferation in human ESMGs from normal and diseased tissue by proliferating cell nuclear antigen immunohistochemistry. Next, we compared 5-ethynyl-2'-deoxyuridine labeling in porcine ESMGs in vivo before and after esophageal injury with a novel in vitro porcine organoid ESMG model. Microarray analysis of ESMGs in culture was compared with squamous epithelium and fresh ESMGs. Marked proliferation was observed in human ESMGs of diseased tissue. This activated ESMG state was recapitulated after esophageal injury in an in vivo porcine model, in which ESMGs assumed a ductal appearance with increased proliferation compared with control. Isolated and cultured porcine ESMGs produced buds with actively cycling cells and passaged to form epidermal growth factor-dependent spheroids. These spheroids were highly proliferative and were passaged multiple times. Two phenotypes of spheroids were identified: solid squamous (P63+) and hollow/ductal (cytokeratin 7+). Microarray analysis showed spheroids to be distinct from parent ESMGs and enriched for columnar transcripts. Our results suggest that the activated ESMG state, seen in both human disease and our porcine model, may provide a source of cells to repopulate damaged epithelium in a normal manner (squamous) or abnormally (columnar epithelium). This culture model will allow the evaluation of factors that drive ESMGs in the regeneration of injured epithelium. The raw microarray data have been uploaded to the National Center for Biotechnology Information Gene Expression Omnibus (accession number: GSE100543).

  3. Small GSK-3 Inhibitor Shows Efficacy in a Motor Neuron Disease Murine Model Modulating Autophagy.

    Directory of Open Access Journals (Sweden)

    Estefanía de Munck

    Amyotrophic lateral sclerosis (ALS) is a progressive motor neuron degenerative disease that has no effective treatment to date. Drug discovery has been hampered by the lack of knowledge of its molecular etiology, together with the limited animal models available for research. Recently, a motor neuron disease animal model has been developed using β-N-methylamino-L-alanine (L-BMAA), a neurotoxic amino acid related to the appearance of ALS. In the present work, the neuroprotective role of VP2.51, a small heterocyclic GSK-3 inhibitor, is analysed in this novel murine model, together with an analysis of autophagy. Daily administration of VP2.51 for two weeks, starting the first day after L-BMAA treatment, leads to total recovery of neurological symptoms and prevents the activation of autophagic processes in rats. These results show that the L-BMAA murine model can be used to test the efficacy of new drugs. In addition, the results confirm the therapeutic potential of GSK-3 inhibitors, and especially VP2.51, for the disease-modifying future treatment of motor neuron disorders like ALS.

  4. Human Commercial Models' Eye Colour Shows Negative Frequency-Dependent Selection.

    Directory of Open Access Journals (Sweden)

    Isabela Rodrigues Nogueira Forti

    In this study we investigated the eye colour of human commercial models registered in the UK (400 female and 400 male) and Brazil (400 female and 400 male) to test the hypothesis that model eye colour frequency was the result of negative frequency-dependent selection. The eye colours of the models were classified as: blue, brown or intermediate. Chi-square analyses of data for countries separated by sex showed that in the United Kingdom brown eyes and intermediate colours were significantly more frequent than expected in comparison to the general United Kingdom population (P<0.001). In Brazil, the most frequent eye colour, brown, was significantly less frequent than expected in comparison to the general Brazilian population. These results support the hypothesis that model eye colour is the result of negative frequency-dependent selection. This could be the result of people using eye colour as a marker of genetic diversity and finding rarer eye colours more attractive because of the potential advantage of the more genetically diverse offspring that could result from such a choice. Eye colour may be important because in comparison to many other physical traits (e.g., hair colour) it is hard to modify, hide or disguise, and it is highly polymorphic.

  5. Human Commercial Models' Eye Colour Shows Negative Frequency-Dependent Selection.

    Science.gov (United States)

    Forti, Isabela Rodrigues Nogueira; Young, Robert John

    2016-01-01

    In this study we investigated the eye colour of human commercial models registered in the UK (400 female and 400 male) and Brazil (400 female and 400 male) to test the hypothesis that model eye colour frequency was the result of negative frequency-dependent selection. The eye colours of the models were classified as: blue, brown or intermediate. Chi-square analyses of data for countries separated by sex showed that in the United Kingdom brown eyes and intermediate colours were significantly more frequent than expected in comparison to the general United Kingdom population (P<0.001). In Brazil, the most frequent eye colour, brown, was significantly less frequent than expected in comparison to the general Brazilian population. These results support the hypothesis that model eye colour is the result of negative frequency-dependent selection. This could be the result of people using eye colour as a marker of genetic diversity and finding rarer eye colours more attractive because of the potential advantage of the more genetically diverse offspring that could result from such a choice. Eye colour may be important because in comparison to many other physical traits (e.g., hair colour) it is hard to modify, hide or disguise, and it is highly polymorphic.
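
    The chi-square comparison against general-population frequencies described in both records can be reproduced with a standard goodness-of-fit test; all counts and population shares below are invented for illustration.

```python
# Sketch of a chi-square goodness-of-fit test: observed eye-colour counts among
# models versus counts expected from assumed general-population frequencies.
from scipy.stats import chisquare

observed = [120, 210, 70]                    # blue, brown, intermediate models
population_freq = [0.40, 0.45, 0.15]         # assumed general-population shares
expected = [f * sum(observed) for f in population_freq]
stat, p = chisquare(observed, f_exp=expected)
print(f"chi2 = {stat:.2f}, p = {p:.4g}")
```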

  6. Histidine decarboxylase knockout mice, a genetic model of Tourette syndrome, show repetitive grooming after induced fear.

    Science.gov (United States)

    Xu, Meiyu; Li, Lina; Ohtsu, Hiroshi; Pittenger, Christopher

    2015-05-19

    Tics, such as are seen in Tourette syndrome (TS), are common and can cause profound morbidity, but they are poorly understood. Tics are potentiated by psychostimulants, stress, and sleep deprivation. Mutations in the gene histidine decarboxylase (Hdc) have been implicated as a rare genetic cause of TS, and Hdc knockout mice have been validated as a genetic model that recapitulates phenomenological and pathophysiological aspects of the disorder. Tic-like stereotypies in this model have not been observed at baseline but emerge after acute challenge with the psychostimulant d-amphetamine. We tested the ability of an acute stressor to stimulate stereotypies in this model, using tone fear conditioning. Hdc knockout mice acquired conditioned fear normally, as manifested by freezing during the presentation of a tone 48h after it had been paired with a shock. During the 30min following tone presentation, knockout mice showed increased grooming. Heterozygotes exhibited normal freezing and intermediate grooming. These data validate a new paradigm for the examination of tic-like stereotypies in animals without pharmacological challenge and enhance the face validity of the Hdc knockout mouse as a pathophysiologically grounded model of tic disorders. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  7. MTO1-deficient mouse model mirrors the human phenotype showing complex I defect and cardiomyopathy.

    Directory of Open Access Journals (Sweden)

    Lore Becker

    Recently, mutations in the mitochondrial translation optimization factor 1 gene (MTO1) were identified as causative in children with hypertrophic cardiomyopathy, lactic acidosis and respiratory chain defect. Here, we describe an MTO1-deficient mouse model generated by gene trap mutagenesis that mirrors the human phenotype remarkably well. As in patients, the most prominent signs and symptoms were cardiovascular and included bradycardia and cardiomyopathy. In addition, the mutant mice showed a marked worsening of arrhythmias during induction and reversal of anaesthesia. The detailed morphological and biochemical workup of murine hearts indicated that the myocardial damage was due to complex I deficiency and mitochondrial dysfunction. In contrast, neurological examination was largely normal in Mto1-deficient mice. A translational consequence of this mouse model may be to caution against anaesthesia-related cardiac arrhythmias, which may be fatal in patients.

  8. An Empirical Study of Efficiency and Accuracy of Probabilistic Graphical Models

    DEFF Research Database (Denmark)

    Nielsen, Jens Dalgaard; Jaeger, Manfred

    2006-01-01

    In this paper we compare Naïve Bayes (NB) models, general Bayes Net (BN) models and Probabilistic Decision Graph (PDG) models w.r.t. accuracy and efficiency. As the basis for our analysis we use graphs of size vs. likelihood that show the theoretical capabilities of the models. We also measure...

  9. An Efficient Virtual Trachea Deformation Model

    Directory of Open Access Journals (Sweden)

    Cui Tong

    2016-01-01

    In this paper, we present a virtual tactile model with a physically based skeleton to simulate force and deformation between a rigid tool and a soft organ. When the virtual trachea is handled, a skeleton model suitable for interactive environments is established, which consists of ligament layers, cartilage rings and muscular bars. In this skeleton, the contact force goes through the ligament layer and produces load effects at the joints, which connect the ligament layer and cartilage rings. Due to the nonlinear shape deformation inside the local neighbourhood of a contact region, the RBF method is applied to modify the result of the linear global shape deformation by adding the nonlinear effect locally. Users are able to handle the virtual trachea, and results from examples using the mechanical properties of the human trachea are given to demonstrate the effectiveness of the approach.
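
    The idea of correcting a linear global deformation with a local radial basis function (RBF) term can be sketched as follows; the mesh, control points and kernel choice are illustrative assumptions rather than the authors' implementation.

```python
# Hedged sketch: a global linear deformation of mesh vertices is adjusted near
# the contact region with an RBF interpolant of residual displacements.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(3)
vertices = rng.uniform(-1, 1, size=(200, 3))       # surface mesh vertices (toy)
global_disp = 0.02 * vertices                       # linear global deformation

# A handful of control points around the contact region with known nonlinear
# residual displacements (e.g., supplied by the skeleton model).
controls = rng.uniform(-0.2, 0.2, size=(8, 3))
residuals = (0.05 * np.exp(-np.sum(controls ** 2, axis=1))[:, None]
             * np.array([0.0, 0.0, -1.0]))          # push inward along -z

rbf = RBFInterpolator(controls, residuals, kernel="thin_plate_spline")
local_disp = rbf(vertices)                          # nonlinear local correction
deformed = vertices + global_disp + local_disp
print(deformed.shape)
```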

  10. Energy Efficient Wireless Sensor Network Modelling Based on Complex Networks

    OpenAIRE

    Xiao, Lin; Wu, Fahui; Yang, Dingcheng; Zhang, Tiankui; Zhu, Xiaoya

    2016-01-01

    The power consumption and energy efficiency of wireless sensor networks (WSNs) are significant problems in Internet of Things networks. In this paper, we consider network topology optimization based on complex network theory to solve the energy efficiency problem of WSNs. We propose an energy-efficient model of WSNs according to the basic small-world principle from complex networks. A small-world network has clustering features that are similar to those of a regular network but also has ...
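
    The small-world construction invoked above is easy to illustrate: rewiring a small fraction of links in a regular ring lattice sharply reduces the average path length (a proxy for multi-hop energy cost) while keeping clustering high. The node count, degree and rewiring probability below are illustrative, not the paper's values.

```python
# Compare a regular ring lattice of sensor nodes with a small-world rewiring.
import networkx as nx

regular = nx.connected_watts_strogatz_graph(n=100, k=6, p=0.0, seed=1)
small_world = nx.connected_watts_strogatz_graph(n=100, k=6, p=0.1, seed=1)

for name, g in [("regular", regular), ("small-world", small_world)]:
    print(name,
          "avg path length:", round(nx.average_shortest_path_length(g), 2),
          "clustering:", round(nx.average_clustering(g), 2))
```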

  11. Model calibration for building energy efficiency simulation

    International Nuclear Information System (INIS)

    Mustafaraj, Giorgio; Marini, Dashamir; Costa, Andrea; Keane, Marcus

    2014-01-01

    Highlights: • Developing a 3D model relating to building architecture, occupancy and HVAC operation. • Two calibration stages developed, final model providing accurate results. • Using an onsite weather station for generating the weather data file in EnergyPlus. • Predicting thermal behaviour of underfloor heating, heat pump and natural ventilation. • Monthly energy saving opportunities related to the heat pump of 20–27% were identified. - Abstract: This research work deals with an Environmental Research Institute (ERI) building where an underfloor heating system and natural ventilation are the main systems used to maintain comfort conditions throughout 80% of the building areas. Firstly, this work involved developing a 3D model relating to building architecture, occupancy and HVAC operation. Secondly, the calibration methodology, which consists of two levels, was then applied in order to ensure accuracy and reduce the likelihood of errors. To further improve the accuracy of calibration, a historical weather data file for the year 2011 was created from the on-site local weather station of the ERI building. After applying the second level of the calibration process, the values of Mean Bias Error (MBE) and Coefficient of Variation of the Root Mean Squared Error (CV(RMSE)) on an hourly basis for heat pump electricity consumption varied within the following ranges: MBE (hourly) from −5.6% to 7.5% and CV(RMSE) (hourly) from 7.3% to 25.1%. Finally, the building was simulated with EnergyPlus to identify further possibilities of energy savings supplied by a water-to-water heat pump to the underfloor heating system. It was found that electricity consumption savings related to the heat pump can vary between 20% and 27% on a monthly basis
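
    The two calibration statistics quoted above are simple to compute from paired measured and simulated series; a minimal sketch follows, noting that sign conventions for MBE vary between guidelines.

```python
# Minimal sketch of the two calibration statistics, assuming hourly arrays of
# measured and simulated values (e.g., heat pump electricity consumption).
import numpy as np

def mbe(measured, simulated):
    """Mean bias error as a percentage of the measured total
    (sign convention: positive when the model under-predicts)."""
    measured, simulated = np.asarray(measured, float), np.asarray(simulated, float)
    return 100.0 * np.sum(measured - simulated) / np.sum(measured)

def cv_rmse(measured, simulated):
    """Coefficient of variation of the RMSE, as a percentage of the measured mean."""
    measured, simulated = np.asarray(measured, float), np.asarray(simulated, float)
    rmse = np.sqrt(np.mean((measured - simulated) ** 2))
    return 100.0 * rmse / np.mean(measured)

measured = [12.0, 15.5, 14.2, 13.8]    # illustrative hourly values, kWh
simulated = [11.2, 16.1, 13.9, 14.6]
print(mbe(measured, simulated), cv_rmse(measured, simulated))
```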

  12. Visual modeling shows that avian host parents use multiple visual cues in rejecting parasitic eggs.

    Science.gov (United States)

    Spottiswoode, Claire N; Stevens, Martin

    2010-05-11

    One of the most striking outcomes of coevolution between species is egg mimicry by brood parasitic birds, resulting from rejection behavior by discriminating host parents. Yet, how exactly does a host detect a parasitic egg? Brood parasitism and egg rejection behavior provide a model system for exploring the relative importance of different visual cues used in a behavioral task. Although hosts are discriminating, we do not know exactly what cues they use, and to answer this it is crucial to account for the receiver's visual perception. Color, luminance ("perceived lightness") and pattern information have never been simultaneously quantified and experimentally tested through a bird's eye. The cuckoo finch Anomalospiza imberbis and its hosts show spectacular polymorphisms in egg appearance, providing a good opportunity for investigating visual discrimination owing to the large range of patterns and colors involved. Here we combine field experiments in Africa with modeling of avian color vision and pattern discrimination to identify the specific visual cues used by hosts in making rejection decisions. We found that disparity between host and foreign eggs in both color and several aspects of pattern (dispersion, principal marking size, and variability in marking size) were important predictors of rejection, especially color. These cues correspond exactly to the principal differences between host and parasitic eggs, showing that hosts use the most reliable available cues in making rejection decisions, and select for parasitic eggs that are increasingly mimetic in a range of visual attributes.

  13. Transchromosomic cell model of Down syndrome shows aberrant migration, adhesion and proteome response to extracellular matrix

    Directory of Open Access Journals (Sweden)

    Cotter Finbarr E

    2009-08-01

    Background: Down syndrome (DS), caused by trisomy of human chromosome 21 (HSA21), is the most common genetic birth defect. Congenital heart defects (CHD) are seen in 40% of DS children, and >50% of all atrioventricular canal defects in infancy are caused by trisomy 21, but the causative genes remain unknown. Results: Here we show that aberrant adhesion and proliferation of DS cells can be reproduced using a transchromosomic model of DS (mouse fibroblasts bearing a supernumerary HSA21). We also demonstrate a decrease of cell migration in transchromosomic cells independently of their adhesion properties. We show that the cell-autonomous proteome response to the presence of collagen VI in the extracellular matrix is strongly affected by trisomy 21. Conclusion: This set of experiments establishes a new model system for genetic dissection of the specific HSA21 gene-overdose contributions to aberrant cell migration, adhesion, proliferation and the specific proteome response to collagen VI, cellular phenotypes linked to the pathogenesis of CHD.

  14. Improving hospital efficiency: a process model of organizational change commitments.

    Science.gov (United States)

    Nigam, Amit; Huising, Ruthanne; Golden, Brian R

    2014-02-01

    Improving hospital efficiency is a critical goal for managers and policy makers. We draw on participant observation of the perioperative coaching program in seven Ontario hospitals to develop knowledge of the process by which the content of change initiatives to increase hospital efficiency is defined. The coaching program was a change initiative involving the use of external facilitators with the goal of increasing perioperative efficiency. Focusing on the role of subjective understandings in shaping initiatives to improve efficiency, we show that physicians, nurses, administrators, and external facilitators all have differing frames of the problems that limit efficiency, and propose different changes that could enhance efficiency. Dynamics of strategic and contested framing ultimately shaped hospital change commitments. We build on work identifying factors that enhance the success of change efforts to improve hospital efficiency, highlighting the importance of subjective understandings and the politics of meaning-making in defining what hospitals change.

  15. Estimating carbon and showing impacts of drought using satellite data in regression-tree models

    Science.gov (United States)

    Boyte, Stephen; Wylie, Bruce K.; Howard, Danny; Dahal, Devendra; Gilmanov, Tagir G.

    2018-01-01

    Integrating spatially explicit biogeophysical and remotely sensed data into regression-tree models enables the spatial extrapolation of training data over large geographic spaces, allowing a better understanding of broad-scale ecosystem processes. The current study presents annual gross primary production (GPP) and annual ecosystem respiration (RE) for 2000–2013 in several short-statured vegetation types using carbon flux data from towers that are located strategically across the conterminous United States (CONUS). We calculate carbon fluxes (annual net ecosystem production [NEP]) for each year in our study period, which includes 2012, when drought and higher-than-normal temperatures influenced vegetation productivity in large parts of the study area. We present and analyse carbon flux dynamics in the CONUS to better understand how drought affects GPP, RE, and NEP. Model accuracy metrics show strong correlation coefficients (r ≥ 94%) between training and estimated data for both GPP and RE. Overall, average annual GPP, RE, and NEP are relatively constant throughout the study period except during 2012, when almost 60% less carbon is sequestered than normal. These results allow us to conclude that this modelling method effectively estimates carbon dynamics through time and allows the exploration of impacts of meteorological anomalies and vegetation types on carbon dynamics.
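
    The general workflow, training a tree-based regressor on flux-tower data with remotely sensed and biogeophysical predictors and then extrapolating spatially, can be sketched as below. A gradient-boosted tree stands in for the study's regression-tree software, and all predictor names and data are synthetic.

```python
# Hedged sketch: fit a tree-based regressor to synthetic "tower" GPP data and
# report the correlation between predictions and held-out values.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
towers = pd.DataFrame({
    "ndvi": rng.uniform(0.1, 0.8, 300),
    "precip_mm": rng.uniform(150, 900, 300),
    "tmean_c": rng.uniform(2, 22, 300),
})
# Synthetic annual GPP (g C m-2 yr-1) with noise, standing in for tower data.
towers["gpp"] = (2000 * towers["ndvi"] + 0.5 * towers["precip_mm"]
                 - 10 * towers["tmean_c"] + rng.normal(0, 50, 300))

X_train, X_test, y_train, y_test = train_test_split(
    towers[["ndvi", "precip_mm", "tmean_c"]], towers["gpp"], random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print("r =", np.corrcoef(model.predict(X_test), y_test)[0, 1])
```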

  16. Visualizing Three-dimensional Slab Geometries with ShowEarthModel

    Science.gov (United States)

    Chang, B.; Jadamec, M. A.; Fischer, K. M.; Kreylos, O.; Yikilmaz, M. B.

    2017-12-01

    Seismic data that characterize the morphology of modern subducted slabs on Earth suggest that a two-dimensional paradigm is no longer adequate to describe the subduction process. Here we demonstrate the effect of data exploration of three-dimensional (3D) global slab geometries with the open source program ShowEarthModel. ShowEarthModel was designed specifically to support data exploration, by focusing on interactivity and real-time response using the Vrui toolkit. Sixteen movies are presented that explore the 3D complexity of modern subduction zones on Earth. The first movie provides a guided tour through the Earth's major subduction zones, comparing the global slab geometry data sets of Gudmundsson and Sambridge (1998), Syracuse and Abers (2006), and Hayes et al. (2012). Fifteen regional movies explore the individual subduction zones and regions intersecting slabs, using the Hayes et al. (2012) slab geometry models where available and the Engdahl and Villasenor (2002) global earthquake data set. Viewing the subduction zones in this way provides an improved conceptualization of the 3D morphology within a given subduction zone as well as the 3D spatial relations between the intersecting slabs. This approach provides a powerful tool for rendering earth properties and broadening capabilities in both Earth Science research and education by allowing for whole earth visualization. The 3D characterization of global slab geometries is placed in the context of 3D slab-driven mantle flow and observations of shear wave splitting in subduction zones. These visualizations contribute to the paradigm shift from a 2D to 3D subduction framework by facilitating the conceptualization of the modern subduction system on Earth in 3D space.

  17. Frontier models for evaluating environmental efficiency: an overview

    NARCIS (Netherlands)

    Oude Lansink, A.G.J.M.; Wall, A.

    2014-01-01

    Our aim in this paper is to provide a succinct overview of frontier-based models used to evaluate environmental efficiency, with a special emphasis on agricultural activity. We begin by providing a brief, up-to-date review of the main approaches used to measure environmental efficiency, with

  18. Herding, minority game, market clearing and efficient markets in a simple spin model framework

    Science.gov (United States)

    Kristoufek, Ladislav; Vosvrda, Miloslav

    2018-01-01

    We present a novel approach towards the financial Ising model. Most studies utilize the model to find settings which generate returns closely mimicking the financial stylized facts such as fat tails, volatility clustering and persistence, and others. We tackle the model utility from the other side and look for the combination of parameters which yields return dynamics of the efficient market in the view of the efficient market hypothesis. Working with the Ising model, we are able to present nicely interpretable results as the model is based on only two parameters. Apart from showing the results of our simulation study, we offer a new interpretation of the Ising model parameters via inverse temperature and entropy. We show that in fact market frictions (to a certain level) and herding behavior of the market participants do not go against market efficiency but what is more, they are needed for the markets to be efficient.
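
    A toy version of such a financial Ising model is easy to simulate: spins on a lattice represent buy/sell decisions, the inverse temperature controls herding, and "returns" are taken as changes in magnetisation whose autocorrelation can be checked against the efficient-market benchmark. The lattice size, the value of beta and the return definition below are illustrative assumptions.

```python
# Toy financial Ising model with Metropolis dynamics; a small lag-1 return
# autocorrelation is the efficient-market-like regime.
import numpy as np

def simulate_returns(n=20, beta=0.3, sweeps=300, seed=0):
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], size=(n, n))
    mags = []
    for _ in range(sweeps):
        for _ in range(n * n):                      # one Metropolis sweep
            i, j = rng.integers(n, size=2)
            nb = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j]
                  + spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
            dE = 2 * spins[i, j] * nb
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                spins[i, j] *= -1
        mags.append(spins.mean())
    return np.diff(np.array(mags))                  # "returns" as magnetisation changes

r = simulate_returns()
print("lag-1 autocorrelation of returns:", np.corrcoef(r[:-1], r[1:])[0, 1])
```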

  19. Nanotoxicity modelling and removal efficiencies of ZnONP.

    Science.gov (United States)

    Fikirdeşici Ergen, Şeyda; Üçüncü Tunca, Esra

    2018-01-02

    In this paper, the aim is to investigate the toxic effect of zinc oxide nanoparticles (ZnONPs) and to analyze the removal of ZnONPs in aqueous medium by a consortium consisting of Daphnia magna and Lemna minor. Three separate test groups are formed: L. minor ([Formula: see text]), D. magna ([Formula: see text]), and L. minor + D. magna ([Formula: see text]), and all these test groups are exposed to three different nanoparticle concentrations ([Formula: see text]). Time-dependent, concentration-dependent, and group-dependent removal efficiencies are statistically compared by the non-parametric Mann-Whitney U test, and statistically significant differences are observed. The optimum removal values are observed at the highest concentration [Formula: see text] for [Formula: see text], [Formula: see text] for [Formula: see text] and [Formula: see text] for [Formula: see text] and realized at [Formula: see text] for all test groups [Formula: see text]. There are no statistically significant differences in removal at low concentrations [Formula: see text] in terms of groups, but [Formula: see text] test groups are more efficient than [Formula: see text] test groups in the removal of ZnONP at the [Formula: see text] concentration. Regression analysis is also performed for all prediction models. Different models are tested and it is seen that cubic models show the highest R2 values. In toxicity models, R2 values are obtained in the interval (0.892, 0.997). A simple solution-phase method is used to synthesize the ZnO nanoparticles. Dynamic Light Scattering and X-Ray Diffraction (XRD) are used to determine the particle size of the synthesized ZnO nanoparticles.
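
    The two analyses named above, a Mann-Whitney U comparison of removal efficiencies between consortia and a cubic regression with its R2, can be sketched as follows; all removal percentages and time points are invented placeholders, since the record elides the actual figures.

```python
# Sketch: Mann-Whitney U test between two test groups, plus a cubic fit with R^2.
import numpy as np
from scipy.stats import mannwhitneyu

removal_daphnia_lemna = [72.0, 75.5, 78.1, 80.2, 76.4]   # % ZnONP removed (toy)
removal_lemna_only    = [61.3, 64.8, 60.9, 66.0, 63.2]
u, p = mannwhitneyu(removal_daphnia_lemna, removal_lemna_only,
                    alternative="two-sided")
print(f"U = {u:.1f}, p = {p:.4f}")

# Cubic (third-order polynomial) model of removal vs. time and its R^2.
t = np.array([6, 12, 24, 48, 72, 96], float)             # hours (toy)
y = np.array([20, 35, 52, 68, 74, 76], float)            # % removed (toy)
coeffs = np.polyfit(t, y, deg=3)
pred = np.polyval(coeffs, t)
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - np.mean(y)) ** 2)
print("cubic R^2 =", round(r2, 3))
```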

  20. Efficient Accommodation of Local Minima in Watershed Model Calibration

    National Research Council Canada - National Science Library

    Skahill, Brian E; Doherty, John

    2006-01-01

    .... Nevertheless, the method also has advantages, chief among these being its model-run efficiency, and its ability to report useful information on parameter sensitivities and covariances as a by-product of its use...

  1. Efficiency Of Different Teaching Models In Teaching Of Frisbee Ultimate

    Directory of Open Access Journals (Sweden)

    Žuffová Zuzana

    2015-05-01

    The aim of the study was to verify the efficiency of two frisbee ultimate teaching models at 8-year grammar schools relative to age. A game-based model (Teaching Games for Understanding, TGfU) was used in the experimental group, and the traditional model based on teaching techniques was used in the control group. Six groups of female students took part in the experiment: experimental group 1 (n=10, age=11.6), experimental group 2 (n=12, age=13.8), experimental group 3 (n=14, age=15.8), control group 1 (n=11, age=11.7), control group 2 (n=10, age=13.8) and control group 3 (n=9, age=15.8). Efficiency of the teaching models was evaluated based on game performance and specific knowledge results. Game performance was evaluated by the method of game performance assessment based on the GPAI (Game Performance Assessment Instrument) through video recordings. To verify the level of knowledge, we used a knowledge test, which consisted of questions related to knowledge of the rules and tactics of frisbee ultimate. The Mann-Whitney U-test was used for statistical evaluation. Game performance assessment and knowledge level indicated higher efficiency of TGfU in general, though the differences were mostly statistically insignificant. Experimental groups 1 and 2 were significantly better in the indicator that evaluates the tactical aspect of game performance - decision making (p<0.05). Experimental group 3 was better in the indicator that evaluates skill execution - disc catching. The results showed that the students of the classes taught with the game-based model reached partially better game performance in general. Experimental groups achieved from 79.17 % to 80 % of correct answers relating to the rules and from 75 % to 87.5 % of correct answers relating to the tactical knowledge in the knowledge test. Control groups achieved from 57.69 % to 72.22 % of correct answers relating to the rules and from 51.92 % to 72.22 % of correct answers relating to the tactical knowledge in the knowledge test.

  2. Efficient Work Team Scheduling: Using Psychological Models of Knowledge Retention to Improve Code Writing Efficiency

    Directory of Open Access Journals (Sweden)

    Michael J. Pelosi

    2014-12-01

    Development teams and programmers must retain critical information about their work during work intervals and gaps in order to improve future performance when work resumes. Despite time lapses, project managers want to maximize coding efficiency and effectiveness. By developing a mathematically justified, practically useful, and computationally tractable quantitative and cognitive model of learning and memory retention, this study establishes calculations designed to maximize scheduling payoff and optimize developer efficiency and effectiveness.
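
    The record does not specify the retention model used, but the general idea can be illustrated with an Ebbinghaus-style exponential forgetting curve, R(t) = exp(-t/S): shorter work gaps preserve more project knowledge, which is what a schedule optimizer would trade off against other constraints. The stability parameter below is purely an assumption.

```python
# Illustrative sketch only: exponential forgetting curve used to compare how
# much project knowledge two work-gap schedules would preserve.
import numpy as np

def retention(days_since_work, stability_days=20.0):
    """Fraction of knowledge retained after a work gap (assumed model)."""
    return np.exp(-np.asarray(days_since_work, float) / stability_days)

# Resuming after 3 days vs. after 14 days away from the code base.
print(retention([3, 14]))
```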

  3. Semiparametric Efficient Adaptive Estimation of the PTTGARCH model

    OpenAIRE

    Ciccarelli, Nicola

    2016-01-01

    Financial data sets exhibit conditional heteroskedasticity and asymmetric volatility. In this paper we derive a semiparametric efficient adaptive estimator of a conditional heteroskedasticity and asymmetric volatility GARCH-type model (i.e., the PTTGARCH(1,1) model). Via kernel density estimation of the unknown density function of the innovation and via the Newton-Raphson technique applied on the root-n-consistent quasi-maximum likelihood estimator, we construct a more efficient estimator tha...

  4. An efficient and simplified model for forecasting using SRM

    International Nuclear Information System (INIS)

    Asif, H.M.; Hyat, M.F.; Ahmad, T.

    2014-01-01

    Learning from continuous financial systems plays a vital role in enterprise operations. One of the most sophisticated non-parametric supervised learning classifiers, SVM (Support Vector Machines), provides robust and accurate results; however, it may require intense computation and other resources. The heart of SLT (Statistical Learning Theory), the SRM (Structural Risk Minimization) principle, can also be used for model selection. In this paper, we focus on comparing the performance of model estimation using SRM with SVR (Support Vector Regression) for forecasting the retail sales of consumer products. The potential benefits of an accurate sales forecasting technique in businesses are immense. Retail sales forecasting is an integral part of strategic business planning in areas such as sales planning, marketing research, pricing, production planning and scheduling. Performance comparison of support vector regression with model selection using SRM shows comparable results to SVR but in a computationally efficient manner. This research targeted real-life data to draw conclusions after investigating computer-generated datasets for different types of model building. (author)

  5. An Efficient and Simplified Model for Forecasting using SRM

    Directory of Open Access Journals (Sweden)

    Hafiz Muhammad Shahzad Asif

    2014-01-01

    Full Text Available Learning from continuous financial systems plays a vital role in enterprise operations. One of the most sophisticated non-parametric supervised learning classifiers, SVM (Support Vector Machines), provides robust and accurate results; however, it may require intense computation and other resources. The heart of SLT (Statistical Learning Theory), the SRM (Structural Risk Minimization) principle, can also be used for model selection. In this paper, we focus on comparing the performance of model estimation using SRM with SVR (Support Vector Regression) for forecasting the retail sales of consumer products. The potential benefits of an accurate sales forecasting technique in businesses are immense. Retail sales forecasting is an integral part of strategic business planning in areas such as sales planning, marketing research, pricing, production planning and scheduling. Performance comparison shows that model selection using SRM gives results comparable to SVR but in a computationally efficient manner. This research targeted real-life data to confirm the results after investigating computer-generated datasets for different types of model building.

  6. Evaluating Energy Efficiency Policies with Energy-Economy Models

    Energy Technology Data Exchange (ETDEWEB)

    Mundaca, Luis; Neij, Lena; Worrell, Ernst; McNeil, Michael A.

    2010-08-01

    The growing complexities of energy systems, environmental problems and technology markets are driving and testing most energy-economy models to their limits. To further advance bottom-up models from a multidisciplinary energy efficiency policy evaluation perspective, we review and critically analyse bottom-up energy-economy models and corresponding evaluation studies on energy efficiency policies to induce technological change. We use the household sector as a case study. Our analysis focuses on decision frameworks for technology choice, type of evaluation being carried out, treatment of market and behavioural failures, evaluated policy instruments, and key determinants used to mimic policy instruments. Although the review confirms criticism related to energy-economy models (e.g. unrealistic representation of decision-making by consumers when choosing technologies), they provide valuable guidance for policy evaluation related to energy efficiency. Different areas to further advance models remain open, particularly related to modelling issues, techno-economic and environmental aspects, behavioural determinants, and policy considerations.

  7. Etoposide incorporated into camel milk phospholipids liposomes shows increased activity against fibrosarcoma in a mouse model.

    Science.gov (United States)

    Maswadeh, Hamzah M; Aljarbou, Ahmad N; Alorainy, Mohammed S; Alsharidah, Mansour S; Khan, Masood A

    2015-01-01

    Phospholipids were isolated from camel milk and identified by using high performance liquid chromatography and gas chromatography-mass spectrometry (GC/MS). Anticancer drug etoposide (ETP) was entrapped in liposomes, prepared from camel milk phospholipids, to determine its activity against fibrosarcoma in a murine model. Fibrosarcoma was induced in mice by injecting benzopyrene (BAP) and tumor-bearing mice were treated with various formulations of etoposide, including etoposide entrapped camel milk phospholipids liposomes (ETP-Cam-liposomes) and etoposide-loaded DPPC-liposomes (ETP-DPPC-liposomes). The tumor-bearing mice treated with ETP-Cam-liposomes showed slow progression of tumors and increased survival compared to free ETP or ETP-DPPC-liposomes. These results suggest that ETP-Cam-liposomes may prove to be a better drug delivery system for anticancer drugs.

  8. Etoposide Incorporated into Camel Milk Phospholipids Liposomes Shows Increased Activity against Fibrosarcoma in a Mouse Model

    Directory of Open Access Journals (Sweden)

    Hamzah M. Maswadeh

    2015-01-01

    Full Text Available Phospholipids were isolated from camel milk and identified by using high performance liquid chromatography and gas chromatography-mass spectrometry (GC/MS). Anticancer drug etoposide (ETP) was entrapped in liposomes, prepared from camel milk phospholipids, to determine its activity against fibrosarcoma in a murine model. Fibrosarcoma was induced in mice by injecting benzopyrene (BAP) and tumor-bearing mice were treated with various formulations of etoposide, including etoposide entrapped camel milk phospholipids liposomes (ETP-Cam-liposomes) and etoposide-loaded DPPC-liposomes (ETP-DPPC-liposomes). The tumor-bearing mice treated with ETP-Cam-liposomes showed slow progression of tumors and increased survival compared to free ETP or ETP-DPPC-liposomes. These results suggest that ETP-Cam-liposomes may prove to be a better drug delivery system for anticancer drugs.

  9. Phenolic Acids from Wheat Show Different Absorption Profiles in Plasma: A Model Experiment with Catheterized Pigs

    DEFF Research Database (Denmark)

    Nørskov, Natalja; Hedemann, Mette Skou; Theil, Peter Kappel

    2013-01-01

    The concentration and absorption of the nine phenolic acids of wheat were measured in a model experiment with catheterized pigs fed whole grain wheat and wheat aleurone diets. Six pigs in a repeated crossover design were fitted with catheters in the portal vein and mesenteric artery to study absorption. Benzoic acid derivatives showed low concentrations in the plasma for both diets. The exception was p-hydroxybenzoic acid, with a plasma concentration (4 ± 0.4 μM) much higher than the other plant phenolic acids, likely because it is an intermediate in phenolic acid metabolism. It was concluded that plant phenolic acids undergo extensive interconversion in the colon and that their absorption profiles reflected their low bioavailability in the plant matrix.

  10. Management Index Systems and Energy Efficiency Diagnosis Model for Power Plant: Cases in China

    Directory of Open Access Journals (Sweden)

    Jing-Min Wang

    2016-01-01

    Full Text Available In recent years, the energy efficiency of thermal power plants has contributed largely to that of the industry. A thorough understanding of influencing factors, as well as the establishment of a scientific and comprehensive diagnosis model, plays a key role in the operational efficiency and competitiveness of a thermal power plant. Referring to domestic and international research on energy efficiency management, and based on the Cloud model and the data envelopment analysis (DEA) model, a qualitative and quantitative index system and a comprehensive diagnostic model (CDM) are constructed. To test the rationality and usability of CDM, case studies of large-scale Chinese thermal power plants have been conducted. In these cases, CDM captures qualitative factors such as technology and management. The results show that, compared with a conventional model that only considers production running parameters, the CDM is better adapted to reality. It can provide entities with efficient instruments for energy efficiency diagnosis.

  11. Ebola Virus Makona Shows Reduced Lethality in an Immune-deficient Mouse Model.

    Science.gov (United States)

    Smither, Sophie J; Eastaugh, Lin; Ngugi, Sarah; O'Brien, Lyn; Phelps, Amanda; Steward, Jackie; Lever, Mark Stephen

    2016-10-15

    Ebola virus Makona (EBOV-Makona; from the 2013-2016 West Africa outbreak) shows decreased virulence in an immune-deficient mouse model, compared with a strain from 1976. Unlike other filoviruses tested, EBOV-Makona may be slightly more virulent by the aerosol route than by the injected route, as 2 mice died following aerosol exposure, compared with no mortality among mice that received intraperitoneal injection of equivalent or higher doses. Although most mice did not succumb to infection, the detection of an immunoglobulin G antibody response along with observed clinical signs suggest that the mice were infected but able to clear the infection and recover. We hypothesize that this may be due to the growth rates and kinetics of the virus, which appear slower than that for other filoviruses and consequently give more time for an immune response that results in clearance of the virus. In this instance, the immune-deficient mouse model is unlikely to be appropriate for testing medical countermeasures against this EBOV-Makona stock but may provide insight into pathogenesis and the immune response to virus. © Crown copyright 2016.

  12. Energetics and efficiency of a molecular motor model

    OpenAIRE

    Fogedby, Hans C.; Svane, Axel

    2013-01-01

    The energetics and efficiency of a linear molecular motor model proposed by Mogilner et al. (Phys. Lett. 237, 297 (1998)) is analyzed from an analytical point of view. The model which is based on protein friction with a track is described by coupled Langevin equations for the motion in combination with coupled master equations for the ATP hydrolysis. Here the energetics and efficiency of the motor is addressed using a many body scheme with focus on the efficiency at maximum power (EMP). It is...

  13. Modeling of Methods to Control Heat-Consumption Efficiency

    Science.gov (United States)

    Tsynaeva, E. A.; Tsynaeva, A. A.

    2016-11-01

    In this work, consideration has been given to thermophysical processes in automated heat consumption control systems (AHCCSs) of buildings, flow diagrams of these systems, and mathematical models describing the thermophysical processes during the systems' operation; an analysis of adequacy of the mathematical models has been presented. A comparison has been made of the operating efficiency of the systems and the methods to control the efficiency. It has been determined that the operating efficiency of an AHCCS depends on its diagram and the temperature chart of central quality control (CQC) and also on the temperature of a low-grade heat source for the system with a heat pump.

  14. OEDGE modeling of plasma contamination efficiency of Ar puffing from different divertor locations in EAST

    Science.gov (United States)

    Pengfei, ZHANG; Ling, ZHANG; Zhenwei, WU; Zong, XU; Wei, GAO; Liang, WANG; Qingquan, YANG; Jichan, XU; Jianbin, LIU; Hao, QU; Yong, LIU; Juan, HUANG; Chengrui, WU; Yumei, HOU; Zhao, JIN; J, D. ELDER; Houyang, GUO

    2018-04-01

    Modeling with OEDGE was carried out to assess the initial and long-term plasma contamination efficiency of Ar puffing from different divertor locations, i.e. the inner divertor, the outer divertor and the dome, in the EAST superconducting tokamak for typical ohmic plasma conditions. It was found that the initial Ar contamination efficiency is dependent on the local plasma conditions at the different gas puff locations. However, it quickly approaches a similar steady state value for Ar recycling efficiency >0.9. OEDGE modeling shows that the final equilibrium Ar contamination efficiency is significantly lower for the more closed lower divertor than that for the upper divertor.

  15. The reading efficiency model: an extension of the componential model of reading.

    Science.gov (United States)

    Høien-Tengesdal, Ingjerd; Høien, Torleiv

    2012-01-01

    The purpose of the present study was twofold: First, the authors investigated whether an extended version of the componential model of reading (CMR; Model 2), including decoding rate and oral vocabulary comprehension, accounted for more of the variance in reading comprehension than the commonly used measures of the cognitive factors in the CMR. Second, the authors investigated the fit of a new model, titled the reading efficiency model (REM), which deviates from earlier models in how reading is defined. In the study, 780 Norwegian students from Grades 6 and 10 were recruited. Hierarchical regression analyses showed that the extended model did not account for more of the variance in reading comprehension than the traditional CMR model (Model 1). In the second part of the study the authors used structural equation modeling (SEM) to explore the REM. The results showed that the REM explained an overall larger amount of variance in reading ability, compared to Model 1 and Model 2. This is probably due to the new definition of reading applied in the REM. The authors believe their model will more fully reflect students' differentiated reading skills by including reading fluency in the definition of reading.

  16. Modeling adaptation of carbon use efficiency in microbial communities

    Directory of Open Access Journals (Sweden)

    Steven D Allison

    2014-10-01

    Full Text Available In new microbial-biogeochemical models, microbial carbon use efficiency (CUE) is often assumed to decline with increasing temperature. Under this assumption, soil carbon losses under warming are small because microbial biomass declines. Yet there is also empirical evidence that CUE may adapt (i.e., become less sensitive) to warming, thereby mitigating negative effects on microbial biomass. To analyze potential mechanisms of CUE adaptation, I used two theoretical models to implement a tradeoff between microbial uptake rate and CUE. This rate-yield tradeoff is based on thermodynamic principles and suggests that microbes with greater investment in resource acquisition should have lower CUE. Microbial communities or individuals could adapt to warming by reducing investment in enzymes and uptake machinery. Consistent with this idea, a simple analytical model predicted that adaptation can offset 50% of the warming-induced decline in CUE. To assess the ecosystem implications of the rate-yield tradeoff, I quantified CUE adaptation in a spatially-structured simulation model with 100 microbial taxa and 12 soil carbon substrates. This model predicted much lower CUE adaptation, likely due to additional physiological and ecological constraints on microbes. In particular, specific resource acquisition traits are needed to maintain stoichiometric balance, and taxa with high CUE and low enzyme investment rely on low-yield, high-enzyme neighbors to catalyze substrate degradation. In contrast to published microbial models, simulations with greater CUE adaptation also showed greater carbon storage under warming. This pattern occurred because microbial communities with stronger CUE adaptation produced fewer degradative enzymes, despite increases in biomass. Thus the rate-yield tradeoff prevents CUE adaptation from driving ecosystem carbon loss under climate warming.
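    A minimal numerical reading of the adaptation argument follows, with an assumed linear CUE-temperature relation and an adaptation factor that scales the slope; the functional form and parameter values are illustrative only, chosen to mirror the quoted 50% offset.

    # Minimal sketch of the CUE-adaptation idea: CUE declines linearly with warming,
    # and "adaptation" reduces the effective slope. Functional form and parameter
    # values are assumptions chosen only to mirror the 50% offset quoted in the text.
    def cue(temp_c, cue_ref=0.31, slope=0.016, t_ref=15.0, adaptation=0.0):
        """Carbon use efficiency at temperature temp_c; adaptation in [0, 1]."""
        return cue_ref - (1.0 - adaptation) * slope * (temp_c - t_ref)

    warming = 5.0  # degrees C of warming above the reference temperature (assumed)
    no_adapt = cue(15.0 + warming, adaptation=0.0)
    half_adapt = cue(15.0 + warming, adaptation=0.5)
    print(f"CUE with no adaptation:  {no_adapt:.3f}")
    print(f"CUE with 50% adaptation: {half_adapt:.3f}")
    print(f"decline offset: {(half_adapt - no_adapt) / (cue(15.0) - no_adapt):.0%}")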

  17. Model-based and model-free “plug-and-play” building energy efficient control

    International Nuclear Information System (INIS)

    Baldi, Simone; Michailidis, Iakovos; Ravanis, Christos; Kosmatopoulos, Elias B.

    2015-01-01

    Highlights: • “Plug-and-play” Building Optimization and Control (BOC) driven by building data. • Ability to handle the large-scale and complex nature of the BOC problem. • Adaptation to learn the optimal BOC policy when no building model is available. • Comparisons with rule-based and advanced BOC strategies. • Simulation and real-life experiments in a ten-office building. - Abstract: Considerable research efforts in Building Optimization and Control (BOC) have been directed toward the development of “plug-and-play” BOC systems that can achieve energy efficiency without compromising thermal comfort and without the need of qualified personnel engaged in a tedious and time-consuming manual fine-tuning phase. In this paper, we report on how a recently introduced Parametrized Cognitive Adaptive Optimization – abbreviated as PCAO – can be used toward the design of both model-based and model-free “plug-and-play” BOC systems, with minimum human effort required to accomplish the design. In the model-based case, PCAO assesses the performance of its control strategy via a simulation model of the building dynamics; in the model-free case, PCAO optimizes its control strategy without relying on any model of the building dynamics. Extensive simulation and real-life experiments performed on a 10-office building demonstrate the effectiveness of the PCAO–BOC system in providing significant energy efficiency and improved thermal comfort. The mechanisms embedded within PCAO render it capable of automatically and quickly learning an efficient BOC strategy either in the presence of complex nonlinear simulation models of the building dynamics (model-based) or when no model for the building dynamics is available (model-free). Comparative studies with alternative state-of-the-art BOC systems show the effectiveness of the PCAO–BOC solution

  18. Atovaquone Nanosuspensions Show Excellent Therapeutic Effect in a New Murine Model of Reactivated Toxoplasmosis

    Science.gov (United States)

    Schöler, Nadja; Krause, Karsten; Kayser, Oliver; Müller, Rainer H.; Borner, Klaus; Hahn, Helmut; Liesenfeld, Oliver

    2001-01-01

    Immunocompromised patients are at risk of developing toxoplasma encephalitis (TE). Standard therapy regimens (including sulfadiazine plus pyrimethamine) are hampered by severe side effects. While atovaquone has potent in vitro activity against Toxoplasma gondii, it is poorly absorbed after oral administration and shows poor therapeutic efficacy against TE. To overcome the low absorption of atovaquone, we prepared atovaquone nanosuspensions (ANSs) for intravenous (i.v.) administration. At concentrations higher than 1.0 μg/ml, ANS did not exert cytotoxicity and was as effective as free atovaquone (i.e., atovaquone suspended in medium) against T. gondii in freshly isolated peritoneal macrophages. In a new murine model of TE that closely mimics reactivated toxoplasmosis in immunocompromised hosts, using mice with a targeted mutation in the gene encoding the interferon consensus sequence binding protein, i.v.-administered ANS doses of 10.0 mg/kg of body weight protected the animals against development of TE and death. Atovaquone was detectable in the sera, brains, livers, and lungs of mice by high-performance liquid chromatography. Development of TE and mortality in mice treated with 1.0- or 0.1-mg/kg i.v. doses of ANS did not differ from that in mice treated orally with 100 mg of atovaquone/kg. In conclusion, i.v. ANSs may prove to be an effective treatment alternative for patients with TE. PMID:11353624

  19. New azole derivatives showing antimicrobial effects and their mechanism of antifungal activity by molecular modeling studies.

    Science.gov (United States)

    Doğan, İnci Selin; Saraç, Selma; Sari, Suat; Kart, Didem; Eşsiz Gökhan, Şebnem; Vural, İmran; Dalkara, Sevim

    2017-04-21

    Azole antifungals are potent inhibitors of fungal lanosterol 14α demethylase (CYP51) and have been used for eradication of systemic candidiasis clinically. Herein we report the design, synthesis, and biological evaluation of a series of 1-phenyl/1-(4-chlorophenyl)-2-(1H-imidazol-1-yl)ethanol esters. Many of these derivatives showed fungal growth inhibition at very low concentrations. Minimal inhibition concentration (MIC) value of 15 was 0.125 μg/mL against Candida albicans. Additionally, some of our compounds, such as 19 (MIC: 0.25 μg/mL), were potent against resistant C. glabrata, a fungal strain less susceptible to some first-line antifungal drugs. We confirmed their antifungal efficacy by antibiofilm test and their safety against human monocytes by cytotoxicity assay. To rationalize their mechanism of action, we performed computational analysis utilizing molecular docking and dynamics simulations on the C. albicans and C. glabrata CYP51 (CACYP51 and CGCYP51) homology models we built. Leu130 and T131 emerged as possible key residues for inhibition of CGCYP51 by 19. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  20. Modeling mud flocculation using variable collision and breakup efficiencies

    Science.gov (United States)

    Strom, K.; Keyvani, A.

    2013-12-01

    Solution of the Winterwerp (1998) floc growth and breakup equation yields the time-dependent median floc size as an outcome of collision-driven floc growth and shear-induced floc breakage. The formulation is attractive in that it is an ODE that yields fast solutions for median floc size and can be incorporated into sediment transport models. The Winterwerp (1998) floc size equation was used to model floc growth and breakup data from laboratory experiments conducted under both constant and variable turbulent shear rates (Keyvani 2013). The data showed that the floc growth rate starts out very high and then decreases with size, asymptotically approaching an equilibrium size. In modeling the data, the Winterwerp (1998) model and the Son and Hsu (2008) variant were found to capture the initial fast growth phase and the equilibrium state, but were not able to capture the slow-growth phase well. This resulted in flocs reaching the equilibrium state in the models much faster than in the experimental data. The objective of this work was to improve the ability of the general Winterwerp (1998) formulation to capture the slow-growth phase and more accurately predict the time to equilibrium. To do this, a full parameter sensitivity analysis was conducted using the Winterwerp (1998) model. Several modifications were tested, including the variable fractal dimension and yield strength extensions of Son and Hsu (2008, 2009). The best match with the in-house data, and data from the literature, was achieved using floc collision and breakup efficiency coefficients that decrease with floc size. The net result of the decrease in both of these coefficients is that floc growth slows without modification of the equilibrium size. Inclusion of these new functions allows substantial improvement in modeling the growth phase of flocs in both steady and variable turbulence conditions. The improvement is particularly noticeable when modeling continual growth in a decaying turbulence field.
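    As a schematic of the approach described (not the exact Winterwerp (1998) formulation), the sketch below integrates a growth-minus-breakup ODE for median floc size in which both the collision efficiency and the breakup efficiency decrease with floc size; all coefficients and functional forms are illustrative assumptions tuned only to give a plausible equilibrium size.

    # Schematic sketch of a Winterwerp-style floc size ODE with collision and
    # breakup efficiencies that decrease with floc size. The growth/breakup terms
    # and all coefficient values are illustrative assumptions, not the exact
    # Winterwerp (1998) formulation.
    import numpy as np
    from scipy.integrate import solve_ivp

    G = 35.0          # turbulent shear rate [1/s] (assumed constant here)
    c = 0.1           # sediment concentration [kg/m^3] (assumed)
    D_p = 4e-6        # primary particle size [m]
    k_A, k_B = 0.014, 0.8   # assumed growth / breakup rate constants

    def alpha(D):     # collision efficiency, assumed to decay with floc size
        return 1.0 / (1.0 + D / 100e-6)

    def beta(D):      # breakup efficiency, assumed to decay with floc size
        return 1.0 / (1.0 + D / 200e-6)

    def dDdt(t, y):
        D = y[0]
        growth = alpha(D) * k_A * c * G * D
        breakup = beta(D) * k_B * G ** 1.5 * D * (D - D_p)
        return [growth - breakup]

    sol = solve_ivp(dDdt, (0.0, 3600.0), [D_p], max_step=5.0)
    print(f"median floc size after 1 h: {sol.y[0, -1] * 1e6:.1f} micrometres")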

  1. Environmental efficiency analysis of power industry in China based on an entropy SBM model

    International Nuclear Information System (INIS)

    Zhou, Yan; Xing, Xinpeng; Fang, Kuangnan; Liang, Dapeng; Xu, Chunlin

    2013-01-01

    In order to assess the environmental efficiency of power industry in China, this paper first proposes a new non-radial DEA approach by integrating the entropy weight and the SBM model. This will improve the assessment reliability and reasonableness. Using the model, this study then evaluates the environmental efficiency of the Chinese power industry at the provincial level during 2005–2010. The results show a marked difference in environmental efficiency of the power industry among Chinese provinces. Although the annual, average, environmental efficiency level fluctuates, there is an increasing trend. The Tobit regression analysis reveals the innovation ability of enterprises, the proportion of electricity generated by coal-fired plants and the generation capacity have a significantly positive effect on environmental efficiency. However the waste fees levied on waste discharge and investment in industrial pollutant treatment are negatively associated with environmental efficiency. - Highlights: ► We assess the environmental efficiency of power industry in China by E-SBM model. ► Environmental efficiency of power industry is different among provinces. ► Efficiency stays at a higher level in the eastern and the western area. ► Proportion of coal-fired plants has a positive effect on the efficiency. ► Waste fees and the investment have a negative effect on the efficiency

  2. Evaluation of discrete modeling efficiency of asynchronous electric machines

    OpenAIRE

    Byczkowska-Lipińska, Liliana; Stakhiv, Petro; Hoholyuk, Oksana; Vasylchyshyn, Ivanna

    2011-01-01

    In the paper, the problem of constructing effective mathematical macromodels in the form of state variables for asynchronous motor transient analysis is considered. The macromodels are compared with traditional mathematical models of asynchronous motors, including models built into MATLAB/Simulink software, and their efficiency is analysed.

  3. Energy technologies and energy efficiency in economic modelling

    DEFF Research Database (Denmark)

    Klinge Jacobsen, Henrik

    1998-01-01

    This paper discusses different approaches to incorporating energy technologies and technological development in energy-economic models. Technological development is a very important issue in long-term energy demand projections and in environmental analyses. Different assumptions on technological development are discussed. This paper examines the effect on aggregate energy efficiency of using technological models to describe a number of specific technologies and of incorporating these models in an economic model. Different effects from the technology representation are illustrated. Vintage effects illustrate the dependence of average efficiencies and productivity on capacity utilisation rates. In the long run, regulation induced by environmental policies is also very important for the improvement of aggregate energy efficiency in the energy supply sector. A Danish policy to increase the share...

  4. Showing a model's eye movements in examples does not improve learning of problem-solving tasks

    NARCIS (Netherlands)

    van Marlen, Tim; van Wermeskerken, Margot; Jarodzka, Halszka; van Gog, Tamara

    2016-01-01

    Eye movement modeling examples (EMME) are demonstrations of a computer-based task by a human model (e.g., a teacher), with the model's eye movements superimposed on the task to guide learners' attention. EMME have been shown to enhance learning of perceptual classification tasks; however, it is an

  5. Estimation of the efficiency of Japanese hospitals using a dynamic and network data envelopment analysis model.

    Science.gov (United States)

    Kawaguchi, Hiroyuki; Tone, Kaoru; Tsutsui, Miki

    2014-06-01

    The purpose of this study was to perform an interim evaluation of the policy effect of the current reform of Japan's municipal hospitals. We focused on efficiency improvements both within hospitals and within two separate internal hospital organizations. Hospitals have two heterogeneous internal organizations: the medical examination division and administration division. The administration division carries out business management and the medical-examination division provides medical care services. We employed a dynamic-network data envelopment analysis model (DN model) to perform the evaluation. The model makes it possible to simultaneously estimate both the efficiencies of separate organizations and the dynamic changes of the efficiencies. This study is the first empirical application of the DN model in the healthcare field. Results showed that the average overall efficiency obtained with the DN model was 0.854 for 2007. The dynamic change in efficiency scores from 2007 to 2009 was slightly lower. The average efficiency score was 0.862 for 2007 and 0.860 for 2009. The average estimated efficiency of the administration division decreased from 0.867 for 2007 to 0.8508 for 2009. In contrast, the average efficiency of the medical-examination division increased from 0.858 for 2007 to 0.870 for 2009. We were unable to find any significant improvement in efficiency despite the reform policy. Thus, there are no positive policy effects despite the increased financial support from the central government.

  6. Testing DEA Models of Efficiency in Norwegian Psychiatric Outpatient Clinics

    OpenAIRE

    Kittelsen, Sverre A.C.; Magnussen, Jon

    2009-01-01

    While measures of output in mental health care are even harder to find than in other health care activities, some indicators are available. In modelling productive efficiency the problem is to select the output variables that best reflect the use of resources, in the sense that these variables have a significant impact on measures of efficiency. The paper analyses cross-sectional data on the psychiatric outpatient clinics of Norway using the Data Envelopment Analysis (DEA) non-parametric effi...

  7. Modeling technical efficiency of inshore fishery using data envelopment analysis

    Science.gov (United States)

    Rahman, Rahayu; Zahid, Zalina; Khairi, Siti Shaliza Mohd; Hussin, Siti Aida Sheikh

    2016-10-01

    Fishery industry contributes significantly to the economy of Malaysia. This study utilized a Data Envelopment Analysis application to estimate the technical efficiency of fishery in Terengganu, a state on the eastern coast of Peninsular Malaysia, based on multiple outputs, i.e. total fish landing and income of fishermen, with six inputs, i.e. engine power, vessel size, number of trips, number of workers, cost and operation distance. The data were collected by a survey conducted between November and December 2014. The decision making units (DMUs) involved 100 fishermen from 10 fishery areas. The results showed that the technical efficiencies in Season I (dry season) and Season II (rainy season) were 90.2% and 66.7%, respectively. About 27% of the fishermen were rated as efficient during Season I, while only 13% of the fishermen achieved full efficiency (100%) during Season II. The results also showed a significant difference in efficiency performance between the fishery areas.

  8. Modeling Dynamic Systems with Efficient Ensembles of Process-Based Models.

    Directory of Open Access Journals (Sweden)

    Nikola Simidjievski

    Full Text Available Ensembles are a well-established machine learning paradigm, leading to accurate and robust models, predominantly applied to predictive modeling tasks. Ensemble models comprise a finite set of diverse predictive models whose combined output is expected to yield an improved predictive performance as compared to an individual model. In this paper, we propose a new method for learning ensembles of process-based models of dynamic systems. The process-based modeling paradigm employs domain-specific knowledge to automatically learn models of dynamic systems from time-series observational data. Previous work has shown that ensembles based on sampling observational data (i.e., bagging and boosting) significantly improve predictive performance of process-based models. However, this improvement comes at the cost of a substantial increase of the computational time needed for learning. To address this problem, the paper proposes a method that aims at efficiently learning ensembles of process-based models, while maintaining their accurate long-term predictive performance. This is achieved by constructing ensembles with sampling domain-specific knowledge instead of sampling data. We apply the proposed method to and evaluate its performance on a set of problems of automated predictive modeling in three lake ecosystems using a library of process-based knowledge for modeling population dynamics. The experimental results identify the optimal design decisions regarding the learning algorithm. The results also show that the proposed ensembles yield significantly more accurate predictions of population dynamics as compared to individual process-based models. Finally, while their predictive performance is comparable to the one of ensembles obtained with the state-of-the-art methods of bagging and boosting, they are substantially more efficient.

  9. Cross-Layer Modeling Framework for Energy-Efficient Resilience

    Science.gov (United States)

    2014-04-01

    Kevin Skadron and Gu-Yeon Wei, with IBM T. J. Watson Research Center (Yorktown Heights, NY), IBM Austin Research Laboratory (Austin, TX), and university partners. Abstract fragment: the first two models are developed around basic analytical formalisms based on Amdahl's Law, while the Qute model was developed at IBM Research [3]. Figure 1 (not reproduced) depicts the integrated, cross-layer system modeling concept pursued in the IBM-led project titled "Efficient ...".

  10. Modeling and energy efficiency optimization of belt conveyors

    International Nuclear Information System (INIS)

    Zhang, Shirong; Xia, Xiaohua

    2011-01-01

    Highlights: → We take an optimization approach to improve the operation efficiency of belt conveyors. → An analytical energy model, originating from ISO 5048, is proposed. → Off-line and on-line parameter estimation schemes are investigated. → In a case study, six optimization problems are formulated with solutions in simulation. - Abstract: The improvement of the energy efficiency of belt conveyor systems can be achieved at equipment and operation levels. Specifically, variable speed control, an equipment level intervention, is recommended to improve operation efficiency of belt conveyors. However, the current implementations mostly focus on lower level control loops without operational considerations at the system level. This paper intends to take a model based optimization approach to improve the efficiency of belt conveyors at the operational level. An analytical energy model, originating from ISO 5048, is first proposed, which lumps all the parameters into four coefficients. Subsequently, off-line and on-line parameter estimation schemes are applied to identify the new energy model. Simulation results are presented for the estimates of the four coefficients. Finally, optimization is done to achieve the best operation efficiency of belt conveyors under various constraints. Six optimization problems of a typical belt conveyor system are formulated, with solutions in simulation for a case study.
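    The ISO 5048-derived basis functions are not reproduced in the abstract; the sketch below only illustrates the off-line estimation step, fitting a four-coefficient, linear-in-parameters power model by least squares. The chosen basis functions [V, Q, Q^2/V, 1] and the synthetic data are assumptions.

    # Off-line estimation sketch: fit a four-coefficient, linear-in-parameters
    # energy model P(V, Q) by least squares. Basis functions, "true" coefficient
    # values, and operating data are all assumptions for illustration.
    import numpy as np

    rng = np.random.default_rng(1)
    V = rng.uniform(2.0, 5.0, 200)        # belt speed [m/s] (synthetic)
    Q = rng.uniform(300.0, 1500.0, 200)   # material feed rate [t/h] (synthetic)

    true_theta = np.array([12.0, 0.08, 2.5e-4, 30.0])     # assumed "true" values
    Phi = np.column_stack([V, Q, Q**2 / V, np.ones_like(V)])
    P = Phi @ true_theta + rng.normal(0.0, 1.0, V.size)   # measured power [kW]

    theta_hat, *_ = np.linalg.lstsq(Phi, P, rcond=None)
    print("estimated coefficients:", np.round(theta_hat, 4))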

  11. Classifying multi-model wheat yield impact response surfaces showing sensitivity to temperature and precipitation change

    NARCIS (Netherlands)

    Fronzek, Stefan; Pirttioja, Nina; Carter, Timothy R.; Bindi, Marco; Hoffmann, Holger; Palosuo, Taru; Ruiz-Ramos, Margarita; Tao, Fulu; Trnka, Miroslav; Acutis, Marco; Asseng, Senthold; Baranowski, Piotr; Basso, Bruno; Bodin, Per; Buis, Samuel; Cammarano, Davide; Deligios, Paola; Destain, Marie France; Dumont, Benjamin; Ewert, Frank; Ferrise, Roberto; François, Louis; Gaiser, Thomas; Hlavinka, Petr; Jacquemin, Ingrid; Kersebaum, Kurt Christian; Kollas, Chris; Krzyszczak, Jaromir; Lorite, Ignacio J.; Minet, Julien; Minguez, M.I.; Montesino, Manuel; Moriondo, Marco; Müller, Christoph; Nendel, Claas; Öztürk, Isik; Perego, Alessia; Rodríguez, Alfredo; Ruane, Alex C.; Ruget, Françoise; Sanna, Mattia; Semenov, Mikhail A.; Slawinski, Cezary; Stratonovitch, Pierre; Supit, Iwan; Waha, Katharina; Wang, Enli; Wu, Lianhai; Zhao, Zhigan; Rötter, Reimund P.

    2018-01-01

    Crop growth simulation models can differ greatly in their treatment of key processes and hence in their response to environmental conditions. Here, we used an ensemble of 26 process-based wheat models applied at sites across a European transect to compare their sensitivity to changes in

  12. Classifying multi-model wheat yield impact response surfaces showing sensitivity to temperature and precipitation change

    Czech Academy of Sciences Publication Activity Database

    Fronzek, S.; Pirttioja, N. K.; Carter, T. R.; Bindi, M.; Hoffmann, H.; Palosuo, T.; Ruiz-Ramos, M.; Tao, F.; Trnka, Miroslav; Acutis, M.; Asseng, S.; Baranowski, P.; Basso, B.; Bodin, P.; Buis, S.; Cammarano, D.; Deligios, P.; Destain, M. F.; Dumont, B.; Ewert, F.; Ferrise, R.; Francois, L.; Gaiser, T.; Hlavinka, Petr; Jacquemin, I.; Kersebaum, K. C.; Kollas, C.; Krzyszczak, J.; Lorite, I. J.; Minet, J.; Ines Minguez, M.; Montesino, M.; Moriondo, M.; Mueller, C.; Nendel, C.; Öztürk, I.; Perego, A.; Rodriguez, A.; Ruane, A. C.; Ruget, F.; Sanna, M.; Semenov, M. A.; Slawinski, C.; Stratonovitch, P.; Supit, I.; Waha, K.; Wang, E.; Wu, L.; Zhao, Z.; Rötter, R.

    2018-01-01

    Vol. 159, Jan (2018), pp. 209-224. ISSN 0308-521X. Keywords: climate-change; crop models; probabilistic assessment; simulating impacts; british catchments; uncertainty; europe; productivity; calibration; adaptation; Classification; Climate change; Crop model; Ensemble; Sensitivity analysis; Wheat. Impact factor: 2.571, year: 2016

  13. Predictive Modeling of Influenza Shows the Promise of Applied Evolutionary Biology.

    Science.gov (United States)

    Morris, Dylan H; Gostic, Katelyn M; Pompei, Simone; Bedford, Trevor; Łuksza, Marta; Neher, Richard A; Grenfell, Bryan T; Lässig, Michael; McCauley, John W

    2018-02-01

    Seasonal influenza is controlled through vaccination campaigns. Evolution of influenza virus antigens means that vaccines must be updated to match novel strains, and vaccine effectiveness depends on the ability of scientists to predict nearly a year in advance which influenza variants will dominate in upcoming seasons. In this review, we highlight a promising new surveillance tool: predictive models. Based on data-sharing and close collaboration between the World Health Organization and academic scientists, these models use surveillance data to make quantitative predictions regarding influenza evolution. Predictive models demonstrate the potential of applied evolutionary biology to improve public health and disease control. We review the state of influenza predictive modeling and discuss next steps and recommendations to ensure that these models deliver upon their considerable biomedical promise. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Energy Efficient Wireless Sensor Network Modelling Based on Complex Networks

    Directory of Open Access Journals (Sweden)

    Lin Xiao

    2016-01-01

    Full Text Available The power consumption and energy efficiency of wireless sensor networks are significant problems in Internet of Things networks. In this paper, we consider network topology optimization based on complex network theory to solve the energy efficiency problem of WSNs. We propose an energy-efficient model of WSN according to the small-world principle from complex networks. A small-world network has clustering features similar to those of regular networks but also the small average path length of random networks, which can be utilized to optimize the energy efficiency of the whole network. An optimal number of sink nodes for the WSN topology is proposed for optimizing energy efficiency. Hierarchical clustering analysis is then applied to cluster the sensor nodes and to select the sink nodes as cluster heads. In addition, an update method is proposed to determine a replacement sink node when a sink node dies, which would otherwise paralyze the network. Simulation results verify the energy efficiency of the proposed model and validate the updating of the sink nodes to ensure the normal operation of the WSN.
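    As a rough illustration of the clustering step only (not the paper's full protocol), the sketch below applies hierarchical clustering to synthetic node coordinates and picks the node nearest each cluster centroid as a sink/cluster head. The node layout, linkage method, and number of clusters are assumptions.

    # Sketch of the sink-selection step: hierarchical clustering of sensor-node
    # positions, then the node closest to each cluster centroid is taken as the
    # sink (cluster head). Node layout and the number of clusters are assumptions.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(2)
    nodes = rng.uniform(0.0, 100.0, size=(60, 2))     # sensor positions [m]

    Z = linkage(nodes, method="ward")
    labels = fcluster(Z, t=4, criterion="maxclust")   # 4 clusters -> 4 sink nodes

    sinks = []
    for k in np.unique(labels):
        members = np.where(labels == k)[0]
        centroid = nodes[members].mean(axis=0)
        sinks.append(members[np.argmin(np.linalg.norm(nodes[members] - centroid, axis=1))])
    print("sink node indices:", sinks)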

  15. A ranking efficiency unit by restrictions using DEA models

    Science.gov (United States)

    Arsad, Roslah; Abdullah, Mohammad Nasir; Alias, Suriana

    2014-12-01

    In this paper, a comparison of the efficiency of shares of companies listed on Bursa Malaysia was made through the application of the estimation method of Data Envelopment Analysis (DEA). In this study, DEA is used to measure the efficiency of shares of listed companies in Bursa Malaysia in terms of financial performance. It is believed that only good financial performers will give a good return to investors in the long run. The main objectives were to compute the relative efficiency scores of the shares in Bursa Malaysia and to rank the shares based on the Balance Index with regard to relative efficiency. The analysis employed Alirezaee and Afsharian's model, in which the original Charnes, Cooper and Rhodes (CCR) model with the assumption of constant returns to scale (CRS) still holds. This method of ranking the relative efficiency of decision-making units (DMUs) was augmented by using the Balance Index. From the results, the companies recommended for investors based on the ranking were NATWIDE, YTL and MUDA. These companies were the top three efficient companies with good performance in 2011, whereas in 2012 the top three companies were NATWIDE, MUDA and BERNAS.
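    For readers unfamiliar with the underlying linear program, the sketch below solves the input-oriented CCR envelopment model under constant returns to scale for each DMU, using synthetic data. The Balance Index ranking of Alirezaee and Afsharian is not reproduced; only the basic CCR efficiency score is shown.

    # Sketch of the input-oriented CCR (constant returns to scale) envelopment LP,
    # solved once per DMU with scipy. Data are synthetic; the Balance Index ranking
    # used in the paper is not reproduced here.
    import numpy as np
    from scipy.optimize import linprog

    X = np.array([[20., 30., 40., 20., 10.],     # inputs  (m x n): rows = inputs
                  [ 5.,  8.,  8.,  4.,  2.]])
    Y = np.array([[10., 18., 20., 12.,  4.]])    # outputs (s x n): rows = outputs
    m, n = X.shape
    s = Y.shape[0]

    def ccr_efficiency(j0):
        # variables z = [theta, lambda_1..lambda_n], minimize theta
        c = np.r_[1.0, np.zeros(n)]
        A_ub = np.zeros((m + s, n + 1))
        b_ub = np.zeros(m + s)
        A_ub[:m, 0] = -X[:, j0]          # sum_j lam_j x_ij <= theta * x_i,j0
        A_ub[:m, 1:] = X
        A_ub[m:, 1:] = -Y                # sum_j lam_j y_rj >= y_r,j0
        b_ub[m:] = -Y[:, j0]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] + [(0, None)] * n)
        return res.x[0]

    for j in range(n):
        print(f"DMU {j}: efficiency = {ccr_efficiency(j):.3f}")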

  16. Metabolic modeling of energy balances in Mycoplasma hyopneumoniae shows that pyruvate addition increases growth rate.

    Science.gov (United States)

    Kamminga, Tjerko; Slagman, Simen-Jan; Bijlsma, Jetta J E; Martins Dos Santos, Vitor A P; Suarez-Diez, Maria; Schaap, Peter J

    2017-10-01

    Mycoplasma hyopneumoniae is cultured on a large scale to produce antigen for inactivated whole-cell vaccines against respiratory disease in pigs. However, the fastidious nutrient requirements of this minimal bacterium and its low growth rate make it challenging to reach sufficient biomass yield for antigen production. In this study, we sequenced the genome of M. hyopneumoniae strain 11 and constructed a high-quality constraint-based genome-scale metabolic model of 284 chemical reactions and 298 metabolites. We validated the model with time-series data of duplicate fermentation cultures to aim for an integrated model describing the dynamic profiles measured in fermentations. The model predicted that 84% of cellular energy in a standard M. hyopneumoniae cultivation was used for non-growth associated maintenance and only 16% of cellular energy was used for growth and growth-associated maintenance. Following a cycle of model-driven experimentation in dedicated fermentation experiments, we were able to increase the fraction of cellular energy used for growth through pyruvate addition to the medium. This increase in turn led to an increase in growth rate and a 2.3 times increase in the total biomass concentration reached after 3-4 days of fermentation, enhancing the productivity of the overall process. The model presented provides a solid basis to understand and further improve M. hyopneumoniae fermentation processes. Biotechnol. Bioeng. 2017;114: 2339-2347. © 2017 Wiley Periodicals, Inc.
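    The 84%/16% energy split quoted above can be illustrated with a Pirt-style maintenance relation, q_ATP = m_ATP + mu/Y_ATP, where m_ATP is non-growth-associated maintenance and Y_ATP the growth yield on ATP. The sketch below is only illustrative; the parameter values are assumptions tuned to mirror the quoted split, not values taken from the genome-scale model.

    # Back-of-envelope sketch of the energy split reported in the abstract, using a
    # Pirt-style relation q_ATP = m_ATP + mu / Y_ATP. Parameter values are assumed
    # purely for illustration; the genome-scale model is not reproduced here.
    def energy_fractions(mu, m_atp=2.0, y_atp=0.04):
        """mu: growth rate [1/h]; m_atp: NGAM [mmol ATP/gDW/h]; y_atp: gDW/mmol ATP."""
        growth_atp = mu / y_atp                 # growth + growth-associated maintenance
        total_atp = m_atp + growth_atp
        return growth_atp / total_atp, m_atp / total_atp

    for label, mu in [("baseline", 0.015), ("with pyruvate (assumed faster growth)", 0.03)]:
        f_growth, f_ngam = energy_fractions(mu)
        print(f"{label}: growth fraction = {f_growth:.0%}, maintenance fraction = {f_ngam:.0%}")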

  17. The speed of memory errors shows the influence of misleading information: Testing the diffusion model and discrete-state models.

    Science.gov (United States)

    Starns, Jeffrey J; Dubé, Chad; Frelinger, Matthew E

    2018-05-01

    In this report, we evaluate single-item and forced-choice recognition memory for the same items and use the resulting accuracy and reaction time data to test the predictions of discrete-state and continuous models. For the single-item trials, participants saw a word and indicated whether or not it was studied on a previous list. The forced-choice trials had one studied and one non-studied word that both appeared in the earlier single-item trials and both received the same response. Thus, forced-choice trials always had one word with a previous correct response and one with a previous error. Participants were asked to select the studied word regardless of whether they previously called both words "studied" or "not studied." The diffusion model predicts that forced-choice accuracy should be lower when the word with a previous error had a fast versus a slow single-item RT, because fast errors are associated with more compelling misleading memory retrieval. The two-high-threshold (2HT) model does not share this prediction because all errors are guesses, so error RT is not related to memory strength. A low-threshold version of the discrete state approach predicts an effect similar to the diffusion model, because errors are a mixture of responses based on misleading retrieval and guesses, and the guesses should tend to be slower. Results showed that faster single-trial errors were associated with lower forced-choice accuracy, as predicted by the diffusion and low-threshold models. Copyright © 2018 Elsevier Inc. All rights reserved.

  18. Energetics and efficiency of a molecular motor model

    DEFF Research Database (Denmark)

    Fogedby, Hans C.; Svane, Axel

    2013-01-01

    The energetics and efficiency of a linear molecular motor model proposed by Mogilner et al. (Phys. Lett. 237, 297 (1998)) is analyzed from an analytical point of view. The model, which is based on protein friction with a track, is described by coupled Langevin equations for the motion in combination with coupled master equations for the ATP hydrolysis. Here the energetics and efficiency of the motor are addressed using a many-body scheme with focus on the efficiency at maximum power (EMP). It is found that the EMP is reduced from about 10 pct in a heuristic description of the motor to about 1 per mille when incorporating the full motor dynamics, owing to the strong dissipation associated with the motor action.

  19. Optimization of MC model of HPGe detector efficiency

    International Nuclear Information System (INIS)

    Kovacik, A.

    2009-01-01

    Peak efficiency of an HPGe detector is limited by several factors, such as the probability of interaction of gamma quanta in the detector, the sample geometry, the measurement geometry and the energy of the emitted gamma quanta. Computer modelling using the Monte Carlo method is one of the options for evaluating the efficiency of the detector for an arbitrary shape and composition of the sample. The accuracy of this method is limited by the accuracy with which the size and composition of all detector materials are known, including the dead layers in the active volume of the germanium crystal, a quantity which cannot be directly measured. This work, among other things, investigated the effect of dead-layer thickness on peak efficiency and estimated the layers' size by comparing modelled and experimentally determined efficiencies. (author)

  20. Model based design of efficient power take-off systems for wave energy converters

    DEFF Research Database (Denmark)

    Hansen, Rico Hjerm; Andersen, Torben Ole; Pedersen, Henrik C.

    2011-01-01

    an essential part of the PTO, being the only technology having the required force densities. The focus of this paper is to show the achievable efficiency of a PTO system based on a conventional hydro-static transmission topology. The design is performed using a model based approach. Generic component models...

  1. Simple solvable energy-landscape model that shows a thermodynamic phase transition and a glass transition.

    Science.gov (United States)

    Naumis, Gerardo G

    2012-06-01

    When a liquid melt is cooled, a glass or phase transition can be obtained depending on the cooling rate. Yet, this behavior has not been clearly captured in energy-landscape models. Here, a model is provided in which two key ingredients are considered in the landscape, metastable states and their multiplicity. Metastable states are considered as in two level system models. However, their multiplicity and topology allows a phase transition in the thermodynamic limit for slow cooling, while a transition to the glass is obtained for fast cooling. By solving the corresponding master equation, the minimal speed of cooling required to produce the glass is obtained as a function of the distribution of metastable states.
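    A minimal sketch of the cooling-rate idea (not the paper's landscape or its master equation): a single relaxation mode with an Arrhenius rate under linear cooling either tracks equilibrium (slow cooling) or freezes into a glass-like nonequilibrium occupancy (fast cooling). All parameter values are assumptions.

    # Minimal sketch (not the paper's model): one relaxation mode with an Arrhenius
    # rate under linear cooling. Fast cooling freezes the population away from
    # equilibrium (glass-like); slow cooling stays near equilibrium. All parameter
    # values are assumptions.
    import numpy as np
    from scipy.integrate import solve_ivp

    def p_eq(T, dE=1.0):
        return 1.0 / (1.0 + np.exp(-dE / T))      # equilibrium ground-state occupancy

    def rate(T, E_a=3.0, k0=50.0):
        return k0 * np.exp(-E_a / T)               # Arrhenius relaxation rate

    def run(cooling_rate, T0=2.0, Tf=0.05):
        def rhs(t, y):
            T = max(T0 - cooling_rate * t, Tf)
            return [-rate(T) * (y[0] - p_eq(T))]
        t_end = (T0 - Tf) / cooling_rate
        sol = solve_ivp(rhs, (0.0, t_end), [p_eq(T0)], method="Radau")
        return sol.y[0, -1]

    for r in (1e-3, 1e-1, 10.0):
        print(f"cooling rate {r:g}: final occupancy {run(r):.3f} "
              f"(equilibrium at final T: {p_eq(0.05):.3f})")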

  2. Modeled hydrologic metrics show links between hydrology and the functional composition of stream assemblages.

    Science.gov (United States)

    Patrick, Christopher J; Yuan, Lester L

    2017-07-01

    Flow alteration is widespread in streams, but current understanding of the effects of differences in flow characteristics on stream biological communities is incomplete. We tested hypotheses about the effect of variation in hydrology on stream communities by using generalized additive models to relate watershed information to the values of different flow metrics at gauged sites. Flow models accounted for 54-80% of the spatial variation in flow metric values among gauged sites. We then used these models to predict flow metrics in 842 ungauged stream sites in the mid-Atlantic United States that were sampled for fish, macroinvertebrates, and environmental covariates. Fish and macroinvertebrate assemblages were characterized in terms of a suite of metrics that quantified aspects of community composition, diversity, and functional traits that were expected to be associated with differences in flow characteristics. We related modeled flow metrics to biological metrics in a series of stressor-response models. Our analyses identified both drying and base flow instability as explaining 30-50% of the observed variability in fish and invertebrate community composition. Variations in community composition were related to variations in the prevalence of dispersal traits in invertebrates and trophic guilds in fish. The results demonstrate that we can use statistical models to predict hydrologic conditions at bioassessment sites, which, in turn, we can use to estimate relationships between flow conditions and biological characteristics. This analysis provides an approach to quantify the effects of spatial variation in flow metrics using readily available biomonitoring data. © 2017 by the Ecological Society of America.

  3. Efficient Adoption and Assessment of Multiple Process Improvement Reference Models

    Directory of Open Access Journals (Sweden)

    Simona Jeners

    2013-06-01

    Full Text Available A variety of reference models such as CMMI, COBIT or ITIL support IT organizations in improving their processes. These process improvement reference models (IRMs) cover different domains such as IT development, IT services or IT governance but also share some similarities. As there are organizations that address multiple domains and need to coordinate their processes in their improvement, we present MoSaIC, an approach to support organizations in efficiently adopting and conforming to multiple IRMs. Our solution realizes a semantic integration of IRMs based on common meta-models. The resulting IRM integration model enables organizations to efficiently implement and assess multiple IRMs and to benefit from synergy effects.

  4. Simulating the market for automotive fuel efficiency: The SHRSIM model

    Energy Technology Data Exchange (ETDEWEB)

    Greene, D.L.

    1987-02-01

    This report describes a computer model for simulating the effects of uncertainty about future fuel prices and competitors' behavior on the market shares of an automobile manufacturer who is considering introducing technology to increase fuel efficiency. Starting with an initial sales distribution, a pivot-point multinomial logit technique is used to adjust market shares based on changes in the present value of the added fuel efficiency. These shifts are random because the model generates random fuel price projections using parameters supplied by the user. The user also controls the timing of introduction and obsolescence of technology. While the model was designed with automobiles in mind, it has more general applicability to energy-using durable goods. The model is written in IBM BASIC for an IBM PC and compiled using the Microsoft QuickBASIC (trademark of the Microsoft Corporation) compiler.
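    A sketch of the pivot-point multinomial logit step only: baseline shares are reweighted by the exponential of a price coefficient times the change in present value of fuel savings. The baseline shares, fuel-saving deltas, and coefficient are illustrative assumptions, and the fuel-price Monte Carlo of the report is not reproduced.

    # Sketch of the pivot-point multinomial logit step: shift baseline market shares
    # in proportion to exp(beta * change in present value of fuel savings). The
    # baseline shares, fuel-saving deltas, and beta are illustrative assumptions.
    import numpy as np

    def pivot_logit_shares(base_shares, delta_value, beta=0.0004):
        """delta_value: change in present value of fuel savings per vehicle [$]."""
        w = np.asarray(base_shares) * np.exp(beta * np.asarray(delta_value))
        return w / w.sum()

    base = [0.35, 0.40, 0.25]            # manufacturer A, B, C baseline shares
    delta = [600.0, 0.0, 0.0]            # only A adds the fuel-efficiency technology
    print("new shares:", np.round(pivot_logit_shares(base, delta), 3))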

  5. Efficient family-based model checking via variability abstractions

    DEFF Research Database (Denmark)

    Dimovski, Aleksandar; Al-Sibahi, Ahmad Salim; Brabrand, Claus

    2016-01-01

    Many software systems are variational: they can be configured to meet diverse sets of requirements. They can produce a (potentially huge) number of related systems, known as products or variants, by systematically reusing common parts. For variational models (variational systems or families...... with the abstract model checking of the concrete high-level variational model. This allows the use of Spin with all its accumulated optimizations for efficient verification of variational models without any knowledge about variability. We have implemented the transformations in a prototype tool, and we illustrate...

  6. Efficient Bayesian Estimation and Combination of GARCH-Type Models

    NARCIS (Netherlands)

    D. David (David); L.F. Hoogerheide (Lennart)

    2010-01-01

    This paper proposes an up-to-date review of estimation strategies available for the Bayesian inference of GARCH-type models. The emphasis is put on a novel efficient procedure named AdMitIS. The methodology automatically constructs a mixture of Student-t distributions as an approximation

  7. Energy efficiency in nonprofit agencies: Creating effective program models

    Energy Technology Data Exchange (ETDEWEB)

    Brown, M.A.; Prindle, B.; Scherr, M.I.; White, D.L.

    1990-08-01

    Nonprofit agencies are a critical component of the health and human services system in the US. It has been clearly demonstrated by programs that offer energy efficiency services to nonprofits that, with minimal investment, they can reduce their energy consumption by ten to thirty percent. This energy conservation potential motivated the Department of Energy and Oak Ridge National Laboratory to conceive a project to help states develop energy efficiency programs for nonprofits. The purpose of the project was two-fold: (1) to analyze existing programs to determine which design and delivery mechanisms are particularly effective, and (2) to create model programs for states to follow in tailoring their own plans for helping nonprofits with energy efficiency programs. Twelve existing programs were reviewed, and three model programs were devised and put into operation. The model programs provide various forms of financial assistance to nonprofits and serve as a source of information on energy efficiency as well. After examining the results from the model programs (which are still on-going) and from the existing programs, several "replicability factors" were developed for use in the implementation of programs by other states. These factors -- some concrete and practical, others more generalized -- serve as guidelines for states devising programs based on their own particular needs and resources.

  8. Downscaling CMIP5 climate models shows increased tropical cyclone activity over the 21st century.

    Science.gov (United States)

    Emanuel, Kerry A

    2013-07-23

    A recently developed technique for simulating large [O(10^4)] numbers of tropical cyclones in climate states described by global gridded data is applied to simulations of historical and future climate states simulated by six Coupled Model Intercomparison Project 5 (CMIP5) global climate models. Tropical cyclones downscaled from the climate of the period 1950-2005 are compared with those of the 21st century in simulations that stipulate that the radiative forcing from greenhouse gases increases by over preindustrial values. In contrast to storms that appear explicitly in most global models, the frequency of downscaled tropical cyclones increases during the 21st century in most locations. The intensity of such storms, as measured by their maximum wind speeds, also increases, in agreement with previous results. Increases in tropical cyclone activity are most prominent in the western North Pacific, but are evident in other regions except for the southwestern Pacific. The increased frequency of events is consistent with increases in a genesis potential index based on monthly mean global model output. These results are compared and contrasted with other inferences concerning the effect of global warming on tropical cyclones.

  9. An Efficient Framework Model for Optimizing Routing Performance in VANETs

    Science.gov (United States)

    Zulkarnain, Zuriati Ahmad; Subramaniam, Shamala

    2018-01-01

    Routing in Vehicular Ad hoc Networks (VANET) is complicated by the highly dynamic mobility of vehicles. The efficiency of a routing protocol is influenced by a number of factors such as network density, bandwidth constraints, traffic load, and mobility patterns, resulting in frequent changes in network topology. Therefore, Quality of Service (QoS) is strongly needed to enhance the capability of the routing protocol and improve the overall network performance. In this paper, we introduce a statistical framework model to address the problem of optimizing routing configuration parameters in Vehicle-to-Vehicle (V2V) communication. Our framework solution is based on the utilization of the network resources to reflect the current state of the network and to balance the trade-off between frequent changes in network topology and the QoS requirements. It consists of three stages: a simulation network stage used to execute different urban scenarios, a function stage used as a competitive approach to aggregate the weighted cost of the factors into a single value, and an optimization stage used to evaluate the communication cost and to obtain the optimal configuration based on the competitive cost. The simulation results show significant performance improvement in terms of the Packet Delivery Ratio (PDR), Normalized Routing Load (NRL), Packet Loss (PL), and End-to-End Delay (E2ED). PMID:29462884

  10. Young children with Down syndrome show normal development of circadian rhythms, but poor sleep efficiency: a cross-sectional study across the first 60 months of life.

    Science.gov (United States)

    Fernandez, Fabian; Nyhuis, Casandra C; Anand, Payal; Demara, Bianca I; Ruby, Norman F; Spanò, Goffredina; Clark, Caron; Edgin, Jamie O

    2017-05-01

    To evaluate sleep consolidation and circadian activity rhythms in infants and toddlers with Down syndrome (DS) under light and socially entrained conditions within a familiar setting. Given previous human and animal data suggesting intact circadian regulation of melatonin across the day and night, it was hypothesized that behavioral indices of circadian rhythmicity would likewise be intact in the sample with DS. A cross-sectional study of 66 infants and young children with DS, aged 5-67 months, and 43 typically developing age-matched controls. Sleep and measures of circadian robustness or timing were quantified using continuous in-home actigraphy recordings performed over seven days. Circadian robustness was quantified via time series analysis of rest-activity patterns. Phase markers of circadian timing were calculated alongside these values. Sleep efficiency was also estimated based on the actigraphy recordings. This study provided further evidence that general sleep quality is poor in infants and toddlers with DS, a population that has sleep apnea prevalence as high as 50% during the preschool years. Despite poor sleep quality, circadian rhythm and phase were preserved in children with DS and displayed similar developmental trajectories in cross-sectional comparisons with a typically developing (TD) cohort. In line with past work, lower sleep efficiency scores were quantified in the group with DS relative to TD children. Infants born with DS exhibited the worst sleep fragmentation; however, in both groups, sleep efficiency and consolidation increased across age. Three circadian phase markers showed that 35% of the recruitment sample with DS was phase-advanced to an earlier morning schedule, suggesting significant within-group variability in the timing of their daily activity rhythms. Circadian rhythms of wake and sleep are robust in children born with DS. The present results suggest that sleep fragmentation and any resultant cognitive deficits are likely not

  11. Animal Models for Muscular Dystrophy Show Different Patterns of Sarcolemmal Disruption

    OpenAIRE

    Straub, Volker; Rafael, Jill A.; Chamberlain, Jeffrey S.; Campbell, Kevin P.

    1997-01-01

    Genetic defects in a number of components of the dystrophin–glycoprotein complex (DGC) lead to distinct forms of muscular dystrophy. However, little is known about how alterations in the DGC are manifested in the pathophysiology present in dystrophic muscle tissue. One hypothesis is that the DGC protects the sarcolemma from contraction-induced damage. Using tracer molecules, we compared sarcolemmal integrity in animal models for muscular dystrophy and in muscular dystrophy patient samples. Ev...

  12. The PROMETHEUS bundled payment experiment: slow start shows problems in implementing new payment models.

    Science.gov (United States)

    Hussey, Peter S; Ridgely, M Susan; Rosenthal, Meredith B

    2011-11-01

    Fee-for-service payment is blamed for many of the problems observed in the US health care system. One of the leading alternative payment models proposed in the Affordable Care Act of 2010 is bundled payment, which provides payment for all of the care a patient needs over the course of a defined clinical episode, instead of paying for each discrete service. We evaluated the initial "road test" of PROMETHEUS Payment, one of several bundled payment pilot projects. The project has faced substantial implementation challenges, and none of the three pilot sites had executed contracts or made bundled payments as of May 2011. The pilots have taken longer to set up than expected, primarily because of the complexity of the payment model, the fact that it builds on the existing fee-for-service payment system, and other complexities of health care. Participants continue to see promise and value in the bundled payment model, but the pilot results suggest that the desired benefits of this and other payment reforms may take time and considerable effort to materialize.

  13. A Murine Model of Candida glabrata Vaginitis Shows No Evidence of an Inflammatory Immunopathogenic Response.

    Directory of Open Access Journals (Sweden)

    Evelyn E Nash

    Full Text Available Candida glabrata is the second most common organism isolated from women with vulvovaginal candidiasis (VVC), particularly in women with uncontrolled diabetes mellitus. However, mechanisms involved in the pathogenesis of C. glabrata-associated VVC are unknown and have not been studied at any depth in animal models. The objective of this study was to evaluate host responses to infection following efforts to optimize a murine model of C. glabrata VVC. For this, various designs were evaluated for consistent experimental vaginal colonization (i.e., type 1 and type 2 diabetic mice, exogenous estrogen, varying inocula, and co-infection with C. albicans). Upon model optimization, vaginal fungal burden and polymorphonuclear neutrophil (PMN) recruitment were assessed longitudinally over 21 days post-inoculation, together with vaginal concentrations of IL-1β, S100A8 alarmin, lactate dehydrogenase (LDH), and in vivo biofilm formation. Consistent and sustained vaginal colonization with C. glabrata was achieved in estrogenized streptozotocin-induced type 1 diabetic mice. Vaginal PMN infiltration was consistently low, with IL-1β, S100A8, and LDH concentrations similar to uninoculated mice. Biofilm formation was not detected in vivo, and co-infection with C. albicans did not induce synergistic immunopathogenic effects. These data suggest that experimental vaginal colonization of C. glabrata is not associated with an inflammatory immunopathogenic response or biofilm formation.

  14. An endogenous green fluorescent protein-photoprotein pair in Clytia hemisphaerica eggs shows co-targeting to mitochondria and efficient bioluminescence energy transfer.

    Science.gov (United States)

    Fourrage, Cécile; Swann, Karl; Gonzalez Garcia, Jose Raul; Campbell, Anthony K; Houliston, Evelyn

    2014-04-09

    Green fluorescent proteins (GFPs) and calcium-activated photoproteins of the aequorin/clytin family, now widely used as research tools, were originally isolated from the hydrozoan jellyfish Aequorea victoria. It is known that bioluminescence resonance energy transfer (BRET) is possible between these proteins to generate flashes of green light, but the native function and significance of this phenomenon are unclear. Using the hydrozoan Clytia hemisphaerica, we characterized differential expression of three clytin and four GFP genes in distinct tissues at larva, medusa and polyp stages, corresponding to the major in vivo sites of bioluminescence (medusa tentacles and eggs) and fluorescence (these sites plus medusa manubrium, gonad and larval ectoderms). Potential physiological functions at these sites include UV protection of stem cells for fluorescence alone, and prey attraction and camouflaging counter-illumination for bioluminescence. Remarkably, the clytin2 and GFP2 proteins, co-expressed in eggs, show particularly efficient BRET and co-localize to mitochondria, owing to parallel acquisition by the two genes of mitochondrial targeting sequences during hydrozoan evolution. Overall, our results indicate that endogenous GFPs and photoproteins can play diverse roles even within one species and provide a striking and novel example of protein coevolution, which could have facilitated efficient or brighter BRET flashes through mitochondrial compartmentalization.

  15. Global thermal niche models of two European grasses show high invasion risks in Antarctica.

    Science.gov (United States)

    Pertierra, Luis R; Aragón, Pedro; Shaw, Justine D; Bergstrom, Dana M; Terauds, Aleks; Olalla-Tárraga, Miguel Ángel

    2017-07-01

    The two non-native grasses that have established long-term populations in Antarctica (Poa pratensis and Poa annua) were studied from a global multidimensional thermal niche perspective to address the biological invasion risk to Antarctica. These two species exhibit contrasting introduction histories and reproductive strategies and represent two referential case studies of biological invasion processes. We used a multistep process with a range of species distribution modelling techniques (ecological niche factor analysis, multidimensional envelopes, distance/entropy algorithms) together with a suite of thermoclimatic variables, to characterize the potential ranges of these species. Their native bioclimatic thermal envelopes in Eurasia, together with the different naturalized populations across continents, were then compared. The potential niche of P. pratensis was wider at the cold extremes; however, P. annua life history attributes enable it to be a more successful colonizer. We observe that particularly cold summers are a key aspect of the unique Antarctic environment. In consequence, ruderals such as P. annua can quickly expand under such harsh conditions, whereas the more stress-tolerant P. pratensis endures and persists through steady growth. Compiled data on human pressure at the Antarctic Peninsula allowed us to provide site-specific biosecurity risk indicators. We conclude that several areas across the region are vulnerable to invasions from these and other similar species. This can only be visualized in species distribution models (SDMs) when accounting for founder populations that reveal nonanalogous conditions. Results reinforce the need for strict management practices to minimize introductions. Furthermore, our novel set of temperature-based bioclimatic GIS layers for ice-free terrestrial Antarctica provide a mechanism for regional and global species distribution models to be built for other potentially invasive species. © 2017 John Wiley & Sons Ltd.

  16. Modeling high-efficiency quantum dot sensitized solar cells.

    Science.gov (United States)

    González-Pedro, Victoria; Xu, Xueqing; Mora-Seró, Iván; Bisquert, Juan

    2010-10-26

    With energy conversion efficiencies growing continuously, quantum dot sensitized solar cells (QDSCs) are attracting increasing interest, but a complete model for these devices is lacking. Here, we compile the latest developments in this kind of cell in order to attain high-efficiency QDSCs and model their performance. CdSe QDs have been grown directly on a TiO2 surface by successive ionic layer adsorption and reaction to ensure high QD loading. ZnS coating and previous growth of CdS were analyzed. Polysulfide electrolyte and Cu2S counterelectrodes were used to provide higher photocurrents and fill factors, FF. Incident photon-to-current efficiency peaks as high as 82%, under full 1 sun illumination, were obtained, which practically overcomes the photocurrent limitation commonly observed in QDSCs. A high power conversion efficiency of up to 3.84% under full 1 sun illumination (Voc = 0.538 V, Jsc = 13.9 mA/cm2, FF = 0.51) was achieved, and the characterization and modeling carried out indicate that recombination has to be overcome for further improvement of QDSCs.
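
    As a quick consistency check on the figures quoted above, the power conversion efficiency follows from PCE = Voc · Jsc · FF / Pin. The short sketch below assumes a standard 1 sun input of 100 mW/cm2 and reproduces the reported value to within rounding of the quoted parameters.

    # Minimal check of the reported cell parameters; Pin = 100 mW/cm^2 (1 sun) is assumed.
    v_oc = 0.538   # V
    j_sc = 13.9    # mA/cm^2
    ff = 0.51      # fill factor
    p_in = 100.0   # mW/cm^2

    p_max = v_oc * j_sc * ff        # maximum power density, mW/cm^2
    pce = 100.0 * p_max / p_in      # percent
    print(f"P_max = {p_max:.2f} mW/cm^2, PCE = {pce:.2f} %")   # ~3.8 %, close to the reported 3.84 %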

  17. Efficient Parallel Statistical Model Checking of Biochemical Networks

    Directory of Open Access Journals (Sweden)

    Paolo Ballarini

    2009-12-01

    Full Text Available We consider the problem of verifying stochastic models of biochemical networks against behavioral properties expressed in temporal logic terms. Exact probabilistic verification approaches, such as CSL/PCTL model checking, are undermined by a huge computational demand that rules them out for most real case studies. Less demanding approaches, such as statistical model checking, estimate the likelihood that a property is satisfied by sampling executions out of the stochastic model. We propose a methodology for efficiently estimating the likelihood that an LTL property P holds for a stochastic model of a biochemical network. As with other statistical verification techniques, the methodology we propose uses a stochastic simulation algorithm for generating execution samples; however, three key aspects improve its efficiency: first, the sample generation is driven by on-the-fly verification of P, which results in optimal overall simulation time. Second, the confidence interval estimation for the probability that P holds is based on an efficient variant of the Wilson method, which ensures faster convergence. Third, the whole methodology is designed in a parallel fashion, and a prototype software tool has been implemented that performs the sampling/verification process in parallel over an HPC architecture.
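
    For orientation, the sketch below shows a plain Wilson score interval wrapped in a simple sequential stopping rule (stop once the interval is narrower than a target width); the paper uses an efficient variant of the Wilson method, so this is an illustration of the idea rather than the authors' exact procedure.

    # Plain Wilson score interval plus an assumed width-based stopping rule.
    import math

    def wilson_interval(successes, n, z=1.96):
        """95% Wilson score interval for the probability that the property holds."""
        p_hat = successes / n
        denom = 1.0 + z * z / n
        center = (p_hat + z * z / (2 * n)) / denom
        half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z * z / (4 * n * n))
        return center - half, center + half

    def estimate(sample_run, target_width=0.02, max_runs=100_000):
        """sample_run() returns True/False from one simulation run checked on the fly."""
        successes = n = 0
        lo, hi = 0.0, 1.0
        while n < max_runs:
            successes += bool(sample_run())
            n += 1
            lo, hi = wilson_interval(successes, n)
            if n > 100 and hi - lo < target_width:
                break
        return successes / n, (lo, hi), n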

  18. ASIC1a Deficient Mice Show Unaltered Neurodegeneration in the Subacute MPTP Model of Parkinson Disease.

    Science.gov (United States)

    Komnig, Daniel; Imgrund, Silke; Reich, Arno; Gründer, Stefan; Falkenburger, Björn H

    2016-01-01

    Inflammation contributes to the death of dopaminergic neurons in Parkinson disease and can be accompanied by acidification of extracellular pH, which may activate acid-sensing ion channels (ASIC). Accordingly, amiloride, a non-selective inhibitor of ASIC, was protective in an acute 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) mouse model of Parkinson disease. To complement these findings we determined MPTP toxicity in mice deficient for ASIC1a, the most common ASIC isoform in neurons. MPTP was applied i.p. in doses of 30 mg per kg on five consecutive days. We determined the number of dopaminergic neurons in the substantia nigra, assayed by stereological counting 14 days after the last MPTP injection, the number of Nissl positive neurons in the substantia nigra, and the concentration of catecholamines in the striatum. There was no difference between ASIC1a-deficient mice and wildtype controls. We are therefore not able to confirm that ASIC1a is involved in MPTP toxicity. The difference might relate to the subacute MPTP model we used, which more closely resembles the pathogenesis of Parkinson disease, or to further targets of amiloride.

  19. Progesterone treatment shows benefit in a pediatric model of moderate to severe bilateral brain injury.

    Directory of Open Access Journals (Sweden)

    Rastafa I Geddes

    Full Text Available Controlled cortical impact (CCI) models in adult and aged Sprague-Dawley (SD) rats have been used extensively to study medial prefrontal cortex (mPFC) injury and the effects of post-injury progesterone treatment, but the hormone's effects after traumatic brain injury (TBI) in juvenile animals have not been determined. In the present proof-of-concept study we investigated whether progesterone had neuroprotective effects in a pediatric model of moderate to severe bilateral brain injury. Twenty-eight-day-old (PND 28) male Sprague Dawley rats received sham (n = 24) or CCI (n = 47) injury and were given progesterone (4, 8, or 16 mg/kg per 100 g body weight) or vehicle injections on post-injury days (PID) 1-7, were subjected to behavioral testing from PID 9-27, and were analyzed for lesion size at PID 28. The 8 and 16 mg/kg doses of progesterone were observed to be most beneficial in reducing the effect of CCI on lesion size and behavior in PND 28 male SD rats. Our findings suggest that a midline CCI injury to the frontal cortex will reliably produce a moderate TBI comparable to what is seen in the adult male rat and that progesterone can ameliorate the injury-induced deficits.

  20. ASIC1a Deficient Mice Show Unaltered Neurodegeneration in the Subacute MPTP Model of Parkinson Disease.

    Directory of Open Access Journals (Sweden)

    Daniel Komnig

    Full Text Available Inflammation contributes to the death of dopaminergic neurons in Parkinson disease and can be accompanied by acidification of extracellular pH, which may activate acid-sensing ion channels (ASIC). Accordingly, amiloride, a non-selective inhibitor of ASIC, was protective in an acute 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) mouse model of Parkinson disease. To complement these findings we determined MPTP toxicity in mice deficient for ASIC1a, the most common ASIC isoform in neurons. MPTP was applied i.p. in doses of 30 mg per kg on five consecutive days. We determined the number of dopaminergic neurons in the substantia nigra, assayed by stereological counting 14 days after the last MPTP injection, the number of Nissl positive neurons in the substantia nigra, and the concentration of catecholamines in the striatum. There was no difference between ASIC1a-deficient mice and wildtype controls. We are therefore not able to confirm that ASIC1a is involved in MPTP toxicity. The difference might relate to the subacute MPTP model we used, which more closely resembles the pathogenesis of Parkinson disease, or to further targets of amiloride.

  1. Modeling of detective quantum efficiency considering scatter-reduction devices

    Energy Technology Data Exchange (ETDEWEB)

    Park, Ji Woong; Kim, Dong Woon; Kim, Ho Kyung [Pusan National University, Busan (Korea, Republic of)

    2016-05-15

    The reduction in signal-to-noise ratio (SNR) caused by scattered photons cannot be restored and has therefore become a severe issue in digital mammography, so antiscatter grids are typically used. The scatter-cleanup performance of various scatter-reduction devices, such as air gaps, linear (1D) or cellular (2D) grids, and slot-scanning devices, has been extensively investigated by many research groups, and a digital mammography system with the slot-scanning geometry is now commercially available. In this study, we theoretically investigate the effect of scattered photons on the detective quantum efficiency (DQE) performance of digital mammography detectors by using the cascaded-systems analysis (CSA) approach. We present a simple DQE formalism describing digital mammography detector systems equipped with scatter-reduction devices by regarding the scattered photons as additive noise sources. The low-frequency drop (LFD) in the measured MTF increased with increasing PMMA thickness, and the amount of LFD indicated the corresponding scatter fraction (SF). The estimated SFs were 0.13, 0.21, and 0.29 for PMMA thicknesses of 10, 20, and 30 mm, respectively. While the solid line describing the measured MTF for 0 mm of PMMA was the result of a least-squares regression fit using Eq. (14), the other lines simply resulted from multiplying that fit by the (1-SF) estimated from the LFDs in the measured MTFs. The measured spectral noise-power densities over the entire frequency range did not change much with increasing scatter. On the other hand, the calculation results showed that the spectral noise-power densities increased with increasing scatter. This discrepancy may be explained by the fact that the model developed in this study does not account for the changes in x-ray interaction parameters for varying spectral shapes due to beam hardening with increasing PMMA thicknesses.
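
    The (1-SF) scaling described above can be sketched directly: the scatter fraction is read off from the low-frequency drop of the measured MTF, and the scatter-free MTF fit is rescaled by (1-SF). The analytic MTF form and the numerical values below are placeholders, not the fit from Eq. (14).

    # Illustrative (1 - SF) rescaling of a scatter-free MTF fit; values are placeholders.
    import numpy as np

    def mtf_fit(f, a=0.12, b=0.02):
        """Assumed analytic form standing in for the scatter-free MTF fit."""
        return 1.0 / (1.0 + a * f + b * f**2)

    f = np.linspace(0.0, 10.0, 201)   # spatial frequency, cycles/mm
    sf = 0.21                          # scatter fraction estimated from the LFD (20 mm PMMA case)

    mtf_no_scatter = mtf_fit(f)
    mtf_with_scatter = (1.0 - sf) * mtf_no_scatter   # low-frequency drop of magnitude SF
    print("MTF near zero frequency with scatter:", round(mtf_with_scatter[0], 2))   # ~0.79 = 1 - SF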

  2. A zebrafish model of glucocorticoid resistance shows serotonergic modulation of the stress response

    Directory of Open Access Journals (Sweden)

    Brian eGriffiths

    2012-10-01

    Full Text Available One function of glucocorticoids is to restore homeostasis after an acute stress response by providing negative feedback to stress circuits in the brain. Loss of this negative feedback leads to elevated physiological stress and may contribute to depression, anxiety and post-traumatic stress disorder. We investigated the early, developmental effects of glucocorticoid signaling deficits on stress physiology and related behaviors using a mutant zebrafish, grs357, with non-functional glucocorticoid receptors. These mutants are morphologically inconspicuous and adult-viable. A previous study of adult grs357 mutants showed loss of glucocorticoid-mediated negative feedback and elevated physiological and behavioral stress markers. Already at five days post-fertilization, mutant larvae had elevated whole body cortisol, increased expression of pro-opiomelanocortin (POMC), the precursor of adrenocorticotropic hormone (ACTH), and failed to show normal suppression of stress markers after dexamethasone treatment. Mutant larvae had larger auditory-evoked startle responses compared to wildtype sibling controls (grwt), despite having lower spontaneous activity levels. Fluoxetine (Prozac) treatment in mutants decreased startle responding and increased spontaneous activity, making them behaviorally similar to wildtype. This result mirrors known effects of selective serotonin reuptake inhibitors (SSRIs) in modifying glucocorticoid signaling and alleviating stress disorders in human patients. Our results suggest that larval grs357 zebrafish can be used to study behavioral, physiological and molecular aspects of stress disorders. Most importantly, interactions between glucocorticoid and serotonin signaling appear to be highly conserved among vertebrates, suggesting deep homologies at the neural circuit level and opening up new avenues for research into psychiatric conditions.

  3. Efficient anisotropic wavefield extrapolation using effective isotropic models

    KAUST Repository

    Alkhalifah, Tariq Ali

    2013-06-10

    Isotropic wavefield extrapolation is more efficient than anisotropic extrapolation, and this is especially true when the anisotropy of the medium is tilted (from the vertical). We use the kinematics of the wavefield, appropriately represented in the high-frequency asymptotic approximation by the eikonal equation, to develop effective isotropic models, which are used to efficiently and approximately extrapolate anisotropic wavefields using the isotropic, relatively cheaper, operators. These effective velocity models are source dependent and tend to embed the anisotropy in the inhomogeneity. Though this isotropically generated wavefield theoretically shares the same kinematic behavior as that of the first-arrival anisotropic wavefield, it also has the ability to include all the arrivals resulting from a complex wavefield propagation. In fact, the effective models reduce to the original isotropic model in the limit of isotropy, and thus, the difference between the effective model and, for example, the vertical velocity depends on the strength of anisotropy. For reverse time migration (RTM), effective models are developed for the source and receiver fields by computing the traveltime for a plane wave source stretching along our source and receiver lines in a delayed shot migration implementation. Applications to the BP TTI model demonstrate the effectiveness of the approach.

  4. The technology gap and efficiency measure in WEC countries: Application of the hybrid meta frontier model

    International Nuclear Information System (INIS)

    Chiu, Yung-Ho; Lee, Jen-Hui; Lu, Ching-Cheng; Shyu, Ming-Kuang; Luo, Zhengying

    2012-01-01

    This study develops a hybrid meta-frontier DEA model in which inputs are distinguished into radial inputs that change proportionally and non-radial inputs that change non-proportionally, in order to measure the technical efficiency and technology gap ratios (TGR) of four different regions: Asia, Africa, America, and Europe. This paper selects 87 countries that are members of the World Energy Council from 2005 to 2007. The input variables are industry and population, while the output variables are gross domestic product (GDP) and the amount of fossil-fuel CO2 emissions. The results show that countries' efficiency rankings within their own regions vary considerably. In view of the technology gap ratio, Europe is the most efficient region, while over the same period Asia has lower efficiency than the other regions. Finally, regions with larger industry (or GDP) did not necessarily have higher efficiency from 2005 to 2007, and higher CO2 emissions or population did not necessarily mean lower efficiency. In addition, Brazil is not an OECD member, but among emerging countries it is more efficient than other OECD members. OECD countries are more efficient than non-OECD countries, and Europe controls CO2 emissions better than Asia. If non-OECD or Asian countries are to reach the best efficiency score, they should try to control CO2 emissions. - Highlights: ► A new meta-frontier model for evaluating efficiency and technology gap ratios. ► Higher CO2 emissions do not necessarily mean lower efficiency, as in Europe. ► Asia's output and CO2 emissions increased simultaneously while its efficiency declined. ► Non-OECD or Asian countries should control CO2 emissions to reach the best efficiency scores.

  5. An Efficient Dynamic Trust Evaluation Model for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Zhengwang Ye

    2017-01-01

    Full Text Available Trust evaluation is an effective method to detect malicious nodes and ensure security in wireless sensor networks (WSNs). In this paper, an efficient dynamic trust evaluation model (DTEM) for WSNs is proposed, which implements accurate, efficient, and dynamic trust evaluation by dynamically adjusting the weights of direct trust and indirect trust and the parameters of the update mechanism. To achieve accurate trust evaluation, the direct trust is calculated considering multitrust, including communication trust, data trust, and energy trust, with a punishment factor and regulating function. The indirect trust is evaluated conditionally from the trusted recommendations of a third party. Moreover, the integrated trust is measured by assigning dynamic weights to direct trust and indirect trust and combining them. Finally, we propose an update mechanism using a sliding window based on an induced ordered weighted averaging operator to enhance flexibility. The parameters and the number of interaction history windows can be adapted dynamically according to the actual needs of the network to realize dynamic updating of the direct trust value. Simulation results indicate that the proposed model is an efficient, dynamic, and attack-resistant trust evaluation model. Compared with existing approaches, the proposed dynamic trust model performs better in defending against multiple malicious attacks.

  6. Investigation on the Efficiency of Financial Companies in Malaysia with Data Envelopment Analysis Model

    Science.gov (United States)

    Weng Siew, Lam; Kah Fai, Liew; Weng Hoe, Lam

    2018-04-01

    Financial ratios and risk are important indicators for evaluating the financial performance or efficiency of companies. Therefore, financial ratios and risk factors need to be taken into consideration when evaluating the efficiency of companies with a Data Envelopment Analysis (DEA) model. In a DEA model, the efficiency of a company is measured as the ratio of the sum of weighted outputs to the sum of weighted inputs. The objective of this paper is to propose a DEA model that incorporates financial ratios and a risk factor in evaluating and comparing the efficiency of the financial companies in Malaysia. In this study, the listed financial companies in Malaysia from 2004 to 2015 are investigated. The results of this study show that AFFIN, ALLIANZ, APEX, BURSA, HLCAP, HLFG, INSAS, LPI, MNRB, OSK, PBBANK, RCECAP and TA are ranked as efficient companies. This implies that these efficient companies have utilized their resources or inputs optimally to generate the maximum outputs. This study is significant because it helps to identify the efficient financial companies as well as determine the optimal input and output weights in maximizing the efficiency of financial companies in Malaysia.
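
    The weighted output/input ratio above is usually computed by solving one small linear program per company (the CCR model in multiplier form). The sketch below, with made-up input/output data in place of the paper's financial ratios and risk measures, shows one common way to set that up.

    # Hedged sketch of the CCR DEA model in multiplier form with illustrative data.
    import numpy as np
    from scipy.optimize import linprog

    X = np.array([[3.0, 5.0], [4.0, 3.0], [6.0, 7.0]])   # inputs, one row per company
    Y = np.array([[2.0, 1.0], [3.0, 2.0], [4.0, 3.0]])   # outputs, one row per company

    def ccr_efficiency(o, X, Y):
        n, m = X.shape                                    # companies, inputs
        s = Y.shape[1]                                    # outputs
        c = np.concatenate([-Y[o], np.zeros(m)])          # maximize weighted outputs of company o
        A_ub = np.hstack([Y, -X])                         # weighted outputs <= weighted inputs for all
        b_ub = np.zeros(n)
        A_eq = np.concatenate([np.zeros(s), X[o]]).reshape(1, -1)   # weighted inputs of o fixed to 1
        b_eq = [1.0]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, None)] * (s + m), method="highs")
        return -res.fun                                   # efficiency score in (0, 1]

    for o in range(len(X)):
        print(f"company {o}: efficiency = {ccr_efficiency(o, X, Y):.3f}")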

  7. Increased Statistical Efficiency in a Lognormal Mean Model

    Directory of Open Access Journals (Sweden)

    Grant H. Skrepnek

    2014-01-01

    Full Text Available Within the context of clinical and other scientific research, a substantial need exists for an accurate determination of the point estimate in a lognormal mean model, given that highly skewed data are often present. As such, logarithmic transformations are often advocated to achieve the assumptions of parametric statistical inference. Despite this, existing approaches that utilize only a sample's mean and variance may not necessarily yield the most efficient estimator. The current investigation developed and tested an improved efficient point estimator for a lognormal mean by capturing more complete information via the sample's coefficient of variation. Results of an empirical simulation study across varying sample sizes and population standard deviations indicated relative improvements in efficiency of up to 129.47 percent compared to the usual maximum likelihood estimator and up to 21.33 absolute percentage points above the efficient estimator presented by Shen and colleagues (2006). The relative efficiency of the proposed estimator increased particularly as a function of decreasing sample size and increasing population standard deviation.
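
    The comparison setup can be reproduced in outline: the usual maximum likelihood estimator of a lognormal mean is exp(mu_hat + sigma_hat^2/2), computed from the log-scale sample moments. The sketch below contrasts it with the plain sample mean by simulation; the improved coefficient-of-variation-based estimator from the article is not reproduced here.

    # Simulation comparing the plain sample mean with the lognormal MLE of the mean.
    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma, n, reps = 1.0, 1.5, 25, 20_000
    true_mean = np.exp(mu + sigma**2 / 2)

    mse_naive = mse_mle = 0.0
    for _ in range(reps):
        x = rng.lognormal(mean=mu, sigma=sigma, size=n)
        logs = np.log(x)
        mle = np.exp(logs.mean() + logs.var() / 2)   # var() with 1/n, i.e. the MLE variance
        mse_naive += (x.mean() - true_mean) ** 2
        mse_mle += (mle - true_mean) ** 2

    print("relative efficiency (MSE ratio, sample mean / MLE):", round(mse_naive / mse_mle, 2))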

  8. Metabolic remodeling agents show beneficial effects in the dystrophin-deficient mdx mouse model

    Directory of Open Access Journals (Sweden)

    Jahnke Vanessa E

    2012-08-01

    Full Text Available Abstract Background: Duchenne muscular dystrophy is a genetic disease involving severe muscle wasting that is characterized by cycles of muscle degeneration/regeneration and culminates in early death in affected boys. Mitochondria are presumed to be involved in the regulation of myoblast proliferation/differentiation; enhancing mitochondrial activity with exercise mimetics (AMPK and PPAR-delta agonists) increases muscle function and inhibits muscle wasting in healthy mice. We therefore asked whether metabolic remodeling agents that increase mitochondrial activity would improve muscle function in mdx mice. Methods: Twelve-week-old mdx mice were treated with two different metabolic remodeling agents (GW501516 and AICAR), separately or in combination, for 4 weeks. Extensive systematic behavioral, functional, histological, biochemical, and molecular tests were conducted to assess the drugs' effects. Results: We found a gain in body and muscle weight in all treated mice. Histologic examination showed a decrease in muscle inflammation and in the number of fibers with central nuclei and an increase in fibers with peripheral nuclei, with significantly fewer activated satellite cells and regenerating fibers. Together with an inhibition of FoXO1 signaling, these results indicated that the treatments reduced ongoing muscle damage. Conclusions: The three treatments produced significant improvements in disease phenotype, including an increase in overall behavioral activity and significant gains in forelimb and hind limb strength. Our findings suggest that triggering mitochondrial activity with exercise mimetics improves muscle function in dystrophin-deficient mdx mice.

  9. Male Wistar rats show individual differences in an animal model of conformity.

    Science.gov (United States)

    Jolles, Jolle W; de Visser, Leonie; van den Bos, Ruud

    2011-09-01

    Conformity refers to the act of changing one's behaviour to match that of others. Recent studies in humans have shown that individual differences exist in conformity and that these differences are related to differences in neuronal activity. To understand the neuronal mechanisms in more detail, animal tests to assess conformity are needed. Here, we used a test of conformity in rats that has previously been evaluated in female, but not male, rats and assessed the nature of individual differences in conformity. Male Wistar rats were given the opportunity to learn that two diets differed in palatability. They were subsequently exposed to a demonstrator that had consumed the less palatable food. Thereafter, they were exposed to the same diets again. Just like female rats, male rats decreased their preference for the more palatable food after interaction with demonstrator rats that had eaten the less palatable food. Individual differences existed for this shift, which were only weakly related to an interaction between their own initial preference and the amount consumed by the demonstrator rat. The data show that this conformity test in rats is a promising tool to study the neurobiology of conformity.

  10. Modeling serotonin uptake in the lung shows endothelial transporters dominate over cleft permeation

    Science.gov (United States)

    Bassingthwaighte, James B.

    2013-01-01

    A four-region (capillary plasma, endothelium, interstitial fluid, cell) multipath model was configured to describe the kinetics of blood-tissue exchange for small solutes in the lung, accounting for regional flow heterogeneity, permeation of cell membranes and through interendothelial clefts, and intracellular reactions. Serotonin uptake data from the Multiple indicator dilution “bolus sweep” experiments of Rickaby and coworkers (Rickaby DA, Linehan JH, Bronikowski TA, Dawson CA. J Appl Physiol 51: 405–414, 1981; Rickaby DA, Dawson CA, and Linehan JH. J Appl Physiol 56: 1170–1177, 1984) and Malcorps et al. (Malcorps CM, Dawson CA, Linehan JH, Bronikowski TA, Rickaby DA, Herman AG, Will JA. J Appl Physiol 57: 720–730, 1984) were analyzed to distinguish facilitated transport into the endothelial cells (EC) and the inhibition of tracer transport by nontracer serotonin in the bolus of injectate from the free uninhibited permeation through the clefts into the interstitial fluid space. The permeability-surface area products (PS) for serotonin via the inter-EC clefts were ∼0.3 ml·g−1·min−1, low compared with the transporter-mediated maximum PS of 13 ml·g−1·min−1 (with Km = ∼0.3 μM and Vmax = ∼4 nmol·g−1·min−1). The estimates of serotonin PS values for EC transporters from their multiple data sets were similar and were influenced only modestly by accounting for the cleft permeability in parallel. The cleft PS estimates in these Ringer-perfused lungs are less than half of those for anesthetized dogs (Yipintsoi T. Circ Res 39: 523–531, 1976) with normal hematocrits, but are compatible with passive noncarrier-mediated transport observed later in the same laboratory (Dawson CA, Linehan JH, Rickaby DA, Bronikowski TA. Ann Biomed Eng 15: 217–227, 1987; Peeters FAM, Bronikowski TA, Dawson CA, Linehan JH, Bult H, Herman AG. J Appl Physiol 66: 2328–2337, 1989) The identification and quantitation of the cleft pathway conductance from these
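
    The transporter numbers quoted above are mutually consistent under the standard saturable-uptake relation PS_carrier = Vmax/(Km + C), which reduces to Vmax/Km at tracer concentrations; the sketch below checks this and adds the passive cleft pathway in parallel. The relation and the parallel-pathway sum are textbook assumptions, not details taken from the study's model code.

    # Consistency check of the quoted Vmax, Km and cleft PS values (saturable uptake assumed).
    v_max = 4.0       # nmol · g^-1 · min^-1
    k_m = 0.3         # µM, i.e. nmol · ml^-1
    ps_cleft = 0.3    # ml · g^-1 · min^-1, passive paracellular (cleft) pathway

    def ps_carrier(c_serotonin_um):
        """Effective transporter PS at a given nontracer serotonin concentration (µM)."""
        return v_max / (k_m + c_serotonin_um)

    print("tracer-limit carrier PS:", round(ps_carrier(0.0), 1))       # ~13.3, matching the ~13 quoted
    print("total PS at 1 µM serotonin:", round(ps_carrier(1.0) + ps_cleft, 2))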

  11. Efficient mixed integer programming models for family scheduling problems

    Directory of Open Access Journals (Sweden)

    Meng-Ye Lin

    Full Text Available This paper proposes several mixed integer programming models, which incorporate optimal sequence properties, to solve single machine family scheduling problems. The objectives are total weighted completion time and maximum lateness, respectively. Experimental results indicate that there are remarkable improvements in computational efficiency when optimal sequence properties are included in the models. For the total weighted completion time problems, the best model solves all of the problems with up to 30 jobs within 5 s, all 50-job problems within 4 min, and about 1/3 of the 75-job to 100-job problems within 1 h. For the maximum lateness problems, the best model solves almost all the problems with up to 30 jobs within 11 min and around half of the 50-job to 100-job problems within 1 h. Keywords: Family scheduling, Sequence independent setup, Total weighted completion time, Maximum lateness
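
    The two objectives are easy to state for a fixed job order; the sketch below evaluates them for a given sequence with sequence-independent family setup times (illustrative data, not the paper's MIP formulation).

    # Evaluate total weighted completion time and maximum lateness for one sequence.
    jobs = [  # (job id, family, processing time, weight, due date) -- illustrative
        ("j1", "A", 4, 2, 10),
        ("j2", "A", 3, 1, 12),
        ("j3", "B", 5, 3, 9),
    ]
    setup = {"A": 2, "B": 3}   # setup incurred whenever the family changes

    def evaluate(sequence):
        t, prev_family = 0, None
        twct, lmax = 0, float("-inf")
        for job, family, p, w, d in sequence:
            if family != prev_family:
                t += setup[family]            # sequence-independent family setup
            t += p
            twct += w * t                     # total weighted completion time
            lmax = max(lmax, t - d)           # maximum lateness
            prev_family = family
        return twct, lmax

    print(evaluate(jobs))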

  12. Investigating market efficiency through a forecasting model based on differential equations

    Science.gov (United States)

    de Resende, Charlene C.; Pereira, Adriano C. M.; Cardoso, Rodrigo T. N.; de Magalhães, A. R. Bosco

    2017-05-01

    A new differential equation based model for stock price trend forecasting is proposed as a tool to investigate efficiency in an emerging market. Its predictive power was shown, statistically, to be higher than that of a completely random model, signaling the presence of arbitrage opportunities. Conditions for accuracy to be enhanced are investigated, and application of the model as part of a trading strategy is discussed.

  13. Coupling R and PHREEQC: Efficient Programming of Geochemical Models

    OpenAIRE

    De Lucia, Marco; Kühn, Michael

    2013-01-01

    We present a new interface between the geochemical simulator PHREEQC and the open source language R. It represents a tool to flexibly and efficiently program and automate every aspect of geochemical modelling. The interface is particularly helpful for setting up and running large numbers of simulations and visualising the results. Profiting from the numberless high-quality R extension packages, performing sensitivity analysis or Monte Carlo simulations also becomes straightforward. Further, an algorithm to speedup ...

  14. Detailed models for timing and efficiency in resistive plate chambers

    CERN Document Server

    AUTHOR|(CDS)2067623; Lippmann, Christian

    2003-01-01

    We discuss detailed models for detector physics processes in Resistive Plate Chambers, in particular including the effect of attachment on the avalanche statistics. In addition, we present analytic formulas for average charges and intrinsic RPC time resolution. Using a Monte Carlo simulation including all the steps from primary ionization to the front-end electronics we discuss the dependence of efficiency and time resolution on parameters like primary ionization, avalanche statistics and threshold.

  15. Efficient image duplicated region detection model using sequential block clustering

    Czech Academy of Sciences Publication Activity Database

    Sekeh, M. A.; Maarof, M. A.; Rohani, M. F.; Mahdian, Babak

    2013-01-01

    Roč. 10, č. 1 (2013), s. 73-84 ISSN 1742-2876 Institutional support: RVO:67985556 Keywords : Image forensic * Copy–paste forgery * Local block matching Subject RIV: IN - Informatics, Computer Science Impact factor: 0.986, year: 2013 http://library.utia.cas.cz/separaty/2013/ZOI/mahdian-efficient image duplicated region detection model using sequential block clustering.pdf

  16. Policy modeling for energy efficiency improvement in US industry

    International Nuclear Information System (INIS)

    Worrell, Ernst; Price, Lynn; Ruth, Michael

    2001-01-01

    We are at the beginning of a process of evaluating and modeling the contribution of policies to improve energy efficiency. Three recent policy studies trying to assess the impact of energy efficiency policies in the United States are reviewed. The studies represent an important step in the analysis of climate change mitigation strategies. All studies model the estimated policy impact, rather than the policy itself. Often the policy impacts are based on assumptions, as the effects of a policy are not certain. Most models only incorporate economic (or price) tools, which recent studies have proven to be insufficient to estimate the impacts, costs and benefits of mitigation strategies. The reviewed studies are a first effort to capture the effects of non-price policies. The studies contribute to a better understanding of the role of policies in improving energy efficiency and mitigating climate change. All policy scenarios result in substantial energy savings compared to the baseline scenario used, as well as substantial net benefits to the U.S. economy.

  17. Efficient Stochastic Inversion Using Adjoint Models and Kernel-PCA

    Energy Technology Data Exchange (ETDEWEB)

    Thimmisetty, Charanraj A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing; Zhao, Wenju [Florida State Univ., Tallahassee, FL (United States). Dept. of Scientific Computing; Chen, Xiao [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing; Tong, Charles H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing; White, Joshua A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Atmospheric, Earth and Energy Division

    2017-10-18

    Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the ‘nonlinear’ mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, which is characterized by a tight coupling between the gradient-based Langevin Markov Chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). This approach addresses the ‘curse-of-dimensionality’ via KPCA to identify a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We will demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model.
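
    The dimension-reduction step can be illustrated with off-the-shelf tools: kernel PCA compresses a high-dimensional parameter field into a few latent coordinates, the forward model is evaluated on the reconstructed field, and sampling takes place in the low-dimensional feature space. The sketch below substitutes a plain random-walk Metropolis sampler for the gradient-based LMCMC and uses a toy forward model and synthetic data, so it only shows the structure of the approach.

    # KPCA feature space + random-walk Metropolis (toy stand-in for the LMCMC step).
    import numpy as np
    from sklearn.decomposition import KernelPCA

    rng = np.random.default_rng(1)
    prior_fields = rng.normal(size=(200, 50))          # prior draws of a 50-dim parameter field
    kpca = KernelPCA(n_components=3, kernel="rbf", gamma=0.02, fit_inverse_transform=True)
    kpca.fit(prior_fields)

    def forward(field):                                # toy forward model producing 5 observables
        return field[:5] ** 2

    data = forward(prior_fields[0]) + 0.05 * rng.normal(size=5)

    def log_post(z):
        field = kpca.inverse_transform(z.reshape(1, -1))[0]
        resid = forward(field) - data
        return -0.5 * np.sum(resid**2) / 0.05**2 - 0.5 * np.sum(z**2)   # Gaussian noise + latent prior

    z, samples = np.zeros(3), []
    lp = log_post(z)
    for _ in range(5000):                              # random-walk Metropolis in feature space
        z_new = z + 0.1 * rng.normal(size=3)
        lp_new = log_post(z_new)
        if np.log(rng.uniform()) < lp_new - lp:
            z, lp = z_new, lp_new
        samples.append(z.copy())
    print("posterior mean in feature space:", np.mean(samples[1000:], axis=0))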

  18. Economic efficiency versus social equality? The U.S. liberal model versus the European social model.

    Science.gov (United States)

    Navarro, Vicente; Schmitt, John

    2005-01-01

    This article begins by challenging the widely held view in neoliberal discourse that there is a necessary trade-off between higher efficiency and lower reduction of inequalities: the article empirically shows that the liberal, U.S. model has been less efficient economically (slower economic growth, higher unemployment) than the social model in existence in the European Union and in the majority of its member states. Based on the data presented, the authors criticize the adoption of features of the liberal model (such as deregulation of their labor markets, reduction of public social expenditures) by some European governments. The second section analyzes the causes for the slowdown of economic growth and the increase of unemployment in the European Union--that is, the application of monetarist and neoliberal policies in the institutional frame of the European Union, including the Stability Pact, the objectives and modus operandi of the European Central Bank, and the very limited resources available to the European Commission for stimulating and distributive functions. The third section details the reasons for these developments, including (besides historical considerations) the enormous influence of financial capital in the E.U. institutions and the very limited democracy. Proposals for change are included.

  19. Towards an efficient multiphysics model for nuclear reactor dynamics

    Directory of Open Access Journals (Sweden)

    Obaidurrahman K.

    2015-01-01

    Full Text Available The availability of fast computing resources nowadays has facilitated more in-depth modeling of complex engineering systems which involve strong multiphysics interactions. This multiphysics modeling is an important necessity in nuclear reactor safety studies, where efforts are being made worldwide to combine the knowledge from all associated disciplines in one place to accomplish the most realistic simulation of the phenomena involved. Along these lines, coupled modeling of nuclear reactor neutron kinetics, fuel heat transfer and coolant transport is now a regular practice for transient analysis of the reactor core. However, optimization between modeling accuracy and computational economy has always been a challenging task to ensure an adequate degree of reliability in such extensive numerical exercises. Complex reactor core modeling involves estimation of the evolving 3-D core thermal state, which in turn demands an expensive, detailed multichannel core thermal-hydraulics model. A novel approach of power-weighted coupling between core neutronics and thermal hydraulics presented in this work aims to reduce the bulk of core thermal calculations in core dynamics modeling to a significant extent without compromising the accuracy of computation. The coupled core model has been validated against a series of international benchmarks. The accuracy and computational efficiency of the proposed multiphysics model have been demonstrated by analyzing a reactivity-initiated transient.

  20. Hybrid Building Performance Simulation Models for Industrial Energy Efficiency Applications

    Directory of Open Access Journals (Sweden)

    Peter Smolek

    2018-06-01

    Full Text Available In the challenge of achieving environmental sustainability, industrial production plants, as large contributors to the overall energy demand of a country, are prime candidates for applying energy efficiency measures. A modelling approach using cubes is used to decompose a production facility into manageable modules. All aspects of the facility are considered, classified into the building, energy system, production and logistics. This approach leads to specific challenges for building performance simulations since all parts of the facility are highly interconnected. To meet this challenge, models for the building, thermal zones, energy converters and energy grids are presented and the interfaces to the production and logistics equipment are illustrated. The advantages and limitations of the chosen approach are discussed. In an example implementation, the feasibility of the approach and models is shown. Different scenarios are simulated to highlight the models and the results are compared.

  1. Modeling of hybrid vehicle fuel economy and fuel engine efficiency

    Science.gov (United States)

    Wu, Wei

    "Near-CV" (i.e., near-conventional vehicle) hybrid vehicles, with an internal combustion engine, and a supplementary storage with low-weight, low-energy but high-power capacity, are analyzed. This design avoids the shortcoming of the "near-EV" and the "dual-mode" hybrid vehicles that need a large energy storage system (in terms of energy capacity and weight). The small storage is used to optimize engine energy management and can provide power when needed. The energy advantage of the "near-CV" design is to reduce reliance on the engine at low power, to enable regenerative braking, and to provide good performance with a small engine. The fuel consumption of internal combustion engines, which might be applied to hybrid vehicles, is analyzed by building simple analytical models that reflect the engines' energy loss characteristics. Both diesel and gasoline engines are modeled. The simple analytical models describe engine fuel consumption at any speed and load point by describing the engine's indicated efficiency and friction. The engine's indicated efficiency and heat loss are described in terms of several easy-to-obtain engine parameters, e.g., compression ratio, displacement, bore and stroke. Engine friction is described in terms of parameters obtained by fitting available fuel measurements on several diesel and spark-ignition engines. The engine models developed are shown to conform closely to experimental fuel consumption and motored friction data. A model of the energy use of "near-CV" hybrid vehicles with different storage mechanism is created, based on simple algebraic description of the components. With powertrain downsizing and hybridization, a "near-CV" hybrid vehicle can obtain a factor of approximately two in overall fuel efficiency (mpg) improvement, without considering reductions in the vehicle load.

  2. The composite supply chain efficiency model: A case study of the Sishen-Saldanha supply chain

    Directory of Open Access Journals (Sweden)

    Leila L. Goedhals-Gerber

    2016-01-01

    Full Text Available As South Africa strives to be a major force in global markets, it is essential that South African supply chains achieve and maintain a competitive advantage. One approach to achieving this is to ensure that South African supply chains maximise their levels of efficiency. Consequently, the efficiency levels of South Africa's supply chains must be evaluated. The objective of this article is to propose a model that can assist South African industries in becoming internationally competitive by providing them with a tool for evaluating their levels of efficiency both as individual firms and as components in an overall supply chain. The Composite Supply Chain Efficiency Model (CSCEM) was developed to measure supply chain efficiency across supply chains using variables identified as problem areas experienced by South African supply chains. The CSCEM is tested in this article using the Sishen-Saldanha iron ore supply chain as a case study. The results indicate that all three links or nodes along the Sishen-Saldanha iron ore supply chain performed well. The average efficiency of the rail leg was 97.34%, while the average efficiencies of the mine and the port were 97% and 95.44%, respectively. The results also show that the CSCEM can be used by South African firms to measure their levels of supply chain efficiency. This article concludes with the benefits of the CSCEM.

  3. Determination of unique power conversion efficiency of solar cell showing hysteresis in the I-V curve under various light intensities.

    Science.gov (United States)

    Cojocaru, Ludmila; Uchida, Satoshi; Tamaki, Koichi; Jayaweera, Piyankarage V V; Kaneko, Shoji; Nakazaki, Jotaro; Kubo, Takaya; Segawa, Hiroshi

    2017-09-18

    Energy harvesting at low light intensities has recently attracted a great deal of attention for perovskite solar cells (PSCs), which are regarded as promising candidates for indoor applications. Anomalous hysteresis of PSCs is a complex issue for reliable evaluation of cell performance. In order to address these challenges, we constructed two new evaluation methods to determine the power conversion efficiencies (PCEs) of PSCs. The first setup is a solar simulator based on light emitting diodes (LEDs) allowing evaluation of solar cells over a wide range of light intensities, from 10^2 to 10^-3 mW·cm^-2. We found that, as an overestimation error, the measured PCEs of a dye-sensitized solar cell (DSC) and of PSCs increase dramatically under low light intensity conditions. Owing to the internal capacitance at the interfaces of hybrid solar cells, the current measured below 10^-2 mW·cm^-2 shows a constant value giving a high PCE, which is related to the capacitive current and the origin of the hysteresis. The second setup is a photovoltaic power analyzing system, designed for tracking the maximum power (Pmax) over time. The paper suggests the combination of the LED solar simulator and the Pmax tracking technique as a standard to evaluate the PCE of capacitive solar cells.

  4. KEEFEKTIFAN MODEL SHOW NOT TELL DAN MIND MAP PADA PEMBELAJARAN MENULIS TEKS EKSPOSISI BERDASARKAN MINAT PESERTA DIDIK KELAS X SMK

    Directory of Open Access Journals (Sweden)

    Wiwit Lili Sokhipah

    2015-03-01

    Full Text Available The aims of this study were (1) to determine the effectiveness of the show-not-tell model for teaching exposition text writing skills based on the interests of grade X vocational school (SMK) students, (2) to determine the effectiveness of the mind map model for teaching exposition text writing skills based on the interests of grade X SMK students, and (3) to determine the effectiveness of the interaction of show not tell and mind map in teaching exposition text writing skills based on the interests of grade X SMK students. This study used a quasi-experimental design (pretest-posttest control group design). In this design there were two experimental groups: application of the show-not-tell model in teaching exposition text writing to students with high interest, and application of the mind map model in teaching exposition text writing to students with low interest. The results were that (1) the show-not-tell model is effective for teaching exposition text writing to students with high interest, (2) the mind map model is effective for teaching exposition text writing to students with low interest, and (3) the show-not-tell model is more effective for teaching exposition text writing to students with high interest, whereas the mind map model is effective for teaching exposition text writing to students with low interest.

  5. A two-stage DEA model to evaluate sustainability and energy efficiency of tomato production

    Directory of Open Access Journals (Sweden)

    Hossein Raheli

    2017-12-01

    Full Text Available The aims of this study were to evaluate the sustainability and efficiency of tomato production and to investigate the determinants of inefficiency of tomato farming in the Marand region of East Azerbaijan province, Iran. For these purposes, a two-stage methodology was applied in which, for the first time, a fractional regression model (FRM) was employed in the second stage of the analysis: in the first stage, a nonparametric Data Envelopment Analysis (DEA) was used to analyze the efficiency of tomato production, and in the second stage, farm-specific variables such as education level, farmers' age, total land size and use of manure were entered into the fractional regression model to explain how these factors influence the efficiency of tomato farming. The results of the first stage showed that there are considerable differences between efficient and inefficient farmers in the studied area, the main differences being in the use of chemical fertilizers, biocides and water for irrigation. The results of the second stage revealed that farmers' age, education level and total land size positively affected efficiency in tomato production. Thus, better use of land, chemical fertilizers and water for irrigation, together with improving farmers' educational levels through literacy campaigns and land consolidation, would probably increase efficiency in the long term. Keywords: Data envelopment analysis, Energy efficiency, Fractional regression model, Tomato

  6. Efficient Parallel Execution of Event-Driven Electromagnetic Hybrid Models

    Energy Technology Data Exchange (ETDEWEB)

    Perumalla, Kalyan S [ORNL; Karimabadi, Dr. Homa [SciberQuest Inc.; Fujimoto, Richard [ORNL

    2007-01-01

    New discrete-event formulations of physics simulation models are emerging that can outperform traditional time-stepped models, especially in simulations containing multiple timescales. Detailed simulation of the Earth's magnetosphere, for example, requires execution of sub-models that operate at timescales that differ by orders of magnitude. In contrast to time-stepped simulation which requires tightly coupled updates to almost the entire system state at regular time intervals, the new discrete event simulation (DES) approaches help evolve the states of sub-models on relatively independent timescales. However, in contrast to relative ease of parallelization of time-stepped codes, the parallelization of DES-based models raises challenges with respect to their scalability and performance. One of the key challenges is to improve the computation granularity to offset synchronization and communication overheads within and across processors. Our previous work on parallelization was limited in scalability and runtime performance due to such challenges. Here we report on optimizations we performed on DES-based plasma simulation models to improve parallel execution performance. The mapping of the model to simulation processes is optimized via aggregation techniques, and the parallel runtime engine is optimized for communication and memory efficiency. The net result is the capability to simulate hybrid particle-in-cell (PIC) models with over 2 billion ion particles using 512 processors on supercomputing platforms.

  7. Development of multicriteria models to classify energy efficiency alternatives

    International Nuclear Information System (INIS)

    Neves, Luis Pires; Antunes, Carlos Henggeler; Dias, Luis Candido; Martins, Antonio Gomes

    2005-01-01

    This paper aims at describing a novel constructive approach to develop decision support models to classify energy efficiency initiatives, including traditional Demand-Side Management and Market Transformation initiatives, overcoming the limitations and drawbacks of Cost-Benefit Analysis. A multicriteria approach based on the ELECTRE-TRI method is used, focusing on four perspectives: - an independent Agency with the aim of promoting energy efficiency; - Distribution-only utilities under a regulated framework; - the Regulator; - Supply companies in a competitive liberalized market. These perspectives were chosen after a system analysis of the decision situation regarding the implementation of energy efficiency initiatives, looking for the main roles and power relations, with the purpose of structuring the decision problem by identifying the actors, the decision makers, the decision paradigm, and the relevant criteria. The multicriteria models developed allow different kinds of impacts to be considered while avoiding difficult measurements and unit conversions, owing to the nature of the multicriteria method chosen. The decision is then based on all the significant effects of the initiative, both positive and negative ones, including ancillary effects often forgotten in cost-benefit analysis. ELECTRE-TRI, like most multicriteria methods, gives the decision maker the ability to control the relevance each impact can have on the final decision. The decision support process encompasses a robustness analysis, which, together with good documentation of the parameters supplied to the model, should support sound decisions. The models were tested with a set of real-world initiatives and compared with possible decisions based on Cost-Benefit analysis.

  8. FASTSim: A Model to Estimate Vehicle Efficiency, Cost and Performance

    Energy Technology Data Exchange (ETDEWEB)

    Brooker, A.; Gonder, J.; Wang, L.; Wood, E.; Lopp, S.; Ramroth, L.

    2015-05-04

    The Future Automotive Systems Technology Simulator (FASTSim) is a high-level advanced vehicle powertrain systems analysis tool supported by the U.S. Department of Energy’s Vehicle Technologies Office. FASTSim provides a quick and simple approach to compare powertrains and estimate the impact of technology improvements on light- and heavy-duty vehicle efficiency, performance, cost, and battery life over real-world drive cycles. FASTSim’s calculation framework and balance among detail, accuracy, and speed enable it to simulate thousands of driven miles in minutes. The key components and vehicle outputs have been validated by comparing the model outputs to test data for many different vehicles to provide confidence in the results. A graphical user interface makes FASTSim easy and efficient to use. FASTSim is freely available for download from the National Renewable Energy Laboratory’s website (see www.nrel.gov/fastsim).

  9. Deformation data modeling through numerical models: an efficient method for tracking magma transport

    Science.gov (United States)

    Charco, M.; Gonzalez, P. J.; Galán del Sastre, P.

    2017-12-01

    Nowadays, the multivariate data collected at volcano observatories and robust physical models are becoming crucial for providing effective volcano monitoring. Nevertheless, forecasting volcanic eruptions is notoriously difficult. Within this frame, one of the most promising methods to evaluate volcano hazard is the use of surface ground deformation, and in the last decades many developments in the field of deformation modeling have been achieved. In particular, numerical modeling allows realistic media features such as topography and crustal heterogeneities to be included, although solving the inverse problem for near-real-time interpretations is still very time-consuming. Here, we present a method that can be used to efficiently estimate the location and evolution of magmatic sources based on real-time surface deformation data and Finite Element (FE) models. Generally, the search for the best-fitting magmatic (point) source(s) is conducted over an array of 3-D locations extending below a predefined volume region, and the Green functions for all the array components have to be precomputed. We propose an FE model for the pre-computation of Green functions in a mechanically heterogeneous domain, which eventually leads to a better description of the status of the volcanic area. The number of Green functions is reduced here to the number of observation points by using their reciprocity relationship. We present and test this methodology with an optimization method based on a Genetic Algorithm. Following synthetic and sensitivity tests to estimate the uncertainty of the model parameters, we apply the tool to magma tracking during the 2007 Kilauea volcano intrusion and eruption. We show how data inversion with numerical models can speed up source parameter estimation for a given volcano showing signs of unrest.

  10. An Application on Merton Model in the Non-efficient Market

    Science.gov (United States)

    Feng, Yanan; Xiao, Qingxian

    The Merton Model is one of the best-known credit risk models. It presumes that the only source of uncertainty in equity prices is the firm’s net asset value, but this market condition holds only when the market is efficient, which has often been ignored in modern research. Moreover, the original Merton Model assumes that in the event of default absolute priority holds, that renegotiation is not permitted, and that liquidation of the firm is costless; in addition, in the Merton Model and most of its modified versions the default boundary is assumed to be constant, which does not correspond with reality. These assumptions can reduce the predictive power of the model. In this paper, we extend some of the assumptions underlying the original model; the resulting model is essentially a modification of Merton’s model. Using stock data, we analyze the model in a non-efficient market. The result shows that the modified model can evaluate credit risk well in a non-efficient market.

  11. Study on the model SGL-83 filtration efficiency measuring device

    International Nuclear Information System (INIS)

    Gu Naigu; Lu Zhenlin; Wang Lihua; Sun Aie; Jiang Chunyu

    1985-01-01

    The advantages of the device are its simple structure, easy operation and stable performance. It provides multiple functions, such as generating both small-size aerosols and large-size dust, and conducting filtration efficiency and performance tests of dust protection equipment or filter media. Some new techniques have been employed to produce the test dust: an ultrasonic fog generator for producing aerosol and a silicon-controlled electromagnetic oscillation method for generating dust. Therefore the concentration and quantity of the test dust are stable and controllable. The device has a maximum detectable efficiency of 99.99% for aerosol and 99.5% for dust. Tests on various dust protection equipment and filter media showed the good stability and reproducibility of the device.

  12. Models for electricity market efficiency and bidding strategy analysis

    Science.gov (United States)

    Niu, Hui

    This dissertation studies models for the analysis of market efficiency and bidding behaviors of market participants in electricity markets. Simulation models are developed to estimate how transmission and operational constraints affect the competitive benchmark and market prices based on submitted bids. This research contributes to the literature in three aspects. First, transmission and operational constraints, which have been neglected in most empirical literature, are considered in the competitive benchmark estimation model. Second, the effects of operational and transmission constraints on market prices are estimated through two models based on the submitted bids of market participants. Third, these models are applied to analyze the efficiency of the Electric Reliability Council of Texas (ERCOT) real-time energy market by simulating its operations for the time period from January 2002 to April 2003. The characteristics and available information for the ERCOT market are considered. In electricity markets, electric firms compete through both spot market bidding and bilateral contract trading. A linear asymmetric supply function equilibrium (SFE) model with transmission constraints is proposed in this dissertation to analyze bidding strategies with forward contracts. The research contributes to the literature in several aspects. First, we combine forward contracts, transmission constraints, and multi-period strategy (an obligation for firms to bid consistently over an extended time horizon such as a day or an hour) into the linear asymmetric supply function equilibrium framework. As an ex-ante model, it can provide qualitative insights into firms' behaviors. Second, bidding strategies related to Transmission Congestion Rights (TCRs) are discussed by interpreting TCRs as a linear combination of forwards. Third, the model is a general one in the sense that there is no limitation on the number of firms or the scale of the transmission network, which can have

  13. Modeling hospital infrastructure by optimizing quality, accessibility and efficiency via a mixed integer programming model.

    Science.gov (United States)

    Ikkersheim, David; Tanke, Marit; van Schooten, Gwendy; de Bresser, Niels; Fleuren, Hein

    2013-06-16

    The majority of curative health care is organized in hospitals. As in most other countries, the current 94 hospital locations in the Netherlands offer almost all treatments, ranging from rather basic to very complex care. Recent studies show that concentration of care can lead to substantial quality improvements for complex conditions and that dispersion of care for chronic conditions may increase quality of care. In previous studies on the allocation of hospital infrastructure, the allocation is usually based only on accessibility and/or efficiency of hospital care. In this paper, we explore the possibility of including a quality function in the objective function, to give global directions as to what the 'optimal' hospital infrastructure would look like in the Dutch context. To create optimal societal value we have used a mathematical mixed integer programming (MIP) model that balances quality, efficiency and accessibility of care for 30 ICD-9 diagnosis groups. Typical aspects that are taken into account are the volume-outcome relationship, the maximum accepted travel times for diagnosis groups that may need emergency treatment and the minimum use of facilities. The optimal number of hospital locations per diagnosis group varies from 12 to 14 locations for diagnosis groups with a strong volume-outcome relationship, such as neoplasms, to 150 locations for chronic diagnosis groups such as diabetes and chronic obstructive pulmonary disease (COPD). In conclusion, our study shows a new approach for allocating hospital infrastructure over a country or region that includes quality of care in relation to volume per provider. In addition, our model shows that within the Dutch context chronic care may be too concentrated and complex and/or acute care may be too dispersed. Our approach can relatively easily be adapted to other countries or regions and is well suited to performing 'what-if' analyses.
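
    The trade-off such a model balances can be illustrated with a deliberately tiny, brute-force toy problem for a single diagnosis group; the regions, travel times, demands and weights below are invented for illustration and are not taken from the Dutch study, which solves a full MIP over 30 diagnosis groups.

```python
# Toy quality/accessibility/efficiency trade-off: choose which candidate
# hospital locations to keep open for one diagnosis group (hypothetical data).
from itertools import combinations

travel = {            # minutes from each patient region to each candidate location
    "north": {"A": 10, "B": 40, "C": 70},
    "south": {"A": 60, "B": 15, "C": 30},
    "east":  {"A": 80, "B": 50, "C": 20},
}
demand = {"north": 300, "south": 500, "east": 200}   # patients per year per region

def score(open_sites):
    volume = {s: 0 for s in open_sites}
    access_cost = 0.0
    for region, n in demand.items():
        nearest = min(open_sites, key=lambda s: travel[region][s])
        access_cost += n * travel[region][nearest]     # accessibility penalty
        volume[nearest] += n
    quality = sum(min(v, 600) for v in volume.values())  # volume-outcome: gains saturate
    facility_cost = 5000 * len(open_sites)                # efficiency: fewer sites is cheaper
    return quality - 0.5 * access_cost - facility_cost

best = max((set(c) for r in (1, 2, 3) for c in combinations("ABC", r)), key=score)
print("open locations:", best)
```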

  14. A Traction Control Strategy with an Efficiency Model in a Distributed Driving Electric Vehicle

    Science.gov (United States)

    Lin, Cheng

    2014-01-01

    Both active safety and fuel economy are important issues for vehicles. This paper focuses on a traction control strategy with an efficiency model in a distributed driving electric vehicle. In emergency situations, a sliding mode control algorithm was employed to achieve antislip control by keeping the wheels' slip ratios below 20%. For general longitudinal driving cases, an efficiency model aimed at improving fuel economy was built through an offline optimization stream within the two-dimensional design space composed of the acceleration pedal signal and the vehicle speed. The sliding mode control strategy for joint roads and the efficiency model for typical drive cycles were simulated. Simulation results show that the proposed driving control approach has the potential to be applied to different road surfaces. It keeps the wheels' slip ratios within the stable zone and improves fuel economy while tracking the driver's intention. PMID:25197697

  15. A traction control strategy with an efficiency model in a distributed driving electric vehicle.

    Science.gov (United States)

    Lin, Cheng; Cheng, Xingqun

    2014-01-01

    Both active safety and fuel economy are important issues for vehicles. This paper focuses on a traction control strategy with an efficiency model in a distributed driving electric vehicle. In emergency situations, a sliding mode control algorithm was employed to achieve antislip control by keeping the wheels' slip ratios below 20%. For general longitudinal driving cases, an efficiency model aimed at improving fuel economy was built through an offline optimization stream within the two-dimensional design space composed of the acceleration pedal signal and the vehicle speed. The sliding mode control strategy for joint roads and the efficiency model for typical drive cycles were simulated. Simulation results show that the proposed driving control approach has the potential to be applied to different road surfaces. It keeps the wheels' slip ratios within the stable zone and improves fuel economy while tracking the driver's intention.

  16. A Traction Control Strategy with an Efficiency Model in a Distributed Driving Electric Vehicle

    Directory of Open Access Journals (Sweden)

    Cheng Lin

    2014-01-01

    Full Text Available Both active safety and fuel economy are important issues for vehicles. This paper focuses on a traction control strategy with an efficiency model in a distributed driving electric vehicle. In emergency situations, a sliding mode control algorithm was employed to achieve antislip control by keeping the wheels’ slip ratios below 20%. For general longitudinal driving cases, an efficiency model aimed at improving fuel economy was built through an offline optimization stream within the two-dimensional design space composed of the acceleration pedal signal and the vehicle speed. The sliding mode control strategy for joint roads and the efficiency model for typical drive cycles were simulated. Simulation results show that the proposed driving control approach has the potential to be applied to different road surfaces. It keeps the wheels’ slip ratios within the stable zone and improves fuel economy while tracking the driver’s intention.

  17. Opportunities and Efficiencies in Building a New Service Desk Model.

    Science.gov (United States)

    Mayo, Alexa; Brown, Everly; Harris, Ryan

    2017-01-01

    In July 2015, the Health Sciences and Human Services Library (HS/HSL) at the University of Maryland, Baltimore (UMB), merged its reference and circulation services, creating the Information Services Department and Information Services Desk. Designing the Information Services Desk with a team approach allowed for the re-examination of the HS/HSL's service model from the ground up. With the creation of a single service point, the HS/HSL was able to create efficiencies, improve the user experience by eliminating handoffs, create a collaborative team environment, and engage information services staff in a variety of new projects.

  18. Thermal Efficiency Degradation Diagnosis Method Using Regression Model

    International Nuclear Information System (INIS)

    Jee, Chang Hyun; Heo, Gyun Young; Jang, Seok Won; Lee, In Cheol

    2011-01-01

    This paper proposes an approach to diagnosing thermal efficiency degradation in turbine cycles, based on turbine cycle simulation under abnormal conditions and a linear regression model. The correlation between the inputs representing degradation conditions (normally unmeasured but intrinsic states) and the simulation outputs (normally measured but superficial states) was analyzed with the linear regression model. The regression model can then be inverted to infer the intrinsic state associated with a superficial state observed from a power plant. The proposed diagnosis method comprises three processes: 1) simulations of degradation conditions to obtain the measured states (referred to as the what-if method), 2) development of the linear model correlating intrinsic and superficial states, and 3) determination of an intrinsic state from the superficial states of the current plant and the linear regression model (referred to as the inverse what-if method). The what-if method generates the outputs for inputs that include various root causes and/or boundary conditions, whereas the inverse what-if method calculates the inverse of the fitted model for the given superficial states, that is, the component degradation modes. The method suggested in this paper was validated using the turbine cycle model for an operating power plant.
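
    A minimal sketch of the what-if / inverse what-if idea, assuming a purely linear relation between intrinsic degradation states and measured states; the matrices and noise levels below are synthetic stand-ins, not turbine-cycle data.

```python
# (1) "what-if": simulate measured states y for sampled intrinsic states x, fit y ~= A x.
# (2) "inverse what-if": recover x from an observed y with the pseudo-inverse of A.
import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([[2.0, 0.5], [0.3, 1.5], [1.0, 1.0]])   # 3 measured states, 2 degradations

X = rng.uniform(0, 1, size=(50, 2))                       # sampled degradation conditions
Y = X @ A_true.T + 0.01 * rng.normal(size=(50, 3))        # simulated measured states

B, *_ = np.linalg.lstsq(X, Y, rcond=None)                 # least-squares fit, Y ~= X B
A_fit = B.T                                               # rows: measured states, cols: degradations

y_obs = np.array([1.2, 0.9, 1.0])                         # a "current plant" observation
x_est = np.linalg.pinv(A_fit) @ y_obs                     # inverse what-if: inferred degradation
print(x_est)
```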

  19. Asymmetric transfer efficiencies between fomites and fingers: Impact on model parameterization.

    Science.gov (United States)

    Greene, Christine; Ceron, Nancy Hernandez; Eisenberg, Marisa C; Koopman, James; Miller, Jesse D; Xi, Chuanwu; Eisenberg, Joseph N S

    2018-01-31

    Healthcare-associated infections (HAIs) affect millions of patients every year. Pathogen transmission via fomites and healthcare workers (HCWs) contributes to the persistence of HAIs in hospitals. A critical parameter needed to assess the risk of environmental transmission is the pathogen transfer efficiency between fomites and fingers. Recent studies have shown that pathogen transfer is not symmetric. In this study, we evaluated how the commonly used assumption of symmetry in transfer efficiency changes the dynamics of pathogen movement between patients and rooms and the exposures to uncolonized patients. We developed and analyzed a deterministic compartmental model of Acinetobacter baumannii describing the contact-mediated process among HCWs, patients, and the environment. We compared a system using measured asymmetrical transfer efficiency to 2 symmetrical transfer efficiency systems. Symmetric models consistently overestimated contamination levels on fomites and underestimated contamination on patients and HCWs compared to the asymmetrical model. The magnitudes of these miscalculations can exceed 100%. Regardless of the model, relative percent reductions in contamination declined after hand hygiene compliance reached approximately 60% in the large fomite scenario and 70% in the small fomite scenario. This study demonstrates how healthcare facility-specific data can be used for decision-making processes. We show that the incorrect use of transfer efficiency data leads to biased effectiveness estimates for intervention strategies. More accurate exposure models are needed for more informed infection prevention strategies. Copyright © 2018 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
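
    A stripped-down deterministic sketch of asymmetric transfer between one fomite compartment and one finger compartment is given below; the transfer efficiencies, touch rate and decay rate are placeholders, not the measured A. baumannii parameters used in the study.

```python
# Contamination moving between a fomite surface and hands with *asymmetric*
# per-touch transfer efficiencies (surface->finger differs from finger->surface).
import numpy as np
from scipy.integrate import odeint

tau_sf = 0.30   # transfer efficiency, surface -> finger (per touch, hypothetical)
tau_fs = 0.10   # transfer efficiency, finger -> surface (per touch, hypothetical)
touches = 5.0   # touch rate (per hour)
decay = 0.05    # natural die-off in either compartment (per hour)

def deriv(y, t):
    surface, finger = y
    to_finger = touches * tau_sf * surface
    to_surface = touches * tau_fs * finger
    d_surface = to_surface - to_finger - decay * surface
    d_finger = to_finger - to_surface - decay * finger
    return [d_surface, d_finger]

t = np.linspace(0, 24, 200)                     # one day, in hours
sol = odeint(deriv, [1000.0, 0.0], t)           # start with a contaminated surface only
print(sol[-1])                                  # end-of-day contamination: surface, finger
```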

  20. Efficient dynamic modeling of manipulators containing closed kinematic loops

    Science.gov (United States)

    Ferretti, Gianni; Rocco, Paolo

    An approach to efficiently solving the forward dynamics problem for manipulators containing closed chains is proposed. The two main distinctive features of this approach are: the dynamics of the equivalent open-loop tree structures (any closed loop can in general be modeled by imposing additional kinematic constraints on a suitable tree structure) are computed through an efficient Newton-Euler formulation; and the constraint equations relative to the closed chains most commonly adopted in industrial manipulators are solved explicitly, thus overcoming the redundancy of the Lagrange multipliers method while avoiding the inefficiency of a numerical solution of the implicit constraint equations. The constraint equations considered for an explicit solution are those imposed by articulated gear mechanisms and planar closed chains (pantograph-type structures). Articulated gear mechanisms are used in all industrial robots to transmit motion from actuators to links, while planar closed chains are usefully employed to increase the stiffness of manipulators and their load capacity, as well as to reduce the kinematic coupling of joint axes. The accuracy and efficiency of the proposed approach are shown through a simulation test.

  1. Comparing efficient data structures to represent geometric models for three-dimensional virtual medical training.

    Science.gov (United States)

    Bíscaro, Helton H; Nunes, Fátima L S; Dos Santos Oliveira, Jéssica; Pereira, Gustavo R

    2016-10-01

    Data structures have been explored for several domains of computer applications in order to ensure efficiency in data storage and retrieval. However, data structures can behave differently depending on the application in which they are used. Three-dimensional interactive environments offered by Virtual Reality techniques require loading and manipulating objects in real time, where realism and response time are two important requirements. Efficient representation of geometric models plays an important part in making the simulation realistic. In this paper, we present the implementation and comparison of two topologically efficient data structures - Compact Half-Edge and Mate-Face - for the representation of objects in three-dimensional interactive environments. The structures were tested under different processor and RAM configurations. The results show that both structures can be used efficiently: Mate-Face proved more efficient for manipulating neighborhood relationships, and Compact Half-Edge was more efficient for loading the geometric models. We also evaluated the data structures embedded in biopsy simulation applications using virtual reality, considering a deformation simulation method applied to virtual human organs. The results showed that their use allows building applications with high-resolution objects (large numbers of vertices) without significant impact on the time spent in the simulation. Therefore, their use contributes to the construction of more realistic simulators. Copyright © 2016 Elsevier Inc. All rights reserved.
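
    For readers unfamiliar with half-edge connectivity, the sketch below shows a bare-bones half-edge structure and a one-ring traversal; it is only conceptually related to the Compact Half-Edge and Mate-Face implementations compared in the paper, and the two-triangle mesh is a made-up example.

```python
# Each directed (half) edge stores its twin, the next edge around its face, and its
# origin vertex, which makes neighbourhood queries such as "faces around a vertex" cheap.
class HalfEdge:
    __slots__ = ("origin", "twin", "next", "face")
    def __init__(self, origin):
        self.origin, self.twin, self.next, self.face = origin, None, None, None

def one_ring_faces(h):
    """Collect the faces incident to the origin vertex of half-edge h."""
    faces, start = [], h
    while True:
        faces.append(h.face)
        if h.twin is None:            # boundary edge: stop rotating
            break
        h = h.twin.next               # next outgoing half-edge around the same vertex
        if h is start:
            break
    return faces

def make_face(verts, label):
    hs = [HalfEdge(v) for v in verts]
    for i, h in enumerate(hs):
        h.next, h.face = hs[(i + 1) % 3], label
    return hs

# Two triangles (0,1,2) and (0,2,3) sharing the edge between vertices 0 and 2.
A = make_face([0, 1, 2], "A")         # half-edges 0->1, 1->2, 2->0
B = make_face([0, 2, 3], "B")         # half-edges 0->2, 2->3, 3->0
A[2].twin, B[0].twin = B[0], A[2]     # 2->0 and 0->2 are twins
print(one_ring_faces(B[0]))           # faces around vertex 0 reachable from B: ['B', 'A']
```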

  2. sybil – Efficient constraint-based modelling in R

    Science.gov (United States)

    2013-01-01

    Background Constraint-based analyses of metabolic networks are widely used to simulate the properties of genome-scale metabolic networks. Publicly available implementations tend to be slow, impeding large scale analyses such as the genome-wide computation of pairwise gene knock-outs, or the automated search for model improvements. Furthermore, available implementations cannot easily be extended or adapted by users. Results Here, we present sybil, an open source software library for constraint-based analyses in R; R is a free, platform-independent environment for statistical computing and graphics that is widely used in bioinformatics. Among other functions, sybil currently provides efficient methods for flux-balance analysis (FBA), MOMA, and ROOM that are about ten times faster than previous implementations when calculating the effect of whole-genome single gene deletions in silico on a complete E. coli metabolic model. Conclusions Due to the object-oriented architecture of sybil, users can easily build analysis pipelines in R or even implement their own constraint-based algorithms. Based on its highly efficient communication with different mathematical optimisation programs, sybil facilitates the exploration of high-dimensional optimisation problems on small time scales. Sybil and all its dependencies are open source. Sybil and its documentation are available for download from the comprehensive R archive network (CRAN). PMID:24224957

  3. Sybil--efficient constraint-based modelling in R.

    Science.gov (United States)

    Gelius-Dietrich, Gabriel; Desouki, Abdelmoneim Amer; Fritzemeier, Claus Jonathan; Lercher, Martin J

    2013-11-13

    Constraint-based analyses of metabolic networks are widely used to simulate the properties of genome-scale metabolic networks. Publicly available implementations tend to be slow, impeding large scale analyses such as the genome-wide computation of pairwise gene knock-outs, or the automated search for model improvements. Furthermore, available implementations cannot easily be extended or adapted by users. Here, we present sybil, an open source software library for constraint-based analyses in R; R is a free, platform-independent environment for statistical computing and graphics that is widely used in bioinformatics. Among other functions, sybil currently provides efficient methods for flux-balance analysis (FBA), MOMA, and ROOM that are about ten times faster than previous implementations when calculating the effect of whole-genome single gene deletions in silico on a complete E. coli metabolic model. Due to the object-oriented architecture of sybil, users can easily build analysis pipelines in R or even implement their own constraint-based algorithms. Based on its highly efficient communication with different mathematical optimisation programs, sybil facilitates the exploration of high-dimensional optimisation problems on small time scales. Sybil and all its dependencies are open source. Sybil and its documentation are available for download from the comprehensive R archive network (CRAN).
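
    The optimisation underlying flux-balance analysis can be sketched independently of sybil's R interface; the toy network below is solved as a linear programme with SciPy and is meant only to show the S v = 0 plus flux-bounds structure that FBA implementations such as sybil solve at genome scale.

```python
# Conceptual flux-balance analysis on a 3-reaction toy network.
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix S (rows: metabolites A, B; columns: reactions R1, R2, R3)
#   R1: -> A,   R2: A -> B,   R3: B ->  (objective / "biomass" drain)
S = np.array([[ 1, -1,  0],
              [ 0,  1, -1]])
bounds = [(0, 10), (0, 10), (0, 10)]            # flux bounds for each reaction
c = [0, 0, -1]                                  # linprog minimises, so maximise v3 via -v3

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal fluxes:", res.x)                 # expected: [10, 10, 10]
```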

  4. Rice growing farmers efficiency measurement using a slack based interval DEA model with undesirable outputs

    Science.gov (United States)

    Khan, Sahubar Ali Mohd. Nadhar; Ramli, Razamin; Baten, M. D. Azizul

    2017-11-01

    It is also found that the average efficiency values of all farmers for the deterministic case are always lower than in the optimistic scenario and higher than in the pessimistic scenario. The results confirm the hypothesis, since farmers in the optimistic scenario operate in the best production situation, whereas those in the pessimistic scenario operate in the worst production situation. The results show that the proposed model can be applied when data uncertainty is present in the production environment.

  5. Plot showing ATLAS limits on Standard Model Higgs production in the mass range 110-150 GeV

    CERN Multimedia

    ATLAS Collaboration

    2011-01-01

    The combined upper limit on the Standard Model Higgs boson production cross section divided by the Standard Model expectation as a function of mH is indicated by the solid line. This is a 95% CL limit using the CLs method in the low mass range. The dotted line shows the median expected limit in the absence of a signal and the green and yellow bands reflect the corresponding 68% and 95% expected

  6. Plot showing ATLAS limits on Standard Model Higgs production in the mass range 100-600 GeV

    CERN Multimedia

    ATLAS Collaboration

    2011-01-01

    The combined upper limit on the Standard Model Higgs boson production cross section divided by the Standard Model expectation as a function of mH is indicated by the solid line. This is a 95% CL limit using the CLs method in the entire mass range. The dotted line shows the median expected limit in the absence of a signal and the green and yellow bands reflect the corresponding 68% and 95% expected

  7. Efficient Vaccine Distribution Based on a Hybrid Compartmental Model.

    Directory of Open Access Journals (Sweden)

    Zhiwen Yu

    Full Text Available To effectively and efficiently reduce the morbidity and mortality that may be caused by outbreaks of emerging infectious diseases, it is very important for public health agencies to make informed decisions for controlling the spread of the disease. Such decisions must incorporate various kinds of intervention strategies, such as vaccinations, school closures and border restrictions. Recently, researchers have paid increased attention to searching for effective vaccine distribution strategies for reducing the effects of pandemic outbreaks when resources are limited. Most of the existing research work has focused on how to design an effective age-structured epidemic model and select a suitable vaccine distribution strategy to prevent the propagation of an infectious virus. Models that evaluate age structure effects are common, but models that additionally evaluate geographical effects are less common. In this paper, we propose a new SEIR (susceptible-exposed-infectious-recovered) model, named the hybrid SEIR-V model (HSEIR-V), which considers not only the dynamics of infection prevalence in several age-specific host populations, but also seeks to characterize the dynamics by which a virus spreads in various geographic districts. Several vaccination strategies, such as different kinds of vaccine coverage, different vaccine release times and different vaccine deployment methods, are incorporated into the HSEIR-V compartmental model. We also design four hybrid vaccination distribution strategies (based on population size, contact pattern matrix, infection rate and infectious risk) for controlling the spread of viral infections. Based on data from the 2009-2010 H1N1 influenza epidemic, we evaluate the effectiveness of our proposed HSEIR-V model and study the effects of different types of human behaviour in responding to epidemics.
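
    A plain, single-population SEIR model with a constant vaccination rate gives the baseline dynamics on which the age- and district-structured HSEIR-V model builds; all rates below are illustrative, and the hybrid model additionally uses contact matrices and geographic coupling.

```python
# Single-population SEIR with vaccination of susceptibles at rate nu (illustrative rates).
import numpy as np
from scipy.integrate import odeint

beta, sigma, gamma, nu = 0.4, 1/3.0, 1/7.0, 0.01   # infection, incubation, recovery, vaccination

def seir_v(y, t):
    S, E, I, R = y
    N = S + E + I + R
    dS = -beta * S * I / N - nu * S
    dE = beta * S * I / N - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I + nu * S
    return [dS, dE, dI, dR]

t = np.linspace(0, 180, 181)                        # days
sol = odeint(seir_v, [1e6 - 10, 0, 10, 0], t)
print("peak infectious:", sol[:, 2].max())
```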

  8. Co-inoculation of a Pea Core-Collection with Diverse Rhizobial Strains Shows Competitiveness for Nodulation and Efficiency of Nitrogen Fixation Are Distinct Traits in the Interaction.

    Science.gov (United States)

    Bourion, Virginie; Heulin-Gotty, Karine; Aubert, Véronique; Tisseyre, Pierre; Chabert-Martinello, Marianne; Pervent, Marjorie; Delaitre, Catherine; Vile, Denis; Siol, Mathieu; Duc, Gérard; Brunel, Brigitte; Burstin, Judith; Lepetit, Marc

    2017-01-01

    Pea forms symbiotic nodules with Rhizobium leguminosarum sv. viciae (Rlv). In the field, pea roots can be exposed to multiple compatible Rlv strains. Little is known about the mechanisms underlying the competitiveness for nodulation of Rlv strains and the ability of pea to choose between diverse compatible Rlv strains. The variability of pea-Rlv partner choice was investigated by co-inoculating a 104-pea collection, representative of the variability encountered in the genus Pisum, with a mixture of five diverse Rlv strains. The nitrogen fixation efficiency conferred by each strain was determined in additional mono-inoculation experiments on a subset of 18 pea lines displaying contrasted Rlv choice. Differences in Rlv choice were observed within the pea collection according to its genetic or geographical diversity. The competitiveness for nodulation of a given pea-Rlv association evaluated in the multi-inoculated experiment was poorly correlated with its nitrogen fixation efficiency determined in mono-inoculation. Both plant and bacterial genetic determinants contribute to pea-Rlv partner choice. No evidence was found for co-selection of competitiveness for nodulation and nitrogen fixation efficiency. Plants and inoculants for an improved symbiotic association in the field must therefore be selected not only for nitrogen fixation efficiency but also for competitiveness for nodulation.

  9. Electrodynamical Model of Quasi-Efficient Financial Markets

    Science.gov (United States)

    Ilinski, Kirill N.; Stepanenko, Alexander S.

    The modelling of financial markets presents a problem which is both theoretically challenging and practically important. The theoretical aspects concern the issue of market efficiency, which may even have political implications [1], whilst the practical side of the problem has clear relevance to portfolio management [2] and derivative pricing [3]. Until now all market models have contained "smart money" traders and "noise" traders whose joint activity constitutes the market [4, 5]. On a short time scale this traditional separation does not seem to be realistic, and is hardly acceptable since all high-frequency market participants are professional traders and cannot be separated into "smart" and "noisy." In this paper we present a "microscopic" model with homogeneous quasi-rational behaviour of traders, aiming to describe short-time market behaviour. To construct the model we use an analogy between "screening" in quantum electrodynamics and an equilibration process in a market with temporal mispricing [6, 7]. As a result, we obtain the time-dependent distribution function of the returns which is in quantitative agreement with real market data and obeys the anomalous scaling relations recently reported for high-frequency exchange rates [8], the S&P500 [9] and other stock market indices [10, 11].

  10. Relative efficiency of joint-model and full-conditional-specification multiple imputation when conditional models are compatible: The general location model.

    Science.gov (United States)

    Seaman, Shaun R; Hughes, Rachael A

    2016-09-05

    Estimating the parameters of a regression model of interest is complicated by missing data on the variables in that model. Multiple imputation is commonly used to handle these missing data. Joint model multiple imputation and full-conditional specification multiple imputation are known to yield imputed data with the same asymptotic distribution when the conditional models of full-conditional specification are compatible with that joint model. We show that this asymptotic equivalence of imputation distributions does not imply that joint model multiple imputation and full-conditional specification multiple imputation will also yield asymptotically equally efficient inference about the parameters of the model of interest, nor that they will be equally robust to misspecification of the joint model. When the conditional models used by full-conditional specification multiple imputation are linear, logistic and multinomial regressions, these are compatible with a restricted general location joint model. We show that multiple imputation using the restricted general location joint model can be substantially more asymptotically efficient than full-conditional specification multiple imputation, but this typically requires very strong associations between variables. When associations are weaker, the efficiency gain is small. Moreover, full-conditional specification multiple imputation is shown to be potentially much more robust than joint model multiple imputation using the restricted general location model to misspecification of that model when there is substantial missingness in the outcome variable. © The Author(s) 2016.

  11. A Computational Model of Pattern Separation Efficiency in the Dentate Gyrus with Implications in Schizophrenia

    Directory of Open Access Journals (Sweden)

    Faramarz eFaghihi

    2015-03-01

    Full Text Available Information processing in the hippocampus begins with the transfer of spiking activity from the Entorhinal Cortex (EC) into the Dentate Gyrus (DG). Activity patterns in the EC are separated by the DG, which thereby plays an important role in hippocampal functions including memory. The structural and physiological parameters of these neural networks enable the hippocampus to efficiently encode the large number of inputs that animals receive and process in their lifetime. The neural encoding capacity of the DG depends on the encoding efficiency of its single neurons and on its pattern separation efficiency. In this study, encoding by the DG is modelled such that single-neuron and pattern separation efficiency are measured in simulations over different parameter values. For this purpose, a probabilistic model of single-neuron efficiency is presented to study the role of structural and physiological parameters. The known numbers of neurons in the EC and the DG are used to construct a neural network with the electrophysiological features of DG neurons. Separated inputs, represented as activated EC neurons with different firing probabilities, are presented to the DG. Pattern separation efficiency of the DG is measured for different connectivity rates between the EC and DG. The results show that in the absence of feedback inhibition on the DG neurons, the DG demonstrates low separation efficiency and high firing frequency. Feedback inhibition can increase separation efficiency while resulting in very low single-neuron encoding efficiency in the DG and very low firing frequency of DG neurons (sparse spiking). This work presents a mechanistic explanation for experimental observations in the hippocampus, in combination with theoretical measures. Moreover, the model predicts a critical role for impaired inhibitory neurons in schizophrenia, where deficient pattern separation in the DG has been observed.

  12. Efficient semiparametric estimation in generalized partially linear additive models for longitudinal/clustered data

    KAUST Repository

    Cheng, Guang

    2014-02-01

    We consider efficient estimation of the Euclidean parameters in generalized partially linear additive models for longitudinal/clustered data when multiple covariates need to be modeled nonparametrically, and propose an estimation procedure based on a spline approximation of the nonparametric part of the model and the generalized estimating equations (GEE). Although the model in consideration is natural and useful in many practical applications, the literature on this model is very limited because of challenges in dealing with dependent data for nonparametric additive models. We show that the proposed estimators are consistent and asymptotically normal even if the covariance structure is misspecified. An explicit consistent estimate of the asymptotic variance is also provided. Moreover, we derive the semiparametric efficiency score and information bound under general moment conditions. By showing that our estimators achieve the semiparametric information bound, we effectively establish their efficiency in a stronger sense than what is typically considered for GEE. The derivation of our asymptotic results relies heavily on the empirical process tools that we develop for the longitudinal/clustered data. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2014 ISI/BS.

  13. A CAD model for energy efficient offshore structures for desalination and energy generation

    Directory of Open Access Journals (Sweden)

    R. Rahul Dev,

    2016-09-01

    Full Text Available This paper presents a Computer Aided Design (CAD) model for the energy-efficient design of offshore structures. In the CAD model, preliminary dimensions and geometric details of an offshore structure (i.e., a semi-submersible) are optimized to achieve a favorable range of motion so as to reduce the energy consumed by the Dynamic Positioning System (DPS). The presented model allows the designer to select a configuration satisfying the user requirements and integrates Computer Aided Design (CAD) and Computational Fluid Dynamics (CFD). The integration of CAD with CFD yields a hydrodynamically and energy-efficient hull form. Our results show that the implementation of the present model results in a design that can serve the user-specified requirements with lower cost and energy consumption.

  14. Simple Hybrid Model for Efficiency Optimization of Induction Motor Drives with Its Experimental Validation

    Directory of Open Access Journals (Sweden)

    Branko Blanuša

    2013-01-01

    Full Text Available A new hybrid model for efficiency optimization of induction motor drives (IMD) is presented in this paper. It combines two strategies for efficiency optimization: loss model control and search control. The search control technique is used in the steady state of the drive and the loss model during transient processes. As a result, power and energy losses are reduced, especially when the load torque is significantly less than its rated value. This hybrid method also gives fast convergence to the operating point of minimal power losses and shows negligible sensitivity to motor parameter changes compared with other published optimization strategies. The model is implemented in a vector-controlled induction motor drive. Simulations and experimental tests were performed, and the results are presented in this paper.

  15. Integer Representations towards Efficient Counting in the Bit Probe Model

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Greve, Mark; Pandey, Vineet

    2011-01-01

    We consider the problem of representing numbers in close to optimal space and supporting increment, decrement, addition and subtraction operations efficiently. We study the problem in the bit probe model and analyse the number of bits read and written to perform the operations, both in the worst-case and in the average-case. A counter is space-optimal if it represents any number in the range [0,...,2^n − 1] using exactly n bits. We provide a space-optimal counter which supports increment and decrement operations by reading at most n − 1 bits and writing at most 3 bits in the worst-case. To the best of our knowledge, this is the first such representation which supports these operations by always reading strictly less than n bits. For redundant counters, where we only need to represent numbers in the range [0,...,L] for some integer L, we define the efficiency
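
    For context, a standard n-bit binary counter can require reading and writing all n bits on a single increment (e.g. 0111 -> 1000); the sketch below counts bits read and written for that baseline, which is exactly what the space-optimal representation described above improves on (at most n − 1 reads and 3 writes per increment).

```python
# Baseline: a standard binary counter, with bits-read / bits-written bookkeeping.
def increment(bits):
    """Increment a little-endian list of bits in place; return (bits_read, bits_written)."""
    read = written = 0
    for i in range(len(bits)):
        read += 1
        if bits[i] == 0:
            bits[i] = 1
            written += 1
            return read, written
        bits[i] = 0          # carry: flip a trailing 1 to 0 and continue
        written += 1
    return read, written     # overflow wraps to all zeros

counter = [1, 1, 1, 0]       # little-endian representation of 7 (0111)
print(increment(counter))    # (4, 4): all four bits read and written; counter is now 8 (1000)
```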

  16. Efficient transfer of sensitivity information in multi-component models

    International Nuclear Information System (INIS)

    Abdel-Khalik, Hany S.; Rabiti, Cristian

    2011-01-01

    In support of adjoint-based sensitivity analysis, this manuscript presents a new method to efficiently transfer adjoint information between components in a multi-component model, wherein the output of one component is passed as input to the next component. Often, one is interested in evaluating the sensitivities of the responses calculated by the last component to the inputs of the first component in the overall model. The presented method has two advantages over existing methods, which may be classified into two broad categories: brute force-type methods and amalgamated-type methods. First, the presented method determines the minimum number of adjoint evaluations for each component, as opposed to the brute force-type methods, which require full evaluation of all sensitivities for all responses calculated by each component in the overall model and prove computationally prohibitive for realistic problems. Second, the new method treats each component as a black box, as opposed to amalgamated-type methods, which require explicit knowledge of the system of equations associated with each component in order to reach the minimum number of adjoint evaluations. (author)

  17. Building an Efficient Model for Afterburn Energy Release

    Energy Technology Data Exchange (ETDEWEB)

    Alves, S; Kuhl, A; Najjar, F; Tringe, J; McMichael, L; Glascoe, L

    2012-02-03

    Many explosives will release additional energy after detonation as the detonation products mix with the ambient environment. This additional energy release, referred to as afterburn, is due to combustion of undetonated fuel with ambient oxygen. While the detonation energy release occurs on a time scale of microseconds, the afterburn energy release occurs on a time scale of milliseconds with a potentially varying energy release rate depending upon the local temperature and pressure. This afterburn energy release is not accounted for in typical equations of state, such as the Jones-Wilkins-Lee (JWL) model, used for modeling the detonation of explosives. Here we construct a straightforward and efficient approach, based on experiments and theory, to account for this additional energy release in a way that is tractable for large finite element fluid-structure problems. Barometric calorimeter experiments have been executed in both nitrogen and air environments to investigate the characteristics of afterburn for C-4 and other materials. These tests, which provide pressure time histories, along with theoretical and analytical solutions provide an engineering basis for modeling afterburn with numerical hydrocodes. It is toward this end that we have constructed a modified JWL equation of state to account for afterburn effects on the response of structures to blast. The modified equation of state includes a two phase afterburn energy release to represent variations in the energy release rate and an afterburn energy cutoff to account for partial reaction of the undetonated fuel.
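
    The standard JWL form is reproduced below with representative published C-4-like constants (approximate values), plus a purely hypothetical exponential afterburn energy term added to the product energy; this is a sketch of the general idea, not the calibrated two-phase afterburn model described in the abstract.

```python
# Standard JWL equation of state plus a hypothetical millisecond-scale afterburn term.
import math

def jwl_pressure(v, e, A=6.0977e11, B=1.295e10, R1=4.5, R2=1.4, omega=0.25):
    """JWL pressure [Pa] from relative volume v and internal energy per unit volume e [Pa].
    Default constants are approximate, C-4-like literature values (assumption)."""
    return (A * (1 - omega / (R1 * v)) * math.exp(-R1 * v)
            + B * (1 - omega / (R2 * v)) * math.exp(-R2 * v)
            + omega * e / v)

def afterburn_energy(t, e_ab=5.0e9, tau=1.0e-3):
    """Hypothetical cumulative afterburn energy release [Pa] with time constant tau [s]."""
    return e_ab * (1 - math.exp(-t / tau))

# Pressure with and without the afterburn contribution 0.5 ms after detonation:
e0, v = 9.0e9, 2.0
print(jwl_pressure(v, e0), jwl_pressure(v, e0 + afterburn_energy(0.5e-3)))
```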

  18. An efficient algorithm for corona simulation with complex chemical models

    Science.gov (United States)

    Villa, Andrea; Barbieri, Luca; Gondola, Marco; Leon-Garzon, Andres R.; Malgesini, Roberto

    2017-05-01

    The simulation of cold plasma discharges is a leading field of applied sciences with many applications ranging from pollutant control to surface treatment. Many of these applications call for the development of novel numerical techniques to implement fully three-dimensional corona solvers that can utilize complex and physically detailed chemical databases. This is a challenging task since it multiplies the difficulties inherent in a three-dimensional approach by the complexity of databases comprising tens of chemical species and hundreds of reactions. In this paper a novel approach, capable of reducing the computational burden significantly, is developed. The proposed method is based on a time stepping algorithm capable of decomposing the original problem into simpler ones, each of which is then tackled with finite element, finite volume or ordinary differential equation solvers. This last solver deals with the chemical model, and its efficient implementation is one of the main contributions of this work.
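
    The splitting idea can be illustrated on a one-dimensional toy problem: each step performs a transport (diffusion) sub-step followed by a stiff chemistry sub-step handled by an ODE solver; the grid, rate constant and first-order chemistry below are placeholders, not the corona chemistry databases discussed in the paper.

```python
# Operator splitting: explicit diffusion sub-step, then per-cell chemistry ODE sub-step.
import numpy as np
from scipy.integrate import solve_ivp

nx, dx, dt, D, k = 50, 1.0, 0.1, 0.5, 2.0
u = np.zeros(nx); u[nx // 2] = 1.0                     # a single seeded species

def chemistry(t, y):
    return -k * y                                      # first-order decay as a stand-in

for step in range(100):
    # 1) explicit finite-difference diffusion sub-step
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    u = u + dt * D * lap
    # 2) chemistry sub-step, cell by cell, with a stiff-capable ODE solver
    for i in range(nx):
        u[i] = solve_ivp(chemistry, (0, dt), [u[i]], method="BDF").y[0, -1]

print(u.max())
```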

  19. Efficient algorithms for multiscale modeling in porous media

    KAUST Repository

    Wheeler, Mary F.

    2010-09-26

    We describe multiscale mortar mixed finite element discretizations for second-order elliptic and nonlinear parabolic equations modeling Darcy flow in porous media. The continuity of flux is imposed via a mortar finite element space on a coarse grid scale, while the equations in the coarse elements (or subdomains) are discretized on a fine grid scale. We discuss the construction of multiscale mortar basis and extend this concept to nonlinear interface operators. We present a multiscale preconditioning strategy to minimize the computational cost associated with construction of the multiscale mortar basis. We also discuss the use of appropriate quadrature rules and approximation spaces to reduce the saddle point system to a cell-centered pressure scheme. In particular, we focus on multiscale mortar multipoint flux approximation method for general hexahedral grids and full tensor permeabilities. Numerical results are presented to verify the accuracy and efficiency of these approaches. © 2010 John Wiley & Sons, Ltd.

  20. Environmental assessment of a city on the model of energy-ecological efficiency

    Directory of Open Access Journals (Sweden)

    Kuzovkina Tat’yana Vladimirovna

    Full Text Available This article reviews the analytical methodology for assessing environmental safety in construction and the existing government programs on energy saving, analyses the actual state of the problem under investigation, and proposes a method for assessing environmental safety through efficiency criteria of a city. The analysis is based on data on the housing and communal services of the City of Moscow. The review of the government programs and of the methods for assessing environmental security in construction led to the conclusion that none of the programs reviewed and none of the methods consider the relationship between environmental safety parameters and energy efficiency (their indicators are considered separately from each other). In order to determine the actual state of environmental safety, an analytical review was performed of the government energy efficiency programs in Moscow and of the methods for assessing the environmental safety of construction. After considering a methodology for assessing the environmental safety of construction, the author proposes a model for determining an efficiency indicator of the city for ensuring the environmental safety of its life-support processes, which takes into account the dependence between environmental safety and energy efficiency parameters. The author describes the criteria for selecting the data on the energy and environmental efficiency of the city and shows the sequence of steps for determining this efficiency indicator. The article presents the results of an ecological assessment of Moscow based on the energy-ecological efficiency model, using the efficiency indicators of the city defined by the model for ensuring the environmental safety of its life-support processes. The model takes into account the dependence of environmental safety parameters, environmental and

  1. Efficient hybrid evolutionary optimization of interatomic potential models.

    Science.gov (United States)

    Brown, W Michael; Thompson, Aidan P; Schultz, Peter A

    2010-01-14

    The lack of adequately predictive atomistic empirical models precludes meaningful simulations for many materials systems. We describe advances in the development of a hybrid, population based optimization strategy intended for the automated development of material specific interatomic potentials. We compare two strategies for parallel genetic programming and show that the Hierarchical Fair Competition algorithm produces better results in terms of transferability, despite a lower training set accuracy. We evaluate the use of hybrid local search and several fitness models using system energies and/or particle forces. We demonstrate a drastic reduction in the computation time with the use of a correlation-based fitness statistic. We show that the problem difficulty increases with the number of atoms present in the systems used for model development and demonstrate that vectorization can help to address this issue. Finally, we show that with the use of this method, we are able to "rediscover" the exact model for simple known two- and three-body interatomic potentials using only the system energies and particle forces from the supplied atomic configurations.

  2. Computationally efficient models of neuromuscular recruitment and mechanics

    Science.gov (United States)

    Song, D.; Raphael, G.; Lan, N.; Loeb, G. E.

    2008-06-01

    We have improved the stability and computational efficiency of a physiologically realistic, virtual muscle (VM 3.*) model (Cheng et al 2000 J. Neurosci. Methods 101 117-30) by a simpler structure of lumped fiber types and a novel recruitment algorithm. In the new version (VM 4.0), the mathematical equations are reformulated into state-space representation and structured into a CMEX S-function in SIMULINK. A continuous recruitment scheme approximates the discrete recruitment of slow and fast motor units under physiological conditions. This makes it possible to predict force output during smooth recruitment and derecruitment without having to simulate explicitly a large number of independently recruited units. We removed the intermediate state variable, effective length (Leff), which had been introduced to model the delayed length dependency of the activation-frequency relationship, but which had little effect and could introduce instability under physiological conditions of use. Both of these changes greatly reduce the number of state variables with little loss of accuracy compared to the original VM. The performance of VM 4.0 was validated by comparison with VM 3.1.5 for both single-muscle force production and a multi-joint task. The improved VM 4.0 model is more suitable for the analysis of neural control of movements and for design of prosthetic systems to restore lost or impaired motor functions. VM 4.0 is available via the internet and includes options to use the original VM model, which remains useful for detailed simulations of single motor unit behavior.

  3. Efficient Analysis of Systems Biology Markup Language Models of Cellular Populations Using Arrays.

    Science.gov (United States)

    Watanabe, Leandro; Myers, Chris J

    2016-08-19

    The Systems Biology Markup Language (SBML) has been widely used for modeling biological systems. Although SBML has been successful in representing a wide variety of biochemical models, the core standard lacks the structure for representing large complex regular systems in a standard way, such as whole-cell and cellular population models. These models require a large number of variables to represent certain aspects of these types of models, such as the chromosome in the whole-cell model and the many identical cell models in a cellular population. While SBML core is not designed to handle these types of models efficiently, the proposed SBML arrays package can represent such regular structures more easily. However, in order to take full advantage of the package, analysis needs to be aware of the arrays structure. When expanding the array constructs within a model, some of the advantages of using arrays are lost. This paper describes a more efficient way to simulate arrayed models. To illustrate the proposed method, this paper uses a population of repressilator and genetic toggle switch circuits as examples. Results show that there are memory benefits using this approach with a modest cost in runtime.

  4. Cell model for efficient simulation of wave propagation in human ventricular tissue under normal and pathological conditions

    Science.gov (United States)

    Ten Tusscher, K. H. W. J.; Panfilov, A. V.

    2006-12-01

    In this paper, we formulate a model for human ventricular cells that is efficient enough for whole organ arrhythmia simulations yet detailed enough to capture the effects of cell level processes such as current blocks and channelopathies. The model is obtained from our detailed human ventricular cell model by using mathematical techniques to reduce the number of variables from 19 to nine. We carefully compare our full and reduced model at the single cell, cable and 2D tissue level and show that the reduced model has a very similar behaviour. Importantly, the new model correctly produces the effects of current blocks and channelopathies on AP and spiral wave behaviour, processes at the core of current day arrhythmia research. The new model is well over four times more efficient than the full model. We conclude that the new model can be used for efficient simulations of the effects of current changes on arrhythmias in the human heart.

  5. Cell model for efficient simulation of wave propagation in human ventricular tissue under normal and pathological conditions

    Energy Technology Data Exchange (ETDEWEB)

    Tusscher, K H W J Ten; Panfilov, A V [Department of Theoretical Biology, Utrecht University, Padualaan 8, 3584 CH Utrecht (Netherlands)

    2006-12-07

    In this paper, we formulate a model for human ventricular cells that is efficient enough for whole organ arrhythmia simulations yet detailed enough to capture the effects of cell level processes such as current blocks and channelopathies. The model is obtained from our detailed human ventricular cell model by using mathematical techniques to reduce the number of variables from 19 to nine. We carefully compare our full and reduced model at the single cell, cable and 2D tissue level and show that the reduced model has a very similar behaviour. Importantly, the new model correctly produces the effects of current blocks and channelopathies on AP and spiral wave behaviour, processes at the core of current day arrhythmia research. The new model is well over four times more efficient than the full model. We conclude that the new model can be used for efficient simulations of the effects of current changes on arrhythmias in the human heart.

  6. Nitrogen-efficient and nitrogen-inefficient Indian mustard cultivars show differential protein expression in response to elevated CO2 and low nitrogen

    Directory of Open Access Journals (Sweden)

    Peerjada Yasir Yousof

    2016-07-01

    Full Text Available Carbon (C) and nitrogen (N) are two essential elements that influence plant growth and development. The C and N metabolic pathways influence each other to affect gene expression, but little is known about which genes are regulated by the interaction between C and N or about the mechanisms by which the pathways interact. In the present investigation, proteome analysis of N-efficient and N-inefficient Indian mustard, grown under varied combinations of low N, sufficient N, ambient [CO2] and elevated [CO2], was carried out to identify proteins, and the genes encoding them, involved in the interactions between C and N. Two-dimensional gel electrophoresis (2-DE) revealed 158 candidate protein spots. Among these, 72 spots were identified by matrix-assisted laser desorption ionization-time of flight/time of flight mass spectrometry (MALDI-TOF/TOF). The identified proteins are related to various molecular processes including photosynthesis, energy metabolism, protein synthesis, transport and degradation, signal transduction, nitrogen metabolism and defense against oxidative, water and heat stresses. Identification of proteins such as the PII-like protein, cyclophilin, elongation factor-Tu, the oxygen-evolving enhancer protein and Rubisco activase offers a distinctive overview of the changes elicited by elevated [CO2], providing clues about how the N-efficient cultivar of Indian mustard adapts to low N supply under elevated [CO2] conditions. This study provides new insights and novel information for a better understanding of adaptive responses to elevated [CO2] under N deficiency in Indian mustard.

  7. 3D CFD validation of invert trap efficiency for sewer solid management using VOF model

    Directory of Open Access Journals (Sweden)

    Mohammad Mohsin

    2016-04-01

    Full Text Available Earlier investigators have numerically carried out performance analysis of the invert trap fitted in an open channel using the stochastic discrete phase model (DPM by assuming the open channel flow to be closed conduit flow under pressure and assuming zero shear stress at the top wall. This is known as the fixed lid model. By assuming the top wall to be a shear free wall, they have been able to show that the velocity distribution looks similar to that of an open channel flow with zero velocity at the bottom and maximum velocity at the top, representing the free water surface, but no information has been provided for the pressure at the free water surface. Because of this assumption, the validation of the model in predicting the trap efficiency has performed significantly poorly. In addition, the free water surface subject to zero gauge pressure cannot be modeled using the fixed lid model because there is no provision of extra space in the form of air space for the fluctuating part of the water surface profile. It can, however, be modeled using the volume of fluid (VOF model because the VOF model is the appropriate model for open channel or free surface flow. Therefore, in the present study, three-dimensional (3D computational fluid dynamics (CFD modeling with the VOF model, which considers open channel flow with a free water surface, along with the stochastic DPM, was used to model the trap efficiency of an invert trap fitted in an open rectangular channel. The governing mathematical flow equations of the VOF model were solved using the ANSYS Fluent 14.0 software, reproducing the experimental conditions exactly. The results show that the 3D CFD predictions using the VOF model closely fit the experimental data for glass bead particles.

  8. Replaceable Substructures for Efficient Part-Based Modeling

    KAUST Repository

    Liu, Han

    2015-05-01

    A popular mode of shape synthesis involves mixing and matching parts from different objects to form a coherent whole. The key challenge is to efficiently synthesize shape variations that are plausible, both locally and globally. A major obstacle is to assemble the objects with local consistency, i.e., all the connections between parts are valid with no dangling open connections. The combinatorial complexity of this problem limits existing methods in geometric and/or topological variations of the synthesized models. In this work, we introduce replaceable substructures as arrangements of parts that can be interchanged while ensuring boundary consistency. The consistency information is extracted from part labels and connections in the original source models. We present a polynomial time algorithm that discovers such substructures by working on a dual of the original shape graph that encodes inter-part connectivity. We demonstrate the algorithm on a range of test examples producing plausible shape variations, both from a geometric and from a topological viewpoint. © 2015 The Author(s) Computer Graphics Forum © 2015 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.

  9. Examining the determinants of efficiency using a latent class stochastic frontier model

    Directory of Open Access Journals (Sweden)

    Michael Danquah

    2015-12-01

    Full Text Available In this study, we combine the latent class stochastic frontier model with the complex time decay model to form a single-stage approach that accounts for unobserved technological differences to estimate efficiency and the determinants of efficiency. In this way, we contribute to the literature by estimating “pure” efficiency and determinants of productive units based on the class structure. An application of this proposed model is presented using data on the Ghanaian banking system. Our results show that inefficiency effects on the productive unit are specific to the class structure of the productive unit and therefore assuming a common technology for all productive units as is in the popular Battese and Coelli model used extensively in the literature may be misleading. The study therefore provides useful empirical evidence on the importance of accounting for unobserved technological differences across productive units. A policy based on the identified classes of the productive unit enables a more accurate and effectual measures to address efficiency challenges within the banking industry, thereby promoting financial sector development and economic growth.

  10. THE MODEL FOR POWER EFFICIENCY ASSESSMENT OF CONDENSATION HEATING INSTALLATIONS

    Directory of Open Access Journals (Sweden)

    D. Kovalchuk

    2017-11-01

    Full Text Available The main part of heating systems and domestic hot water systems are based on the natural gas boilers. Forincreasing the overall performance of such heating system the condensation gas boilers was developed and are used. Howevereven such type of boilers don't use all energy which is released from a fuel combustion. The main factors influencing thelowering of overall performance of condensation gas boilers in case of operation in real conditions are considered. Thestructure of the developed mathematical model allowing estimating the overall performance of condensation gas boilers(CGB in the conditions of real operation is considered. Performace evaluation computer experiments of such CGB during aheating season for real weather conditions of two regions of Ukraine was made. Graphic dependences of temperatureconditions and heating system effectiveness change throughout a heating season are given. It was proved that normal CGBdoes not completely use all calorific value of fuel, thus, it isn't effective. It was also proved that the efficiency of such boilerssignificantly changes during a heating season depending on weather conditions and doesn't reach the greatest possible value.The possibility of increasing the efficiency of CGB due to hydraulic division of heating and condensation sections and use ofthe vapor-compression heat pump for deeper cooling of combustion gases and removing of the highest possible amount ofthermal energy from them are considered. The scheme of heat pump connection to the heating system with a convenient gasboiler and the separate condensation economizer allowing to cool combustion gases deeply below a dew point and to warm upthe return heat carrier before a boiler input is provided. The technological diagram of the year-round use of the heat pump forhot water heating after the end of heating season, without gas use is offered.

  11. Optimization and Modeling Studies for Obtaining High Injection Efficiency at the Advanced Photon Source

    CERN Document Server

    Emery, Louis

    2005-01-01

    In recent years, the optics of the Advanced Photon Source storage ring has changed to lower equilibrium emittance (2.5 nm-rad) but at the cost of stronger sextupoles and stronger nonlinearities, which have reduced the injection efficiency from 100% in the high emittance mode. Over the years we have developed a series of optimization, measurement and modeling studies of the injection process, which allows us to obtain or maintain low injection losses. For example, the trajectory in the storage ring is optimized with trajectory knobs for maximum injection efficiency. This can be followed by collecting first-turn trajectory data, from which we can fit the initial phase-space coordinates. The model of the "optimized" trajectory would show whether the beam comes too close to a physical aperture in the injection magnets. Another modeling step is the fit and correction of the transfer line optics, which has a significant impact on phase-space matching.

  12. Modeling and analyzing of nuclear power peer review on enterprise operational efficiency

    International Nuclear Information System (INIS)

    Li Songbai; Xu Yuhu

    2007-01-01

    Based on the practice and analysis of peer review in nuclear power plants, the models on the Pareto improvement of peer review, and governance entropy decrease of peer review are set up and discussed. It shows that the peer review of nuclear power is actually a process of Pareto improvement, and of governance entropy decrease. It's a process of improvement of the enterprise operational efficiency. (authors)

  13. Oracle Efficient Variable Selection in Random and Fixed Effects Panel Data Models

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl

    This paper generalizes the results for the Bridge estimator of Huang et al. (2008) to linear random and fixed effects panel data models which are allowed to grow in both dimensions. In particular we show that the Bridge estimator is oracle efficient. It can correctly distinguish between relevant...... of Huang et al. (2008). Furthermore, the number of relevant variables is allowed to be larger than the sample size....

  14. Spatial extrapolation of light use efficiency model parameters to predict gross primary production

    Directory of Open Access Journals (Sweden)

    Karsten Schulz

    2011-12-01

    Full Text Available To capture the spatial and temporal variability of the gross primary production as a key component of the global carbon cycle, the light use efficiency modeling approach in combination with remote sensing data has shown to be well suited. Typically, the model parameters, such as the maximum light use efficiency, are either set to a universal constant or to land class dependent values stored in look-up tables. In this study, we employ the machine learning technique support vector regression to explicitly relate the model parameters of a light use efficiency model calibrated at several FLUXNET sites to site-specific characteristics obtained by meteorological measurements, ecological estimations and remote sensing data. A feature selection algorithm extracts the relevant site characteristics in a cross-validation, and leads to an individual set of characteristic attributes for each parameter. With this set of attributes, the model parameters can be estimated at sites where a parameter calibration is not possible due to the absence of eddy covariance flux measurement data. This will finally allow a spatially continuous model application. The performance of the spatial extrapolation scheme is evaluated with a cross-validation approach, which shows the methodology to be well suited to recapture the variability of gross primary production across the study sites.

  15. Hong Kong Hospital Authority resource efficiency evaluation: Via a novel DEA-Malmquist model and Tobit regression model.

    Science.gov (United States)

    Guo, Hainan; Zhao, Yang; Niu, Tie; Tsui, Kwok-Leung

    2017-01-01

    The Hospital Authority (HA) is a statutory body managing all the public hospitals and institutes in Hong Kong (HK). In recent decades, Hong Kong Hospital Authority (HKHA) has been making efforts to improve the healthcare services, but there still exist some problems like unfair resource allocation and poor management, as reported by the Hong Kong medical legislative committee. One critical consequence of these problems is low healthcare efficiency of hospitals, leading to low satisfaction among patients. Moreover, HKHA also suffers from the conflict between limited resource and growing demand. An effective evaluation of HA is important for resource planning and healthcare decision making. In this paper, we propose a two-phase method to evaluate HA efficiency for reducing healthcare expenditure and improving healthcare service. Specifically, in Phase I, we measure the HKHA efficiency changes from 2000 to 2013 by applying a novel DEA-Malmquist index with undesirable factors. In Phase II, we further explore the impact of some exogenous factors (e.g., population density) on HKHA efficiency by Tobit regression model. Empirical results show that there are significant differences between the efficiencies of different hospitals and clusters. In particular, it is found that the public hospital serving in a richer district has a relatively lower efficiency. To a certain extent, this reflects the socioeconomic reality in HK that people with better economic condition prefers receiving higher quality service from the private hospitals.

  16. Incorporating C60 as Nucleation Sites Optimizing PbI2 Films To Achieve Perovskite Solar Cells Showing Excellent Efficiency and Stability via Vapor-Assisted Deposition Method.

    Science.gov (United States)

    Chen, Hai-Bin; Ding, Xi-Hong; Pan, Xu; Hayat, Tasawar; Alsaedi, Ahmed; Ding, Yong; Dai, Song-Yuan

    2018-01-24

    To achieve high-quality perovskite solar cells (PSCs), the morphology and carrier transportation of perovskite films need to be optimized. Herein, C 60 is employed as nucleation sites in PbI 2 precursor solution to optimize the morphology of perovskite films via vapor-assisted deposition process. Accompanying the homogeneous nucleation of PbI 2 , the incorporation of C 60 as heterogeneous nucleation sites can lower the nucleation free energy of PbI 2 , which facilitates the diffusion and reaction between PbI 2 and organic source. Meanwhile, C 60 could enhance carrier transportation and reduce charge recombination in the perovskite layer due to its high electron mobility and conductivity. In addition, the grain sizes of perovskite get larger with C 60 optimizing, which can reduce the grain boundaries and voids in perovskite and prevent the corrosion because of moisture. As a result, we obtain PSCs with a power conversion efficiency (PCE) of 18.33% and excellent stability. The PCEs of unsealed devices drop less than 10% in a dehumidification cabinet after 100 days and remain at 75% of the initial PCE during exposure to ambient air (humidity > 60% RH, temperature > 30 °C) for 30 days.

  17. A framework for fuzzy model of thermoradiotherapy efficiency

    International Nuclear Information System (INIS)

    Kosterev, V.V.; Averkin, A.N.

    2005-01-01

    Full text: The use of hyperthermia as an adjuvant to radiation in the treatment of local and regional disease currently offers the most significant advantages. For processing of information of thermo radiotherapy efficiency, it is expedient to use the fuzzy logic based decision-support system - fuzzy system (FS). FSs are widely used in various application areas of control and decision making. Their popularity is due to the following reasons. Firstly, FS with triangular membership functions is universal approximator. Secondly, the designing of FS does not need the exact model of the process, but needs only qualitative linguistic dependences between the parameters. Thirdly, there are many program and hardware realizations of FS with very high speed of calculations. Fourthly, accuracy of the decisions received based on FS, usually is not worse and sometimes is better than accuracy of the decisions received by traditional methods. Moreover, dependence between input and output variables can be easily expressed in linguistic scales. The goal of this research is to choose the data fusion RULE's operators suitable to experimental results and taking into consideration uncertainty factor. Methods of aggregation and data fusion might be used which provide a methodology to extract comprehensible rules from data. Several data fusion algorithms have been developed and applied, individually and in combination, providing users with various levels of informational detail. In reviewing these emerging technology three basic categories (levels) of data fusion has been developed. These fusion levels are differentiated according to the amount of information they provide. Refs. 2 (author)

  18. Modeling of Glass Making Processes for Improved Efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Thomas P. Seward III

    2003-03-31

    The overall goal of this project was to develop a high-temperature melt properties database with sufficient reliability to allow mathematical modeling of glass melting and forming processes for improved product quality, improved efficiency and lessened environmental impact. It was initiated by the United States glass industry through the NSF Industry/University Center for Glass Research (CGR) at Alfred University [1]. Because of their important commercial value, six different types/families of glass were studied: container, float, fiberglass (E- and wool-types), low-expansion borosilicate, and color TV panel glasses. CGR member companies supplied production-quality glass from all six families upon which we measured, as a function of temperature in the molten state, density, surface tension, viscosity, electrical resistivity, infrared transmittance (to determine high temperature radiative conductivity), non-Newtonian flow behavior, and oxygen partial pres sure. With CGR cost sharing, we also studied gas solubility and diffusivity in each of these glasses. Because knowledge of the compositional dependencies of melt viscosity and electrical resistivity are extremely important for glass melting furnace design and operation, these properties were studied more fully. Composition variations were statistically designed for all six types/families of glass. About 140 different glasses were then melted on a laboratory scale and their viscosity and electrical resistivity measured as a function of temperature. The measurements were completed in February 2003 and are reported on here. The next steps will be (1) to statistically analyze the compositional dependencies of viscosity and electrical resistivity and develop composition-property response surfaces, (2) submit all the data to CGR member companies to evaluate the usefulness in their models, and (3) publish the results in technical journals and most likely in book form.

  19. UV-C as an efficient means to combat biofilm formation in show caves: evidence from the La Glacière Cave (France) and laboratory experiments.

    Science.gov (United States)

    Pfendler, Stéphane; Einhorn, Olympe; Karimi, Battle; Bousta, Faisl; Cailhol, Didier; Alaoui-Sosse, Laurence; Alaoui-Sosse, Badr; Aleya, Lotfi

    2017-11-01

    Ultra-violet C (UV-C) treatment is commonly used in sterilization processes in industry, laboratories, and hospitals, showing its efficacy against microorganisms such as bacteria, algae, or fungi. In this study, we have eradicated for the first time all proliferating biofilms present in a show cave (the La Glacière Cave, Chaux-lès-Passavant, France). Colorimetric measurements of irradiated biofilms were then monitored for 21 months. To understand the importance of exposition of algae to light just after UV radiation, similar tests were carried out in laboratory conditions. Since UV-C can be deleterious for biofilm support, especially parietal painting, we investigated their effects on prehistoric pigment. Results showed complete eradication of cave biofilms with no algae proliferation observed after 21 months. Moreover, quantum yield results showed a decrease directly after UV-C treatment, indicating inhibition of algae photosynthesis. Furthermore, no changes in pigment color nor in chemical and crystalline properties has been demonstrated. The present findings demonstrate that the UV-C method can be considered environmentally friendly and the best alternative to chemicals. This inexpensive and easily implemented method is advantageous for cave owners and managers.

  20. Plectasin shows intracellular activity against Staphylococcus aureus in human THP-1 monocytes and in a mouse peritonitis model

    DEFF Research Database (Denmark)

    Brinch, Karoline Sidelmann; Sandberg, Anne; Baudoux, Pierre

    2009-01-01

    Antimicrobial therapy of infections with Staphylococcus aureus can pose a challenge due to slow response to therapy and recurrence of infection. These treatment difficulties can partly be explained by intracellular survival of staphylococci, which is why the intracellular activity...... was maintained (maximal relative efficacy [E(max)], 1.0- to 1.3-log reduction in CFU) even though efficacy was inferior to that of extracellular killing (E(max), >4.5-log CFU reduction). Animal studies included a novel use of the mouse peritonitis model, exploiting extra- and intracellular differentiation assays...... concentration. These findings stress the importance of performing studies of extra- and intracellular activity since these features cannot be predicted from traditional MIC and killing kinetic studies. Application of both the THP-1 and the mouse peritonitis models showed that the in vitro results were similar...

  1. STEPS: efficient simulation of stochastic reaction–diffusion models in realistic morphologies

    Directory of Open Access Journals (Sweden)

    Hepburn Iain

    2012-05-01

    Full Text Available Abstract Background Models of cellular molecular systems are built from components such as biochemical reactions (including interactions between ligands and membrane-bound proteins, conformational changes and active and passive transport. A discrete, stochastic description of the kinetics is often essential to capture the behavior of the system accurately. Where spatial effects play a prominent role the complex morphology of cells may have to be represented, along with aspects such as chemical localization and diffusion. This high level of detail makes efficiency a particularly important consideration for software that is designed to simulate such systems. Results We describe STEPS, a stochastic reaction–diffusion simulator developed with an emphasis on simulating biochemical signaling pathways accurately and efficiently. STEPS supports all the above-mentioned features, and well-validated support for SBML allows many existing biochemical models to be imported reliably. Complex boundaries can be represented accurately in externally generated 3D tetrahedral meshes imported by STEPS. The powerful Python interface facilitates model construction and simulation control. STEPS implements the composition and rejection method, a variation of the Gillespie SSA, supporting diffusion between tetrahedral elements within an efficient search and update engine. Additional support for well-mixed conditions and for deterministic model solution is implemented. Solver accuracy is confirmed with an original and extensive validation set consisting of isolated reaction, diffusion and reaction–diffusion systems. Accuracy imposes upper and lower limits on tetrahedron sizes, which are described in detail. By comparing to Smoldyn, we show how the voxel-based approach in STEPS is often faster than particle-based methods, with increasing advantage in larger systems, and by comparing to MesoRD we show the efficiency of the STEPS implementation. Conclusion STEPS simulates

  2. The relative efficiency of Iranian’s rural traffic police: a three-stage DEA model

    Directory of Open Access Journals (Sweden)

    Habibollah Rahimi

    2017-10-01

    Full Text Available Abstract Background Road traffic Injuries (RTIs as a health problem imposes governments to implement different interventions. Target achievement in this issue required effective and efficient measures. Efficiency evaluation of traffic police as one of the responsible administrators is necessary for resource management. Therefore, this study conducted to measure Iran’s rural traffic police efficiency. Methods This was an ecological study. To obtain pure efficiency score, three-stage DEA model was conducted with seven inputs and three output variables. At the first stage, crude efficiency score was measured with BCC-O model. Next, to extract the effects of socioeconomic, demographic, traffic count and road infrastructure as the environmental variables and statistical noise, the Stochastic Frontier Analysis (SFA model was applied and the output values were modified according to similar environment and statistical noise conditions. Then, the pure efficiency score was measured using modified outputs and BCC-O model. Results In total, the efficiency score of 198 police stations from 24 provinces of 31 provinces were measured. The annual means (standard deviation of damage, injury and fatal accidents were 247.7 (258.4, 184.9 (176.9, and 28.7 (19.5, respectively. Input averages were 5.9 (3.0 patrol teams, 0.5% (0.2 manpower proportions, 7.5 (2.9 patrol cars, 0.5 (1.3 motorcycles, 77,279.1 (46,794.7 penalties, 90.9 (2.8 cultural and educational activity score, 0.7 (2.4 speed cameras. The SFA model showed non-significant differences between police station performances and the most differences attributed to the environmental and random error. One-way main road, by road, traffic count and the number of household owning motorcycle had significant positive relations with inefficiency score. The length of freeway/highway and literacy rate variables had negative relations, significantly. Pure efficiency score was with mean of 0.95 and SD of 0.09. Conclusions

  3. The relative efficiency of Iranian's rural traffic police: a three-stage DEA model.

    Science.gov (United States)

    Rahimi, Habibollah; Soori, Hamid; Nazari, Seyed Saeed Hashemi; Motevalian, Seyed Abbas; Azar, Adel; Momeni, Eskandar; Javartani, Mehdi

    2017-10-13

    Road traffic Injuries (RTIs) as a health problem imposes governments to implement different interventions. Target achievement in this issue required effective and efficient measures. Efficiency evaluation of traffic police as one of the responsible administrators is necessary for resource management. Therefore, this study conducted to measure Iran's rural traffic police efficiency. This was an ecological study. To obtain pure efficiency score, three-stage DEA model was conducted with seven inputs and three output variables. At the first stage, crude efficiency score was measured with BCC-O model. Next, to extract the effects of socioeconomic, demographic, traffic count and road infrastructure as the environmental variables and statistical noise, the Stochastic Frontier Analysis (SFA) model was applied and the output values were modified according to similar environment and statistical noise conditions. Then, the pure efficiency score was measured using modified outputs and BCC-O model. In total, the efficiency score of 198 police stations from 24 provinces of 31 provinces were measured. The annual means (standard deviation) of damage, injury and fatal accidents were 247.7 (258.4), 184.9 (176.9), and 28.7 (19.5), respectively. Input averages were 5.9 (3.0) patrol teams, 0.5% (0.2) manpower proportions, 7.5 (2.9) patrol cars, 0.5 (1.3) motorcycles, 77,279.1 (46,794.7) penalties, 90.9 (2.8) cultural and educational activity score, 0.7 (2.4) speed cameras. The SFA model showed non-significant differences between police station performances and the most differences attributed to the environmental and random error. One-way main road, by road, traffic count and the number of household owning motorcycle had significant positive relations with inefficiency score. The length of freeway/highway and literacy rate variables had negative relations, significantly. Pure efficiency score was with mean of 0.95 and SD of 0.09. Iran's traffic police has potential opportunity to reduce

  4. Climate Modelling Shows Increased Risk to Eucalyptus sideroxylon on the Eastern Coast of Australia Compared to Eucalyptus albens.

    Science.gov (United States)

    Shabani, Farzin; Kumar, Lalit; Ahmadi, Mohsen

    2017-11-24

    Aim: To identify the extent and direction of range shift of Eucalyptus sideroxylon and E. albens in Australia by 2050 through an ensemble forecast of four species distribution models (SDMs). Each was generated using four global climate models (GCMs), under two representative concentration pathways (RCPs). Location: Australia. Methods : We used four SDMs of (i) generalized linear model, (ii) MaxEnt, (iii) random forest, and (iv) boosted regression tree to construct SDMs for species E. sideroxylon and E. albens under four GCMs including (a) MRI-CGCM3, (b) MIROC5, (c) HadGEM2-AO and (d) CCSM4, under two RCPs of 4.5 and 6.0. Here, the true skill statistic (TSS) index was used to assess the accuracy of each SDM. Results: Results showed that E. albens and E. sideroxylon will lose large areas of their current suitable range by 2050 and E. sideroxylon is projected to gain in eastern and southeastern Australia. Some areas were also projected to remain suitable for each species between now and 2050. Our modelling showed that E. sideroxylon will lose suitable habitat on the western side and will not gain any on the eastern side because this region is one the most heavily populated areas in the country, and the populated areas are moving westward. The predicted decrease in E. sideroxylon's distribution suggests that land managers should monitor its population closely, and evaluate whether it meets criteria for a protected legal status. Main conclusions: Both Eucalyptus sideroxylon and E. albens will be negatively affected by climate change and it is projected that E. sideroxylon will be at greater risk of losing habitat than E. albens .

  5. Climate Modelling Shows Increased Risk to Eucalyptus sideroxylon on the Eastern Coast of Australia Compared to Eucalyptus albens

    Directory of Open Access Journals (Sweden)

    Farzin Shabani

    2017-11-01

    Full Text Available Aim: To identify the extent and direction of range shift of Eucalyptus sideroxylon and E. albens in Australia by 2050 through an ensemble forecast of four species distribution models (SDMs. Each was generated using four global climate models (GCMs, under two representative concentration pathways (RCPs. Location: Australia. Methods: We used four SDMs of (i generalized linear model, (ii MaxEnt, (iii random forest, and (iv boosted regression tree to construct SDMs for species E. sideroxylon and E. albens under four GCMs including (a MRI-CGCM3, (b MIROC5, (c HadGEM2-AO and (d CCSM4, under two RCPs of 4.5 and 6.0. Here, the true skill statistic (TSS index was used to assess the accuracy of each SDM. Results: Results showed that E. albens and E. sideroxylon will lose large areas of their current suitable range by 2050 and E. sideroxylon is projected to gain in eastern and southeastern Australia. Some areas were also projected to remain suitable for each species between now and 2050. Our modelling showed that E. sideroxylon will lose suitable habitat on the western side and will not gain any on the eastern side because this region is one the most heavily populated areas in the country, and the populated areas are moving westward. The predicted decrease in E. sideroxylon’s distribution suggests that land managers should monitor its population closely, and evaluate whether it meets criteria for a protected legal status. Main conclusions: Both Eucalyptus sideroxylon and E. albens will be negatively affected by climate change and it is projected that E. sideroxylon will be at greater risk of losing habitat than E. albens.

  6. Experimentally infected domestic ducks show efficient transmission of Indonesian H5N1 highly pathogenic avian influenza virus, but lack persistent viral shedding.

    Directory of Open Access Journals (Sweden)

    Hendra Wibawa

    Full Text Available Ducks are important maintenance hosts for avian influenza, including H5N1 highly pathogenic avian influenza viruses. A previous study indicated that persistence of H5N1 viruses in ducks after the development of humoral immunity may drive viral evolution following immune selection. As H5N1 HPAI is endemic in Indonesia, this mechanism may be important in understanding H5N1 evolution in that region. To determine the capability of domestic ducks to maintain prolonged shedding of Indonesian clade 2.1 H5N1 virus, two groups of Pekin ducks were inoculated through the eyes, nostrils and oropharynx and viral shedding and transmission investigated. Inoculated ducks (n = 15, which were mostly asymptomatic, shed infectious virus from the oral route from 1 to 8 days post inoculation, and from the cloacal route from 2-8 dpi. Viral ribonucleic acid was detected from 1-15 days post inoculation from the oral route and 1-24 days post inoculation from the cloacal route (cycle threshold <40. Most ducks seroconverted in a range of serological tests by 15 days post inoculation. Virus was efficiently transmitted during acute infection (5 inoculation-infected to all 5 contact ducks. However, no evidence for transmission, as determined by seroconversion and viral shedding, was found between an inoculation-infected group (n = 10 and contact ducks (n = 9 when the two groups only had contact after 10 days post inoculation. Clinical disease was more frequent and more severe in contact-infected (2 of 5 than inoculation-infected ducks (1 of 15. We conclude that Indonesian clade 2.1 H5N1 highly pathogenic avian influenza virus does not persist in individual ducks after acute infection.

  7. Coadministration of doxorubicin and etoposide loaded in camel milk phospholipids liposomes showed increased antitumor activity in a murine model

    Directory of Open Access Journals (Sweden)

    Maswadeh HM

    2015-04-01

    Full Text Available Hamzah M Maswadeh,1 Ahmed N Aljarbou,1 Mohammed S Alorainy,2 Arshad H Rahmani,3 Masood A Khan3 1Department of Pharmaceutics, College of Pharmacy, 2Department of Pharmacology and Therapeutics, College of Medicine, 3College of Applied Medical Sciences, Qassim University, Buraydah, Kingdom of Saudi Arabia Abstract: Small unilamellar vesicles from camel milk phospholipids (CML mixture or from 1,2 dipalmitoyl-sn-glycero-3-phosphatidylcholine (DPPC were prepared, and anticancer drugs doxorubicin (Dox or etoposide (ETP were loaded. Liposomal formulations were used against fibrosarcoma in a murine model. Results showed a very high percentage of Dox encapsulation (~98% in liposomes (Lip prepared from CML-Lip or DPPC-Lip, whereas the percentage of encapsulations of ETP was on the lower side, 22% of CML-Lip and 18% for DPPC-Lip. Differential scanning calorimetry curves show that Dox enhances the lamellar formation in CML-Lip, whereas ETP enhances the nonlamellar formation. Differential scanning calorimetry curves also showed that the presence of Dox and ETP together into DPPC-Lip produced the interdigitation effect. The in vivo anticancer activity of liposomal formulations of Dox or ETP or a combination of both was assessed against benzopyrene (BAP-induced fibrosarcoma in a murine model. Tumor-bearing mice treated with a combination of Dox and ETP loaded into CML-Lip showed increased survival and reduced tumor growth compared to other groups, including the combination of Dox and ETP in DPPC-Lip. Fibrosarcoma-bearing mice treated with a combination of free (Dox + ETP showed much higher tumor growth compared to those groups treated with CML-Lip-(Dox + ETP or DPPC-Lip-(Dox + ETP. Immunohistochemical study was also performed to show the expression of tumor-suppressor PTEN, and it was found that the tumor tissues from the group of mice treated with a combination of free (Dox + ETP showed greater loss of cytoplasmic PTEN than tumor tissues obtained from the

  8. Connecting with The Biggest Loser: an extended model of parasocial interaction and identification in health-related reality TV shows.

    Science.gov (United States)

    Tian, Yan; Yoo, Jina H

    2015-01-01

    This study investigates audience responses to health-related reality TV shows in the setting of The Biggest Loser. It conceptualizes a model for audience members' parasocial interaction and identification with cast members and explores antecedents and outcomes of parasocial interaction and identification. Data analysis suggests the following direct relationships: (1) audience members' exposure to the show is positively associated with parasocial interaction, which in turn is positively associated with identification, (2) parasocial interaction is positively associated with exercise self-efficacy, whereas identification is negatively associated with exercise self-efficacy, and (3) exercise self-efficacy is positively associated with exercise behavior. Indirect effects of parasocial interaction and identification on exercise self-efficacy and exercise behavior are also significant. We discuss the theoretical and practical implications of these findings.

  9. A computationally efficient method for nonparametric modeling of neural spiking activity with point processes.

    Science.gov (United States)

    Coleman, Todd P; Sarma, Sridevi S

    2010-08-01

    Point-process models have been shown to be useful in characterizing neural spiking activity as a function of extrinsic and intrinsic factors. Most point-process models of neural activity are parametric, as they are often efficiently computable. However, if the actual point process does not lie in the assumed parametric class of functions, misleading inferences can arise. Nonparametric methods are attractive due to fewer assumptions, but computation in general grows with the size of the data. We propose a computationally efficient method for nonparametric maximum likelihood estimation when the conditional intensity function, which characterizes the point process in its entirety, is assumed to be a Lipschitz continuous function but otherwise arbitrary. We show that by exploiting much structure, the problem becomes efficiently solvable. We next demonstrate a model selection procedure to estimate the Lipshitz parameter from data, akin to the minimum description length principle and demonstrate consistency of our estimator under appropriate assumptions. Finally, we illustrate the effectiveness of our method with simulated neural spiking data, goldfish retinal ganglion neural data, and activity recorded in CA1 hippocampal neurons from an awake behaving rat. For the simulated data set, our method uncovers a more compact representation of the conditional intensity function when it exists. For the goldfish and rat neural data sets, we show that our nonparametric method gives a superior absolute goodness-of-fit measure used for point processes than the most common parametric and splines-based approaches.

  10. Collaborative Business Models for Energy Efficient Solutions An Exploratory Analysis of Danish and German Manufacturers

    DEFF Research Database (Denmark)

    Boyd, Britta; Brem, Alexander; Bogers, Marcel

    , Amit, & Massa, 2011). Especially in the face of the grand challenge of climate change, looking for energy efficient solutions offers particular opportunities to businesses to stay competitive. This is an opportunity not only for big companies in metropolitan areas, but also for small and medium sized......The growing dynamics of innovation and productivity affect businesses in most industries and countries. Companies face these challenges by constantly developing new technologies and business models - the logic with which they create and capture value (Afuah, 2014; Osterwalder & Pigneur, 2010; Zott...... more energy efficient solutions seems to pay off. News items in media like “The Guardian” show at least that there is supraregional attention (Guardian, 2015). We tap into these developments through a study of exploring collaborations and business models that have the potential to increase energy...

  11. Evaluation on the efficiency of the construction sector companies in Malaysia with data envelopment analysis model

    Science.gov (United States)

    Weng Hoe, Lam; Jinn, Lim Shun; Weng Siew, Lam; Hai, Tey Kim

    2018-04-01

    In Malaysia, construction sector is essential parts in driving the development of the Malaysian economy. Construction industry is an economic investment and its relationship with economic development is well posited. However, the evaluation on the efficiency of the construction sectors companies listed in Kuala Lumpur Stock Exchange (KLSE) with Data Analysis Envelopment (DEA) model have not been actively studied by the past researchers. Hence the purpose of this study is to examine the financial performance the listed construction sectors companies in Malaysia in the year of 2015. The results of this study show that the efficiency of construction sectors companies can be obtained by using DEA model through ratio analysis which defined as the ratio of total outputs to total inputs. This study is significant because the inefficient companies are identified for potential improvement.

  12. An efficient voice activity detection algorithm by combining statistical model and energy detection

    Science.gov (United States)

    Wu, Ji; Zhang, Xiao-Lei

    2011-12-01

    In this article, we present a new voice activity detection (VAD) algorithm that is based on statistical models and empirical rule-based energy detection algorithm. Specifically, it needs two steps to separate speech segments from background noise. For the first step, the VAD detects possible speech endpoints efficiently using the empirical rule-based energy detection algorithm. However, the possible endpoints are not accurate enough when the signal-to-noise ratio is low. Therefore, for the second step, we propose a new gaussian mixture model-based multiple-observation log likelihood ratio algorithm to align the endpoints to their optimal positions. Several experiments are conducted to evaluate the proposed VAD on both accuracy and efficiency. The results show that it could achieve better performance than the six referenced VADs in various noise scenarios.

  13. An efficient voice activity detection algorithm by combining statistical model and energy detection

    Directory of Open Access Journals (Sweden)

    Wu Ji

    2011-01-01

    Full Text Available Abstract In this article, we present a new voice activity detection (VAD algorithm that is based on statistical models and empirical rule-based energy detection algorithm. Specifically, it needs two steps to separate speech segments from background noise. For the first step, the VAD detects possible speech endpoints efficiently using the empirical rule-based energy detection algorithm. However, the possible endpoints are not accurate enough when the signal-to-noise ratio is low. Therefore, for the second step, we propose a new gaussian mixture model-based multiple-observation log likelihood ratio algorithm to align the endpoints to their optimal positions. Several experiments are conducted to evaluate the proposed VAD on both accuracy and efficiency. The results show that it could achieve better performance than the six referenced VADs in various noise scenarios.

  14. Restless led syndrome model Drosophila melanogaster show successful olfactory learning and 1-day retention of the acquired memory

    Directory of Open Access Journals (Sweden)

    Mika F. Asaba

    2013-09-01

    Full Text Available Restless Legs Syndrome (RLS is a prevalent but poorly understood disorder that ischaracterized by uncontrollable movements during sleep, resulting in sleep disturbance.Olfactory memory in Drosophila melanogaster has proven to be a useful tool for the study ofcognitive deficits caused by sleep disturbances, such as those seen in RLS. A recently generatedDrosophila model of RLS exhibited disturbed sleep patterns similar to those seen in humans withRLS. This research seeks to improve understanding of the relationship between cognitivefunctioning and sleep disturbances in a new model for RLS. Here, we tested learning andmemory in wild type and dBTBD9 mutant flies by Pavlovian olfactory conditioning, duringwhich a shock was paired with one of two odors. Flies were then placed in a T-maze with oneodor on either side, and successful associative learning was recorded when the flies chose theside with the unpaired odor. We hypothesized that due to disrupted sleep patterns, dBTBD9mutant flies would be unable to learn the shock-odor association. However, the current studyreports that the recently generated Drosophila model of RLS shows successful olfactorylearning, despite disturbed sleep patterns, with learning performance levels matching or betterthan wild type flies.

  15. Efficient material flow in mixed model assembly lines.

    Science.gov (United States)

    Alnahhal, Mohammed; Noche, Bernd

    2013-01-01

    In this study, material flow from decentralized supermarkets to stations in mixed model assembly lines using tow (tugger) trains is investigated. Train routing, scheduling, and loading problems are investigated in parallel to minimize the number of trains, variability in loading and in routes lengths, and line-side inventory holding costs. The general framework for solving these problems in parallel contains analytical equations, Dynamic Programming (DP), and Mixed Integer Programming (MIP). Matlab in conjunction with LP-solve software was used to formulate the problem. An example was presented to explain the idea. Results which were obtained in very short CPU time showed the effect of using time buffer among routes on the feasible space and on the optimal solution. Results also showed the effect of the objective, concerning reducing the variability in loading, on the results of routing, scheduling, and loading. Moreover, results showed the importance of considering the maximum line-side inventory beside the capacity of the train in the same time in finding the optimal solution.

  16. The BACHD Rat Model of Huntington Disease Shows Specific Deficits in a Test Battery of Motor Function

    Directory of Open Access Journals (Sweden)

    Giuseppe Manfré

    2017-11-01

    Full Text Available Rationale: Huntington disease (HD is a progressive neurodegenerative disorder characterized by motor, cognitive and neuropsychiatric symptoms. HD is usually diagnosed by the appearance of motor deficits, resulting in skilled hand use disruption, gait abnormality, muscle wasting and choreatic movements. The BACHD transgenic rat model for HD represents a well-established transgenic rodent model of HD, offering the prospect of an in-depth characterization of the motor phenotype.Objective: The present study aims to characterize different aspects of motor function in BACHD rats, combining classical paradigms with novel high-throughput behavioral phenotyping.Methods: Wild-type (WT and transgenic animals were tested longitudinally from 2 to 12 months of age. To measure fine motor control, rats were challenged with the pasta handling test and the pellet reaching test. To evaluate gross motor function, animals were assessed by using the holding bar and the grip strength tests. Spontaneous locomotor activity and circadian rhythmicity were assessed in an automated home-cage environment, namely the PhenoTyper. We then integrated existing classical methodologies to test motor function with automated home-cage assessment of motor performance.Results: BACHD rats showed strong impairment in muscle endurance at 2 months of age. Altered circadian rhythmicity and locomotor activity were observed in transgenic animals. On the other hand, reaching behavior, forepaw dexterity and muscle strength were unaffected.Conclusions: The BACHD rat model exhibits certain features of HD patients, like muscle weakness and changes in circadian behavior. We have observed modest but clear-cut deficits in distinct motor phenotypes, thus confirming the validity of this transgenic rat model for treatment and drug discovery purposes.

  17. Feed Forward Artificial Neural Network Model to Estimate the TPH Removal Efficiency in Soil Washing Process

    Directory of Open Access Journals (Sweden)

    Hossein Jafari Mansoorian

    2017-01-01

    Full Text Available Background & Aims of the Study: A feed forward artificial neural network (FFANN was developed to predict the efficiency of total petroleum hydrocarbon (TPH removal from a contaminated soil, using soil washing process with Tween 80. The main objective of this study was to assess the performance of developed FFANN model for the estimation of   TPH removal. Materials and Methods: Several independent repressors including pH, shaking speed, surfactant concentration and contact time were used to describe the removal of TPH as a dependent variable in a FFANN model. 85% of data set observations were used for training the model and remaining 15% were used for model testing, approximately. The performance of the model was compared with linear regression and assessed, using Root of Mean Square Error (RMSE as goodness-of-fit measure Results: For the prediction of TPH removal efficiency, a FANN model with a three-hidden-layer structure of 4-3-1 and a learning rate of 0.01 showed the best predictive results. The RMSE and R2 for the training and testing steps of the model were obtained to be 2.596, 0.966, 10.70 and 0.78, respectively. Conclusion: For about 80% of the TPH removal efficiency can be described by the assessed regressors the developed model. Thus, focusing on the optimization of soil washing process regarding to shaking speed, contact time, surfactant concentration and pH can improve the TPH removal performance from polluted soils. The results of this study could be the basis for the application of FANN for the assessment of soil washing process and the control of petroleum hydrocarbon emission into the environments.

  18. A belgian multicenter prospective observational cohort study shows safe and efficient use of a composite mesh with incorporated oxidized regenerated cellulose in laparoscopic ventral hernia repair.

    Science.gov (United States)

    Berrevoet, F; Tollens, T; Berwouts, L; Bertrand, C; Muysoms, F; De Gols, J; Meir, E; De Backer, A

    2014-01-01

    A variety of anti-adhesive composite mesh products have become available to use inside the peritoneal cavity. However, reimbursement of these meshes by the Belgian Governemental Health Agency (RIZIV/INAMI) can only be obtained after conducting a prospective study with at least one year of clinical follow-up. This -Belgian multicentric cohort study evaluated the experience with the use of Proceed®-mesh in laparoscopic ventral hernia repair. During a 25 month period 210 adult patients underwent a laparoscopic primary or incisional hernia repair using an intra-abdominal placement of Proceed®-mesh. According to RIZIV/INAMI criteria recurrence rate after 1 year was the primary objective, while postoperative morbidity, including seroma formation, wound and mesh infections, quality of life and recurrences after 2 years were evaluated as secondary endpoints (NCT00572962). In total 97 primary ventral and 103 incisional hernias were repaired, of which 28 (13%) were recurrent. There were no conversions to open repair, no enterotomies, no mesh infections and no mortality. One year cumulative follow-up showed 10 recurrences (n = 192, 5.2%) and chronic discomfort or pain in 4.7% of the patients. Quality of life could not be analyzed due to incomplete data set. More than 5 years after introduction of this mesh to the market, this prospective multicentric study documents a favorable experience with the Proceed mesh in laparoscopic ventral hernia repair. However, it remains to be discussed whether reimbursement of these meshes in Belgium should be limited to the current strict criteria and therefore can only be obtained after at least 3-4 years of clinical data gathering and necessary follow-up. Copyright© Acta Chirurgica Belgica.

  19. Computer-aided modeling framework for efficient model development, analysis and identification

    DEFF Research Database (Denmark)

    Heitzig, Martina; Sin, Gürkan; Sales Cruz, Mauricio

    2011-01-01

    methods introduce. The key prerequisite of computer-aided product-process engineering is however the availability of models of different types, forms, and application modes. The development of the models required for the systems under investigation tends to be a challenging and time-consuming task......Model-based computer aided product-process engineering has attained increased importance in a number of industries, including pharmaceuticals, petrochemicals, fine chemicals, polymers, biotechnology, food, energy, and water. This trend is set to continue due to the substantial benefits computer-aided....... The methodology has been implemented into a computer-aided modeling framework, which combines expert skills, tools, and database connections that are required for the different steps of the model development work-flow with the goal to increase the efficiency of the modeling process. The framework has two main...

  20. Modeling light use efficiency in a subtropical mangrove forest equipped with CO2 eddy covariance

    Directory of Open Access Journals (Sweden)

    J. G. Barr

    2013-03-01

    Full Text Available Despite the importance of mangrove ecosystems in the global carbon budget, the relationships between environmental drivers and carbon dynamics in these forests remain poorly understood. This limited understanding is partly a result of the challenges associated with in situ flux studies. Tower-based CO2 eddy covariance (EC systems are installed in only a few mangrove forests worldwide, and the longest EC record from the Florida Everglades contains less than 9 years of observations. A primary goal of the present study was to develop a methodology to estimate canopy-scale photosynthetic light use efficiency in this forest. These tower-based observations represent a basis for associating CO2 fluxes with canopy light use properties, and thus provide the means for utilizing satellite-based reflectance data for larger scale investigations. We present a model for mangrove canopy light use efficiency utilizing the enhanced green vegetation index (EVI derived from the Moderate Resolution Imaging Spectroradiometer (MODIS that is capable of predicting changes in mangrove forest CO2 fluxes caused by a hurricane disturbance and changes in regional environmental conditions, including temperature and salinity. Model parameters are solved for in a Bayesian framework. The model structure requires estimates of ecosystem respiration (RE, and we present the first ever tower-based estimates of mangrove forest RE derived from nighttime CO2 fluxes. Our investigation is also the first to show the effects of salinity on mangrove forest CO2 uptake, which declines 5% per each 10 parts per thousand (ppt increase in salinity. Light use efficiency in this forest declines with increasing daily photosynthetic active radiation, which is an important departure from the assumption of constant light use efficiency typically applied in satellite-driven models. The model developed here provides a framework for estimating CO2 uptake by these forests from reflectance data and

  1. Modeling light use efficiency in a subtropical mangrove forest equipped with CO2 eddy covariance

    Science.gov (United States)

    Barr, J.G.; Engel, V.; Fuentes, J.D.; Fuller, D.O.; Kwon, H.

    2013-01-01

    Despite the importance of mangrove ecosystems in the global carbon budget, the relationships between environmental drivers and carbon dynamics in these forests remain poorly understood. This limited understanding is partly a result of the challenges associated with in situ flux studies. Tower-based CO2 eddy covariance (EC) systems are installed in only a few mangrove forests worldwide, and the longest EC record from the Florida Everglades contains less than 9 years of observations. A primary goal of the present study was to develop a methodology to estimate canopy-scale photosynthetic light use efficiency in this forest. These tower-based observations represent a basis for associating CO2 fluxes with canopy light use properties, and thus provide the means for utilizing satellite-based reflectance data for larger scale investigations. We present a model for mangrove canopy light use efficiency utilizing the enhanced green vegetation index (EVI) derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) that is capable of predicting changes in mangrove forest CO2 fluxes caused by a hurricane disturbance and changes in regional environmental conditions, including temperature and salinity. Model parameters are solved for in a Bayesian framework. The model structure requires estimates of ecosystem respiration (RE), and we present the first ever tower-based estimates of mangrove forest RE derived from nighttime CO2 fluxes. Our investigation is also the first to show the effects of salinity on mangrove forest CO2 uptake, which declines 5% per each 10 parts per thousand (ppt) increase in salinity. Light use efficiency in this forest declines with increasing daily photosynthetic active radiation, which is an important departure from the assumption of constant light use efficiency typically applied in satellite-driven models. The model developed here provides a framework for estimating CO2 uptake by these forests from reflectance data and information about

  2. Energy efficiency analysis method based on fuzzy DEA cross-model for ethylene production systems in chemical industry

    International Nuclear Information System (INIS)

    Han, Yongming; Geng, Zhiqiang; Zhu, Qunxiong; Qu, Yixin

    2015-01-01

    DEA (data envelopment analysis) has been widely used for the efficiency analysis of industrial production processes. However, the conventional DEA model cannot readily compare the pros and cons of multiple DMUs (decision-making units). The DEACM (DEA cross-model) can distinguish the pros and cons of the effective DMUs, but it is unable to take the effect of uncertain data into account. This paper proposes an efficiency analysis method based on the FDEACM (fuzzy DEA cross-model) with fuzzy data. The proposed method has better objectivity and resolving power for decision-making. First, we obtain the minimum, median and maximum values of the multi-criteria ethylene energy consumption data by data fuzzification. On the basis of the multi-criteria fuzzy data, the benchmarks of the effective production situations and the improvement directions for the ineffective ethylene plants under different production data configurations are obtained by the FDEACM. The experimental results show that the proposed method can improve ethylene production conditions and guide the efficient use of energy during the ethylene production process. - Highlights: • This paper proposes an efficiency analysis method based on FDEACM (fuzzy DEA cross-model) with data fuzzification. • The proposed method is more efficient and accurate than other methods. • We obtain an energy efficiency analysis framework and process based on FDEACM for the ethylene production industry. • The proposed method is valid and efficient in improving energy efficiency in ethylene plants
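
    For readers unfamiliar with DEA itself, the sketch below computes plain input-oriented CCR efficiency scores with scipy's linear programming routine. The fuzzy-data handling and the cross-model step of the FDEACM are not reproduced, and the plant input/output figures are made up.

        # Input-oriented CCR DEA efficiency scores via the envelopment LP,
        # a plain (non-fuzzy, non-cross-model) baseline; plant data are invented.
        import numpy as np
        from scipy.optimize import linprog

        def dea_ccr_input(X, Y):
            """X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs). Returns efficiency per DMU."""
            n, m = X.shape
            _, s = Y.shape
            scores = []
            for o in range(n):
                # Decision variables: [theta, lambda_1, ..., lambda_n]
                c = np.zeros(n + 1)
                c[0] = 1.0                                   # minimize theta
                A_ub, b_ub = [], []
                for i in range(m):                           # sum_j lambda_j x_ij <= theta * x_io
                    A_ub.append(np.concatenate(([-X[o, i]], X[:, i])))
                    b_ub.append(0.0)
                for r in range(s):                           # sum_j lambda_j y_rj >= y_ro
                    A_ub.append(np.concatenate(([0.0], -Y[:, r])))
                    b_ub.append(-Y[o, r])
                res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                              bounds=[(0, None)] * (n + 1), method="highs")
                scores.append(res.x[0])
            return np.array(scores)

        if __name__ == "__main__":
            X = np.array([[10.0, 5.0], [8.0, 6.0], [12.0, 4.0]])   # energy, feedstock (illustrative)
            Y = np.array([[100.0], [90.0], [105.0]])               # ethylene output (illustrative)
            print(dea_ccr_input(X, Y))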

  3. BO-1055, a novel DNA cross-linking agent with remarkable low myelotoxicity shows potent activity in sarcoma models.

    Science.gov (United States)

    Ambati, Srikanth R; Shieh, Jae-Hung; Pera, Benet; Lopes, Eloisi Caldas; Chaudhry, Anisha; Wong, Elissa W P; Saxena, Ashish; Su, Tsann-Long; Moore, Malcolm A S

    2016-07-12

    DNA damaging agents cause rapid shrinkage of tumors and form the basis of chemotherapy for sarcomas despite significant toxicities. Drugs having superior efficacy and wider therapeutic windows are needed to improve patient outcomes. We used cell proliferation and apoptosis assays in sarcoma cell lines and benign cells; γ-H2AX expression, comet assay, immunoblot analyses and drug combination studies in vitro and in patient derived xenograft (PDX) models. BO-1055 caused apoptosis and cell death in a concentration and time dependent manner in sarcoma cell lines. BO-1055 had potent activity (submicromolar IC50) against Ewing sarcoma and rhabdomyosarcoma, intermediate activity in DSRCT (IC50 = 2-3μM) and very weak activity in osteosarcoma (IC50 >10μM) cell lines. BO-1055 exhibited a wide therapeutic window compared to other DNA damaging drugs. BO-1055 induced more DNA double strand breaks and γH2AX expression in cancer cells compared to benign cells. BO-1055 showed inhibition of tumor growth in A673 xenografts and caused tumor regression in cyclophosphamide resistant patient-derived Ewing sarcoma xenografts and A204 xenografts. Combination of BO-1055 and irinotecan demonstrated synergism in Ewing sarcoma PDX models. Potent activity on sarcoma cells and its relative lack of toxicity presents a strong rationale for further development of BO-1055 as a therapeutic agent.

  4. Efficient Output Solution for Nonlinear Stochastic Optimal Control Problem with Model-Reality Differences

    Directory of Open Access Journals (Sweden)

    Sie Long Kek

    2015-01-01

    Full Text Available A computational approach is proposed for solving the discrete-time nonlinear stochastic optimal control problem. Our aim is to obtain the optimal output solution of the original optimal control problem by iteratively solving a simplified model-based optimal control problem. In our approach, adjusted parameters are introduced into the model so that the differences between the real system and the model can be computed. In particular, system optimization and parameter estimation are integrated interactively. In addition, the output measured from the real plant is fed back into the parameter estimation problem to establish a matching scheme. During the calculation procedure, the iterative solution is updated in order to approximate the true optimal solution of the original optimal control problem despite model-reality differences. For illustration, a wastewater treatment problem is studied and the results show the efficiency of the proposed approach.
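
    The sketch below illustrates the model-reality matching loop on a deliberately simple scalar example: a simplified model with an adjusted parameter is optimized, the "real plant" is measured at the resulting point, and the parameter is updated so that model and plant behavior agree. The plant, model and relaxation factor are all invented for illustration; the stochastic and wastewater-treatment details of the paper are not reproduced.

        # Toy sketch of the iterative model-reality matching idea: optimize a simplified
        # model, measure the real plant at the model optimum, and adjust the model so the
        # two agree. Everything here (plant, model, gains) is a made-up scalar example.
        from scipy.optimize import minimize_scalar

        def real_plant(u):
            # "Reality": unknown to the optimizer, includes unmodeled terms.
            return (u - 3.0) ** 2 + 0.5 * u

        def model_cost(u, alpha):
            # Simplified model with an adjustable (adjusted) parameter alpha.
            return (u - 2.0) ** 2 + alpha * u

        def iterative_matching(iterations=20, relaxation=0.5):
            alpha, u = 0.0, 0.0
            for _ in range(iterations):
                # 1. System optimization on the current model.
                u_new = minimize_scalar(lambda v: model_cost(v, alpha)).x
                # 2. Parameter estimation: measure the plant and update alpha so that
                #    the model gradient matches the measured plant gradient.
                h = 1e-4
                grad_plant = (real_plant(u_new + h) - real_plant(u_new - h)) / (2 * h)
                alpha_target = grad_plant - 2.0 * (u_new - 2.0)
                alpha += relaxation * (alpha_target - alpha)
                u += relaxation * (u_new - u)
            return u, alpha

        print(iterative_matching())  # settles near the true plant optimum despite model mismatch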

  5. Efficient Symmetry Reduction and the Use of State Symmetries for Symbolic Model Checking

    Directory of Open Access Journals (Sweden)

    Christian Appold

    2010-06-01

    Full Text Available One technique to reduce the state-space explosion problem in temporal logic model checking is symmetry reduction. The combination of symmetry reduction and symbolic model checking using BDDs long suffered from the prohibitively large BDD for the orbit relation. Dynamic symmetry reduction calculates representatives of equivalence classes of states dynamically and thus avoids the construction of the orbit relation. In this paper, we present a new efficient model checking algorithm based on dynamic symmetry reduction. Our experiments show that the algorithm is very fast and allows the verification of larger systems. We additionally implemented the use of state symmetries for symbolic symmetry reduction. To our knowledge, we are the first to investigate state symmetries in combination with BDD-based symbolic model checking.

  6. Demographical history and palaeodistribution modelling show range shift towards Amazon Basin for a Neotropical tree species in the LGM.

    Science.gov (United States)

    Vitorino, Luciana Cristina; Lima-Ribeiro, Matheus S; Terribile, Levi Carina; Collevatti, Rosane G

    2016-10-13

    We studied the phylogeography and demographical history of Tabebuia serratifolia (Bignoniaceae) to understand the disjunct geographical distribution of South American seasonally dry tropical forests (SDTFs). We specifically tested if the multiple and isolated patches of SDTFs are current climatic relicts of a widespread and continuously distributed dry forest during the last glacial maximum (LGM), the so called South American dry forest refugia hypothesis, using ecological niche modelling (ENM) and statistical phylogeography. We sampled 235 individuals of T. serratifolia in 17 populations in Brazil and analysed the polymorphisms at three intergenic chloroplast regions and ITS nuclear ribosomal DNA. Coalescent analyses showed a demographical expansion at the last c. 130 ka (thousand years before present). Simulations and ENM also showed that the current spatial pattern of genetic diversity is most likely due to a scenario of range expansion and range shift towards the Amazon Basin during the colder and arid climatic conditions associated with the LGM, matching the expected for the South American dry forest refugia hypothesis, although contrasting to the Pleistocene Arc hypothesis. Populations in more stable areas or with higher suitability through time showed higher genetic diversity. Postglacial range shift towards the Southeast and Atlantic coast may have led to spatial genome assortment due to leading edge colonization as the species tracks suitable environments, leading to lower genetic diversity in populations at higher distance from the distribution centroid at 21 ka. Haplotype sharing or common ancestry among populations from Caatinga in Northeast Brazil, Atlantic Forest in Southeast and Cerrado biome and ENM evince the past connection among these biomes.

  7. Large Differences in Terrestrial Vegetation Production Derived from Satellite-Based Light Use Efficiency Models

    Directory of Open Access Journals (Sweden)

    Wenwen Cai

    2014-09-01

    Full Text Available Terrestrial gross primary production (GPP) is the largest global CO2 flux and determines other ecosystem carbon cycle variables. Light use efficiency (LUE) models may have the most potential to adequately address the spatial and temporal dynamics of GPP, but recent studies have shown large model differences in GPP simulations. In this study, we investigated the GPP differences in the spatial and temporal patterns derived from seven widely used LUE models at the global scale. The result shows that the global annual GPP estimates over the period 2000–2010 varied from 95.10 to 139.71 Pg C∙yr−1 among models. The spatial and temporal variation of global GPP differs substantially between models, due to different model structures and dominant environmental drivers. In almost all models, water availability dominates the interannual variability of GPP over large vegetated areas. Solar radiation and air temperature are not the primary controlling factors for interannual variability of global GPP estimates for most models. The disagreement among the current LUE models highlights the need for further model improvement to quantify the global carbon cycle.

  8. Hydrological modeling in alpine catchments: sensing the critical parameters towards an efficient model calibration.

    Science.gov (United States)

    Achleitner, S; Rinderer, M; Kirnbauer, R

    2009-01-01

    For the Tyrolean part of the river Inn, a hybrid model for flood forecasting has been set up and is currently in its test phase. The system comprises a hydraulic 1D model of the river Inn, the hydrological rainfall-runoff model HQsim, and the snow and ice melt model SES, which model the runoff from the non-glaciated and glaciated tributary catchments, respectively. Within this paper, the focus is on the hydrological modeling of the 49 connected non-glaciated catchments realized with the software HQsim. In the course of model calibration, identifying the most sensitive parameters is important for an efficient calibration procedure. The indicators used for explaining the parameter sensitivities were chosen specifically for the purpose of flood forecasting. Finally, five model parameters were identified as sensitive for calibration when aiming for a well-calibrated model under flood conditions. In addition, two parameters were identified as sensitive in situations where the snow line plays an important role.

  9. A Model of Silicon Dynamics in Rice: An Analysis of the Investment Efficiency of Si Transporters

    Directory of Open Access Journals (Sweden)

    Gen Sakurai

    2017-07-01

    Full Text Available Silicon is the second most abundant element in soils and is beneficial for plant growth. Although the localizations and polarities of rice Si transporters have been elucidated, the mechanisms that control the expression of Si transporter genes and the functional reasons for controlling expression are not well-understood. We developed a new model that simulates the dynamics of Si in the whole plant in rice by considering Si transport in the roots, distribution at the nodes, and signaling substances controlling transporter gene expression. To investigate the functional reason for the diurnal variation of the expression level, we compared investment efficiencies (the amount of Si accumulated in the upper leaf divided by the total expression level of Si transporter genes) at different model settings. The model reproduced the gradual decrease and diurnal variation of the expression level of the transporter genes observed in previous experimental studies. The results of simulation experiments showed that a considerable reduction in the expression of Si transporter genes during the night increases investment efficiency. Our study suggests that rice has a system that maximizes the investment efficiency of Si uptake.

  10. Balancing accuracy, efficiency, and flexibility in a radiative transfer parameterization for dynamical models

    Science.gov (United States)

    Pincus, R.; Mlawer, E. J.

    2017-12-01

    Radiation is a key process in numerical models of the atmosphere. The problem is well-understood and the parameterization of radiation has seen relatively few conceptual advances in the past 15 years. It is nonetheless often the single most expensive component of all physical parameterizations despite being computed less frequently than other terms. This combination of cost and maturity suggests value in a single radiation parameterization that could be shared across models; devoting effort to a single parameterization might allow for fine-tuning for efficiency. The challenge lies in the coupling of this parameterization to many disparate representations of clouds and aerosols. This talk will describe RRTMGP, a new radiation parameterization that seeks to balance efficiency and flexibility. This balance is struck by isolating computational tasks in "kernels" that expose as much fine-grained parallelism as possible. These have simple interfaces and are interoperable across programming languages so that they might be replaced by alternative implementations in domain-specific languages. Coupling to the host model makes use of object-oriented features of Fortran 2003, minimizing branching within the kernels and the amount of data that must be transferred. We will show accuracy and efficiency results for a globally representative set of atmospheric profiles using a relatively high-resolution spectral discretization.

  11. A Model of Silicon Dynamics in Rice: An Analysis of the Investment Efficiency of Si Transporters.

    Science.gov (United States)

    Sakurai, Gen; Yamaji, Naoki; Mitani-Ueno, Namiki; Yokozawa, Masayuki; Ono, Keisuke; Ma, Jian Feng

    2017-01-01

    Silicon is the second most abundant element in soils and is beneficial for plant growth. Although the localizations and polarities of rice Si transporters have been elucidated, the mechanisms that control the expression of Si transporter genes and the functional reasons for controlling expression are not well-understood. We developed a new model that simulates the dynamics of Si in the whole plant in rice by considering Si transport in the roots, distribution at the nodes, and signaling substances controlling transporter gene expression. To investigate the functional reason for the diurnal variation of the expression level, we compared investment efficiencies (the amount of Si accumulated in the upper leaf divided by the total expression level of Si transporter genes) at different model settings. The model reproduced the gradual decrease and diurnal variation of the expression level of the transporter genes observed in previous experimental studies. The results of simulation experiments showed that a considerable reduction in the expression of Si transporter genes during the night increases investment efficiency. Our study suggests that rice has a system that maximizes the investment efficiency of Si uptake.

  12. Efficient Estimation of Non-Linear Dynamic Panel Data Models with Application to Smooth Transition Models

    DEFF Research Database (Denmark)

    Gørgens, Tue; Skeels, Christopher L.; Wurtz, Allan

    This paper explores estimation of a class of non-linear dynamic panel data models with additive unobserved individual-specific effects. The models are specified by moment restrictions. The class includes the panel data AR(p) model and panel smooth transition models. We derive an efficient set of moment restrictions for estimation and apply the results to estimation of panel smooth transition models with fixed effects, where the transition may be determined endogenously. The performance of the GMM estimator, both in terms of estimation precision and forecasting performance, is examined in a Monte Carlo experiment. We find that estimation of the parameters in the transition function can be problematic but that there may be significant benefits in terms of forecast performance.

  13. A Bioeconomic Foundation for the Nutrition-based Efficiency Wage Model

    DEFF Research Database (Denmark)

    Dalgaard, Carl-Johan Lars; Strulik, Holger

    Drawing on recent research on allometric scaling and energy consumption, the present paper develops a nutrition-based efficiency wage model from first principles. The biologically micro-founded model allows us to address empirical criticism of the original nutrition-based efficiency wage model. B...

  14. A physiological foundation for the nutrition-based efficiency wage model

    DEFF Research Database (Denmark)

    Dalgaard, Carl-Johan Lars; Strulik, Holger

    2011-01-01

    Drawing on recent research on allometric scaling and energy consumption, the present paper develops a nutrition-based efficiency wage model from first principles. The biologically micro-founded model allows us to address empirical criticism of the original nutrition-based efficiency wage model. B...

  15. Efficient modeling of chiral media using SCN-TLM method

    Directory of Open Access Journals (Sweden)

    Yaich M.I.

    2004-01-01

    Full Text Available An efficient approach is presented that allows linear bi-isotropic chiral materials to be included in time-domain transmission line matrix (TLM) calculations by employing recursive evaluation of the convolution of the electric and magnetic fields with the susceptibility functions. The new technique consists of adding both voltage and current sources to supplementary stubs of the symmetrical condensed node (SCN) of the TLM method. In this article, the details and a complete description of this approach are given. A comparison of the obtained numerical results with those of the literature confirms its validity and efficiency.

  16. Efficient Multi-Valued Bounded Model Checking for LTL over Quasi-Boolean Algebras

    Science.gov (United States)

    Andrade, Jefferson O.; Kameyama, Yukiyoshi

    Multi-valued Model Checking extends classical, two-valued model checking to multi-valued logic such as Quasi-Boolean logic. The added expressivity is useful in dealing with such concepts as incompleteness and uncertainty in target systems, while it comes with the cost of time and space. Chechik and others proposed an efficient reduction from multi-valued model checking problems to two-valued ones, but to the authors' knowledge, no study was done for multi-valued bounded model checking. In this paper, we propose a novel, efficient algorithm for multi-valued bounded model checking. A notable feature of our algorithm is that it is not based on reduction of multi-values into two-values; instead, it generates a single formula which represents multi-valuedness by a suitable encoding, and asks a standard SAT solver to check its satisfiability. Our experimental results show a significant improvement in the number of variables and clauses and also in execution time compared with the reduction-based one.

  17. Modeling Large Time Series for Efficient Approximate Query Processing

    DEFF Research Database (Denmark)

    Perera, Kasun S; Hahmann, Martin; Lehner, Wolfgang

    2015-01-01

    -wise aggregation to derive the models. These models are initially created from the original data and are kept in the database along with it. Subsequent queries are answered using the stored models rather than scanning and processing the original datasets. In order to support model query processing, we maintain...

  18. Efficiency in Linear Model with AR (1) and Correlated Error ...

    African Journals Online (AJOL)

    Nekky Umera

    Assumptions in the classical normal linear regression model include that of lack of autocorrelation of the error terms ... which the classical linear regression model is based will usually be violated. These violations, seen in widespread .... we conclude in section 5. The Model. We assume a simple linear regression model:.

  19. Modeling shows that the NS5A inhibitor daclatasvir has two modes of action and yields a shorter estimate of the hepatitis C virus half-life.

    Science.gov (United States)

    Guedj, Jeremie; Dahari, Harel; Rong, Libin; Sansone, Natasha D; Nettles, Richard E; Cotler, Scott J; Layden, Thomas J; Uprichard, Susan L; Perelson, Alan S

    2013-03-05

    The nonstructural 5A (NS5A) protein is a target for drug development against hepatitis C virus (HCV). Interestingly, the NS5A inhibitor daclatasvir (BMS-790052) caused a decrease in serum HCV RNA levels by about two orders of magnitude within 6 h of administration. However, NS5A has no known enzymatic functions, making it difficult to understand daclatasvir's mode of action (MOA) and to estimate its antiviral effectiveness. Modeling viral kinetics during therapy has provided important insights into the MOA and effectiveness of a variety of anti-HCV agents. Here, we show that understanding the effects of daclatasvir in vivo requires a multiscale model that incorporates drug effects on the HCV intracellular lifecycle, and we validated this approach with in vitro HCV infection experiments. The model predicts that daclatasvir efficiently blocks two distinct stages of the viral lifecycle, namely viral RNA synthesis and virion assembly/secretion with mean effectiveness of 99% and 99.8%, respectively, and yields a more precise estimate of the serum HCV half-life, 45 min, i.e., around four times shorter than previous estimates. Intracellular HCV RNA in HCV-infected cells treated with daclatasvir and the HCV polymerase inhibitor NM107 showed a similar pattern of decline. However, daclatasvir treatment led to an immediate and rapid decline of extracellular HCV titers compared to a delayed (6-9 h) and slower decline with NM107, confirming an effect of daclatasvir on both viral replication and assembly/secretion. The multiscale modeling approach, validated with in vitro kinetic experiments, brings a unique conceptual framework for understanding the mechanism of action of a variety of agents in development for the treatment of HCV.
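
    As a rough illustration of why a high blockage of virion production together with a short serum half-life gives such a fast first-phase decline, the sketch below integrates a standard single-scale HCV viral-dynamics model with a drug effectiveness eps on virion production. It is not the multiscale intracellular model of the paper, and all parameter values are illustrative (the clearance rate c = 22 per day corresponds to a serum half-life of roughly 45 minutes).

        # Simplified single-scale viral dynamics sketch (target cell / infected cell /
        # virus ODEs) with a drug effectiveness eps blocking virion production. This is
        # NOT the paper's multiscale intracellular model; all values are illustrative.
        import numpy as np
        from scipy.integrate import solve_ivp

        def hcv_rhs(t, y, eps, beta=1e-7, delta=0.14, p=10.0, c=22.0, s=1e4, d=0.01):
            T, I, V = y
            dT = s - d * T - beta * V * T          # target cells
            dI = beta * V * T - delta * I          # infected cells
            dV = (1.0 - eps) * p * I - c * V       # virions; production blocked by eps
            return [dT, dI, dV]

        # Illustrative pre-treatment initial condition (not an exact steady state).
        y0 = [2.5e5, 1.0e5, 5.0e4]
        sol = solve_ivp(hcv_rhs, (0.0, 2.0), y0, args=(0.998,), dense_output=True)

        t = np.linspace(0.0, 2.0, 9)               # days
        print(np.log10(sol.sol(t)[2]))             # rapid first-phase decline in log10 viral load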

  20. Efficient parameterization of cardiac action potential models using a genetic algorithm.

    Science.gov (United States)

    Cairns, Darby I; Fenton, Flavio H; Cherry, E M

    2017-09-01

    Finding appropriate values for parameters in mathematical models of cardiac cells is a challenging task. Here, we show that it is possible to obtain good parameterizations in as little as 30-40 s when as many as 27 parameters are fit simultaneously using a genetic algorithm and two flexible phenomenological models of cardiac action potentials. We demonstrate how our implementation works by considering cases of "model recovery" in which we attempt to find parameter values that match model-derived action potential data from several cycle lengths. We assess performance by evaluating the parameter values obtained, action potentials at fit and non-fit cycle lengths, and bifurcation plots for fidelity to the truth as well as consistency across different runs of the algorithm. We also fit the models to action potentials recorded experimentally using microelectrodes and analyze performance. We find that our implementation can efficiently obtain model parameterizations that are in good agreement with the dynamics exhibited by the underlying systems that are included in the fitting process. However, the parameter values obtained in good parameterizations can exhibit a significant amount of variability, raising issues of parameter identifiability and sensitivity. Along similar lines, we also find that the two models differ in terms of the ease of obtaining parameterizations that reproduce model dynamics accurately, most likely reflecting different levels of parameter identifiability for the two models.
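
    A minimal generic genetic algorithm of the kind described above is sketched below, fitting three parameters of a toy damped oscillation in a "model recovery" setting. The cardiac action potential models, the 27-parameter fits and the specific operators of the paper are not reproduced; population size, mutation scale and the target function are illustrative choices.

        # Minimal genetic algorithm for parameter fitting, shown on a toy damped
        # oscillation rather than a cardiac action potential model.
        import numpy as np

        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 10.0, 200)

        def model(params, t):
            amp, decay, freq = params
            return amp * np.exp(-decay * t) * np.cos(freq * t)

        true_params = np.array([2.0, 0.3, 1.5])
        data = model(true_params, t)                          # "recorded" trace (model recovery)

        def fitness(params):
            return -np.mean((model(params, t) - data) ** 2)   # higher is better

        def ga(pop_size=60, generations=150, bounds=(0.0, 5.0), mut_sigma=0.1):
            lo, hi = bounds
            pop = rng.uniform(lo, hi, size=(pop_size, 3))
            for _ in range(generations):
                scores = np.array([fitness(ind) for ind in pop])
                order = np.argsort(scores)[::-1]
                elite = pop[order[: pop_size // 2]]           # truncation selection
                parents_a = elite[rng.integers(len(elite), size=pop_size)]
                parents_b = elite[rng.integers(len(elite), size=pop_size)]
                mask = rng.random((pop_size, 3)) < 0.5        # uniform crossover
                children = np.where(mask, parents_a, parents_b)
                children += rng.normal(0.0, mut_sigma, children.shape)  # Gaussian mutation
                pop = np.clip(children, lo, hi)
                pop[0] = elite[0]                             # elitism
            scores = np.array([fitness(ind) for ind in pop])
            return pop[np.argmax(scores)]

        print(ga())   # should land near the true parameters [2.0, 0.3, 1.5]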

  1. Efficient parameterization of cardiac action potential models using a genetic algorithm

    Science.gov (United States)

    Cairns, Darby I.; Fenton, Flavio H.; Cherry, E. M.

    2017-09-01

    Finding appropriate values for parameters in mathematical models of cardiac cells is a challenging task. Here, we show that it is possible to obtain good parameterizations in as little as 30-40 s when as many as 27 parameters are fit simultaneously using a genetic algorithm and two flexible phenomenological models of cardiac action potentials. We demonstrate how our implementation works by considering cases of "model recovery" in which we attempt to find parameter values that match model-derived action potential data from several cycle lengths. We assess performance by evaluating the parameter values obtained, action potentials at fit and non-fit cycle lengths, and bifurcation plots for fidelity to the truth as well as consistency across different runs of the algorithm. We also fit the models to action potentials recorded experimentally using microelectrodes and analyze performance. We find that our implementation can efficiently obtain model parameterizations that are in good agreement with the dynamics exhibited by the underlying systems that are included in the fitting process. However, the parameter values obtained in good parameterizations can exhibit a significant amount of variability, raising issues of parameter identifiability and sensitivity. Along similar lines, we also find that the two models differ in terms of the ease of obtaining parameterizations that reproduce model dynamics accurately, most likely reflecting different levels of parameter identifiability for the two models.

  2. Models for Determining the Efficiency of Student Loans Policies

    Science.gov (United States)

    Dente, Bruno; Piraino, Nadia

    2011-01-01

    For both efficiency and equity reasons, student loans schemes have been introduced by several countries. Empirical work has been carried out in order to measure the effectiveness of these policies, but, with few exceptions, their results are not comparable because of their concentration on specific aspects. The present work suggests a…

  3. Modeling efficient resource allocation patterns for arable crop ...

    African Journals Online (AJOL)

    optimum plans. This should be complemented with strong financial support, farm advisory services and an adequate supply of modern inputs at fairly competitive prices, which would enhance the prospects of smallholder farmers. Keywords: efficient, resource allocation, optimization, linear programming, gross margin ...

  4. EFFICIENCY STUDY APPLICATION OF CIRCULAR MOLECULAR MODEL TO QSAR ANALYSIS

    Directory of Open Access Journals (Sweden)

    Victor E. Kuz’min

    2016-04-01

    Full Text Available A new method of generating structural descriptors for solving QSAR tasks has been developed by the authors. It is shown that the proposed approach allows generation of a set of structural parameters that describe molecules and their properties at a quite adequate level. The efficiency of the developed approach was demonstrated using ACE inhibitors.

  5. A resource allocation model to support efficient air quality ...

    African Journals Online (AJOL)

    Research into management interventions that create the required enabling environment for growth and development in South Africa is both timely and appropriate. In the research reported in this paper, the authors investigated the level of efficiency of the Air Quality Units within the three spheres of government viz.

  6. An efficient structural reanalysis model for general design changes ...

    African Journals Online (AJOL)

    Approximate structural reanalysis methods have long been pursued in the quest for efficiency so as to enhance their practical application in the assessment and verification of designs following design modifications. These have been of significant practical importance with particular emphasis on design optimization of ...

  7. Time Use Efficiency and the Five-Factor Model of Personality.

    Science.gov (United States)

    Kelly, William E.; Johnson, Judith L.

    2005-01-01

    To investigate the relationship between self-perceived time use efficiency and the five-factor model of personality, 105 university students were administered the Time Use Efficiency Scale (TUES; Kelly, 2003) and Saucier's Big-Five Mini-Markers (Saucier, 1994). The results indicated that time use efficiency was strongly, positively related to…

  8. An adaptive grid to improve the efficiency and accuracy of modelling underwater noise from shipping

    Science.gov (United States)

    Trigg, Leah; Chen, Feng; Shapiro, Georgy; Ingram, Simon; Embling, Clare

    2017-04-01

    represents a 2 to 5-fold increase in efficiency. The 5 km grid reduces the number of model executions further to 1024. However, over the first 25 km the 5 km grid produces errors of up to 13.8 dB when compared to the highly accurate but inefficient 1 km grid. The newly developed adaptive grid generates much smaller errors of less than 0.5 dB while demonstrating high computational efficiency. Our results show that the adaptive grid provides the ability to retain the accuracy of noise level predictions and improve the efficiency of the modelling process. This can help safeguard sensitive marine ecosystems from noise pollution by improving the underwater noise predictions that inform management activities. References Shapiro, G., Chen, F., Thain, R., 2014. The Effect of Ocean Fronts on Acoustic Wave Propagation in a Shallow Sea, Journal of Marine System, 139: 217 - 226. http://dx.doi.org/10.1016/j.jmarsys.2014.06.007.

  9. Efficient ECG Signal Compression Using Adaptive Heart Model

    National Research Council Canada - National Science Library

    Szilagyi, S

    2001-01-01

    This paper presents an adaptive, heart-model-based electrocardiography (ECG) compression method. After conventional pre-filtering the waves from the signal are localized and the model's parameters are determined...

  10. ESTIMATION OF EFFICIENCY OF THE COMPETITIVE COOPERATION MODEL

    Directory of Open Access Journals (Sweden)

    Natalia N. Liparteliani

    2014-01-01

    Full Text Available A competitive cooperation model of regional travel agencies and travel market participants is considered. An evaluation of the model using mathematical and statistical methods was carried out. Relationship marketing provides a travel company with certain economic advantages.

  11. Efficiently Synchronized Spread-Spectrum Audio Watermarking with Improved Psychoacoustic Model

    Directory of Open Access Journals (Sweden)

    Xing He

    2008-01-01

    Full Text Available This paper presents an audio watermarking scheme which is based on an efficiently synchronized spread-spectrum technique and a new psychoacoustic model computed using the discrete wavelet packet transform. The psychoacoustic model takes advantage of the multiresolution analysis of a wavelet transform, which closely approximates the standard critical band partition. The goal of this model is to include an accurate time-frequency analysis and to calculate both the frequency and temporal masking thresholds directly in the wavelet domain. Experimental results show that this watermarking scheme can successfully embed watermarks into digital audio without introducing audible distortion. Several common watermark attacks were applied and the results indicate that the method is very robust to those attacks.

  12. Reliability and Efficiency of Generalized Rumor Spreading Model on Complex Social Networks

    International Nuclear Information System (INIS)

    Naimi, Yaghoob; Naimi, Mohammad

    2013-01-01

    We introduce the generalized rumor spreading model and investigate some properties of this model on different complex social networks. Unlike previous rumor models, in which both the spreader-spreader (SS) and the spreader-stifler (SR) interactions have the same rate α, we define α(1) and α(2) for SS and SR interactions, respectively. The effect of variation of α(1) and α(2) on the final density of stiflers is investigated. Furthermore, the influence of the topological structure of the network on rumor spreading is studied by analyzing the behavior of several global parameters such as reliability and efficiency. Our results show that while networks with homogeneous connectivity patterns reach a higher reliability, scale-free topologies need less time to reach a steady state with respect to the rumor. (interdisciplinary physics and related areas of science and technology)
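
    A simple Monte Carlo realization of the generalized model is sketched below: ignorant/spreader/stifler states on a network, a spreading rate lambda, and separate stifling rates alpha1 (spreader-spreader contacts) and alpha2 (spreader-stifler contacts). The update rule, rates and network sizes are illustrative; the analytical treatment of the paper is not reproduced.

        # Monte Carlo sketch of a generalized ignorant/spreader/stifler rumor model on
        # a network, with distinct stifling rates alpha1 and alpha2.
        import random
        import networkx as nx

        def simulate(G, lam=1.0, alpha1=0.5, alpha2=0.5, steps=50, seed=0):
            random.seed(seed)
            state = {n: "I" for n in G}                 # I = ignorant, S = spreader, R = stifler
            state[next(iter(G))] = "S"
            for _ in range(steps):
                spreaders = [n for n, s in state.items() if s == "S"]
                if not spreaders:
                    break
                for n in spreaders:
                    nbr = random.choice(list(G[n]))
                    if state[nbr] == "I" and random.random() < lam:
                        state[nbr] = "S"                # rumor transmission
                    elif state[nbr] == "S" and random.random() < alpha1:
                        state[n] = "R"                  # spreader-spreader stifling
                    elif state[nbr] == "R" and random.random() < alpha2:
                        state[n] = "R"                  # spreader-stifler stifling
            return sum(1 for s in state.values() if s == "R") / len(G)

        homogeneous = nx.watts_strogatz_graph(2000, 6, 0.1, seed=1)   # homogeneous degrees
        scale_free = nx.barabasi_albert_graph(2000, 3, seed=1)        # heterogeneous degrees
        print(simulate(homogeneous), simulate(scale_free))            # final stifler densities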

  13. Reliability and Efficiency of Generalized Rumor Spreading Model on Complex Social Networks

    Science.gov (United States)

    Yaghoob, Naimi; Mohammad, Naimi

    2013-07-01

    We introduce the generalized rumor spreading model and investigate some properties of this model on different complex social networks. Unlike previous rumor models, in which both the spreader-spreader (SS) and the spreader-stifler (SR) interactions have the same rate α, we define α(1) and α(2) for SS and SR interactions, respectively. The effect of variation of α(1) and α(2) on the final density of stiflers is investigated. Furthermore, the influence of the topological structure of the network on rumor spreading is studied by analyzing the behavior of several global parameters such as reliability and efficiency. Our results show that while networks with homogeneous connectivity patterns reach a higher reliability, scale-free topologies need less time to reach a steady state with respect to the rumor.

  14. Facial Feature Tracking Using Efficient Particle Filter and Active Appearance Model

    Directory of Open Access Journals (Sweden)

    Durkhyun Cho

    2014-09-01

    Full Text Available For natural human-robot interaction, the location and shape of facial features in a real environment must be identified. One robust method to track facial features is by using a particle filter and the active appearance model. However, the processing speed of this method is too slow for utilization in practice. In order to improve the efficiency of the method, we propose two ideas: (1) changing the number of particles situationally, and (2) switching the prediction model depending on the importance of each particle using a combination strategy and a clustering strategy. Experimental results show that the proposed method is about four times faster than the conventional method using a particle filter and the active appearance model, without any loss of performance.

  15. A new formulation of cannabidiol in cream shows therapeutic effects in a mouse model of experimental autoimmune encephalomyelitis.

    Science.gov (United States)

    Giacoppo, Sabrina; Galuppo, Maria; Pollastro, Federica; Grassi, Gianpaolo; Bramanti, Placido; Mazzon, Emanuela

    2015-10-21

    The present study was designed to investigate the efficacy of a new formulation of purified cannabidiol (CBD) (>98%) alone, the main non-psychotropic cannabinoid of Cannabis sativa, as a topical treatment in an experimental model of autoimmune encephalomyelitis (EAE), the most commonly used model for multiple sclerosis (MS). In particular, we evaluated whether administration of a topical 1% CBD-cream, given at the time of symptomatic disease onset, could affect EAE progression and whether this treatment could also reverse hind limb paralysis, qualifying topical CBD for the symptomatic treatment of MS. In order to obtain a preparation of 1% CBD-cream, pure CBD was solubilized in propylene glycol and basic dense cream O/A. EAE was induced by immunization with myelin oligodendroglial glycoprotein peptide (MOG35-55) in C57BL/6 mice. After EAE onset, mice were allocated into several experimental groups (Naïve, EAE, EAE-1% CBD-cream, EAE-vehicle cream, CTRL-1% CBD-cream, CTRL-vehicle cream). Mice were observed daily for signs of EAE and weight loss. At the sacrifice of the animals, which occurred on the 28th day after EAE induction, spinal cord and spleen tissues were collected in order to perform histological evaluation, immunohistochemistry and western blotting analysis. The results surprisingly show that daily treatment with topical 1% CBD-cream may exert neuroprotective effects against EAE, diminishing the clinical disease score (mean of 5.0 in EAE mice vs 1.5 in EAE + CBD-cream), promoting recovery from hind limb paralysis and ameliorating the histological hallmarks of the disease (lymphocytic infiltration and demyelination) in spinal cord tissues. Also, 1% CBD-cream is able to counteract EAE-induced damage, reducing the release of CD4 and CD8α T cells (spleen tissue localization quantified at about 10.69% and 35.96% positive staining, respectively, in EAE mice) and the expression of the main pro-inflammatory cytokines as well as several other

  16. A computationally efficient electrophysiological model of human ventricular cells

    NARCIS (Netherlands)

    Bernus, O.; Wilders, R.; Zemlin, C. W.; Verschelde, H.; Panfilov, A. V.

    2002-01-01

    Recent experimental and theoretical results have stressed the importance of modeling studies of reentrant arrhythmias in cardiac tissue and at the whole heart level. We introduce a six-variable model obtained by a reformulation of the Priebe-Beuckelmann model of a single human ventricular cell. The

  17. Methodology for Measurement the Energy Efficiency Involving Solar Heating Systems Using Stochastic Modelling

    Directory of Open Access Journals (Sweden)

    Bruno G. Menita

    2017-01-01

    Full Text Available The purpose of the present study is to evaluate gains through a measurement and verification methodology adapted from the International Performance Measurement and Verification Protocol, based on case studies involving Energy Efficiency Projects in Goias State, Brazil. This paper also presents stochastic modelling for the generation of future scenarios of electricity savings resulting from these Energy Efficiency Projects. The model is developed using the Geometric Brownian Motion stochastic process with mean reversion, associated with the Monte Carlo simulation technique. Results show that the electricity saved from the replacement of electric showers by solar water heating systems in homes of low-income families has great potential to bring financial benefits to such families, and that the reduction in peak demand obtained from this Energy Efficiency Action is advantageous to the Brazilian electrical system. The results also contemplate future scenarios of electricity savings and a sensitivity analysis to verify how the values of some parameters influence the results, since no historical data are available for obtaining these values.
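
    A sketch of the scenario-generation step is given below: the logarithm of the monthly electricity savings follows a mean-reverting (Ornstein-Uhlenbeck) process, simulated with Monte Carlo and summarized by percentiles. The long-run level, reversion speed, volatility and horizon are invented for illustration and are not the fitted values of the study.

        # Monte Carlo scenario generation with a geometric mean-reverting process
        # (the log of the savings follows an Ornstein-Uhlenbeck process).
        import numpy as np

        def simulate_scenarios(x0, long_run, kappa, sigma, months=60, n_paths=10000, seed=42):
            rng = np.random.default_rng(seed)
            dt = 1.0 / 12.0
            log_x = np.full(n_paths, np.log(x0))
            log_mean = np.log(long_run)
            paths = np.empty((months + 1, n_paths))
            paths[0] = x0
            for k in range(1, months + 1):
                dw = rng.normal(0.0, np.sqrt(dt), n_paths)
                # Euler step of the mean-reverting log-process.
                log_x += kappa * (log_mean - log_x) * dt + sigma * dw
                paths[k] = np.exp(log_x)
            return paths

        # Example: monthly electricity savings (MWh) from the solar water heating retrofits.
        paths = simulate_scenarios(x0=120.0, long_run=150.0, kappa=1.2, sigma=0.25)
        p5, p50, p95 = np.percentile(paths[-1], [5, 50, 95])
        print(f"5-year savings scenarios: P5={p5:.0f}, median={p50:.0f}, P95={p95:.0f} MWh")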

  18. Robust and efficient solution procedures for association models

    DEFF Research Database (Denmark)

    Michelsen, Michael Locht

    2006-01-01

    Equations of state that incorporate the Wertheim association expression are more difficult to apply than conventional pressure explicit equations, because the association term is implicit and requires solution for an internal set of composition variables. In this work, we analyze the convergence behavior of different solution methods and demonstrate how a simple and efficient, yet globally convergent, procedure for the solution of the equation of state can be formulated.

  19. An efficient method for modeling kinetic behavior of channel proteins in cardiomyocytes.

    Science.gov (United States)

    Wang, Chong; Beyerlein, Peter; Pospisil, Heike; Krause, Antje; Nugent, Chris; Dubitzky, Werner

    2012-01-01

    Characterization of the kinetic and conformational properties of channel proteins is a crucial element in the integrative study of congenital cardiac diseases. The proteins of the ion channels of cardiomyocytes represent an important family of biological components determining the physiology of the heart. Some computational studies aiming to understand the mechanisms of the ion channels of cardiomyocytes have concentrated on Markovian stochastic approaches. Mathematically, these approaches employ Chapman-Kolmogorov equations coupled with partial differential equations. As the scale and complexity of such subcellular and cellular models increases, the balance between efficiency and accuracy of algorithms becomes critical. We have developed a novel two-stage splitting algorithm to address efficiency and accuracy issues arising in such modeling and simulation scenarios. Numerical experiments were performed based on the incorporation of our newly developed conformational kinetic model for the rapid delayed rectifier potassium channel into the dynamic models of human ventricular myocytes. Our results show that the new algorithm significantly outperforms commonly adopted adaptive Runge-Kutta methods. Furthermore, our parallel simulations with coupled algorithms for multicellular cardiac tissue demonstrate a high linearity in the speedup of large-scale cardiac simulations.

  20. Analysis of financing efficiency of big data industry in Guizhou province based on DEA models

    Science.gov (United States)

    Li, Chenggang; Pan, Kang; Luo, Cong

    2018-03-01

    Taking 20 listed enterprises of the big data industry in Guizhou province as samples, this paper uses the DEA method to evaluate the financing efficiency of the big data industry in Guizhou province. The results show that the pure technical efficiency of big data enterprises in Guizhou province is high, with a mean value of 0.925. The mean value of scale efficiency is 0.749 and the mean value of comprehensive efficiency is 0.693, so the comprehensive financing efficiency is low. Based on these results, this paper puts forward some policy recommendations to improve the financing efficiency of the big data industry in Guizhou.

  1. Improved quantum efficiency models of CZTSe: GE nanolayer solar cells with a linear electric field.

    Science.gov (United States)

    Lee, Sanghyun; Price, Kent J; Saucedo, Edgardo; Giraldo, Sergio

    2018-02-08

    We fabricated and characterized CZTSe:Ge nanolayer solar cells and modeled the quantum efficiency of the Ge-doped CZTSe devices. The linear electric field model is developed with the incomplete gamma function of the quantum efficiency and compared to the empirical data at forward bias conditions. This model is characterized with a consistent set of parameters from a series of measurements and the literature. Using the analytical modelling method, the carrier collection profile in the absorber is calculated and closely fitted by the developed mathematical expressions to identify the carrier dynamics during the quantum efficiency measurement of the device. The analytical calculation is compared with the measured quantum efficiency data at various bias conditions.

  2. Estimation and Analysis of Spatiotemporal Dynamics of the Net Primary Productivity Integrating Efficiency Model with Process Model in Karst Area

    Directory of Open Access Journals (Sweden)

    Rui Zhang

    2017-05-01

    Full Text Available Estimates of regional net primary productivity (NPP) are useful in modeling regional and global carbon cycles, especially in karst areas. This work developed a new method to study NPP characteristics and changes in Chongqing, a typical karst area. To estimate NPP accurately, the model which integrated an ecosystem process model (CEVSA) with a light use efficiency model (GLOPEM), called GLOPEM-CEVSA, was applied. The fraction of photosynthetically active radiation (fPAR) was derived from remote sensing data inversion based on moderate resolution imaging spectroradiometer atmospheric and land products. Validation analyses showed that the PAR and NPP values, which were simulated by the model, matched the observed data well. The values of other relevant NPP models, as well as the MOD17A3 NPP products (NPP MOD17), were compared. In terms of spatial distribution, NPP decreased from northeast to southwest in the Chongqing region. The annual average NPP in the study area was approximately 534 gC/m2a (Std. = 175.53) from 2001 to 2011, with obvious seasonal variation characteristics. The NPP from April to October accounted for 80.1% of the annual NPP, while that from June to August accounted for 43.2%. NPP changed with the fraction of absorbed PAR, and NPP was also significantly correlated to precipitation and temperature at monthly temporal scales, and showed stronger sensitivity to interannual variation in temperature.

  3. Efficient Modeling and Migration in Anisotropic Media Based on Prestack Exploding Reflector Model and Effective Anisotropy

    KAUST Repository

    Wang, Hui

    2014-05-01

    This thesis addresses the efficiency improvement of seismic wave modeling and migration in anisotropic media. This improvement becomes crucial in practice as the process of imaging complex geological structures of the Earth's subsurface requires modeling and migration as building blocks. The challenge comes from two aspects. First, the underlying governing equations for seismic wave propagation in anisotropic media are far more complicated than those in isotropic media and demand higher computational costs to solve. Second, the usage of whole prestack seismic data still remains a burden considering its storage volume and the existing wave equation solvers. In this thesis, I develop two approaches to tackle the challenges. In the first part, I adopt the concept of the prestack exploding reflector model to handle the whole prestack data and bridge the data space directly to the image space in a single kernel. I formulate the extrapolation operator in a two-way fashion to remove the restriction on the directions in which waves propagate. I also develop a generic method for phase velocity evaluation within anisotropic media used in this extrapolation kernel. The proposed method provides a tool for generating prestack images without wavefield cross correlations. In the second part of this thesis, I approximate the anisotropic models using effective isotropic models. The wave phenomena in these effective models match those in anisotropic models both kinematically and dynamically. I obtain the effective models by equating the eikonal equations and transport equations of anisotropic and isotropic models, that is, in the high-frequency asymptotic approximation sense. The wavefield extrapolation costs are thus reduced by using isotropic wave equation solvers while the anisotropic effects are maintained through this approach. I benchmark the two proposed methods using synthetic datasets. Tests on the anisotropic Marmousi model and the anisotropic BP2007 model demonstrate the applicability of my

  4. The Spatial Mechanism and Drive Mechanism Study of Chinese Urban Efficiency - Based on the Spatial Panel Data Model

    Directory of Open Access Journals (Sweden)

    Yuan Xiaoling

    2016-08-01

    Full Text Available In this article, the urban efficiency factors of 285 Chinese prefecture-level cities in the period from 2003 to 2012 are analyzed using a spatial econometric model. The results show that the development of urban efficiency is positively spatially correlated across cities. We conclude that industrial structure, openness and infrastructure can promote the development of urban efficiency, while urban agglomeration scale, government control, fixed asset investment and other factors can inhibit the development of urban efficiency to a certain degree. Therefore, we conclude that, in the new urbanization construction process, cities need to achieve cross-regional coordination from the perspective of urban agglomeration and metropolitan development. The efficiency of cities, together with the scientific and rational flow of factors, should also be improved.

  5. A New Efficient Hybrid Intelligent Model for Biodegradation Process of DMP with Fuzzy Wavelet Neural Networks

    Science.gov (United States)

    Huang, Mingzhi; Zhang, Tao; Ruan, Jujun; Chen, Xiaohong

    2017-01-01

    A new efficient hybrid intelligent approach based on a fuzzy wavelet neural network (FWNN) was proposed for effectively modeling and simulating the biodegradation process of dimethyl phthalate (DMP) in an anaerobic/anoxic/oxic (AAO) wastewater treatment process. With the self-learning and memory abilities of neural networks (NN), the uncertainty-handling capacity of fuzzy logic (FL), the local-detail analysis of the wavelet transform (WT) and the global search of the genetic algorithm (GA), the proposed hybrid intelligent model can extract the dynamic behavior and complex interrelationships from various water quality variables. To find the optimal values of the parameters of the proposed FWNN, a hybrid learning algorithm integrating an improved genetic optimization and a gradient descent algorithm is employed. The results show that, compared with the NN model (optimized by GA) and the kinetic model, the proposed FWNN model has quicker convergence, higher prediction performance, and smaller RMSE (0.080), MSE (0.0064) and MAPE (1.8158) values, with a higher R2 (0.9851), which illustrates that the FWNN model simulates effluent DMP more accurately than the mechanistic model.

  6. Efficient stochastic approaches for sensitivity studies of an Eulerian large-scale air pollution model

    Science.gov (United States)

    Dimov, I.; Georgieva, R.; Todorov, V.; Ostromsky, Tz.

    2017-10-01

    Reliability of large-scale mathematical models is an important issue when such models are used to support decision makers. Sensitivity analysis of model outputs to variation or natural uncertainties of model inputs is crucial for improving the reliability of mathematical models. A comprehensive experimental study of Monte Carlo algorithms based on Sobol sequences for multidimensional numerical integration has been done. A comparison with Latin hypercube sampling and a particular quasi-Monte Carlo lattice rule based on generalized Fibonacci numbers has been presented. The algorithms have been successfully applied to compute global Sobol sensitivity measures corresponding to the influence of several input parameters (six chemical reaction rates and four different groups of pollutants) on the concentrations of important air pollutants. The concentration values have been generated by the Unified Danish Eulerian Model. The sensitivity study has been done for the areas of several European cities with different geographical locations. The numerical tests show that the stochastic algorithms under consideration are efficient for multidimensional integration, and especially for computing sensitivity indices that are small in value. This is crucial, since even small indices may need to be estimated accurately in order to achieve a more accurate distribution of the inputs' influence and a more reliable interpretation of the mathematical model results.
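
    The sketch below shows the basic quasi-Monte Carlo machinery on the Ishigami test function rather than on the air pollution model: Sobol' sequences generate the sample matrices, and a Saltelli-type pick-freeze estimator yields first-order sensitivity indices. The test function and sample sizes are illustrative stand-ins for the Unified Danish Eulerian Model runs.

        # First-order Sobol sensitivity indices estimated with Sobol' quasi-random
        # sequences (Saltelli-style pick-freeze estimator) on the Ishigami function.
        import numpy as np
        from scipy.stats import qmc

        def ishigami(x, a=7.0, b=0.1):
            return np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2 + b * x[:, 2] ** 4 * np.sin(x[:, 0])

        d, m = 3, 13                                   # dimension, 2**m samples per matrix
        sampler = qmc.Sobol(d=2 * d, scramble=True, seed=7)
        u = sampler.random_base2(m)                    # (N, 2d) points in [0, 1)
        A = -np.pi + 2 * np.pi * u[:, :d]              # scale to [-pi, pi]^3
        B = -np.pi + 2 * np.pi * u[:, d:]

        fA, fB = ishigami(A), ishigami(B)
        var = np.var(np.concatenate([fA, fB]))

        first_order = []
        for i in range(d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]                        # replace column i with the B sample
            fABi = ishigami(ABi)
            first_order.append(np.mean(fB * (fABi - fA)) / var)   # Saltelli (2010) estimator

        print(np.round(first_order, 3))                # analytic values are roughly [0.31, 0.44, 0.00]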

  7. Tests of control in the Audit Risk Model : Effective? Efficient?

    NARCIS (Netherlands)

    Blokdijk, J.H. (Hans)

    2004-01-01

    Lately, the Audit Risk Model has been subject to criticism. To gauge its validity, this paper confronts the Audit Risk Model as incorporated in International Standard on Auditing No. 400, with the real life situations faced by auditors in auditing financial statements. This confrontation exposes

  8. Sieve Tray Efficiency using CFD Modeling and Simulation | Gesit ...

    African Journals Online (AJOL)

    In this work, computational fluid dynamics (CFD) models are developed and used to predict sieve tray hydrodynamics and mass transfer. The models consider the three-dimensional two-phase flow of vapor (or gas) and liquid in which each phase is treated as an interpenetrating continuum having separate transport ...

  9. Industrial Sector Energy Efficiency Modeling (ISEEM) Framework Documentation

    Energy Technology Data Exchange (ETDEWEB)

    Karali, Nihan [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Xu, Tengfang [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sathaye, Jayant [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2012-12-12

    The goal of this study is to develop a new bottom-up industry sector energy-modeling framework with an agenda of addressing least-cost regional and global carbon reduction strategies, improving on the capabilities and limitations of existing models by allowing trading across regions and countries as an alternative.

  10. Mathematical modelling as basis for efficient enterprise management

    Directory of Open Access Journals (Sweden)

    Kalmykova Svetlana

    2017-01-01

    Full Text Available The choice of the most effective HR-management style at an enterprise is based on modeling various socio-economic situations. The article describes the formalization of the managing processes aimed at the interaction between the allocated management subsystems. Mathematical modelling tools are used to determine the time spent on recruiting personnel for key positions in the management hierarchy.

  11. System convergence in transport models: algorithms efficiency and output uncertainty

    DEFF Research Database (Denmark)

    Rich, Jeppe; Nielsen, Otto Anker

    2015-01-01

    of this paper is to analyse convergence performance for the external loop and to illustrate how an improper linkage between the converging parts can lead to substantial uncertainty in the final output. Although this loop is crucial for the performance of large-scale transport models it has not been analysed......-scale in the Danish National Transport Model (DNTM). It is revealed that system convergence requires that either demand or supply is without random noise but not both. In that case, if MSA is applied to the model output with random noise, it will converge effectively as the random effects are gradually dampened...... in the MSA process. In connection to DNTM it is shown that MSA works well when applied to travel-time averaging, whereas trip averaging is generally infected by random noise resulting from the assignment model. The latter implies that the minimum uncertainty in the final model output is dictated...
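
    A toy sketch of the method of successive averages (MSA) referred to above is given below: averaging a noisy fixed-point iteration with step 1/(k+1) gradually dampens the random component, which is the mechanism exploited for travel-time averaging. The response function and noise level are invented; the DNTM implementation is not reproduced.

        # Toy sketch of the method of successive averages (MSA) on a noisy fixed-point
        # problem x = f(x) + noise; the averaging step dampens the random component.
        import numpy as np

        rng = np.random.default_rng(0)

        def noisy_response(x, noise_sd=0.2):
            # "Supply" response to a "demand" level x (e.g. congested travel time),
            # perturbed by simulation noise (assumed additive Gaussian).
            return 10.0 / (1.0 + x) + rng.normal(0.0, noise_sd)

        def msa(x0=1.0, iterations=200):
            x = x0
            for k in range(1, iterations + 1):
                y = noisy_response(x)
                x = x + (y - x) / (k + 1)     # MSA averaging step
            return x

        def naive(x0=1.0, iterations=200):
            x = x0
            for _ in range(iterations):
                x = noisy_response(x)         # direct substitution keeps the noise
            return x

        print("MSA:", msa(), " naive:", naive())  # MSA settles near the fixed point ~2.70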

  12. Degradation of organic pollutants by Vacuum-Ultraviolet (VUV): Kinetic model and efficiency.

    Science.gov (United States)

    Xie, Pengchao; Yue, Siyang; Ding, Jiaqi; Wan, Ying; Li, Xuchun; Ma, Jun; Wang, Zongping

    2018-04-15

    Vacuum-Ultraviolet (VUV), an efficient and green method to produce hydroxyl radicals (•OH), is effective in degrading numerous organic contaminants in aqueous solution. Here, we propose an effective and simple kinetic model to describe the degradation of organic pollutants in the VUV system, treating the •OH-scavenging effects of the formed organic intermediates as those of co-existing organic matter as a whole. Using benzoic acid (BA) as a •OH probe, •OH was found to be vital for pollutant degradation in the VUV system, and the developed model successfully predicted its degradation kinetics under different conditions. The effects of typical influencing factors such as BA concentration and UV intensity were investigated quantitatively with the model. Temperature was found to be an important influencing factor in the VUV system, and the quantum yield of •OH showed a positive linear dependence on temperature. Impacts of humic acid (HA), alkalinity, chloride, and water matrices (realistic waters) on the oxidation efficiency were also examined. BA degradation was significantly inhibited by HA due to its scavenging of •OH, but was influenced much less by alkalinity and chloride; high oxidation efficiency was still obtained in the realistic water. The degradation kinetics of three other typical micropollutants, including bisphenol A (BPA), nitrobenzene (NB) and dimethyl phthalate (DMP), and of the mixture of co-existing BA, BPA and DMP were further studied, and the developed model predicted the experimental data well, especially in realistic water. It is expected that this study will provide an effective approach to predict the degradation of organic micropollutants by the promising VUV system, and broaden the application of the VUV system in water treatment. Copyright © 2018 Elsevier Ltd. All rights reserved.
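
    The competition-kinetics idea behind such models can be sketched as follows: a steady-state •OH concentration follows from the balance between photochemical production and scavenging (with intermediates lumped into a bulk organic-matter term), and the target pollutant then decays pseudo-first-order. The rate constants below are literature-range values used only for illustration, and the production term is an assumption, not a fitted parameter of the paper.

        # Steady-state competition-kinetics sketch for •OH-driven degradation under VUV.
        # All rate constants and the •OH production rate are illustrative values.
        import numpy as np

        K_OH_TARGET = 5.9e9      # M^-1 s^-1, e.g. benzoic acid + •OH (literature-range value)
        K_OH_DOC = 2.5e4         # (mg C/L)^-1 s^-1, lumped organic-matter scavenging (assumed)
        K_OH_HCO3 = 8.5e6        # M^-1 s^-1, bicarbonate scavenging (literature-range value)

        def oh_steady_state(r_oh, c_target, doc_mgC_L, hco3_M):
            scavenging = K_OH_TARGET * c_target + K_OH_DOC * doc_mgC_L + K_OH_HCO3 * hco3_M
            return r_oh / scavenging           # M

        def degradation_curve(c0, r_oh, doc_mgC_L, hco3_M, t_seconds):
            c = np.empty_like(t_seconds, dtype=float)
            c[0] = c0
            for i in range(1, len(t_seconds)):
                dt = t_seconds[i] - t_seconds[i - 1]
                oh = oh_steady_state(r_oh, c[i - 1], doc_mgC_L, hco3_M)
                c[i] = c[i - 1] * np.exp(-K_OH_TARGET * oh * dt)   # pseudo-first-order step
            return c

        t = np.linspace(0.0, 1800.0, 61)                      # 30 minutes
        c = degradation_curve(c0=5e-6, r_oh=1e-9, doc_mgC_L=2.0, hco3_M=1e-3, t_seconds=t)
        print(f"removal after 30 min: {100 * (1 - c[-1] / c[0]):.1f}%")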

  13. Efficient patient modeling for visuo-haptic VR simulation using a generic patient atlas.

    Science.gov (United States)

    Mastmeyer, Andre; Fortmeier, Dirk; Handels, Heinz

    2016-08-01

    This work presents a new time-saving virtual patient modeling system, shown by way of example for an existing visuo-haptic training and planning virtual reality (VR) system for percutaneous transhepatic cholangio-drainage (PTCD). Our modeling process is based on a generic patient atlas to start with. It is defined by organ-specific optimized models, method modules and parameters, i.e. mainly individual segmentation masks, transfer functions to fill the gaps between the masks, and intensity image data. In this contribution, we show how generic patient atlases can be generalized to new patient data. The methodology consists of patient-specific, locally adaptive transfer functions and dedicated modeling methods such as multi-atlas segmentation, vessel filtering and spline modeling. Our full image volume segmentation algorithm yields median DICE coefficients of 0.98, 0.93, 0.82, 0.74, 0.51 and 0.48 for soft tissue, liver, bone, skin, blood and bile vessels across ten test patients and three selected reference patients. Compared to standard slice-wise manual contouring, the time saving is remarkable. Our segmentation process demonstrates efficiency and robustness for upper abdominal puncture simulation systems. This marks a significant step toward establishing patient-specific training and hands-on planning systems in a clinical environment. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  14. Programming strategy for efficient modeling of dynamics in a population of heterogeneous cells.

    Science.gov (United States)

    Hald, Bjørn Olav; Garkier Hendriksen, Morten; Sørensen, Preben Graae

    2013-05-15

    Heterogeneity is a ubiquitous property of biological systems. Even in a genetically identical population of a single cell type, cell-to-cell differences are observed. Although the functional behavior of a given population is generally robust, the consequences of heterogeneity are fairly unpredictable. In heterogeneous populations, synchronization of events becomes a cardinal problem, particularly for phase coherence in oscillating systems. The present article presents a novel strategy for the construction of large-scale simulation programs of heterogeneous biological entities. The strategy is designed to handle heterogeneity and computational cost simultaneously while remaining tractable, primarily by writing a generator of the 'model to be simulated'. We apply the strategy to model glycolytic oscillations among thousands of yeast cells coupled through the extracellular medium. The usefulness is illustrated through (i) benchmarking, showing an almost linear relationship between model size and run time, and (ii) analysis of the resulting simulations, showing that, contrary to the experimental situation, synchronous oscillations are surprisingly hard to achieve, underpinning the need for tools to study heterogeneity. Thus, we present an efficient strategy for modeling biological heterogeneity, which is neglected by ordinary mean-field models. This tool is well poised to facilitate the elucidation of the physiologically vital problem of synchronization. The complete Python code is available as Supplementary Information. Contact: bjornhald@gmail.com or pgs@kiku.dk. Supplementary data are available at Bioinformatics online.
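
    A minimal sketch of the "generate, then simulate" idea is shown below: per-cell parameters are drawn to encode heterogeneity, and all cells are coupled through a single shared extracellular variable. The toy kinetics and coupling constants are placeholders, not the published yeast glycolysis model (whose full Python code accompanies the paper).

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    rng = np.random.default_rng(0)
    n_cells = 1000
    k_cell = rng.normal(1.0, 0.1, n_cells)   # heterogeneous intracellular rate constants (assumed spread)
    k_ex = 5.0                               # exchange rate with the medium (assumed)

    def rhs(t, y):
        x = y[:n_cells]                      # one intracellular species per cell
        m = y[-1]                            # shared extracellular species
        dx = -k_cell * x + k_ex * (m - x)    # toy reaction plus exchange with the medium
        dm = -k_ex * np.mean(m - x)          # medium balances the average net exchange
        return np.concatenate([dx, [dm]])

    y0 = np.concatenate([rng.uniform(0.5, 1.5, n_cells), [1.0]])
    sol = solve_ivp(rhs, (0.0, 10.0), y0, max_step=0.05)
    print("final medium concentration:", round(float(sol.y[-1, -1]), 3))
    ```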

  15. Efficient Bayesian parameter estimation with implicit sampling and surrogate modeling for a vadose zone hydrological problem

    Science.gov (United States)

    Liu, Y.; Pau, G. S. H.; Finsterle, S.

    2015-12-01

    Parameter inversion involves inferring model parameter values from sparse observations of some observables. To infer the posterior probability distributions of the parameters, Markov chain Monte Carlo (MCMC) methods are typically used. However, the large number of forward simulations needed and limited computational resources constrain the complexity of the hydrological model we can use in these methods. In view of this, we studied the implicit sampling (IS) method, an efficient importance sampling technique that generates samples in the high-probability region of the posterior distribution and thus reduces the number of forward simulations that need to be run. For a pilot-point inversion of a heterogeneous permeability field based on a synthetic ponded infiltration experiment simulated with TOUGH2 (a subsurface modeling code), we showed that IS with a linear map provides an accurate Bayesian description of the parameterized permeability field at the pilot points with only approximately 500 forward simulations. We further studied the use of surrogate models to improve the computational efficiency of parameter inversion. We implemented two reduced-order models (ROMs) for the TOUGH2 forward model. One is based on polynomial chaos expansion (PCE), whose coefficients are obtained using the sparse Bayesian learning technique to mitigate the "curse of dimensionality" of the PCE terms. The other is Gaussian process regression (GPR), for which different covariance, likelihood and inference models are considered. Preliminary results indicate that ROMs constructed over the prior parameter space perform poorly, so it is impractical to replace the hydrological model by such a ROM directly in an MCMC method. However, the IS method can work with a ROM constructed for parameters in the close vicinity of the maximum a posteriori probability (MAP) estimate. We will discuss the accuracy and computational efficiency of using ROMs in the implicit sampling procedure.
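
    To illustrate the surrogate step, the sketch below fits a Gaussian process regression (GPR) surrogate to a handful of forward-model runs sampled near an assumed MAP estimate and then predicts cheaply elsewhere; the two-parameter analytic "forward model" merely stands in for a TOUGH2 run and is purely illustrative.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    def forward_model(theta):
        # cheap stand-in for an expensive simulation of two hypothetical parameters
        return np.sin(theta[0]) + 0.5 * theta[1] ** 2

    rng = np.random.default_rng(1)
    theta_map = np.array([0.3, -0.2])                    # assumed MAP estimate
    X_train = theta_map + 0.2 * rng.standard_normal((40, 2))
    y_train = np.array([forward_model(t) for t in X_train])

    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.5),
                                  normalize_y=True)
    gp.fit(X_train, y_train)

    X_test = theta_map + 0.2 * rng.standard_normal((5, 2))
    mean, std = gp.predict(X_test, return_std=True)      # surrogate prediction + uncertainty
    print(np.round(mean, 3), np.round(std, 3))
    ```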

  16. Functional Testing Protocols for Commercial Building Efficiency Baseline Modeling Software

    Energy Technology Data Exchange (ETDEWEB)

    Jump, David; Price, Phillip N.; Granderson, Jessica; Sohn, Michael

    2013-09-06

    This document describes procedures for testing and validating proprietary baseline energy modeling software accuracy in predicting energy use over the period of interest, such as a month or a year. The procedures are designed according to the methodology used for public domain baselining software in another LBNL report that was (like the present report) prepared for Pacific Gas and Electric Company: "Commercial Building Energy Baseline Modeling Software: Performance Metrics and Method Testing with Open Source Models and Implications for Proprietary Software Testing Protocols" (referred to here as the "Model Analysis Report"). The test procedure focuses on the quality of the software's predictions rather than on the specific algorithms used to predict energy use. In this way the software vendor is not required to divulge or share proprietary information about how their software works, while enabling stakeholders to assess its performance.

  17. Efficient Delivery of Scalable Video Using a Streaming Class Model

    Directory of Open Access Journals (Sweden)

    Jason J. Quinlan

    2018-03-01

    Full Text Available When we couple the rise in video streaming with the growing number of portable devices (smart phones, tablets, laptops), we see an ever-increasing demand for high-definition video online while on the move. Wireless networks are inherently characterised by restricted shared bandwidth and relatively high error loss rates, thus presenting a challenge for the efficient delivery of high quality video. Additionally, mobile devices can support/demand a range of video resolutions and qualities. This demand for mobile streaming highlights the need for adaptive video streaming schemes that can adjust to available bandwidth and device heterogeneity, and can provide graceful changes in video quality, all while respecting viewing satisfaction. In this context, the use of well-known scalable/layered media streaming techniques, commonly known as scalable video coding (SVC), is an attractive solution. SVC encodes a number of video quality levels within a single media stream. This has been shown to be an especially effective and efficient solution, but it fares badly in the presence of datagram losses. While multiple description coding (MDC) can reduce the effects of packet loss on scalable video delivery, the increased delivery cost is counterproductive for constrained networks. This situation is accentuated in cases where only the lower quality level is required. In this paper, we assess these issues and propose a new approach called Streaming Classes (SC), through which we can define a key set of quality levels, each of which can be delivered in a self-contained manner. This facilitates efficient delivery, yielding reduced transmission byte-cost for devices requiring lower quality, relative to MDC and Adaptive Layer Distribution (ALD) (42% and 76% respective reductions for layer 2), while also maintaining high levels of consistent quality. We also illustrate how a selective packetisation technique can further reduce the effects of packet loss on viewable quality by

  18. Functional Testing Protocols for Commercial Building Efficiency Baseline Modeling Software

    OpenAIRE

    Jump, David

    2014-01-01

    This document describes procedures for testing and validating proprietary baseline energy modeling software accuracy in predicting energy use over the period of interest, such as a month or a year. The procedures are designed according to the methodology used for public domain baselining software in another LBNL report that was (like the present report) prepared for Pacific Gas and Electric Company: "Commercial Building Energy Baseline Modeling Software: Performance Metrics and Method Testing...

  19. Model Checking for Energy Efficient Scheduling in Wireless Sensor Networks

    OpenAIRE

    Schmitt, Peter H.; Werner, Frank

    2006-01-01

    Networking and power management of wireless energy-conscious sensor networks is an important area of current research. We investigate a network of MicaZ sensor motes using the ZigBee protocol for communication, and provide a model using Timed Safety Automata. Our analysis focuses on estimating energy consumption by model checking in different scenarios using the Uppaal tool. Special interest is devoted to the energy use in margi...

  20. Effective Elliptic Models for Efficient Wavefield Extrapolation in Anisotropic Media

    KAUST Repository

    Waheed, Umair bin

    2014-05-01

    The wavefield extrapolation operator for elliptically anisotropic media offers significant cost reduction compared to that for transversely isotropic (TI) media, especially when the medium exhibits tilt in the symmetry axis (TTI). However, elliptical anisotropy does not provide accurate focusing for TI media. Therefore, we develop effective elliptically anisotropic models that correctly capture the kinematic behavior of the TTI wavefield. Specifically, we use an iterative elliptically anisotropic eikonal solver that provides accurate traveltimes for a TI model. The resultant coefficients of the elliptical eikonal provide the effective models. These effective models allow us to use the cheaper wavefield extrapolation operator for elliptic media to obtain approximate wavefield solutions for TTI media. Although the effective elliptic models are obtained by kinematic matching using high-frequency asymptotics, the resulting wavefield contains most of the critical wavefield components, including the frequency dependency and caustics, if present, with reasonable accuracy. The methodology developed here offers a much better cost-versus-accuracy tradeoff for wavefield computations in TTI media, considering the cost-prohibitive nature of the problem. We demonstrate the applicability of the proposed approach on the BP TTI model.

  1. Steam injection for heavy oil recovery: Modeling of wellbore heat efficiency and analysis of steam injection performance

    International Nuclear Information System (INIS)

    Gu, Hao; Cheng, Linsong; Huang, Shijun; Li, Bokai; Shen, Fei; Fang, Wenchao; Hu, Changhao

    2015-01-01

    Highlights: • A comprehensive mathematical model was established to estimate the wellbore heat efficiency of steam injection wells. • A simplified approach to predicting steam pressure in wellbores was proposed. • High wellhead injection rate and wellhead steam quality can improve wellbore heat efficiency. • High wellbore heat efficiency does not necessarily mean good performance of heavy oil recovery. • Using excellent insulation materials is a good way to save water and fuels. - Abstract: The aims of this work are to present a comprehensive mathematical model for estimating wellbore heat efficiency and to analyze the performance of steam injection for heavy oil recovery. In this paper, we first briefly introduce the steam injection process. Secondly, a simplified approach to predicting steam pressure in wellbores is presented and a complete expression for steam quality is derived. More importantly, both direct and indirect methods are adopted to determine the wellbore heat efficiency. Then, the mathematical model is solved using an iterative technique. After the model is validated with measured field data, we study the effects of wellhead injection rate and wellhead steam quality on steam injection performance reflected in wellbores. Next, taking cyclic steam stimulation as an example, we analyze steam injection performance reflected in reservoirs with a numerical reservoir simulation method. Finally, the significant role of improving wellbore heat efficiency in saving water and fuels is discussed in detail. The results indicate that we can improve the wellbore heat efficiency by enhancing the wellhead injection rate or steam quality. However, high wellbore heat efficiency does not necessarily mean satisfactory steam injection performance reflected in reservoirs or good performance of heavy oil recovery. Moreover, the paper shows that using excellent insulation materials is a good way to save water and fuels owing to the enhancement of wellbore heat efficiency.

  2. Efficient Model Order Reduction for the Dynamics of Nonlinear Multilayer Sheet Structures with Trial Vector Derivatives

    Directory of Open Access Journals (Sweden)

    Wolfgang Witteveen

    2014-01-01

    Full Text Available The mechanical response of multilayer sheet structures, such as leaf springs or car bodies, is largely determined by the nonlinear contact and friction forces between the sheets involved. Conventional computational approaches based on classical reduction techniques or the direct finite element approach have an inefficient balance between computational time and accuracy. In the present contribution, the method of trial vector derivatives is applied and extended in order to obtain a priori trial vectors for the model reduction which are suitable for determining the nonlinearities in the joints of the reduced system. Findings show that the result quality in terms of displacements and contact forces is comparable to that of the direct finite element method, but the computational effort is extremely low due to the model order reduction. Two numerical studies are presented to underline the method’s accuracy and efficiency. In conclusion, this approach is discussed with respect to the existing body of literature.
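
    The core reduction step shared by such methods is a Galerkin projection of the full system matrices onto a trial-vector basis. A minimal sketch with an arbitrary random orthonormal basis is given below; the paper's actual contribution, augmenting the basis with trial vector derivatives to capture joint nonlinearities, is not reproduced here.

    ```python
    import numpy as np

    n, r = 2000, 20                                    # full and reduced model sizes
    rng = np.random.default_rng(2)
    M = np.eye(n)                                      # mass matrix (toy)
    K = (np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
         + np.diag(-np.ones(n - 1), -1))               # stiffness matrix (toy chain)
    f = rng.standard_normal(n)                         # load vector

    V, _ = np.linalg.qr(rng.standard_normal((n, r)))   # placeholder trial-vector basis
    M_r, K_r, f_r = V.T @ M @ V, V.T @ K @ V, V.T @ f  # reduced system (r x r)

    q_r = np.linalg.solve(K_r, f_r)                    # static reduced solution
    q_full = V @ q_r                                   # lift back to full coordinates
    print("reduced system size:", K_r.shape, "lifted solution size:", q_full.shape)
    ```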

  3. Multiscale Methods for Accurate, Efficient, and Scale-Aware Models of the Earth System

    Energy Technology Data Exchange (ETDEWEB)

    Goldhaber, Steve [National Center for Atmospheric Research, Boulder, CO (United States); Holland, Marika [National Center for Atmospheric Research, Boulder, CO (United States)

    2017-09-05

    The major goal of this project was to contribute improvements to the infrastructure of an Earth System Model in order to support research in the Multiscale Methods for Accurate, Efficient, and Scale-Aware models of the Earth System project. In support of this, the NCAR team accomplished two main tasks: improving input/output performance of the model and improving atmospheric model simulation quality. Improvement of the performance and scalability of data input and diagnostic output within the model required a new infrastructure which can efficiently handle the unstructured grids common in multiscale simulations. This allows for a more computationally efficient model, enabling more years of Earth System simulation. The quality of the model simulations was improved by reducing grid-point noise in the spectral element version of the Community Atmosphere Model (CAM-SE). This was achieved by running the physics of the model using grid-cell data on a finite-volume grid.

  4. Towards a more efficient and robust representation of subsurface hydrological processes in Earth System Models

    Science.gov (United States)

    Rosolem, R.; Rahman, M.; Kollet, S. J.; Wagener, T.

    2017-12-01

    Understanding the impacts of land cover and climate changes on terrestrial hydrometeorology is important across a range of spatial and temporal scales. Earth System Models (ESMs) provide a robust platform for evaluating these impacts. However, current ESMs generally lack representation of key hydrological processes (e.g., preferential water flow and direct interactions with aquifers). The typical "free drainage" conceptualization of land models can misrepresent the magnitude of those interactions, consequently affecting the exchange of energy and water at the surface as well as estimates of groundwater recharge. Recent studies show the benefits of explicitly simulating the interactions between subsurface and surface processes in similar models. However, such parameterizations are often computationally demanding, resulting in limited application for large/global-scale studies. Here, we take a different approach in developing a novel parameterization for groundwater dynamics. Instead of directly adding another complex process to an established land model, we examine a set of comprehensive experimental scenarios using a robust and established three-dimensional hydrological model to develop a simpler parameterization that represents aquifer-to-land-surface interactions. The main goal of the developed parameterization is to simultaneously maximize the computational gain (i.e., "efficiency") while minimizing simulation errors in comparison to the full 3D model (i.e., "robustness") to allow for easy implementation in ESMs globally. Our study focuses primarily on understanding the dynamics of both groundwater recharge and discharge. Preliminary results show that our proposed approach significantly reduces the computational demand while deviations from the full 3D model remain small for these processes.

  5. An empirical investigation of the efficiency effects of integrated care models in Switzerland

    Directory of Open Access Journals (Sweden)

    Oliver Reich

    2012-01-01

    Full Text Available Introduction: This study investigates the efficiency gains of integrated care models in Switzerland, since these models are regarded as cost containment options in national social health insurance. These plans generate much lower average health care expenditure than the basic insurance plan. The question is, however, to what extent these total savings are due to the effects of selection and efficiency. Methods: The empirical analysis is based on data from 399,274 Swiss residents that constantly had compulsory health insurance with the Helsana Group, the largest health insurer in Switzerland, covering the years 2006 to 2009. In order to evaluate the efficiency of the different integrated care models, we apply an econometric approach with a mixed-effects model. Results: Our estimations indicate that the efficiency effects of integrated care models on health care expenditure are significant. However, the different insurance plans vary, revealing the following efficiency gains per model: contracted capitated model 21.2%, contracted non-capitated model 15.5% and telemedicine model 3.7%. The remaining 8.5%, 5.6% and 22.5% respectively of the variation in total health care expenditure can be attributed to the effects of selection. Conclusions: Integrated care models have the potential to improve care for patients with chronic diseases and concurrently have a positive impact on health care expenditure. We suggest policy makers improve the incentives for patients with chronic diseases within the existing regulations providing further potential for cost-efficiency of medical care.

  7. An Efficient Null Model for Conformational Fluctuations in Proteins

    DEFF Research Database (Denmark)

    Harder, Tim Philipp; Borg, Mikael; Bottaro, Sandro

    2012-01-01

    Protein dynamics play a crucial role in function, catalytic activity, and pathogenesis. Consequently, there is great interest in computational methods that probe the conformational fluctuations of a protein. However, molecular dynamics simulations are computationally costly and therefore are often limited to comparatively short timescales. TYPHON is a probabilistic method to explore the conformational space of proteins under the guidance of a sophisticated probabilistic model of local structure and a given set of restraints that represent nonlocal interactions, such as hydrogen bonds or disulfide bridges. The choice of the restraints themselves is heuristic, but the resulting probabilistic model is well-defined and rigorous. Conceptually, TYPHON constitutes a null model of conformational fluctuations under a given set of restraints. We demonstrate that TYPHON can provide information...

  8. Efficient Out of Core Sorting Algorithms for the Parallel Disks Model.

    Science.gov (United States)

    Kundeti, Vamsi; Rajasekaran, Sanguthevar

    2011-11-01

    In this paper we present efficient algorithms for sorting on the Parallel Disks Model (PDM). Numerous asymptotically optimal algorithms have been proposed in the literature. However, many of these merge-based algorithms have large underlying constants in the time bounds, because they suffer from a lack of read parallelism on the PDM. The irregular consumption of the runs during the merge affects the read parallelism and contributes to the increased sorting time. In this paper we first introduce a novel idea called dirty sequence accumulation that improves the read parallelism. Second, we show analytically that this idea can reduce the number of parallel I/Os required to sort the input close to the lower bound of [Formula: see text]. We experimentally verify our dirty sequence idea with the standard R-way merge and show that it can significantly reduce the number of parallel I/Os needed to sort on the PDM.
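
    The dirty-sequence accumulation scheme itself is specific to the paper, but the pass it accelerates is a standard R-way merge of sorted runs. A minimal in-memory sketch is given below; block-wise parallel I/O on the PDM is not modelled.

    ```python
    import heapq

    def r_way_merge(runs):
        """Merge R sorted runs into one sorted sequence using a min-heap."""
        heap = [(run[0], i, 0) for i, run in enumerate(runs) if run]
        heapq.heapify(heap)
        out = []
        while heap:
            value, run_id, idx = heapq.heappop(heap)
            out.append(value)
            if idx + 1 < len(runs[run_id]):
                heapq.heappush(heap, (runs[run_id][idx + 1], run_id, idx + 1))
        return out

    runs = [[1, 4, 9], [2, 3, 11], [5, 6, 7, 8]]
    print(r_way_merge(runs))   # [1, 2, 3, 4, 5, 6, 7, 8, 9, 11]
    ```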

  9. Combining climate and energy policies: synergies or antagonism? Modeling interactions with energy efficiency instruments

    International Nuclear Information System (INIS)

    Lecuyer, Oskar; Bibas, Ruben

    2012-01-01

    In addition to the already existing Climate and Energy package, the European Union (EU) plans to include a binding target to reduce energy consumption. We analyze the rationales the EU invokes to justify such an overlap and develop a minimal common framework to study interactions arising from the combination of instruments reducing emissions, promoting renewable energy (RE) production and reducing energy demand through energy efficiency (EE) investments. We find that although all instruments tend to reduce GHG emissions, and although a price on carbon also tends to give the right incentives for RE and EE, the combination of more than one instrument leads to significant antagonisms regarding major objectives of the policy package. The model makes it possible to show in a single framework, and to quantify, the antagonistic effects of the joint promotion of RE and EE. We also show and quantify the effects of this joint promotion on the ETS permit price, on the wholesale market price and on energy production levels. (authors)

  10. Computationally efficient thermal-mechanical modelling of selective laser melting

    Science.gov (United States)

    Yang, Yabin; Ayas, Can

    2017-10-01

    Selective laser melting (SLM) is a powder-based additive manufacturing (AM) method to produce high-density metal parts with complex topology. However, part distortions and the accompanying residual stresses deteriorate the mechanical reliability of SLM products. Modelling of the SLM process is anticipated to be instrumental for understanding and predicting the development of the residual stress field during the build process. However, SLM process modelling requires determination of the heat transients within the part being built, which is coupled to a mechanical boundary value problem to calculate displacement and residual stress fields. Thermal models associated with SLM are typically complex and computationally demanding. In this paper, we present a simple semi-analytical thermal-mechanical model, developed for SLM, that represents the effect of laser scanning vectors with line heat sources. The temperature field within the part being built is obtained by superposition of the temperature field associated with line heat sources in a semi-infinite medium and a complementary temperature field which accounts for the actual boundary conditions. An analytical solution for a line heat source in a semi-infinite medium is first described, followed by the numerical procedure used for finding the complementary temperature field. This analytical description of the line heat sources is able to capture the steep temperature gradients in the vicinity of the laser spot, which is typically tens of micrometers wide. In turn, the semi-analytical thermal model allows a relatively coarse discretisation of the complementary temperature field. The temperature history determined is used to calculate the thermal strain induced in the SLM part. Finally, a mechanical model governed by an elastic-plastic constitutive rule with isotropic hardening is used to predict the residual stresses.
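
    A minimal sketch of the superposition idea is given below, using the classical instantaneous line-source solution in an infinite medium; the complementary field for the actual boundary conditions is omitted, and all material and scan parameters are placeholders rather than the values used in the paper.

    ```python
    import numpy as np

    k_th = 20.0      # W/(m K), thermal conductivity (placeholder)
    alpha = 5.0e-6   # m^2/s, thermal diffusivity (placeholder)

    def delta_T(r, t, Q_line):
        """Temperature rise at radius r, time t after an instantaneous line source
        releases Q_line [J/m] in an infinite medium (classical Green's function)."""
        return Q_line / (4.0 * np.pi * k_th * t) * np.exp(-r ** 2 / (4.0 * alpha * t))

    def temperature(point, t, sources):
        """Superpose contributions of scan-vector segments idealised as line sources."""
        T = 0.0
        for x_i, y_i, t_i, Q_i in sources:
            if t > t_i:
                r = np.hypot(point[0] - x_i, point[1] - y_i)
                T += delta_T(r, t - t_i, Q_i)
        return T

    # hypothetical scan track: ten segments fired 1 ms apart, 100 um spacing
    sources = [(i * 1e-4, 0.0, i * 1e-3, 50.0) for i in range(10)]
    print(round(temperature((5e-4, 1e-4), 0.02, sources), 1), "K above ambient")
    ```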

  11. Evaluation of the energy efficiency of enzyme fermentation by mechanistic modeling

    DEFF Research Database (Denmark)

    Albaek, Mads O.; Gernaey, Krist V.; Hansen, Morten S.

    2012-01-01

    Modeling biotechnological processes is key to obtaining increased productivity and efficiency. Particularly crucial to successful modeling of such systems is the coupling of the physical transport phenomena and the biological activity in one model. We have applied a model for the expression...

  12. Transport Modeling Analysis to Test the Efficiency of Fish Markets in Oman

    Directory of Open Access Journals (Sweden)

    Khamis S. Al-Abri

    2009-01-01

    Full Text Available Oman’s fish exports have shown an increasing trend while supplies to the domestic market have declined, despite increased domestic demand driven by population and income growth. This study hypothesized that declining fish supplies to domestic markets were due to inefficiency of the transport function of the fish marketing system in Oman. The hypothesis was tested by comparing the observed prices of several fish species at several markets with optimal prices. The optimal prices were estimated from the dual of a fish transport cost-minimizing linear programming model. Primary data on market prices, transportation costs and quantities transported were gathered through a survey of a sample of fish transporters. The quantity demanded at market sites was estimated using secondary data. The analysis indicated that the observed prices and the estimated optimal prices were not significantly different, showing that the transport function of fish markets in Oman is efficient. This implies that the increasing trend of fish exports vis-à-vis the decreasing trend of supplies to domestic markets is rational and will continue. This may not be considered equitable, but it is efficient; it may, however, have long-term implications for national food security and an adverse impact on the nutritional and health status of the rural poor population. Policy makers may have to recognize the trade-off between the efficiency and equity implications of the fish markets in Oman and make policy decisions accordingly in order to ensure national food security.
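
    A toy version of such a transport cost-minimizing LP, whose duals play the role of the estimated optimal prices, can be set up with SciPy as sketched below; all costs, supplies and demands are made-up numbers, not the survey data.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    cost = np.array([[2.0, 4.0, 5.0],      # cost[i, j]: landing site i -> market j (assumed)
                     [3.0, 1.0, 7.0]])
    supply = np.array([30.0, 40.0])        # tons available at each landing site
    demand = np.array([20.0, 30.0, 20.0])  # tons demanded at each market

    c = cost.ravel()                       # decision variables x[i, j], row-major
    A_ub = np.zeros((2, 6)); A_ub[0, 0:3] = 1; A_ub[1, 3:6] = 1   # supply: sum_j x[i, j] <= supply[i]
    A_eq = np.zeros((3, 6))
    for j in range(3):
        A_eq[j, [j, 3 + j]] = 1                                   # demand: sum_i x[i, j] == demand[j]

    res = linprog(c, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand,
                  bounds=[(0, None)] * 6, method="highs")
    print("minimum transport cost:", res.fun)
    # Shadow prices of the demand constraints (the model's "optimal prices") are
    # exposed as res.eqlin.marginals when the HiGHS solvers are used (SciPy >= 1.7).
    ```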

  13. Particle capture efficiency in a multi-wire model for high gradient magnetic separation

    KAUST Repository

    Eisenträger, Almut

    2014-07-21

    High gradient magnetic separation (HGMS) is an efficient way to remove magnetic and paramagnetic particles, such as heavy metals, from waste water. As the suspension flows through a magnetized filter mesh, high magnetic gradients around the wires attract and capture the particles removing them from the fluid. We model such a system by considering the motion of a paramagnetic tracer particle through a periodic array of magnetized cylinders. We show that there is a critical Mason number (ratio of viscous to magnetic forces) below which the particle is captured irrespective of its initial position in the array. Above this threshold, particle capture is only partially successful and depends on the particle's entry position. We determine the relationship between the critical Mason number and the system geometry using numerical and asymptotic calculations. If a capture efficiency below 100% is sufficient, our results demonstrate how operating the HGMS system above the critical Mason number but with multiple separation cycles may increase efficiency. © 2014 AIP Publishing LLC.

  14. Efficient Lattice-Based Signcryption in Standard Model

    Directory of Open Access Journals (Sweden)

    Jianhua Yan

    2013-01-01

    Full Text Available Signcryption is a cryptographic primitive that can perform digital signature and public-key encryption simultaneously at a significantly reduced cost. This advantage makes it highly useful in many applications. However, most existing signcryption schemes are seriously challenged by the rise of quantum computing. As interesting stepping stones in the post-quantum cryptographic community, two lattice-based signcryption schemes were proposed recently, but both of them were only proved to be secure in the random oracle model. Therefore, the main contribution of this paper is to propose a new lattice-based signcryption scheme that can be proved secure in the standard model.

  15. An integrated proteomics approach shows synaptic plasticity changes in an APP/PS1 Alzheimer's mouse model

    DEFF Research Database (Denmark)

    Kempf, Stefan J; Metaxas, Athanasios; Ibáñez-Vea, María

    2016-01-01

    The aim of this study was to elucidate the molecular signature of Alzheimer's disease-associated amyloid pathology. We used the double APPswe/PS1ΔE9 mouse, a widely used model of cerebral amyloidosis, to compare changes in proteome, including global phosphorylation and sialylated N-linked glycosyl...

  16. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2009-01-01

    In this paper two kernel-based nonparametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a viable alternative to the method of De Gooijer and Zerom (2003). By

  17. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2010-01-01

    In this paper two kernel-based nonparametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a viable alternative to the method of De Gooijer and Zerom (2003). By

  18. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2011-01-01

    In this paper, two non-parametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a more viable alternative to existing kernel-based approaches. The second estimator

  19. Efficient use of the velocity gradients tensor in flow modelling

    NARCIS (Netherlands)

    Passchier, C.W.

    1987-01-01

    For models of fabric development in rocks, with vorticity as a variable parameter, the choice of an unsuitable reference frame for instantaneous flow can hamper clear presentation of results. The orientation of most fabric elements which develop in deforming rocks is attached to some principal

  20. Efficiency of Motivation Development Models for Hygienic Skills

    Directory of Open Access Journals (Sweden)

    Alexander V. Tscymbalystov

    2017-09-01

    Full Text Available The combined influence of family and state plays an important role in the development of an individual. This study evaluated the effectiveness of models for developing oral hygiene skills among children living in families (n = 218) and children under the care of the state (n = 229). The children who took part in the study were divided into groups: preschoolers of 5-7 years, schoolchildren of 8-11 years and adolescents of 12-15 years. During the initial examination, the hygienic status of the oral cavity before and after tooth brushing was evaluated. After that, subgroups were formed in each age group according to three models of hygienic skills training: (1) a computer presentation lesson; (2) a lesson in which one of the students acted as a demonstrator of the skill; (3) individual training by a hygienist. During the next 48 hours the children did not perform hygienic measures. Then the children were invited to a control session to demonstrate the acquired oral care skills and to evaluate the effectiveness of each model for developing individual oral hygiene skills. During the control examination, the hygienic status was determined before and after tooth cleaning, which made it possible to determine the regimes of hygienic measure performance for children with different social status and the effectiveness of the hygiene training models.

  1. An efficient visual saliency detection model based on Ripplet transform

    Indian Academy of Sciences (India)

    A Diana Andrushia

    Many of these computational models find application in adaptive image compression, segmentation of object-of-interest in an image, automatic image thumbnailing, object detection and recognition, visual tracking, automatic creation of image collage, content-aware image resizing, non-photo realistic rendering, etc. [2].

  2. Efficient Proof Engines for Bounded Model Checking of Hybrid Systems

    DEFF Research Database (Denmark)

    Fränzle, Martin; Herde, Christian

    2005-01-01

    In this paper we present HySat, a new bounded model checker for linear hybrid systems, incorporating a tight integration of a DPLL-based pseudo-Boolean SAT solver and a linear programming routine as core engine. In contrast to related tools like MathSAT, ICS, or CVC, our tool exploits all...

  3. Practice What You Preach: Microfinance Business Models and Operational Efficiency

    NARCIS (Netherlands)

    Bos, J.W.B.; Millone, M.M.

    The microfinance sector has room for pure for-profit microfinance institutions (MFIs), non-profit organizations, and “social” for-profit firms that aim to pursue a double bottom line. Depending on their business model, these institutions target different types of borrowers, change the size of their

  4. Practice what you preach: Microfinance business models and operational efficiency

    NARCIS (Netherlands)

    Bos, J.W.B.; Millone, M.M.

    2013-01-01

    The microfinance sector is an example of a sector in which firms with different business models coexist. Next to pure for-profit microfinance institutions (MFIs), the sector has room for non-profit organizations, and includes 'social' for-profit firms that aim to maximize a double bottom line and

  5. Computationally efficient thermal-mechanical modelling of selective laser melting

    NARCIS (Netherlands)

    Yang, Y.; Ayas, C.; Brabazon, Dermot; Naher, Sumsun; Ul Ahad, Inam

    2017-01-01

    Selective laser melting (SLM) is a powder-based additive manufacturing (AM) method to produce high-density metal parts with complex topology. However, part distortions and the accompanying residual stresses deteriorate the mechanical reliability of SLM products. Modelling of the SLM process is

  6. Islamic vs. conventional banks : Business models, efficiency and stability

    NARCIS (Netherlands)

    Beck, T.H.L.; Demirgüc-Kunt, A.; Merrouche, O.

    2013-01-01

    How different are Islamic banks from conventional banks? Does the recent crisis justify a closer look at the Sharia-compliant business model for banking? When comparing conventional and Islamic banks, controlling for time-variant country-fixed effects, we find few significant differences in business

  7. Efficient two-dimensional magnetotellurics modelling using implicitly ...

    Indian Academy of Sciences (India)

    Keywords: finite difference; eigenmode method; multi-frequency approach. J. Earth Syst. Sci. 120(4), August 2011, pp. 595–604. © Indian Academy of Sciences.

  8. Development of an empirical model of turbine efficiency using the Taylor expansion and regression analysis

    International Nuclear Information System (INIS)

    Fang, Xiande; Xu, Yu

    2011-01-01

    The empirical model of turbine efficiency is necessary for the control- and/or diagnosis-oriented simulation and useful for the simulation and analysis of dynamic performances of the turbine equipment and systems, such as air cycle refrigeration systems, power plants, turbine engines, and turbochargers. Existing empirical models of turbine efficiency are insufficient because there is no suitable form available for air cycle refrigeration turbines. This work performs a critical review of empirical models (called mean value models in some literature) of turbine efficiency and develops an empirical model in the desired form for air cycle refrigeration, the dominant cooling approach in aircraft environmental control systems. The Taylor series and regression analysis are used to build the model, with the Taylor series being used to expand functions with the polytropic exponent and the regression analysis to finalize the model. The measured data of a turbocharger turbine and two air cycle refrigeration turbines are used for the regression analysis. The proposed model is compact and able to present the turbine efficiency map. Its predictions agree with the measured data very well, with a corrected coefficient of determination R_c^2 ≥ 0.96 and a mean absolute percentage deviation of 1.19% for the three turbines. -- Highlights: → Performed a critical review of empirical models of turbine efficiency. → Developed an empirical model in the desired form for air cycle refrigeration, using the Taylor expansion and regression analysis. → Verified the method for developing the empirical model. → Verified the model.
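
    The paper's exact functional form is not reproduced in the abstract; as a generic illustration, the sketch below fits a second-order (Taylor-expansion-style) efficiency surface to synthetic data by least squares and reports the same goodness-of-fit measures. All numbers are made up, not turbine measurements.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    pi = rng.uniform(1.5, 4.0, 80)   # expansion ratio (synthetic)
    n = rng.uniform(0.6, 1.1, 80)    # normalised corrected speed (synthetic)
    eta = 0.82 - 0.05 * (pi - 2.5) ** 2 - 0.08 * (n - 0.9) ** 2 + 0.005 * rng.standard_normal(80)

    # full quadratic (second-order Taylor-style) design matrix in (pi, n)
    X = np.column_stack([np.ones_like(pi), pi, n, pi * n, pi ** 2, n ** 2])
    beta, *_ = np.linalg.lstsq(X, eta, rcond=None)

    eta_hat = X @ beta
    ss_res = np.sum((eta - eta_hat) ** 2)
    ss_tot = np.sum((eta - eta.mean()) ** 2)
    print("R^2 = %.3f" % (1.0 - ss_res / ss_tot))
    print("MAPD = %.2f%%" % (100.0 * np.mean(np.abs((eta - eta_hat) / eta))))
    ```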

  9. 14 Days of supplementation with blueberry extract shows anti-atherogenic properties and improves oxidative parameters in hypercholesterolemic rats model.

    Science.gov (United States)

    Ströher, Deise Jaqueline; Escobar Piccoli, Jacqueline da Costa; Güllich, Angélica Aparecida da Costa; Pilar, Bruna Cocco; Coelho, Ritiéle Pinto; Bruno, Jamila Benvegnú; Faoro, Debora; Manfredini, Vanusa

    2015-01-01

    The effects of supplementation with blueberry (BE) extract (Vaccinium ashei Reade) for 14 consecutive days on biochemical, hematological, histopathological and oxidative parameters in hypercholesterolemic rats were investigated. After supplementation with the lyophilized BE extract, the levels of total cholesterol, low-density lipoprotein cholesterol and triglycerides were decreased. Histopathological analysis showed a significant decrease (p < 0.05) in aortic lesions in hypercholesterolemic rats. Oxidative parameters showed significant reductions (p < 0.05) in oxidative damage to lipids and proteins and an increase in the activities of antioxidant enzymes such as catalase, superoxide dismutase and glutathione peroxidase. The BE extract thus showed an important cardioprotective effect, reflected in improvements in the serum lipid profile and antioxidant system, particularly in reducing the oxidative stress associated with hypercholesterolemia, as well as an anti-atherogenic effect in rats.

  10. Resource competition model predicts zonation and increasing nutrient use efficiency along a wetland salinity gradient

    Science.gov (United States)

    Schoolmaster, Donald; Stagg, Camille L.

    2018-01-01

    A trade-off between competitive ability and stress tolerance has been hypothesized and empirically supported to explain the zonation of species across stress gradients for a number of systems. Since stress often reduces plant productivity, one might expect a pattern of decreasing productivity across the zones of the stress gradient. However, this pattern is often not observed in coastal wetlands that show patterns of zonation along a salinity gradient. To address the potentially complex relationship between stress, zonation, and productivity in coastal wetlands, we developed a model of plant biomass as a function of resource competition and salinity stress. Analysis of the model confirms the conventional wisdom that a trade-off between competitive ability and stress tolerance is a necessary condition for zonation. It also suggests that a negative relationship between salinity and production can be overcome if (1) the supply of the limiting resource increases with greater salinity stress or (2) nutrient use efficiency increases with increasing salinity. We fit the equilibrium solution of the dynamic model to data from Louisiana coastal wetlands to test its ability to explain patterns of production across the landscape gradient and derive predictions that could be tested with independent data. We found support for a number of the model predictions, including patterns of decreasing competitive ability and increasing nutrient use efficiency across a gradient from freshwater to saline wetlands. In addition to providing a quantitative framework to support the mechanistic hypotheses of zonation, these results suggest that this simple model is a useful platform to further build upon, simulate and test mechanistic hypotheses of more complex patterns and phenomena in coastal wetlands.

  11. Advanced imaging techniques show progressive arthropathy following experimentally induced knee bleeding in a factor VIII-/- rat model

    DEFF Research Database (Denmark)

    Sorensen, K. R.; Roepstorff, K.; Petersen, M.

    2015-01-01

    Background: Joint pathology is most commonly assessed by radiography, but ultrasonography (US) is increasingly recognized for its accessibility, safety and ability to show soft tissue changes, the earliest indicators of haemophilic arthropathy (HA). US, however, lacks the ability to visualize...

  12. Automated home cage assessment shows behavioral changes in a transgenic mouse model of spinocerebellar ataxia type 17.

    Science.gov (United States)

    Portal, Esteban; Riess, Olaf; Nguyen, Huu Phuc

    2013-08-01

    Spinocerebellar ataxia type 17 (SCA17) is an autosomal dominantly inherited, neurodegenerative disease characterized by ataxia, involuntary movements, and dementia. A novel SCA17 mouse model carrying a 71-polyglutamine repeat expansion in the TATA-binding protein (TBP) has shown an age-related motor deficit in a classic motor test, yet a concomitant weight increase might be a confounding factor for this measurement. In this study we used an automated home cage system to test several motor readouts for this same model, to confirm the pathological behavioral results and to evaluate the benefits of automated home cages in behavioral phenotyping. Our results confirm motor deficits in the Tbp/Q71 mice and reveal previously unrecognized behavioral characteristics obtained from the automated home cage, indicating its use for high-throughput screening and testing, e.g. of therapeutic compounds. Copyright © 2013 Elsevier B.V. All rights reserved.

  13. Management Model for efficient quality control in new buildings

    Directory of Open Access Journals (Sweden)

    C. E. Rodríguez-Jiménez

    2017-09-01

    Full Text Available In Spain, the quality control of each building process is usually managed at different levels of demand. This work seeks a reference model against which to compare the quality control of the building process of a specific product (a building) and to evaluate its warranty level. To this end, specialized sources were consulted and 153 real quality control cases were carefully reviewed using a multi-judgment method. Different techniques were applied to obtain an impartial valuation of the input parameters through the Delphi method (a query of 17 experts), whose matrix treatment with the Fuzzy-QFD tool condenses numerical references through a weighted distribution of the selected functions and their corresponding conditioning factors. The model thus obtained (M153) is useful as a quality control reference for meeting quality expectations.

  14. BO-1055, a novel DNA cross-linking agent with remarkable low myelotoxicity shows potent activity in sarcoma models

    OpenAIRE

    Ambati, Srikanth R.; Shieh, Jae-Hung; Pera, Benet; Lopes, Eloisi Caldas; Chaudhry, Anisha; Wong, Elissa W.P.; Saxena, Ashish; Su, Tsann-Long; Moore, Malcolm A.S.

    2016-01-01

    DNA damaging agents cause rapid shrinkage of tumors and form the basis of chemotherapy for sarcomas despite significant toxicities. Drugs having superior efficacy and wider therapeutic windows are needed to improve patient outcomes. We used cell proliferation and apoptosis assays in sarcoma cell lines and benign cells; γ-H2AX expression, comet assays, immunoblot analyses and drug combination studies in vitro and in patient-derived xenograft (PDX) models. BO-1055 caused apoptosis and cell death...

  15. Betting on change: Tenet deal with Vanguard shows it's primed to try ACO effort, new payment model.

    Science.gov (United States)

    Kutscher, Beth

    2013-07-01

    Tenet Healthcare Corp.'s acquisition of Vanguard Health Systems is a sign the investor-owned chain is willing to take a chance on alternative payment models such as accountable care organizations. There's no certainty that ACOs will deliver improvements in quality or cost savings, but Vanguard Vice Chairman Keith Pitts says his system's Pioneer ACO in Detroit has already achieved some cost savings.

  16. Restless legs syndrome model Drosophila melanogaster show successful olfactory learning and 1-day retention of the acquired memory

    OpenAIRE

    Mika F. Asaba; Adrian A. Bates; Hoa M. Dao; Mika J. Maeda

    2013-01-01

    Restless Legs Syndrome (RLS) is a prevalent but poorly understood disorder that is characterized by uncontrollable movements during sleep, resulting in sleep disturbance. Olfactory memory in Drosophila melanogaster has proven to be a useful tool for the study of cognitive deficits caused by sleep disturbances, such as those seen in RLS. A recently generated Drosophila model of RLS exhibited disturbed sleep patterns similar to those seen in humans with RLS. This research seeks to improve understand...

  17. Efficient Beam-Type Structural Modeling of Rotor Blades

    OpenAIRE

    Couturier, Philippe; Krenk, Steen

    2015-01-01

    This paper presents two recently developed numerical formulations which enable accurate representation of the static and dynamic behaviour of wind turbine rotor blades using little modeling and computational effort. The first development consists of an intuitive method to extract fully coupled six-by-six cross-section stiffness matrices with limited meshing effort. Secondly, an equilibrium-based beam element accepting directly the stiffness matrices and accounting for large variations ...

  18. Integration efficiency for model reduction in micro-mechanical analyses

    Science.gov (United States)

    van Tuijl, Rody A.; Remmers, Joris J. C.; Geers, Marc G. D.

    2017-11-01

    Micro-structural analyses are an important tool to understand material behavior on a macroscopic scale. The analysis of a microstructure is usually computationally very demanding, and there are several reduced-order modeling techniques available in the literature to limit the computational costs of repetitive analyses of a single representative volume element. These techniques to speed up the integration at the micro-scale can be roughly divided into two classes: methods interpolating the integrand, and cubature methods. The empirical interpolation method (high-performance reduced order modeling) and the empirical cubature method are assessed in terms of their accuracy in approximating the full-order result. A micro-structural volume element is therefore considered, subjected to four load cases, including cyclic and path-dependent loading. The differences in approximating the micro- and macroscopic quantities of interest, e.g. micro-fluctuations and stresses, are highlighted. Algorithmic speed-ups for both methods with respect to the full-order micro-structural model are quantified. The pros and cons of both classes are thereby clearly identified.

  19. EFFICIENT USE OF VIDEO FOR 3D MODELLING OF CULTURAL HERITAGE OBJECTS

    Directory of Open Access Journals (Sweden)

    B. Alsadik

    2015-03-01

    Full Text Available Currently, there is rapid development in the techniques of automated image-based modelling (IBM), especially in advanced structure-from-motion (SFM) and dense image matching methods, and in camera technology. One possibility is to use video imaging to create 3D reality-based models of cultural heritage architectures and monuments. Practically, video imaging is much easier to apply than still-image shooting in IBM techniques because the latter needs thorough planning and proficiency. However, one is faced with mainly three problems when video image sequences are used for highly detailed modelling and dimensional survey of cultural heritage objects. These problems are: the low resolution of video images, the need to process a large number of short-baseline video images, and blur effects due to camera shake on a significant number of images. In this research, the feasibility of using video images for efficient 3D modelling is investigated. A method is developed to find the minimal significant number of video images in terms of object coverage and blur effect. This reduction in video images is convenient for decreasing the processing time and creating a reliable textured 3D model compared with models produced by still imaging. Two experiments, modelling a building and a monument, are presented using a video image resolution of 1920×1080 pixels. Internal and external validations of the produced models are applied to find the final predicted accuracy and the model level of detail. Depending on the object complexity and video imaging resolution, the tests show an achievable average accuracy between 1 and 5 cm when using video imaging, which is suitable for visualization, virtual museums and low-detail documentation.

  20. Efficient occupancy model-fitting for extensive citizen-science data

    Science.gov (United States)

    Morgan, Byron J. T.; Freeman, Stephen N.; Ridout, Martin S.; Brereton, Tom M.; Fox, Richard; Powney, Gary D.; Roy, David B.

    2017-01-01

    Appropriate large-scale citizen-science data present important new opportunities for biodiversity modelling, due in part to the wide spatial coverage of information. Recently proposed occupancy modelling approaches naturally incorporate random effects in order to account for annual variation in the composition of sites surveyed. In turn this leads to Bayesian analysis and model fitting, which are typically extremely time-consuming. Motivated by presence-only records of occurrence from the UK Butterflies for the New Millennium database, we present an alternative approach, in which site variation is described in a standard way through logistic regression on relevant environmental covariates. This allows efficient occupancy model-fitting using classical inference, which is easily achieved using standard computers. This is especially important when models need to be fitted each year, typically for many different species, as with British butterflies for example. Using both real and simulated data we demonstrate that the two approaches, with and without random effects, can result in similar conclusions regarding trends. There are many advantages to classical model-fitting, including the ability to compare a range of alternative models, identify appropriate covariates and assess model fit, using standard tools of maximum likelihood. In addition, modelling in terms of covariates provides opportunities for understanding the ecological processes that are in operation. We show that there is even greater potential; the classical approach allows us to construct regional indices simply, which indicate how changes in occupancy typically vary over a species’ range. In addition we are also able to construct dynamic occupancy maps, which provide a novel, modern tool for examining temporal changes in species distribution. These new developments may be applied to a wide range of taxa, and are valuable at a time of climate change. They also have the potential to motivate citizen
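
    As a hedged illustration of the classical-inference idea, the sketch below fits a simple single-season occupancy model, with occupancy probability given by logistic regression on one covariate and a constant detection probability, by maximum likelihood on simulated detection histories; the real analysis works from presence-only records and a richer covariate structure.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit

    rng = np.random.default_rng(4)
    n_sites, J = 500, 4
    x = rng.standard_normal(n_sites)                   # environmental covariate
    z = rng.random(n_sites) < expit(0.2 + 1.0 * x)     # latent occupancy (simulated)
    y = rng.random((n_sites, J)) < (0.4 * z[:, None])  # detection histories, p = 0.4
    d = y.sum(axis=1)                                  # detections per site

    def negloglik(theta):
        b0, b1, logit_p = theta
        psi, p = expit(b0 + b1 * x), expit(logit_p)
        lik_occ = psi * p ** d * (1 - p) ** (J - d)            # occupied-site contribution
        lik = np.where(d > 0, lik_occ, lik_occ + (1 - psi))    # add unoccupied case if never detected
        return -np.sum(np.log(lik + 1e-300))

    fit = minimize(negloglik, x0=np.zeros(3), method="Nelder-Mead")
    b0, b1, logit_p = fit.x
    print("estimates: b0=%.2f b1=%.2f p=%.2f" % (b0, b1, expit(logit_p)))
    ```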

  1. A hybrid model for the computationally-efficient simulation of the cerebellar granular layer

    Directory of Open Access Journals (Sweden)

    Anna eCattani

    2016-04-01

    Full Text Available The aim of the present paper is to efficiently describe the membrane potential dynamics of neural populations formed by species having a high density difference in specific brain areas. We propose a hybrid model whose main ingredients are a conductance-based model (an ODE system) and its continuous counterpart (a PDE system) obtained through a limit process in which the number of neurons confined in a bounded region of the brain tissue is sent to infinity. Specifically, in the discrete model, each cell is described by a set of time-dependent variables, whereas in the continuum model, cells are grouped into populations that are described by a set of continuous variables. Communications between populations, which translate into interactions among the discrete and the continuous models, are the essence of the hybrid model we present here. The cerebellum and cerebellum-like structures show in their granular layer a large difference in the relative density of neuronal species, making them a natural testing ground for our hybrid model. By reconstructing the ensemble activity of the cerebellar granular layer network and by comparing our results to a more realistic computational network, we demonstrate that our description of the network activity, even though it is not biophysically detailed, is still capable of reproducing salient features of neural network dynamics. Our modeling approach yields a significant computational cost reduction by increasing the simulation speed at least 270 times. The hybrid model reproduces interesting dynamics such as local microcircuit synchronization, traveling waves, center-surround and time-windowing.

  2. Assessing Green Development Efficiency of Municipalities and Provinces in China Integrating Models of Super-Efficiency DEA and Malmquist Index

    Directory of Open Access Journals (Sweden)

    Qing Yang

    2015-04-01

    Full Text Available In order to realize economic and social green development, to pave a pathway towards China’s green regional development and to develop effective scientific policy to assist in building green cities and countries, it is necessary to put forward a relatively accurate, scientific and concise green assessment method. The research uses the CCR (A. Charnes & W. W. Cooper & E. Rhodes) Data Envelopment Analysis (DEA) model to obtain the green development frontier surface based on 31 regions’ annual cross-section data from 2008–2012. Furthermore, in order to rank the regions whose assessment values equal 1 in the CCR model, we chose the Super-Efficiency DEA model for further sorting. Meanwhile, according to the five-year panel data, the green development efficiency changes of the 31 regions can be captured by the Malmquist index. Finally, the study assesses the reasons for regional differences; analyzing and discussing the results may point to a superior green development pathway for China.
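
    A minimal sketch of the input-oriented CCR envelopment LP, solved per decision-making unit (DMU) with SciPy on made-up data, is shown below; the super-efficiency variant simply excludes the evaluated DMU from the reference set.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    X = np.array([[2.0, 3.0, 6.0, 4.0],     # input 1 of four DMUs (toy data)
                  [3.0, 1.0, 5.0, 2.0]])    # input 2
    Y = np.array([[1.0, 1.0, 2.0, 1.5]])    # single output

    def ccr_efficiency(j0, super_eff=False):
        m, n = X.shape
        keep = [j for j in range(n) if not (super_eff and j == j0)]
        c = np.zeros(1 + len(keep)); c[0] = 1.0            # minimise theta
        A_in = np.hstack([-X[:, [j0]], X[:, keep]])        # X lambda <= theta * x_j0
        A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y[:, keep]])  # Y lambda >= y_j0
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.concatenate([np.zeros(m), -Y[:, j0]]),
                      bounds=[(None, None)] + [(0, None)] * len(keep),
                      method="highs")
        return res.fun

    for j in range(X.shape[1]):
        print("DMU", j, "CCR efficiency:", round(ccr_efficiency(j), 3))
    ```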

  3. Study on the Technical Efficiency of Creative Human Capital in China by Three-Stage Data Envelopment Analysis Model

    Directory of Open Access Journals (Sweden)

    Jian Ma

    2014-01-01

    Full Text Available Previous research has demonstrated the positive effect of creative human capital and its development on economic development. Yet the technical efficiency of creative human capital and its effects are still under research. The authors estimate the technical efficiency value in the Chinese context, adjusted for environmental variables and statistical noise, by establishing a three-stage data envelopment analysis model using data from 2003 to 2010. The results indicate that, in this period, the technical efficiency of creative human capital in China as a whole and in different regions and provinces is still at a low level and could be improved. Moreover, technical inefficiency mostly derives from scale inefficiency and is rarely affected by pure technical efficiency. The research also examines the marked effects of environmental variables on technical efficiency, and shows that different environmental variables differ in their effects. The expansion of the scale of education, the development of a healthy environment, growth of GDP, development of skill training, and population migration could reduce the input of creative human capital and promote technical efficiency, while development of trade and institutional change would, on the contrary, block the input of creative human capital and the promotion of technical efficiency.

  4. Personalization of models with many model parameters: an efficient sensitivity analysis approach.

    Science.gov (United States)

    Donders, W P; Huberts, W; van de Vosse, F N; Delhaas, T

    2015-10-01

    Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of individual input parameters or their interactions, are considered the gold standard. The variance portions are called the Sobol sensitivity indices and can be estimated by a Monte Carlo (MC) approach (e.g., Saltelli's method [1]) or by employing a metamodel (e.g., the (generalized) polynomial chaos expansion (gPCE) [2, 3]). All these methods require a large number of model evaluations when estimating the Sobol sensitivity indices for models with many parameters [4]. To reduce the computational cost, we introduce a two-step approach. In the first step, a subset of important parameters is identified for each output of interest using the screening method of Morris [5]. In the second step, a quantitative variance-based sensitivity analysis is performed using gPCE. Efficient sampling strategies are introduced to minimize the number of model runs required to obtain the sensitivity indices for models considering multiple outputs. The approach is tested using a model that was developed for predicting post-operative flows after creation of a vascular access for renal failure patients. We compare the sensitivity indices obtained with the novel two-step approach with those obtained from a reference analysis that applies Saltelli's MC method. The two-step approach was found to yield accurate estimates of the sensitivity indices at two orders of magnitude lower computational cost. Copyright © 2015 John Wiley & Sons, Ltd.
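
    A minimal version of the first (screening) step can be written in a few lines. The sketch below computes Morris-style elementary effects with a simple one-at-a-time design on the unit cube; the test function, bounds, and number of repetitions are made-up placeholders, and a real application would follow the screening with a gPCE-based variance decomposition of the retained parameters, as the authors describe.

```python
import numpy as np

def morris_screening(f, bounds, r=20, delta=0.25, seed=0):
    """One-at-a-time screening: returns (mu_star, sigma) per parameter.
    f maps an array of shape (k,) to a scalar; bounds is a list of (lo, hi).
    Elementary effects are taken with respect to the scaled (unit-cube) factors."""
    rng = np.random.default_rng(seed)
    k = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    ee = np.zeros((r, k))
    for t in range(r):
        x = rng.uniform(0.0, 1.0 - delta, size=k)   # base point in the unit cube
        y0 = f(lo + (hi - lo) * x)
        for i in range(k):
            xp = x.copy()
            xp[i] += delta                          # perturb one factor at a time
            ee[t, i] = (f(lo + (hi - lo) * xp) - y0) / delta
    return np.abs(ee).mean(axis=0), ee.std(axis=0)

# toy model with one dominant, one moderate and one negligible parameter
f = lambda p: p[0] ** 2 + 10.0 * p[1] + 0.01 * p[2]
mu_star, sigma = morris_screening(f, [(0, 1)] * 3)
print(np.round(mu_star, 2))   # large mu* flags parameters to keep for the gPCE step
```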

  5. An Efficient Implementation of Track-Oriented Multiple Hypothesis Tracker Using Graphical Model Approaches

    Directory of Open Access Journals (Sweden)

    Jinping Sun

    2017-01-01

    Full Text Available The multiple hypothesis tracker (MHT is currently the preferred method for addressing data association problem in multitarget tracking (MTT application. MHT seeks the most likely global hypothesis by enumerating all possible associations over time, which is equal to calculating maximum a posteriori (MAP estimate over the report data. Despite being a well-studied method, MHT remains challenging mostly because of the computational complexity of data association. In this paper, we describe an efficient method for solving the data association problem using graphical model approaches. The proposed method uses the graph representation to model the global hypothesis formation and subsequently applies an efficient message passing algorithm to obtain the MAP solution. Specifically, the graph representation of data association problem is formulated as a maximum weight independent set problem (MWISP, which translates the best global hypothesis formation into finding the maximum weight independent set on the graph. Then, a max-product belief propagation (MPBP inference algorithm is applied to seek the most likely global hypotheses with the purpose of avoiding a brute force hypothesis enumeration procedure. The simulation results show that the proposed MPBP-MHT method can achieve better tracking performance than other algorithms in challenging tracking situations.
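
    To make the MWISP formulation concrete, the toy function below enumerates independent sets of a tiny hypothesis-conflict graph and returns the best-scoring compatible set of track hypotheses. Brute-force enumeration is exactly what the paper's max-product belief propagation step is designed to avoid, so this is only a didactic stand-in with invented weights.

```python
from itertools import combinations

def best_global_hypothesis(weights, conflicts):
    """Brute-force maximum-weight independent set over track hypotheses.
    weights[i] is the score of hypothesis i; `conflicts` lists pairs of
    hypotheses that share a measurement and therefore cannot coexist."""
    n = len(weights)
    conflict = set(map(frozenset, conflicts))
    best, best_set = float("-inf"), ()
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            if any(frozenset(p) in conflict for p in combinations(subset, 2)):
                continue  # subset violates a shared-measurement constraint
            w = sum(weights[i] for i in subset)
            if w > best:
                best, best_set = w, subset
    return best_set, best

# 4 candidate track hypotheses; (0,1) and (2,3) share measurements
print(best_global_hypothesis([2.0, 1.5, 3.1, 0.7], [(0, 1), (2, 3)]))
```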

  6. Multiple-try differential evolution adaptive Metropolis for efficient solution of highly parameterized models

    Science.gov (United States)

    Eric, L.; Vrugt, J. A.

    2010-12-01

    Spatially distributed hydrologic models potentially contain hundreds of parameters that need to be derived by calibration against a historical record of input-output data. The quality of this calibration strongly determines the predictive capability of the model and thus its usefulness for science-based decision making and forecasting. Unfortunately, high-dimensional optimization problems are typically difficult to solve. Here we present our recent developments to the Differential Evolution Adaptive Metropolis (DREAM) algorithm (Vrugt et al., 2009) to warrant efficient solution of high-dimensional parameter estimation problems. The algorithm samples from an archive of past states (Ter Braak and Vrugt, 2008), and uses multiple-try Metropolis sampling (Liu et al., 2000) to decrease the required burn-in time for each individual chain and increase efficiency of posterior sampling. This approach is hereafter referred to as MT-DREAM. We present results for 2 synthetic mathematical case studies, and 2 real-world examples involving from 10 to 240 parameters. Results for those cases show that our multiple-try sampler, MT-DREAM, can consistently find better solutions than other Bayesian MCMC methods. Moreover, MT-DREAM is admirably suited to be implemented and ran on a parallel machine and is therefore a powerful method for posterior inference.
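
    The core ingredient of DREAM-type samplers, proposals formed from the difference of two randomly chosen chains, can be sketched compactly. The code below is a plain differential-evolution Metropolis sampler on a toy Gaussian posterior; it omits the multiple-try step, crossover, and outlier handling of DREAM, so it is only a schematic relative of the authors' MT-DREAM.

```python
import numpy as np

def de_mc(log_post, d, n_chains=10, n_iter=3000, seed=1):
    """Minimal differential-evolution Metropolis sampler (ter Braak style)."""
    rng = np.random.default_rng(seed)
    gamma = 2.38 / np.sqrt(2 * d)                 # standard DE-MC jump scale
    x = rng.normal(size=(n_chains, d))
    lp = np.array([log_post(xi) for xi in x])
    samples = []
    for it in range(n_iter):
        for i in range(n_chains):
            a, b = rng.choice([j for j in range(n_chains) if j != i], 2, replace=False)
            prop = x[i] + gamma * (x[a] - x[b]) + 1e-6 * rng.normal(size=d)
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp[i]:   # Metropolis accept
                x[i], lp[i] = prop, lp_prop
        if it > n_iter // 2:                              # discard burn-in
            samples.append(x.copy())
    return np.concatenate(samples)

# toy posterior: weakly correlated 5-D Gaussian
log_post = lambda th: -0.5 * np.sum(th ** 2) - 0.3 * th[0] * th[1]
draws = de_mc(log_post, d=5)
print(draws.mean(axis=0).round(2), draws.std(axis=0).round(2))
```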

  7. CP-809,101, a selective 5-HT2C agonist, shows activity in animal models of antipsychotic activity.

    Science.gov (United States)

    Siuciak, Judith A; Chapin, Douglas S; McCarthy, Sheryl A; Guanowsky, Victor; Brown, Janice; Chiang, Phoebe; Marala, Ravi; Patterson, Terrell; Seymour, Patricia A; Swick, Andrew; Iredale, Philip A

    2007-02-01

    CP-809,101 is a potent, functionally selective 5-HT(2C) agonist that displays approximately 100% efficacy in vitro. The aim of the present studies was to assess the efficacy of a selective 5-HT(2C) agonist in animal models predictive of antipsychotic-like efficacy and side-effect liability. Similar to currently available antipsychotic drugs, CP-809,101 dose-dependently inhibited conditioned avoidance responding (CAR, ED(50)=4.8 mg/kg, sc). The efficacy of CP-809,101 in CAR was completely antagonized by the concurrent administration of the 5-HT(2C) receptor antagonist, SB-224,282. CP-809,101 antagonized both PCP- and d-amphetamine-induced hyperactivity with ED(50) values of 2.4 and 2.9 mg/kg (sc), respectively and also reversed an apomorphine induced-deficit in prepulse inhibition. At doses up to 56 mg/kg, CP-809,101 did not produce catalepsy. Thus, the present results demonstrate that the 5-HT(2C) agonist, CP-809,101, has a pharmacological profile similar to that of the atypical antipsychotics with low extrapyramidal symptom liability. CP-809,101 was inactive in two animal models of antidepressant-like activity, the forced swim test and learned helplessness. However, CP-809,101 was active in novel object recognition, an animal model of cognitive function. These data suggest that 5-HT(2C) agonists may be a novel approach in the treatment of psychosis as well as for the improvement of cognitive dysfunction associated with schizophrenia.

  8. Energy efficiency and integrated resource planning - lessons drawn from the Californian model

    International Nuclear Information System (INIS)

    Baudry, P.

    2008-01-01

    The principle of integrated resource planning (IRP) is to consider, on the same level, investments which aim to produce energy and those which enable energy requirements to be reduced. According to this principle, the energy efficiency programmes, which help to reduce energy demand and CO2 emissions, are considered as an economically appreciated resource. The costs and gains of this resource are evaluated and compared to those relating to energy production. California has adopted an IRP since 1990 and ranks energy efficiency highest among the available energy resources, since economic evaluations show that the cost of realizing a saving of one kWh is lower than that which corresponds to its production. Yet this energy policy model is not widespread around the world, for several reasons. Firstly, a reliable economic appreciation of energy savings presupposes resolving the great uncertainties linked to the measurement of energy savings, which arise in particular from the different possible options for the choice of base reference. The lack of interest in IRP in Europe can also be explained by an institutional context of energy market liberalization which does not promote this type of regulation, as well as by the concern of making energy supply security the policies' top priority. Lastly, the remuneration of economic players investing in the energy efficiency programmes is an indispensable condition for its quantitative recognition in national investment planning. In France, the process of multi-annual investment programming is a mechanism which could lead to energy efficiency being included as a resource with economically appreciated investments. (author)

  9. Efficient Beam-Type Structural Modeling of Rotor Blades

    DEFF Research Database (Denmark)

    Couturier, Philippe; Krenk, Steen

    2015-01-01

    The present paper presents two recently developed numerical formulations which enable accurate representation of the static and dynamic behaviour of wind turbine rotor blades using little modeling and computational effort. The first development consists of an intuitive method to extract fully coupled six by six cross-section stiffness matrices with limited meshing effort. Secondly, an equilibrium-based beam element accepting directly the stiffness matrices and accounting for large variations in geometry and material along the blade is presented. The novel design tools are illustrated...

  10. Model-based efficiency evaluation of combine harvester traction drives

    Directory of Open Access Journals (Sweden)

    Steffen Häberle

    2015-08-01

    Full Text Available As part of this research, the drive train of combine harvesters is investigated in detail, with an explicit focus on load and power distribution, energy consumption, and usage distribution, explored on two test machines. Based on the lessons learned during field operations, the energy-saving potential in the traction train of combine harvesters can now be quantified through model-based studies. Beyond that, the virtual machine trial provides an opportunity to compare innovative drivetrain architectures and control solutions under reproducible conditions. As a result, an evaluation method is presented and generically used to draw comparisons under locally representative operating conditions.

  11. Simple, efficient estimators of treatment effects in randomized trials using generalized linear models to leverage baseline variables.

    Science.gov (United States)

    Rosenblum, Michael; van der Laan, Mark J

    2010-04-01

    Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation.
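
    A small simulation illustrates the special case highlighted above: a main-terms Poisson working model fitted to randomized-trial data whose true outcome model contains an interaction the working model omits. The data-generating values below are arbitrary; under them the marginal log rate ratio works out to roughly 0.7, which, per the result described in the abstract, the (misspecified) working-model treatment coefficient should approach in large samples.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
treat = rng.integers(0, 2, n)                 # randomized assignment
age = rng.normal(0, 1, n)                     # baseline covariate
# true outcome model includes a treatment-by-age interaction
y = rng.poisson(np.exp(0.3 * treat + 0.8 * age + 0.4 * treat * age))

# main-terms Poisson working model (deliberately omits the interaction)
X = sm.add_constant(np.column_stack([treat, age]))
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
# with these generating values the marginal log rate ratio is about 0.7
print("treatment coefficient:", round(fit.params[1], 3))
```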

  12. Measurement and decomposition of energy efficiency of Northeast China-based on super efficiency DEA model and Malmquist index.

    Science.gov (United States)

    Ma, Xiaojun; Liu, Yan; Wei, Xiaoxue; Li, Yifan; Zheng, Mengchen; Li, Yudong; Cheng, Chaochao; Wu, Yumei; Liu, Zhaonan; Yu, Yuanbo

    2017-08-01

    Nowadays, environment problem has become the international hot issue. Experts and scholars pay more and more attention to the energy efficiency. Unlike most studies, which analyze the changes of TFEE in inter-provincial or regional cities, TFEE is calculated with the ratio of target energy value and actual energy input based on data in cities of prefecture levels, which would be more accurate. Many researches regard TFP as TFEE to do analysis from the provincial perspective. This paper is intended to calculate more reliably by super efficiency DEA, observe the changes of TFEE, and analyze its relation with TFP, and it proves that TFP is not equal to TFEE. Additionally, the internal influences of the TFEE are obtained via the Malmquist index decomposition. The external influences of the TFFE are analyzed afterward based on the Tobit models. Analysis results demonstrate that Heilongjiang has the highest TFEE followed by Jilin, and Liaoning has the lowest TFEE. Eventually, some policy suggestions are proposed for the influences of energy efficiency and study results.

  13. Linear mixed models for replication data to efficiently allow for covariate measurement error.

    Science.gov (United States)

    Bartlett, Jonathan W; De Stavola, Bianca L; Frost, Chris

    2009-11-10

    It is well known that measurement error in the covariates of regression models generally causes bias in parameter estimates. Correction for such biases requires information concerning the measurement error, which is often in the form of internal validation or replication data. Regression calibration (RC) is a popular approach to correct for covariate measurement error, which involves predicting the true covariate using error-prone measurements. Likelihood methods have previously been proposed as an alternative approach to estimate the parameters in models affected by measurement error, but have been relatively infrequently employed in medical statistics and epidemiology, partly because of computational complexity and concerns regarding robustness to distributional assumptions. We show how a standard random-intercepts model can be used to obtain maximum likelihood (ML) estimates when the outcome model is linear or logistic regression under certain normality assumptions, when internal error-prone replicate measurements are available. Through simulations we show that for linear regression, ML gives more efficient estimates than RC, although the gain is typically small. Furthermore, we show that RC and ML estimates remain consistent even when the normality assumptions are violated. For logistic regression, our implementation of ML is consistent if the true covariate is conditionally normal given the outcome, in contrast to RC. In simulations, this ML estimator showed less bias in situations where RC gives non-negligible biases. Our proposal makes the ML approach to dealing with covariate measurement error more accessible to researchers, which we hope will improve its viability as a useful alternative to methods such as RC.
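
    The regression calibration (RC) idea referenced above is easy to demonstrate with replicate measurements. In the hedged sketch below, two error-prone replicates of a covariate are used to estimate the measurement-error variance, the best linear predictor of the true covariate is substituted into an ordinary regression, and the result is compared with the naive, attenuated fit. All simulation parameters are invented; the paper's ML alternative (a random-intercepts model) is not shown.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 1000
x_true = rng.normal(0, 1, n)                    # unobserved true covariate
w1 = x_true + rng.normal(0, 0.7, n)             # error-prone replicate 1
w2 = x_true + rng.normal(0, 0.7, n)             # error-prone replicate 2
y = 1.0 + 0.5 * x_true + rng.normal(0, 1, n)    # outcome model

# naive fit on a single replicate: slope is attenuated toward zero
naive = sm.OLS(y, sm.add_constant(w1)).fit()

# regression calibration: replace W by the best linear predictor of X given W-bar
w_bar = (w1 + w2) / 2
sigma2_u = np.var(w1 - w2, ddof=1) / 2          # measurement-error variance
sigma2_x = np.var(w_bar, ddof=1) - sigma2_u / 2 # variance of the true covariate
shrink = sigma2_x / (sigma2_x + sigma2_u / 2)
x_hat = w_bar.mean() + shrink * (w_bar - w_bar.mean())
rc = sm.OLS(y, sm.add_constant(x_hat)).fit()

print("naive slope:", round(naive.params[1], 3), "RC slope:", round(rc.params[1], 3))
```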

  14. Modeling low cost hybrid tandem photovoltaics with the potential for efficiencies exceeding 20%

    KAUST Repository

    Beiley, Zach M.

    2012-01-01

    It is estimated that for photovoltaics to reach grid parity around the planet, they must be made with costs under $0.50 per W p and must also achieve power conversion efficiencies above 20% in order to keep installation costs down. In this work we explore a novel solar cell architecture, a hybrid tandem photovoltaic (HTPV), and show that it is capable of meeting these targets. HTPV is composed of an inexpensive and low temperature processed solar cell, such as an organic or dye-sensitized solar cell, that can be printed on top of one of a variety of more traditional inorganic solar cells. Our modeling shows that an organic solar cell may be added on top of a commercial CIGS cell to improve its efficiency from 15.1% to 21.4%, thereby reducing the cost of the modules by ∼15% to 20% and the cost of installation by up to 30%. This suggests that HTPV is a promising option for producing solar power that matches the cost of existing grid energy. © 2012 The Royal Society of Chemistry.

  15. Pipeline for Efficient Mapping of Transcription Factor Binding Sites and Comparison of Their Models

    KAUST Repository

    Ba alawi, Wail

    2011-06-01

    The control of genes in every living organism is based on activities of transcription factor (TF) proteins. These TFs interact with DNA by binding to the TF binding sites (TFBSs) and in that way create conditions for the genes to activate. Of the approximately 1500 TFs in human, TFBSs are experimentally derived only for less than 300 TFs and only in generally limited portions of the genome. To be able to associate TF to genes they control we need to know if TFs will have a potential to interact with the control region of the gene. For this we need to have models of TFBS families. The existing models are not sufficiently accurate or they are too complex for use by ordinary biologists. To remove some of the deficiencies of these models, in this study we developed a pipeline through which we achieved the following: 1. Through a comparison analysis of the performance we identified the best models with optimized thresholds among the four different types of models of TFBS families. 2. Using the best models we mapped TFBSs to the human genome in an efficient way. The study shows that a new scoring function used with TFBS models based on the position weight matrix of dinucleotides with remote dependency results in better accuracy than the other three types of the TFBS models. The speed of mapping has been improved by developing a parallelized code and shows a significant speed up of 4x when going from 1 CPU to 8 CPUs. To verify if the predicted TFBSs are more accurate than what can be expected with the conventional models, we identified the most frequent pairs of TFBSs (for TFs E4F1 and ATF6) that appeared close to each other (within the distance of 200 nucleotides) over the human genome. We show unexpectedly that the genes that are most close to the multiple pairs of E4F1/ATF6 binding sites have a co-expression of over 90%. This indirectly supports our hypothesis that the TFBS models we use are more accurate and also suggests that the E4F1/ATF6 pair is exerting the
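
    As background for the model comparison described above, a basic position weight matrix (PWM) scan of a DNA sequence can be written as follows. The dinucleotide model with remote dependencies favored by the study is more elaborate than this mononucleotide log-odds sketch, and the motif counts here are invented.

```python
import numpy as np

BASES = "ACGT"

def pwm_from_counts(counts, pseudo=0.5, background=0.25):
    """Log-odds position weight matrix from a (motif_length x 4) count matrix."""
    counts = np.asarray(counts, dtype=float) + pseudo
    probs = counts / counts.sum(axis=1, keepdims=True)
    return np.log2(probs / background)

def scan(seq, pwm):
    """Score every window of a DNA sequence; higher = better match to the motif."""
    idx = np.array([BASES.index(b) for b in seq])
    L = pwm.shape[0]
    return np.array([pwm[np.arange(L), idx[i:i + L]].sum()
                     for i in range(len(idx) - L + 1)])

# hypothetical 4-bp motif counts (columns A, C, G, T per position)
counts = [[9, 0, 1, 0],   # strongly prefers A
          [0, 8, 1, 1],   # prefers C
          [0, 0, 9, 1],   # prefers G
          [1, 0, 0, 9]]   # prefers T
pwm = pwm_from_counts(counts)
scores = scan("TTACGTGGACGT", pwm)
print(scores.round(2), "best hit at offset", int(scores.argmax()))
```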

  16. 78 FR 79579 - Energy Conservation Program: Alternative Efficiency Determination Methods, Basic Model Definition...

    Science.gov (United States)

    2013-12-31

    ... DEPARTMENT OF ENERGY 10 CFR Parts 429 and 431 [Docket No. EERE-2011-BT-TP-0024] RIN 1904-AC46 Energy Conservation Program: Alternative Efficiency Determination Methods, Basic Model Definition, and Compliance for Commercial HVAC, Refrigeration, and WH Equipment AGENCY: Office of Energy Efficiency and...

  17. The efficiency of OLS estimator in the linear-regression model with ...

    African Journals Online (AJOL)

    Bounds for the efficiency of ordinary least squares estimator relative to generalized least squares estimator in the linear regression model with first-order spatial error process are given. SINET: Ethiopian Journal of Science Vol. 24, No. 1 (June 2001), pp. 17-33. Key words/phrases: Efficiency, generalized least squares, ...

  18. Efficient speaker verification using Gaussian mixture model component clustering.

    Energy Technology Data Exchange (ETDEWEB)

    De Leon, Phillip L. (New Mexico State University, Las Cruces, NM); McClanahan, Richard D.

    2012-04-01

    In speaker verification (SV) systems that employ a support vector machine (SVM) classifier to make decisions on a supervector derived from Gaussian mixture model (GMM) component mean vectors, a significant portion of the computational load is involved in the calculation of the a posteriori probability of the feature vectors of the speaker under test with respect to the individual component densities of the universal background model (UBM). Further, the calculation of the sufficient statistics for the weight, mean, and covariance parameters derived from these same feature vectors also contribute a substantial amount of processing load to the SV system. In this paper, we propose a method that utilizes clusters of GMM-UBM mixture component densities in order to reduce the computational load required. In the adaptation step we score the feature vectors against the clusters and calculate the a posteriori probabilities and update the statistics exclusively for mixture components belonging to appropriate clusters. Each cluster is a grouping of multivariate normal distributions and is modeled by a single multivariate distribution. As such, the set of multivariate normal distributions representing the different clusters also form a GMM. This GMM is referred to as a hash GMM which can be considered to a lower resolution representation of the GMM-UBM. The mapping that associates the components of the hash GMM with components of the original GMM-UBM is referred to as a shortlist. This research investigates various methods of clustering the components of the GMM-UBM and forming hash GMMs. Of five different methods that are presented one method, Gaussian mixture reduction as proposed by Runnall's, easily outperformed the other methods. This method of Gaussian reduction iteratively reduces the size of a GMM by successively merging pairs of component densities. Pairs are selected for merger by using a Kullback-Leibler based metric. Using Runnal's method of reduction, we
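
    The Gaussian mixture reduction step attributed to Runnalls can be illustrated with a moment-preserving merge of two weighted components and the Kullback-Leibler-based cost that ranks candidate pairs. The toy mixture below is made up, and a practical hash-GMM construction would of course start from a full UBM rather than three components.

```python
import numpy as np

def merge_pair(w1, m1, P1, w2, m2, P2):
    """Moment-preserving merge of two weighted Gaussian components."""
    w = w1 + w2
    a, b = w1 / w, w2 / w
    m = a * m1 + b * m2
    d1, d2 = (m1 - m)[:, None], (m2 - m)[:, None]
    P = a * (P1 + d1 @ d1.T) + b * (P2 + d2 @ d2.T)
    return w, m, P

def merge_cost(w1, m1, P1, w2, m2, P2):
    """Runnalls-style upper bound on the KL discrimination caused by the merge."""
    _, _, P = merge_pair(w1, m1, P1, w2, m2, P2)
    return 0.5 * ((w1 + w2) * np.log(np.linalg.det(P))
                  - w1 * np.log(np.linalg.det(P1))
                  - w2 * np.log(np.linalg.det(P2)))

# toy reduction: repeatedly merge the cheapest pair of a small GMM
comps = [(0.40, np.array([0.0, 0.0]), np.eye(2)),
         (0.35, np.array([0.2, 0.1]), np.eye(2) * 1.2),
         (0.25, np.array([4.0, 4.0]), np.eye(2))]
while len(comps) > 2:
    pairs = [(merge_cost(*comps[i], *comps[j]), i, j)
             for i in range(len(comps)) for j in range(i + 1, len(comps))]
    _, i, j = min(pairs)
    merged = merge_pair(*comps[i], *comps[j])
    comps = [c for k, c in enumerate(comps) if k not in (i, j)] + [merged]
print([round(w, 2) for w, _, _ in comps])   # the two nearby components get merged
```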

  19. Modeling efficiency and water balance in PEM fuel cell systems with liquid fuel processing and hydrogen membranes

    Science.gov (United States)

    Pearlman, Joshua B.; Bhargav, Atul; Shields, Eric B.; Jackson, Gregory S.; Hearn, Patrick L.

    Integrating PEM fuel cells effectively with liquid hydrocarbon reforming requires careful system analysis to assess trade-offs associated with H2 production, purification, and overall water balance. To this end, a model of a PEM fuel cell system integrated with an autothermal reformer for liquid hydrocarbon fuels (modeled as C12H23) and with H2 purification in a water-gas-shift/membrane reactor is developed to do iterative calculations for mass, species, and energy balances at a component and system level. The model evaluates system efficiency with parasitic loads (from compressors, pumps, and cooling fans), system water balance, and component operating temperatures/pressures. Model results for a 5-kW fuel cell generator show that with state-of-the-art PEM fuel cell polarization curves, thermal efficiencies >30% can be achieved when power densities are low enough for operating voltages >0.72 V per cell. Efficiency can be increased by operating the reformer at steam-to-carbon ratios as high as constraints related to stable reactor temperatures allow. Decreasing ambient temperature improves system water balance and increases efficiency through parasitic load reduction. The baseline configuration studied herein sustained water balance for ambient temperatures ≤35 °C at full power and ≤44 °C at half power with efficiencies approaching ∼27 and ∼30%, respectively.
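
    The kind of system-level efficiency bookkeeping described above reduces, at its simplest, to tracking fuel chemical power, reformer and purification losses, stack conversion, and parasitic loads. The numbers in the sketch below are illustrative placeholders chosen to land near the quoted ~30% figure; they are not values taken from the paper's model.

```python
# Illustrative net-efficiency bookkeeping for a reformer + PEM stack system.
# All numbers are made-up placeholders, not values from the paper.
LHV_FUEL = 43.0e6          # J/kg, liquid hydrocarbon lower heating value
fuel_rate = 4.0e-4         # kg/s fed to the autothermal reformer
eta_reform_h2 = 0.75       # fraction of fuel LHV recovered as purified H2
eta_stack = 0.50           # stack electrical efficiency on the delivered H2
p_parasitic = 1200.0       # W: compressors, pumps, cooling fans

p_fuel = fuel_rate * LHV_FUEL                 # chemical power of the fuel feed
p_h2 = p_fuel * eta_reform_h2                 # chemical power in purified H2
p_gross = p_h2 * eta_stack                    # stack electrical output
p_net = p_gross - p_parasitic                 # net output after parasitic loads
print(f"gross {p_gross:.0f} W, net {p_net:.0f} W, "
      f"system efficiency {p_net / p_fuel:.1%}")
```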

  20. Modeling and optimization of processes for clean and efficient pulverized coal combustion in utility boilers

    Directory of Open Access Journals (Sweden)

    Belošević Srđan V.

    2016-01-01

    Full Text Available Pulverized coal-fired power plants should provide higher efficiency of energy conversion, flexibility in terms of boiler loads and fuel characteristics and emission reduction of pollutants like nitrogen oxides. Modification of combustion process is a cost-effective technology for NOx control. For optimization of complex processes, such as turbulent reactive flow in coal-fired furnaces, mathematical modeling is regularly used. The NOx emission reduction by combustion modifications in the 350 MWe Kostolac B boiler furnace, tangentially fired by pulverized Serbian lignite, is investigated in the paper. Numerical experiments were done by an in-house developed three-dimensional differential comprehensive combustion code, with fuel- and thermal-NO formation/destruction reactions model. The code was developed to be easily used by engineering staff for process analysis in boiler units. A broad range of operating conditions was examined, such as fuel and preheated air distribution over the burners and tiers, operation mode of the burners, grinding fineness and quality of coal, boiler loads, cold air ingress, recirculation of flue gases, water-walls ash deposition and combined effect of different parameters. The predictions show that the NOx emission reduction of up to 30% can be achieved by a proper combustion organization in the case-study furnace, with the flame position control. Impact of combustion modifications on the boiler operation was evaluated by the boiler thermal calculations suggesting that the facility was to be controlled within narrow limits of operation parameters. Such a complex approach to pollutants control enables evaluating alternative solutions to achieve efficient and low emission operation of utility boiler units. [Projekat Ministarstva nauke Republike Srbije, br. TR-33018: Increase in energy and ecology efficiency of processes in pulverized coal-fired furnace and optimization of utility steam boiler air preheater by using in

  1. Modeling of efficient solid-state cooler on layered multiferroics.

    Science.gov (United States)

    Starkov, Ivan; Starkov, Alexander

    2014-08-01

    We have developed theoretical foundations for the design and optimization of a solid-state cooler working through caloric and multicaloric effects. This approach is based on the careful consideration of the thermodynamics of a layered multiferroic system. The main section of the paper is devoted to the derivation and solution of the heat conduction equation for multiferroic materials. On the basis of the obtained results, we have performed the evaluation of the temperature distribution in the refrigerator under periodic external fields. A few practical examples are considered to illustrate the model. It is demonstrated that a 40-mm structure made of 20 ferroic layers is able to create a temperature difference of 25K. The presented work tries to address the whole hierarchy of physical phenomena to capture all of the essential aspects of solid-state cooling.

  2. Pridopidine, a dopamine stabilizer, improves motor performance and shows neuroprotective effects in Huntington disease R6/2 mouse model.

    Science.gov (United States)

    Squitieri, Ferdinando; Di Pardo, Alba; Favellato, Mariagrazia; Amico, Enrico; Maglione, Vittorio; Frati, Luigi

    2015-11-01

    Huntington disease (HD) is a neurodegenerative disorder for which new treatments are urgently needed. Pridopidine is a new dopaminergic stabilizer, recently developed for the treatment of motor symptoms associated with HD. The therapeutic effect of pridopidine in patients with HD has been determined in two double-blind randomized clinical trials, however, whether pridopidine exerts neuroprotection remains to be addressed. The main goal of this study was to define the potential neuroprotective effect of pridopidine, in HD in vivo and in vitro models, thus providing evidence that might support a potential disease-modifying action of the drug and possibly clarifying other aspects of pridopidine mode-of-action. Our data corroborated the hypothesis of neuroprotective action of pridopidine in HD experimental models. Administration of pridopidine protected cells from apoptosis, and resulted in highly improved motor performance in R6/2 mice. The anti-apoptotic effect observed in the in vitro system highlighted neuroprotective properties of the drug, and advanced the idea of sigma-1-receptor as an additional molecular target implicated in the mechanism of action of pridopidine. Coherent with protective effects, pridopidine-mediated beneficial effects in R6/2 mice were associated with an increased expression of pro-survival and neurostimulatory molecules, such as brain derived neurotrophic factor and DARPP32, and with a reduction in the size of mHtt aggregates in striatal tissues. Taken together, these findings support the theory of pridopidine as molecule with disease-modifying properties in HD and advance the idea of a valuable therapeutic strategy for effectively treating the disease. © 2015 The Authors. Journal of Cellular and Molecular Medicine published by John Wiley & Sons Ltd and Foundation for Cellular and Molecular Medicine.

  3. Efficient time-domain model of the graphene dielectric function

    Science.gov (United States)

    Prokopeva, Ludmila J.; Kildishev, Alexander V.

    2013-09-01

    A honey-comb monolayer lattice of carbon atoms, graphene, is not only ultra-thin, ultra-light, flexible and strong, but also highly conductive when doped and exhibits strong interaction with electromagnetic radiation in the spectral range from microwaves to the ultraviolet. Moreover, this interaction can be effectively controlled electrically. High flexibility and conductivity makes graphene an attractive material for numerous photonic applications requiring transparent conducting electrodes: touchscreens, liquid crystal displays, organic photovoltaic cells, and organic light-emitting diodes. Meanwhile, its tunability makes it desirable for optical modulators, tunable filters and polarizers. This paper deals with the basics of the time-domain modeling of the graphene dielectric function under a random-phase approximation. We focus at applicability of Padé approximants to the interband dielectric function (IDF) of single layer graphene. Our study is centered on the development of a two-critical points approximation (2CPA) of the IDF within a single-electron framework with negligible carrier scattering and a realistic range of chemical potential at room temperature. This development is successfully validated by comparing reflection and transmission spectra computed by a numerical method in time-domain versus semi-analytical calculations in frequency domain. Finally, we sum up our results - (1) high-quality approximation, (2) tunability, and (3) second-order accurate numerical FDTD implementation of the 2CPA of IDF demonstrated across the desired range of the chemical potential to temperature ratios (4 - 23). Finally, we put forward future directions for time-domain modeling of optical response of graphene with wide range of tunable and fabrication-dependent parameters, including other broadening factors and variations of temperature and chemical potentials.

  4. Backtracking search algorithm in CVRP models for efficient solid waste collection and route optimization.

    Science.gov (United States)

    Akhtar, Mahmuda; Hannan, M A; Begum, R A; Basri, Hassan; Scavino, Edgar

    2017-03-01

    Waste collection is an important part of waste management that involves different issues, including environmental, economic, and social, among others. Waste collection optimization can reduce the waste collection budget and environmental emissions by reducing the collection route distance. This paper presents a modified Backtracking Search Algorithm (BSA) in capacitated vehicle routing problem (CVRP) models with the smart bin concept to find the best optimized waste collection route solutions. The objective function minimizes the sum of the waste collection route distances. The study introduces the concept of the threshold waste level (TWL) of waste bins to reduce the number of bins to be emptied by finding an optimal range, thus minimizing the distance. A scheduling model is also introduced to compare the feasibility of the proposed model with that of the conventional collection system in terms of travel distance, collected waste, fuel consumption, fuel cost, efficiency and CO2 emission. The optimal TWL was found to be between 70% and 75% of the fill level of waste collection nodes and had the maximum tightness value for different problem cases. The obtained results for four days show a 36.80% distance reduction for 91.40% of the total waste collection, which eventually increases the average waste collection efficiency by 36.78% and reduces the fuel consumption, fuel cost and CO2 emission by 50%, 47.77% and 44.68%, respectively. Thus, the proposed optimization model can be considered a viable tool for optimizing waste collection routes to reduce economic costs and environmental impacts. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. A Flexible, Efficient Binomial Mixed Model for Identifying Differential DNA Methylation in Bisulfite Sequencing Data

    Science.gov (United States)

    Lea, Amanda J.

    2015-01-01

    Identifying sources of variation in DNA methylation levels is important for understanding gene regulation. Recently, bisulfite sequencing has become a popular tool for investigating DNA methylation levels. However, modeling bisulfite sequencing data is complicated by dramatic variation in coverage across sites and individual samples, and because of the computational challenges of controlling for genetic covariance in count data. To address these challenges, we present a binomial mixed model and an efficient, sampling-based algorithm (MACAU: Mixed model association for count data via data augmentation) for approximate parameter estimation and p-value computation. This framework allows us to simultaneously account for both the over-dispersed, count-based nature of bisulfite sequencing data, as well as genetic relatedness among individuals. Using simulations and two real data sets (whole genome bisulfite sequencing (WGBS) data from Arabidopsis thaliana and reduced representation bisulfite sequencing (RRBS) data from baboons), we show that our method provides well-calibrated test statistics in the presence of population structure. Further, it improves power to detect differentially methylated sites: in the RRBS data set, MACAU detected 1.6-fold more age-associated CpG sites than a beta-binomial model (the next best approach). Changes in these sites are consistent with known age-related shifts in DNA methylation levels, and are enriched near genes that are differentially expressed with age in the same population. Taken together, our results indicate that MACAU is an efficient, effective tool for analyzing bisulfite sequencing data, with particular salience to analyses of structured populations. MACAU is freely available at www.xzlab.org/software.html. PMID:26599596
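
    A stripped-down, fixed-effects analogue of the per-site test can be set up with a binomial GLM on (methylated, unmethylated) read counts, as sketched below. Unlike MACAU's binomial mixed model it ignores genetic relatedness and over-dispersion, and the simulated coverage and age effect are invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 40
age = rng.uniform(4, 20, n)                     # predictor of interest
coverage = rng.integers(5, 60, n)               # read depth varies by sample
p = 1 / (1 + np.exp(-(-1.0 + 0.08 * age)))      # methylation rises with age
meth = rng.binomial(coverage, p)                # methylated read counts per sample

# binomial GLM on (methylated, unmethylated) counts for one CpG site;
# a fixed-effects stand-in for the binomial mixed model (no relatedness term)
endog = np.column_stack([meth, coverage - meth])
X = sm.add_constant(age)
fit = sm.GLM(endog, X, family=sm.families.Binomial()).fit()
print("age effect:", round(fit.params[1], 3), "p-value:", round(fit.pvalues[1], 4))
```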

  6. Zonulin transgenic mice show altered gut permeability and increased morbidity/mortality in the DSS colitis model.

    Science.gov (United States)

    Sturgeon, Craig; Lan, Jinggang; Fasano, Alessio

    2017-06-01

    Increased small intestinal permeability (IP) has been proposed to be an integral element, along with genetic makeup and environmental triggers, in the pathogenesis of chronic inflammatory diseases (CIDs). We identified zonulin as a master regulator of intercellular tight junctions linked to the development of several CIDs. We aim to study the role of zonulin-mediated IP in the pathogenesis of CIDs. Zonulin transgenic Hp2 mice (Ztm) were subjected to dextran sodium sulfate (DSS) treatment for 7 days, followed by 4-7 days' recovery and compared to C57Bl/6 (wild-type (WT)) mice. IP was measured in vivo and ex vivo, and weight, histology, and survival were monitored. To mechanistically link zonulin-dependent impairment of small intestinal barrier function with clinical outcome, Ztm were treated with the zonulin inhibitor AT1001 added to drinking water in addition to DSS. We observed increased morbidity (more pronounced weight loss and colitis) and mortality (40-70% compared with 0% in WT) at 11 days post-DSS treatment in Ztm compared with WT mice. Both in vivo and ex vivo measurements showed an increased IP at baseline in Ztm compared to WT mice, which was exacerbated by DSS treatment and was associated with upregulation of zonulin gene expression (fourfold in the duodenum, sixfold in the jejunum). Treatment with AT1001 prevented the DSS-induced increase in IP both in vivo and ex vivo without changing zonulin gene expression and completely reversed morbidity and mortality in Ztm. Our data show that zonulin-dependent small intestinal barrier impairment is an early step leading to the break of tolerance with subsequent development of CIDs. © 2017 New York Academy of Sciences.

  7. Research on CO2 ejector component efficiencies by experiment measurement and distributed-parameter modeling

    International Nuclear Information System (INIS)

    Zheng, Lixing; Deng, Jianqiang

    2017-01-01

    Highlights: • The ejector distributed-parameter model is developed to study ejector efficiencies. • Feasible component and total efficiency correlations of ejector are established. • New efficiency correlations are applied to obtain dynamic characteristics of EERC. • More suitable fixed efficiency value can be determined by the proposed correlations. - Abstract: In this study we combine the experimental measurement data and the theoretical model of ejector to determine CO 2 ejector component efficiencies including the motive nozzle, suction chamber, mixing section, diffuser as well as the total ejector efficiency. The ejector is modeled utilizing the distributed-parameter method, and the flow passage is divided into a number of elements and the governing equations are formulated based on the differential equation of mass, momentum and energy conservation. The efficiencies of ejector are investigated under different ejector geometric parameters and operational conditions, and the corresponding empirical correlations are established. Moreover, the correlations are incorporated into a transient model of transcritical CO 2 ejector expansion refrigeration cycle (EERC) and the dynamic simulations is performed based on variable component efficiencies and fixed values. The motive nozzle, suction chamber, mixing section and diffuser efficiencies vary from 0.74 to 0.89, 0.86 to 0.96, 0.73 to 0.9 and 0.75 to 0.95 under the studied conditions, respectively. The response diversities of suction flow pressure and discharge pressure are obvious between the variable efficiencies and fixed efficiencies referring to the previous studies, while when the fixed value is determined by the presented correlations, their response differences are basically the same.

  8. Model Orlando regionally efficient travel management coordination center (MORE TMCC), phase II : final report.

    Science.gov (United States)

    2012-09-01

    The final report for the Model Orlando Regionally Efficient Travel Management Coordination Center (MORE TMCC) presents the details of : the 2-year process of the partial deployment of the original MORE TMCC design created in Phase I of this project...

  9. Evaluating transit operator efficiency: An enhanced DEA model with constrained fuzzy-AHP cones

    OpenAIRE

    Xin Li; Yue Liu; Yaojun Wang; Zhigang Gao

    2016-01-01

    This study addresses efforts to comb the Analytic Hierarchy Process (AHP) with Data Envelopment Analysis (DEA) to deliver a robust enhanced DEA model for transit operator efficiency assessment. The proposed model is designed to better capture inherent preferences information over input and output indicators by adding constraint cones to the conventional DEA model. A revised fuzzy-AHP model is employed to generate cones, where the proposed model features the integration of the fuzzy logic with...

  10. LEADERSHIP MODELS AND EFFICIENCY IN DECISION CRISIS SITUATIONS, DURING DISASTERS

    Directory of Open Access Journals (Sweden)

    JAIME RIQUELME CASTAÑEDA

    2017-09-01

    Full Text Available This article explains how effective leadership is exercised within a team during an emergency, in a decision crisis in the context of a disaster. From a process approach, we analyze variables such as flexibility, value congruence, rationality, politicization, and quality of design. To achieve that, we conducted field work with information obtained from the three emergency headquarters deployed by the Chilean Armed Forces in response to the magnitude-8.8 earthquake of February 27th, 2010. The data are analyzed using econometric techniques. The results suggest that original ideas and rigorous analysis are the keys to securing the quality of decisions. They also reveal that efficient operations in a disaster require a strong presence of vision, mission, and inspiration built on a solid, pre-existing base of goals and motivations. Finally, we find support for the relationship between types of leadership and efficiency in the crisis decision-making process during a disaster, which opens a space for building a theoretical model of decision making.

  11. Strategies for efficient numerical implementation of hybrid multi-scale agent-based models to describe biological systems.

    Science.gov (United States)

    Cilfone, Nicholas A; Kirschner, Denise E; Linderman, Jennifer J

    2015-03-01

    Biologically related processes operate across multiple spatiotemporal scales. For computational modeling methodologies to mimic this biological complexity, individual scale models must be linked in ways that allow for dynamic exchange of information across scales. A powerful methodology is to combine a discrete modeling approach, agent-based models (ABMs), with continuum models to form hybrid models. Hybrid multi-scale ABMs have been used to simulate emergent responses of biological systems. Here, we review two aspects of hybrid multi-scale ABMs: linking individual scale models and efficiently solving the resulting model. We discuss the computational choices associated with aspects of linking individual scale models while simultaneously maintaining model tractability. We demonstrate implementations of existing numerical methods in the context of hybrid multi-scale ABMs. Using an example model describing Mycobacterium tuberculosis infection, we show relative computational speeds of various combinations of numerical methods. Efficient linking and solution of hybrid multi-scale ABMs is key to model portability, modularity, and their use in understanding biological phenomena at a systems level.

  12. Numerical modeling of positive streamer in air in nonuniform fields: Efficiency of radicals production

    International Nuclear Information System (INIS)

    Kulikovsky, A.A.

    2001-01-01

    The efficiency of streamer corona depends on a number of factors such as the geometry of the electrodes, voltage pulse parameters, gas pressure, etc. In the past 5 years, two-dimensional models of streamers in nonuniform fields in air have been developed. These models make it possible to simulate streamer dynamics and the generation of species, and to investigate the influence of external parameters on species production. In this work the influence of the Laplacian field on the efficiency of radical generation is investigated

  13. Four shells atomic model to compute the counting efficiency of electron-capture nuclides

    International Nuclear Information System (INIS)

    Grau Malonda, A.; Fernandez Martinez, A.

    1985-01-01

    The present paper develops a four-shells atomic model in order to obtain the efficiency of detection in liquid scintillation counting. Mathematical expressions are given to calculate the probabilities of the 229 different atomic rearrangements as well as the corresponding effective energies. This new model will permit the study of the influence of the different parameters upon the counting efficiency for nuclides of high atomic number. (Author) 7 refs

  14. Network models of TEM β-lactamase mutations coevolving under antibiotic selection show modular structure and anticipate evolutionary trajectories.

    Science.gov (United States)

    Guthrie, Violeta Beleva; Allen, Jennifer; Camps, Manel; Karchin, Rachel

    2011-09-01

    Understanding how novel functions evolve (genetic adaptation) is a critical goal of evolutionary biology. Among asexual organisms, genetic adaptation involves multiple mutations that frequently interact in a non-linear fashion (epistasis). Non-linear interactions pose a formidable challenge for the computational prediction of mutation effects. Here we use the recent evolution of β-lactamase under antibiotic selection as a model for genetic adaptation. We build a network of coevolving residues (possible functional interactions), in which nodes are mutant residue positions and links represent two positions found mutated together in the same sequence. Most often these pairs occur in the setting of more complex mutants. Focusing on extended-spectrum resistant sequences, we use network-theoretical tools to identify triple mutant trajectories of likely special significance for adaptation. We extrapolate evolutionary paths (n = 3) that increase resistance and that are longer than the units used to build the network (n = 2). These paths consist of a limited number of residue positions and are enriched for known triple mutant combinations that increase cefotaxime resistance. We find that the pairs of residues used to build the network frequently decrease resistance compared to their corresponding singlets. This is a surprising result, given that their coevolution suggests a selective advantage. Thus, β-lactamase adaptation is highly epistatic. Our method can identify triplets that increase resistance despite the underlying rugged fitness landscape and has the unique ability to make predictions by placing each mutant residue position in its functional context. Our approach requires only sequence information, sufficient genetic diversity, and discrete selective pressures. Thus, it can be used to analyze recent evolutionary events, where coevolution analysis methods that use phylogeny or statistical coupling are not possible. Improving our ability to assess

  15. Amniotic fluid stem cells with low γ-interferon response showed behavioral improvement in Parkinsonism rat model.

    Directory of Open Access Journals (Sweden)

    Yu-Jen Chang

    Full Text Available Amniotic fluid stem cells (AFSCs are multipotent stem cells that may be used in transplantation medicine. In this study, AFSCs established from amniocentesis were characterized on the basis of surface marker expression and differentiation potential. To further investigate the properties of AFSCs for translational applications, we examined the cell surface expression of human leukocyte antigens (HLA of these cells and estimated the therapeutic effect of AFSCs in parkinsonian rats. The expression profiles of HLA-II and transcription factors were compared between AFSCs and bone marrow-derived mesenchymal stem cells (BMMSCs following treatment with γ-IFN. We found that stimulation of AFSCs with γ-IFN prompted only a slight increase in the expression of HLA-Ia and HLA-E, and the rare HLA-II expression could also be observed in most AFSCs samples. Consequently, the expression of CIITA and RFX5 was weakly induced by γ-IFN stimulation of AFSCs compared to that of BMMSCs. In the transplantation test, Sprague Dawley rats with 6-hydroxydopamine lesioning of the substantia nigra were used as a parkinsonian-animal model. Following the negative γ-IFN response AFSCs injection, apomorphine-induced rotation was reduced by 75% in AFSCs engrafted parkinsonian rats but was increased by 53% in the control group after 12-weeks post-transplantation. The implanted AFSCs were viable, and were able to migrate into the brain's circuitry and express specific proteins of dopamine neurons, such as tyrosine hydroxylase and dopamine transporter. In conclusion, the relative insensitivity AFSCs to γ-IFN implies that AFSCs might have immune-tolerance in γ-IFN inflammatory conditions. Furthermore, the effective improvement of AFSCs transplantation for apomorphine-induced rotation paves the way for the clinical application in parkinsonian therapy.

  16. Molecular Simulation towards Efficient and Representative Subsurface Reservoirs Modeling

    KAUST Repository

    Kadoura, Ahmad

    2016-09-01

    This dissertation focuses on the application of Monte Carlo (MC) molecular simulation and Molecular Dynamics (MD) in modeling thermodynamics and flow of subsurface reservoir fluids. At first, MC molecular simulation is proposed as a promising method to replace correlations and equations of state in subsurface flow simulators. In order to accelerate MC simulations, a set of early rejection schemes (conservative, hybrid, and non-conservative) in addition to extrapolation methods through reweighting and reconstruction of pre-generated MC Markov chains were developed. Furthermore, an extensive study was conducted to investigate sorption and transport processes of methane, carbon dioxide, water, and their mixtures in the inorganic part of shale using both MC and MD simulations. These simulations covered a wide range of thermodynamic conditions, pore sizes, and fluid compositions shedding light on several interesting findings. For example, the possibility to have more carbon dioxide adsorbed with more preadsorbed water concentrations at relatively large basal spaces. The dissertation is divided into four chapters. The first chapter corresponds to the introductory part where a brief background about molecular simulation and motivations are given. The second chapter is devoted to discuss the theoretical aspects and methodology of the proposed MC speeding up techniques in addition to the corresponding results leading to the successful multi-scale simulation of the compressible single-phase flow scenario. In chapter 3, the results regarding our extensive study on shale gas at laboratory conditions are reported. At the fourth and last chapter, we end the dissertation with few concluding remarks highlighting the key findings and summarizing the future directions.

  17. AN INTEGRATED MODELING FRAMEWORK FOR ENVIRONMENTALLY EFFICIENT CAR OWNERSHIP AND TRIP BALANCE

    Directory of Open Access Journals (Sweden)

    Tao FENG

    2008-01-01

    Full Text Available Urban transport emissions generated by automobile trips are greatly responsible for atmospheric pollution in both developed and developing countries. To match the long-term target of sustainable development, it seems to be important to specify the feasible level of car ownership and travel demand from environmental considerations. This research intends to propose an integrated modeling framework for optimal construction of a comprehensive transportation system by taking into consideration environmental constraints. The modeling system is actually a combination of multiple essential models and illustrated by using a bi-level programming approach. In the upper level, the maximization of both total car ownership and total number of trips by private and public travel modes is set as the objective function and as the constraints, the total emission levels at all the zones are set to not exceed the relating environmental capacities. Maximizing the total trips by private and public travel modes allows policy makers to take into account trip balance to meet both the mobility levels required by travelers and the environmentally friendly transportation system goals. The lower level problem is a combined trip distribution and assignment model incorporating traveler's route choice behavior. A logit-type aggregate modal split model is established to connect the two level problems. In terms of the solution method for the integrated model, a genetic algorithm is applied. A case study is conducted using road network data and person-trip (PT data collected in Dalian city, China. The analysis results showed that the amount of environmentally efficient car ownership and number of trips by different travel modes could be obtained simultaneously when considering the zonal control of environmental capacity within the framework of the proposed integrated model. The observed car ownership in zones could be increased or decreased towards the macroscopic optimization

  18. Analysis of the coupling efficiency of a tapered space receiver with a calculus mathematical model

    Science.gov (United States)

    Hu, Qinggui; Mu, Yining

    2018-03-01

    We establish a calculus mathematical model to study the coupling characteristics of tapered optical fibers in a space communications system, and obtain the coupling efficiency equation. Then, using MATLAB software, the solution was calculated. After this, the sample was produced by the mature flame-brush technique. The experiment was then performed, and the results were in accordance with the theoretical analysis. This shows that the theoretical analysis was correct and indicates that a tapered structure could improve its tolerance to misalignment. Project supported by The National Natural Science Foundation of China (grant no. 61275080); 2017 Jilin Province Science and Technology Development Plan-Science and Technology Innovation Fund for Small and Medium Enterprises (20170308029HJ); ‘thirteen five’ science and technology research project of the Department of Education of Jilin 2016 (16JK009).
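
    Although the paper derives its own calculus model, the flavor of such a coupling-efficiency calculation can be conveyed with the textbook overlap-integral result for two aligned Gaussian modes of different mode-field radii with a lateral offset. The radii below are illustrative, tilt and axial mismatch are ignored, and this is not the authors' equation.

```python
import numpy as np

def gaussian_coupling(w1, w2, offset):
    """Power coupling between two aligned Gaussian modes with mode-field radii
    w1 and w2 (same units) and a transverse offset, assuming matched wavefronts.
    Standard overlap-integral result; tilt and axial mismatch are neglected."""
    s = w1 ** 2 + w2 ** 2
    return (2 * w1 * w2 / s) ** 2 * np.exp(-2 * offset ** 2 / s)

# taper narrows the receiving mode from 10.4 um to ~6 um (illustrative values);
# the larger combined mode size makes the efficiency fall off slowly with offset
for d_um in (0.0, 1.0, 2.0):
    print(d_um, "um offset ->", round(gaussian_coupling(10.4, 6.0, d_um), 3))
```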

  19. Using nested discretization for a detailed yet computationally efficient simulation of local hydrology in a distributed hydrologic model.

    Science.gov (United States)

    Wang, Dongdong; Liu, Yanlan; Kumar, Mukesh

    2018-04-10

    Fully distributed hydrologic models are often used to simulate hydrologic states at fine spatio-temporal resolutions. However, simulations based on these models may become computationally expensive, constraining their applications to smaller domains. This study demonstrates that a nested-discretization based modeling strategy can be used to improve the efficiency of distributed hydrologic simulations, especially for applications where fine resolution estimates of hydrologic states are of the focus only within a part of a watershed. To this end, we consider two applications where the goal is to capture the groundwater dynamics within a defined target area. Our results show that at the target locations, a nested simulation is able to competently replicate the estimates of groundwater table as obtained from the fine simulation, while yielding significant computational savings. The results highlight the potential of using nested discretization for a detailed yet computationally efficient estimation of hydrologic states in part of the model domain.

  20. A Fuzzy Logic Model to Classify Design Efficiency of Nursing Unit Floors

    Directory of Open Access Journals (Sweden)

    Tuğçe KAZANASMAZ

    2010-01-01

    Full Text Available This study was conducted to determine classifications for the planimetric design efficiency of certain public hospitals by developing a fuzzy logic algorithm. Utilizing primary areas and circulation areas from nursing unit floor plans, the study employed triangular membership functions for the fuzzy subsets. The input variables of primary areas per bed and circulation areas per bed were fuzzified in this model. The relationship between input variables and output variable of design efficiency were displayed as a result of fuzzy rules. To test existing nursing unit floors, efficiency output values were obtained and efficiency classes were constructed by this model in accordance with general norms, guidelines and previous studies. The classification of efficiency resulted from the comparison of hospitals.
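
    A toy version of such a fuzzy classifier, with triangular membership functions over primary and circulation areas per bed, a two-rule Mamdani-style inference, and weighted-average defuzzification, is sketched below. Every breakpoint and rule is an invented placeholder rather than the study's calibrated values.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b on the support [a, c]."""
    return float(np.clip(min((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0))

def design_efficiency(primary_per_bed, circulation_per_bed):
    """Two-rule Mamdani-style classifier; all breakpoints are illustrative only."""
    low_p = tri(primary_per_bed, 4, 9, 14)        # compact primary area per bed
    high_p = tri(primary_per_bed, 9, 18, 27)
    low_c = tri(circulation_per_bed, 1, 4, 7)     # compact circulation per bed
    high_c = tri(circulation_per_bed, 4, 10, 16)
    efficient = min(low_p, low_c)                 # rule 1: both compact -> efficient
    inefficient = max(high_p, high_c)             # rule 2: either large -> inefficient
    # weighted-average defuzzification onto a 0-100 efficiency score
    return (efficient * 80 + inefficient * 30) / max(efficient + inefficient, 1e-9)

print(round(design_efficiency(8, 3), 1), round(design_efficiency(20, 12), 1))
```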

  1. Fiscal Decentralization and Regional Financial Efficiency: An Empirical Analysis of Spatial Durbin Model

    Directory of Open Access Journals (Sweden)

    Jianmin Liu

    2016-01-01

    Full Text Available Based on panel data covering the period from 2003 to 2012 for 281 prefecture-level cities in China, we use a super-efficiency SBM model to measure regional financial efficiency and empirically test the spatial effects of fiscal decentralization on regional financial efficiency with a spatial Durbin model (SDM). The estimated results indicate that there are significant spatial spillover effects in regional financial efficiency, with the features of time inertia and spatial dependence. The positive effect of fiscal decentralization on financial efficiency in the local region depends on the symmetry between fiscal expenditure decentralization and revenue decentralization. Additionally, the spatial effects of fiscal expenditure decentralization and revenue decentralization on financial efficiency in neighboring regions are inconsistent. The negative effect of fiscal revenue decentralization on financial efficiency in neighboring regions is more significant than that of fiscal expenditure decentralization.

  2. A method to identify energy efficiency measures for factory systems based on qualitative modeling

    CERN Document Server

    Krones, Manuela

    2017-01-01

    Manuela Krones develops a method that supports factory planners in generating energy-efficient planning solutions. The method provides qualitative description concepts for factory planning tasks and energy efficiency knowledge, as well as an algorithm-based linkage between energy efficiency measures and the respective planning tasks. Its application is guided by a procedure model, which allows general applicability in the manufacturing sector. The results contain energy efficiency measures that are suitable for a specific planning task and reveal the roles of various actors in the measures’ implementation. Contents: Driving Concerns for and Barriers against Energy Efficiency; Approaches to Increase Energy Efficiency in Factories; Socio-Technical Description of Factory Planning Tasks; Description of Energy Efficiency Measures; Case Studies on Welding Processes and Logistics Systems. Target Groups: Lecturers and Students of Industrial Engineering, Production Engineering, Environmental Engineering, Mechanical Engineering; Practi...

  3. The evaluation model of the enterprise energy efficiency based on DPSR.

    Science.gov (United States)

    Wei, Jin-Yu; Zhao, Xiao-Yu; Sun, Xue-Shan

    2017-05-08

    The reasonable evaluation of enterprise energy efficiency is important for reducing energy consumption. In this paper, an effective energy efficiency evaluation index system is proposed based on DPSR (Driving forces-Pressure-State-Response), taking the actual situation of enterprises into consideration. This index system, which covers multi-dimensional indexes of enterprise energy efficiency, can reveal the complete causal chain: the "driving forces" and "pressure" behind the enterprise energy efficiency "state" caused by the internal and external environment, and the ultimate enterprise energy-saving "response" measures. Furthermore, the ANP (Analytic Network Process) and a cloud model are used to calculate the weight of each index and evaluate the energy efficiency level. The analysis of BL Company verifies the feasibility of this index system and provides an effective way to improve energy efficiency.

  4. A Phenomenological Model of Star Formation Efficiency in Dark Matter Halos

    Science.gov (United States)

    Finnegan, Daniel; Alsheshakly, Ghadeer; Moustakas, John

    2018-01-01

    The efficiency of star formation in massive dark matter halos is extraordinarily low, less than 10% in >10^13 Msun halos. Although many physical processes have been proposed to explain this low efficiency, such as feedback from supermassive black holes and massive stars, this question remains one of the most important outstanding problems in galaxy evolution. To explore this problem, we build a simple phenomenological model to predict the variations in gas fraction and star formation efficiency as a function of halo mass. We compare our model predictions to central galaxy stellar masses and halo masses drawn from the literature, and discuss plans for our future work.

  5. Fuel Efficient Diesel Particulate Filter (DPF) Modeling and Development

    Energy Technology Data Exchange (ETDEWEB)

    Stewart, Mark L.; Gallant, Thomas R.; Kim, Do Heui; Maupin, Gary D.; Zelenyuk, Alla

    2010-08-01

    The project described in this report seeks to promote effective diesel particulate filter technology with minimum fuel penalty by enhancing fundamental understanding of filtration mechanisms through targeted experiments and computer simulations. The overall backpressure of a filtration system depends upon complex interactions of particulate matter and ash with the microscopic pores in filter media. Better characterization of these phenomena is essential for exhaust system optimization. The acicular mullite (ACM) diesel particulate filter substrate is under continuing development by Dow Automotive. ACM is made up of long mullite crystals which intersect to form filter wall framework and protrude from the wall surface into the DPF channels. ACM filters have been demonstrated to effectively remove diesel exhaust particles while maintaining relatively low backpressure. Modeling approaches developed for more conventional ceramic filter materials, such as silicon carbide and cordierite, have been difficult to apply to ACM because of properties arising from its unique microstructure. Penetration of soot into the high-porosity region of projecting crystal structures leads to a somewhat extended depth filtration mode, but with less dramatic increases in pressure drop than are normally observed during depth filtration in cordierite or silicon carbide filters. Another consequence is greater contact between the soot and solid surfaces, which may enhance the action of some catalyst coatings in filter regeneration. The projecting crystals appear to provide a two-fold benefit for maintaining low backpressures during filter loading: they help prevent soot from being forced into the throats of pores in the lower porosity region of the filter wall, and they also tend to support the forming filter cake, resulting in lower average cake density and higher permeability. Other simulations suggest that soot deposits may also tend to form at the tips of projecting crystals due to the axial

  6. Applied Distributed Model Predictive Control for Energy Efficient Buildings and Ramp Metering

    Science.gov (United States)

    Koehler, Sarah Muraoka

    Industrial large-scale control problems present an interesting algorithmic design challenge. A number of controllers must cooperate in real time on a network of embedded hardware with limited computing power in order to maximize system efficiency while respecting constraints and despite communication delays. Model predictive control (MPC) can automatically synthesize a centralized controller which optimizes an objective function subject to a system model, constraints, and predictions of disturbances. Unfortunately, the computations required by model predictive controllers for large-scale systems often limit their industrial implementation to medium-scale, slow processes. Distributed model predictive control (DMPC) enters the picture as a way to decentralize a large-scale model predictive control problem. The main idea of DMPC is to split the computations required by the MPC problem amongst distributed processors that can compute in parallel and communicate iteratively to find a solution. Popular proposed solutions are distributed optimization algorithms such as dual decomposition and the alternating direction method of multipliers (ADMM). However, these algorithms ignore two practical challenges: the substantial communication delays present in control systems and problem non-convexity. This thesis presents two novel and practically effective DMPC algorithms. The first DMPC algorithm is based on a primal-dual active-set method which achieves fast convergence, making it suitable for large-scale control applications that have a large communication delay across their communication network. In particular, this algorithm is suited for MPC problems with a quadratic cost, linear dynamics, forecasted demand, and box constraints. We measure the performance of this algorithm and show that it significantly outperforms both dual decomposition and ADMM in the presence of communication delay. The second DMPC algorithm is based on an inexact interior point method which is
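
    As background for the distributed optimization algorithms mentioned above, the sketch below shows consensus ADMM applied to a toy problem in which several subsystems share one decision vector; it is not the primal-dual active-set or inexact interior point method developed in the thesis, and the subsystem data are hypothetical.

```python
import numpy as np

def consensus_admm(A_list, b_list, rho=1.0, iters=100):
    """Consensus ADMM for min over x of sum_i 0.5*||A_i x - b_i||^2.

    Each 'subsystem' i updates a local copy x_i in parallel, the copies are
    averaged into the consensus variable z, and u_i are scaled dual variables.
    """
    n = A_list[0].shape[1]
    z = np.zeros(n)
    x = [np.zeros(n) for _ in A_list]
    u = [np.zeros(n) for _ in A_list]
    for _ in range(iters):
        for i, (A, b) in enumerate(zip(A_list, b_list)):
            # Local least-squares step with proximal term (solvable in parallel).
            x[i] = np.linalg.solve(A.T @ A + rho * np.eye(n),
                                   A.T @ b + rho * (z - u[i]))
        z = np.mean([x[i] + u[i] for i in range(len(A_list))], axis=0)
        for i in range(len(A_list)):
            u[i] += x[i] - z
    return z

# Two hypothetical subsystems sharing one three-dimensional decision vector.
rng = np.random.default_rng(0)
A_list = [rng.standard_normal((8, 3)) for _ in range(2)]
b_list = [rng.standard_normal(8) for _ in range(2)]
print(consensus_admm(A_list, b_list))
```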

  7. Efficient integrated model predictive control of urban drainage systems using simplified conceptual quality models

    OpenAIRE

    Sun, Congcong; Joseph Duran, Bernat; Maruejouls, Thibaud; Cembrano Gennari, Gabriela; Muñoz, eduard; Messeguer Amela, Jordi; Montserrat, Albert; Sampe, Sara; Puig Cayuela, Vicenç; Litrico, Xavier

    2017-01-01

    Integrated control of urban drainage systems considering urban drainage networks (UDN), wastewater treatment plants (WWTP) and the receiving environment seeks to minimize the impact of combined sewer overflows (CSO) on the receiving environment during wet weather. This paper presents first results of the integrated control of UDN and WWTP obtained by LIFE-EFFIDRAIN, a collaborative project between academia and industry in Barcelona (Spain) and Bordeaux (France). Model predictive con...

  8. Modeling Efficient Water Allocation in a Conjunctive Use Regime : The Indus Basin of Pakistan

    Science.gov (United States)

    O'Mara, Gerald T.; Duloy, John H.

    1984-11-01

    Efficient resource use where ground- and surface waters are used conjunctively may require special policies to rationalize the interaction between water use by farmers and the response of the stream aquifer system. In this paper, we examine alternative policies for achieving more efficient conjunctive use in the Indus Basin of Pakistan. Using a simulation model which links the hydrology of a conjunctive stream aquifer system to an economic model of agricultural production for each of 53 regions of the basin together with a network model of the flows in river reaches, link canals, and irrigation canals, we have studied the joint effect of various canal water allocation and associated private tube well tax or subsidy policies on overall system efficiency. The results suggest that large gains in agricultural production and employment are possible, given more efficient policies.

  9. Modelling the link amongst fine-pore diffuser fouling, oxygen transfer efficiency, and aeration energy intensity.

    Science.gov (United States)

    Garrido-Baserba, Manel; Sobhani, Reza; Asvapathanagul, Pitiporn; McCarthy, Graham W; Olson, Betty H; Odize, Victory; Al-Omari, Ahmed; Murthy, Sudhir; Nifong, Andrea; Godwin, Johnnie; Bott, Charles B; Stenstrom, Michael K; Shaw, Andrew R; Rosso, Diego

    2017-03-15

    This research systematically studied the behavior of aeration diffuser efficiency over time and its relation to the energy usage per diffuser. Twelve diffusers were selected for a one-year fouling study. Comprehensive aeration efficiency projections were carried out in two WRRFs with different influent rates, and the influence of operating conditions on aeration diffusers' performance was demonstrated. This study showed that the initial energy use, during the first year of operation, of those aeration diffusers located in high-rate systems (with solids retention time, SRT, of less than 2 days) increased by more than 20% in comparison to the conventional systems (SRT > 2 days). Diffusers operating for three years in conventional systems presented the same fouling characteristics as those deployed in high-rate processes for less than 15 months. A new procedure was developed to accurately project the energy consumption of aeration diffusers, including the impacts of operating conditions, such as SRT and organic loading rate, on specific aeration diffuser materials (i.e. silicone, polyurethane, EPDM, ceramic). Furthermore, it considers the microbial colonization dynamics, which successfully correlated with the increase in energy consumption (r² = 0.82 ± 0.07). The presented energy model projected the energy costs and the potential savings for the diffusers after three years in operation under different operating conditions. Whereas the most efficient diffusers implied potential costs spanning from 4,900 USD/month for a small plant (20 MGD, or 74,500 m³/d) up to 24,500 USD/month for a large plant (100 MGD, or 375,000 m³/d), less efficient diffusers implied costs from 18,000 USD/month for a small plant to 90,000 USD/month for large plants. The aim of this methodology is to help utilities gain more insight into process mechanisms and design better energy efficiency strategies at existing facilities to reduce energy consumption. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Higher-fidelity yet efficient modeling of radiation energy transport through three-dimensional clouds

    International Nuclear Information System (INIS)

    Hall, M.L.; Davis, A.B.

    2005-01-01

    Accurate modeling of radiative energy transport through cloudy atmospheres is necessary for both climate modeling with GCMs (Global Climate Models) and remote sensing. Previous modeling efforts have taken advantage of extreme aspect ratios (cells that are very wide horizontally) by assuming a 1-D treatment vertically - the Independent Column Approximation (ICA). Recent attempts to resolve radiation transport through the clouds have drastically changed the aspect ratios of the cells, moving them closer to unity, such that the ICA model is no longer valid. We aim to provide a higher-fidelity atmospheric radiation transport model which increases accuracy while maintaining efficiency. To that end, this paper describes the development of an efficient 3-D-capable radiation code that can be easily integrated into cloud-resolving models as an alternative to the resident 1-D model. Applications to test cases from the Intercomparison of 3-D Radiation Codes (I3RC) protocol are shown.

  11. Efficient Estimation of the Cox Model With Auxiliary Subgroup Survival Information.

    Science.gov (United States)

    Huang, Chiung-Yu; Qin, Jing; Tsai, Huei-Ting

    2016-01-01

    With the rapidly increasing availability of data in the public domain, combining information from different sources to infer associations or differences of interest has become an emerging challenge for researchers. This paper presents a novel approach to improve efficiency in estimating the survival time distribution by synthesizing information from individual-level data with t-year survival probabilities from external sources such as disease registries. While disease registries provide accurate and reliable overall survival statistics for the disease population, critical pieces of information that influence both the choice of treatment and clinical outcomes usually are not available in the registry database. To combine with the published information, we propose to summarize the external survival information via a system of nonlinear population moments and to estimate the survival time model using empirical likelihood methods. The proposed approach is more flexible than conventional meta-analysis in the sense that it can automatically combine survival information for different subgroups, and the information may be derived from different studies. Moreover, an extended estimator that allows for a different baseline risk in the aggregate data is also studied. Empirical likelihood ratio tests are proposed to examine whether the auxiliary survival information is consistent with the individual-level data. Simulation studies show that the proposed estimators yield a substantial gain in efficiency over the conventional partial likelihood approach. Two data analyses are conducted to illustrate the methods and theory.

  12. An efficient analytical model for baffled, multi-celled membrane-type acoustic metamaterial panels

    Science.gov (United States)

    Langfeldt, F.; Gleine, W.; von Estorff, O.

    2018-03-01

    A new analytical model for the oblique incidence sound transmission loss prediction of baffled panels with multiple subwavelength sized membrane-type acoustic metamaterial (MAM) unit cells is proposed. The model employs a novel approach via the concept of the effective surface mass density and approximates the unit cell vibrations in the form of piston-like displacements. This yields a coupled system of linear equations that can be solved efficiently using well-known solution procedures. A comparison with results from finite element model simulations for both normal and diffuse field incidence shows that the analytical model delivers accurate results as long as the edge length of the MAM unit cells is smaller than half the acoustic wavelength. The computation times for the analytical calculations are 100 times smaller than for the numerical simulations. In addition to that, the effect of flexible MAM unit cell edges compared to the fixed edges assumed in the analytical model is studied numerically. It is shown that the compliance of the edges has only a small impact on the transmission loss of the panel, except at very low frequencies in the stiffness-controlled regime. The proposed analytical model is applied to investigate the effect of variations of the membrane prestress, added mass, and mass eccentricity on the diffuse transmission loss of a MAM panel with 120 unit cells. Unlike most previous investigations of MAMs, these results provide a better understanding of the acoustic performance of MAMs under more realistic conditions. For example, it is shown that by varying these parameters deliberately in a checkerboard pattern, a new anti-resonance with large transmission loss values can be introduced. A random variation of these parameters, on the other hand, is shown to have only little influence on the diffuse transmission loss, as long as the standard deviation is not too large. For very large random variations, it is shown that the peak transmission loss
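
    For readers unfamiliar with the effective surface mass density concept used by the model, the classical oblique-incidence mass law below shows how a surface mass density maps to transmission loss; this is a hedged background sketch, not the coupled piston-displacement system solved in the paper, and the numerical values are illustrative.

```python
import numpy as np

def mass_law_tl(freq_hz, m_eff, theta_deg=0.0, rho0=1.21, c0=343.0):
    """Oblique-incidence mass-law transmission loss (dB) for a limp panel
    with effective surface mass density m_eff (kg/m^2)."""
    omega = 2.0 * np.pi * np.asarray(freq_hz, dtype=float)
    theta = np.radians(theta_deg)
    return 10.0 * np.log10(1.0 + (omega * m_eff * np.cos(theta) / (2.0 * rho0 * c0)) ** 2)

# Near the MAM anti-resonance the magnitude of the effective surface mass density
# becomes large, which raises TL in this relation; here we evaluate a baseline value.
print(mass_law_tl([125.0, 250.0, 500.0], m_eff=2.0))
```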

  13. How efficiently do corn- and soybean-based cropping systems use water? A systems modeling analysis.

    Science.gov (United States)

    Dietzel, Ranae; Liebman, Matt; Ewing, Robert; Helmers, Matt; Horton, Robert; Jarchow, Meghann; Archontoulis, Sotirios

    2016-02-01

    Agricultural systems are being challenged to decrease water use and increase production while climate becomes more variable and the world's population grows. Low water use efficiency is traditionally characterized by high water use relative to low grain production and usually occurs under dry conditions. However, when a cropping system fails to take advantage of available water during wet conditions, this is also an inefficiency and is often detrimental to the environment. Here, we provide a systems-level definition of water use efficiency (sWUE) that addresses both production and environmental quality goals through incorporating all major system water losses (evapotranspiration, drainage, and runoff). We extensively calibrated and tested the Agricultural Production Systems sIMulator (APSIM) using 6 years of continuous crop and soil measurements in corn- and soybean-based cropping systems in central Iowa, USA. We then used the model to determine water use, loss, and grain production in each system and calculated sWUE in years that experienced drought, flood, or historically average precipitation. Systems water use efficiency was found to be greatest during years with average precipitation. Simulation analysis using 28 years of historical precipitation data, plus the same dataset with ± 15% variation in daily precipitation, showed that in this region, 430 mm of seasonal (planting to harvesting) rainfall resulted in the optimum sWUE for corn, and 317 mm for soybean. Above these precipitation levels, the corn and soybean yields did not increase further, but the water loss from the system via runoff and drainage increased substantially, leading to a high likelihood of soil, nutrient, and pesticide movement from the field to waterways. As the Midwestern United States is predicted to experience more frequent drought and flood, inefficiency of cropping systems water use will also increase. This work provides a framework to concurrently evaluate production and
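
    A minimal sketch of the systems-level metric, assuming sWUE is computed as grain yield divided by the sum of the major water losses named above (evapotranspiration, drainage, and runoff); the exact formulation and the input values below are assumptions for illustration.

```python
def systems_wue(grain_yield_kg_ha, et_mm, drainage_mm, runoff_mm):
    """Systems-level water use efficiency (kg grain per ha per mm of water
    leaving the system). The formulation in the paper may differ; this simply
    follows the description of including all major system water losses."""
    total_loss_mm = et_mm + drainage_mm + runoff_mm
    return grain_yield_kg_ha / total_loss_mm

# Hypothetical corn season: 10,500 kg/ha with 450 mm ET, 120 mm drainage, 60 mm runoff.
print(round(systems_wue(10500, 450, 120, 60), 2))  # kg/ha per mm of water lost
```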

  14. Marker encoded fringe projection profilometry for efficient 3D model acquisition.

    Science.gov (United States)

    Budianto, B; Lun, P K D; Hsung, Tai-Chiu

    2014-11-01

    This paper presents a novel marker-encoded fringe projection profilometry (FPP) scheme for efficient 3-dimensional (3D) model acquisition. Traditional FPP schemes can introduce large errors into the reconstructed 3D model when the target object has an abruptly changing height profile. In the proposed scheme, markers are encoded in the projected fringe pattern to resolve the ambiguities in the fringe images caused by such profiles. Using the analytic complex wavelet transform, the marker cue information can be extracted from the fringe image and used to restore the order of the fringes. A series of simulations and experiments has been carried out to verify the proposed scheme. They show that the proposed method can greatly improve accuracy over traditional FPP schemes when reconstructing the 3D model of objects with abruptly changing height profiles. Since the scheme works directly in our recently proposed complex wavelet FPP framework, it retains the same properties and can be used in real-time applications for color objects.

  15. Efficient Parallel Implementation of Active Appearance Model Fitting Algorithm on GPU

    Directory of Open Access Journals (Sweden)

    Jinwei Wang

    2014-01-01

    Full Text Available The active appearance model (AAM) is one of the most powerful model-based object detection and tracking methods and has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply in real-time systems. The emergence of modern graphics processing units (GPUs), which feature a many-core, fine-grained parallel architecture, provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine-grained parallelism, in which we distribute the texture data of the AAM, pixel by pixel, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on an Nvidia GTX 650 GPU with the Kepler architecture. To compare the performance of our algorithm for different data sizes, we built sixteen face AAM models with textures of different dimensionality. The experimental results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even with very high-dimensional textures.

  16. A mathematical model of capacious and efficient memory that survives trauma

    Science.gov (United States)

    Srivastava, Vipin; Edwards, S. F.

    2004-02-01

    The brain's memory system can store without any apparent constraint, recalls stored information efficiently, and is robust against lesions. Existing models of memory do not fully account for all these features. The model due to Hopfield (Proc. Natl. Acad. Sci. USA 79 (1982) 2554), based on Hebbian learning (The Organization of Behaviour, Wiley, New York, 1949), shows an early saturation of memory, with retrieval from memory becoming slow and unreliable before collapsing at this limit. Our hypothesis (Physica A 276 (2000) 352) that the brain might store orthogonalized information improved the situation in many ways, but was still constrained in that the information to be stored had to be linearly independent, i.e., signals that could be expressed as linear combinations of others had to be excluded. Here we present a model that attempts to address the problem quite comprehensively against the background of the above attributes of the brain. We demonstrate that if the brain decomposes incoming signals in analogy with Fourier analysis, the noise created by interference of stored signals diminishes systematically (which yields prompt retrieval) and, most importantly, the memory can withstand partial damage to the brain.
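
    A minimal sketch of the orthogonalized-storage idea referenced above (the earlier hypothesis), using Gram-Schmidt orthogonalization before Hebbian storage; it is not the Fourier-like decomposition scheme proposed in this record, and the pattern data are hypothetical.

```python
import numpy as np

def orthogonalize(patterns, tol=1e-10):
    """Gram-Schmidt orthogonalization of stored patterns (rows).
    Linearly dependent patterns are dropped rather than stored."""
    basis = []
    for p in patterns:
        v = p.astype(float).copy()
        for q in basis:
            v -= np.dot(v, q) * q
        norm = np.linalg.norm(v)
        if norm > tol:
            basis.append(v / norm)
    return np.array(basis)

def hebbian_weights(patterns):
    """Hebbian (outer-product) weight matrix built from orthogonalized patterns."""
    q = orthogonalize(patterns)
    w = q.T @ q
    np.fill_diagonal(w, 0.0)  # no self-connections
    return w

rng = np.random.default_rng(1)
pats = rng.choice([-1.0, 1.0], size=(5, 16))  # five hypothetical binary patterns
W = hebbian_weights(pats)
print(W.shape)
```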

  17. Is the Langevin phase equation an efficient model for oscillating neurons?

    International Nuclear Information System (INIS)

    Ota, Keisuke; Tsunoda, Takamasa; Aonishi, Toru; Omori, Toshiaki; Okada, Masato; Watanabe, Shigeo; Miyakawa, Hiroyoshi

    2009-01-01

    The Langevin phase model is an important canonical model for capturing coherent oscillations of neural populations. However, little attention has been given to verifying its applicability. In this paper, we demonstrate that the Langevin phase equation is an efficient model for neural oscillators by using a machine learning method in two steps: (a) Learning of the Langevin phase model. We estimated the parameters of the Langevin phase equation, i.e., the phase response curve and the intensity of the white noise, from physiological data measured in hippocampal CA1 pyramidal neurons. (b) Testing of the estimated model. We verified whether a Fokker-Planck equation derived from the Langevin phase equation with the estimated parameters could capture the stochastic oscillatory behavior of the same neurons disturbed by periodic perturbations. The estimated model could predict the neural behavior, so we can say that the Langevin phase equation is an efficient model for oscillating neurons.
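
    The Langevin phase equation itself can be simulated directly. The sketch below uses Euler-Maruyama integration with a hypothetical sinusoidal phase response curve and periodic input; the parameter values are illustrative, not the ones estimated from the CA1 recordings.

```python
import numpy as np

def simulate_langevin_phase(omega, Z, I, sigma, T=10.0, dt=1e-3, theta0=0.0, seed=0):
    """Euler-Maruyama integration of dtheta = [omega + Z(theta)*I(t)] dt
    + sigma * Z(theta) dW, a common form of the Langevin phase equation."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    theta = np.empty(n + 1)
    theta[0] = theta0
    for k in range(n):
        t = k * dt
        drift = omega + Z(theta[k]) * I(t)
        diffusion = sigma * Z(theta[k])
        theta[k + 1] = theta[k] + drift * dt + diffusion * np.sqrt(dt) * rng.standard_normal()
    return theta

# Hypothetical parameters: 8 Hz intrinsic rhythm, sinusoidal PRC, weak periodic input.
Z = lambda th: 1.0 + np.cos(th)                   # assumed phase response curve shape
I = lambda t: 0.5 * np.sin(2 * np.pi * 2.0 * t)   # periodic perturbation
theta = simulate_langevin_phase(omega=2 * np.pi * 8.0, Z=Z, I=I, sigma=0.3)
print(theta[-1] % (2 * np.pi))  # final phase, wrapped to [0, 2*pi)
```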

  18. Multiple regression models for the prediction of the maximum obtainable thermal efficiency of organic Rankine cycles

    DEFF Research Database (Denmark)

    Larsen, Ulrik; Pierobon, Leonardo; Wronski, Jorrit

    2014-01-01

    to power. In this study we propose four linear regression models to predict the maximum obtainable thermal efficiency for simple and recuperated ORCs. A previously derived methodology is able to determine the maximum thermal efficiency among many combinations of fluids and processes, given the boundary conditions of the process. Hundreds of optimised cases with varied design parameters are used as observations in four multiple regression analyses. We analyse the model assumptions, prediction abilities and extrapolations, and compare the results with recent studies in the literature. The models
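
    A minimal sketch of such a multiple linear regression, assuming hypothetical boundary-condition predictors (hot-fluid inlet temperature and condensation temperature) and made-up efficiency observations; the actual regressors and data in the study differ.

```python
import numpy as np

# Hypothetical training data: each row is [T_hot_in (C), T_cond (C)], and the
# target is the maximum obtainable ORC thermal efficiency from an optimisation.
X = np.array([[120, 25], [150, 25], [180, 30], [210, 30], [240, 35]], dtype=float)
y = np.array([0.08, 0.11, 0.13, 0.15, 0.17])

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("intercept and coefficients:", coef)

# Predict efficiency for a new boundary condition (hypothetical).
x_new = np.array([1.0, 165.0, 28.0])
print("predicted max thermal efficiency:", float(x_new @ coef))
```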

  19. An efficient flexible-order model for coastal and ocean water waves

    DEFF Research Database (Denmark)

    Engsig-Karup, Allan Peter; Bingham, Harry B.; Lindberg, Ole

    Current work is directed toward the development of an improved numerical 3D model for fully nonlinear potential water waves over arbitrary depths. The model is high-order accurate, robust and efficient for large-scale problems, and support will be included for flexibility in the description of s...

  20. Efficient uncertainty quantification methodologies for high-dimensional climate land models

    Energy Technology Data Exchange (ETDEWEB)

    Sargsyan, Khachik [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Safta, Cosmin [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Berry, Robert Dan [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ray, Jaideep [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Debusschere, Bert J. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Najm, Habib N. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2011-11-01

    In this report, we proposed, examined and implemented approaches for performing efficient uncertainty quantification (UQ) in climate land models. Specifically, we applied a Bayesian compressive sensing framework to polynomial chaos spectral expansions, enhanced it with an iterative basis-reduction algorithm, and investigated the results on test models as well as on the Community Land Model (CLM). Furthermore, we discussed the construction of efficient quadrature rules for the forward propagation of uncertainties from a high-dimensional, constrained input space to output quantities of interest. The work lays the groundwork for efficient forward UQ for high-dimensional, strongly non-linear and computationally costly climate models. Moreover, to investigate parameter inference approaches, we applied two variants of the Markov chain Monte Carlo (MCMC) method to a soil moisture dynamics submodel of the CLM. The evaluation of these algorithms gave us a good foundation for further building out the Bayesian calibration framework towards the goal of robust component-wise calibration.
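
    As a rough stand-in for the Bayesian compressive sensing step (which is not reproduced here), the sketch below fits a sparse polynomial chaos expansion with Lasso regression on a hypothetical three-input test function; the basis construction and the sparsity-inducing fit illustrate the general idea of recovering a few active spectral terms.

```python
import itertools
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from sklearn.linear_model import Lasso

def pce_basis(X, degree=2):
    """Multivariate probabilists'-Hermite polynomial chaos basis (total degree
    <= degree) evaluated at standard-normal inputs X of shape (n_samples, n_dims)."""
    n, d = X.shape
    multi_indices = [m for m in itertools.product(range(degree + 1), repeat=d)
                     if sum(m) <= degree]
    cols = []
    for m in multi_indices:
        col = np.ones(n)
        for j, order in enumerate(m):
            coeffs = np.zeros(order + 1)
            coeffs[order] = 1.0
            col *= hermeval(X[:, j], coeffs)  # He_order evaluated at X[:, j]
        cols.append(col)
    return np.column_stack(cols), multi_indices

# Hypothetical "expensive model" with only a few active terms plus noise.
rng = np.random.default_rng(2)
X = rng.standard_normal((200, 3))
y = 1.0 + 2.0 * X[:, 0] + 0.5 * (X[:, 1] ** 2 - 1.0) + 0.05 * rng.standard_normal(200)

Psi, idx = pce_basis(X, degree=2)
fit = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000).fit(Psi, y)
active = [(m, round(float(c), 3)) for m, c in zip(idx, fit.coef_) if abs(c) > 1e-2]
print(active)  # sparse recovery of the active polynomial chaos terms
```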

  1. Multilevel systems biology modeling characterized the atheroprotective efficiencies of modified dairy fats in a hamster model.

    Science.gov (United States)

    Martin, Jean-Charles; Berton, Amélie; Ginies, Christian; Bott, Romain; Scheercousse, Pierre; Saddi, Alessandra; Gripois, Daniel; Landrier, Jean-François; Dalemans, Daniel; Alessi, Marie-Christine; Delplanque, Bernadette

    2015-09-01

    We assessed the atheroprotective efficiency of modified dairy fats in hyperlipidemic hamsters. A systems biology approach was implemented to reveal and quantify the dietary fat-related components of the disease. Three modified dairy fats (40% energy) were prepared from regular butter by mixing with a plant oil mixture, by removing cholesterol alone, or by removing cholesterol in combination with reducing saturated fatty acids. A plant oil mixture and a regular butter were used as control diets. The atherosclerosis severity (aortic cholesteryl-ester level) was higher in the regular butter-fed hamsters than in the other four groups (P metabolism" appeared central to atherogenic development relative to diets. The "vitamin E metabolism" cluster was the main driver of atheroprotection with the best performing transformed dairy fat. Under conditions that promote atherosclerosis, the impact of dairy fats on atherogenesis could be greatly ameliorated by technological modifications. Our modeling approach allowed for identifying and quantifying the contribution of complex factors to atherogenic development in each dietary setup. Copyright © 2015 the American Physiological Society.

  2. Evaluating the Efficiency of a Multi-core Aware Multi-objective Optimization Tool for Calibrating the SWAT Model

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, X. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Izaurralde, R. C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Zong, Z. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Zhao, K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Thomson, A. M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2012-08-20

    The efficiency of calibrating physically based, complex hydrologic models is a major concern in the application of those models to understand and manage natural and human activities that affect watershed systems. In this study, we developed a multi-core aware multi-objective evolutionary optimization algorithm (MAMEOA) to improve the efficiency of calibrating a widely used watershed model, the Soil and Water Assessment Tool (SWAT). The test results show that MAMEOA can reduce the time consumed by calibrating SWAT by about 1-9%, 26-51%, and 39-56% compared with the sequential method when using dual-core, quad-core, and eight-core machines, respectively. The potential and limitations of MAMEOA for calibrating SWAT are discussed. MAMEOA is open source software.

  3. An efficient Lagrangian stochastic model of vertical dispersion in the convective boundary layer

    Science.gov (United States)

    Franzese, Pasquale; Luhar, Ashok K.; Borgas, Michael S.

    We consider the one-dimensional case of vertical dispersion in the convective boundary layer (CBL), assuming that the turbulence field is stationary and horizontally homogeneous. The dispersion process is simulated by following the Lagrangian trajectories of many independent tracer particles in the turbulent flow field, leading to a prediction of the mean concentration. The particle acceleration is determined using a stochastic differential equation, assuming that the joint evolution of the particle velocity and position is a Markov process. The equation consists of a deterministic term and a random term. While the formulation is standard, attention has been focused in recent years on various ways of calculating the deterministic term using the well-mixed condition incorporating the Fokker-Planck equation. Here we propose a simple parameterisation for the deterministic acceleration term by approximating it as a quadratic function of velocity. Such a function is shown to represent the acceleration well under the moderate velocity skewness conditions observed in the CBL. The coefficients in the quadratic form are determined in terms of given turbulence statistics by directly integrating the Fokker-Planck equation. An advantage of this approach is that, unlike in existing Lagrangian stochastic models for the CBL, turbulence statistics up to fourth order can be used without assuming any predefined form for the probability density function (PDF) of the velocity. The main strength of the model, however, lies in its simplicity and computational efficiency. The dispersion results obtained from the new model are compared with existing laboratory data as well as with those obtained from a more complex Lagrangian model in which the deterministic acceleration term is based on a bi-Gaussian velocity PDF. The comparison shows that the new model performs well.

  4. Design and modeling of an SJ infrared solar cell approaching upper limit of theoretical efficiency

    Science.gov (United States)

    Sahoo, G. S.; Mishra, G. P.

    2018-01-01

    Recent trends in photovoltaics focus on approaching the conversion efficiency limit in order to make cells more cost effective. To achieve this, we have to move beyond silicon cells towards III-V compound semiconductors, which offer advantages such as bandgap engineering by alloying these compounds. In this work we have used the low-bandgap material GaSb and designed a single-junction (SJ) cell with a conversion efficiency of 32.98%. The SILVACO ATLAS TCAD simulator has been used to simulate the proposed model using both the Ray Tracing and Transfer Matrix Methods (under 1 sun and 1000 suns of the AM1.5G spectrum). Detailed analyses of the photogeneration rate, spectral response, developed potential, external quantum efficiency (EQE), internal quantum efficiency (IQE), short-circuit current density (JSC), open-circuit voltage (VOC), fill factor (FF) and conversion efficiency (η) are discussed. The obtained results are compared with previously reported SJ solar cells.

  5. Loss-efficiency model of single and variable-speed compressors using neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Liang [Institute of Refrigeration and Cryogenics, Shanghai Jiaotong University, Shanghai 200240 (China); China R and D Center, Carrier Corporation, No.3239 Shen Jiang Road, Shanghai 201206 (China); Zhao, Ling-Xiao; Gu, Bo [Institute of Refrigeration and Cryogenics, Shanghai Jiaotong University, Shanghai 200240 (China); Zhang, Chun-Lu [China R and D Center, Carrier Corporation, No.3239 Shen Jiang Road, Shanghai 201206 (China)

    2009-09-15

    The compressor is the critical component for the performance of a vapor-compression refrigeration system. The loss-efficiency model, comprising the volumetric efficiency and the isentropic efficiency, is widely used for representing compressor performance. A neural network loss-efficiency model is developed to simulate the performance of positive-displacement compressors such as reciprocating, screw and scroll compressors. With one additional input, frequency, it can easily be extended to variable-speed compressors. A three-layer polynomial perceptron network is developed because the polynomial transfer function is found to be very effective in training and free of over-learning. The selection of the input parameters of the neural networks is also found to be critical to the network prediction accuracy. The proposed neural networks give less than 0.4% standard deviation and ±1.3% maximum deviation against the manufacturer data. (author)

  6. Improving efficiency assessments using additive data envelopment analysis models: an application to contrasting dairy farming systems

    Directory of Open Access Journals (Sweden)

    Andreas Diomedes Soteriades

    2015-10-01

    Full Text Available Applying holistic indicators to assess dairy farm efficiency is essential for sustainable milk production. Data Envelopment Analysis (DEA) has been instrumental for the calculation of such indicators. However, ‘additive’ DEA models have rarely been used in dairy research. This study presented an additive model known as the slacks-based measure (SBM) of efficiency and its advantages over the DEA models used in most past dairy studies. First, SBM incorporates undesirable outputs as actual outputs of the production process. Second, it identifies the main production factors causing inefficiency. Third, these factors can be ‘priced’ to estimate the cost of inefficiency. The value of SBM for efficiency analyses was demonstrated with a comparison of four contrasting dairy management systems in terms of technical and environmental efficiency. These systems were part of a multiple-year breeding and feeding systems experiment (two genetic lines: select vs. control; and two feeding strategies: high forage vs. low forage, the latter involving a higher proportion of concentrated feeds), in which detailed data were collected according to strict protocols. The select genetic herd was more technically and environmentally efficient than the control herd, regardless of feeding strategy. However, the efficiency performance of the select herd was more volatile from year to year than that of the control herd. Overall, technical and environmental efficiency were strongly and positively correlated, suggesting that when technically efficient, the four systems were also efficient in terms of undesirable output reduction. Detailed data such as those used in this study are increasingly becoming available for commercial herds through precision farming. Therefore, the methods presented in this study are growing in importance.
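
    For readers unfamiliar with additive DEA, the sketch below solves the basic additive (slack-maximizing) model under constant returns to scale with a linear program; it is a simplified stand-in for the SBM formulation used in the study, and the herd-level input/output data are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def additive_dea(X, Y, o):
    """Additive DEA model (CRS) for DMU o: maximise the sum of input and
    output slacks. X is (m inputs x n DMUs), Y is (r outputs x n DMUs).
    A total slack of zero means DMU o lies on the efficient frontier."""
    m, n = X.shape
    r = Y.shape[0]
    # Decision vector: [lambda (n), input slacks (m), output slacks (r)].
    c = np.concatenate([np.zeros(n), -np.ones(m + r)])  # linprog minimises
    A_eq = np.block([
        [X, np.eye(m), np.zeros((m, r))],   # X @ lam + s_minus = x_o
        [Y, np.zeros((r, m)), -np.eye(r)],  # Y @ lam - s_plus  = y_o
    ])
    b_eq = np.concatenate([X[:, o], Y[:, o]])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return -res.fun  # total slack (0 => efficient)

# Hypothetical herd-level data: 2 inputs (feed, land) and 1 output (milk) for 4 DMUs.
X = np.array([[4.0, 6.0, 5.0, 8.0],
              [2.0, 2.0, 3.0, 4.0]])
Y = np.array([[10.0, 12.0, 11.0, 12.0]])
for o in range(X.shape[1]):
    print(f"DMU {o}: total slack = {additive_dea(X, Y, o):.3f}")
```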

  7. A Sharable and Efficient Metadata Model for Heterogeneous Earth Observation Data Retrieval in Multi-Scale Flood Mapping

    Directory of Open Access Journals (Sweden)

    Nengcheng Chen

    2015-07-01

    Full Text Available Remote sensing plays an important role in flood mapping and is helping advance flood monitoring and management. Multi-scale flood mapping is necessary for dividing floods into several stages for comprehensive management. However, existing data systems are typically heterogeneous owing to the use of different access protocols and archiving metadata models. In this paper, we propose a sharable and efficient metadata model (APEOPM) for constructing an Earth observation (EO) data system to retrieve remote sensing data for flood mapping. The proposed model contains two sub-models, an access protocol model and an enhanced encoding model. The access protocol model helps unify heterogeneous access protocols and can achieve intelligent access via a semantic enhancement method. The enhanced encoding model helps unify heterogeneous archiving metadata models. Wuhan city, one of the most important cities in the Yangtze River Economic Belt in China, is selected as a study area for testing the retrieval of heterogeneous EO data and flood mapping. The torrential rain period from 25 March 2015 to 10 April 2015 is chosen as the temporal range in this study. To aid comprehensive management, mapping is conducted at different spatial and temporal scales. In addition, the efficiency of data retrieval is analyzed, and the flood maps are validated against actual precipitation. The results show that the flood maps coincided with the actual precipitation.

  8. A testing-coverage software reliability model considering fault removal efficiency and error generation.

    Science.gov (United States)

    Li, Qiuying; Pham, Hoang

    2017-01-01

    In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency, combined with testing coverage information, based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many software reliability growth models (SRGMs) based on NHPP have been proposed to estimate software reliability measures, most of which share the following assumptions: 1) it is a common phenomenon that the fault detection rate changes during the testing phase; 2) as a result of imperfect debugging, fault removal is associated with a fault re-introduction rate. However, there are few SRGMs in the literature that differentiate between fault detection and fault removal, i.e. they seldom consider imperfect fault removal efficiency. In the practical software development process, fault removal efficiency cannot always be perfect, i.e. the detected failures might not be removed completely, the original faults might still exist, and new faults might be introduced in the meantime, which is referred to as the imperfect debugging phenomenon. In this study, a model aiming to incorporate the fault introduction rate, fault removal efficiency and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and using fault removal efficiency to account for fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs using three sets of real failure data based on five criteria. The results show that the model gives better fitting and predictive performance.
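
    As a baseline illustration of an NHPP software reliability growth model (not the extended model proposed in the paper), the sketch below fits the classic Goel-Okumoto mean value function to hypothetical cumulative failure counts.

```python
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    """NHPP mean value function m(t) = a * (1 - exp(-b t))."""
    return a * (1.0 - np.exp(-b * t))

# Hypothetical cumulative failure counts observed week by week during testing.
t = np.arange(1, 13, dtype=float)
m_obs = np.array([8, 15, 21, 26, 30, 34, 36, 39, 41, 42, 43, 44], dtype=float)

(a_hat, b_hat), _ = curve_fit(goel_okumoto, t, m_obs, p0=(50.0, 0.2))
print(f"expected total faults a = {a_hat:.1f}, detection rate b = {b_hat:.3f}")
print("predicted cumulative failures at week 20:", round(goel_okumoto(20.0, a_hat, b_hat), 1))
```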

  9. Modeling Relationships between Shrubland Biomass and Pattern Water Use Efficiency Along Semi-Arid Climatic Gradients

    Science.gov (United States)

    Shoshany, Maxim

    2014-05-01

    A new model is presented that represents the effect of the shrub patches' spatial arrangement on their water use efficiency and biomass productivity in water-limited ecosystems. The model utilizes an Edge Ratio parameterization calculated as the ratio between the vegetation edge area and the total vegetation area. Pattern Water Use Efficiency employs the following relationships: 1. Water use efficiency is assumed to be directly related to shrub cover fraction. 2. Water use efficiency is assumed to be inversely related to the amount of edge area (Edge Ratio). 3. The effect of edge is assumed to be inversely related to shrub cover fraction. 4. The effect of edge is assumed to be inversely related to the shrubs' height. Pattern Water Use Efficiency then modulates the use of precipitation in producing biomass. A preliminary assessment of the new model was achieved by comparing its results with biomass extracted from high-resolution imagery, based on allometric equations, for 18 sites along a climatic gradient between Mediterranean and arid regions in central Israel. In the next phase, the model is modified to allow its implementation with Landsat imagery. This form of the model facilitates wide regional mapping of shrublands' biomass. Such mapping is fundamental for assessing the impacts of climate change on ecosystem productivity in desert-fringe ecosystems.

  10. Efficient Parameterization for Grey-box Model Identification of Complex Physical Systems

    DEFF Research Database (Denmark)

    Blanke, Mogens; Knudsen, Morten Haack

    2006-01-01

    Grey-box model identification preserves known physical structures in a model, but with limits to the possible excitation, all parameters are rarely identifiable, and different parameterizations give significantly different model quality. Convenient methods to show which parameterizations are the be...

  11. Efficient pan-European river flood hazard modelling through a combination of statistical and physical models

    Science.gov (United States)

    Paprotny, Dominik; Morales-Nápoles, Oswaldo; Jonkman, Sebastiaan N.

    2017-07-01

    Flood hazard is currently being researched on continental and global scales, using models of increasing complexity. In this paper we investigate a different, simplified approach, which combines statistical and physical models in place of conventional rainfall-runoff models to carry out flood mapping for Europe. A Bayesian-network-based model built in a previous study is employed to generate return-period flow rates in European rivers with a catchment area larger than 100 km². The simulations are performed using a one-dimensional steady-state hydraulic model and the results are post-processed using Geographical Information System (GIS) software in order to derive flood zones. This approach is validated by comparison with the Joint Research Centre's (JRC) pan-European map and five local flood studies from different countries. Overall, the two approaches show a similar performance in recreating the flood zones of the local maps. The simplified approach achieved a similar level of accuracy, while substantially reducing the computational time. The paper also presents aggregated results on flood hazard in Europe, including future projections. We find relatively small changes in flood hazard, i.e. an increase in flood-zone area of 2-4 % by the end of the century compared to the historical scenario. However, when current flood protection standards are taken into account, the flood-prone area increases substantially in the future (by 28-38 % for a 100-year return period). This is because in many parts of Europe the river discharge with the same return period is projected to increase in the future, thus making the protection standards insufficient.

  12. Evaluating Technical Efficiency of Nursing Care Using Data Envelopment Analysis and Multilevel Modeling.

    Science.gov (United States)

    Min, Ari; Park, Chang Gi; Scott, Linda D

    2016-05-23

    Data envelopment analysis (DEA) is an advantageous non-parametric technique for evaluating relative efficiency of performance. This article describes use of DEA to estimate technical efficiency of nursing care and demonstrates the benefits of using multilevel modeling to identify characteristics of efficient facilities in the second stage of analysis. Data were drawn from LTCFocUS.org, a secondary database including nursing home data from the Online Survey Certification and Reporting System and Minimum Data Set. In this example, 2,267 non-hospital-based nursing homes were evaluated. Use of DEA with nurse staffing levels as inputs and quality of care as outputs allowed estimation of the relative technical efficiency of nursing care in these facilities. In the second stage, multilevel modeling was applied to identify organizational factors contributing to technical efficiency. Use of multilevel modeling avoided biased estimation of findings for nested data and provided comprehensive information on differences in technical efficiency among counties and states. © The Author(s) 2016.

  13. Efficient surrogate models for reliability analysis of systems with multiple failure modes

    International Nuclear Information System (INIS)

    Bichon, Barron J.; McFarland, John M.; Mahadevan, Sankaran

    2011-01-01

    Despite many advances in the field of computational reliability analysis, the efficient estimation of the reliability of a system with multiple failure modes remains a persistent challenge. Various sampling and analytical methods are available, but they typically require accepting a tradeoff between accuracy and computational efficiency. In this work, a surrogate-based approach is presented that simultaneously addresses the issues of accuracy, efficiency, and unimportant failure modes. The method is based on the creation of Gaussian process surrogate models that are required to be locally accurate only in the regions of the component limit states that contribute to system failure. This approach to constructing surrogate models is demonstrated to be both an efficient and accurate method for system-level reliability analysis. - Highlights: → Extends efficient global reliability analysis to systems with multiple failure modes. → Constructs locally accurate Gaussian process models of each response. → Highly efficient and accurate method for assessing system reliability. → Effectiveness is demonstrated on several test problems from the literature.
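
    A minimal sketch of the surrogate idea, assuming two hypothetical component limit states and standard-normal inputs: Gaussian process surrogates are fit to each limit state, and the series-system failure probability is then estimated by Monte Carlo on the cheap surrogates. The adaptive refinement that makes the published method efficient is omitted.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Two hypothetical component limit-state functions (failure when g < 0).
g1 = lambda x: 3.0 - x[:, 0] - x[:, 1]
g2 = lambda x: 2.5 + x[:, 0] - x[:, 1]

# Small design of experiments over the two standard-normal random variables.
rng = np.random.default_rng(3)
X_train = rng.standard_normal((60, 2)) * 2.0
gps = [GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                                normalize_y=True).fit(X_train, g(X_train))
       for g in (g1, g2)]

# Monte Carlo on the cheap surrogates: the series system fails if any g_i < 0.
X_mc = rng.standard_normal((200_000, 2))
preds = np.column_stack([gp.predict(X_mc) for gp in gps])
pf_surrogate = np.mean(np.any(preds < 0.0, axis=1))
pf_true = np.mean((g1(X_mc) < 0) | (g2(X_mc) < 0))
print(f"surrogate Pf = {pf_surrogate:.4f}, direct MC Pf = {pf_true:.4f}")
```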

  14. Energy efficiency in the industrial sector. Model based analysis of the efficient use of energy in the EU-27 with focus on the industrial sector

    International Nuclear Information System (INIS)

    Kuder, Ralf

    2014-01-01

    of the industry could be split up into energy-intensive subsectors, where single production processes dominate the energy consumption, and non-energy-intensive subsectors. Ways to reduce the energy consumption of the industrial sector are the use of alternative or improved production or cross-cutting technologies and the use of energy-saving measures to reduce the demand for useable energy. Based on the analysis within this study, 21 % of the current energy consumption of the industrial sector of the EU, and 17 % in Germany, could be reduced. Based on the extended understanding of energy efficiency, the model-based scenario analysis of the European energy system with the further developed energy system model TIMES PanEU shows that the efficient use of energy at an emission reduction level of 75 % results in a slightly increasing primary energy consumption. The primary energy consumption is characterised by a diversified mix of energy carriers and technologies. Renewable energy sources, nuclear energy and CCS play a key role in the long term. In addition, the electricity demand increases constantly, in combination with a strong decarbonisation of electricity generation. In the industrial sector, the emission reduction is driven by the extended use of electricity, CCS and renewables, as well as by the use of improved or alternative process and supply technologies with lower specific energy consumption. Thereby the final energy consumption stays almost constant, with electricity and biomass gaining importance. Both regulatory interventions in the electricity sector and energy-saving targets on primary energy demand lead to higher energy system costs and therewith to a decrease in efficiency in the extended sense. Energy demand is then reduced more strongly than is efficient, and the saving targets lead to the extended use of other resources, resulting in higher total costs. The integrated system analysis in this study points out the interactions

  15. A Cobb Douglas stochastic frontier model on measuring domestic bank efficiency in Malaysia.

    Science.gov (United States)

    Hasan, Md Zobaer; Kamil, Anton Abdulbasah; Mustafa, Adli; Baten, Md Azizul

    2012-01-01

    The banking system plays an important role in the economic development of any country. Domestic banks, which are the main components of the banking system, have to be efficient; otherwise, they may create obstacles to the process of development in any economy. This study examines the technical efficiency of the Malaysian domestic banks listed on the Kuala Lumpur Stock Exchange (KLSE) over the period 2005-2010. A parametric approach, the Stochastic Frontier Approach (SFA), is used in this analysis. The findings show that Malaysian domestic banks exhibited an average overall efficiency of 94 percent, implying that the sample banks wasted an average of 6 percent of their inputs. Among the banks, RHBCAP is found to be highly efficient with a score of 0.986, and PBBANK is noted to have the lowest efficiency with a score of 0.918. The results also show that the level of efficiency increased during the period of study and that the technical efficiency effect fluctuated considerably over time.

  16. Ultrasound elastography: efficient estimation of tissue displacement using an affine transformation model

    Science.gov (United States)

    Hashemi, Hoda Sadat; Boily, Mathieu; Martineau, Paul A.; Rivaz, Hassan

    2017-03-01

    Ultrasound elastography entails imaging the mechanical properties of tissue and is therefore of significant clinical importance. In elastography, two frames of radio-frequency (RF) ultrasound data are obtained while the tissue is undergoing deformation, and the time-delay estimate (TDE) between the two frames is used to infer the mechanical properties of the tissue. TDE is a critical step in elastography and is challenging due to noise and signal decorrelation. This paper presents a novel and robust TDE technique that uses all samples of the RF data simultaneously. We assume tissue deformation can be approximated by an affine transformation, and hence call our method ATME (Affine Transformation Model Elastography). The affine transformation model is utilized to obtain initial estimates of the axial and lateral displacement fields. The affine transformation has only six degrees of freedom (DOF) and, as such, can be estimated efficiently. A nonlinear cost function that incorporates the similarity of RF data intensity and prior information on displacement continuity is formulated to fine-tune the initial affine deformation field. Optimization of this function involves searching for the TDE of all samples of the RF data. The optimization problem is converted to a sparse linear system of equations, which can be solved in real time. Results on simulated data are presented for validation. We further collect RF data from in-vivo patellar tendon and medial collateral ligament (MCL), and show that ATME can be used to accurately track tissue displacement.
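
    The six-DOF affine initialization can be illustrated with a least-squares fit from point correspondences; the correspondences, noise level, and deformation below are hypothetical, and the paper's actual initialization works directly on the RF data rather than on matched points.

```python
import numpy as np

def fit_affine_2d(p_src, p_dst):
    """Least-squares fit of a 2D affine transform (6 DOF) mapping p_src -> p_dst.
    Points are (n, 2) arrays; returns the 2x2 matrix A and translation t."""
    n = len(p_src)
    M = np.zeros((2 * n, 6))
    M[0::2, 0:2] = p_src   # rows for the x-component of the mapped points
    M[0::2, 4] = 1.0
    M[1::2, 2:4] = p_src   # rows for the y-component of the mapped points
    M[1::2, 5] = 1.0
    rhs = p_dst.reshape(-1)
    params, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    A = np.array([[params[0], params[1]], [params[2], params[3]]])
    t = params[4:6]
    return A, t

# Hypothetical correspondences, e.g. from block matching between the two RF frames.
rng = np.random.default_rng(4)
p_src = rng.uniform(0, 30, size=(50, 2))
A_true = np.array([[1.0, 0.02], [0.01, 0.97]])   # mild axial compression
t_true = np.array([0.1, -0.3])
p_dst = p_src @ A_true.T + t_true + 0.01 * rng.standard_normal((50, 2))

A_est, t_est = fit_affine_2d(p_src, p_dst)
disp = p_src @ A_est.T + t_est - p_src   # initial displacement estimate per point
print(np.round(A_est, 3), np.round(t_est, 3))
print("mean displacement:", np.round(disp.mean(axis=0), 3))
```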

  17. Progressive sampling-based Bayesian optimization for efficient and automatic machine learning model selection.

    Science.gov (United States)

    Zeng, Xueqiang; Luo, Gang

    2017-12-01

    Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, various automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that, compared to a state-of-the-art automatic selection method, our method can significantly reduce search time, classification error rate, and the standard deviation of the error rate due to randomization. This is major progress towards enabling fast turnaround in identifying the high-quality solutions required by many machine learning-based clinical data analysis tasks.

  18. Efficient non-negative constrained model-based inversion in optoacoustic tomography

    Science.gov (United States)

    Ding, Lu; Luís Deán-Ben, X.; Lutzweiler, Christian; Razansky, Daniel; Ntziachristos, Vasilis

    2015-09-01

    The inversion accuracy in optoacoustic tomography depends on a number of parameters, including the number of detectors employed, discrete sampling issues and imperfections of the forward model. These parameters result in ambiguities in the reconstructed image. A common ambiguity is the appearance of negative values, which have no physical meaning since optical absorption can only be greater than or equal to zero. We investigate herein algorithms that impose non-negativity constraints in model-based optoacoustic inversion. Several state-of-the-art non-negative constrained algorithms are analyzed. Furthermore, an algorithm based on the conjugate gradient method is introduced in this work. We are particularly interested in investigating whether positivity restrictions lead to accurate solutions or drive the appearance of errors and artifacts. It is shown that the computational performance of non-negative constrained inversion is higher for the introduced algorithm than for the other algorithms, while yielding equivalent results. The experimental performance of this inversion procedure is then tested in phantoms and small animals, showing an improvement in image quality and quantitativeness with respect to the unconstrained approach. The study performed validates the use of non-negativity constraints for improving image accuracy compared to unconstrained methods, while maintaining computational efficiency.
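
    As a simple baseline for non-negative constrained inversion (not the conjugate-gradient-based algorithm introduced in the paper), the sketch below compares an unconstrained least-squares inversion with non-negative least squares on a hypothetical linear forward model.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical discretised forward model: pressure signals = A @ absorption.
rng = np.random.default_rng(5)
A = rng.standard_normal((120, 60))
x_true = np.clip(rng.standard_normal(60), 0.0, None)   # absorption is non-negative
y = A @ x_true + 0.05 * rng.standard_normal(120)

x_unconstrained, *_ = np.linalg.lstsq(A, y, rcond=None)
x_nonneg, _ = nnls(A, y)

print("negative entries (unconstrained):", int(np.sum(x_unconstrained < 0)))
print("negative entries (NNLS):         ", int(np.sum(x_nonneg < 0)))
```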

  19. Charge collection efficiency degradation induced by MeV ions in semiconductor devices: Model and experiment

    Energy Technology Data Exchange (ETDEWEB)

    Vittone, E., E-mail: ettore.vittone@unito.it [Department of Physics, NIS Research Centre and CNISM, University of Torino, via P. Giuria 1, 10125 Torino (Italy); Pastuovic, Z. [Centre for Accelerator Science (ANSTO), Locked bag 2001, Kirrawee DC, NSW 2234 (Australia); Breese, M.B.H. [Centre for Ion Beam Applications (CIBA), Department of Physics, National University of Singapore, Singapore 117542 (Singapore); Garcia Lopez, J. [Centro Nacional de Aceleradores (CNA), Sevilla University, J. Andalucia, CSIC, Av. Thomas A. Edison 7, 41092 Sevilla (Spain); Jaksic, M. [Department for Experimental Physics, Ruder Boškovic Institute (RBI), P.O. Box 180, 10002 Zagreb (Croatia); Raisanen, J. [Department of Physics, University of Helsinki, Helsinki 00014 (Finland); Siegele, R. [Centre for Accelerator Science (ANSTO), Locked bag 2001, Kirrawee DC, NSW 2234 (Australia); Simon, A. [International Atomic Energy Agency (IAEA), Vienna International Centre, P.O. Box 100, 1400 Vienna (Austria); Institute of Nuclear Research of the Hungarian Academy of Sciences (ATOMKI), Debrecen (Hungary); Vizkelethy, G. [Sandia National Laboratories (SNL), PO Box 5800, Albuquerque, NM (United States)

    2016-04-01

    Highlights: • We study the electronic degradation of semiconductors induced by ion irradiation. • The experimental protocol is based on MeV ion microbeam irradiation. • The radiation-induced damage is measured by IBIC. • The general model fits the experimental data in the low-level damage regime. • Key parameters relevant to the intrinsic radiation hardness are extracted. - Abstract: This paper investigates both theoretically and experimentally the charge collection efficiency (CCE) degradation in silicon diodes induced by energetic ions. Ion Beam Induced Charge (IBIC) measurements carried out on n- and p-type silicon diodes which were previously irradiated with MeV He ions show evidence that the CCE degradation depends not only on the mass, energy, and fluence of the damaging ion, but also on the probing ion species and on the polarization state of the device. A general one-dimensional model is derived, which accounts for the ion-induced defect distribution, the ionization profile of the probing ion and the charge induction mechanism. Using as input the ionizing and non-ionizing energy loss profiles resulting from simulations based on the binary collision approximation, together with the electrostatic/transport parameters of the diode under study, the model is able to accurately reproduce the experimental CCE degradation curves without introducing any phenomenological additional term or formula. Although limited to low levels of damage, the model is quite general, includes the displacement damage approach as a special case, and can be applied to any semiconductor device. It provides a method to measure the capture coefficients of the radiation-induced recombination centres. These coefficients can be considered indices that contribute to assessing the relative radiation hardness of semiconductor materials.

  20. Efficient modeling of sun/shade canopy radiation dynamics explicitly accounting for scattering

    Science.gov (United States)

    Bodin, P.; Franklin, O.

    2012-04-01

    The separation of global radiation (Rg) into its direct (Rb) and diffuse (Rd) constituents is important when modeling plant photosynthesis because a high Rd:Rg ratio has been shown to enhance Gross Primary Production (GPP). To include this effect in vegetation models, the plant canopy must be separated into sunlit and shaded leaves. However, because detailed multi-layer canopy models are often too intractable and computationally expensive for theoretical or large-scale studies, simpler sun-shade approaches are often preferred. A widely used and computationally efficient sun-shade model was developed by Goudriaan (1977) (GOU). However, compared to more complex models, this model's realism is limited by its lack of explicit treatment of radiation scattering. Here we present a new model based on the GOU model, but which in contrast explicitly simulates radiation scattering by sunlit leaves and the absorption of this radiation by the canopy layers above and below (2-stream approach). Compared to the GOU model, our model predicts significantly different profiles of scattered radiation that are in better agreement with measured profiles of downwelling diffuse radiation. With respect to these data, our model's performance is equal to that of a more complex and much slower iterative radiation model, while maintaining the simplicity and computational efficiency of the GOU model.
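
    For orientation, a Goudriaan-style sunlit/shaded partitioning step is sketched below (the simple Beer's-law baseline, not the authors' explicit 2-stream scattering scheme; the extinction coefficients and the proportional diffuse split are assumed values).

      import numpy as np

      def sun_shade_partition(lai, r_beam, r_diff, k_b=0.5, k_d=0.7):
          """lai: leaf area index; r_beam, r_diff: direct/diffuse radiation (W m-2)."""
          lai_sun = (1.0 - np.exp(-k_b * lai)) / k_b       # sunlit leaf area
          a_beam = r_beam * (1.0 - np.exp(-k_b * lai))     # beam absorbed by sunlit leaves
          a_diff = r_diff * (1.0 - np.exp(-k_d * lai))     # diffuse absorbed by the canopy
          a_diff_sun = a_diff * lai_sun / lai              # crude split by leaf area fraction
          return {"sunlit": a_beam + a_diff_sun, "shaded": a_diff - a_diff_sun}

      print(sun_shade_partition(lai=4.0, r_beam=600.0, r_diff=200.0))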

  1. Techniques for managing behaviour in pediatric dentistry: comparative study of live modelling and tell-show-do based on children's heart rates during treatment.

    Science.gov (United States)

    Farhat-McHayleh, Nada; Harfouche, Alice; Souaid, Philippe

    2009-05-01

    Tell-show-do is the most popular technique for managing children's behaviour in dentists' offices. Live modelling is used less frequently, despite the satisfactory results obtained in studies conducted during the 1980s. The purpose of this study was to compare the effects of these 2 techniques on children's heart rates during dental treatments, heart rate being the simplest biological parameter to measure and an increase in heart rate being the most common physiologic indicator of anxiety and fear. For this randomized, controlled, parallel-group single-centre clinical trial, children 5 to 9 years of age presenting for the first time to the Saint Joseph University dental care centre in Beirut, Lebanon, were divided into 3 groups: those in groups A and B were prepared for dental treatment by means of live modelling, the mother serving as the model for children in group A and the father as the model for children in group B. The children in group C were prepared by a pediatric dentist using the tell-show-do method. Each child's heart rate was monitored during treatment, which consisted of an oral examination and cleaning. A total of 155 children met the study criteria and participated in the study. Children who received live modelling with the mother as model had lower heart rates than those who received live modelling with the father as model and those who were prepared by the tell-show-do method (p pediatric dentistry.

  2. Empirical Study on Total Factor Productive Energy Efficiency in Beijing-Tianjin-Hebei Region-Analysis based on Malmquist Index and Window Model

    Science.gov (United States)

    Xu, Qiang; Ding, Shuai; An, Jingwen

    2017-12-01

    This paper studies the energy efficiency of the Beijing-Tianjin-Hebei region and identifies the trend of energy efficiency in order to improve the quality of economic development in the region. Based on the Malmquist index and a window analysis model, this paper empirically estimates the total factor energy efficiency in the Beijing-Tianjin-Hebei region using panel data for the region from 1991 to 2014, and provides corresponding policy recommendations. The empirical results show that total factor energy efficiency in the Beijing-Tianjin-Hebei region increased from 1991 to 2014, relying mainly on advances in energy technology and innovation, and that obvious regional differences in energy efficiency exist. Throughout the 24-year window period, the regional differences in energy efficiency in the Beijing-Tianjin-Hebei region shrank. A significant convergence trend in energy efficiency has appeared since 2000, depending mainly on the diffusion and spillover of energy technologies.
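
    The window and Malmquist computations rest on repeatedly solving data envelopment analysis programmes. The sketch below (an input-oriented, constant-returns CCR model on toy data, not the paper's exact window/Malmquist specification) shows the linear programme behind a single efficiency score.

      import numpy as np
      from scipy.optimize import linprog

      def dea_ccr(X, Y):
          """X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs). Returns efficiency scores."""
          n, m = X.shape
          s = Y.shape[1]
          scores = []
          for o in range(n):
              c = np.r_[1.0, np.zeros(n)]            # minimise theta
              A_in = np.c_[-X[o], X.T]               # sum_j lam_j x_ij <= theta * x_io
              A_out = np.c_[np.zeros(s), -Y.T]       # sum_j lam_j y_rj >= y_ro
              b = np.r_[np.zeros(m), -Y[o]]
              res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=b,
                            bounds=[(None, None)] + [(0, None)] * n)
              scores.append(res.x[0])
          return np.array(scores)

      X = np.array([[2.0, 3.0], [4.0, 2.0], [3.0, 5.0]])   # toy inputs (e.g. energy, labour)
      Y = np.array([[1.0], [2.0], [1.5]])                  # toy output (e.g. value added)
      print(dea_ccr(X, Y))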

  3. Advanced interface modelling of n-Si/HNO3 doped graphene solar cells to identify pathways to high efficiency

    Science.gov (United States)

    Zhao, Jing; Ma, Fa-Jun; Ding, Ke; Zhang, Hao; Jie, Jiansheng; Ho-Baillie, Anita; Bremner, Stephen P.

    2018-03-01

    In graphene/silicon solar cells, it is crucial to understand the transport mechanism of the graphene/silicon interface in order to further improve power conversion efficiency. Until now, the transport mechanism has been predominantly simplified as an ideal Schottky junction. However, such an ideal Schottky contact is never realised experimentally. According to the literature, doped graphene shows the properties of a semiconductor; therefore, it is physically more accurate to model the graphene/silicon junction as a heterojunction. In this work, HNO3-doped graphene/silicon solar cells were fabricated with a power conversion efficiency of 9.45%. Extensive characterization and first-principles calculations were carried out to establish an advanced technology computer-aided design (TCAD) model, in which p-doped graphene forms a straddling heterojunction with the n-type silicon. In comparison with simple Schottky junction models, our TCAD model paves the way for thorough investigation of the sensitivity of solar cell performance to graphene properties such as electron affinity. According to the TCAD heterojunction model, the cell performance can be improved up to 22.5% after optimization of the antireflection coatings and the rear structure, highlighting the great potential for fabricating high-efficiency graphene/silicon solar cells and other optoelectronic devices.

  4. Efficient Actor-Critic Algorithm with Hierarchical Model Learning and Planning

    Science.gov (United States)

    Fu, QiMing

    2016-01-01

    To improve the convergence rate and the sample efficiency, two efficient learning methods AC-HMLP and RAC-HMLP (AC-HMLP with ℓ 2-regularization) are proposed by combining actor-critic algorithm with hierarchical model learning and planning. The hierarchical models consisting of the local and the global models, which are learned at the same time during learning of the value function and the policy, are approximated by local linear regression (LLR) and linear function approximation (LFA), respectively. Both the local model and the global model are applied to generate samples for planning; the former is used only if the state-prediction error does not surpass the threshold at each time step, while the latter is utilized at the end of each episode. The purpose of taking both models is to improve the sample efficiency and accelerate the convergence rate of the whole algorithm through fully utilizing the local and global information. Experimentally, AC-HMLP and RAC-HMLP are compared with three representative algorithms on two Reinforcement Learning (RL) benchmark problems. The results demonstrate that they perform best in terms of convergence rate and sample efficiency. PMID:27795704

  5. Efficient Actor-Critic Algorithm with Hierarchical Model Learning and Planning

    Directory of Open Access Journals (Sweden)

    Shan Zhong

    2016-01-01

    Full Text Available To improve the convergence rate and the sample efficiency, two efficient learning methods AC-HMLP and RAC-HMLP (AC-HMLP with l2-regularization) are proposed by combining the actor-critic algorithm with hierarchical model learning and planning. The hierarchical models consisting of the local and the global models, which are learned at the same time during learning of the value function and the policy, are approximated by local linear regression (LLR) and linear function approximation (LFA), respectively. Both the local model and the global model are applied to generate samples for planning; the former is used only if the state-prediction error does not surpass the threshold at each time step, while the latter is utilized at the end of each episode. The purpose of taking both models is to improve the sample efficiency and accelerate the convergence rate of the whole algorithm through fully utilizing the local and global information. Experimentally, AC-HMLP and RAC-HMLP are compared with three representative algorithms on two Reinforcement Learning (RL) benchmark problems. The results demonstrate that they perform best in terms of convergence rate and sample efficiency.

  6. Analysis of Low-Carbon Economy Efficiency of Chinese Industrial Sectors Based on a RAM Model with Undesirable Outputs

    Directory of Open Access Journals (Sweden)

    Ming Meng

    2017-03-01

    Full Text Available Industrial energy and environmental efficiency evaluation becomes especially crucial as industrial sectors play a key role in CO2 emission reduction and energy consumption. This study adopts the additive range-adjusted measure data envelopment analysis (RAM-DEA) model to estimate the low-carbon economy efficiency of Chinese industrial sectors in 2001-2013. In addition, a CO2 emission intensity mitigation target is assigned to each industrial sector. Results show that, first, most sectors are not completely efficient but have improved greatly during the period. These sectors can be divided into four categories, namely, the mining, light, heavy, and electricity, gas, and water supply industries. Efficiency is diverse among the four industrial categories: the average efficiency of the light industry is the highest, followed by those of the mining and the electricity, gas, and water supply industries, and that of the heavy industry is the lowest. Second, the electricity, gas, and water supply industry shows the biggest potential for CO2 emission reduction, thus containing most of the sectors with large CO2 emission intensity mitigation targets (more than 45%), followed by the mining and the light industries. Therefore, the Chinese government should formulate diverse and flexible policy implementations according to the actual situation of the different sectors. Specifically, the sectors with low efficiency should be provided with additional policy support (such as technology and finance aids) to improve their industrial efficiency, whereas the electricity, gas, and water supply industry should maximize CO2 emission reduction.

  7. Complex networks-based energy-efficient evolution model for wireless sensor networks

    International Nuclear Information System (INIS)

    Zhu Hailin; Luo Hong; Peng Haipeng; Li Lixiang; Luo Qun

    2009-01-01

    Based on complex networks theory, we present two self-organized energy-efficient models for wireless sensor networks in this paper. The first model constructs the wireless sensor network according to the connectivity and remaining energy of each sensor node; it can thus produce scale-free networks, which are tolerant to random errors. In the second model, we not only consider the remaining energy, but also introduce a constraint on the number of links per node. This model makes the energy consumption of the whole network more balanced. Finally, we present numerical experiments for the two models.
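
    A minimal growth-rule sketch of the first model (one plausible reading of the rule, with attachment probability proportional to the product of node degree and remaining energy; the network size and energy values are invented): each new sensor node attaches to an existing node drawn with that probability.

      import random

      random.seed(0)
      degree = [1, 1]                                   # start from two connected nodes
      energy = [random.uniform(0.5, 1.0) for _ in degree]
      edges = [(0, 1)]

      for new in range(2, 500):
          weights = [d * e for d, e in zip(degree, energy)]
          target = random.choices(range(len(degree)), weights=weights)[0]
          edges.append((new, target))
          degree[target] += 1
          degree.append(1)
          energy.append(random.uniform(0.5, 1.0))

      print("max degree:", max(degree), "mean degree:", 2 * len(edges) / len(degree))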

  8. ARCH Models Efficiency Evaluation in Prediction and Poultry Price Process Formation

    Directory of Open Access Journals (Sweden)

    Behzad Fakari Sardehae

    2016-09-01

    This study shows that heterogeneous variance exists in the error term, as indicated by an LM test. Results and Discussion: The stationarity test showed that the poultry price series has a unit root and becomes stationary after first differencing, so the first-differenced price was used in the study. The main results showed that ARCH is the best model for predicting fluctuations. Moreover, news has an asymmetric effect on poultry price fluctuations: good news has a stronger effect than bad news, and no leverage effect exists in the poultry price. Furthermore, current fluctuations do not transmit to the future. One of the main assumptions of time series models is constant variance; if this assumption does not hold, the coefficients estimated from the serial correlation in the data will be biased, resulting in wrong interpretations. The results showed that ARCH effects exist in the error terms of the poultry price, so the ARCH family with a Student's t distribution should be used. A normality test of the error term and an examination of heterogeneous variance are needed, and neglecting them leads to false conclusions. Results showed that ARCH models have good predictive power and that ARMA models are less efficient than ARCH models, indicating that non-linear predictions are better than linear ones. According to the results, the Student's t distribution should be used as the target distribution in the estimated models. Conclusion: The huge demand for poultry requires the creation of infrastructure to respond to demand. Results showed that poultry price volatility changes over time and may intensify at any time. The asymmetric effect of good and bad news on the poultry price leads to consumer reactions. The good news had significant effects on the poultry market and created positive changes in the poultry price, but the bad news did not result in significant effects. In fact, because the poultry product in the household portfolio is essential, it should not
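
    A minimal ARCH(1) sketch with Gaussian errors (the study itself uses a Student's t distribution and a richer specification fitted to poultry prices; the simulated series and parameter values below are invented) shows how the conditional-variance parameters are recovered by maximum likelihood.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      omega_true, alpha_true = 0.2, 0.5
      r = np.zeros(2000)
      for t in range(1, len(r)):
          sigma2 = omega_true + alpha_true * r[t - 1] ** 2
          r[t] = np.sqrt(sigma2) * rng.standard_normal()

      def neg_loglik(params):
          omega, alpha = params
          sigma2 = omega + alpha * np.r_[r.var(), r[:-1] ** 2]   # first term: unconditional proxy
          return 0.5 * np.sum(np.log(sigma2) + r ** 2 / sigma2)

      fit = minimize(neg_loglik, x0=[0.1, 0.1],
                     bounds=[(1e-6, None), (0.0, 0.999)])
      print("estimated omega, alpha:", fit.x)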

  9. A statistical light use efficiency model explains 85% variations in global GPP

    Science.gov (United States)

    Jiang, C.; Ryu, Y.

    2016-12-01

    Photosynthesis is a complicated process whose modeling requires different levels of assumptions, simplification, and parameterization. Among models, the light use efficiency (LUE) model is highly compact but powerful in monitoring gross primary production (GPP) from satellite data. Most LUE models adopt a multiplicative form of maximum LUE, absorbed photosynthetically active radiation (APAR), and temperature and water stress functions. However, maximum LUE is a fitting parameter with large spatial variations, yet most studies only use a few biome-dependent constants. In addition, the stress functions in the literature are empirical and arbitrary. Moreover, the meteorological data used are usually coarse-resolution, e.g., 1°, which could cause large errors. Finally, sunlit and shaded canopy have completely different light responses, but this is rarely considered. Targeting these issues, we derived a new statistical LUE model from a process-based and satellite-driven model, the Breathing Earth System Simulator (BESS). We have already derived a set of global radiation (5-km resolution) and carbon and water flux (1-km resolution) products for 2000 to 2015 from BESS. By exploring these datasets, we found strong correlations between APAR and GPP for the sunlit (R2=0.84) and shaded (R2=0.96) canopy, respectively. A simple model, driven only by sunlit and shaded APAR, was thus built on these linear relationships. The slopes of the linear functions act as the effective LUE of the global ecosystem, with values of 0.0232 and 0.0128 umol C/umol quanta for the sunlit and shaded canopy, respectively. When compared with MPI-BGC GPP products, a global proxy of FLUXNET data, BESS-LUE achieved an overall accuracy of R2 = 0.85, whereas the original BESS reached R2 = 0.83 and the MODIS GPP product R2 = 0.76. We investigated spatiotemporal variations of the effective LUE. Spatially, the ratio of sunlit to shaded values ranged from 0.1 (wet tropics) to 4.5 (dry inland). By using maps of sunlit and shade effective LUE the accuracy of
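
    The resulting statistical model reduces to two linear terms. Below is a direct transcription using the sunlit and shaded effective LUE values quoted above (0.0232 and 0.0128 umol C per umol quanta); the APAR inputs are illustrative.

      def gpp_lue(apar_sunlit, apar_shaded, eps_sun=0.0232, eps_shade=0.0128):
          """APAR in umol quanta m-2 s-1; returns GPP in umol C m-2 s-1."""
          return eps_sun * apar_sunlit + eps_shade * apar_shaded

      print(gpp_lue(apar_sunlit=800.0, apar_shaded=300.0))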

  10. Total-Factor Energy Efficiency in BRI Countries: An Estimation Based on Three-Stage DEA Model

    Directory of Open Access Journals (Sweden)

    Changhong Zhao

    2018-01-01

    Full Text Available The Belt and Road Initiative (BRI) is showing its great influence and leadership in international energy cooperation. Based on the three-stage DEA model, total-factor energy efficiency (TFEE) in 35 BRI countries in 2015 was measured in this article. It shows that the three-stage DEA model can eliminate environmental-variable and random errors, which makes the results better than those of the traditional DEA model. When environmental-variable errors and random errors were eliminated, the mean value of TFEE declined, demonstrating that the TFEE of the whole sample group had been overestimated because of external environmental impacts and random errors. The TFEE indicators of high-income countries like South Korea, Singapore, Israel, and Turkey are 1, placing them on the efficiency frontier. The TFEE indicators of Russia, Saudi Arabia, Poland, and China are over 0.8, while those of Uzbekistan, Ukraine, South Africa, and Bulgaria are at a low level. The potential for energy saving and emission reduction is great in countries with low TFEE indicators. Because of the gap in energy efficiency, it is necessary to distinguish between different countries in energy technology options, development planning, and regulation among BRI countries.

  11. Neural and Hybrid Modeling: An Alternative Route to Efficiently Predict the Behavior of Biotechnological Processes Aimed at Biofuels Obtainment

    Directory of Open Access Journals (Sweden)

    Stefano Curcio

    2014-01-01

    Full Text Available The present paper aims to show that advanced modeling techniques, based either on artificial neural networks or on hybrid systems, can efficiently predict the behavior of two biotechnological processes designed for obtaining second-generation biofuels from waste biomasses. In particular, the enzymatic transesterification of waste-oil glycerides, the key step in obtaining biodiesel, and the anaerobic digestion of agro-industry wastes to produce biogas were modeled. It was shown that the proposed modeling approaches provided very accurate predictions of system behavior. Both neural network and hybrid modeling definitely represent a valid alternative to traditional theoretical models, especially when comprehensive knowledge of the metabolic pathways, the true kinetic mechanisms, and the transport phenomena involved in biotechnological processes is difficult to achieve.

  12. Impact of rainfall spatial distribution on rainfall-runoff modelling efficiency and initial soil moisture conditions estimation

    Directory of Open Access Journals (Sweden)

    Y. Tramblay

    2011-01-01

    Full Text Available A good knowledge of rainfall is essential for hydrological operational purposes such as flood forecasting. The objective of this paper was to analyze, on a relatively large sample of flood events, how rainfall-runoff modeling using an event-based model can be sensitive to the use of spatial rainfall compared to mean areal rainfall over the watershed. This comparison was based not only on the model's efficiency in reproducing the flood events but also on the estimation of the initial conditions by the model using different rainfall inputs. The initial soil moisture conditions are indeed a key factor for flood modeling in the Mediterranean region. In order to provide a soil moisture index that could be related to the initial condition of the model, the soil moisture output of the Safran-Isba-Modcou (SIM) model developed by Météo-France was used. This study was carried out in the Gardon catchment (545 km2) in southern France, using uniform or spatial rainfall data derived from rain gauges and radar for 16 flood events. The event-based model considered combines the SCS runoff production model and the Lag and Route routing model. Results show that spatial rainfall increases the efficiency of the model. The advantage of using spatial rainfall is marked for some of the largest flood events. In addition, the relationship between the model's initial condition and the external predictor of soil moisture provided by the SIM model is better when using spatial rainfall, in particular when using spatial radar data, with R2 values increasing from 0.61 to 0.72.
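
    For reference, a sketch of the runoff-production step only, using the standard SCS curve-number relation (the paper's event-based model couples this with a Lag and Route routing component and a calibrated initial-condition parameter; the rainfall depth and curve number below are illustrative).

      def scs_runoff(p_mm, curve_number):
          """Direct runoff depth (mm) for event rainfall p_mm, standard SCS-CN form."""
          s = 25400.0 / curve_number - 254.0   # potential maximum retention (mm)
          ia = 0.2 * s                         # initial abstraction
          if p_mm <= ia:
              return 0.0
          return (p_mm - ia) ** 2 / (p_mm - ia + s)

      print(scs_runoff(p_mm=80.0, curve_number=75))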

  13. Novel thermal efficiency-based model for determination of thermal conductivity of membrane distillation membranes

    International Nuclear Information System (INIS)

    Vanneste, Johan; Bush, John A.; Hickenbottom, Kerri L.; Marks, Christopher A.; Jassby, David

    2017-01-01

    Development and selection of membranes for membrane distillation (MD) could be accelerated if all performance-determining characteristics of the membrane could be obtained during MD operation, without the need to resort to specialized or cumbersome porosity or thermal conductivity measurement techniques. By redefining the thermal efficiency, the Schofield method could be adapted to describe the flux without prior knowledge of membrane porosity, thickness, or thermal conductivity. A total of 17 commercially available membranes were analyzed in terms of flux and thermal efficiency to assess their suitability for application in MD. The thermal-efficiency-based model described the flux with an average %RMSE of 4.5%, which was in the same range as the standard deviation of the measured flux. The redefinition of the thermal efficiency also enabled MD to be used as a novel thermal conductivity measurement device for thin porous hydrophobic films that cannot be measured with the conventional laser flash diffusivity technique.

  14. Efficiency of alternative MCMC strategies illustrated using the reaction norm model.

    Science.gov (United States)

    Shariati, M; Sorensen, D

    2008-06-01

    The Markov chain Monte Carlo (MCMC) strategy provides remarkable flexibility for fitting complex hierarchical models. However, when parameters are highly correlated in their posterior distributions and their number is large, a particular MCMC algorithm may perform poorly and the resulting inferences may be affected. The objective of this study was to compare the efficiency (in terms of the asymptotic variance of features of posterior distributions of chosen parameters, and in terms of computing cost) of six MCMC strategies to sample parameters using simulated data generated with a reaction norm model with unknown covariates as an example. The six strategies are single-site Gibbs updates (SG), single-site Gibbs sampler for updating transformed (a priori independent) additive genetic values (TSG), pairwise Gibbs updates (PG), blocked (all location parameters are updated jointly) Gibbs updates (BG), Langevin-Hastings (LH) proposals, and finally Langevin-Hastings proposals for updating transformed additive genetic values (TLH). The ranking of the methods in terms of asymptotic variance is affected by the degree of the correlation structure of the data and by the true values of the parameters, and no method comes out as an overall winner across all parameters. TSG and BG show very good performance in terms of asymptotic variance especially when the posterior correlation between genetic effects is high. In terms of computing cost, TSG performs best except for dispersion parameters in the low correlation scenario where SG was the best strategy. The two LH proposals could not compete with any of the Gibbs sampling algorithms. In this study it was not possible to find an MCMC strategy that performs optimally across the range of target distributions and across all possible values of parameters. However, when the posterior correlation between parameters is high, TSG, BG and even PG show better mixing than SG.
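
    The mixing issue described above is easy to reproduce on a toy target. The sketch below (a single-site Gibbs sampler on a correlated bivariate normal, not the reaction norm model itself) illustrates why high posterior correlation degrades single-site updates.

      import numpy as np

      rng = np.random.default_rng(0)
      rho = 0.95
      x = np.zeros(2)
      samples = np.empty((5000, 2))
      for i in range(len(samples)):
          # full conditionals of a standard bivariate normal with correlation rho
          x[0] = rng.normal(rho * x[1], np.sqrt(1 - rho ** 2))
          x[1] = rng.normal(rho * x[0], np.sqrt(1 - rho ** 2))
          samples[i] = x

      lag1 = np.corrcoef(samples[:-1, 0], samples[1:, 0])[0, 1]
      print("lag-1 autocorrelation of x1:", round(lag1, 3))   # close to 1 => slow mixing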

  15. Application of Pareto-efficient combustion modeling framework to large eddy simulations of turbulent reacting flows

    Science.gov (United States)

    Wu, Hao; Ihme, Matthias

    2017-11-01

    The modeling of turbulent combustion requires the consideration of different physico-chemical processes, involving a vast range of time and length scales as well as a large number of scalar quantities. To reduce the computational complexity, various combustion models are developed. Many of them can be abstracted using a lower-dimensional manifold representation. A key issue in using such lower-dimensional combustion models is the assessment as to whether a particular combustion model is adequate in representing a certain flame configuration. The Pareto-efficient combustion (PEC) modeling framework was developed to perform dynamic combustion model adaptation based on various existing manifold models. In this work, the PEC model is applied to a turbulent flame simulation, in which a computationally efficient flamelet-based combustion model is used together with a high-fidelity finite-rate chemistry model. The combination of these two models achieves high accuracy in predicting pollutant species at a relatively low computational cost. The relevant numerical methods and parallelization techniques are also discussed in this work.

  16. Health effects of home energy efficiency interventions in England: a modelling study

    Science.gov (United States)

    Milner, James; Chalabi, Zaid; Das, Payel; Jones, Benjamin; Shrubsole, Clive; Davies, Mike; Wilkinson, Paul

    2015-01-01

    Objective To assess potential public health impacts of changes to indoor air quality and temperature due to energy efficiency retrofits in English dwellings to meet 2030 carbon reduction targets. Design Health impact modelling study. Setting England. Participants English household population. Intervention Three retrofit scenarios were modelled: (1) fabric and ventilation retrofits installed assuming building regulations are met; (2) as with scenario (1) but with additional ventilation for homes at risk of poor ventilation; (3) as with scenario (1) but with no additional ventilation to illustrate the potential risk of weak regulations and non-compliance. Main outcome Primary outcomes were changes in quality adjusted life years (QALYs) over 50 years from cardiorespiratory diseases, lung cancer, asthma and common mental disorders due to changes in indoor air pollutants, including secondhand tobacco smoke, PM2.5 from indoor and outdoor sources, radon, mould, and indoor winter temperatures. Results The modelling study estimates showed that scenario (1) resulted in positive effects on net mortality and morbidity of 2241 (95% credible intervals (CI) 2085 to 2397) QALYs per 10 000 persons over 50 years follow-up due to improved temperatures and reduced exposure to indoor pollutants, despite an increase in exposure to outdoor-generated particulate matter with a diameter of 2.5 μm or less (PM2.5). Scenario (2) resulted in a negative impact of −728 (95% CI −864 to −592) QALYs per 10 000 persons over 50 years due to an overall increase in indoor pollutant exposures. Scenario (3) resulted in −539 (95% CI −678 to -399) QALYs per 10 000 persons over 50 years follow-up due to an increase in indoor exposures despite the targeting of pollutants. Conclusions If properly implemented alongside ventilation, energy efficiency retrofits in housing can improve health by reducing exposure to cold and air pollutants. Maximising the health benefits requires careful

  17. Solving the dynamic equations of a 3-PRS Parallel Manipulator for efficient model-based designs

    Directory of Open Access Journals (Sweden)

    M. Díaz-Rodríguez

    2016-01-01

    Full Text Available The introduction of parallel manipulator systems for different application areas has led many researchers to develop techniques for obtaining accurate and computationally efficient inverse dynamic models. Several subject areas make use of these models, such as optimal design, parameter identification, model-based control, and even actuation redundancy approaches. In this context, by revisiting some of the current computationally efficient solutions for obtaining the inverse dynamic model of parallel manipulators, this paper compares three different methods for inverse dynamic modelling of a general, lower-mobility, 3-PRS parallel manipulator. The first method obtains the inverse dynamic model by describing the manipulator as three open kinematic chains. Then, vector-loop closure constraints are introduced to obtain the relationship between the dynamics of the open kinematic chains (as in a serial robot) and the closed chains (as in a parallel robot). The second method exploits certain characteristics of parallel manipulators such that the platform and the links are considered as independent subsystems. The proposed third method is similar to the second but uses a different Jacobian matrix formulation in order to reduce computational complexity. Analysis of these numerical formulations will provide fundamental software support for efficient model-based designs. In addition, the computational cost reduction presented in this paper can also serve as an effective guideline for the optimal design of this type of manipulator and for real-time embedded control.

  18. Evaluating transit operator efficiency: An enhanced DEA model with constrained fuzzy-AHP cones

    Directory of Open Access Journals (Sweden)

    Xin Li

    2016-06-01

    Full Text Available This study addresses efforts to combine the Analytic Hierarchy Process (AHP) with Data Envelopment Analysis (DEA) to deliver a robust enhanced DEA model for transit operator efficiency assessment. The proposed model is designed to better capture inherent preference information over input and output indicators by adding constraint cones to the conventional DEA model. A revised fuzzy-AHP model is employed to generate the cones, where the proposed model features the integration of fuzzy logic with a hierarchical AHP structure to: (1) normalize the scales of different evaluation indicators, (2) construct the matrix of pairwise comparisons with fuzzy sets, and (3) optimize the weight of each criterion with a non-linear programming model. With the introduction of cone-based constraints, the new system offers advantages in accounting for the interaction among indicators when evaluating the performance of transit operators. To illustrate the applicability of the proposed approach, a real case in Nanjing City, the capital of China's Jiangsu Province, was selected to assess the efficiencies of seven bus companies based on 2009 and 2010 datasets. A comparison between the conventional DEA and the enhanced DEA was also conducted to clarify the new system's superiority. Results reveal that the proposed model is more applicable in evaluating transit operators' efficiency, thus encouraging a broader range of applications.

  19. A Computationally-Efficient Numerical Model to Characterize the Noise Behavior of Metal-Framed Walls

    Directory of Open Access Journals (Sweden)

    Arun Arjunan

    2015-08-01

    Full Text Available Architects, designers, and engineers are making great efforts to design acoustically-efficient metal-framed walls, minimizing acoustic bridging. Therefore, efficient simulation models to predict the acoustic insulation complying with ISO 10140 are needed at a design stage. In order to achieve this, a numerical model consisting of two fluid-filled reverberation chambers, partitioned using a metal-framed wall, is to be simulated at one-third-octaves. This produces a large simulation model consisting of several millions of nodes and elements. Therefore, efficient meshing procedures are necessary to obtain better solution times and to effectively utilise computational resources. Such models should also demonstrate effective Fluid-Structure Interaction (FSI along with acoustic-fluid coupling to simulate a realistic scenario. In this contribution, the development of a finite element frequency-dependent mesh model that can characterize the sound insulation of metal-framed walls is presented. Preliminary results on the application of the proposed model to study the geometric contribution of stud frames on the overall acoustic performance of metal-framed walls are also presented. It is considered that the presented numerical model can be used to effectively visualize the noise behaviour of advanced materials and multi-material structures.

  20. Trust models for efficient communication in Mobile Cloud Computing and their applications to e-Commerce

    Science.gov (United States)

    Pop, Florin; Dobre, Ciprian; Mocanu, Bogdan-Costel; Citoteanu, Oana-Maria; Xhafa, Fatos

    2016-11-01

    Managing the large dimensions of data processed in distributed systems that are formed by datacentres and mobile devices has become a challenging issue with an important impact on the end-user. Therefore, the management process of such systems can be achieved efficiently by using uniform overlay networks, interconnected through secure and efficient routing protocols. The aim of this article is to advance our previous work with a novel trust model based on a reputation metric that actively uses the social links between users and the model of interaction between them. We present and evaluate an adaptive model for the trust management in structured overlay networks, based on a Mobile Cloud architecture and considering a honeycomb overlay. Such a model can be useful for supporting advanced mobile market-share e-Commerce platforms, where users collaborate and exchange reliable information about, for example, products of interest and supporting ad-hoc business campaigns

  1. Efficient Multi-Valued Bounded Model Checking for LTL over Quasi-Boolean Algebras

    OpenAIRE

    Andrade, Jefferson O.; Kameyama, Yukiyoshi

    2012-01-01

    Multi-valued Model Checking extends classical, two-valued model checking to multi-valued logic such as Quasi-Boolean logic. The added expressivity is useful in dealing with such concepts as incompleteness and uncertainty in target systems, while it comes with the cost of time and space. Chechik and others proposed an efficient reduction from multi-valued model checking problems to two-valued ones, but to the authors' knowledge, no study was done for multi-valued bounded model checking. In thi...

  2. A branch scale analytical model for predicting the vegetation collection efficiency of ultrafine particles

    Science.gov (United States)

    Lin, M.; Katul, G. G.; Khlystov, A.

    2012-05-01

    The removal of ultrafine particles (UFP) by vegetation is now receiving significant attention given their role in cloud physics, human health and respiratory related diseases. Vegetation is known to be a sink for UFP, prompting interest in their collection efficiency. A number of models have tackled the UFP collection efficiency of an isolated leaf or a flat surface; however, up-scaling these theories to the ecosystem level has resisted complete theoretical treatment. To progress on a narrower scope of this problem, simultaneous experimental and theoretical investigations are carried out at the “intermediate” branch scale. Such a scale retains the large number of leaves and their interaction with the flow without the heterogeneities and added geometric complexities encountered within ecosystems. The experiments focused on the collection efficiencies of UFP in the size range 12.6-102 nm for pine and juniper branches in a wind tunnel facility. Scanning mobility particle sizers were used to measure the concentration of each diameter class of UFP upstream and downstream of the vegetation branches thereby allowing the determination of the UFP vegetation collection efficiencies. The UFP vegetation collection efficiency was measured at different wind speeds (0.3-1.5 m s-1), packing density (i.e. volume fraction of leaf or needle fibers; 0.017 and 0.040 for pine and 0.037, 0.055 for juniper), and branch orientations. These measurements were then used to investigate the performance of a proposed analytical model that predicts the branch-scale collection efficiency using conventional canopy properties such as the drag coefficient and leaf area density. Despite the numerous simplifications employed, the proposed analytical model agreed with the wind tunnel measurements mostly to within 20%. This analytical tractability can benefit future air quality and climate models incorporating UFP.

  3. Design, characterization and modelling of high efficient solar powered lighting systems

    DEFF Research Database (Denmark)

    Svane, Frederik; Nymann, Peter; Poulsen, Peter Behrensdorff

    2016-01-01

    This paper discusses some of the major challenges in the development of L2L (Light-2-Light) products: the lack of efficient converter electronics, of modelling tools for dimensioning, and of characterization facilities to support the successful development of the products. We report th...

  4. Model-Based Analysis and Efficient Operation of a Glucose Isomerization Reactor Plant

    DEFF Research Database (Denmark)

    Papadakis, Emmanouil; Madsen, Ulrich; Pedersen, Sven

    2015-01-01

    efficiency. The objective of this study is the application of the developed framework to an industrial case study of a glucose isomerization (GI) reactor plant that is part of a corn refinery, with the aim of improving the productivity of the process. Therefore, a multi-scale reactor model...

  5. Navigational efficiency in a biased and correlated random walk model of individual animal movement.

    Science.gov (United States)

    Bailey, Joseph D; Wallis, Jamie; Codling, Edward A

    2018-01-01

    Understanding how an individual animal is able to navigate through its environment is a key question in movement ecology that can give insight into observed movement patterns and the mechanisms behind them. Efficiency of navigation is important for behavioral processes at a range of different spatio-temporal scales, including foraging and migration. Random walk models provide a standard framework for modeling individual animal movement and navigation. Here we consider a vector-weighted biased and correlated random walk (BCRW) model for directed movement (taxis), where external navigation cues are balanced with forward persistence. We derive a mathematical approximation of the expected navigational efficiency for any BCRW of this form and confirm the model predictions using simulations. We demonstrate how the navigational efficiency is related to the weighting given to forward persistence and external navigation cues, and highlight the counter-intuitive result that for low (but realistic) levels of error on forward persistence, a higher navigational efficiency is achieved by giving more weighting to this indirect navigation cue rather than direct navigational cues. We discuss and interpret the relevance of these results for understanding animal movement and navigation strategies. © 2017 by the Ecological Society of America.
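
    A hedged sketch of a vector-weighted BCRW of the general form described above (the weights, noise levels, and the use of net progress toward the goal per unit path length as the efficiency measure are illustrative choices, not the paper's exact formulation).

      import numpy as np

      rng = np.random.default_rng(0)

      def bcrw_efficiency(w_bias, n_steps=2000, cue_sd=0.6, persist_sd=0.2):
          goal_angle = 0.0                     # goal lies along the positive x-axis
          heading = rng.uniform(-np.pi, np.pi)
          pos = np.zeros(2)
          for _ in range(n_steps):
              cue = goal_angle + rng.normal(0, cue_sd)        # noisy external cue
              persist = heading + rng.normal(0, persist_sd)   # noisy forward persistence
              vec = (w_bias * np.array([np.cos(cue), np.sin(cue)])
                     + (1 - w_bias) * np.array([np.cos(persist), np.sin(persist)]))
              heading = np.arctan2(vec[1], vec[0])
              pos += np.array([np.cos(heading), np.sin(heading)])
          return pos[0] / n_steps              # progress toward the goal per unit path

      for w in (0.2, 0.5, 0.8):
          print(f"bias weight {w}: efficiency {bcrw_efficiency(w):.3f}")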

  6. 78 FR 35073 - Compass Efficient Model Portfolios, LLC and Compass EMP Funds Trust; Notice of Application

    Science.gov (United States)

    2013-06-11

    ... Balanced Fund, Compass EMP Multi-Asset Growth Fund, Compass EMP Alternative Strategies Fund, Compass EMP Balanced Volatility Weighted Fund, Compass EMP Growth Volatility Weighted Fund, and Compass EMP... Efficient Model Portfolios, LLC and Compass EMP Funds Trust; Notice of Application June 4, 2013. AGENCY...

  7. Compilation Of An Econometric Human Resource Efficiency Model For Project Management Best Practices

    OpenAIRE

    G. van Zyl; P. Venier

    2006-01-01

    The aim of the paper is to introduce a human resource efficiency model in order to rank the most important human resource driving forces for project management best practices. The results of the model will demonstrate how the human resource component of project management acts as the primary function to enhance organizational performance, codified through improved logical end-state programmes, work ethics and process contributions. Given the hypothesis that project management best practices i...

  8. Efficient Markov Chain Monte Carlo Sampling for Hierarchical Hidden Markov Models

    OpenAIRE

    Turek, Daniel; de Valpine, Perry; Paciorek, Christopher J.

    2016-01-01

    Traditional Markov chain Monte Carlo (MCMC) sampling of hidden Markov models (HMMs) involves latent states underlying an imperfect observation process, and generates posterior samples for top-level parameters concurrently with nuisance latent variables. When potentially many HMMs are embedded within a hierarchical model, this can result in prohibitively long MCMC runtimes. We study combinations of existing methods, which are shown to vastly improve computational efficiency for these hierarchi...

  9. AHS Model: Efficient Topological Operators for a Sensor Web Publish/Subscribe System

    Directory of Open Access Journals (Sweden)

    Chih-Yuan Huang

    2017-02-01

    Full Text Available The Worldwide Sensor Web has been applied for monitoring the physical world with spatial and temporal scales that were impossible in the past. With the development of sensor technologies and interoperable open standards, sensor webs generate tremendous volumes of priceless data, enabling scientists to observe previously unobservable phenomena. With its powerful monitoring capability, the sensor web is able to capture time-critical events and provide up-to-date information to support decision-making. In order to harvest the full potential of the sensor web, efficiently processing sensor web data and providing timely notifications are necessary. Therefore, we aim to design a software component applying the publish/subscribe communication model for the sensor web. However, as sensor web data are geospatial in nature, existing topological operators are inefficient when processing a large number of geometries. This paper presents the Aggregated Hierarchical Spatial Model (AHS) to efficiently determine topological relationships between sensor data and predefined query objects. By using a predefined hierarchical spatial framework to index geometries, the AHS model can match new sensor data with all subscriptions in a single process to improve the query performance. Based on our evaluation results, the query latency of the AHS model increases 2.5 times more slowly than that of PostGIS. As a result, we believe that the AHS model is able to more efficiently process topological operators in a sensor web publish/subscribe system.

  10. An Efficient Algorithm for Modelling Duration in Hidden Markov Models, with a Dramatic Application

    DEFF Research Database (Denmark)

    Hauberg, Søren; Sloth, Jakob

    2008-01-01

    For many years, the hidden Markov model (HMM) has been one of the most popular tools for analysing sequential data. One frequently used special case is the left-right model, in which the order of the hidden states is known. If knowledge of the duration of a state is available, it is not possible to represent it explicitly with an HMM. Methods for modelling duration with HMMs do exist (Rabiner in Proc. IEEE 77(2):257-286, [1989]), but they come at the price of increased computational complexity. Here we present an efficient and robust algorithm for modelling duration in HMMs, and this algorithm...

  11. An Efficient Modelling Approach for Prediction of Porosity Severity in Composite Structures

    Science.gov (United States)

    Bedayat, Houman; Forghani, Alireza; Hickmott, Curtis; Roy, Martin; Palmieri, Frank; Grimsley, Brian; Coxon, Brian; Fernlund, Goran

    2017-01-01

    Porosity, as a manufacturing process-induced defect, strongly affects the mechanical properties of cured composites. Multiple phenomena affect the formation of porosity during the cure process. Porosity sources include entrapped air, volatiles and off-gassing, as well as bag and tool leaks. Porosity sinks are the mechanisms that contribute to reducing porosity, including gas transport, void shrinkage and collapse, as well as resin flow into void space. Despite the significant progress in porosity research, the fundamentals of porosity in composites are not yet fully understood. The highly coupled multi-physics and multi-scale nature of porosity makes it a complicated problem to predict. Experimental evidence shows that the resin pressure history throughout the cure cycle plays an important role in the porosity of the cured part. Maintaining high resin pressure promotes void shrinkage and collapse and keeps volatiles in solution, thus preventing off-gassing and bubble formation. This study summarizes the latest development of an efficient FE modeling framework to simulate the gas and resin transport mechanisms that are among the major phenomena contributing to porosity.

  12. Generalized linear discriminant analysis: a unified framework and efficient model selection.

    Science.gov (United States)

    Ji, Shuiwang; Ye, Jieping

    2008-10-01

    High-dimensional data are common in many domains, and dimensionality reduction is the key to cope with the curse-of-dimensionality. Linear discriminant analysis (LDA) is a well-known method for supervised dimensionality reduction. When dealing with high-dimensional and low sample size data, classical LDA suffers from the singularity problem. Over the years, many algorithms have been developed to overcome this problem, and they have been applied successfully in various applications. However, there is a lack of a systematic study of the commonalities and differences of these algorithms, as well as their intrinsic relationships. In this paper, a unified framework for generalized LDA is proposed, which elucidates the properties of various algorithms and their relationships. Based on the proposed framework, we show that the matrix computations involved in LDA-based algorithms can be simplified so that the cross-validation procedure for model selection can be performed efficiently. We conduct extensive experiments using a collection of high-dimensional data sets, including text documents, face images, gene expression data, and gene expression pattern images, to evaluate the proposed theories and algorithms.

  13. Shifting Landscapes: The Impact of Centralized and Decentralized Nursing Station Models on the Efficiency of Care.

    Science.gov (United States)

    Fay, Lindsey; Carll-White, Allison; Schadler, Aric; Isaacs, Kathy B; Real, Kevin

    2017-10-01

    The focus of this research was to analyze the impact of decentralized and centralized hospital design layouts on the delivery of efficient care and the resultant level of caregiver satisfaction. An interdisciplinary team conducted a multiphased pre- and postoccupancy evaluation of a cardiovascular service line in an academic hospital that moved from a centralized to a decentralized model. This study examined the impact of walkability, room usage, allocation of time, and visibility to better understand efficiency in the care environment. A mixed-methods data collection approach was utilized, which included pedometer measurements of staff walking distances, room usage data, time studies in patient rooms and nurses' stations, visibility counts, and staff questionnaires yielding qualitative and quantitative results. Overall, the data comparing the centralized and decentralized models yielded mixed results. This study's centralized design was rated significantly higher in its ability to support teamwork and efficient patient care with decreased staff walking distances. The decentralized unit design was found to positively influence proximity to patients in a larger design footprint and contribute to increased visits to and time spent in patient rooms. Among the factors contributing to caregiver efficiency and satisfaction are nursing station design, an integrated team approach, and the overall physical layout of the space, through their effects on walkability, allocation of caregiver time, and visibility. However, unit design alone does not determine efficiency, suggesting that designers must consider the broader implications of a culture of care and processes.

  14. Modelling university human capital formation and measuring its efficiency: evidence from Florence University

    Directory of Open Access Journals (Sweden)

    Guido Ferrari

    2008-03-01

    Full Text Available In this paper, an analysis of the technical efficiency in the formation of the 2,236 students who graduated in 1998 at the University of Florence, that is, in university human capital formation, is performed by modelling the production process as one in which the student produces himself as a graduate. The tool utilized is the DEA methodology, under the hypothesis of variable returns to scale. The production factors are represented by a set of human and capital resources provided by the faculties, along with individual factors represented by the secondary school diploma score and by the length of university study. The analysis is conducted both for the overall set of graduates and at the faculty level, in order to emphasize the contribution provided by the latter to efficiency. There is evidence that the students graduated with an average efficiency greater than 90% and therefore with an unexploited productive capacity lower than 10%. At the faculty level, Formation Science appears to be the most efficient, whereas Economics is the least efficient. By and large, the contribution to efficiency provided by faculties is greater than that brought by students' individual characteristics.

  15. Efficient extraction of thin-film thermal parameters from numerical models via parametric model order reduction

    NARCIS (Netherlands)

    Bechtold, T.; Hohlfeld, D.; Rudnyi, E.B.; Günther, M.

    2010-01-01

    In this paper we present a novel highly efficient approach to determine material properties from measurement results. We apply our method to thermal properties of thin-film multilayers with three different materials, amorphous silicon, silicon nitride and silicon oxide. The individual material

  16. Efficient Modeling of Coulomb Interaction Effect on Exciton in Crystal-Phase Nanowire Quantum Dot

    DEFF Research Database (Denmark)

    Taherkhani, Masoomeh; Gregersen, Niels; Mørk, Jesper

    2016-01-01

    The binding energy and oscillator strength of the ground-state exciton in a type-II quantum dot (QD) are calculated using a post-Hartree-Fock method known as the configuration interaction (CI) method, which is significantly more efficient than conventional approaches such as ab initio methods. We show...

  17. Model of Efficiency Assessment of Regulation In The Banking Seсtor

    Directory of Open Access Journals (Sweden)

    Irina V. Larionova

    2014-01-01

    Full Text Available In this article, the modern system of regulation of the national banking sector is examined; according to the author, it needs theoretical grounding, structuring, and disclosure of what constitutes efficiency of functioning. The regulatory system is considered on a systemic basis, as a set of elements and the mechanism of their interaction, which are formed taking into account the target reference points of regulation. It is emphasized that regulation contains a contradiction: achieving financial stability of the banking sector, as a rule, constrains economic growth. The need to develop theoretical ideas about the efficiency of banking-sector regulation gains special relevance in light of recent events connected with the revocation of commercial banks' banking licenses, the high cost of credit resources for economic agents, and the insignificant contribution of the banking sector to economic growth. The author proposes criteria for the efficiency of banking-sector regulation: functional, operational, and social and economic efficiency. Functional efficiency reflects the ability of each regulatory subsystem to carry out the functions prescribed by law. Operational efficiency describes the appropriateness of the expenses incurred by the regulator and commercial banks in connection with regulatory influence. Finally, social and economic efficiency concerns the degree to which the banking sector's activity meets the requirements of the national economy, and the responsibility of the banking business to society. For each criterion of efficiency of banking-sector regulation, a set of quantitative and qualitative indicators is offered, allowing a corresponding assessment of the working model of crediting. The aggregated expert assessment of the Russian system of regulation of the banking sector

  18. Reduced Fracture Finite Element Model Analysis of an Efficient Two-Scale Hybrid Embedded Fracture Model

    KAUST Repository

    Amir, Sahar Z.

    2017-06-09

    A Hybrid Embedded Fracture (HEF) model was developed to reduce various computational costs while maintaining physical accuracy (Amir and Sun, 2016). HEF splits the computations into a fine scale and a coarse scale. The fine scale solves analytically for the matrix-fracture flux exchange parameter. The coarse scale solves for the properties of the entire system. In the literature, fractures were assumed to be either vertical or horizontal for simplification (Warren and Root, 1963), and the matrix-fracture flux exchange parameter was given a few equations built on that assumption (Kazemi, 1968; Lemonnier and Bourbiaux, 2010). However, such simplified cases do not apply directly to actual random fracture shapes, directions, orientations, etc. This paper shows that the HEF fine-scale analytic solution (Amir and Sun, 2016) reproduces the flux exchange parameter found in the literature for the vertical and horizontal fracture cases. For other fracture cases, the flux exchange parameter changes according to the angle, slope, direction, etc. This conclusion arises from the analysis of both the Discrete Fracture Network (DFN) and the HEF schemes. The behavior of both schemes is analyzed under exactly the same fracture conditions, and the results are shown and discussed. Then, a generalization is illustrated for any slightly compressible single-phase fluid within fractured porous media and its results are discussed.

  19. Monitoring Crop Productivity over the U.S. Corn Belt using an Improved Light Use Efficiency Model

    Science.gov (United States)

    Wu, X.; Xiao, X.; Zhang, Y.; Qin, Y.; Doughty, R.

    2017-12-01

    Large-scale monitoring of crop yield is of great significance for forecasting food production and prices and for ensuring food security. Satellite data provide temporally and spatially continuous information that, by itself or in combination with other data or models, makes it possible to monitor and understand agricultural productivity regionally. In this study, we first used an improved light use efficiency model, the Vegetation Photosynthesis Model (VPM), to simulate gross primary production (GPP). Model evaluation showed that the simulated GPP (GPPVPM) captured well the spatio-temporal variation of GPP derived from FLUXNET sites. We then applied GPPVPM to monitor crop productivity for corn and soybean over the U.S. Corn Belt and benchmarked it against county-level crop yield statistics, finding that the VPM-based approach provides good estimates (R2 = 0.88, slope = 1.03). We further examined the impacts of climate extremes on crop productivity and carbon use efficiency. The study indicates the great potential of VPM for estimating crop yield and for understanding crop yield responses to climate variability and change.
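
    The VPM equations are not reproduced in this record; as a rough, illustrative sketch of the commonly published VPM structure (GPP as the product of light use efficiency, down-regulation scalars and absorbed light), with placeholder parameter values rather than those used in the study:

        import numpy as np

        def vpm_gpp(par, fpar_chl, t_air, lswi, lswi_max,
                    eps0=0.05, t_min=-1.0, t_opt=25.0, t_max=48.0):
            """Toy Vegetation Photosynthesis Model (VPM) sketch.

            GPP = eps0 * Tscalar * Wscalar * FPARchl * PAR.
            All parameter values here are placeholders, not those of the study above.
            """
            t_scalar = ((t_air - t_min) * (t_air - t_max)) / (
                (t_air - t_min) * (t_air - t_max) - (t_air - t_opt) ** 2)
            t_scalar = np.clip(t_scalar, 0.0, 1.0)
            w_scalar = (1.0 + lswi) / (1.0 + lswi_max)   # water-stress scalar from LSWI
            return eps0 * t_scalar * w_scalar * fpar_chl * par

        # example: one 8-day composite for a single pixel
        print(vpm_gpp(par=110.0, fpar_chl=0.6, t_air=22.0, lswi=0.1, lswi_max=0.35))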

  20. An inverse voter model for co-evolutionary networks: Stationary efficiency and phase transitions

    International Nuclear Information System (INIS)

    Zhu Chenping; Kong Hui; Li Li; Gu Zhiming; Xiong Shijie

    2011-01-01

    In some co-evolutionary networks, the nodes continually flip their states between two opposite ones, changing the types of their links to other nodes correspondingly. Meanwhile, the link-rewiring and state-flipping processes feed back on each other, and only the links between nodes in opposite states are productive in generating flow on the network. We propose an inverse voter model to capture the basic features of such systems. New phase transitions from a fully efficient state to a deficient state are found by both analysis and simulations starting from random graphs and small-world networks. We also suggest a new way to measure the efficiency of networks.
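
    The precise update rules of the paper are not given in this record; the sketch below is only a generic co-evolutionary simulation under assumed rules (a node flips to the state opposite to that of a random neighbour, or a link is rewired), with "efficiency" measured as the fraction of productive opposite-state links:

        import random
        import networkx as nx

        def efficiency(g, state):
            """Fraction of links joining opposite-state nodes ('productive' links)."""
            m = g.number_of_edges()
            active = sum(1 for u, v in g.edges() if state[u] != state[v])
            return active / m if m else 0.0

        def simulate(n=200, k=6, phi=0.3, steps=20000, seed=1):
            rng = random.Random(seed)
            g = nx.watts_strogatz_graph(n, k, 0.1, seed=seed)
            state = {v: rng.choice([-1, 1]) for v in g}
            for _ in range(steps):
                u = rng.choice(list(g.nodes()))
                nbrs = list(g.neighbors(u))
                if not nbrs:
                    continue
                v = rng.choice(nbrs)
                if rng.random() < phi:
                    # rewiring move: detach the link u-v and attach u to a random node
                    w = rng.choice(list(g.nodes()))
                    if w != u and not g.has_edge(u, w):
                        g.remove_edge(u, v)
                        g.add_edge(u, w)
                else:
                    # assumed "inverse voter" flip: u takes the state opposite to neighbour v
                    state[u] = -state[v]
            return efficiency(g, state)

        print(simulate())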

  1. An analytical model of evaporation efficiency for unsaturated soil surfaces with an arbitrary thickness

    OpenAIRE

    Merlin, Olivier; Al Bitar, Ahmad; Rivalland, Vincent; Béziat, Pierre; Ceschia, Eric; Dedieu, Gérard

    2010-01-01

    doi: 10.1175/2010JAMC2418.1; Analytical expressions of evaporative efficiency over bare soil (defined as the ratio of actual to potential soil evaporation) have been limited to soil layers with a fixed depth and/or to specific atmospheric conditions. To fill the gap, a new analytical model is developed for arbitrary soil thicknesses and varying boundary layer conditions. The soil evaporative efficiency is written [0.5 – 0.5 cos(πθL/ θmax)]^P with θL being the water content in the soil layer o...
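
    The quoted closed form can be evaluated directly; a minimal sketch, with the exponent P and the soil parameters treated as placeholders to be fitted for a given layer thickness and atmospheric forcing:

        import numpy as np

        def evaporative_efficiency(theta_l, theta_max, p=1.0):
            """Soil evaporative efficiency beta = [0.5 - 0.5*cos(pi*theta_L/theta_max)]**P,
            the closed form quoted in the abstract; P and theta_max are placeholders here."""
            theta_l = np.minimum(theta_l, theta_max)
            return (0.5 - 0.5 * np.cos(np.pi * theta_l / theta_max)) ** p

        theta = np.linspace(0.0, 0.30, 7)      # volumetric water content (m3/m3)
        print(evaporative_efficiency(theta, theta_max=0.30, p=1.0))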

  2. Numerical modelling of high efficiency InAs/GaAs intermediate band solar cell

    Science.gov (United States)

    Imran, Ali; Jiang, Jianliang; Eric, Debora; Yousaf, Muhammad

    2018-01-01

    Quantum dot (QD) intermediate band solar cells (IBSCs) are among the most attractive candidates for the next generation of photovoltaic applications. In this paper, a theoretical model of an InAs/GaAs device is proposed, in which we calculate the effect of varying the thickness of the intrinsic and IB layers on the efficiency of the solar cell using detailed balance theory. The IB energies are optimized for different IB layer thicknesses. A maximum efficiency of 46.6% is calculated for the IB material under maximum optical concentration.

  3. A resource allocation model to support efficient air quality management in South Africa

    Directory of Open Access Journals (Sweden)

    U Govender

    2009-06-01

    Full Text Available Research into management interventions that create the required enabling environment for growth and development in South Africa is both timely and appropriate. In the research reported in this paper, the authors investigated the level of efficiency of the Air Quality Units within the three spheres of government, viz. the National, Provincial, and Local Departments of Environmental Management in South Africa, with a view to developing a resource allocation model. The inputs to the model were calculated from the actual man-hours spent on twelve selected activities relating to project management, knowledge management and change management. The outputs assessed were aligned to the requirements of the mandates of these Departments. Several models were explored using multiple regression and stepwise techniques, and the model that best explained the efficiency of the organisations from the input data was selected. Logistic regression analysis was identified as the most appropriate tool. This model is used to predict the required resources per Air Quality Unit in the different spheres of government, in an attempt to support and empower the air quality regime to achieve improved output efficiency.
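
    The study's variables and data are not reproduced in this record; purely as an illustration of the logistic-regression step, with invented man-hour inputs and synthetic labels:

        # Hypothetical illustration only: variable names and data are invented,
        # not those of the South African Air Quality Units study.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 60
        X = np.column_stack([
            rng.normal(120, 30, n),   # man-hours on project management
            rng.normal(80, 20, n),    # man-hours on knowledge management
            rng.normal(40, 10, n),    # man-hours on change management
        ])
        # 1 = unit meets its mandated outputs, 0 = it does not (synthetic labels)
        y = (0.02 * X[:, 0] + 0.03 * X[:, 1] - rng.normal(2.5, 1.0, n) > 2.0).astype(int)

        model = LogisticRegression().fit(X, y)
        print(model.coef_, model.intercept_)
        print(model.predict_proba(X[:3]))      # predicted probabilities of meeting outputs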

  4. A Power Efficient Exaflop Computer Design for Global Cloud System Resolving Climate Models.

    Science.gov (United States)

    Wehner, M. F.; Oliker, L.; Shalf, J.

    2008-12-01

    Exascale computers would allow routine ensemble modeling of the global climate system at the cloud system resolving scale. Power and cost requirements of traditional architecture systems are likely to delay such capability for many years. We present an alternative route to the exascale using embedded processor technology to design a system optimized for ultra high resolution climate modeling. These power efficient processors, used in consumer electronic devices such as mobile phones, portable music players, cameras, etc., can be tailored to the specific needs of scientific computing. We project that a system capable of integrating a kilometer scale climate model a thousand times faster than real time could be designed and built in a five year time scale for US$75M with a power consumption of 3MW. This is cheaper, more power efficient and sooner than any other existing technology.

  5. Modeling of non-linear CHP efficiency curves in distributed energy systems

    DEFF Research Database (Denmark)

    Milan, Christian; Stadler, Michael; Cardoso, Gonçalo

    2015-01-01

    Distributed energy resources gain an increased importance in commercial and industrial building design. Combined heat and power (CHP) units are considered one of the key technologies for cost and emission reduction in buildings. In order to make optimal decisions on investment and operation...... for these technologies, detailed system models are needed. These models are often formulated as linear programming problems to keep computational costs and complexity in a reasonable range. However, CHP systems involve variations of the efficiency over large nameplate capacity ranges and in case of part-load operation......, which can even be non-linear in nature. Since considering these characteristics would turn the models into non-linear problems, in most cases only constant efficiencies are assumed. This paper proposes possible solutions to address this issue. For a mixed integer linear programming problem two
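
    The truncated record does not spell out the proposed solutions; one standard way to keep part-load behaviour inside a linear program is a piecewise-linear approximation of the fuel-versus-output curve (embedded via SOS2 or lambda variables). The sketch below shows only the approximation step on an invented efficiency curve, not the MILP formulation itself:

        import numpy as np

        # Invented part-load efficiency curve for a CHP unit (fraction of nameplate load
        # -> electrical efficiency); real curves would come from manufacturer data.
        def efficiency(load_frac):
            return 0.38 * (1.0 - np.exp(-4.0 * load_frac)) / (1.0 - np.exp(-4.0))

        # Fuel input as a function of electrical output for a 500 kW unit
        p_nom = 500.0
        loads = np.linspace(0.1, 1.0, 50)
        power = loads * p_nom
        fuel = power / efficiency(loads)

        # Piecewise-linear approximation with a handful of breakpoints, as would be
        # embedded in a MILP via SOS2/lambda variables.
        breakpoints = np.array([0.1, 0.3, 0.6, 1.0]) * p_nom
        fuel_bp = np.interp(breakpoints, power, fuel)
        fuel_pw = np.interp(power, breakpoints, fuel_bp)

        print("max relative error of the 4-point approximation:",
              np.max(np.abs(fuel_pw - fuel) / fuel))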

  6. Determining the Most Efficient Supplier Considering Imprecise Data in Data Envelopment Analysis (DEA), an extension of Toloo and Nalchigar's model

    Directory of Open Access Journals (Sweden)

    Morteza Rahmani

    2017-03-01

    Full Text Available Supplier selection in the supply chain, as a multi-criteria decision making problem (containing both qualitative and quantitative criteria), is one of the main factors in a successful supply chain. To this end, Toloo and Nalchigar (2011) proposed an integrated data envelopment analysis (DEA) model to find the most efficient (best) supplier by considering imprecise data. In this paper, it is shown that their model selects an efficient supplier as the most efficient at random, and therefore cannot find the most efficient supplier correctly. We also explain some other problems with this model and propose a modified model to resolve the drawbacks. The model proposed in this paper finds the most efficient supplier under imprecise data by solving only one mixed integer linear program. In addition, a new algorithm is proposed for determining and ranking the other efficient suppliers. The efficiency of the proposed approach is illustrated using imprecise data for 18 suppliers.
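
    The mixed-integer formulation of Toloo and Nalchigar and the modification proposed here are not reproduced in this record; for orientation, the sketch below solves the standard input-oriented CCR DEA envelopment LP for each supplier on invented crisp data:

        # Input-oriented CCR DEA efficiency of each supplier (illustrative data only;
        # this is the classical LP, not the mixed-integer model discussed above).
        import numpy as np
        from scipy.optimize import linprog

        X = np.array([[4.0, 2.0], [6.0, 3.0], [8.0, 5.0]])   # inputs, one row per supplier
        Y = np.array([[60.0], [90.0], [80.0]])               # outputs

        def ccr_efficiency(k):
            n = X.shape[0]
            # decision variables: [theta, lambda_1..lambda_n]; minimize theta
            c = np.r_[1.0, np.zeros(n)]
            # sum_j lambda_j * x_j <= theta * x_k   ->  -theta*x_k + X^T lambda <= 0
            A_in = np.hstack([-X[k].reshape(-1, 1), X.T])
            b_in = np.zeros(X.shape[1])
            # sum_j lambda_j * y_j >= y_k           ->  -Y^T lambda <= -y_k
            A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T])
            b_out = -Y[k]
            res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                          bounds=[(None, None)] + [(0, None)] * n)
            return res.x[0]

        for k in range(3):
            print(f"supplier {k}: efficiency = {ccr_efficiency(k):.3f}")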

  7. Photoacoustic imaging of intravenously injected photosensitizer in rat burn models for efficient antibacterial photodynamic therapy

    Science.gov (United States)

    Tsunoi, Yasuyuki; Sato, Shunichi; Ashida, Hiroshi; Terakawa, Mitsuhiro

    2012-02-01

    For efficient photodynamic treatment of wound infection, a photosensitizer must be distributed throughout the infected tissue region. To ensure this, depth profiling of the photosensitizer in vivo is necessary. In this study, we applied photoacoustic (PA) imaging to visualize the depth profile of an intravenously injected photosensitizer in rat burn models. In burned tissue, pharmacokinetics is complicated: vascular occlusion takes place in the injured tissue, while vascular permeability increases due to thermal injury. We first used Evans Blue (EB) as a test drug to examine the feasibility of photosensitizer dosimetry based on PA imaging; on the basis of the results, an actual photosensitizer, talaporfin sodium, was then used. An EB solution was intravenously injected into a rat deep dermal burn model, and PA imaging was performed on the wound with 532 nm and 610 nm nanosecond light pulses for visualizing vasculature (blood) and EB, respectively. Two hours after injection, the distribution of the EB-originated signal coincided spatially with that of the blood-originated signal measured after injury, indicating that EB molecules leaked out from the blood vessels due to increased permeability. Afterwards, the distribution of the EB signal broadened in the depth direction due to diffusion. At 12 hours after injection, clear EB signals were observed even in the zone of stasis, demonstrating that the leaked EB molecules were delivered to the injured tissue layer. The level and time course of the talaporfin sodium-originated signals differed from those of the EB-originated signals, showing animal-dependent and/or drug-dependent permeabilization and diffusion in the tissue. Thus, photosensitizer dosimetry is needed before every treatment to achieve a desirable outcome of photodynamic therapy, and PA imaging is a valid and useful tool for this purpose.

  8. Hybrid polylingual object model: an efficient and seamless integration of Java and native components on the Dalvik virtual machine.

    Science.gov (United States)

    Huang, Yukun; Chen, Rong; Wei, Jingbo; Pei, Xilong; Cao, Jing; Prakash Jayaraman, Prem; Ranjan, Rajiv

    2014-01-01

    JNI in the Android platform is often observed to have low efficiency and high coding complexity. Although many researchers have investigated the JNI mechanism, few of them solve the efficiency and complexity problems of JNI in the Android platform simultaneously. In this paper, a hybrid polylingual object (HPO) model is proposed to allow a CAR object to be accessed as a Java object, and vice versa, in the Dalvik virtual machine. It is an acceptable substitute for JNI to reuse CAR-compliant components in Android applications in a seamless and efficient way. A metadata injection mechanism is designed to support automatic mapping and reflection between CAR objects and Java objects. A prototype virtual machine, called HPO-Dalvik, is implemented by extending the Dalvik virtual machine to support the HPO model. Lifespan management, garbage collection, and data type transformation of HPO objects are also handled automatically in the HPO-Dalvik virtual machine. The experimental results show that the HPO model outperforms standard JNI, with lower overhead on the native side and better execution performance, since no JNI bridging code is required.

  9. An Efficient Data Compression Model Based on Spatial Clustering and Principal Component Analysis in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yihang Yin

    2015-08-01

    Full Text Available Wireless sensor networks (WSNs) have been widely used to monitor the environment, and sensors in WSNs are usually power constrained. Because inter-node communication consumes most of the power, efficient data compression schemes are needed to reduce data transmission and prolong the lifetime of WSNs. In this paper, we propose an efficient data compression model to aggregate data, based on spatial clustering and principal component analysis (PCA). First, sensors with a strong temporal-spatial correlation are grouped into one cluster for further processing using a novel similarity measure metric. Next, sensor data in one cluster are aggregated at the cluster head sensor node, and an efficient adaptive strategy is proposed for the selection of the cluster head to conserve energy. Finally, the proposed model applies principal component analysis with an error bound guarantee to compress the data while retaining a specified amount of variance. Computer simulations show that the proposed model can greatly reduce communication and obtain a lower mean square error than other PCA-based algorithms.
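
    As a minimal sketch of the PCA step only (clustering and cluster-head selection omitted), keeping as many principal components as needed to respect a variance-loss bound on synthetic, spatially correlated readings:

        import numpy as np

        rng = np.random.default_rng(42)
        # 20 sensors x 500 readings with strong spatial correlation (synthetic data)
        base = rng.normal(size=(1, 500))
        readings = base + 0.1 * rng.normal(size=(20, 500))

        mean = readings.mean(axis=1, keepdims=True)
        centered = readings - mean
        U, s, Vt = np.linalg.svd(centered, full_matrices=False)

        error_bound = 0.05                       # allowed fraction of lost variance
        var_ratio = np.cumsum(s**2) / np.sum(s**2)
        k = int(np.searchsorted(var_ratio, 1.0 - error_bound) + 1)

        compressed = U[:, :k].T @ centered       # k x 500 instead of 20 x 500
        recovered = U[:, :k] @ compressed + mean
        mse = np.mean((recovered - readings) ** 2)
        print(f"components kept: {k}, MSE: {mse:.4e}")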

  10. Modeling and operation optimization of a proton exchange membrane fuel cell system for maximum efficiency

    International Nuclear Information System (INIS)

    Han, In-Su; Park, Sang-Kyun; Chung, Chang-Bock

    2016-01-01

    Highlights: • A proton exchange membrane fuel cell system is operationally optimized. • A constrained optimization problem is formulated to maximize fuel cell efficiency. • Empirical and semi-empirical models for most system components are developed. • Sensitivity analysis is performed to elucidate the effects of major operating variables. • The optimization results are verified by comparison with actual operation data. - Abstract: This paper presents an operation optimization method and demonstrates its application to a proton exchange membrane fuel cell system. A constrained optimization problem was formulated to maximize the efficiency of a fuel cell system by incorporating practical models derived from actual operations of the system. Empirical and semi-empirical models for most of the system components were developed based on artificial neural networks and semi-empirical equations. Prior to system optimizations, the developed models were validated by comparing simulation results with the measured ones. Moreover, sensitivity analyses were performed to elucidate the effects of major operating variables on the system efficiency under practical operating constraints. Then, the optimal operating conditions were sought at various system power loads. The optimization results revealed that the efficiency gaps between the worst and best operation conditions of the system could reach 1.2–5.5% depending on the power output range. To verify the optimization results, the optimal operating conditions were applied to the fuel cell system, and the measured results were compared with the expected optimal values. The discrepancies between the measured and expected values were found to be trivial, indicating that the proposed operation optimization method was quite successful for a substantial increase in the efficiency of the fuel cell system.
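
    The paper's empirical and ANN-based component models are not reproduced in this record; the sketch below shows only the generic structure of maximizing a system-efficiency surrogate under operating-range constraints, with an invented two-variable surrogate:

        # Generic structure only: the efficiency surrogate below is invented and does
        # not reproduce the paper's empirical/ANN component models.
        import numpy as np
        from scipy.optimize import minimize

        P_LOAD = 30.0   # required net power, kW (hypothetical)

        def system_efficiency(x):
            """Toy surrogate: efficiency as a function of air stoichiometry and
            coolant flow, with a mild interior optimum."""
            stoich, coolant = x
            parasitic = 0.02 * stoich**2 + 0.01 * coolant**2        # blower/pump losses, kW
            stack_eff = 0.55 - 0.02 * (stoich - 2.0)**2 - 0.01 * (coolant - 3.0)**2
            return stack_eff - parasitic / P_LOAD

        res = minimize(lambda x: -system_efficiency(x),
                       x0=[2.0, 3.0],
                       bounds=[(1.5, 3.5), (1.0, 6.0)])   # practical operating limits
        print("optimal operating point:", res.x, "efficiency:", -res.fun)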

  11. Estimation of the efficiency of using satellite-derived and modelled biophysical products for yield forecasting

    Science.gov (United States)

    Kolotii, Andrii; Kussul, Nataliia; Skakun, Sergii; Shelestov, Andrii; Ostapenko, Vadim; Oliinyk, Tamara

    2015-04-01

    Efficient and timely crop monitoring and yield forecasting are important tasks for ensuring stability and sustainable economic development [1]. As winter crops play a prominent role in the agriculture of Ukraine, the main focus of this study is on winter wheat. In our previous research [2, 3] it was shown that the use of biophysical parameters of crops such as FAPAR (derived from the Geoland-2 portal for SPOT Vegetation data) is far more efficient for crop yield forecasting than NDVI derived from MODIS data, for the data available. In the current work, the efficiency of using such biophysical parameters as LAI, FAPAR and FCOVER (derived from SPOT Vegetation and PROBA-V data at a resolution of 1 km and simulated within the WOFOST model) and the NDVI product (derived from MODIS) for winter wheat monitoring and yield forecasting is estimated. As part of the crop monitoring workflow (vegetation anomaly detection, analysis of vegetation indexes and products) and yield forecasting, the SPIRITS tool developed by JRC is used. Statistics extraction is done for land-cover maps created at SRI within the FP-7 SIGMA project. The efficiency of using satellite-based and WOFOST-modelled biophysical products is estimated. [1] N. Kussul, S. Skakun, A. Shelestov, O. Kussul, "Sensor Web approach to Flood Monitoring and Risk Assessment", in: IGARSS 2013, 21-26 July 2013, Melbourne, Australia, pp. 815-818. [2] F. Kogan, N. Kussul, T. Adamenko, S. Skakun, O. Kravchenko, O. Kryvobok, A. Shelestov, A. Kolotii, O. Kussul, and A. Lavrenyuk, "Winter wheat yield forecasting in Ukraine based on Earth observation, meteorological data and biophysical models," International Journal of Applied Earth Observation and Geoinformation, vol. 23, pp. 192-203, 2013. [3] Kussul O., Kussul N., Skakun S., Kravchenko O., Shelestov A., Kolotii A., "Assessment of relative efficiency of using MODIS data to winter wheat yield forecasting in Ukraine", in: IGARSS 2013, 21-26 July 2013, Melbourne, Australia, pp. 3235-3238.

  12. Computationally Efficient Modelling of Dynamic Soil-Structure Interaction of Offshore Wind Turbines on Gravity Footings

    DEFF Research Database (Denmark)

    Damgaard, Mads; Andersen, Lars Vabbersgaard; Ibsen, Lars Bo

    2014-01-01

    The formulation and quality of a computationally efficient model of offshore wind turbine surface foundations is examined. The aim is to establish a model, workable in the frequency and time domain, that can be applied in aeroelastic codes for fast and reliable evaluation of the dynamic structural...... to wave propagating in the subsoil–even for soil stratifications with low cut-in frequencies. In this regard, utilising discrete second-order models for the physical interpretation of a rational filter puts special demands on the Newmark β-scheme, where the time integration in most cases only provides...

  13. A computationally efficient description of heterogeneous freezing: A simplified version of the Soccer ball model

    Science.gov (United States)

    Niedermeier, Dennis; Ervens, Barbara; Clauss, Tina; Voigtländer, Jens; Wex, Heike; Hartmann, Susan; Stratmann, Frank

    2014-01-01

    In a recent study, the Soccer ball model (SBM) was introduced for modeling and/or parameterizing heterogeneous ice nucleation processes. The model applies classical nucleation theory. It allows for a consistent description of both apparently singular and stochastic ice nucleation behavior, by distributing contact angles over the nucleation sites of a particle population assuming a Gaussian probability density function. The original SBM utilizes the Monte Carlo technique, which hampers its usage in atmospheric models, as fairly time-consuming calculations must be performed to obtain statistically significant results. Thus, we have developed a simplified and computationally more efficient version of the SBM. We successfully used the new SBM to parameterize experimental nucleation data of, e.g., bacterial ice nucleation. Both SBMs give identical results; however, the new model is computationally less expensive as confirmed by cloud parcel simulations. Therefore, it is a suitable tool for describing heterogeneous ice nucleation processes in atmospheric models.

  14. Spatial Heterodyne Observations of Water (SHOW) vapour in the upper troposphere and lower stratosphere from a high altitude aircraft: Modelling and sensitivity analysis

    Science.gov (United States)

    Langille, J. A.; Letros, D.; Zawada, D.; Bourassa, A.; Degenstein, D.; Solheim, B.

    2018-04-01

    A spatial heterodyne spectrometer (SHS) has been developed to measure the vertical distribution of water vapour in the upper troposphere and the lower stratosphere with a high vertical resolution (∼500 m). The Spatial Heterodyne Observations of Water (SHOW) instrument combines an imaging system with a monolithic field-widened SHS to observe limb scattered sunlight in a vibrational band of water (1363 nm-1366 nm). The instrument has been optimized for observations from NASA's ER-2 aircraft as a proof-of-concept for a future low earth orbit satellite deployment. A robust model has been developed to simulate SHOW ER-2 limb measurements and retrievals. This paper presents the simulation of the SHOW ER-2 limb measurements along a hypothetical flight track and examines the sensitivity of the measurement and retrieval approach. Water vapour fields from an Environment and Climate Change Canada forecast model are used to represent realistic spatial variability along the flight path. High spectral resolution limb scattered radiances are simulated using the SASKTRAN radiative transfer model. It is shown that the SHOW instrument onboard the ER-2 is capable of resolving the water vapour variability in the UTLS from approximately 12 km - 18 km with ±1 ppm accuracy. Vertical resolutions between 500 m and 1 km are feasible. The along track sampling capability of the instrument is also discussed.

  15. Function modeling improves the efficiency of spatial modeling using big data from remote sensing

    Science.gov (United States)

    John Hogland; Nathaniel Anderson

    2017-01-01

    Spatial modeling is an integral component of most geographic information systems (GISs). However, conventional GIS modeling techniques can require substantial processing time and storage space and have limited statistical and machine learning functionality. To address these limitations, many have parallelized spatial models using multiple coding libraries and have...

  16. An efficient soil water balance model based on hybrid numerical and statistical methods

    Science.gov (United States)

    Mao, Wei; Yang, Jinzhong; Zhu, Yan; Ye, Ming; Liu, Zhao; Wu, Jingwei

    2018-04-01

    Most soil water balance models only consider downward soil water movement driven by gravitational potential, and thus cannot simulate upward soil water movement driven by evapotranspiration especially in agricultural areas. In addition, the models cannot be used for simulating soil water movement in heterogeneous soils, and usually require many empirical parameters. To resolve these problems, this study derives a new one-dimensional water balance model for simulating both downward and upward soil water movement in heterogeneous unsaturated zones. The new model is based on a hybrid of numerical and statistical methods, and only requires four physical parameters. The model uses three governing equations to consider three terms that impact soil water movement, including the advective term driven by gravitational potential, the source/sink term driven by external forces (e.g., evapotranspiration), and the diffusive term driven by matric potential. The three governing equations are solved separately by using the hybrid numerical and statistical methods (e.g., linear regression method) that consider soil heterogeneity. The four soil hydraulic parameters required by the new models are as follows: saturated hydraulic conductivity, saturated water content, field capacity, and residual water content. The strength and weakness of the new model are evaluated by using two published studies, three hypothetical examples and a real-world application. The evaluation is performed by comparing the simulation results of the new model with corresponding results presented in the published studies, obtained using HYDRUS-1D and observation data. The evaluation indicates that the new model is accurate and efficient for simulating upward soil water flow in heterogeneous soils with complex boundary conditions. The new model is used for evaluating different drainage functions, and the square drainage function and the power drainage function are recommended. Computational efficiency of the new

  17. Novel, fast and efficient image-based 3D modeling method and its application in fracture risk evaluation.

    Science.gov (United States)

    Li, Dan; Xiao, Zhitao; Wang, Gang; Zhao, Guoqing

    2014-06-01

    Constructing models based on computed tomography images for finite element analysis (FEA) is challenging under pathological conditions. In the present study, an innovative method was introduced that uses Siemens syngo ® 3D software for processing models and Mimics software for further modeling. Compared with the slice-by-slice traditional manual margin discrimination, the new 3D modeling method utilizes automatic tissue margin determination and 3D cutting using syngo software. The modeling morphologies of the two methods were similar; however, the 3D modeling method was 8-10 times faster than the traditional method, particularly in cases with osteoporosis and osteophytes. A comparative FEA study of the lumbar spines of young and elderly patients, on the basis of the models constructed by the 3D modeling method, showed peak stress elevation in the vertebrae of elderly patients. Stress distribution was homogeneous in the entire vertebrae of young individuals. By contrast, stress redistribution in the vertebrae of the elderly was concentrated in the anterior cortex of the vertebrae, which explains the high fracture risk mechanism in elderly individuals. In summary, the new 3D modeling method is highly efficient, accurate and faster than traditional methods. The method also allows reliable FEA in pathological cases with osteoporosis and osteophytes.

  18. Modelling the effects of transport policy levers on fuel efficiency and national fuel consumption

    International Nuclear Information System (INIS)

    Kirby, H.R.; Hutton, B.; McQuaid, R.W.; Napier Univ., Edinburgh; Raeside, R.; Napier Univ., Edinburgh; Zhang, Xiayoan; Napier Univ., Edinburgh

    2000-01-01

    The paper provides an overview of the main features of a Vehicle Market Model (VMM) which estimates changes to vehicle stock/kilometrage, fuel consumed and CO2 emitted. It is disaggregated into four basic vehicle types. The model includes: the trends in fuel consumption of new cars, including the role of fuel price; a sub-model to estimate the fuel consumption of vehicles on roads characterised by user-defined driving cycle regimes; procedures that reflect the distribution of traffic across different area/road types; and the ability to vary the speed (or driving cycle) from one year to another, or as a result of traffic growth. The most significant variable influencing the fuel consumption of vehicles was consumption in the previous year, followed by dummy variables related to engine size, the time trend (a proxy for technological improvements), and then fuel price. Indeed, the effect of fuel price on car fuel efficiency was observed to be insignificant (at the 95% level) in two of the three versions of the model, and the size of the fuel price term was also the smallest. This suggests that the effectiveness of using fuel prices as a direct policy tool to reduce fuel consumption may be limited. Fuel prices may have significant indirect impacts (such as influencing people to purchase more fuel-efficient cars and vehicle manufacturers to invest in developing fuel-efficient technology), as may other factors such as the threat of legislation. (Author)
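
    The VMM regression itself is not reproduced in this record; as an illustration of the stated structure (lagged consumption, engine-size dummies, a time trend and fuel price as regressors) on synthetic data:

        # Illustrative regression with the same structure as described above (lagged
        # consumption, engine-size dummy, time trend, fuel price); data are synthetic.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(7)
        n = 200
        prev_consumption = rng.normal(8.0, 1.0, n)       # l/100km in previous year
        engine_large = rng.integers(0, 2, n)             # dummy for large engine size
        trend = rng.integers(0, 20, n)                   # years since base year
        fuel_price = rng.normal(1.0, 0.15, n)            # real fuel price index

        consumption = (0.85 * prev_consumption + 0.6 * engine_large
                       - 0.03 * trend - 0.2 * fuel_price + rng.normal(0, 0.2, n))

        X = sm.add_constant(np.column_stack([prev_consumption, engine_large,
                                             trend, fuel_price]))
        model = sm.OLS(consumption, X).fit()
        print(model.summary(xname=["const", "prev_consumption", "engine_large",
                                   "trend", "fuel_price"]))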

  19. Modeling Energy Efficiency As A Green Logistics Component In Vehicle Assembly Line

    Science.gov (United States)

    Oumer, Abduaziz; Mekbib Atnaw, Samson; Kie Cheng, Jack; Singh, Lakveer

    2016-11-01

    This paper uses System Dynamics (SD) simulation to investigate the concept of green logistics in terms of energy efficiency in the automotive industry. The car manufacturing industry is considered to be one of the highest energy consuming industries. An efficient decision-making model is proposed that captures the impacts of strategic decisions on energy consumption and environmental sustainability. The sources of energy considered in this research are electricity and fuel, which are the two main types of energy sources used in a typical vehicle assembly plant. The model depicts the performance measurement for process-specific energy measures of the painting, welding, and assembling processes. SD is the chosen simulation method, and the main green logistics issues considered are carbon dioxide (CO2) emission and energy utilization. The model will assist decision makers in acquiring an in-depth understanding of the relationship between high-level planning and low-level operational activities and their effects on production, environmental impacts and associated costs. The results of the SD model indicate the existence of positive trade-offs between green practices of energy efficiency and the reduction of CO2 emission.

  20. Improving the Efficiency of Medical Services Systems: A New Integrated Mathematical Modeling Approach

    Directory of Open Access Journals (Sweden)

    Davood Shishebori

    2013-01-01

    Full Text Available Nowadays, the efficient design of medical service systems plays a critical role in improving the performance and efficiency of medical services provided by governments. Accordingly, health care planners, especially in countries with a system based on a National Health Service (NHS), try to make decisions on where to locate and how to organize medical services regarding several conditions in different residential areas, so as to improve the geographic equity of access to medical services while accounting for efficiency and cost issues, especially in crucial situations. Optimally locating such services and suitably allocating demand to them can therefore help to enhance the performance and responsiveness of the medical services system. In this paper, a multiobjective mixed integer nonlinear programming model is proposed to decide the locations of new medical service centers, the link roads that should be constructed or improved, and the urban residential centers covered by these medical service centers and link roads, under an investment budget constraint, in order to both minimize the total transportation cost of the overall system and minimize the total failure cost (i.e., maximize the system reliability) of medical service centers under unforeseen situations. The proposed model is then linearized by suitable techniques. Moreover, a practical case study is presented in detail to illustrate the application of the proposed mathematical model. Finally, a sensitivity analysis is carried out to provide insight into the behavior of the proposed model in response to changes in the key parameters of the problem.

  1. 7Be and hydrological model for more efficient implementation of erosion control measure

    Science.gov (United States)

    Al-Barri, Bashar; Bode, Samuel; Blake, William; Ryken, Nick; Cornelis, Wim; Boeckx, Pascal

    2014-05-01

    Increased concern about the on-site and off-site impacts of soil erosion in agricultural and forested areas has spurred interest in innovative methods for assessing, in an unbiased way, spatial and temporal soil erosion rates and redistribution patterns, and hence in precisely estimating the magnitude of the problem so that erosion control measures (ECM) can be applied more efficiently. The latest generation of physically-based hydrological models, which fully couple overland flow and subsurface flow in three dimensions, permits implementing ECM at small and large scales more effectively if coupled with a sediment transport algorithm. While many studies have focused on integrating empirical or numerical models based on traditional erosion budget measurements into 3D hydrological models, few studies have evaluated the efficiency of ECM at the watershed scale, and very little attention has been given to the potential of environmental fallout radionuclides (FRNs) in such applications. The use of the FRN tracer 7Be in soil erosion/deposition research has proved to overcome many (if not all) of the problems associated with the conventional approaches, providing reliable data for efficient land use management. This poster will underline the pros and cons of using conventional methods and 7Be tracers to evaluate the efficiency of coconut dams installed as ECM in an experimental field in Belgium. It will also outline the potential of 7Be to provide valuable inputs for developing the numerical sediment transport algorithm needed for the hydrological model at the field scale, leading to an assessment of the possibility of using this short-lived tracer as a validation tool for the upgraded hydrological model at the watershed scale in further steps. Keywords: FRN, erosion control measures, hydrological models

  2. Measuring China’s regional energy and carbon emission efficiency with DEA models: A survey

    International Nuclear Information System (INIS)

    Meng, Fanyi; Su, Bin; Thomson, Elspeth; Zhou, Dequn; Zhou, P.

    2016-01-01

    Highlights: • China’s regional efficiency studies using data envelopment analysis are reviewed. • The main features of 46 studies published in 2006–2015 are summarized. • Six models are compared from the perspective of methodology and empirical results. • Empirical study of China’s 30 regional efficiency assessment in 1995–2012 is presented. - Abstract: The use of data envelopment analysis (DEA) in China’s regional energy efficiency and carbon emission efficiency (EE&CE) assessment has received increasing attention in recent years. This paper conducted a comprehensive survey of empirical studies published in 2006–2015 on China’s regional EE&CE assessment using DEA-type models. The main features used in previous studies were identified, and then the methodological framework for deriving the EE&CE indicators as well as six widely used DEA models were introduced. These DEA models were compared and applied to measure China’s regional EE&CE in 30 provinces/regions between 1995 and 2012. The empirical study indicates that China’s regional EE&CE remained stable in the 9th Five Year Plan (1996–2000), then decreased in the 10th Five Year Plan (2000–2005), and increased a bit in the 11th Five Year Plan (2006–2010). The east region of China had the highest EE&CE while the central area had the lowest. By way of conclusion, some useful points relating to model selection are summarized from both methodological and empirical aspects.

  3. Analytical Model of Waterflood Sweep Efficiency in Vertical Heterogeneous Reservoirs under Constant Pressure

    Directory of Open Access Journals (Sweden)

    Lisha Zhao

    2016-01-01

    Full Text Available An analytical model has been developed for the quantitative evaluation of vertical sweep efficiency in heterogeneous multilayer reservoirs. By applying the Buckley-Leverett displacement mechanism, a theoretical relationship is deduced to describe the dynamic changes of the waterflood front, the water saturation at the producing well, and the swept volume during waterflooding under the condition of constant pressure, which replaces the constant-rate condition used in the traditional approach. This method of calculating sweep efficiency is then extended from a single layer to multiple layers, so that it can be used to accurately calculate the sweep efficiency of heterogeneous reservoirs and evaluate the degree of waterflooding in multilayer reservoirs. In the case study, the water front position, water cut, volumetric sweep efficiency, and oil recovery are compared between commingled injection and zonal injection by applying the derived equations, and the results are verified with numerical simulators. It is shown that zonal injection performs better than commingled injection in terms of sweep efficiency and oil recovery, and has a longer period of water-free production.
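
    The derived relationships themselves are not reproduced in this record; the Buckley-Leverett relations they build on are, in standard notation and neglecting gravity and capillary pressure,

        \[ f_w(S_w) = \frac{1}{1 + \dfrac{k_{ro}(S_w)\,\mu_w}{k_{rw}(S_w)\,\mu_o}}, \qquad \left(\frac{dx}{dt}\right)_{S_w} = \frac{q_t}{\phi A}\,\frac{df_w}{dS_w}, \]

    where \(f_w\) is the water fractional flow, \(S_w\) the water saturation, \(k_{rw}\) and \(k_{ro}\) the relative permeabilities, \(\mu_w\) and \(\mu_o\) the water and oil viscosities, \(q_t\) the total injection rate, \(\phi\) the porosity, and \(A\) the cross-sectional area.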

  4. Efficiency Evaluation in Secondary Schools: The Key Role of Model Specification and of "Ex Post" Analysis of Results.

    Science.gov (United States)

    Mancebon, Maria-Jesus; Bandres, Eduardo

    1999-01-01

    Evaluates efficiency of a sample of Spanish secondary schools, focusing on the measurement model's theoretical specification and the "ex post" analysis of results. Highlights characteristics that differentiate the most efficient schools from the least efficient. Stresses the importance of employing information supplied by both…

  5. A Robust Model Predictive Control for efficient thermal management of internal combustion engines

    International Nuclear Information System (INIS)

    Pizzonia, Francesco; Castiglione, Teresa; Bova, Sergio

    2016-01-01

    Highlights: • A Robust Model Predictive Control for ICE thermal management was developed. • The proposed control is effective in decreasing the warm-up time. • The control system reduces coolant flow rate under fully warmed conditions. • The control strategy operates the cooling system around onset of nucleate boiling. • Little on-line computational effort is required. - Abstract: Optimal thermal management of modern internal combustion engines (ICE) is one of the key factors for reducing fuel consumption and CO 2 emissions. These are measured by using standardized driving cycles, like the New European Driving Cycle (NEDC), during which the engine does not reach thermal steady state; engine efficiency and emissions are therefore penalized. Several techniques for improving ICE thermal efficiency were proposed, which range from the use of empirical look-up tables to pulsed pump operation. A systematic approach to the problem is however still missing and this paper aims to bridge this gap. The paper proposes a Robust Model Predictive Control of the coolant flow rate, which makes use of a zero-dimensional model of the cooling system of an ICE. The control methodology incorporates explicitly the model uncertainties and achieves the synthesis of a state-feedback control law that minimizes the “worst case” objective function while taking into account the system constraints, as proposed by Kothare et al. (1996). The proposed control strategy is to adjust the coolant flow rate by means of an electric pump, in order to bring the cooling system to operate around the onset of nucleate boiling: across it during warm-up and above it (nucleate or saturated boiling) under fully warmed conditions. The computationally heavy optimization is carried out off-line, while during the operation of the engine the control parameters are simply picked-up on-line from look-up tables. Owing to the little computational effort required, the resulting control strategy is suitable for

  6. Stochastic frontier model approach for measuring stock market efficiency with different distributions.

    Science.gov (United States)

    Hasan, Md Zobaer; Kamil, Anton Abdulbasah; Mustafa, Adli; Baten, Md Azizul

    2012-01-01

    The stock market is considered essential for economic growth and is expected to contribute to improved productivity. An efficient pricing mechanism of the stock market can be a driving force for channeling savings into profitable investments and thus facilitating optimal allocation of capital. This study investigated the technical efficiency of selected groups of companies in the Bangladesh stock market, namely the Dhaka Stock Exchange (DSE), using the stochastic frontier production function approach. The authors considered a Cobb-Douglas stochastic frontier in which the technical inefficiency effects are defined by a model with two distributional assumptions: truncated normal and half-normal distributions were used, and both time-variant and time-invariant inefficiency effects were estimated. The results reveal that technical efficiency decreased gradually over the reference period and that the truncated normal distribution is preferable to the half-normal distribution for the technical inefficiency effects. In the time-varying setting, technical efficiency was high for the investment group and low for the bank group compared with the other groups in the DSE market for both distributions, whereas in the time-invariant setting it was high for the investment group but low for the ceramic group compared with the other groups for both distributions.
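
    For reference, the Cobb-Douglas stochastic frontier with the two inefficiency distributions mentioned above is usually written

        \[ \ln y_{it} = \beta_0 + \sum_j \beta_j \ln x_{jit} + v_{it} - u_{it}, \qquad v_{it} \sim N(0,\sigma_v^2), \quad u_{it} \ge 0, \]

    with \(u_{it}\) following a half-normal \(|N(0,\sigma_u^2)|\) or truncated-normal \(N^{+}(\mu,\sigma_u^2)\) distribution and technical efficiency measured as \(TE_{it} = \exp(-u_{it})\); the exact panel specification used in the study may differ in detail.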

  7. A validated age-related normative model for male total testosterone shows increasing variance but no decline after age 40 years.

    Science.gov (United States)

    Kelsey, Thomas W; Li, Lucy Q; Mitchell, Rod T; Whelan, Ashley; Anderson, Richard A; Wallace, W Hamish B

    2014-01-01

    The diagnosis of hypogonadism in human males includes identification of low serum testosterone levels, and hence there is an underlying assumption that normal ranges of testosterone for the healthy population are known for all ages. However, to our knowledge, no such reference model exists in the literature, and hence the availability of an applicable biochemical reference range would be helpful for the clinical assessment of hypogonadal men. In this study, using model selection and validation analysis of data identified and extracted from thirteen studies, we derive and validate a normative model of total testosterone across the lifespan in healthy men. We show that total testosterone peaks [mean (2.5-97.5 percentile)] at 15.4 (7.2-31.1) nmol/L at an average age of 19 years, and falls in the average case [mean (2.5-97.5 percentile)] to 13.0 (6.6-25.3) nmol/L by age 40 years, but we find no evidence for a further fall in mean total testosterone with increasing age through to old age. However we do show that there is an increased variation in total testosterone levels with advancing age after age 40 years. This model provides the age related reference ranges needed to support research and clinical decision making in males who have symptoms that may be due to hypogonadism.

  8. Efficient nonparametric and asymptotic Bayesian model selection methods for attributed graph clustering

    KAUST Repository

    Xu, Zhiqiang

    2017-02-16

    Attributed graph clustering, also known as community detection on attributed graphs, has attracted much interest recently due to the ubiquity of attributed graphs in real life. Many existing algorithms have been proposed for this problem, which are either distance based or model based. However, model selection in attributed graph clustering has not been well addressed; that is, most existing algorithms assume the cluster number to be known a priori. In this paper, we propose two efficient approaches for attributed graph clustering with automatic model selection. The first approach is a popular Bayesian nonparametric method, while the second approach is an asymptotic method based on a recently proposed model selection criterion, the factorized information criterion. Experimental results on both synthetic and real datasets demonstrate that our approaches for attributed graph clustering with automatic model selection significantly outperform the state-of-the-art algorithm.

  9. Modelling household responses to energy efficiency interventions via system dynamics and survey data

    Directory of Open Access Journals (Sweden)

    S Davis

    2010-12-01

    Full Text Available An application of building a system dynamics model of the way households might respond to interventions aimed at reducing energy consumption (specifically the use of electricity) is described in this paper. A literature review of past research is used to build an initial integrated model of household consumption, and this model is used to generate a small number of research hypotheses about how households possessing different characteristics might react to various types of interventions. These hypotheses are tested using data gathered from an efficiency intervention conducted in a town in the South African Western Cape in which households were able to exchange regular light bulbs for more efficient compact fluorescent lamp light bulbs. Our experiences are (a) that a system dynamics approach proved useful in advancing a non-traditional point of view for which, for historical and economic reasons, data were not abundantly available; (b) that, in areas where traditional models are heavily quantitative, some scepticism to a system dynamics model may be expected; and (c) that a statistical comparison of model results by means of empirical data may be an effective tool in reducing such scepticism.

  10. Efficient 3D frequency response modeling with spectral accuracy by the rapid expansion method

    KAUST Repository

    Chu, Chunlei

    2012-07-01

    Frequency responses of seismic wave propagation can be obtained either by directly solving the frequency domain wave equations or by transforming the time domain wavefields using the Fourier transform. The former approach requires solving systems of linear equations, which becomes progressively difficult to tackle for larger scale models and for higher frequency components. On the contrary, the latter approach can be efficiently implemented using explicit time integration methods in conjunction with running summations as the computation progresses. Commonly used explicit time integration methods correspond to the truncated Taylor series approximations that can cause significant errors for large time steps. The rapid expansion method (REM) uses the Chebyshev expansion and offers an optimal solution to the second-order-in-time wave equations. When applying the Fourier transform to the time domain wavefield solution computed by the REM, we can derive a frequency response modeling formula that has the same form as the original time domain REM equation but with different summation coefficients. In particular, the summation coefficients for the frequency response modeling formula corresponds to the Fourier transform of those for the time domain modeling equation. As a result, we can directly compute frequency responses from the Chebyshev expansion polynomials rather than the time domain wavefield snapshots as do other time domain frequency response modeling methods. When combined with the pseudospectral method in space, this new frequency response modeling method can produce spectrally accurate results with high efficiency. © 2012 Society of Exploration Geophysicists.

  11. A DC-DC Converter Efficiency Model for System Level Analysis in Ultra Low Power Applications

    Directory of Open Access Journals (Sweden)

    Benton H. Calhoun

    2013-06-01

    Full Text Available This paper presents a model of inductor based DC-DC converters that can be used to study the impact of power management techniques such as dynamic voltage and frequency scaling (DVFS). System level power models of low power systems on chip (SoCs) and power management strategies cannot be correctly established without accounting for the associated overhead related to the DC-DC converters that provide regulated power to the system. The proposed model accurately predicts the efficiency of inductor based DC-DC converters with varying topologies and control schemes across a range of output voltage and current loads. It also accounts for the energy and timing overhead associated with the change in the operating condition of the regulator. Since modern SoCs employ power management techniques that vary the voltage and current loads seen by the converter, accurate modeling of the impact on the converter efficiency becomes critical. We use this model to compute the overall cost of two power distribution strategies for a SoC with multiple voltage islands. The proposed model helps us to obtain the energy benefits of a power management technique and can also be used as a basis for comparison between power management techniques or as a tool for design space exploration early in a SoC design cycle.
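
    The loss terms of the paper's converter model are not reproduced in this record; as a much-simplified illustration of a loss-based efficiency model for a buck converter (component values and loss terms are invented):

        # Simplified loss-based efficiency sketch for a buck converter; loss terms and
        # component values are illustrative, not those of the paper's model.
        import numpy as np

        def buck_efficiency(v_in, v_out, i_out, f_sw=1e6,
                            r_ind=0.1, r_on=0.05, q_g=2e-9, v_drv=3.3):
            d = v_out / v_in                         # duty cycle (continuous conduction)
            p_out = v_out * i_out
            p_cond = i_out**2 * (r_ind + r_on)       # conduction losses
            p_sw = q_g * v_drv * f_sw                # gate-charge switching loss
            return p_out / (p_out + p_cond + p_sw)

        loads = np.logspace(-3, 0, 6)                # 1 mA to 1 A
        for i in loads:
            print(f"I_out = {i:8.4f} A  ->  efficiency = {buck_efficiency(3.7, 1.2, i):.3f}")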

  12. An efficient numerical progressive diagonalization scheme for the quantum Rabi model revisited

    International Nuclear Information System (INIS)

    Pan, Feng; Bao, Lina; Dai, Lianrong; Draayer, Jerry P

    2017-01-01

    An efficient numerical progressive diagonalization scheme for the quantum Rabi model is revisited. The advantage of the scheme lies in the fact that the quantum Rabi model can be solved almost exactly using a procedure that involves only a finite set of one-variable polynomial equations. The scheme is especially efficient for a specified eigenstate of the model, for example the ground state. Some low-lying level energies of the model are calculated for several sets of parameters, one of which is compared to the results obtained from Braak's exact solution proposed recently. It is shown that the derivative of the entanglement measure, defined in terms of the reduced von Neumann entropy, with respect to the coupling parameter does reach a maximum near the critical point deduced from the classical limit of the Dicke model, which may provide a probe of the critical point of the crossover in finite quantum many-body systems, such as the quantum Rabi model. (paper)
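
    The progressive diagonalization scheme itself is not reproduced in this record; as a baseline for comparison, the quantum Rabi Hamiltonian can be diagonalized by brute force in a truncated Fock basis (a standard reference calculation, not the paper's method):

        # Brute-force diagonalization of the quantum Rabi Hamiltonian
        #   H = w a†a + (Delta/2) sigma_z + g sigma_x (a + a†)
        # in a truncated Fock basis; a baseline to compare against, not the
        # progressive scheme of the paper.
        import numpy as np

        def rabi_levels(w=1.0, delta=1.0, g=0.5, n_max=60, n_levels=5):
            a = np.diag(np.sqrt(np.arange(1, n_max)), k=1)     # annihilation operator
            n_op = a.T @ a
            sz = np.diag([1.0, -1.0])
            sx = np.array([[0.0, 1.0], [1.0, 0.0]])
            i2, i_f = np.eye(2), np.eye(n_max)
            H = (w * np.kron(i2, n_op)
                 + 0.5 * delta * np.kron(sz, i_f)
                 + g * np.kron(sx, a + a.T))
            return np.linalg.eigvalsh(H)[:n_levels]

        print(rabi_levels())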

  13. Energy-Efficiency Retrofits in Small-Scale Multifamily Rental Housing: A Business Model

    Science.gov (United States)

    DeChambeau, Brian

    The goal of this thesis is to develop a real estate investment model that creates a financial incentive for property owners to perform energy efficiency retrofits in small multifamily rental housing in southern New England. The medium for this argument is a business plan that is backed by a review of the literature and input from industry experts. In addition to industry expertise, the research covers four main areas: the context of green building, efficient building technologies, precedent programs, and the Providence, RI real estate market for the business plan. The thesis concludes that the proposed model can improve the profitability of real estate investment in small multifamily rental properties, though the extent to which this is possible depends partially on utility-run incentive programs and the capital available to invest in retrofit measures.

  14. Energy efficiency and renewable energy modeling with ETSAP TIAM - challenges, opportunities, and solutions

    DEFF Research Database (Denmark)

    Gregg, Jay Sterling; Balyk, Olexandr; Pérez, Cristian Hernán Cabrera

    The objectives of the Sustainable Energy for All (SE4ALL), a United Nations (UN) global initiative, are to achieve, by 2030: 1) universal access to modern energy services; 2) a doubling of the global rate of improvement in energy efficiency; and 3) a doubling of the share of renewable energy...... in the global energy mix (United Nations, 2011; SE4ALL, 2013a). The purpose of this study is to determine to what extent the energy efficiency objective supports the other two objectives, and to what extent the SE4ALL objectives support the climate target of limiting the global mean temperature increase to 2° C...... over pre-industrial times. To accomplish this, pathways are constructed for each objective, which then form the basis for a scenario analysis using the Energy Technology System Analysis Program TIMES Integrated Assessment Model (ETSAP-TIAM). This presentation focuses on the modeling challenges...

  15. Herding, minority game, market clearing and efficient markets in a simple spin model framework

    Czech Academy of Sciences Publication Activity Database

    Krištoufek, Ladislav; Vošvrda, Miloslav

    2017-01-01

    Roč. 54, č. 1 (2017), s. 148-155 ISSN 1007-5704 R&D Projects: GA ČR(CZ) GBP402/12/G097 EU Projects: European Commission(XE) 612955 - FINMAP Institutional support: RVO:67985556 Keywords : Ising model * Efficient market hypothesis * Monte Carlo simulation Subject RIV: AH - Economics OBOR OECD: Applied Economics, Econometrics Impact factor: 2.784, year: 2016 http://library.utia.cas.cz/separaty/2017/E/kristoufek-0474986.pdf

  16. Probabilities and energies to obtain the counting efficiency of electron-capture nuclides, KLMN model

    International Nuclear Information System (INIS)

    Casas Galiano, G.; Grau Malonda, A.

    1994-01-01

    An intelligent computer program has been developed to obtain the mathematical formulae to compute the probabilities and reduced energies of the different atomic rearrangement pathways following electron-capture decay. Creation and annihilation operators for Auger and X processes have been introduced. Taking into account the symmetries associated with each process, 262 different pathways were obtained. This model allows us to obtain the influence of the M-electron-capture in the counting efficiency when the atomic number of the nuclide is high

  17. An Efficient Modeling and Simulation of Quantum Key Distribution Protocols Using OptiSystem™

    OpenAIRE

    Abudhahir Buhari,; Zuriati Ahmad Zukarnain; Shamla K. Subramaniam,; Hishamuddin Zainuddin; Suhairi Saharudin

    2012-01-01

    In this paper, we propose a modeling and simulation framework for quantum key distribution protocols using commercial photonic simulator OptiSystem™. This simulation fram