WorldWideScience

Sample records for modeling shows efficiencies

  1. Skeletal Muscle Differentiation on a Chip Shows Human Donor Mesoangioblasts' Efficiency in Restoring Dystrophin in a Duchenne Muscular Dystrophy Model.

    Science.gov (United States)

    Serena, Elena; Zatti, Susi; Zoso, Alice; Lo Verso, Francesca; Tedesco, F Saverio; Cossu, Giulio; Elvassore, Nicola

    2016-12-01

    Restoration of the protein dystrophin on the muscle membrane is the goal of many research lines aimed at curing Duchenne muscular dystrophy (DMD). Results of ongoing preclinical and clinical trials suggest that partial restoration of dystrophin might be sufficient to significantly reduce muscle damage. Different myogenic progenitors are candidates for cell therapy of muscular dystrophies, but only satellite cells and pericytes have already entered clinical experimentation. This study aimed to provide in vitro quantitative evidence of the ability of mesoangioblasts to restore dystrophin, in terms of protein accumulation and distribution, within myotubes derived from DMD patients, using a microengineered model. We designed an ad hoc experimental strategy to miniaturize on a chip the standard process of muscle regeneration, independent of variables such as inflammation and fibrosis. It is based on the coculture, at different ratios, of human dystrophin-positive myogenic progenitors and dystrophin-negative myoblasts on a substrate with muscle-like physiological stiffness and cell micropatterns. Results showed that both healthy myoblasts and mesoangioblasts restored dystrophin expression in DMD myotubes. However, mesoangioblasts showed unexpected efficiency with respect to myoblasts in dystrophin production, in terms of both the amount of protein produced (40% vs. 15%) and the length of the dystrophin membrane domain (210-240 µm vs. 40-70 µm). These results show that our microscaled in vitro model of human DMD skeletal muscle validated previous in vivo preclinical work and may be used to predict the efficacy of new methods aimed at enhancing dystrophin accumulation and distribution before they are tested in vivo, reducing the time, costs, and variability of clinical experimentation.

  2. Bioavailability of particulate metal to zebra mussels: Biodynamic modelling shows that assimilation efficiencies are site-specific

    Energy Technology Data Exchange (ETDEWEB)

    Bourgeault, Adeline, E-mail: bourgeault@ensil.unilim.fr [Cemagref, Unite de Recherche Hydrosystemes et Bioprocedes, 1 rue Pierre-Gilles de Gennes, 92761 Antony (France); FIRE, FR-3020, 4 place Jussieu, 75005 Paris (France); Gourlay-France, Catherine, E-mail: catherine.gourlay@cemagref.fr [Cemagref, Unite de Recherche Hydrosystemes et Bioprocedes, 1 rue Pierre-Gilles de Gennes, 92761 Antony (France); FIRE, FR-3020, 4 place Jussieu, 75005 Paris (France); Priadi, Cindy, E-mail: cindy.priadi@eng.ui.ac.id [LSCE/IPSL CEA-CNRS-UVSQ, Avenue de la Terrasse, 91198 Gif-sur-Yvette (France); Ayrault, Sophie, E-mail: Sophie.Ayrault@lsce.ipsl.fr [LSCE/IPSL CEA-CNRS-UVSQ, Avenue de la Terrasse, 91198 Gif-sur-Yvette (France); Tusseau-Vuillemin, Marie-Helene, E-mail: Marie-helene.tusseau@ifremer.fr [IFREMER Technopolis 40, 155 rue Jean-Jacques Rousseau, 92138 Issy-Les-Moulineaux (France)

    2011-12-15

    This study investigates the ability of the biodynamic model to predict the trophic bioaccumulation of cadmium (Cd), chromium (Cr), copper (Cu), nickel (Ni) and zinc (Zn) in a freshwater bivalve. Zebra mussels were transplanted to three sites along the Seine River (France) and collected monthly for 11 months. Measurements of the metal body burdens in mussels were compared with the predictions from the biodynamic model. The exchangeable fraction of metal particles did not account for the bioavailability of particulate metals, since it did not capture the differences between sites. The assimilation efficiency (AE) parameter is necessary to take into account biotic factors influencing particulate metal bioavailability. The biodynamic model, applied with AEs from the literature, overestimated the measured concentrations in zebra mussels, the extent of overestimation being site-specific. Therefore, an original methodology was proposed for in situ AE measurements for each site and metal. Highlights: The exchangeable fraction of metal particles did not account for the bioavailability of particulate metals. Site-specific biodynamic parameters are needed. Field-determined AEs provide a good fit between the biodynamic model predictions and bioaccumulation measurements. In summary, the interpretation of metal bioaccumulation in transplanted zebra mussels with biodynamic modelling highlights the need for site-specific assimilation efficiencies of particulate metals.

  3. Bioavailability of particulate metal to zebra mussels: biodynamic modelling shows that assimilation efficiencies are site-specific.

    Science.gov (United States)

    Bourgeault, Adeline; Gourlay-Francé, Catherine; Priadi, Cindy; Ayrault, Sophie; Tusseau-Vuillemin, Marie-Hélène

    2011-12-01

    This study investigates the ability of the biodynamic model to predict the trophic bioaccumulation of cadmium (Cd), chromium (Cr), copper (Cu), nickel (Ni) and zinc (Zn) in a freshwater bivalve. Zebra mussels were transplanted to three sites along the Seine River (France) and collected monthly for 11 months. Measurements of the metal body burdens in mussels were compared with the predictions from the biodynamic model. The exchangeable fraction of metal particles did not account for the bioavailability of particulate metals, since it did not capture the differences between sites. The assimilation efficiency (AE) parameter is necessary to take into account biotic factors influencing particulate metal bioavailability. The biodynamic model, applied with AEs from the literature, overestimated the measured concentrations in zebra mussels, the extent of overestimation being site-specific. Therefore, an original methodology was proposed for in situ AE measurements for each site and metal. Copyright © 2011 Elsevier Ltd. All rights reserved.
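    The biodynamic model underlying these two records has, in its standard first-order form, a compact mathematical core. The sketch below is an illustrative implementation of that general model, not the authors' parameterization; all rate constants in the usage example are invented for illustration.

```python
import math

def biodynamic_conc(t, ku, Cw, AE, IR, Cf, ke, C0=0.0):
    """Closed-form solution of the first-order biodynamic model
    dC/dt = ku*Cw + AE*IR*Cf - ke*C, giving the tissue metal
    concentration C(t) from aqueous uptake (ku*Cw), dietary uptake
    (AE*IR*Cf, where AE is the assimilation efficiency and IR the
    ingestion rate), and first-order elimination (ke*C)."""
    uptake = ku * Cw + AE * IR * Cf        # total uptake flux
    c_ss = uptake / ke                     # steady-state concentration
    return c_ss + (C0 - c_ss) * math.exp(-ke * t)
```

    Because the predicted concentration scales linearly with AE, a literature AE that is too high for a given site directly produces the kind of site-specific overestimation the study reports.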

  4. Novel H7N9 influenza virus shows low infectious dose, high growth rate, and efficient contact transmission in the guinea pig model.

    Science.gov (United States)

    Gabbard, Jon D; Dlugolenski, Daniel; Van Riel, Debby; Marshall, Nicolle; Galloway, Summer E; Howerth, Elizabeth W; Campbell, Patricia J; Jones, Cheryl; Johnson, Scott; Byrd-Leotis, Lauren; Steinhauer, David A; Kuiken, Thijs; Tompkins, S Mark; Tripp, Ralph; Lowen, Anice C; Steel, John

    2014-02-01

    The zoonotic outbreak of H7N9 subtype avian influenza virus that occurred in eastern China in the spring of 2013 resulted in 135 confirmed human cases, 44 of which were lethal. Sequencing of the viral genome revealed a number of molecular signatures associated with virulence or transmission in mammals. We report here that, in the guinea pig model, a human isolate of novel H7N9 influenza virus, A/Anhui/1/2013 (An/13), is highly dissimilar to an H7N1 avian isolate and instead behaves similarly to a human seasonal strain in several respects. An/13 was found to have a low 50% infectious dose, grow to high titers in the upper respiratory tract, and transmit efficiently among cocaged guinea pigs. The pH of fusion of the hemagglutinin (HA) and the binding of virus to fixed guinea pig tissues were also examined. The An/13 HA displayed a relatively elevated pH of fusion characteristic of many avian strains, and An/13 resembled avian viruses in terms of attachment to tissues. One important difference was seen between An/13 and both the H3N2 human and the H7N1 avian viruses: when inoculated intranasally at a high dose, only the An/13 virus led to productive infection of the lower respiratory tract of guinea pigs. In sum, An/13 was found to retain fusion and attachment properties of an avian influenza virus but displayed robust growth and contact transmission in the guinea pig model atypical of avian strains and indicative of mammalian adaptation.

  5. Novel H7N9 Influenza Virus Shows Low Infectious Dose, High Growth Rate, and Efficient Contact Transmission in the Guinea Pig Model

    Science.gov (United States)

    Gabbard, Jon D.; Dlugolenski, Daniel; Van Riel, Debby; Marshall, Nicolle; Galloway, Summer E.; Howerth, Elizabeth W.; Campbell, Patricia J.; Jones, Cheryl; Johnson, Scott; Byrd-Leotis, Lauren; Steinhauer, David A.; Kuiken, Thijs; Tompkins, S. Mark; Tripp, Ralph; Lowen, Anice C.

    2014-01-01

    The zoonotic outbreak of H7N9 subtype avian influenza virus that occurred in eastern China in the spring of 2013 resulted in 135 confirmed human cases, 44 of which were lethal. Sequencing of the viral genome revealed a number of molecular signatures associated with virulence or transmission in mammals. We report here that, in the guinea pig model, a human isolate of novel H7N9 influenza virus, A/Anhui/1/2013 (An/13), is highly dissimilar to an H7N1 avian isolate and instead behaves similarly to a human seasonal strain in several respects. An/13 was found to have a low 50% infectious dose, grow to high titers in the upper respiratory tract, and transmit efficiently among cocaged guinea pigs. The pH of fusion of the hemagglutinin (HA) and the binding of virus to fixed guinea pig tissues were also examined. The An/13 HA displayed a relatively elevated pH of fusion characteristic of many avian strains, and An/13 resembled avian viruses in terms of attachment to tissues. One important difference was seen between An/13 and both the H3N2 human and the H7N1 avian viruses: when inoculated intranasally at a high dose, only the An/13 virus led to productive infection of the lower respiratory tract of guinea pigs. In sum, An/13 was found to retain fusion and attachment properties of an avian influenza virus but displayed robust growth and contact transmission in the guinea pig model atypical of avian strains and indicative of mammalian adaptation. PMID:24227867

  6. An efficiency correction model

    NARCIS (Netherlands)

    Francke, M.K.; de Vos, A.F.

    2009-01-01

    We analyze a dataset containing costs and outputs of 67 American local exchange carriers in a period of 11 years. This data has been used to judge the efficiency of BT and KPN using static stochastic frontier models. We show that these models are dynamically misspecified. As an alternative we

  7. Duchenne muscular dystrophy models show their age

    OpenAIRE

    Chamberlain, Jeffrey S.

    2010-01-01

    The lack of appropriate animal models has hampered efforts to develop therapies for Duchenne muscular dystrophy (DMD). A new mouse model lacking both dystrophin and telomerase (Sacco et al., 2010) closely mimics the pathological progression of human DMD and shows that muscle stem cell activity is a key determinant of disease severity.

  8. Efficient polarimetric BRDF model.

    Science.gov (United States)

    Renhorn, Ingmar G E; Hallberg, Tomas; Boreman, Glenn D

    2015-11-30

    This paper presents a polarimetric bidirectional reflectance distribution function (BRDF) model suitable for hyperspectral and polarimetric signature modelling. The model is a further development of a previously published four-parameter model, generalized to account for different types of surface structures (generalized Gaussian distribution). A generalization of the Lambertian diffuse model is presented. The pBRDF functions are normalized using numerical integration. Using directional-hemispherical reflectance (DHR) measurements, three of the four basic parameters can be determined for any wavelength. This considerably simplifies the development of multispectral polarimetric BRDF applications. The scattering parameter has to be determined from at least one BRDF measurement. The model deals with linearly polarized radiation; as in, e.g., the facet model, depolarization is not included. The model is very general and can inherently represent extreme surfaces such as mirrors and Lambertian surfaces. The complex mixture of sources is described by the sum of two basic models, a generalized Gaussian/Fresnel model and a generalized Lambertian model. Although the physics-inspired model has some ad hoc features, its predictive power is impressive over a wide range of angles and scattering magnitudes. The model has been applied successfully to painted surfaces, both dull and glossy, and to metallic bead-blasted surfaces. This simple and efficient model should be attractive for polarimetric simulations and polarimetric remote sensing.

  9. Efficiency model of Russian banks

    OpenAIRE

    Pavlyuk, Dmitry

    2006-01-01

    The article deals with problems related to the stochastic frontier model of bank efficiency measurement. The model is used to study the efficiency of the banking sector of The Russian Federation. It is based on the stochastic approach both to the efficiency frontier location and to individual bank efficiency values. The model allows estimating bank efficiency values, finding relations with different macro- and microeconomic factors and testing some economic hypotheses.
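    Records 6 and 9 both rest on the stochastic frontier approach. As a hedged illustration (not either paper's specification), the classic normal/half-normal frontier log-likelihood of Aigner, Lovell and Schmidt can be written in a few lines; the variable names and the simulated data in the test are invented for the example.

```python
import numpy as np
from scipy.stats import norm

def sfa_loglik(params, X, y):
    """Log-likelihood of the normal/half-normal stochastic frontier
    y = X b + v - u, with noise v ~ N(0, sv^2) and one-sided
    inefficiency u ~ |N(0, su^2)|. Parameterized with
    sigma^2 = sv^2 + su^2 and lambda = su/sv (both log-transformed
    so the optimizer can work on an unconstrained scale)."""
    *beta, log_sigma, log_lam = params
    sigma, lam = np.exp(log_sigma), np.exp(log_lam)
    eps = y - X @ np.asarray(beta)           # composed error v - u
    return np.sum(np.log(2.0 / sigma)
                  + norm.logpdf(eps / sigma)
                  + norm.logcdf(-eps * lam / sigma))
```

    Maximizing this function (e.g. with `scipy.optimize.minimize` on its negative) yields the frontier parameters; bank- or carrier-specific efficiency scores are then derived from the estimated distribution of u given eps.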

  10. Time dependent patient no-show predictive modelling development.

    Science.gov (United States)

    Huang, Yu-Li; Hanauer, David A

    2016-05-09

    Purpose - The purpose of this paper is to develop evidence-based predictive no-show models that consider each of a patient's past appointment statuses, a time-dependent component, as an independent predictor to improve predictability. Design/methodology/approach - A ten-year retrospective data set was extracted from a pediatric clinic. It consisted of 7,291 distinct patients who had at least two visits, along with their appointment characteristics, patient demographics, and insurance information. Logistic regression was adopted to develop no-show models using two-thirds of the data for training and the remaining data for validation. The no-show threshold was then determined by minimizing the misclassification of show/no-show assignments. A total of 26 predictive models were developed based on the number of available past appointments. Simulation was employed to test the effectiveness of each model on the costs of patient wait time, physician idle time, and overtime. Findings - The results demonstrated that the misclassification rate and the area under the receiver operating characteristic curve gradually improved as more appointment history was included, until around the 20th predictive model. The overbooking method with no-show predictive models suggested incorporating up to the 16th model and outperformed other overbooking methods by as much as 9.4 per cent in the cost per patient while allowing two additional patients in a clinic day. Research limitations/implications - The challenge now is to actually implement the no-show predictive model systematically to further demonstrate its robustness and simplicity in various scheduling systems. Originality/value - This paper provides examples of how to build no-show predictive models with time-dependent components to improve the overbooking policy. Accurately identifying scheduled patients' show/no-show status allows clinics to proactively schedule patients to reduce the negative impact of patient no-shows.
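    The core idea of this record, treating each past appointment status as its own time-dependent predictor in a logistic regression, can be sketched as follows. This is an illustrative reconstruction on synthetic data, not the authors' code; the feature encoding, the window length k, and the simulated no-show process are all assumptions.

```python
import numpy as np

def history_features(statuses, k=3):
    """Intercept plus the k most recent appointment outcomes
    (1 = no-show, 0 = show), oldest first, zero-padded for
    patients with short histories."""
    recent = list(statuses)[-k:]
    padded = [0] * (k - len(recent)) + recent
    return np.array([1.0] + padded, dtype=float)

def fit_logistic(X, y, lr=0.5, n_iter=3000):
    """Plain batch gradient ascent on the logistic log-likelihood."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w += lr * X.T @ (y - p) / len(y)
    return w
```

    The paper's 26 models correspond to refitting with k = 1, 2, ..., 26 available past appointments and comparing misclassification rates on held-out data.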

  11. Modeling of venturi scrubber efficiency

    Science.gov (United States)

    Crowder, Jerry W.; Noll, Kenneth E.; Davis, Wayne T.

    The parameters affecting venturi scrubber performance have been rationally examined and modifications to the current modeling theory have been developed. The modified model has been validated with available experimental data for a range of throat gas velocities, liquid-to-gas ratios and particle diameters and is used to study the effect of some design parameters on collection efficiency. Most striking among the observations is the prediction of a new design parameter termed the minimum contactor length. Also noted is the prediction of little effect on collection efficiency with increasing liquid-to-gas ratio above about 2 ℓ m-3. Indeed, for some cases a decrease in collection efficiency is predicted for liquid rates above this value.

  12. Gut Transcriptome Analysis Shows Different Food Utilization Efficiency by the Grasshopper Oedaleous asiaticus (Orthoptera: Acrididae).

    Science.gov (United States)

    Huang, Xunbing; McNeill, Mark Richard; Ma, Jingchuan; Qin, Xinghu; Tu, Xiongbing; Cao, Guangchun; Wang, Guangjun; Nong, Xiangqun; Zhang, Zehua

    2017-08-01

    Oedaleus asiaticus B. Bienko is a persistent pest occurring in north Asian grasslands. We found that O. asiaticus feeding on Stipa krylovii Roshev. had higher approximate digestibility (AD), efficiency of conversion of ingested food (ECI), and efficiency of conversion of digested food (ECD), compared with cohorts feeding on Leymus chinensis (Trin.) Tzvel, Artemisia frigida Willd., or Cleistogenes squarrosa (Trin.) Keng. Although this indicated high food utilization efficiency for S. krylovii, the physiological processes and molecular mechanisms underlying these biological observations are not well understood. Transcriptome analysis was used to examine how gene expression levels in the O. asiaticus gut are altered by feeding on the four plant species. Nymphs (fifth-instar females) that fed on S. krylovii had the largest variation in gene expression profiles, with a total of 88 genes significantly upregulated compared with those feeding on the other three plants, mainly nutrient-digestion genes for protein, carbohydrate, and lipid digestion. GO and KEGG enrichment also showed that feeding on S. krylovii upregulated digestion-related molecular functions, biological processes, and pathways. These changes in transcript levels indicate that activation of digestive enzymes and metabolic pathways can well explain the high food utilization of S. krylovii by O. asiaticus. © The Authors 2017. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  13. Efficiently adapting graphical models for selectivity estimation

    DEFF Research Database (Denmark)

    Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.

    2013-01-01

    cardinality estimation without making the independence assumption. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution over all the attributes in the database into small, usually two-dimensional distributions, without a significant loss in estimation accuracy. We show how to efficiently construct such a graphical model from the database using only two-way join queries, and we show how to perform selectivity estimation in a highly efficient manner. We integrate our algorithms into the PostgreSQL DBMS. Experimental results indicate...
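    The factorization idea in this record, replacing one high-dimensional joint distribution with a chain of small two-way distributions, can be illustrated directly. The example below is a minimal sketch under an assumed chain A-B-C (C conditionally independent of A given B), operating on boolean predicate columns rather than a real DBMS.

```python
import numpy as np

def chain_selectivity(a, b, c):
    """Selectivity of the conjunction (a AND b AND c) estimated from
    two-way statistics only, via the chain factorization
    P(a, b, c) ~= P(a, b) * P(c | b). Exact when c is conditionally
    independent of a given b; otherwise an approximation."""
    p_ab = np.mean(a & b)          # two-way statistic on (A, B)
    p_b = np.mean(b)
    p_bc = np.mean(b & c)          # two-way statistic on (B, C)
    return p_ab * (p_bc / p_b)     # P(a,b) * P(c|b)
```

    The appeal is that the two-way distributions are cheap to build and store, while the full joint over all attributes is not.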

  14. Efficiency of economic development models

    Directory of Open Access Journals (Sweden)

    Oana Camelia Iacob

    2013-03-01

    The world economy is becoming increasingly integrated. The integration of emerging Asian economies such as China and India increases competition on the world stage, putting pressure on the "actors" already present. These developments have raised questions about the effectiveness of the European development model, which focuses on a high level of equity, insurance and social protection. According to analysts, the world today faces three models of economic development with significant weight: the European, the American and the Asian. This study focuses on analyzing the European development model, with a brief comparison to that of the United States. In addition, it aims to highlight the relationship between efficiency and social equity in each submodel of the European model, given that social and economic performance across the EU is not homogeneous. To achieve this, different indicators of social equity and efficiency are analyzed in order to observe the performance of each submodel individually. The article analyzes the data to determine submodel performance in terms of social equity and economic efficiency.

  15. Model shows future cut in U.S. ozone levels

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    A joint U.S. auto-oil industry research program says modeling shows that changing gasoline composition can reduce ozone levels for Los Angeles in 2010 and for New York City and Dallas-Fort Worth in 2005. The air quality modeling was based on vehicle emissions research data released late last year (OGJ, Dec. 24, 1990, p. 20). The effort is sponsored by the big three auto manufacturers and 14 oil companies. Sponsors say cars and small trucks account for about one third of the ozone generated in the three cities studied but by 2005-10 will account for only 5-9%.

  16. Flight Test Maneuvers for Efficient Aerodynamic Modeling

    Science.gov (United States)

    Morelli, Eugene A.

    2011-01-01

    Novel flight test maneuvers for efficient aerodynamic modeling were developed and demonstrated in flight. Orthogonal optimized multi-sine inputs were applied to aircraft control surfaces to excite aircraft dynamic response in all six degrees of freedom simultaneously while keeping the aircraft close to chosen reference flight conditions. Each maneuver was designed for a specific modeling task that cannot be adequately or efficiently accomplished using conventional flight test maneuvers. All of the new maneuvers were first described and explained, then demonstrated on a subscale jet transport aircraft in flight. Real-time and post-flight modeling results obtained using equation-error parameter estimation in the frequency domain were used to show the effectiveness and efficiency of the new maneuvers, as well as the quality of the aerodynamic models that can be identified from the resultant flight data.
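    The orthogonal multi-sine idea in this record has a compact core: give each control surface a sum of sinusoids at disjoint integer harmonics of one common base period, so the excitation signals are mutually orthogonal over that period. A minimal sketch follows; the harmonic assignments and sample counts are illustrative, and the phase optimization used in practice to minimize peak amplitude is omitted.

```python
import numpy as np

def multisine(harmonics, period, t, phases=None):
    """Sum of unit-amplitude sines at the given integer harmonics of
    `period`, evaluated at times `t`. Assigning disjoint harmonic
    sets to different control surfaces makes the inputs mutually
    orthogonal over one base period, so all surfaces can be excited
    simultaneously without confounding the parameter estimates."""
    if phases is None:
        phases = np.zeros(len(harmonics))
    return sum(np.sin(2.0 * np.pi * k * t / period + p)
               for k, p in zip(harmonics, phases))
```

    Orthogonality is what lets equation-error estimation in the frequency domain separate the six-degree-of-freedom responses from a single maneuver.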

  17. System with embedded drug release and nanoparticle degradation sensor showing efficient rifampicin delivery into macrophages.

    Science.gov (United States)

    Trousil, Jiří; Filippov, Sergey K; Hrubý, Martin; Mazel, Tomáš; Syrová, Zdeňka; Cmarko, Dušan; Svidenská, Silvie; Matějková, Jana; Kováčik, Lubomír; Porsch, Bedřich; Konefał, Rafał; Lund, Reidar; Nyström, Bo; Raška, Ivan; Štěpánek, Petr

    2017-01-01

    We have developed a biodegradable, biocompatible system for the delivery of the antituberculotic antibiotic rifampicin with a built-in drug release and nanoparticle degradation fluorescence sensor. Polymer nanoparticles based on poly(ethylene oxide) monomethyl ether-block-poly(ε-caprolactone) were noncovalently loaded with rifampicin, a combination that, to the best of our knowledge, was not previously described in the literature and which showed significant benefits. The nanoparticles contain a Förster resonance energy transfer (FRET) system that allows real-time assessment of drug release not only in vitro, but also in living macrophages, where the mycobacteria typically reside as hard-to-kill intracellular parasites. The fluorophore also enables in situ monitoring of the enzymatic nanoparticle degradation in the macrophages. We show that the nanoparticles are efficiently taken up by macrophages, where they are very quickly associated with the lysosomal compartment. After drug release, the nanoparticles in the macrophages are enzymatically degraded, with a half-life of 88±11 min. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Showing that the race model inequality is not violated

    DEFF Research Database (Denmark)

    Gondan, Matthias; Riehl, Verena; Blurton, Steven Paul

    2012-01-01

    important being race models and coactivation models. Redundancy gains consistent with the race model have an upper limit, however, which is given by the well-known race model inequality (Miller, 1982). A number of statistical tests have been proposed for testing the race model inequality in single participants and in groups of participants. All of these tests use the race model as the null hypothesis, and rejection of the null hypothesis is considered evidence in favor of coactivation. We introduce a statistical test in which the race model prediction is the alternative hypothesis. This test controls...
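    Miller's race model inequality, F_AB(t) <= F_A(t) + F_B(t), is straightforward to evaluate on empirical CDFs. The sketch below (synthetic reaction times, invented function names) computes the maximum positive deviation on a time grid; the record's actual statistical test, which takes the inequality as the alternative hypothesis, is not reproduced here.

```python
import numpy as np

def ecdf(sample, t):
    """Empirical CDF of `sample` evaluated at the time points `t`."""
    return np.searchsorted(np.sort(sample), t, side="right") / len(sample)

def race_violation(rt_redundant, rt_a, rt_b, t_grid):
    """Largest positive deviation from Miller's race model inequality
    F_AB(t) <= F_A(t) + F_B(t) over `t_grid`; 0.0 means the
    inequality holds everywhere on the grid."""
    d = ecdf(rt_redundant, t_grid) - (ecdf(rt_a, t_grid) + ecdf(rt_b, t_grid))
    return float(max(0.0, d.max()))
```

    In practice the deviation is evaluated at sample quantiles and its sampling variability is what the competing statistical tests quantify.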

  19. Magnetism reflectometer study shows LiF layers improve efficiency in spin valve devices

    Energy Technology Data Exchange (ETDEWEB)

    Bardoel, Agatha A [ORNL; Lauter, Valeria [ORNL; Szulczewski, Greg J [ORNL

    2012-01-01

    magnetic layer through the organic semiconductor in the spin valve and enhancing the overall properties of the system. In related work the magnetic properties of the cobalt film and the permalloy Ni{sub 80}Fe{sub 20} were characterized. Cobalt in particular needed attention, as it cannot be grown epitaxially (i.e., deposited) on an organic semiconductor film. Cobalt becomes polycrystalline or amorphous, and this affects its magnetic properties. The data from the first experiment showed that the cobalt layer in the system 'did not have typical magnetic properties,' Lauter said. 'The results showed that the cobalt had low magnetization. To improve the efficiency, the cobalt magnetization should be much higher. So this experiment helped us to improve the growth conditions and to get a cobalt layer with better magnetic properties.' In a subsequent experiment the researchers increased the magnetization of the cobalt, and a follow-up paper is in progress.

  20. Electrotransfection and lipofection show comparable efficiency for in vitro gene delivery of primary human myoblasts.

    Science.gov (United States)

    Mars, Tomaz; Strazisar, Marusa; Mis, Katarina; Kotnik, Nejc; Pegan, Katarina; Lojk, Jasna; Grubic, Zoran; Pavlin, Mojca

    2015-04-01

    Transfection of primary human myoblasts offers the possibility to study mechanisms that are important for muscle regeneration and gene therapy of muscle disease. Cultured human myoblasts were selected here because muscle cells still proliferate at this developmental stage, which might have several advantages in gene therapy. Gene therapy is one of the most sought-after tools in modern medicine. Its progress is, however, limited due to the lack of suitable gene transfer techniques. To obtain better insight into the transfection potential of the presently used techniques, two non-viral transfection methods--lipofection and electroporation--were compared. The parameters that can influence transfection efficiency and cell viability were systematically approached and compared. Cultured myoblasts were transfected with the pEGFP-N1 plasmid either using Lipofectamine 2000 or with electroporation. Various combinations for the preparation of the lipoplexes and the electroporation media, and for the pulsing protocols, were tested and compared. Transfection efficiency and cell viability were inversely proportional for both approaches. The appropriate ratio of Lipofectamine and plasmid DNA provides optimal conditions for lipofection, while for electroporation, RPMI medium and a pulsing protocol using eight pulses of 2 ms at E = 0.8 kV/cm proved to be the optimal combination. The transfection efficiencies for the optimal lipofection and optimal electrotransfection protocols were similar (32 vs. 32.5%, respectively). Both of these methods are effective for transfection of primary human myoblasts; however, electroporation might be advantageous for in vivo application to skeletal muscle.

  1. Modelling water uptake efficiency of root systems

    Science.gov (United States)

    Leitner, Daniel; Tron, Stefania; Schröder, Natalie; Bodner, Gernot; Javaux, Mathieu; Vanderborght, Jan; Vereecken, Harry; Schnepf, Andrea

    2016-04-01

    Water uptake is crucial for plant productivity. Trait-based breeding for more water-efficient crops will enable sustainable agricultural management under specific pedoclimatic conditions and can increase the drought resistance of plants. Mathematical modelling can be used to find suitable root system traits for better water uptake efficiency, defined as the amount of water taken up per unit of root biomass. This approach requires long simulation times and a large number of simulation runs, since we test different root systems under different pedoclimatic conditions. In this work, we model water movement by the 1-dimensional Richards equation with the soil hydraulic properties described according to the van Genuchten model. Climatic conditions serve as the upper boundary condition. The root system grows during the simulation period and water uptake is calculated via a sink term (after Tron et al. 2015). The goal of this work is to compare different free software tools based on different numerical schemes to solve the model. We compare implementations using DUMUX (based on finite volumes), Hydrus 1D (based on finite elements), and a Matlab implementation of Van Dam & Feddes (2000) (based on finite differences). We analyse the methods for accuracy, speed and flexibility. Using this model case study, we can clearly show the impact of various root system traits on water uptake efficiency. Furthermore, we can quantify frequent simplifications that are introduced in the modelling step, such as considering a static root system instead of a growing one, or considering a sink term based on root density instead of the full root hydraulic model (Javaux et al. 2008). References Tron, S., Bodner, G., Laio, F., Ridolfi, L., & Leitner, D. (2015). Can diversity in root architecture explain plant water use efficiency? A modeling study. Ecological modelling, 312, 200-210. Van Dam, J. C., & Feddes, R. A. (2000). Numerical simulation of infiltration, evaporation and shallow
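    The van Genuchten closure mentioned in the abstract is compact enough to show. The functions below are the standard van Genuchten retention curve and the Mualem-van Genuchten conductivity; the parameter values in the test are generic loam-like textbook values, not from this study.

```python
import numpy as np

def vg_theta(h, theta_r, theta_s, alpha, n):
    """van Genuchten water retention: volumetric water content
    theta(h) for pressure head h (h < 0 means unsaturated soil)."""
    m = 1.0 - 1.0 / n
    Se = np.where(h < 0, (1.0 + (alpha * np.abs(h)) ** n) ** (-m), 1.0)
    return theta_r + (theta_s - theta_r) * Se

def vg_K(h, Ks, alpha, n, l=0.5):
    """Mualem-van Genuchten unsaturated hydraulic conductivity K(h),
    with pore-connectivity parameter l (commonly 0.5)."""
    m = 1.0 - 1.0 / n
    Se = np.where(h < 0, (1.0 + (alpha * np.abs(h)) ** n) ** (-m), 1.0)
    return Ks * Se ** l * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2
```

    These two functions supply the theta(h) and K(h) relations that every solver compared in the abstract (DUMUX, Hydrus 1D, the Matlab code) plugs into the Richards equation.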

  2. Finishing pigs that are divergent in feed efficiency show small differences in intestinal functionality and structure.

    Directory of Open Access Journals (Sweden)

    Barbara U Metzler-Zebeli

    Controversial information is available regarding the feed-efficiency-related variation in intestinal size, structure and functionality in pigs. The present objective was therefore to investigate the differences in visceral organ size, intestinal morphology, mucosal enzyme activity, intestinal integrity and related gene expression in pigs of low and high residual feed intake (RFI) which were reared at three different geographical locations (Austria, AT; Northern Ireland, NI; Republic of Ireland, ROI) using similar protocols. Pigs (n = 369) were ranked for their RFI between days 42 and 91 postweaning, and low and high RFI pigs (n = 16 from AT, n = 24 from NI, and n = 60 from ROI) were selected. Pigs were sacrificed and sampled on ~day 110 of life. In general, RFI-related variation in intestinal size, structure and function was small. Some energy-saving mechanisms and enhanced digestive and absorptive capacity were indicated in low versus high RFI pigs by shorter crypts, higher duodenal lactase and maltase activity and greater mucosal permeability (P < 0.05), but differences were mainly seen in pigs from AT and to a lesser degree in pigs from ROI. Additionally, low RFI pigs from AT had more goblet cells in the duodenum but fewer in the jejunum compared to high RFI pigs (P < 0.05). Together with the lower expression of TLR4 and TNFA in low versus high RFI pigs from AT and ROI (P < 0.05), these results might indicate differences in the innate immune response between low and high RFI pigs. Results demonstrated that the variation in the size of visceral organs and in intestinal structure and functionality was greater between geographic locations (local environmental factors) than between RFI ranks of pigs. In conclusion, the present results support previous findings that intestinal size, structure and functionality do not significantly contribute to variation in the RFI of pigs.

  3. Ob/ob mouse livers show decreased oxidative phosphorylation efficiencies and anaerobic capacities after cold ischemia.

    Directory of Open Access Journals (Sweden)

    Michael J J Chu

    Full Text Available BACKGROUND: Hepatic steatosis is a major risk factor for graft failure in liver transplantation. Hepatic steatosis shows a greater negative influence on graft function following prolonged cold ischaemia. As the impact of steatosis on hepatocyte metabolism during extended cold ischaemia is not well described, we compared markers of metabolic capacity and mitochondrial function in steatotic and lean livers following clinically relevant durations of cold preservation. METHODS: Livers from 10-week-old leptin-deficient obese (ob/ob, n = 9) and lean C57 mice (n = 9) were preserved in ice-cold University of Wisconsin solution. Liver mitochondrial function was then assessed using high resolution respirometry after 1.5, 3, 5, 8, 12, 16 and 24 hours of storage. Metabolic marker enzymes for anaerobiosis and mitochondrial mass were also measured in conjunction with non-bicarbonate tissue pH buffering capacity. RESULTS: Ob/ob and lean mice livers showed severe (>60%) macrovesicular and mild (<30%) microvesicular steatosis on Oil Red O staining, respectively. Ob/ob livers had lower baseline enzymatic complex I activity but similar adenosine triphosphate (ATP) levels compared to lean livers. During cold storage, the respiratory control ratio and complex I-fueled phosphorylation deteriorated approximately twice as fast in ob/ob livers as in lean livers. Ob/ob livers also demonstrated decreased ATP production capacities at all time-points analyzed compared to lean livers. Ob/ob liver baseline lactate dehydrogenase activities and intrinsic non-bicarbonate buffering capacities were depressed by 60% and 40%, respectively, compared to lean livers. CONCLUSIONS: Steatotic livers have impaired baseline aerobic and anaerobic capacities compared to lean livers, and mitochondrial function indices decline markedly after 5 hours of cold preservation. These data provide a mechanistic basis for the clinical recommendation of shorter cold storage durations in

  4. Chiral crystal of a C2v-symmetric 1,3-diazaazulene derivative showing efficient optical second harmonic generation

    KAUST Repository

    Ma, Xiaohua; Fu, Limin; Zhao, Yunfeng; Ai, Xicheng; Zhang, Jianping; Han, Yu; Guo, Zhixin

    2011-01-01

    Despite the moderate static first hyperpolarizabilities (β0) for both APNA [(136 ± 5) × 10⁻³⁰ esu] and DPAPNA [(263 ± 20) × 10⁻³⁰ esu], only the APNA crystal shows a powder efficiency of second harmonic generation (SHG) of 23 times that of urea. It is shown

  5. Efficient estimation of semiparametric copula models for bivariate survival data

    KAUST Repository

    Cheng, Guang

    2014-01-01

    A semiparametric copula model for bivariate survival data is characterized by a parametric copula model of dependence and nonparametric models of two marginal survival functions. Efficient estimation for the semiparametric copula model has been recently studied for the complete data case. When the survival data are censored, semiparametric efficient estimation has only been considered for some specific copula models such as the Gaussian copulas. In this paper, we obtain the semiparametric efficiency bound and efficient estimation for general semiparametric copula models for possibly censored data. We construct an approximate maximum likelihood estimator by approximating the log baseline hazard functions with spline functions. We show that our estimates of the copula dependence parameter and the survival functions are asymptotically normal and efficient. Simple consistent covariance estimators are also provided. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2013 Elsevier Inc.
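
    The dependence component of such a semiparametric model is a parametric copula. As an illustrative sketch (the Clayton family is one common choice; the paper treats general copulas), the copula function and its closed-form Kendall's tau can be written directly:

```python
# Illustrative sketch, not the paper's estimator: the Clayton copula, a
# parametric dependence model usable with nonparametric survival marginals.

def clayton_copula(u, v, theta):
    """Clayton copula C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta), theta > 0."""
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

def clayton_kendall_tau(theta):
    """For the Clayton family, Kendall's tau is theta / (theta + 2)."""
    return theta / (theta + 2.0)

if __name__ == "__main__":
    print(round(clayton_copula(0.5, 0.5, theta=2.0), 4))
    print(clayton_kendall_tau(2.0))
```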

  6. Multidimensional Balanced Efficiency Decision Model

    Directory of Open Access Journals (Sweden)

    Antonella Petrillo

    2015-10-01

    Full Text Available In this paper a multicriteria methodological approach, based on the Balanced Scorecard (BSC) and Analytic Network Process (ANP), is proposed to evaluate competitiveness performance in the luxury sector. A set of specific key performance indicators (KPIs) has been proposed. The contribution of our paper is to present the integration of two methodologies: BSC, a multiple-perspective framework for performance assessment, and ANP, a decision-making tool used to prioritize multiple performance perspectives and indicators and to generate a unified metric that incorporates diversified issues for conducting supply chain improvements. The BSC/ANP model is used to prioritize all performances within a luxury industry. A real case study is presented.
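
    The priority-setting step that ANP shares with AHP can be sketched in a few lines: weights for competing criteria are taken from the principal eigenvector of a pairwise comparison matrix. The 3-criteria matrix below is a made-up example, not data from the paper.

```python
# Hedged sketch of AHP/ANP priority derivation via power iteration on a
# positive reciprocal pairwise comparison matrix (hypothetical values).

def priority_vector(matrix, iters=100):
    """Approximate the principal eigenvector, normalized to sum to 1."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        w_new = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w_new)
        w = [x / s for x in w_new]
    return w

if __name__ == "__main__":
    # Criterion A judged 3x as important as B and 5x as important as C:
    pairwise = [
        [1.0, 3.0, 5.0],
        [1 / 3.0, 1.0, 2.0],
        [1 / 5.0, 1 / 2.0, 1.0],
    ]
    print([round(w, 3) for w in priority_vector(pairwise)])
```

    In a full BSC/ANP model, one such priority vector would be computed per comparison cluster and combined across the network.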

  7. BOREAS TE-17 Production Efficiency Model Images

    Data.gov (United States)

    National Aeronautics and Space Administration — A BOREAS version of the Global Production Efficiency Model(www.inform.umd.edu/glopem) was developed by TE-17 to generate maps of gross and net primary production,...

  8. Statistical modelling for ship propulsion efficiency

    DEFF Research Database (Denmark)

    Petersen, Jóan Petur; Jacobsen, Daniel J.; Winther, Ole

    2012-01-01

    This paper presents a state-of-the-art systems approach to statistical modelling of fuel efficiency in ship propulsion, and also a novel and publicly available data set of high quality sensory data. Two statistical model approaches are investigated and compared: artificial neural networks...

  9. Efficient Modelling and Generation of Markov Automata

    NARCIS (Netherlands)

    Koutny, M.; Timmer, Mark; Ulidowski, I.; Katoen, Joost P.; van de Pol, Jan Cornelis; Stoelinga, Mariëlle Ida Antoinette

    This paper introduces a framework for the efficient modelling and generation of Markov automata. It consists of (1) the data-rich process-algebraic language MAPA, allowing concise modelling of systems with nondeterminism, probability and Markovian timing; (2) a restricted form of the language, the

  10. Geometrical efficiency in computerized tomography: generalized model

    International Nuclear Information System (INIS)

    Costa, P.R.; Robilotta, C.C.

    1992-01-01

    A simplified model for producing sensitivity and exposure profiles in computerized tomographic systems was recently developed, allowing the forecast of profile behaviour at the rotation center of the system. The generalization of this model to any point of the image plane is described, and the geometrical efficiency can be evaluated. (C.G.C.)

  11. Validation of RNAi Silencing Efficiency Using Gene Array Data shows 18.5% Failure Rate across 429 Independent Experiments

    Directory of Open Access Journals (Sweden)

    Gyöngyi Munkácsy

    2016-01-01

    Full Text Available No independent cross-validation of success rate for studies utilizing small interfering RNA (siRNA) for gene silencing has been completed before. To assess the influence of experimental parameters like cell line, transfection technique, validation method, and type of control, we have to validate these in a large set of studies. We utilized gene chip data published for siRNA experiments to assess success rate and to compare methods used in these experiments. We searched NCBI GEO for samples with whole transcriptome analysis before and after gene silencing and evaluated the efficiency for the target and off-target genes using the array-based expression data. The Wilcoxon signed-rank test was used to assess silencing efficacy, and Kruskal–Wallis tests and Spearman rank correlation were used to evaluate study parameters. Altogether 1,643 samples representing 429 experiments published in 207 studies were evaluated. The fold change (FC) of down-regulation of the target gene was above 0.7 in 18.5% and above 0.5 in 38.7% of experiments. Silencing efficiency was lowest in MCF7 and highest in SW480 cells (FC = 0.59 and FC = 0.30, respectively; P = 9.3E−06). Studies utilizing Western blot for validation performed better than those with quantitative polymerase chain reaction (qPCR) or microarray (FC = 0.43, FC = 0.47, and FC = 0.55, respectively; P = 2.8E−04). There was no correlation between type of control, transfection method, publication year, and silencing efficiency. Although gene silencing is a robust feature successfully cross-validated in the majority of experiments, efficiency remained insufficient in a significant proportion of studies. Selection of the cell line model and validation method had the highest influence on silencing proficiency.
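
    The Wilcoxon signed-rank statistic used above for paired before/after expression values can be computed in a few lines. This is a minimal stdlib sketch with invented numbers, not the GEO data or the study's exact procedure (which would also derive a P value):

```python
# Minimal Wilcoxon signed-rank statistic for paired samples (illustrative).

def wilcoxon_signed_rank(before, after):
    """Return W = min(W+, W-): the smaller sum of ranks of |differences|.
    Zero differences are discarded; tied magnitudes get average ranks."""
    diffs = [b - a for b, a in zip(before, after) if b != a]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2.0 + 1.0  # average 1-based rank of positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus)

if __name__ == "__main__":
    before = [10.0, 12.0, 9.0, 11.0, 14.0]  # hypothetical pre-silencing values
    after = [9.0, 10.0, 6.0, 15.0, 9.0]     # hypothetical post-silencing values
    print(wilcoxon_signed_rank(before, after))  # -> 4.0
```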

  12. Modeling the Efficiency of a Germanium Detector

    Science.gov (United States)

    Hayton, Keith; Prewitt, Michelle; Quarles, C. A.

    2006-10-01

    We are using the Monte Carlo Program PENELOPE and the cylindrical geometry program PENCYL to develop a model of the detector efficiency of a planar Ge detector. The detector is used for x-ray measurements in an ongoing experiment to measure electron bremsstrahlung. While we are mainly interested in the efficiency up to 60 keV, the model ranges from 10.1 keV (below the Ge absorption edge at 11.1 keV) to 800 keV. Measurements of the detector efficiency have been made in a well-defined geometry with calibrated radioactive sources: Co-57, Se-75, Ba-133, Am-241 and Bi-207. The model is compared with the experimental measurements and is expected to provide a better interpolation formula for the detector efficiency than simply using x-ray absorption coefficients for the major constituents of the detector. Using PENELOPE, we will discuss several factors, such as Ge dead layer, surface ice layer and angular divergence of the source, that influence the efficiency of the detector.
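
    The simple absorption-coefficient formula the abstract compares its Monte Carlo model against can be sketched directly: a photon must survive the dead layer and then interact in the active germanium. The attenuation coefficients below are hypothetical placeholders, not tabulated values:

```python
# Hedged sketch of a planar-detector efficiency estimate from attenuation
# coefficients (hypothetical mu values; real ones come from tabulated data).

from math import exp

def planar_detector_efficiency(mu_dead_cm1, t_dead_cm, mu_ge_cm1, d_active_cm):
    """epsilon(E) = exp(-mu_dead * t_dead) * (1 - exp(-mu_Ge * d_active))."""
    return exp(-mu_dead_cm1 * t_dead_cm) * (1.0 - exp(-mu_ge_cm1 * d_active_cm))

if __name__ == "__main__":
    # Hypothetical low-energy case: thin dead layer, strongly absorbing crystal.
    print(round(planar_detector_efficiency(50.0, 3e-5, 20.0, 1.0), 4))
```

    The PENELOPE model in the abstract goes beyond this by simulating ice layers, source divergence and edge effects, which a bare absorption formula cannot capture.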

  13. Modeling international trends in energy efficiency

    International Nuclear Information System (INIS)

    Stern, David I.

    2012-01-01

    I use a stochastic production frontier to model energy efficiency trends in 85 countries over a 37-year period. Differences in energy efficiency across countries are modeled as a stochastic function of explanatory variables and I estimate the model using the cross-section of time-averaged data, so that no structure is imposed on technological change over time. Energy efficiency is measured using a new energy distance function approach. The country using the least energy per unit output, given its mix of outputs and inputs, defines the global production frontier. A country's relative energy efficiency is given by its distance from the frontier—the ratio of its actual energy use to the minimum required energy use, ceteris paribus. Energy efficiency is higher in countries with, inter alia, higher total factor productivity, undervalued currencies, and smaller fossil fuel reserves and it converges over time across countries. Globally, technological change was the most important factor counteracting the energy-use and carbon-emissions increasing effects of economic growth.
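
    The distance-function measure described above reduces, for a fixed output/input mix, to a simple ratio: the frontier (minimum) energy use divided by a country's actual energy use, so the frontier country scores 1. A toy sketch with invented numbers:

```python
# Sketch of the relative energy efficiency ratio (illustrative values only).

def relative_energy_efficiency(actual_energy, frontier_energy):
    """E_min / E_actual, in (0, 1]; the frontier country scores 1.0."""
    return frontier_energy / actual_energy

if __name__ == "__main__":
    # Hypothetical energy use per unit output, same output/input mix:
    energy_use = {"A": 100.0, "B": 140.0, "C": 250.0}
    e_min = min(energy_use.values())
    for country, e in sorted(energy_use.items()):
        print(country, round(relative_energy_efficiency(e, e_min), 3))
```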

  14. Modeling of alpha mass-efficiency curve

    International Nuclear Information System (INIS)

    Semkow, T.M.; Jeter, H.W.; Parsa, B.; Parekh, P.P.; Haines, D.K.; Bari, A.

    2005-01-01

    We present a model for the efficiency of a detector counting gross α radioactivity from both thin and thick samples, corresponding to low and high sample masses in the counting planchette. The model includes self-absorption of α particles in the sample, energy loss in the absorber, range straggling, as well as detector edge effects. The surface roughness of the sample is treated in terms of fractal geometry. The model reveals a linear dependence of the detector efficiency on the sample mass for low masses, as well as a power-law dependence for high masses. It is, therefore, named the linear-power-law (LPL) model. In addition, we consider an empirical power-law (EPL) curve and an exponential (EXP) curve. A comparison is made of the LPL, EPL, and EXP fits to the experimental α mass-efficiency data from gas-proportional detectors for selected radionuclides: 238U, 230Th, 239Pu, 241Am, and 244Cm. Based on this comparison, we recommend working equations for fitting mass-efficiency data. Measurement of α radioactivity from a thick sample can determine the fractal dimension of its surface.
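
    The LPL shape described above, linear in mass at low masses and power-law at high masses, can be sketched as a piecewise curve. The parameter values and the continuity-matching rule below are assumptions for illustration; the paper derives its working equations from the self-absorption physics:

```python
# Hedged sketch of a linear-power-law (LPL) efficiency curve (illustrative).

def lpl_efficiency(m, a, b, m0, k):
    """Linear branch a + b*m for m <= m0; power-law branch c*m^-k for m > m0,
    with c chosen so the curve is continuous at the breakpoint m0."""
    if m <= m0:
        return a + b * m
    c = (a + b * m0) * m0 ** k
    return c * m ** -k

if __name__ == "__main__":
    a, b, m0, k = 0.30, -0.002, 50.0, 0.8  # hypothetical fit parameters
    for mass_mg in (10.0, 50.0, 200.0):
        print(mass_mg, round(lpl_efficiency(mass_mg, a, b, m0, k), 4))
```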

  15. An efficient descriptor model for designing materials for solar cells

    Science.gov (United States)

    Alharbi, Fahhad H.; Rashkeev, Sergey N.; El-Mellouhi, Fedwa; Lüthi, Hans P.; Tabet, Nouar; Kais, Sabre

    2015-11-01

    An efficient descriptor model for fast screening of potential materials for solar cell applications is presented. It works for both excitonic and non-excitonic solar cell materials, and in addition to the energy gap it includes the absorption spectrum (α(E)) of the material. The charge transport properties of the explored materials are modelled using the characteristic diffusion length (Ld) determined for the respective family of compounds. The presented model surpasses the widely used Scharber model developed for bulk heterojunction solar cells. Using published experimental data, we show that the presented model is more accurate in predicting the achievable efficiencies. To model both excitonic and non-excitonic systems, two different sets of parameters are used to account for the different modes of operation. The analysis of the presented descriptor model clearly shows the benefit of including α(E) and Ld in view of improved screening results.

  16. Efficient Iris Localization via Optimization Model

    Directory of Open Access Journals (Sweden)

    Qi Wang

    2017-01-01

    Full Text Available Iris localization is one of the most important processes in iris recognition. Because of various kinds of noise in iris images, the localization result may be wrong. Moreover, the localization process is time-consuming. To solve these problems, this paper develops an efficient iris localization algorithm via an optimization model. Firstly, the localization problem is modeled as an optimization model. Then the SIFT feature is selected to represent the characteristic information of the iris outer boundary and eyelid for localization. The SDM (Supervised Descent Method) algorithm is employed to solve for the final points of the outer boundary and eyelids. Finally, IRLS (Iterative Reweighted Least-Squares) is used to obtain the parameters of the outer boundary and the upper and lower eyelids. Experimental results indicate that the proposed algorithm is efficient and effective.
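
    The IRLS step named above can be illustrated on a much simpler problem than eyelid fitting: robustly fitting a line to points with an outlier, reweighting each point by the inverse of its current residual. This is a generic IRLS sketch with invented data, not the paper's boundary model:

```python
# Hedged IRLS sketch: robust line fit y = p0 + p1*x with L1-style weights
# 1/max(|residual|, eps), recomputed each iteration (synthetic data).

def irls_line_fit(xs, ys, iters=20, eps=1e-6):
    """Solve weighted normal equations for a line, reweighting each pass."""
    w = [1.0] * len(xs)
    p0 = p1 = 0.0
    for _ in range(iters):
        sw = sum(w)
        sx = sum(wi * x for wi, x in zip(w, xs))
        sy = sum(wi * y for wi, y in zip(w, ys))
        sxx = sum(wi * x * x for wi, x in zip(w, xs))
        sxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
        det = sw * sxx - sx * sx
        p0 = (sxx * sy - sx * sxy) / det
        p1 = (sw * sxy - sx * sy) / det
        w = [1.0 / max(abs(y - (p0 + p1 * x)), eps) for x, y in zip(xs, ys)]
    return p0, p1

if __name__ == "__main__":
    xs = [0.0, 1.0, 2.0, 3.0, 4.0]
    ys = [0.1, 1.0, 2.1, 2.9, 40.0]  # last point is a gross outlier
    intercept, slope = irls_line_fit(xs, ys)
    print(round(intercept, 2), round(slope, 2))  # near the y = x trend
```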

  17. ACO model should encourage efficient care delivery.

    Science.gov (United States)

    Toussaint, John; Krueger, David; Shortell, Stephen M; Milstein, Arnold; Cutler, David M

    2015-09-01

    The independent Office of the Actuary for CMS certified that the Pioneer ACO model has met the stringent criteria for expansion to a larger population. Significant savings have accrued and quality targets have been met, so the program as a whole appears to be working. Ironically, 13 of the initial 32 enrollees have left. We attribute this to the design of the ACO models which inadequately support efficient care delivery. Using Bellin-ThedaCare Healthcare Partners as an example, we will focus on correctible flaws in four core elements of the ACO payment model: finance spending and targets, attribution, and quality performance. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. Efficient Bayesian network modeling of systems

    International Nuclear Information System (INIS)

    Bensi, Michelle; Kiureghian, Armen Der; Straub, Daniel

    2013-01-01

    The Bayesian network (BN) is a convenient tool for probabilistic modeling of system performance, particularly when it is of interest to update the reliability of the system or its components in light of observed information. In this paper, BN structures for modeling the performance of systems that are defined in terms of their minimum link or cut sets are investigated. Standard BN structures that define the system node as a child of its constituent components or its minimum link/cut sets lead to converging structures, which are computationally disadvantageous and could severely hamper application of the BN to real systems. A systematic approach to defining an alternative formulation is developed that creates chain-like BN structures that are orders of magnitude more efficient, particularly in terms of computational memory demand. The formulation uses an integer optimization algorithm to identify the most efficient BN structure. Example applications demonstrate the proposed methodology and quantify the gained computational advantage
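
    The computational contrast the abstract describes can be seen in a toy setting (an assumed example, not the paper's formulation): for a series system of independent components, a converging structure effectively enumerates all 2^n joint component states, while a chain-like structure folds components in one at a time, keeping only a running summary.

```python
# Chain-style vs enumeration-style computation of series-system reliability.

from itertools import product

def series_reliability_chain(p_work):
    """Fold components in one at a time; memory is O(1) in n."""
    ok = 1.0
    for p in p_work:
        ok *= p
    return ok

def series_reliability_enumeration(p_work):
    """Brute force over all 2^n joint component states."""
    total = 0.0
    for states in product([0, 1], repeat=len(p_work)):
        prob = 1.0
        for s, p in zip(states, p_work):
            prob *= p if s else (1.0 - p)
        if all(states):  # series system works only if every component does
            total += prob
    return total

if __name__ == "__main__":
    probs = [0.9, 0.95, 0.99, 0.8]  # hypothetical component reliabilities
    print(series_reliability_chain(probs))
    print(series_reliability_enumeration(probs))
```

    Both give the same answer; the chain formulation is what keeps memory demand low as systems grow, which is the advantage the proposed BN structures exploit.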

  19. Chiral crystal of a C2v-symmetric 1,3-diazaazulene derivative showing efficient optical second harmonic generation

    KAUST Repository

    Ma, Xiaohua

    2011-03-01

    Achiral nonlinear optical (NLO) chromophores 1,3-diazaazulene derivatives, 2-(4′-aminophenyl)-6-nitro-1,3-diazaazulene (APNA) and 2-(4′-N,N-diphenylaminophenyl)-6-nitro-1,3-diazaazulene (DPAPNA), were synthesized with high yield. Despite the moderate static first hyperpolarizabilities (β0) for both APNA [(136 ± 5) × 10⁻³⁰ esu] and DPAPNA [(263 ± 20) × 10⁻³⁰ esu], only the APNA crystal shows a powder efficiency of second harmonic generation (SHG) of 23 times that of urea. It is shown that the APNA crystallization, driven cooperatively by the strong H-bonding network and the dipolar electrostatic interactions, falls into the noncentrosymmetric P212121 space group, and that the helical supramolecular assembly is solely responsible for the efficient SHG response. On the contrary, the DPAPNA crystal with centrosymmetric P-1 space group is packed with antiparallel dimers, and is therefore completely SHG-inactive. 1,3-Diazaazulene derivatives are suggested to be potent building blocks for SHG-active chiral crystals, which are advantageous in high thermal stability, excellent near-infrared transparency and a high degree of design flexibility. © 2011 Wiley Periodicals, Inc. J Polym Sci Part B: Polym Phys, 2011. Optical crystals based on 1,3-diazaazulene derivatives are reported as the first example of an organic nonlinear optical crystal whose second harmonic generation activity originates solely from the chirality of the helical supramolecular orientation. The strong H-bond network forming between adjacent chromophores is found to act cooperatively with dipolar electrostatic interactions in driving the chiral crystallization of this material. Copyright © 2011 Wiley Periodicals, Inc.

  20. Structure model of energy efficiency indicators and applications

    International Nuclear Information System (INIS)

    Wu, Li-Ming; Chen, Bai-Sheng; Bor, Yun-Chang; Wu, Yin-Chin

    2007-01-01

    For the purposes of energy conservation and environmental protection, the government of Taiwan has instigated long-term policies to continuously encourage and assist industry in improving the efficiency of energy utilization. While multiple actions have led to practical energy saving to a limited extent, no strong evidence of improvement in energy efficiency was observed from the energy efficiency indicators (EEI) system, according to the annual national energy statistics and survey. A structural analysis of EEI is needed in order to understand the role that energy efficiency plays in the EEI system. This work uses the Taylor series expansion to develop a structure model for EEI at the level of the process sector of industry. The model is developed on the premise that the design parameters of the process are used in comparison with the operational parameters for energy differences. The utilization index of production capability and the variation index of energy utilization are formulated in the model to describe the differences between EEIs. Both qualitative and quantitative methods for the analysis of energy efficiency and energy savings are derived from the model. Through structural analysis, the model showed that, while the performance of EEI is proportional to the process utilization index of production capability, it is possible that energy may develop in a direction opposite to that of EEI. This helps to explain, at least in part, the inconsistency between EEI and energy savings. An energy-intensive steel plant in Taiwan was selected to show the application of the model. The energy utilization efficiency of the plant was evaluated and the amount of energy that had been saved or over-used in the production process was estimated. Some insights gained from the model outcomes are helpful to further enhance energy efficiency in the plant

  1. An Efficiency Model For Hydrogen Production In A Pressurized Electrolyzer

    Energy Technology Data Exchange (ETDEWEB)

    Smoglie, Cecilia; Lauretta, Ricardo

    2010-09-15

    The use of hydrogen as a clean fuel on a worldwide scale requires the development of simple, safe and efficient production and storage technologies. In this work, a methodology is proposed to produce hydrogen and oxygen in a self-pressurized electrolyzer connected to separate containers that store each of these gases. A mathematical model for hydrogen production efficiency is proposed to evaluate how such efficiency is affected by parasitic currents in the electrolytic solution. An experimental set-up and results for an electrolyzer are also presented. Comparison of empirical and analytical results shows good agreement.
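
    The efficiency idea above can be sketched with Faraday's law: parasitic currents carry charge without producing gas, so only the remaining current contributes to hydrogen production. The numbers below are illustrative, not the paper's measurements:

```python
# Hedged sketch: current efficiency and H2 rate with a parasitic current loss.

FARADAY = 96485.0  # C/mol, Faraday constant

def current_efficiency(current_a, parasitic_a):
    """Fraction of the cell current that actually produces gas."""
    return max(current_a - parasitic_a, 0.0) / current_a

def hydrogen_rate_mol_per_s(current_a, parasitic_a):
    """Faraday's law, n = I_eff / (2F): two electrons per H2 molecule."""
    i_eff = max(current_a - parasitic_a, 0.0)
    return i_eff / (2.0 * FARADAY)

if __name__ == "__main__":
    i, i_par = 10.0, 0.5  # hypothetical cell and parasitic currents (A)
    print(current_efficiency(i, i_par))       # -> 0.95
    print(hydrogen_rate_mol_per_s(i, i_par))  # mol H2 per second
```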

  2. Isolation of Fully Human Antagonistic RON Antibodies Showing Efficient Block of Downstream Signaling and Cell Migration

    Science.gov (United States)

    Gunes, Zeynep; Zucconi, Adriana; Cioce, Mario; Meola, Annalisa; Pezzanera, Monica; Acali, Stefano; Zampaglione, Immacolata; De Pratti, Valeria; Bova, Luca; Talamo, Fabio; Demartis, Anna; Monaci, Paolo; La Monica, Nicola; Ciliberto, Gennaro; Vitelli, Alessandra

    2011-01-01

    RON belongs to the c-MET family of receptor tyrosine kinases. As its well-known family member MET, RON and its ligand macrophage-stimulating protein have been implicated in the progression and metastasis of tumors and have been shown to be overexpressed in cancer. We generated and tested a large number of human monoclonal antibodies (mAbs) against human RON. Our screening yielded three high-affinity antibodies that efficiently block ligand-dependent intracellular AKT and MAPK signaling. This effect correlates with the strong reduction of ligand-activated migration of T47D breast cancer cell line. By cross-competition experiments, we showed that the antagonistic antibodies fall into three distinct epitope regions of the RON extracellular Sema domain. Notably, no inhibition of tumor growth was observed in different epithelial tumor xenografts in nude mice with any of the antibodies. These results suggest that distinct properties beside ligand antagonism are required for anti-RON mAbs to exert antitumor effects in vivo. PMID:21286376

  3. Models for efficient integration of solar energy

    DEFF Research Database (Denmark)

    Bacher, Peder

    Efficient operation of energy systems with a substantial amount of renewable energy production is becoming increasingly important. Renewables are dependent on the weather conditions and are therefore by nature volatile and uncontrollable, opposed to traditional energy production based on combustion. The "smart grid" is a broad term for the technology for addressing the challenge of operating the grid with a large share of renewables. The "smart" part is formed by technologies which model the properties of the systems and efficiently adapt the load to the volatile energy production, by using the available flexibility in the system. In the present thesis, methods related to operation of solar energy systems and for optimal energy use in buildings are presented. Two approaches for forecasting of solar power based on numerical weather predictions (NWPs) are presented; they are applied to forecast...

  4. Two tropical conifers show strong growth and water-use efficiency responses to altered CO2 concentration.

    Science.gov (United States)

    Dalling, James W; Cernusak, Lucas A; Winter, Klaus; Aranda, Jorge; Garcia, Milton; Virgo, Aurelio; Cheesman, Alexander W; Baresch, Andres; Jaramillo, Carlos; Turner, Benjamin L

    2016-11-01

    Conifers dominated wet lowland tropical forests 100 million years ago (MYA). With a few exceptions in the Podocarpaceae and Araucariaceae, conifers are now absent from this biome. This shift to angiosperm dominance also coincided with a large decline in atmospheric CO2 concentration (ca). We compared growth and physiological performance of two lowland tropical angiosperms and conifers at ca levels representing pre-industrial (280 ppm), ambient (400 ppm) and Eocene (800 ppm) conditions to explore how differences in ca affect the growth and water-use efficiency (WUE) of seedlings from these groups. Two conifers (Araucaria heterophylla and Podocarpus guatemalensis) and two angiosperm trees (Tabebuia rosea and Chrysophyllum cainito) were grown in climate-controlled glasshouses in Panama. Growth, photosynthetic rates, nutrient uptake, and nutrient-use and water-use efficiencies were measured. Podocarpus seedlings showed a stronger (66%) increase in relative growth rate with increasing ca relative to Araucaria (19%) and the angiosperms (no growth enhancement). The response of Podocarpus is consistent with expectations for species with conservative growth traits and low mesophyll diffusion conductance. While previous work has shown limited stomatal response of conifers to ca, we found that the two conifers had significantly greater increases in leaf and whole-plant WUE than the angiosperms, reflecting increased photosynthetic rate and reduced stomatal conductance. Foliar nitrogen isotope ratios (δ15N) and soil nitrate concentrations indicated a preference in Podocarpus for ammonium over nitrate, which may impact nitrogen uptake relative to nitrate assimilators under high ca. SIGNIFICANCE: Podocarps colonized tropical forests after angiosperms achieved dominance and are now restricted to infertile soils. Although limited to a single species, our data suggest that higher ca may have been favourable for podocarp colonization of tropical South America 60

  5. Separating environmental efficiency into production and abatement efficiency. A nonparametric model with application to U.S. power plants

    Energy Technology Data Exchange (ETDEWEB)

    Hampf, Benjamin

    2011-08-15

    In this paper we present a new approach to evaluate the environmental efficiency of decision making units. We propose a model that describes a two-stage process consisting of a production and an end-of-pipe abatement stage with the environmental efficiency being determined by the efficiency of both stages. Taking the dependencies between the two stages into account, we show how nonparametric methods can be used to measure environmental efficiency and to decompose it into production and abatement efficiency. For an empirical illustration we apply our model to an analysis of U.S. power plants.
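
    The two-stage decomposition described above can be illustrated with a deliberately simplified ratio-based toy (not the paper's nonparametric, DEA-style estimator): score each stage against the best observed unit and take the product as the overall environmental efficiency. All plant data below are invented.

```python
# Assumed toy decomposition: environmental efficiency = production efficiency
# x abatement efficiency, each measured relative to the best observed plant.

def stage_scores(plants):
    """plants: name -> (output per unit input, share of emissions abated)."""
    best_prod = max(p[0] for p in plants.values())
    best_abat = max(p[1] for p in plants.values())
    return {
        name: (
            prod / best_prod,
            abat / best_abat,
            (prod / best_prod) * (abat / best_abat),
        )
        for name, (prod, abat) in plants.items()
    }

if __name__ == "__main__":
    # Hypothetical power plants: (electricity per unit fuel, abatement rate)
    plants = {"P1": (0.40, 0.90), "P2": (0.50, 0.60), "P3": (0.35, 0.45)}
    for name, (prod_eff, abat_eff, env_eff) in sorted(stage_scores(plants).items()):
        print(name, round(prod_eff, 2), round(abat_eff, 2), round(env_eff, 2))
```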

  6. Interactions to the fifth trophic level: secondary and tertiary parasitoid wasps show extraordinary efficiency in utilizing host resources

    NARCIS (Netherlands)

    Harvey, J.A.; Wagenaar, R.; Bezemer, T.M.

    2009-01-01

    1. Parasitoid wasps are highly efficient organisms at utilizing and assimilating limited resources from their hosts. This study explores interactions over three trophic levels, from the third (primary parasitoid) to the fourth (secondary parasitoid) and terminating in the fifth (tertiary

  7. AN EFFICIENT STRUCTURAL REANALYSIS MODEL FOR ...

    African Journals Online (AJOL)

    be required if complete and exact analysis would be carried out. This paper ... qualities even under significantly large design modifications. A numerical example has been presented to show potential capabilities of the proposed model. INTRODUCTION ... equilibrium conditions in the structural system and the subsequent ...

  8. Efficient 3D scene modeling and mosaicing

    CERN Document Server

    Nicosevici, Tudor

    2013-01-01

    This book proposes a complete pipeline for monocular (single camera) based 3D mapping of terrestrial and underwater environments. The aim is to provide a solution to large-scale scene modeling that is both accurate and efficient. To this end, we have developed a novel Structure from Motion algorithm that increases mapping accuracy by registering camera views directly with the maps. The camera registration uses a dual approach that adapts to the type of environment being mapped. In order to further increase the accuracy of the resulting maps, a new method is presented, allowing detection of images corresponding to the same scene region (crossovers). Crossovers are then used in conjunction with global alignment methods to greatly reduce estimation errors, especially when mapping large areas. Our method is based on the Visual Bag of Words paradigm (BoW), offering a more efficient and simpler solution by eliminating the training stage generally required by state-of-the-art BoW algorithms. Also, towards dev...

  9. The temperate Burkholderia phage AP3 of the Peduovirinae shows efficient antimicrobial activity against B. cenocepacia of the IIIA lineage.

    Science.gov (United States)

    Roszniowski, Bartosz; Latka, Agnieszka; Maciejewska, Barbara; Vandenheuvel, Dieter; Olszak, Tomasz; Briers, Yves; Holt, Giles S; Valvano, Miguel A; Lavigne, Rob; Smith, Darren L; Drulis-Kawa, Zuzanna

    2017-02-01

    Burkholderia phage AP3 (vB_BceM_AP3) is a temperate virus of the Myoviridae and the Peduovirinae subfamily (P2likevirus genus). This phage specifically infects multidrug-resistant clinical Burkholderia cenocepacia lineage IIIA strains commonly isolated from cystic fibrosis patients. AP3 exhibits high pairwise nucleotide identity (61.7%) to Burkholderia phage KS5, specific to the same B. cenocepacia host, and has 46.7-49.5% identity to phages infecting other species of Burkholderia. The lysis cassette of these related phages has a similar organization (putative antiholin, putative holin, endolysin, and spanins) and shows 29-98% homology between specific lysis genes, in contrast to Enterobacteria phage P2, the hallmark phage of this genus. The AP3 and KS5 lysis genes have conserved locations and high amino acid sequence similarity. The AP3 bacteriophage particles remain infective up to 5 h at pH 4-10 and are stable at 60 °C for 30 min, but are sensitive to chloroform, with no remaining infective particles after 24 h of treatment. AP3 lysogeny can occur by stable genomic integration and by pseudo-lysogeny. The lysogenic bacterial mutants did not exhibit any significant changes in virulence compared to the wild-type host strain when tested in the Galleria mellonella wax moth model. Moreover, AP3 treatment of larvae infected with B. cenocepacia revealed a significant increase (P < 0.0001) in larval survival in comparison to AP3-untreated infected larvae. AP3 showed robust lytic activity, as evidenced by its broad host range, the absence of increased virulence in lysogenic isolates, the lack of bacterial gene disruption conditioned by the bacterial tRNA downstream integration site, and the absence of detected toxin sequences. These data suggest that the AP3 phage is a promising potent agent against bacteria belonging to the most common B. cenocepacia IIIA lineage strains.

  10. Inertia may limit efficiency of slow flapping flight, but mayflies show a strategy for reducing the power requirements of loiter

    International Nuclear Information System (INIS)

    Usherwood, James R

    2009-01-01

    Predictions from aerodynamic theory often match biological observations very poorly. Many insects and several bird species habitually hover, frequently flying at low advance ratios. Taking helicopter-based aerodynamic theory, wings functioning predominantly for hovering, even for quite small insects, should operate at low angles of attack. However, insect wings operate at very high angles of attack during hovering; reduction in angle of attack should result in considerable energetic savings. Here, I consider the possibility that selection of kinematics is constrained from being aerodynamically optimal due to the inertial power requirements of flapping. Potential increases in aerodynamic efficiency with lower angles of attack during hovering may be outweighed by increases in inertial power due to the associated increases in flapping frequency. For simple hovering, traditional rotary-winged helicopter-like micro air vehicles would be more efficient than their flapping biomimetic counterparts. However, flapping may confer advantages in terms of top speed and manoeuvrability. If flapping-winged micro air vehicles are required to hover or loiter more efficiently, dragonflies and mayflies suggest biomimetic solutions

  11. Spatial occupancy models applied to atlas data show Southern Ground Hornbills strongly depend on protected areas.

    Science.gov (United States)

    Broms, Kristin M; Johnson, Devin S; Altwegg, Res; Conquest, Loveday L

    2014-03-01

    Determining the range of a species and exploring species–habitat associations are central questions in ecology and can be answered by analyzing presence–absence data. Often, both the sampling of sites and the desired area of inference involve neighboring sites; thus, positive spatial autocorrelation between these sites is expected. Using survey data for the Southern Ground Hornbill (Bucorvus leadbeateri) from the Southern African Bird Atlas Project, we compared advantages and disadvantages of three increasingly complex models for species occupancy: an occupancy model that accounted for nondetection but assumed all sites were independent, and two spatial occupancy models that accounted for both nondetection and spatial autocorrelation. We modeled the spatial autocorrelation with an intrinsic conditional autoregressive (ICAR) model and with a restricted spatial regression (RSR) model. Both spatial models can readily be applied to any other gridded, presence–absence data set using a newly introduced R package. The RSR model provided the best inference and was able to capture small-scale variation that the other models did not. It showed that ground hornbills are strongly dependent on protected areas in the north of their South African range, but less so further south. The ICAR models did not capture any spatial autocorrelation in the data, and they took an order of magnitude longer than the RSR models to run. Thus, the RSR occupancy model appears to be an attractive choice for modeling occurrences at large spatial domains, while accounting for imperfect detection and spatial autocorrelation.
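The non-spatial baseline in this comparison rests on the standard occupancy likelihood, which separates the probability that a site is occupied (psi) from the per-visit probability of detecting the species at an occupied site (p). A minimal pure-Python sketch of that likelihood for a single site's detection history (the function name and parameterization are illustrative, not taken from the paper or its R package):

```python
from math import prod

def detection_history_likelihood(history, psi, p):
    """Likelihood of one site's detection history under a simple
    (non-spatial) occupancy model: the site is occupied with
    probability psi, and an occupied site yields a detection on
    each visit with probability p. `history` is a list of 0/1 visits."""
    # Probability of this exact visit sequence, given the site is occupied
    given_occupied = prod(p if d else (1.0 - p) for d in history)
    if any(history):
        # At least one detection: the site must be occupied
        return psi * given_occupied
    # All-zero history: either occupied but never detected, or truly absent
    return psi * given_occupied + (1.0 - psi)
```

The all-zero branch is what lets the model separate "absent" from "present but missed", which a naive presence–absence analysis cannot do.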

  12. AN EFFICIENT PATIENT INFLOW PREDICTION MODEL FOR HOSPITAL RESOURCE MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Kottalanka Srikanth

    2017-07-01

    Full Text Available There has been increasing demand for improved service provisioning in hospital resource management. Hospitals work under strict budget constraints while at the same time assuring quality care; achieving both requires an efficient prediction model. Recently, various time-series-based prediction models have been proposed for managing hospital resources such as ambulance monitoring and emergency care. These models are not efficient because they do not consider contextual factors such as climate conditions. To address this, artificial intelligence is adopted. The issue with existing prediction approaches is that training suffers from local-optima error, which induces overhead and reduces prediction accuracy. To overcome the local minima error, this work presents a patient inflow prediction model based on a resilient backpropagation neural network. Experiments are conducted to evaluate the performance of the proposed model in terms of RMSE and MAPE. The outcomes show that the proposed model reduces RMSE and MAPE compared with an existing backpropagation-based artificial neural network. Overall, the proposed prediction model improves prediction accuracy, which aids in improving the quality of health care management.
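The two evaluation metrics named in the abstract are standard and easy to state precisely. A small sketch of RMSE and MAPE as conventionally defined (the helper names are mine, not from the paper):

```python
from math import sqrt

def rmse(actual, predicted):
    """Root-mean-square error between two equal-length sequences."""
    n = len(actual)
    return sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)

def mape(actual, predicted):
    """Mean absolute percentage error, in percent.
    Actual values must be non-zero."""
    n = len(actual)
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / n
```

RMSE penalizes large errors quadratically, while MAPE expresses error relative to the true inflow, which is why papers of this kind usually report both.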

  13. Energy Efficiency Model for Induction Furnace

    Science.gov (United States)

    Dey, Asit Kr

    2018-01-01

    In this paper, a solar induction furnace unit was designed to find a new solution for the existing AC-powered heating process through a supervisory control and data acquisition (SCADA) system. This unit can be connected directly to the DC system without any internal conversion inside the device. The performance of the new system is compared with the existing one in terms of power consumption and losses. This work also investigated energy savings, system improvement and a process-control model in a foundry induction furnace heating framework supplied by PV solar power. The results are analysed over the long run in terms of energy saved and the integrated process system. A data-acquisition-based solar foundry plant is an extremely multifaceted system that can be run over an almost innumerable range of operating conditions, each characterized by specific energy consumption. Determining ideal operating conditions is a key challenge that requires the involvement of the latest automation technologies, each one contributing to allow not only the acquisition, processing, storage, retrieval and visualization of data, but also the implementation of automatic control strategies that can expand the achievement envelope in terms of melting process, safety and energy efficiency.

  14. Efficient Neural Network Modeling for Flight and Space Dynamics Simulation

    Directory of Open Access Journals (Sweden)

    Ayman Hamdy Kassem

    2011-01-01

    Full Text Available This paper presents an efficient technique for neural network modeling of flight and space dynamics simulation. The technique frees the neural network designer from guessing the size and structure of the required neural network model and helps to minimize the number of neurons. For linear flight/space dynamics systems, the technique can find the network weights and biases directly by solving a system of linear equations without the need for training. Nonlinear flight dynamics systems can be easily modeled by training their linearized models while keeping the same network structure. The training is fast, as it uses the linear system knowledge to speed up the training process. The technique was tested on different flight/space dynamic models and showed promising results.
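For the linear case, the claim that weights and biases can be found by solving linear equations rather than by iterative training can be illustrated with the simplest instance: a single-input linear "neuron" fitted in closed form via the normal equations. This is a toy sketch of the idea, not the authors' method:

```python
def fit_linear_neuron(xs, ys):
    """Solve for the weight w and bias b of y ~ w*x + b directly
    from the normal equations, with no iterative training loop."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    # Closed-form least-squares solution for one input
    w = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - w * sx) / n
    return w, b
```

For data generated by an exactly linear system, this recovers the true weight and bias in one step, which is the degenerate case of the paper's "solve instead of train" observation.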

  15. A novel halophilic lipase, LipBL, showing high efficiency in the production of eicosapentaenoic acid (EPA).

    Directory of Open Access Journals (Sweden)

    Dolores Pérez

    Full Text Available BACKGROUND: Among extremophiles, halophiles are defined as microorganisms adapted to live and thrive in diverse extreme saline environments. These extremophilic microorganisms constitute the source of a number of hydrolases with great biotechnological applications. The interest in using extremozymes from halophiles in industrial applications stems from their resistance to organic solvents and extreme temperatures. Marinobacter lipolyticus SM19 is a moderately halophilic bacterium, isolated previously from a saline habitat in South Spain, showing lipolytic activity. METHODS AND FINDINGS: A lipolytic enzyme from the halophilic bacterium Marinobacter lipolyticus SM19 was isolated. This enzyme, designated LipBL, was expressed in Escherichia coli. LipBL is a protein of 404 amino acids with a molecular mass of 45.3 kDa and high identity to class C β-lactamases. LipBL was purified and biochemically characterized. The temperature for its maximal activity was 80°C and the pH optimum determined at 25°C was 7.0; it showed optimal activity without sodium chloride, while maintaining 20% activity over a wide range of NaCl concentrations. This enzyme exhibited high activity against short- to medium-length acyl chain substrates, although it also hydrolyzes olive oil and fish oil. Fish oil hydrolysis using LipBL results in an enrichment of free eicosapentaenoic acid (EPA), but not docosahexaenoic acid (DHA), relative to its levels in fish oil. To improve its stability for use in industrial processes, LipBL was immobilized on different supports. The derivatives immobilized on CNBr-activated Sepharose were highly selective towards the release of EPA versus DHA. The enzyme is also active towards different chiral and prochiral esters. Exposure of LipBL to buffer–solvent mixtures showed that the enzyme had remarkable activity and stability in all organic solvents tested. CONCLUSIONS: In this study we isolated, purified, biochemically characterized and immobilized a novel halophilic lipase, LipBL.

  16. Strong and Nonspecific Synergistic Antibacterial Efficiency of Antibiotics Combined with Silver Nanoparticles at Very Low Concentrations Showing No Cytotoxic Effect.

    Science.gov (United States)

    Panáček, Aleš; Smékalová, Monika; Kilianová, Martina; Prucek, Robert; Bogdanová, Kateřina; Večeřová, Renata; Kolář, Milan; Havrdová, Markéta; Płaza, Grażyna Anna; Chojniak, Joanna; Zbořil, Radek; Kvítek, Libor

    2015-12-28

    The resistance of bacteria towards traditional antibiotics currently constitutes one of the most important health care issues with serious negative impacts in practice. Overcoming this issue can be achieved by using antibacterial agents with multimode antibacterial action. Silver nanoparticles (AgNPs) are one of the well-known antibacterial substances showing such multimode antibacterial action. Therefore, AgNPs are suitable candidates for use in combination with traditional antibiotics in order to improve their antibacterial action. In this work, a systematic study quantifying the synergistic effects of antibiotics with different modes of action and different chemical structures in combination with AgNPs against Escherichia coli, Pseudomonas aeruginosa and Staphylococcus aureus was performed. Employing the microdilution method, which is more suitable and reliable than the disc diffusion method, strong synergistic effects were shown for all tested antibiotics combined with AgNPs at very low concentrations of both antibiotics and AgNPs. No trends were observed in the synergistic effects with respect to antibiotic mode of action or chemical structure, indicating that the synergy is non-specific. Moreover, a very low amount of silver is needed for effective antibacterial action of the antibiotics, which represents an important finding for potential medical applications due to the negligible cytotoxic effect of AgNPs towards human cells at these concentration levels.
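Synergy in microdilution (checkerboard) assays is conventionally quantified with the fractional inhibitory concentration index (FICI), computed from minimum inhibitory concentrations (MICs) alone and in combination. The abstract does not state the formula it used, so the sketch below follows the standard convention (synergy at FICI ≤ 0.5):

```python
def fic_index(mic_a_alone, mic_b_alone, mic_a_combo, mic_b_combo):
    """Fractional inhibitory concentration index for a two-agent
    combination: each agent's MIC in combination divided by its
    MIC alone, summed."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

def interpretation(fici):
    """Conventional reading of the FICI value."""
    if fici <= 0.5:
        return "synergy"
    if fici <= 4.0:
        return "no interaction"
    return "antagonism"
```

For example, if an antibiotic's MIC drops from 8 to 1 and the AgNP MIC from 4 to 0.5 in combination, FICI = 0.25, well inside the synergy range.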

  17. Classifying Multi-Model Wheat Yield Impact Response Surfaces Showing Sensitivity to Temperature and Precipitation Change

    Science.gov (United States)

    Fronzek, Stefan; Pirttioja, Nina; Carter, Timothy R.; Bindi, Marco; Hoffmann, Holger; Palosuo, Taru; Ruiz-Ramos, Margarita; Tao, Fulu; Trnka, Miroslav; Acutis, Marco

    2017-01-01

    Crop growth simulation models can differ greatly in their treatment of key processes and hence in their response to environmental conditions. Here, we used an ensemble of 26 process-based wheat models applied at sites across a European transect to compare their sensitivity to changes in temperature (−2 to +9 °C) and precipitation (−50 to +50 %). Model results were analysed by plotting them as impact response surfaces (IRSs), classifying the IRS patterns of individual model simulations, describing these classes and analysing factors that may explain the major differences in model responses. The model ensemble was used to simulate yields of winter and spring wheat at four sites in Finland, Germany and Spain. Results were plotted as IRSs that show changes in yields relative to the baseline with respect to temperature and precipitation. IRSs of 30-year means and selected extreme years were classified using two approaches describing their pattern. The expert diagnostic approach (EDA) combines two aspects of IRS patterns: location of the maximum yield (nine classes) and strength of the yield response with respect to climate (four classes), resulting in a total of 36 combined classes defined using criteria pre-specified by experts. The statistical diagnostic approach (SDA) groups IRSs by comparing their pattern and magnitude, without attempting to interpret these features. It applies a hierarchical clustering method, grouping response patterns using a distance metric that combines the spatial correlation and Euclidian distance between IRS pairs. The two approaches were used to investigate whether different patterns of yield response could be related to different properties of the crop models, specifically their genealogy, calibration and process description. Although no single model property across a large model ensemble was found to explain the integrated yield response to temperature and precipitation perturbations, the
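The SDA's distance metric combines the spatial correlation (pattern) and the Euclidean distance (magnitude) between IRS pairs. The exact weighting is not given in this summary, so the sketch below assumes a simple convex combination of a correlation distance and a size-normalized Euclidean distance over flattened response surfaces:

```python
from math import sqrt

def pearson(u, v):
    """Pearson correlation between two equal-length value sequences."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = sqrt(sum((a - mu) ** 2 for a in u))
    sv = sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

def irs_distance(u, v, alpha=0.5):
    """Blend correlation distance (pattern mismatch) with a
    size-normalized Euclidean distance (magnitude mismatch).
    `alpha` is an assumed weighting, not taken from the paper."""
    d_corr = 1.0 - pearson(u, v)
    d_eucl = sqrt(sum((a - b) ** 2 for a, b in zip(u, v))) / len(u)
    return alpha * d_corr + (1.0 - alpha) * d_eucl
```

A pairwise matrix of such distances can then be fed to any standard hierarchical clustering routine to group the surfaces, which is the step the SDA performs.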

  18. Modeling Patient No-Show History and Predicting Future Outpatient Appointment Behavior in the Veterans Health Administration.

    Science.gov (United States)

    Goffman, Rachel M; Harris, Shannon L; May, Jerrold H; Milicevic, Aleksandra S; Monte, Robert J; Myaskovsky, Larissa; Rodriguez, Keri L; Tjader, Youxu C; Vargas, Dominic L

    2017-05-01

    Missed appointments reduce the efficiency of the health care system and negatively impact access to care for all patients. Identifying patients at risk for missing an appointment could help health care systems and providers better target interventions to reduce patient no-shows. Our aim was to develop and test a predictive model that identifies patients that have a high probability of missing their outpatient appointments. Demographic information, appointment characteristics, and attendance history were drawn from the existing data sets from four Veterans Affairs health care facilities within six separate service areas. Past attendance behavior was modeled using an empirical Markov model based on up to 10 previous appointments. Using logistic regression, we developed 24 unique predictive models. We implemented the models and tested an intervention strategy using live reminder calls placed 24, 48, and 72 hours ahead of time. The pilot study targeted 1,754 high-risk patients, whose probability of missing an appointment was predicted to be at least 0.2. Our results indicate that three variables were consistently related to a patient's no-show probability in all 24 models: past attendance behavior, the age of the appointment, and having multiple appointments scheduled on that day. After the intervention was implemented, the no-show rate in the pilot group was reduced from the expected value of 35% to 12.16% (p value < 0.0001). The predictive model accurately identified patients who were more likely to miss their appointments. Applying the model in practice enables clinics to apply more intensive intervention measures to high-risk patients. Reprint & Copyright © 2017 Association of Military Surgeons of the U.S.
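The "past attendance behavior" predictor can be illustrated with the simplest empirical Markov estimate: transition frequencies from one appointment outcome to the next, pooled over patients. This is a first-order sketch of my own; the paper's model conditions on up to 10 previous appointments:

```python
from collections import Counter

def no_show_transition_probs(histories):
    """Estimate P(next appointment is a no-show | previous outcome)
    from per-patient outcome lists, where 1 = attended, 0 = no-show.
    This is a first-order Markov approximation of attendance."""
    counts = Counter()
    for h in histories:
        for prev, nxt in zip(h, h[1:]):
            counts[(prev, nxt)] += 1
    probs = {}
    for prev in (0, 1):
        total = counts[(prev, 0)] + counts[(prev, 1)]
        if total:
            # Fraction of transitions out of `prev` that end in a no-show
            probs[prev] = counts[(prev, 0)] / total
    return probs
```

A patient whose last appointment was a no-show typically carries a higher estimated no-show probability, which is exactly the kind of signal the pilot used to flag high-risk patients for reminder calls.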

  19. Modeling and simulation of complex systems a framework for efficient agent-based modeling and simulation

    CERN Document Server

    Siegfried, Robert

    2014-01-01

    Robert Siegfried presents a framework for efficient agent-based modeling and simulation of complex systems. He compares different approaches for describing the structure and dynamics of agent-based models in detail. Based on this evaluation the author introduces the "General Reference Model for Agent-based Modeling and Simulation" (GRAMS). Furthermore he presents parallel and distributed simulation approaches for the execution of agent-based models, from small scale to very large scale. The author shows how agent-based models may be executed by different simulation engines that utilize underlying hard

  20. Effective and efficient model clone detection

    DEFF Research Database (Denmark)

    Störrle, Harald

    2015-01-01

    Code clones are a major source of software defects. Thus, it is likely that model clones (i.e., duplicate fragments of models) have a significant negative impact on model quality, and thus, on any software created based on those models, irrespective of whether the software is generated fully automatically (“MDD-style”) or hand-crafted following the blueprint defined by the model (“MBSD-style”). Unfortunately, however, model clones are much less well studied than code clones. In this paper, we present a clone detection algorithm for UML domain models. Our approach covers a much greater variety of model types than existing approaches while providing high clone detection rates at high speed.

  1. Microarray profiling shows distinct differences between primary tumors and commonly used preclinical models in hepatocellular carcinoma

    International Nuclear Information System (INIS)

    Wang, Weining; Iyer, N. Gopalakrishna; Tay, Hsien Ts’ung; Wu, Yonghui; Lim, Tony K. H.; Zheng, Lin; Song, In Chin; Kwoh, Chee Keong; Huynh, Hung; Tan, Patrick O. B.; Chow, Pierce K. H.

    2015-01-01

    Despite advances in therapeutics, outcomes for hepatocellular carcinoma (HCC) remain poor and there is an urgent need for efficacious systemic therapy. Unfortunately, drugs that are successful in preclinical studies often fail in the clinical setting, and we hypothesize that this is due to functional differences between primary tumors and commonly used preclinical models. In this study, we attempt to answer this question by comparing tumor morphology and gene expression profiles between primary tumors, xenografts and HCC cell lines. HepG2 cells and tumor cells from patient tumor explants were injected subcutaneously (ectopically) into the flank and orthotopically into the liver parenchyma of Mus musculus SCID mice. The mice were euthanized after two weeks. RNA was extracted from the tumors, and gene expression profiling was performed using the GeneChip Human Genome U133 Plus 2.0. Principal component analyses (PCA) and construction of dendrograms were conducted using the Partek genomics suite. PCA showed that the commonly used HepG2 cell line model and its xenograft counterparts were vastly different from all fresh primary tumors. Expression profiles of primary tumors were also significantly divergent from their counterpart patient-derived xenograft (PDX) models, regardless of the site of implantation. Xenografts from the same primary tumors were more likely to cluster together regardless of site of implantation, although heat maps showed distinct differences in gene expression profiles between orthotopic and ectopic models. The data presented here challenge the utility of routinely used preclinical models. Models using HepG2 were vastly different from primary tumors and PDXs, suggesting that this model is not clinically representative. Surprisingly, site of implantation (orthotopic versus ectopic) had limited impact on gene expression profiles, and in both scenarios xenografts differed significantly from the original primary tumors, challenging the long

  2. Efficient solvers for coupled models in respiratory mechanics.

    Science.gov (United States)

    Verdugo, Francesc; Roth, Christian J; Yoshihara, Lena; Wall, Wolfgang A

    2017-02-01

    We present efficient preconditioners for one of the most physiologically relevant pulmonary models currently available. Our underlying motivation is to enable the efficient simulation of such a lung model on high-performance computing platforms in order to assess mechanical ventilation strategies and contributing to design more protective patient-specific ventilation treatments. The system of linear equations to be solved using the proposed preconditioners is essentially the monolithic system arising in fluid-structure interaction (FSI) extended by additional algebraic constraints. The introduction of these constraints leads to a saddle point problem that cannot be solved with usual FSI preconditioners available in the literature. The key ingredient in this work is to use the idea of the semi-implicit method for pressure-linked equations (SIMPLE) for getting rid of the saddle point structure, resulting in a standard FSI problem that can be treated with available techniques. The numerical examples show that the resulting preconditioners approach the optimal performance of multigrid methods, even though the lung model is a complex multiphysics problem. Moreover, the preconditioners are robust enough to deal with physiologically relevant simulations involving complex real-world patient-specific lung geometries. The same approach is applicable to other challenging biomedical applications where coupling between flow and tissue deformations is modeled with additional algebraic constraints. Copyright © 2016 John Wiley & Sons, Ltd.

  3. Energy technologies and energy efficiency in economic modelling

    DEFF Research Database (Denmark)

    Klinge Jacobsen, Henrik

    1998-01-01

    This paper discusses different approaches to incorporating energy technologies and technological development in energy-economic models. Technological development is a very important issue in long-term energy demand projections and in environmental analyses. Different assumptions on technological … of renewable energy and especially wind power will increase the rate of efficiency improvement. A technologically based model in this case indirectly makes the energy efficiency endogenous in the aggregate energy-economy model. … technological development. This paper examines the effect on aggregate energy efficiency of using technological models to describe a number of specific technologies and of incorporating these models in an economic model. Different effects from the technology representation are illustrated. Vintage effects … illustrates the dependence of average efficiencies and productivity on capacity utilisation rates. In the long run, regulation induced by environmental policies is also very important for the improvement of aggregate energy efficiency in the energy supply sector. A Danish policy to increase the share …

  4. Small GSK-3 Inhibitor Shows Efficacy in a Motor Neuron Disease Murine Model Modulating Autophagy.

    Directory of Open Access Journals (Sweden)

    Estefanía de Munck

    Full Text Available Amyotrophic lateral sclerosis (ALS) is a progressive motor neuron degenerative disease with no effective treatment to date. Drug discovery has been hampered by the lack of knowledge of its molecular etiology together with the limited animal models available for research. Recently, a motor neuron disease animal model has been developed using β-N-methylamino-L-alanine (L-BMAA), a neurotoxic amino acid implicated in the development of ALS. In the present work, the neuroprotective role of VP2.51, a small heterocyclic GSK-3 inhibitor, is analysed in this novel murine model together with an analysis of autophagy. Daily administration of VP2.51 for two weeks, starting the first day after L-BMAA treatment, leads to total recovery of neurological symptoms and prevents the activation of autophagic processes in rats. These results show that the L-BMAA murine model can be used to test the efficacy of new drugs. In addition, the results confirm the therapeutic potential of GSK-3 inhibitors, and especially VP2.51, for future disease-modifying treatment of motor neuron disorders like ALS.

  5. Model of the synthesis of trisporic acid in Mucorales showing bistability.

    Science.gov (United States)

    Werner, S; Schroeter, A; Schimek, C; Vlaic, S; Wöstemeyer, J; Schuster, S

    2012-12-01

    An important substance in the signalling between individuals of Mucor-like fungi is trisporic acid (TA). This compound, together with some of its precursors, serves as a pheromone in mating between (+)- and (-)-mating types. Moreover, intermediates of the TA pathway are exchanged between the two mating partners. Based on differential equations, mathematical models of the synthesis pathways of TA in the two mating types of an idealised Mucor-fungus are here presented. These models include the positive feedback of TA on its own synthesis. The authors compare three sub-models in view of bistability, robustness and the reversibility of transitions. The proposed modelling study showed that, in a system where intermediates are exchanged, a reversible transition between the two stable steady states occurs, whereas an exchange of the end product leads to an irreversible transition. The reversible transition is physiologically favoured, because the high-production state of TA must come to an end eventually. Moreover, the exchange of intermediates and TA is compared with the 3-way handshake widely used by computers linked in a network.
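The bistability that positive feedback can produce is easy to reproduce in a generic toy model (not the authors' equations): a basal production term, a sigmoidal self-activation term representing TA's positive feedback on its own synthesis, and first-order decay. Two initial conditions on either side of the unstable threshold settle to different stable steady states; all parameter values below are illustrative assumptions:

```python
def steady_state(x0, k0=0.05, vmax=1.0, K=1.0, d=0.5, dt=0.01, steps=20000):
    """Euler-integrate dx/dt = k0 + vmax*x^2/(K^2 + x^2) - d*x,
    a generic positive-feedback motif, until it settles.
    Parameters are illustrative, not fitted to trisporic acid data."""
    x = x0
    for _ in range(steps):
        x += dt * (k0 + vmax * x * x / (K * K + x * x) - d * x)
    return x

low = steady_state(0.0)   # starts below the activation threshold
high = steady_state(3.0)  # starts above the activation threshold
```

With these parameters the system has a low and a high stable steady state, which is the qualitative behaviour the sub-models in the paper are compared on (plus reversibility of the transition between the two).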

  6. Human Commercial Models' Eye Colour Shows Negative Frequency-Dependent Selection.

    Directory of Open Access Journals (Sweden)

    Isabela Rodrigues Nogueira Forti

    Full Text Available In this study we investigated the eye colour of human commercial models registered in the UK (400 female and 400 male) and Brazil (400 female and 400 male) to test the hypothesis that model eye colour frequency was the result of negative frequency-dependent selection. The eye colours of the models were classified as blue, brown or intermediate. Chi-square analyses of data for countries separated by sex showed that in the United Kingdom brown eyes and intermediate colours were significantly more frequent than expected in comparison to the general United Kingdom population (P<0.001). In Brazil, the most frequent eye colour, brown, was significantly less frequent than expected in comparison to the general Brazilian population. These results support the hypothesis that model eye colour is the result of negative frequency-dependent selection. This could be the result of people using eye colour as a marker of genetic diversity and finding rarer eye colours more attractive because of the potential advantage of the more genetically diverse offspring that could result from such a choice. Eye colour may be important because, in comparison to many other physical traits (e.g., hair colour), it is hard to modify, hide or disguise, and it is highly polymorphic.
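The chi-square analysis described here is a goodness-of-fit test of observed category counts against expected population proportions. A minimal sketch of the Pearson statistic (the counts used in the test below are invented for illustration, not the paper's data):

```python
def chi_square_statistic(observed, expected_props):
    """Pearson chi-square goodness-of-fit statistic: observed category
    counts versus expected counts derived from population proportions."""
    n = sum(observed)
    return sum((o - n * p) ** 2 / (n * p)
               for o, p in zip(observed, expected_props))
```

The resulting statistic is compared against a chi-square distribution with (number of categories − 1) degrees of freedom to obtain the reported P values.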

  7. Histidine decarboxylase knockout mice, a genetic model of Tourette syndrome, show repetitive grooming after induced fear.

    Science.gov (United States)

    Xu, Meiyu; Li, Lina; Ohtsu, Hiroshi; Pittenger, Christopher

    2015-05-19

    Tics, such as are seen in Tourette syndrome (TS), are common and can cause profound morbidity, but they are poorly understood. Tics are potentiated by psychostimulants, stress, and sleep deprivation. Mutations in the gene histidine decarboxylase (Hdc) have been implicated as a rare genetic cause of TS, and Hdc knockout mice have been validated as a genetic model that recapitulates phenomenological and pathophysiological aspects of the disorder. Tic-like stereotypies in this model have not been observed at baseline but emerge after acute challenge with the psychostimulant d-amphetamine. We tested the ability of an acute stressor to stimulate stereotypies in this model, using tone fear conditioning. Hdc knockout mice acquired conditioned fear normally, as manifested by freezing during the presentation of a tone 48 h after it had been paired with a shock. During the 30 min following tone presentation, knockout mice showed increased grooming. Heterozygotes exhibited normal freezing and intermediate grooming. These data validate a new paradigm for the examination of tic-like stereotypies in animals without pharmacological challenge and enhance the face validity of the Hdc knockout mouse as a pathophysiologically grounded model of tic disorders. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  8. EFFICIENT PREDICTIVE MODELLING FOR ARCHAEOLOGICAL RESEARCH

    OpenAIRE

    Balla, A.; Pavlogeorgatos, G.; Tsiafakis, D.; Pavlidis, G.

    2014-01-01

    The study presents a general methodology for designing, developing and implementing predictive modelling for identifying areas of archaeological interest. The methodology is based on documented archaeological data and geographical factors, geospatial analysis and predictive modelling, and has been applied to the identification of possible Macedonian tombs’ locations in Northern Greece. The model was tested extensively and the results were validated using a commonly used predictive gain, which...

  9. Information, complexity and efficiency: The automobile model

    Energy Technology Data Exchange (ETDEWEB)

    Allenby, B. [Lucent Technologies (United States)]|[Lawrence Livermore National Lab., CA (United States)

    1996-08-08

    The new, rapidly evolving field of industrial ecology - the objective, multidisciplinary study of industrial and economic systems and their linkages with fundamental natural systems - provides strong ground for believing that a more environmentally and economically efficient economy will be more information intensive and complex. Information and intellectual capital will be substituted for the more traditional inputs of materials and energy in producing a desirable, yet sustainable, quality of life. While at this point this remains a strong hypothesis, the evolution of the automobile industry can be used to illustrate how such substitution may, in fact, already be occurring in an environmentally and economically critical sector.

  10. Business process model repositories : efficient process retrieval

    NARCIS (Netherlands)

    Yan, Z.

    2012-01-01

    As organizations increasingly work in process-oriented manner, the number of business process models that they develop and have to maintain increases. As a consequence, it has become common for organizations to have collections of hundreds or even thousands of business process models. When a

  11. Efficient querying of large process model repositories

    NARCIS (Netherlands)

    Jin, Tao; Wang, Jianmin; La Rosa, M.; Hofstede, ter A.H.M.; Wen, Lijie

    2013-01-01

    Recent years have seen an increased uptake of business process management technology in industries. This has resulted in organizations trying to manage large collections of business process models. One of the challenges facing these organizations concerns the retrieval of models from large business

  12. MODEL TESTING OF LOW PRESSURE HYDRAULIC TURBINE WITH HIGHER EFFICIENCY

    Directory of Open Access Journals (Sweden)

    V. K. Nedbalsky

    2007-01-01

    Full Text Available A design of low pressure turbine has been developed; it is covered by an invention patent and a useful model patent. Testing of the hydraulic turbine model was carried out with the model installed on a vertical shaft. The efficiency was 76–78 %, which exceeds the efficiency of known low-pressure blade turbines.

  13. Visualizing Three-dimensional Slab Geometries with ShowEarthModel

    Science.gov (United States)

    Chang, B.; Jadamec, M. A.; Fischer, K. M.; Kreylos, O.; Yikilmaz, M. B.

    2017-12-01

    Seismic data that characterize the morphology of modern subducted slabs on Earth suggest that a two-dimensional paradigm is no longer adequate to describe the subduction process. Here we demonstrate the effect of data exploration of three-dimensional (3D) global slab geometries with the open source program ShowEarthModel. ShowEarthModel was designed specifically to support data exploration, by focusing on interactivity and real-time response using the Vrui toolkit. Sixteen movies are presented that explore the 3D complexity of modern subduction zones on Earth. The first movie provides a guided tour through the Earth's major subduction zones, comparing the global slab geometry data sets of Gudmundsson and Sambridge (1998), Syracuse and Abers (2006), and Hayes et al. (2012). Fifteen regional movies explore the individual subduction zones and regions intersecting slabs, using the Hayes et al. (2012) slab geometry models where available and the Engdahl and Villasenor (2002) global earthquake data set. Viewing the subduction zones in this way provides an improved conceptualization of the 3D morphology within a given subduction zone as well as the 3D spatial relations between the intersecting slabs. This approach provides a powerful tool for rendering earth properties and broadening capabilities in both Earth Science research and education by allowing for whole earth visualization. The 3D characterization of global slab geometries is placed in the context of 3D slab-driven mantle flow and observations of shear wave splitting in subduction zones. These visualizations contribute to the paradigm shift from a 2D to 3D subduction framework by facilitating the conceptualization of the modern subduction system on Earth in 3D space.

  14. Estimating carbon and showing impacts of drought using satellite data in regression-tree models

    Science.gov (United States)

    Boyte, Stephen; Wylie, Bruce K.; Howard, Danny; Dahal, Devendra; Gilmanov, Tagir G.

    2018-01-01

    Integrating spatially explicit biogeophysical and remotely sensed data into regression-tree models enables the spatial extrapolation of training data over large geographic spaces, allowing a better understanding of broad-scale ecosystem processes. The current study presents annual gross primary production (GPP) and annual ecosystem respiration (RE) for 2000–2013 in several short-statured vegetation types using carbon flux data from towers that are located strategically across the conterminous United States (CONUS). We calculate carbon fluxes (annual net ecosystem production [NEP]) for each year in our study period, which includes 2012, when drought and higher-than-normal temperatures influenced vegetation productivity in large parts of the study area. We present and analyse carbon flux dynamics in the CONUS to better understand how drought affects GPP, RE, and NEP. Model accuracy metrics show strong correlation coefficients (r ≥ 94%) between training and estimated data for both GPP and RE. Overall, average annual GPP, RE, and NEP are relatively constant throughout the study period except during 2012, when almost 60% less carbon is sequestered than normal. These results allow us to conclude that this modelling method effectively estimates carbon dynamics through time and allows the exploration of impacts of meteorological anomalies and vegetation types on carbon dynamics.
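
    The regression-tree workflow described above can be sketched compactly. This is an illustrative stand-in, not the authors' pipeline: it assumes scikit-learn is available, uses a plain DecisionTreeRegressor in place of their rule-based models, and invents the tower predictors (NDVI, precipitation, temperature) and GPP values.

```python
# Illustrative regression-tree sketch (hypothetical data; scikit-learn's
# DecisionTreeRegressor stands in for the authors' rule-based models).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Hypothetical predictors for 200 flux-tower site-years:
# NDVI (greenness), annual precipitation (mm), mean temperature (degC).
X_train = np.column_stack([
    rng.uniform(0.1, 0.9, 200),   # NDVI
    rng.uniform(100, 900, 200),   # precipitation
    rng.uniform(2, 18, 200),      # temperature
])
# Synthetic annual GPP (g C m-2 yr-1), driven by greenness and rainfall.
y_train = 1500 * X_train[:, 0] + 0.5 * X_train[:, 1] + rng.normal(0, 30, 200)

tree = DecisionTreeRegressor(max_depth=5, min_samples_leaf=5)
tree.fit(X_train, y_train)

# Accuracy metric in the style of the abstract: correlation r between
# tower-observed and model-estimated GPP at the training sites.
r = np.corrcoef(y_train, tree.predict(X_train))[0, 1]

# Spatial extrapolation: apply the fitted tree to every pixel of a
# predictor grid (here 1000 random "pixels").
grid = np.column_stack([
    rng.uniform(0.1, 0.9, 1000),
    rng.uniform(100, 900, 1000),
    rng.uniform(2, 18, 1000),
])
gpp_map = tree.predict(grid)
```

    Applying the same fitted tree pixel-by-pixel to gridded predictors is the spatial extrapolation step the abstract describes.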

  15. Modelling household responses to energy efficiency interventions ...

    African Journals Online (AJOL)

    2010-11-01

    Nov 1, 2010 ... to interventions aimed at reducing energy consumption (specifically the use of .... 4 A system dynamics model of electricity consumption ...... to base comparisons on overly detailed quantitative predictions of behaviour.

  16. Ecological efficiency in China and its influencing factors-a super-efficient SBM metafrontier-Malmquist-Tobit model study.

    Science.gov (United States)

    Ma, Xiaojun; Wang, Changxin; Yu, Yuanbo; Li, Yudong; Dong, Biying; Zhang, Xinyu; Niu, Xueqi; Yang, Qian; Chen, Ruimin; Li, Yifan; Gu, Yihan

    2018-05-15

    Ecological degradation is one of the core issues restraining China's economic development at present, and it urgently needs to be solved properly and effectively. Based on panel data from 30 regions, this paper uses a super-efficiency slack-based measure (SBM) model that incorporates undesirable outputs to calculate ecological efficiency, and then uses the traditional and metafrontier-Malmquist index methods to study regional change trends and technology gap ratios (TGRs). Finally, Tobit regression and principal component analysis are used to analyse the main factors affecting eco-efficiency and their degree of impact. The results show that about 60% of China's provinces have effective eco-efficiency, and the overall ecological efficiency of China is at an upper-middle level, but there is a serious imbalance among provinces and regions. Ecological efficiency has an obvious spatial cluster effect, and TGR values differ among regions. Most regions show a downward trend, and the phenomenon of pursuing economic development at the expense of ecological protection still exists. Greater openness to the outside world, increases in R&D spending, and a higher population urbanization rate have positive effects on eco-efficiency. Blind economic expansion, a heavier industrial structure, and a larger proportion of energy consumption have negative effects on eco-efficiency.
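
    For readers unfamiliar with the efficiency-scoring machinery, the sketch below computes the plain input-oriented CCR DEA score, a simpler relative of the paper's super-efficiency SBM model (which additionally handles slacks and undesirable outputs). The regional data and the SciPy linear-programming formulation are illustrative assumptions, not the paper's data or code.

```python
# Input-oriented CCR DEA sketch (invented data): a region is efficient if
# no convex combination of regions can produce at least its outputs with a
# proportionally scaled-down copy of its inputs.
import numpy as np
from scipy.optimize import linprog

# 5 hypothetical regions: inputs = (energy, labour), output = GDP.
X = np.array([[30.0, 20.0], [25.0, 30.0], [40.0, 25.0],
              [20.0, 40.0], [35.0, 35.0]])
Y = np.array([[100.0], [120.0], [110.0], [90.0], [105.0]])

def ccr_efficiency(o):
    """Solve: min theta  s.t.  X'lam <= theta*x_o,  Y'lam >= y_o,  lam >= 0."""
    n = len(X)
    c = np.r_[1.0, np.zeros(n)]                           # minimise theta
    A_in = np.hstack([-X[o].reshape(-1, 1), X.T])         # inputs <= theta*x_o
    A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T])  # outputs >= y_o
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(X.shape[1]), -Y[o]],
                  bounds=[(0, None)] * (n + 1))
    return float(res.fun)

scores = [ccr_efficiency(o) for o in range(len(X))]
```

    A score of 1 marks a region on the efficient frontier; a score below 1 says by how much all of its inputs could be shrunk proportionally while still producing its outputs.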

  17. A model for efficient management of electrical assets

    International Nuclear Information System (INIS)

    Alonso Guerreiro, A.

    2008-01-01

    While energy demand grows faster than investment in electrical installations, much of the older capacity is reaching the end of its useful life. Running all of that capacity without interruptions and maintaining its assets efficiently are the two key requirements for today's power generation, transmission and distribution systems. This paper presents a management model that enables effective asset management under strict cost control and addresses those key points. It is centred on predictive techniques, involves all departments of the organization, and goes beyond treating maintenance as the simple repair or replacement of broken-down units. The model therefore rests on three basic lines: guarantee of supply, quality of service and competitiveness, allowing companies to meet the demands that characterize today's power supply. (Author) 5 refs

  18. Phenolic Acids from Wheat Show Different Absorption Profiles in Plasma: A Model Experiment with Catheterized Pigs

    DEFF Research Database (Denmark)

    Nørskov, Natalja; Hedemann, Mette Skou; Theil, Peter Kappel

    2013-01-01

    The concentration and absorption of the nine phenolic acids of wheat were measured in a model experiment with catheterized pigs fed whole grain wheat and wheat aleurone diets. Six pigs in a repeated crossover design were fitted with catheters in the portal vein and mesenteric artery to study...... the absorption of phenolic acids. The difference between the artery and the vein for all phenolic acids was small, indicating that the release of phenolic acids in the large intestine was not sufficient to create a porto-arterial concentration difference. Although the porto-arterial difference was small...... consumed. Benzoic acid derivatives showed low concentration in the plasma (phenolic acids, likely because it is an intermediate in the phenolic acid metabolism...

  19. Etoposide Incorporated into Camel Milk Phospholipids Liposomes Shows Increased Activity against Fibrosarcoma in a Mouse Model

    Directory of Open Access Journals (Sweden)

    Hamzah M. Maswadeh

    2015-01-01

    Full Text Available Phospholipids were isolated from camel milk and identified by using high performance liquid chromatography and gas chromatography-mass spectrometry (GC/MS). The anticancer drug etoposide (ETP) was entrapped in liposomes prepared from camel milk phospholipids to determine its activity against fibrosarcoma in a murine model. Fibrosarcoma was induced in mice by injecting benzopyrene (BAP), and tumor-bearing mice were treated with various formulations of etoposide, including etoposide entrapped in camel milk phospholipid liposomes (ETP-Cam-liposomes) and etoposide-loaded DPPC-liposomes (ETP-DPPC-liposomes). The tumor-bearing mice treated with ETP-Cam-liposomes showed slower tumor progression and increased survival compared to mice given free ETP or ETP-DPPC-liposomes. These results suggest that ETP-Cam-liposomes may prove to be a better drug delivery system for anticancer drugs.

  20. Rubber particle proteins, HbREF and HbSRPP, show different interactions with model membranes.

    Science.gov (United States)

    Berthelot, Karine; Lecomte, Sophie; Estevez, Yannick; Zhendre, Vanessa; Henry, Sarah; Thévenot, Julie; Dufourc, Erick J; Alves, Isabel D; Peruch, Frédéric

    2014-01-01

    The biomembrane surrounding rubber particles from Hevea latex is well known for its content of numerous allergenic proteins. HbREF (Hevb1) and HbSRPP (Hevb3) are major components bound on rubber particles, and they have been shown to be involved in rubber synthesis or quality (mass regulation), but their exact function remains to be determined. In this study we highlighted the different modes of interaction of both recombinant proteins with various membrane models (lipid monolayers, liposomes or supported bilayers, and multilamellar vesicles) that mimic the latex particle membrane. We combined various biophysical methods (polarization-modulation infrared reflection-absorption spectroscopy (PM-IRRAS)/ellipsometry, attenuated total reflectance Fourier-transform infrared spectroscopy (ATR-FTIR), solid-state nuclear magnetic resonance (NMR), plasmon waveguide resonance (PWR), and fluorescence spectroscopy) to elucidate their interactions. The small rubber particle protein (SRPP) shows less affinity than the rubber elongation factor (REF) for the membranes but displays a kind of "covering" effect on the lipid headgroups without disturbing membrane integrity. Its structure is conserved in the presence of lipids. In contrast, REF demonstrates higher membrane affinity with changes in its aggregation properties; the amyloid nature of REF, which we previously reported, is not favored in the presence of lipids. REF binds and inserts into membranes. The membrane integrity is highly perturbed, and we suspect that REF is even able to remove lipids from the membrane, leading to the formation of mixed micelles. These two homologous proteins thus show affinity for all membrane models tested but differ markedly in their interaction features, which could imply differential roles on the surface of rubber particles. © 2013.

  1. Modelling and analysis of solar cell efficiency distributions

    Science.gov (United States)

    Wasmer, Sven; Greulich, Johannes

    2017-08-01

    We present an approach to model the distribution of solar cell efficiencies achieved in production lines, based on numerical simulations, metamodeling and Monte Carlo simulations. We validate our methodology using the example of an industrially feasible p-type multicrystalline silicon “passivated emitter and rear cell” process. Applying the metamodel, we investigate the impact of each input parameter on the distribution of cell efficiencies in a variance-based sensitivity analysis, identifying the parameters and processes that need to be improved and controlled most accurately. We show that if these were optimized, the mean cell efficiency of our examined cell process would increase from 17.62% ± 0.41% to 18.48% ± 0.09%. As the method relies on advanced characterization and simulation techniques, we furthermore introduce a simplification that enhances applicability by requiring only two common measurements of finished cells. The presented approaches can be especially helpful during production ramp-up, but can also be applied to enhance established manufacturing.
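
    The metamodel-plus-Monte-Carlo idea can be sketched in a few lines. The surrogate function, its coefficients, and the parameter scatters below are invented for illustration; they are not the paper's PERC metamodel.

```python
# Sketch of metamodeling + Monte Carlo: a cheap surrogate maps process
# parameters to cell efficiency, and sampling the parameters' production
# scatter yields the distribution of cell efficiencies.
import numpy as np

rng = np.random.default_rng(42)

def metamodel(emitter_rs, bulk_tau, finger_width):
    """Hypothetical surrogate for efficiency in percent (invented terms)."""
    eta = 18.5
    eta -= 4e-5 * (emitter_rs - 80.0) ** 2   # emitter sheet resistance (ohm/sq)
    eta += 0.6 * np.log(bulk_tau / 50.0)     # bulk lifetime (us)
    eta -= 0.002 * (finger_width - 40.0)     # finger width (um)
    return eta

# Monte Carlo: draw each input from its assumed production scatter.
n = 100_000
eff = metamodel(
    rng.normal(80.0, 5.0, n),                # emitter sheet resistance
    rng.lognormal(np.log(50.0), 0.2, n),     # bulk lifetime
    rng.normal(40.0, 3.0, n),                # finger width
)
print(f"mean = {eff.mean():.2f}%, std = {eff.std():.2f}%")
```

    Variance-based sensitivity analysis then asks how much of the output variance disappears when one input's scatter is frozen at its mean.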

  2. Maintaining formal models of living guidelines efficiently

    NARCIS (Netherlands)

    Seyfang, Andreas; Martínez-Salvador, Begoña; Serban, Radu; Wittenberg, Jolanda; Miksch, Silvia; Marcos, Mar; Ten Teije, Annette; Rosenbrand, Kitty C J G M

    2007-01-01

    Translating clinical guidelines into formal models is beneficial in many ways, but expensive. The progress in medical knowledge requires clinical guidelines to be updated at relatively short intervals, leading to the term living guideline. This causes potentially expensive, frequent updates of the

  3. Efficient Modelling Methodology for Reconfigurable Underwater Robots

    DEFF Research Database (Denmark)

    Nielsen, Mikkel Cornelius; Blanke, Mogens; Schjølberg, Ingrid

    2016-01-01

    This paper considers the challenge of applying reconfigurable robots in an underwater environment. The main result presented is the development of a model for a system comprised of N, possibly heterogeneous, robots dynamically connected to each other and moving with 6 Degrees of Freedom (DOF). Th...

  4. Ab Initio Modeling Of Friction Reducing Agents Shows Quantum Mechanical Interactions Can Have Macroscopic Manifestation.

    Science.gov (United States)

    Hernández Velázquez, J D; Barroso-Flores, J; Gama Goicochea, A

    2016-11-23

    Two of the most commonly encountered friction-reducing agents used in plastic sheet production are the amides known as erucamide and behenamide, which despite being almost identical chemically, lead to markedly different values of the friction coefficient. To understand the origin of this contrasting behavior, in this work we model brushes made of these two types of linear-chain molecules using quantum mechanical numerical simulations under the density functional theory at the B97D/6-31G(d,p) level of theory. Four chains of erucamide and behenamide were linked to a 2 × 10 zigzag graphene sheet and optimized both in vacuum and in continuous solvent using the SMD implicit solvation model. We find that erucamide chains tend to remain closer together through π-π stacking interactions arising from the double bonds located at C13-C14, a feature behenamide lacks, and thus a more spread configuration is obtained with the latter. It is argued that this arrangement of the erucamide chains is responsible for the lower friction coefficient of erucamide brushes, compared with behenamide brushes, which is a macroscopic consequence of cooperative quantum mechanical interactions. While only quantum level interactions are modeled here, we show that behenamide chains are more spread out in the brush than erucamide chains as a consequence of those interactions. The spread-out configuration allows more solvent particles to penetrate the brush, leading in turn to more friction, in agreement with macroscopic measurements and mesoscale simulations of the friction coefficient reported in the literature.

  5. Efficient Turbulence Modeling for CFD Wake Simulations

    DEFF Research Database (Denmark)

    van der Laan, Paul

    Wind turbine wakes can cause 10-20% annual energy losses in wind farms, and wake turbulence can decrease the lifetime of wind turbine blades. One way of estimating these effects is the use of computational fluid dynamics (CFD) to simulate wind turbines wakes in the atmospheric boundary layer. Since...... this flow is in the high Reynolds number regime, it is mainly dictated by turbulence. As a result, the turbulence modeling in CFD dominates the wake characteristics, especially in Reynolds-averaged Navier-Stokes (RANS). The present work is dedicated to study and develop RANS-based turbulence models...... verified with a grid dependency study. With respect to the standard k-ε EVM, the k-ε- fp EVM compares better with measurements of the velocity deficit, especially in the near wake, which translates to improved power deficits of the first wind turbines in a row. When the CFD methodology is applied to a large...

  6. An Efficient Virtual Trachea Deformation Model

    Directory of Open Access Journals (Sweden)

    Cui Tong

    2016-01-01

    Full Text Available In this paper, we present a virtual tactile model with a physically based skeleton to simulate force and deformation between a rigid tool and a soft organ. When the virtual trachea is handled, a skeleton model suitable for interactive environments is established, consisting of ligament layers, cartilage rings and muscular bars. In this skeleton, the contact force passes through the ligament layer and produces load effects at the joints, which connect the ligament layer and the cartilage rings. Because the shape deformation is nonlinear inside the local neighbourhood of a contact region, the RBF method is applied to modify the result of linear global shape deformation by adding the local nonlinear effect. Users can handle the virtual trachea, and results from examples with the mechanical properties of the human trachea demonstrate the effectiveness of the approach.
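
    The "linear global deformation plus local RBF correction" idea can be illustrated on a toy 2D mesh. This is a sketch under invented parameters (grid, stiffness factors, Gaussian width), not the authors' trachea model.

```python
# Toy deformation sketch: a global linear displacement field plus a
# Gaussian RBF term that adds a nonlinear bump only near the contact point.
import numpy as np

# A flat 20x20 grid of mesh points in the unit square.
points = np.stack(np.meshgrid(np.linspace(0, 1, 20),
                              np.linspace(0, 1, 20)), axis=-1).reshape(-1, 2)

contact = np.array([0.5, 0.5])     # where the rigid tool touches
force_dir = np.array([0.0, -1.0])  # push straight down

# Global linear response: displacement falls off linearly with distance.
dist = np.linalg.norm(points - contact, axis=1)
linear = np.outer(np.clip(1.0 - dist, 0.0, None) * 0.05, force_dir)

# Local nonlinear correction: Gaussian RBF centred on the contact region.
sigma = 0.1
rbf = np.exp(-(dist / sigma) ** 2)
nonlinear = np.outer(rbf * 0.08, force_dir)

deformed = points + linear + nonlinear
```

    Far from the contact the RBF term vanishes, so the cheap linear model alone governs the global shape; the expensive-looking nonlinearity stays confined to the contact neighbourhood.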

  7. Efficient Matrix Models for Relational Learning

    Science.gov (United States)

    2009-10-01

    base learners and h1:r is the ensemble learner. For example, consider the case where h1, ..., hr are linear discriminants. The weighted vote of...a multilinear form naturally leads one to consider tensor factorization: e.g., UAVᵀ is a special case of Tucker decomposition [129] on a 2D-tensor, a...matrix. Our five modeling choices can also be used to differentiate tensor factorizations, but the choices may be subtler for tensors than for

  8. Exploiting partial knowledge for efficient model analysis

    OpenAIRE

    Macedo, Nuno; Cunha, Alcino; Pessoa, Eduardo José Dias

    2017-01-01

    The advancement of constraint solvers and model checkers has enabled the effective analysis of high-level formal specification languages. However, these typically handle a specification in an opaque manner, amalgamating all its constraints in a single monolithic verification task, which often proves to be a performance bottleneck. This paper addresses this issue by proposing a solving strategy that exploits user-provided partial knowledge, namely by assigning symbolic bounds to the problem’s ...

  9. An Empirical Study of Efficiency and Accuracy of Probabilistic Graphical Models

    DEFF Research Database (Denmark)

    Nielsen, Jens Dalgaard; Jaeger, Manfred

    2006-01-01

    In this paper we compare Naïve Bayes (NB) models, general Bayes Net (BN) models and Probabilistic Decision Graph (PDG) models w.r.t. accuracy and efficiency. As the basis for our analysis we use graphs of size vs. likelihood that show the theoretical capabilities of the models. We also measure...

  10. Model calibration for building energy efficiency simulation

    International Nuclear Information System (INIS)

    Mustafaraj, Giorgio; Marini, Dashamir; Costa, Andrea; Keane, Marcus

    2014-01-01

    Highlights: • Developing a 3D model relating building architecture, occupancy and HVAC operation. • Two calibration stages developed; the final model provides accurate results. • Using an onsite weather station to generate the weather data file in EnergyPlus. • Predicting thermal behaviour of underfloor heating, heat pump and natural ventilation. • Monthly energy saving opportunities of 20–27% related to the heat pump were identified. - Abstract: This research work deals with an Environmental Research Institute (ERI) building where an underfloor heating system and natural ventilation are the main systems used to maintain comfort conditions throughout 80% of the building area. Firstly, this work involved developing a 3D model relating to building architecture, occupancy and HVAC operation. Secondly, a two-level calibration methodology was applied in order to ensure accuracy and reduce the likelihood of errors. To further improve the accuracy of calibration, a historical weather data file for the year 2011 was created from the on-site local weather station of the ERI building. After applying the second level of the calibration process, the values of Mean Bias Error (MBE) and Coefficient of Variation of the Root Mean Squared Error (CV(RMSE)) on an hourly basis for heat pump electricity consumption varied within the following ranges: hourly MBE from −5.6% to 7.5% and hourly CV(RMSE) from 7.3% to 25.1%. Finally, the building was simulated with EnergyPlus to identify further energy savings in the water-to-water heat pump supplying the underfloor heating system. It was found that electricity consumption savings from the heat pump can vary between 20% and 27% on a monthly basis.
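
    The two calibration statistics quoted above are commonly defined as in ASHRAE Guideline 14; the sketch below uses those standard forms (the paper may normalize slightly differently), with invented hourly data.

```python
# Calibration metrics for measured vs. simulated energy use:
#   MBE      = sum(measured - simulated) / sum(measured)   (bias)
#   CV(RMSE) = RMSE / mean(measured)                       (scatter)
import numpy as np

def mbe(measured, simulated):
    measured, simulated = np.asarray(measured), np.asarray(simulated)
    return (measured - simulated).sum() / measured.sum() * 100.0

def cv_rmse(measured, simulated):
    measured, simulated = np.asarray(measured), np.asarray(simulated)
    rmse = np.sqrt(np.mean((measured - simulated) ** 2))
    return rmse / measured.mean() * 100.0

# Hypothetical hourly heat-pump electricity data (kWh):
measured  = [12.0, 15.5, 14.2, 13.1, 16.0, 15.2]
simulated = [11.4, 16.1, 13.8, 13.9, 15.1, 15.9]
print(f"MBE = {mbe(measured, simulated):+.1f}%")
print(f"CV(RMSE) = {cv_rmse(measured, simulated):.1f}%")
```

    MBE catches systematic over- or under-prediction (positive and negative hourly errors cancel), while CV(RMSE) penalizes hour-by-hour scatter even when the bias is zero, which is why both are reported.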

  11. Frontier models for evaluating environmental efficiency: an overview

    NARCIS (Netherlands)

    Oude Lansink, A.G.J.M.; Wall, A.

    2014-01-01

    Our aim in this paper is to provide a succinct overview of frontier-based models used to evaluate environmental efficiency, with a special emphasis on agricultural activity. We begin by providing a brief, up-to-date review of the main approaches used to measure environmental efficiency, with

  12. Vortexlet models of flapping flexible wings show tuning for force production and control

    International Nuclear Information System (INIS)

    Mountcastle, A M; Daniel, T L

    2010-01-01

    Insect wings are compliant structures that experience deformations during flight. Such deformations have recently been shown to substantially affect induced flows, with appreciable consequences to flight forces. However, there are open questions related to the aerodynamic mechanisms underlying the performance benefits of wing deformation, as well as the extent to which such deformations are determined by the boundary conditions governing wing actuation together with mechanical properties of the wing itself. Here we explore aerodynamic performance parameters of compliant wings under periodic oscillations, subject to changes in phase between wing elevation and pitch, and magnitude and spatial pattern of wing flexural stiffness. We use a combination of computational structural mechanics models and a 2D computational fluid dynamics approach to ask how aerodynamic force production and control potential are affected by pitch/elevation phase and variations in wing flexural stiffness. Our results show that lift and thrust forces are highly sensitive to flexural stiffness distributions, with performance optima that lie in different phase regions. These results suggest a control strategy for both flying animals and engineering applications of micro-air vehicles.

  13. Nanotoxicity modelling and removal efficiencies of ZnONP.

    Science.gov (United States)

    Fikirdeşici Ergen, Şeyda; Üçüncü Tunca, Esra

    2018-01-02

    This paper investigates the toxic effect of zinc oxide nanoparticles (ZnONPs) and analyzes the removal of ZnONP in aqueous medium by a consortium of Daphnia magna and Lemna minor. Three separate test groups are formed: L. minor ([Formula: see text]), D. magna ([Formula: see text]), and L. minor + D. magna ([Formula: see text]), and all these test groups are exposed to three different nanoparticle concentrations ([Formula: see text]). Time-dependent, concentration-dependent, and group-dependent removal efficiencies are statistically compared by the non-parametric Mann-Whitney U test, and statistically significant differences are observed. The optimum removal values are observed at the highest concentration [Formula: see text] for [Formula: see text], [Formula: see text] for [Formula: see text] and [Formula: see text] for [Formula: see text], and are realized at [Formula: see text] for all test groups [Formula: see text]. There are no statistically significant differences in removal at low concentrations [Formula: see text] in terms of groups, but [Formula: see text] test groups are more efficient than [Formula: see text] test groups in removal of ZnONP at the [Formula: see text] concentration. Regression analysis is also performed for all prediction models. Different models are tested, and cubic models show the highest coefficients of determination (R²). In the toxicity models, R² values lie in the (0.892, 0.997) interval. A simple solution-phase method is used to synthesize the ZnO nanoparticles, and dynamic light scattering (DLS) and X-ray diffraction (XRD) are used to determine the particle size of the synthesized ZnO nanoparticles.
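
    The group comparison can be sketched with SciPy's Mann-Whitney U test. The removal percentages below are invented, not the study's measurements; they merely show the shape of the comparison.

```python
# Non-parametric two-sample comparison of removal efficiencies:
# does the mixed consortium remove more ZnONP than a single species?
from scipy.stats import mannwhitneyu

# Hypothetical ZnONP removal efficiencies (%) at the high concentration:
lemna_daphnia = [78, 81, 85, 79, 83, 80]   # L. minor + D. magna consortium
lemna_only    = [61, 66, 63, 70, 64, 62]   # L. minor alone

stat, p = mannwhitneyu(lemna_daphnia, lemna_only, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")
# p < 0.05 -> the difference in removal is statistically significant.
```

    Because the test uses ranks rather than raw values, it needs no normality assumption, which suits small ecotoxicology samples like these.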

  14. Herding, minority game, market clearing and efficient markets in a simple spin model framework

    Science.gov (United States)

    Kristoufek, Ladislav; Vosvrda, Miloslav

    2018-01-01

    We present a novel approach to the financial Ising model. Most studies use the model to find settings that generate returns closely mimicking financial stylized facts such as fat tails, volatility clustering and persistence. We tackle the model's utility from the other side and look for the combination of parameters that yields the return dynamics of an efficient market in the sense of the efficient market hypothesis. Working with the Ising model, we are able to present easily interpretable results, as the model is based on only two parameters. Apart from the results of our simulation study, we offer a new interpretation of the Ising model parameters via inverse temperature and entropy. We show that market frictions (up to a certain level) and herding behavior of market participants do not in fact work against market efficiency; on the contrary, they are needed for the markets to be efficient.
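
    A minimal toy of a financial Ising-type market (not the authors' exact specification) shows how the two parameters enter: the coupling J captures herding, the inverse temperature beta captures noise, and the change in magnetization serves as the market return.

```python
# Toy financial Ising model: agents are spins on a ring (+1 buy, -1 sell),
# updated by Metropolis dynamics; the "return" is the change in the
# average spin (magnetization) per sweep.
import math
import random

random.seed(1)
N = 200
spins = [random.choice((-1, 1)) for _ in range(N)]

def sweep(spins, J, beta):
    """One Metropolis sweep with nearest-neighbour herding coupling J."""
    for _ in range(len(spins)):
        i = random.randrange(len(spins))
        nb = spins[i - 1] + spins[(i + 1) % len(spins)]
        dE = 2 * J * spins[i] * nb
        if dE <= 0 or random.random() < math.exp(-beta * dE):
            spins[i] = -spins[i]

returns = []
m_prev = sum(spins) / N
for _ in range(500):
    sweep(spins, J=1.0, beta=0.5)
    m = sum(spins) / N
    returns.append(m - m_prev)   # market return proxy
    m_prev = m
```

    Scanning (J, beta) and testing the resulting return series for autocorrelation is then the kind of experiment the abstract describes: some herding and noise leave the returns unpredictable, i.e. efficient.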

  15. Urban eco-efficiency and system dynamics modelling

    Energy Technology Data Exchange (ETDEWEB)

    Hradil, P., Email: petr.hradil@vtt.fi

    2012-06-15

    Assessment of urban development is generally based on static models of economic, social or environmental impacts. More advanced dynamic models have been used mostly for prediction of population and employment changes, as well as for other macro-economic issues. This feasibility study was arranged to test the potential of system dynamics modelling for assessing eco-efficiency changes during urban development. (orig.)

  16. Ozonolysis of Model Olefins-Efficiency of Antiozonants

    NARCIS (Netherlands)

    Huntink, N.M.; Datta, Rabin; Talma, Auke; Noordermeer, Jacobus W.M.

    2006-01-01

    In this study, the efficiency of several potential long lasting antiozonants was studied by ozonolysis of model olefins. 2-Methyl-2-pentene was selected as a model for natural rubber (NR) and 5-phenyl-2-hexene as a model for styrene butadiene rubber (SBR). A comparison was made between the

  17. The effectiveness and efficiency of model driven game design

    NARCIS (Netherlands)

    Dormans, Joris

    2012-01-01

    In order for techniques from Model Driven Engineering to be accepted at large by the game industry, it is critical that the effectiveness and efficiency of these techniques are proven for game development. There is no lack of game design models, but there is no model that has surfaced as an industry

  18. Efficiency Of Different Teaching Models In Teaching Of Frisbee Ultimate

    Directory of Open Access Journals (Sweden)

    Žuffová Zuzana

    2015-05-01

    Full Text Available The aim of the study was to verify the efficiency of two frisbee ultimate teaching models at 8-year grammar schools relative to age. The experimental groups were taught with a game-based model (Teaching Games for Understanding, TGfU) and the control groups with the traditional model based on teaching techniques. Six groups of female students took part in the experiment: experimental group 1 (n=10, age=11.6), experimental group 2 (n=12, age=13.8), experimental group 3 (n=14, age=15.8), control group 1 (n=11, age=11.7), control group 2 (n=10, age=13.8) and control group 3 (n=9, age=15.8). Efficiency of the teaching models was evaluated on the basis of game performance and special knowledge results. Game performance was assessed from video recordings using the Game Performance Assessment Instrument (GPAI). To verify the level of knowledge, we used a knowledge test consisting of questions on the rules and tactics of frisbee ultimate. The Mann-Whitney U-test was used for statistical evaluation. Game performance assessment and knowledge level indicated higher efficiency of TGfU in general, though the differences were mostly statistically insignificant. Experimental groups 1 and 2 were significantly better in the indicator evaluating the tactical aspect of game performance, decision making (p<0.05). Experimental group 3 was better in the indicator evaluating skill execution, disc catching. The results showed that students taught with the game-based model reached partially better game performance in general. Experimental groups achieved from 79.17% to 80% of correct answers on the rules and from 75% to 87.5% on tactical knowledge in the knowledge test. Control groups achieved from 57.69% to 72.22% of correct answers on the rules and from 51.92% to 72.22% on tactical knowledge in the knowledge test.

  19. Efficient Work Team Scheduling: Using Psychological Models of Knowledge Retention to Improve Code Writing Efficiency

    Directory of Open Access Journals (Sweden)

    Michael J. Pelosi

    2014-12-01

    Full Text Available Development teams and programmers must retain critical information about their work during work intervals and gaps in order to improve future performance when work resumes. Despite time lapses, project managers want to maximize coding efficiency and effectiveness. By developing a mathematically justified, practically useful, and computationally tractable quantitative and cognitive model of learning and memory retention, this study establishes calculations designed to maximize scheduling payoff and optimize developer efficiency and effectiveness.
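
    A sketch of the kind of retention calculation such scheduling builds on. The exponential forgetting curve, its 7-day stability constant, and the 16-hour full ramp-up figure are illustrative assumptions, not the paper's fitted model.

```python
# Ebbinghaus-style forgetting applied to work scheduling: retained project
# context decays with the gap length, and the re-immersion cost when work
# resumes is proportional to the forgotten fraction.
import math

def retention(gap_days, stability=7.0):
    """Fraction of project context retained after a gap (assumed curve)."""
    return math.exp(-gap_days / stability)

def reimmersion_hours(gap_days, full_ramp_up=16.0):
    """Hours needed to rebuild the forgotten fraction of context."""
    return full_ramp_up * (1.0 - retention(gap_days))

for gap in (1, 7, 30):
    print(f"{gap:2d}-day gap: retain {retention(gap):.0%}, "
          f"re-immersion ~{reimmersion_hours(gap):.1f} h")
```

    A scheduler can then trade off the re-immersion cost of long gaps against the overhead of fragmenting work into many short intervals.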

  20. An efficient and simplified model for forecasting using SRM

    International Nuclear Information System (INIS)

    Asif, H.M.; Hyat, M.F.; Ahmad, T.

    2014-01-01

    Learning from continuous financial systems plays a vital role in enterprise operations. One of the most sophisticated non-parametric supervised learning classifiers, the SVM (Support Vector Machine), provides robust and accurate results, but it may require intense computation and other resources. The SRM (Structural Risk Minimization) principle, the heart of SLT (Statistical Learning Theory), can also be used for model selection. In this paper, we focus on comparing the performance of model estimation using SRM with SVR (Support Vector Regression) for forecasting the retail sales of consumer products. The potential benefits of an accurate sales forecasting technique in businesses are immense. Retail sales forecasting is an integral part of strategic business planning in areas such as sales planning, marketing research, pricing, production planning and scheduling. Performance comparison shows that model selection using SRM yields results comparable to SVR but in a computationally efficient manner. This research targeted real-life data to conclude the results after investigating computer-generated datasets for different types of model building. (author)
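
    The SVR side of such a forecast can be sketched as follows, assuming scikit-learn and synthetic monthly sales with trend and seasonality; the paper's SRM-based model selection is not reproduced here.

```python
# Illustrative SVR retail-sales forecast (synthetic data): lagged monthly
# sales are the features, the next month's sales the target.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(7)
months = np.arange(60)
sales = (100 + 0.8 * months                      # trend
         + 10 * np.sin(2 * np.pi * months / 12)  # yearly seasonality
         + rng.normal(0, 2, 60))                 # noise

# Build a lag matrix: predict month t from months t-3, t-2, t-1.
lags = 3
X = np.column_stack([sales[i:60 - lags + i] for i in range(lags)])
y = sales[lags:]

# Train on all but the last 12 months; hold the last year out.
model = SVR(kernel="rbf", C=100.0, epsilon=0.5).fit(X[:-12], y[:-12])
pred = model.predict(X[-12:])
mape = np.mean(np.abs(pred - y[-12:]) / y[-12:]) * 100
print(f"hold-out MAPE = {mape:.1f}%")
```

    An SRM-style alternative would pick the model by bounding generalization error through capacity control rather than by fitting the full SVR optimization, which is where the computational saving reported above comes from.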

  1. An Efficient and Simplified Model for Forecasting using SRM

    Directory of Open Access Journals (Sweden)

    Hafiz Muhammad Shahzad Asif

    2014-01-01

    Full Text Available Learning from continuous financial systems plays a vital role in enterprise operations. One of the most sophisticated non-parametric supervised learning classifiers, the SVM (Support Vector Machines), provides robust and accurate results, but it may require intense computation and other resources. The SRM (Structural Risk Minimization) principle, the heart of SLT (Statistical Learning Theory), can also be used for model selection. In this paper, we focus on comparing the performance of model estimation using SRM with SVR (Support Vector Regression) for forecasting the retail sales of consumer products. The potential benefits of an accurate sales forecasting technique in businesses are immense. Retail sales forecasting is an integral part of strategic business planning in areas such as sales planning, marketing research, pricing, production planning and scheduling. Performance comparison shows that model selection using SRM yields results comparable to SVR but in a computationally efficient manner. This research targeted real-life data to conclude the results after investigating computer-generated datasets for different types of model building.

  2. Showing a model's eye movements in examples does not improve learning of problem-solving tasks

    NARCIS (Netherlands)

    van Marlen, Tim; van Wermeskerken, Margot; Jarodzka, Halszka; van Gog, Tamara

    2016-01-01

    Eye movement modeling examples (EMME) are demonstrations of a computer-based task by a human model (e.g., a teacher), with the model's eye movements superimposed on the task to guide learners' attention. EMME have been shown to enhance learning of perceptual classification tasks; however, it is an

  3. Evaluating Energy Efficiency Policies with Energy-Economy Models

    Energy Technology Data Exchange (ETDEWEB)

    Mundaca, Luis; Neij, Lena; Worrell, Ernst; McNeil, Michael A.

    2010-08-01

    The growing complexities of energy systems, environmental problems and technology markets are driving and testing most energy-economy models to their limits. To further advance bottom-up models from a multidisciplinary energy efficiency policy evaluation perspective, we review and critically analyse bottom-up energy-economy models and corresponding evaluation studies on energy efficiency policies to induce technological change. We use the household sector as a case study. Our analysis focuses on decision frameworks for technology choice, type of evaluation being carried out, treatment of market and behavioural failures, evaluated policy instruments, and key determinants used to mimic policy instruments. Although the review confirms criticism related to energy-economy models (e.g. unrealistic representation of decision-making by consumers when choosing technologies), they provide valuable guidance for policy evaluation related to energy efficiency. Different areas to further advance models remain open, particularly related to modelling issues, techno-economic and environmental aspects, behavioural determinants, and policy considerations.

  4. Modeling of Methods to Control Heat-Consumption Efficiency

    Science.gov (United States)

    Tsynaeva, E. A.; Tsynaeva, A. A.

    2016-11-01

    In this work, consideration has been given to thermophysical processes in automated heat consumption control systems (AHCCSs) of buildings, flow diagrams of these systems, and mathematical models describing the thermophysical processes during the systems' operation; an analysis of adequacy of the mathematical models has been presented. A comparison has been made of the operating efficiency of the systems and the methods to control the efficiency. It has been determined that the operating efficiency of an AHCCS depends on its diagram and the temperature chart of central quality control (CQC) and also on the temperature of a low-grade heat source for the system with a heat pump.

  5. Management Index Systems and Energy Efficiency Diagnosis Model for Power Plant: Cases in China

    Directory of Open Access Journals (Sweden)

    Jing-Min Wang

    2016-01-01

    Full Text Available In recent years, the energy efficiency of thermal power plants has largely determined that of the industry. A thorough understanding of the influencing factors, as well as the establishment of a scientific and comprehensive diagnosis model, plays a key role in the operational efficiency and competitiveness of a thermal power plant. Drawing on domestic and international research on energy efficiency management, and based on the cloud model and the data envelopment analysis (DEA) model, a qualitative and quantitative index system and a comprehensive diagnostic model (CDM) are constructed. To test the rationality and usability of the CDM, case studies of large-scale Chinese thermal power plants were conducted. In these cases, the CDM captures qualitative factors such as technology and management. The results show that, compared with the conventional model, which considers only production running parameters, the CDM fits reality better. It can provide entities with an efficient instrument for energy efficiency diagnosis.

  6. OEDGE modeling of plasma contamination efficiency of Ar puffing from different divertor locations in EAST

    Science.gov (United States)

    Pengfei, ZHANG; Ling, ZHANG; Zhenwei, WU; Zong, XU; Wei, GAO; Liang, WANG; Qingquan, YANG; Jichan, XU; Jianbin, LIU; Hao, QU; Yong, LIU; Juan, HUANG; Chengrui, WU; Yumei, HOU; Zhao, JIN; J, D. ELDER; Houyang, GUO

    2018-04-01

    Modeling with OEDGE was carried out to assess the initial and long-term plasma contamination efficiency of Ar puffing from different divertor locations, i.e. the inner divertor, the outer divertor and the dome, in the EAST superconducting tokamak for typical ohmic plasma conditions. It was found that the initial Ar contamination efficiency is dependent on the local plasma conditions at the different gas puff locations. However, it quickly approaches a similar steady state value for Ar recycling efficiency >0.9. OEDGE modeling shows that the final equilibrium Ar contamination efficiency is significantly lower for the more closed lower divertor than that for the upper divertor.

  7. Modeling adaptation of carbon use efficiency in microbial communities

    Directory of Open Access Journals (Sweden)

    Steven D Allison

    2014-10-01

    Full Text Available In new microbial-biogeochemical models, microbial carbon use efficiency (CUE) is often assumed to decline with increasing temperature. Under this assumption, soil carbon losses under warming are small because microbial biomass declines. Yet there is also empirical evidence that CUE may adapt (i.e., become less sensitive) to warming, thereby mitigating negative effects on microbial biomass. To analyze potential mechanisms of CUE adaptation, I used two theoretical models to implement a tradeoff between microbial uptake rate and CUE. This rate-yield tradeoff is based on thermodynamic principles and suggests that microbes with greater investment in resource acquisition should have lower CUE. Microbial communities or individuals could adapt to warming by reducing investment in enzymes and uptake machinery. Consistent with this idea, a simple analytical model predicted that adaptation can offset 50% of the warming-induced decline in CUE. To assess the ecosystem implications of the rate-yield tradeoff, I quantified CUE adaptation in a spatially-structured simulation model with 100 microbial taxa and 12 soil carbon substrates. This model predicted much lower CUE adaptation, likely due to additional physiological and ecological constraints on microbes. In particular, specific resource acquisition traits are needed to maintain stoichiometric balance, and taxa with high CUE and low enzyme investment rely on low-yield, high-enzyme neighbors to catalyze substrate degradation. In contrast to published microbial models, simulations with greater CUE adaptation also showed greater carbon storage under warming. This pattern occurred because microbial communities with stronger CUE adaptation produced fewer degradative enzymes, despite increases in biomass. Thus the rate-yield tradeoff prevents CUE adaptation from driving ecosystem carbon loss under climate warming.

  8. Classifying multi-model wheat yield impact response surfaces showing sensitivity to temperature and precipitation change

    NARCIS (Netherlands)

    Fronzek, Stefan; Pirttioja, Nina; Carter, Timothy R.; Bindi, Marco; Hoffmann, Holger; Palosuo, Taru; Ruiz-Ramos, Margarita; Tao, Fulu; Trnka, Miroslav; Acutis, Marco; Asseng, Senthold; Baranowski, Piotr; Basso, Bruno; Bodin, Per; Buis, Samuel; Cammarano, Davide; Deligios, Paola; Destain, Marie France; Dumont, Benjamin; Ewert, Frank; Ferrise, Roberto; François, Louis; Gaiser, Thomas; Hlavinka, Petr; Jacquemin, Ingrid; Kersebaum, Kurt Christian; Kollas, Chris; Krzyszczak, Jaromir; Lorite, Ignacio J.; Minet, Julien; Minguez, M.I.; Montesino, Manuel; Moriondo, Marco; Müller, Christoph; Nendel, Claas; Öztürk, Isik; Perego, Alessia; Rodríguez, Alfredo; Ruane, Alex C.; Ruget, Françoise; Sanna, Mattia; Semenov, Mikhail A.; Slawinski, Cezary; Stratonovitch, Pierre; Supit, Iwan; Waha, Katharina; Wang, Enli; Wu, Lianhai; Zhao, Zhigan; Rötter, Reimund P.

    2018-01-01

    Crop growth simulation models can differ greatly in their treatment of key processes and hence in their response to environmental conditions. Here, we used an ensemble of 26 process-based wheat models applied at sites across a European transect to compare their sensitivity to changes in

  9. Classifying multi-model wheat yield impact response surfaces showing sensitivity to temperature and precipitation change

    Czech Academy of Sciences Publication Activity Database

    Fronzek, S.; Pirttioja, N. K.; Carter, T. R.; Bindi, M.; Hoffmann, H.; Palosuo, T.; Ruiz-Ramos, M.; Tao, F.; Trnka, Miroslav; Acutis, M.; Asseng, S.; Baranowski, P.; Basso, B.; Bodin, P.; Buis, S.; Cammarano, D.; Deligios, P.; Destain, M. F.; Dumont, B.; Ewert, F.; Ferrise, R.; Francois, L.; Gaiser, T.; Hlavinka, Petr; Jacquemin, I.; Kersebaum, K. C.; Kollas, C.; Krzyszczak, J.; Lorite, I. J.; Minet, J.; Ines Minguez, M.; Montesino, M.; Moriondo, M.; Mueller, C.; Nendel, C.; Öztürk, I.; Perego, A.; Rodriguez, A.; Ruane, A. C.; Ruget, F.; Sanna, M.; Semenov, M. A.; Slawinski, C.; Stratonovitch, P.; Supit, I.; Waha, K.; Wang, E.; Wu, L.; Zhao, Z.; Rötter, R.

    2018-01-01

    Roč. 159, jan (2018), s. 209-224 ISSN 0308-521X Institutional support: RVO:86652079 Keywords : climate - change * crop models * probabilistic assessment * simulating impacts * british catchments * uncertainty * europe * productivity * calibration * adaptation * Classification * Climate change * Crop model * Ensemble * Sensitivity analysis * Wheat Subject RIV: GC - Agronomy OBOR OECD: Agronomy, plant breeding and plant protection Impact factor: 2.571, year: 2016

  10. Model-based and model-free “plug-and-play” building energy efficient control

    International Nuclear Information System (INIS)

    Baldi, Simone; Michailidis, Iakovos; Ravanis, Christos; Kosmatopoulos, Elias B.

    2015-01-01

    Highlights: • “Plug-and-play” Building Optimization and Control (BOC) driven by building data. • Ability to handle the large-scale and complex nature of the BOC problem. • Adaptation to learn the optimal BOC policy when no building model is available. • Comparisons with rule-based and advanced BOC strategies. • Simulation and real-life experiments in a ten-office building. - Abstract: Considerable research efforts in Building Optimization and Control (BOC) have been directed toward the development of “plug-and-play” BOC systems that can achieve energy efficiency without compromising thermal comfort and without the need of qualified personnel engaged in a tedious and time-consuming manual fine-tuning phase. In this paper, we report on how a recently introduced Parametrized Cognitive Adaptive Optimization – abbreviated as PCAO – can be used toward the design of both model-based and model-free “plug-and-play” BOC systems, with minimum human effort required to accomplish the design. In the model-based case, PCAO assesses the performance of its control strategy via a simulation model of the building dynamics; in the model-free case, PCAO optimizes its control strategy without relying on any model of the building dynamics. Extensive simulation and real-life experiments performed on a 10-office building demonstrate the effectiveness of the PCAO–BOC system in providing significant energy efficiency and improved thermal comfort. The mechanisms embedded within PCAO render it capable of automatically and quickly learning an efficient BOC strategy either in the presence of complex nonlinear simulation models of the building dynamics (model-based) or when no model for the building dynamics is available (model-free). Comparative studies with alternative state-of-the-art BOC systems show the effectiveness of the PCAO–BOC solution

  11. Models of alien species richness show moderate predictive accuracy and poor transferability

    Directory of Open Access Journals (Sweden)

    César Capinha

    2018-06-01

    Full Text Available Robust predictions of alien species richness are useful for assessing global biodiversity change. Nevertheless, the capacity to predict spatial patterns of alien species richness remains largely unassessed. Using 22 data sets of alien species richness from diverse taxonomic groups and covering various parts of the world, we evaluated whether different statistical models were able to provide useful predictions of absolute and relative alien species richness, as a function of explanatory variables representing geographical, environmental and socio-economic factors. Five state-of-the-art count data modelling techniques were used and compared: Poisson and negative binomial generalised linear models (GLMs), multivariate adaptive regression splines (MARS), random forests (RF) and boosted regression trees (BRT). We found that predictions of absolute alien species richness had low to moderate accuracy in the region where the models were developed and consistently poor accuracy in new regions. Predictions of relative richness performed better in both geographical settings, but were still not good. Flexible tree-ensemble techniques (RF and BRT) proved significantly better at modelling alien species richness than parametric linear models (such as GLMs), despite the latter being more commonly applied for this purpose. Importantly, the poor spatial transferability of the models also warrants caution in assuming the generality of the relationships they identify, e.g. by applying projections under future scenario conditions. Ultimately, our results strongly suggest that the predictability of spatial variation in alien species richness is limited. The somewhat more robust ability to rank regions according to the number of aliens they host (i.e. relative richness) suggests that models of alien species richness may be useful for prioritising and comparing regions, but not for predicting exact species numbers.
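Of the five techniques compared in this record, the Poisson GLM is the most compact to sketch. Below is a minimal pure-NumPy fit by iteratively reweighted least squares on simulated richness counts; the predictors and coefficients are invented for illustration, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic predictors for 200 regions, e.g. standardized human footprint and temperature.
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
true_beta = np.array([1.0, 0.6, -0.3])
richness = rng.poisson(np.exp(X @ true_beta))   # simulated alien species counts

def poisson_irls(X, y, n_iter=25):
    """Fit a Poisson GLM (log link) by iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)                 # fitted means
        z = eta + (y - mu) / mu          # working response
        W = mu                           # IRLS weights for the Poisson/log-link case
        XtW = X.T * W
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    return beta

beta_hat = poisson_irls(X, richness)
print("estimated coefficients:", beta_hat)
```

The tree-ensemble methods the study found superior (RF, BRT) replace this fixed linear predictor with flexible, data-driven interactions.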

  12. Metabolic modeling of energy balances in Mycoplasma hyopneumoniae shows that pyruvate addition increases growth rate.

    Science.gov (United States)

    Kamminga, Tjerko; Slagman, Simen-Jan; Bijlsma, Jetta J E; Martins Dos Santos, Vitor A P; Suarez-Diez, Maria; Schaap, Peter J

    2017-10-01

    Mycoplasma hyopneumoniae is cultured at large scale to produce antigen for inactivated whole-cell vaccines against respiratory disease in pigs. However, the fastidious nutrient requirements of this minimal bacterium and its low growth rate make it challenging to reach sufficient biomass yield for antigen production. In this study, we sequenced the genome of M. hyopneumoniae strain 11 and constructed a high-quality constraint-based genome-scale metabolic model of 284 chemical reactions and 298 metabolites. We validated the model with time-series data from duplicate fermentation cultures, aiming for an integrated model describing the dynamic profiles measured in fermentations. The model predicted that 84% of cellular energy in a standard M. hyopneumoniae cultivation was used for non-growth-associated maintenance and only 16% of cellular energy was used for growth and growth-associated maintenance. Following a cycle of model-driven experimentation in dedicated fermentation experiments, we were able to increase the fraction of cellular energy used for growth through pyruvate addition to the medium. This increase in turn led to an increase in growth rate and a 2.3-fold increase in the total biomass concentration reached after 3-4 days of fermentation, enhancing the productivity of the overall process. The model presented provides a solid basis to understand and further improve M. hyopneumoniae fermentation processes. Biotechnol. Bioeng. 2017;114: 2339-2347. © 2017 Wiley Periodicals, Inc.
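The constraint-based model itself has 284 reactions, but the energy-partition argument (a fixed maintenance drain versus growth, and why an extra substrate raises the growth fraction) can be sketched as a toy flux-balance problem. All fluxes and bounds below are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# Toy flux-balance sketch: one lumped energy metabolite "A" at steady state.
# Columns: glucose uptake, pyruvate uptake, non-growth maintenance (NGAM), growth.
S = np.array([[1.0, 1.0, -1.0, -1.0]])    # stoichiometry: S @ v = 0

def max_growth(pyruvate_uptake_max):
    bounds = [(0.0, 10.0),                 # glucose uptake limit
              (0.0, pyruvate_uptake_max),  # pyruvate feed (0 = none added)
              (7.0, 7.0),                  # fixed maintenance energy drain
              (0.0, None)]                 # growth flux (the objective)
    res = linprog(c=[0.0, 0.0, 0.0, -1.0], # maximize growth = minimize -growth
                  A_eq=S, b_eq=[0.0], bounds=bounds)
    return res.x[3]

growth_base = max_growth(0.0)
growth_pyr = max_growth(3.0)
print(growth_base, growth_pyr)   # pyruvate addition raises the feasible growth flux
```

With the maintenance flux pinned, any extra substrate flux goes entirely to growth, which is the qualitative effect the fermentation experiments exploited.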

  13. Energy efficiency resource modeling in generation expansion planning

    International Nuclear Information System (INIS)

    Ghaderi, A.; Parsa Moghaddam, M.; Sheikh-El-Eslami, M.K.

    2014-01-01

    Energy efficiency plays an important role in mitigating energy security risks and emission problems. In this paper, energy efficiency resources are modeled as efficiency power plants (EPP) to evaluate their impacts on generation expansion planning (GEP). The supply curve of the EPP is derived from the production function of electricity consumption. A decision-making framework is also presented to include the EPP in the GEP problem from an investor's point of view. The EPP investor's revenue comes from the energy cost reduction of consumers; the investor earns no income from the electricity market. In each stage of GEP, a bi-level model of the operation problem is proposed: the upper level represents profit maximization of the EPP investor, and the lower level maximizes social welfare. To solve the bi-level problem, a fixed-point iteration algorithm known as the diagonalization method is used. An energy efficiency feed-in tariff is investigated as a regulatory support scheme to encourage the investor. Results pertaining to a case study are simulated and discussed. - Highlights: • An economic model for energy efficiency programs is presented. • A framework is provided to model energy efficiency resources in the GEP problem. • FIT is investigated as a regulatory support scheme to encourage the EPP investor. • Capacity expansion is delayed and reduced when EPP is included in GEP. • FIT-II can increase energy saving more effectively than FIT-I

  14. Environmental efficiency analysis of power industry in China based on an entropy SBM model

    International Nuclear Information System (INIS)

    Zhou, Yan; Xing, Xinpeng; Fang, Kuangnan; Liang, Dapeng; Xu, Chunlin

    2013-01-01

    In order to assess the environmental efficiency of the power industry in China, this paper first proposes a new non-radial DEA approach integrating the entropy weight and the SBM model, which improves the reliability and reasonableness of the assessment. Using the model, this study then evaluates the environmental efficiency of the Chinese power industry at the provincial level during 2005–2010. The results show a marked difference in environmental efficiency of the power industry among Chinese provinces. Although the annual average environmental efficiency level fluctuates, it shows an increasing trend. The Tobit regression analysis reveals that the innovation ability of enterprises, the proportion of electricity generated by coal-fired plants and the generation capacity have a significantly positive effect on environmental efficiency, whereas the fees levied on waste discharge and the investment in industrial pollutant treatment are negatively associated with environmental efficiency. - Highlights: ► We assess the environmental efficiency of power industry in China by E-SBM model. ► Environmental efficiency of power industry is different among provinces. ► Efficiency stays at a higher level in the eastern and the western area. ► Proportion of coal-fired plants has a positive effect on the efficiency. ► Waste fees and the investment have a negative effect on the efficiency
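The entropy-weighting half of the proposed entropy SBM approach is a standard construction and can be sketched directly; the provinces-by-indicators matrix below is hypothetical.

```python
import numpy as np

# Hypothetical decision matrix: 4 provinces (rows) x 3 indicators (columns),
# all strictly positive and larger-is-better.
data = np.array([
    [0.82, 120.0, 0.55],
    [0.74,  95.0, 0.61],
    [0.91, 150.0, 0.48],
    [0.66,  80.0, 0.70],
])

def entropy_weights(X):
    """Objective indicator weights via Shannon entropy: an indicator whose
    values vary more across units is more informative and gets more weight."""
    P = X / X.sum(axis=0)                          # column-wise proportions
    m = X.shape[0]
    e = -(P * np.log(P)).sum(axis=0) / np.log(m)   # per-indicator entropy in [0, 1]
    d = 1.0 - e                                    # degree of diversification
    return d / d.sum()

w = entropy_weights(data)
print("entropy weights:", w)
```

In the paper's pipeline, weights of this kind feed into the slacks-based (SBM) efficiency measure rather than being used on their own.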

  15. Neuro-fuzzy modelling of hydro unit efficiency

    International Nuclear Information System (INIS)

    Iliev, Atanas; Fushtikj, Vangel

    2003-01-01

    This paper presents a neuro-fuzzy method for modeling hydro unit efficiency. The proposed method exploits the capability of fuzzy systems as universal function approximators, as well as the ability of neural networks to adapt the parameters of the membership functions and the rules in the consequent part of the developed fuzzy system. The developed method is applied in practice to model the efficiency of a unit to be installed in the Kozjak hydro power plant. A comparison of the performance of the derived neuro-fuzzy method with several classical polynomial models is also performed. (Author)

  16. Evaluation of discrete modeling efficiency of asynchronous electric machines

    OpenAIRE

    Byczkowska-Lipińska, Liliana; Stakhiv, Petro; Hoholyuk, Oksana; Vasylchyshyn, Ivanna

    2011-01-01

    In the paper, the problem of constructing effective mathematical macromodels in state-variable form for asynchronous motor transient analysis is considered. These macromodels were compared with traditional mathematical models of asynchronous motors, including those built into the MATLAB/Simulink software, and their efficiency was analysed.

  17. Evaluating energy efficiency policies with energy-economy models

    NARCIS (Netherlands)

    Mundaca, L.; Neij, L.; Worrell, E.; McNeil, M.

    2010-01-01

    The growing complexities of energy systems, environmental problems, and technology markets are driving and testing most energy-economy models to their limits. To further advance bottom-up models from a multidisciplinary energy efficiency policy evaluation perspective, we review and critically

  18. Efficient Modelling and Generation of Markov Automata (extended version)

    NARCIS (Netherlands)

    Timmer, Mark; Katoen, Joost P.; van de Pol, Jan Cornelis; Stoelinga, Mariëlle Ida Antoinette

    2012-01-01

    This paper introduces a framework for the efficient modelling and generation of Markov automata. It consists of (1) the data-rich process-algebraic language MAPA, allowing concise modelling of systems with nondeterminism, probability and Markovian timing; (2) a restricted form of the language, the

  19. Rigid-beam model of a high-efficiency magnicon

    International Nuclear Information System (INIS)

    Rees, D.E.; Tallerico, P.J.; Humphries, S.J. Jr.

    1993-01-01

    The magnicon is a new type of high-efficiency deflection-modulated amplifier developed at the Institute of Nuclear Physics in Novosibirsk, Russia. The prototype pulsed magnicon achieved an output power of 2.4 MW and an efficiency of 73% at 915 MHz. This paper presents the results of a rigid-beam model for a 700-MHz, 2.5-MW 82%-efficient magnicon. The rigid-beam model allows for characterization of the beam dynamics by tracking only a single electron. The magnicon design presented consists of a drive cavity; passive cavities; a pi-mode, coupled-deflection cavity; and an output cavity. It represents an optimized design. The model is fully self-consistent, and this paper presents the details of the model and calculated performance of a 2.5-MW magnicon

  20. Simple solvable energy-landscape model that shows a thermodynamic phase transition and a glass transition.

    Science.gov (United States)

    Naumis, Gerardo G

    2012-06-01

    When a liquid melt is cooled, a glass or a phase transition can be obtained depending on the cooling rate. Yet this behavior has not been clearly captured in energy-landscape models. Here, a model is provided in which two key ingredients are considered in the landscape: metastable states and their multiplicity. Metastable states are treated as in two-level-system models; however, their multiplicity and topology allow a phase transition in the thermodynamic limit for slow cooling, while a transition to the glass is obtained for fast cooling. By solving the corresponding master equation, the minimal cooling speed required to produce the glass is obtained as a function of the distribution of metastable states.
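The qualitative mechanism, namely that the system either tracks equilibrium or falls out of it depending on the cooling rate, can be reproduced with a single two-level master equation. This is a far simpler landscape than the paper's model, with invented parameters, intended only to show the slow/fast dichotomy.

```python
import math

def cool(rate, dt=1e-3, T0=2.0, Tf=0.05, dE=1.0, barrier=3.0, nu=100.0):
    """Euler-integrate the master equation dp/dt = -k(T) * (p - p_eq(T))
    for the metastable (upper) level population p, while the temperature T
    is ramped linearly from T0 down to Tf at the given cooling rate."""
    T = T0
    p = 1.0 / (1.0 + math.exp(dE / T0))       # start equilibrated at T0
    while T > Tf:
        k = nu * math.exp(-barrier / T)        # Arrhenius hopping rate over the barrier
        p_eq = 1.0 / (1.0 + math.exp(dE / T))  # equilibrium occupation at current T
        p += dt * (-k * (p - p_eq))
        T -= rate * dt
    return p

p_slow = cool(rate=0.01)    # quasi-static cooling: p follows equilibrium down to low T
p_fast = cool(rate=100.0)   # rapid quench: p freezes near its high-T value (glass-like)
print(p_slow, p_fast)
```

The frozen high-temperature population after a fast quench is the elementary analogue of the glass; the paper's multiplicity and topology of metastable states are what turn the slow-cooling limit into a genuine thermodynamic transition.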

  1. Modeled hydrologic metrics show links between hydrology and the functional composition of stream assemblages.

    Science.gov (United States)

    Patrick, Christopher J; Yuan, Lester L

    2017-07-01

    Flow alteration is widespread in streams, but current understanding of the effects of differences in flow characteristics on stream biological communities is incomplete. We tested hypotheses about the effect of variation in hydrology on stream communities by using generalized additive models to relate watershed information to the values of different flow metrics at gauged sites. Flow models accounted for 54-80% of the spatial variation in flow metric values among gauged sites. We then used these models to predict flow metrics in 842 ungauged stream sites in the mid-Atlantic United States that were sampled for fish, macroinvertebrates, and environmental covariates. Fish and macroinvertebrate assemblages were characterized in terms of a suite of metrics that quantified aspects of community composition, diversity, and functional traits that were expected to be associated with differences in flow characteristics. We related modeled flow metrics to biological metrics in a series of stressor-response models. Our analyses identified both drying and base flow instability as explaining 30-50% of the observed variability in fish and invertebrate community composition. Variations in community composition were related to variations in the prevalence of dispersal traits in invertebrates and trophic guilds in fish. The results demonstrate that we can use statistical models to predict hydrologic conditions at bioassessment sites, which, in turn, we can use to estimate relationships between flow conditions and biological characteristics. This analysis provides an approach to quantify the effects of spatial variation in flow metrics using readily available biomonitoring data. © 2017 by the Ecological Society of America.

  2. The speed of memory errors shows the influence of misleading information: Testing the diffusion model and discrete-state models.

    Science.gov (United States)

    Starns, Jeffrey J; Dubé, Chad; Frelinger, Matthew E

    2018-05-01

    In this report, we evaluate single-item and forced-choice recognition memory for the same items and use the resulting accuracy and reaction time data to test the predictions of discrete-state and continuous models. For the single-item trials, participants saw a word and indicated whether or not it was studied on a previous list. The forced-choice trials had one studied and one non-studied word that both appeared in the earlier single-item trials and both received the same response. Thus, forced-choice trials always had one word with a previous correct response and one with a previous error. Participants were asked to select the studied word regardless of whether they previously called both words "studied" or "not studied." The diffusion model predicts that forced-choice accuracy should be lower when the word with a previous error had a fast versus a slow single-item RT, because fast errors are associated with more compelling misleading memory retrieval. The two-high-threshold (2HT) model does not share this prediction because all errors are guesses, so error RT is not related to memory strength. A low-threshold version of the discrete state approach predicts an effect similar to the diffusion model, because errors are a mixture of responses based on misleading retrieval and guesses, and the guesses should tend to be slower. Results showed that faster single-trial errors were associated with lower forced-choice accuracy, as predicted by the diffusion and low-threshold models. Copyright © 2018 Elsevier Inc. All rights reserved.

  3. Modeling technical efficiency of inshore fishery using data envelopment analysis

    Science.gov (United States)

    Rahman, Rahayu; Zahid, Zalina; Khairi, Siti Shaliza Mohd; Hussin, Siti Aida Sheikh

    2016-10-01

    The fishery industry contributes significantly to the economy of Malaysia. This study applied Data Envelopment Analysis to estimate the technical efficiency of fishery in Terengganu, a state on the east coast of Peninsular Malaysia, based on two outputs, i.e. total fish landing and income of fishermen, and six inputs, i.e. engine power, vessel size, number of trips, number of workers, cost and operation distance. The data were collected by a survey conducted between November and December 2014. The decision making units (DMUs) comprised 100 fishermen from 10 fishery areas. The results showed that the average technical efficiency in Season I (dry season) and Season II (rainy season) was 90.2% and 66.7%, respectively. About 27% of the fishermen were rated as efficient during Season I, whereas only 13% achieved full (100%) efficiency during Season II. The results also revealed a significant difference in efficiency performance between the fishery areas.
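Efficiency scores of this kind come from a standard DEA formulation; a minimal input-oriented CCR model can be sketched with SciPy's linear programming. The fishing-unit data below are hypothetical (two inputs, one output, rather than the study's six and two).

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 4 fishing units, inputs = (engine power, trips), output = catch.
inputs = np.array([[ 50.0, 20.0],
                   [ 60.0, 25.0],
                   [100.0, 40.0],
                   [ 80.0, 20.0]])
outputs = np.array([[200.0],
                    [210.0],
                    [200.0],
                    [260.0]])

def ccr_efficiency(inputs, outputs, o):
    """Input-oriented CCR efficiency of unit o (envelopment form):
    min theta  s.t.  sum_j lam_j * x_j <= theta * x_o,
                     sum_j lam_j * y_j >= y_o,  lam >= 0."""
    n, m = inputs.shape
    s = outputs.shape[1]
    c = np.zeros(n + 1)
    c[0] = 1.0                                   # variables: [theta, lam_1..lam_n]
    A_ub, b_ub = [], []
    for i in range(m):                           # input constraints
        A_ub.append(np.concatenate(([-inputs[o, i]], inputs[:, i])))
        b_ub.append(0.0)
    for r in range(s):                           # output constraints, flipped to <=
        A_ub.append(np.concatenate(([0.0], -outputs[:, r])))
        b_ub.append(-outputs[o, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

scores = [ccr_efficiency(inputs, outputs, o) for o in range(4)]
print("CCR efficiency scores:", np.round(scores, 3))
```

Units on the frontier score 1; a score of, say, 0.5 means the unit could in principle produce its output with half of each input.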

  4. Efficient modeling of vector hysteresis using fuzzy inference systems

    International Nuclear Information System (INIS)

    Adly, A.A.; Abd-El-Hafiz, S.K.

    2008-01-01

    Vector hysteresis models have always been regarded as important tools for predicting multi-dimensional magnetic field-media interactions. In the past, considerable efforts have been focused on mathematical modeling methodologies for vector hysteresis. This paper presents an efficient approach based upon fuzzy inference systems for modeling vector hysteresis. The computational efficiency of the proposed approach stems from the fact that the basic non-local-memory Preisach-type hysteresis model is approximated by a local-memory model. The proposed low-cost computational methodology can easily be integrated into field calculation packages involving massive multi-dimensional discretizations. Details of the modeling methodology and its experimental testing are presented

  5. Modeling Dynamic Systems with Efficient Ensembles of Process-Based Models.

    Directory of Open Access Journals (Sweden)

    Nikola Simidjievski

    Full Text Available Ensembles are a well-established machine learning paradigm, leading to accurate and robust models, predominantly applied to predictive modeling tasks. Ensemble models comprise a finite set of diverse predictive models whose combined output is expected to yield an improved predictive performance as compared to an individual model. In this paper, we propose a new method for learning ensembles of process-based models of dynamic systems. The process-based modeling paradigm employs domain-specific knowledge to automatically learn models of dynamic systems from time-series observational data. Previous work has shown that ensembles based on sampling observational data (i.e., bagging and boosting) significantly improve the predictive performance of process-based models. However, this improvement comes at the cost of a substantial increase in the computational time needed for learning. To address this problem, the paper proposes a method that aims at efficiently learning ensembles of process-based models, while maintaining their accurate long-term predictive performance. This is achieved by constructing ensembles by sampling domain-specific knowledge instead of sampling data. We apply the proposed method to a set of problems of automated predictive modeling in three lake ecosystems, using a library of process-based knowledge for modeling population dynamics, and evaluate its performance. The experimental results identify the optimal design decisions regarding the learning algorithm. The results also show that the proposed ensembles yield significantly more accurate predictions of population dynamics as compared to individual process-based models. Finally, while their predictive performance is comparable to that of ensembles obtained with the state-of-the-art methods of bagging and boosting, they are substantially more efficient.

  6. A Murine Model of Candida glabrata Vaginitis Shows No Evidence of an Inflammatory Immunopathogenic Response.

    Directory of Open Access Journals (Sweden)

    Evelyn E Nash

    Full Text Available Candida glabrata is the second most common organism isolated from women with vulvovaginal candidiasis (VVC), particularly in women with uncontrolled diabetes mellitus. However, mechanisms involved in the pathogenesis of C. glabrata-associated VVC are unknown and have not been studied at any depth in animal models. The objective of this study was to evaluate host responses to infection following efforts to optimize a murine model of C. glabrata VVC. For this, various designs were evaluated for consistent experimental vaginal colonization (i.e., type 1 and type 2 diabetic mice, exogenous estrogen, varying inocula, and co-infection with C. albicans). Upon model optimization, vaginal fungal burden and polymorphonuclear neutrophil (PMN) recruitment were assessed longitudinally over 21 days post-inoculation, together with vaginal concentrations of IL-1β, S100A8 alarmin, lactate dehydrogenase (LDH), and in vivo biofilm formation. Consistent and sustained vaginal colonization with C. glabrata was achieved in estrogenized streptozotocin-induced type 1 diabetic mice. Vaginal PMN infiltration was consistently low, with IL-1β, S100A8, and LDH concentrations similar to uninoculated mice. Biofilm formation was not detected in vivo, and co-infection with C. albicans did not induce synergistic immunopathogenic effects. This data suggests that experimental vaginal colonization of C. glabrata is not associated with an inflammatory immunopathogenic response or biofilm formation.

  7. A Murine Model of Candida glabrata Vaginitis Shows No Evidence of an Inflammatory Immunopathogenic Response.

    Science.gov (United States)

    Nash, Evelyn E; Peters, Brian M; Lilly, Elizabeth A; Noverr, Mairi C; Fidel, Paul L

    2016-01-01

    Candida glabrata is the second most common organism isolated from women with vulvovaginal candidiasis (VVC), particularly in women with uncontrolled diabetes mellitus. However, mechanisms involved in the pathogenesis of C. glabrata-associated VVC are unknown and have not been studied at any depth in animal models. The objective of this study was to evaluate host responses to infection following efforts to optimize a murine model of C. glabrata VVC. For this, various designs were evaluated for consistent experimental vaginal colonization (i.e., type 1 and type 2 diabetic mice, exogenous estrogen, varying inocula, and co-infection with C. albicans). Upon model optimization, vaginal fungal burden and polymorphonuclear neutrophil (PMN) recruitment were assessed longitudinally over 21 days post-inoculation, together with vaginal concentrations of IL-1β, S100A8 alarmin, lactate dehydrogenase (LDH), and in vivo biofilm formation. Consistent and sustained vaginal colonization with C. glabrata was achieved in estrogenized streptozotocin-induced type 1 diabetic mice. Vaginal PMN infiltration was consistently low, with IL-1β, S100A8, and LDH concentrations similar to uninoculated mice. Biofilm formation was not detected in vivo, and co-infection with C. albicans did not induce synergistic immunopathogenic effects. This data suggests that experimental vaginal colonization of C. glabrata is not associated with an inflammatory immunopathogenic response or biofilm formation.

  8. Modeling and energy efficiency optimization of belt conveyors

    International Nuclear Information System (INIS)

    Zhang, Shirong; Xia, Xiaohua

    2011-01-01

    Highlights: → We take an optimization approach to improve the operation efficiency of belt conveyors. → An analytical energy model, originating from ISO 5048, is proposed. → Off-line and on-line parameter estimation schemes are then investigated. → In a case study, six optimization problems are formulated, with solutions in simulation. - Abstract: The improvement of the energy efficiency of belt conveyor systems can be achieved at the equipment and operation levels. Specifically, variable speed control, an equipment-level intervention, is recommended to improve the operation efficiency of belt conveyors. However, current implementations mostly focus on lower-level control loops without operational considerations at the system level. This paper takes a model-based optimization approach to improve the efficiency of belt conveyors at the operational level. An analytical energy model, originating from ISO 5048, is first proposed, which lumps all the parameters into four coefficients. Subsequently, off-line and on-line parameter estimation schemes are applied to identify the new energy model. Simulation results are presented for the estimates of the four coefficients. Finally, optimization is performed to achieve the best operation efficiency of belt conveyors under various constraints. Six optimization problems of a typical belt conveyor system are formulated, with solutions in simulation for a case study.
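The abstract lumps the energy model's parameters into four coefficients identified off-line, but does not give the model's functional form. As an illustration only, the sketch below assumes a hypothetical form P = θ1·V + θ2·Q + θ3·Q²/V + θ4·V·Q (V belt speed, Q feed rate) and recovers the four coefficients by ordinary least squares, the way an off-line estimation scheme might:

```python
def fit_four_coefficients(samples):
    """Least-squares fit of P = t1*V + t2*Q + t3*Q**2/V + t4*V*Q.

    `samples` is a list of (V, Q, P) tuples. The basis terms are an
    illustrative assumption, not the form used in the paper.
    """
    rows = [[v, q, q * q / v, v * q] for v, q, _ in samples]
    y = [p for _, _, p in samples]
    n = 4
    # normal equations A^T A theta = A^T y, solved by Gaussian elimination
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    aty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, n):
            f = ata[r][col] / ata[col][col]
            for c in range(col, n):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    theta = [0.0] * n
    for r in reversed(range(n)):
        s = aty[r] - sum(ata[r][c] * theta[c] for c in range(r + 1, n))
        theta[r] = s / ata[r][r]
    return theta

# synthetic check: generate data from known coefficients and recover them
theta_true = [2.0, 1.5, 0.8, 0.3]
samples = [(v, q, theta_true[0] * v + theta_true[1] * q
            + theta_true[2] * q * q / v + theta_true[3] * v * q)
           for v in (1.0, 2.0, 3.0, 4.0) for q in (1.0, 2.0, 3.0, 4.0)]
theta = fit_four_coefficients(samples)
```

With noise-free synthetic data the fit reproduces the generating coefficients; an on-line scheme would instead update the estimates recursively as new measurements arrive.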

  9. Global thermal niche models of two European grasses show high invasion risks in Antarctica.

    Science.gov (United States)

    Pertierra, Luis R; Aragón, Pedro; Shaw, Justine D; Bergstrom, Dana M; Terauds, Aleks; Olalla-Tárraga, Miguel Ángel

    2017-07-01

    The two non-native grasses that have established long-term populations in Antarctica (Poa pratensis and Poa annua) were studied from a global multidimensional thermal niche perspective to address the biological invasion risk to Antarctica. These two species exhibit contrasting introduction histories and reproductive strategies and represent two referential case studies of biological invasion processes. We used a multistep process with a range of species distribution modelling techniques (ecological niche factor analysis, multidimensional envelopes, distance/entropy algorithms), together with a suite of thermoclimatic variables, to characterize the potential ranges of these species. Their native bioclimatic thermal envelopes in Eurasia were then compared with those of the different naturalized populations across continents. The potential niche of P. pratensis was wider at the cold extremes; however, the life history attributes of P. annua enable it to be a more successful colonizer. We observe that particularly cold summers are a key aspect of the unique Antarctic environment. In consequence, ruderals such as P. annua can quickly expand under such harsh conditions, whereas the more stress-tolerant P. pratensis endures and persists through steady growth. Compiled data on human pressure at the Antarctic Peninsula allowed us to provide site-specific biosecurity risk indicators. We conclude that several areas across the region are vulnerable to invasions from these and other similar species. This can only be visualized in species distribution models (SDMs) when accounting for founder populations that reveal nonanalogous conditions. Results reinforce the need for strict management practices to minimize introductions. Furthermore, our novel set of temperature-based bioclimatic GIS layers for ice-free terrestrial Antarctica provides a mechanism for regional and global species distribution models to be built for other potentially invasive species. © 2017 John Wiley & Sons Ltd.

  10. ASIC1a Deficient Mice Show Unaltered Neurodegeneration in the Subacute MPTP Model of Parkinson Disease.

    Directory of Open Access Journals (Sweden)

    Daniel Komnig

    Full Text Available Inflammation contributes to the death of dopaminergic neurons in Parkinson disease and can be accompanied by acidification of extracellular pH, which may activate acid-sensing ion channels (ASIC). Accordingly, amiloride, a non-selective inhibitor of ASIC, was protective in an acute 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) mouse model of Parkinson disease. To complement these findings, we determined MPTP toxicity in mice deficient for ASIC1a, the most common ASIC isoform in neurons. MPTP was applied i.p. in doses of 30 mg/kg on five consecutive days. We determined the number of dopaminergic neurons in the substantia nigra, assayed by stereological counting 14 days after the last MPTP injection, the number of Nissl-positive neurons in the substantia nigra, and the concentration of catecholamines in the striatum. There was no difference between ASIC1a-deficient mice and wildtype controls. We are therefore unable to confirm that ASIC1a is involved in MPTP toxicity. The difference might relate to the subacute MPTP model we used, which more closely resembles the pathogenesis of Parkinson disease, or to further targets of amiloride.

  11. Progesterone treatment shows benefit in a pediatric model of moderate to severe bilateral brain injury.

    Directory of Open Access Journals (Sweden)

    Rastafa I Geddes

    Full Text Available Controlled cortical impact (CCI) models in adult and aged Sprague-Dawley (SD) rats have been used extensively to study medial prefrontal cortex (mPFC) injury and the effects of post-injury progesterone treatment, but the hormone's effects after traumatic brain injury (TBI) in juvenile animals have not been determined. In the present proof-of-concept study we investigated whether progesterone had neuroprotective effects in a pediatric model of moderate to severe bilateral brain injury. Twenty-eight-day-old (PND 28) male Sprague-Dawley rats received sham (n = 24) or CCI (n = 47) injury and were given progesterone (4, 8, or 16 mg/kg per 100 g body weight) or vehicle injections on post-injury days (PID) 1-7, subjected to behavioral testing from PID 9-27, and analyzed for lesion size at PID 28. The 8 and 16 mg/kg doses of progesterone were observed to be most beneficial in reducing the effect of CCI on lesion size and behavior in PND 28 male SD rats. Our findings suggest that a midline CCI injury to the frontal cortex will reliably produce a moderate TBI comparable to what is seen in the adult male rat and that progesterone can ameliorate the injury-induced deficits.

  12. Efficient family-based model checking via variability abstractions

    DEFF Research Database (Denmark)

    Dimovski, Aleksandar; Al-Sibahi, Ahmad Salim; Brabrand, Claus

    2016-01-01

    Many software systems are variational: they can be configured to meet diverse sets of requirements. They can produce a (potentially huge) number of related systems, known as products or variants, by systematically reusing common parts. For variational models (variational systems or families of related systems), specialized family-based model checking algorithms allow efficient verification of multiple variants, simultaneously, in a single run. These algorithms, implemented in a tool Snip, scale much better than ``the brute force'' approach, where all individual systems are verified using ... with the abstract model checking of the concrete high-level variational model. This allows the use of Spin with all its accumulated optimizations for efficient verification of variational models without any knowledge about variability. We have implemented the transformations in a prototype tool, and we illustrate ...

  13. A zebrafish model of glucocorticoid resistance shows serotonergic modulation of the stress response

    Directory of Open Access Journals (Sweden)

    Brian eGriffiths

    2012-10-01

    Full Text Available One function of glucocorticoids is to restore homeostasis after an acute stress response by providing negative feedback to stress circuits in the brain. Loss of this negative feedback leads to elevated physiological stress and may contribute to depression, anxiety and post-traumatic stress disorder. We investigated the early, developmental effects of glucocorticoid signaling deficits on stress physiology and related behaviors using a mutant zebrafish, grs357, with non-functional glucocorticoid receptors. These mutants are morphologically inconspicuous and adult-viable. A previous study of adult grs357 mutants showed loss of glucocorticoid-mediated negative feedback and elevated physiological and behavioral stress markers. Already at five days post-fertilization, mutant larvae had elevated whole-body cortisol, increased expression of pro-opiomelanocortin (POMC), the precursor of adrenocorticotropic hormone (ACTH), and failed to show normal suppression of stress markers after dexamethasone treatment. Mutant larvae had larger auditory-evoked startle responses compared to wildtype sibling controls (grwt), despite having lower spontaneous activity levels. Fluoxetine (Prozac) treatment in mutants decreased startle responding and increased spontaneous activity, making them behaviorally similar to wildtype. This result mirrors known effects of selective serotonin reuptake inhibitors (SSRIs) in modifying glucocorticoid signaling and alleviating stress disorders in human patients. Our results suggest that larval grs357 zebrafish can be used to study behavioral, physiological and molecular aspects of stress disorders. Most importantly, interactions between glucocorticoid and serotonin signaling appear to be highly conserved among vertebrates, suggesting deep homologies at the neural circuit level and opening up new avenues for research into psychiatric conditions.

  14. Modeling Techniques for a Computational Efficient Dynamic Turbofan Engine Model

    Directory of Open Access Journals (Sweden)

    Rory A. Roberts

    2014-01-01

    Full Text Available A transient two-stream engine model has been developed. Individual component models developed exclusively in MATLAB/Simulink, including the fan, high-pressure compressor, combustor, high-pressure turbine, low-pressure turbine, plenum volumes, and exit nozzle, have been combined to investigate the behavior of a turbofan two-stream engine. Special attention has been paid to developing transient capabilities throughout the model, increasing the fidelity of the physics modeling, eliminating algebraic constraints, and reducing simulation time by enabling the use of advanced numerical solvers. Reducing computation time is paramount for conducting future aircraft system-level design trade studies and optimization. The new engine model is simulated for a fuel perturbation and a specified mission while tracking critical parameters. These results, as well as the simulation times, are presented. The new approach significantly reduces the simulation time.

  15. EFFICIENCY AND COST MODELLING OF THERMAL POWER PLANTS

    Directory of Open Access Journals (Sweden)

    Péter Bihari

    2010-01-01

    Full Text Available The proper characterization of energy suppliers is one of the most important components in modelling the supply/demand relations of the electricity market. Power generation capacity, i.e. power plants, constitutes the supply side of the relation in the electricity market. The supply from power stations develops as the stations attempt to achieve the greatest profit possible under the given prices and other limitations. The cost of operation and the cost of load increment are thus the most important characteristics of their behaviour on the market. In most electricity market models, however, it is not taken into account that the efficiency of a power station also depends on the level of the load, on the type and age of the power plant, and on environmental considerations. Trade in electricity on the free market cannot rely on models in which these essential parameters are omitted. Such an incomplete model could lead to a situation where a particular power station would either be run only at its full capacity or else be entirely deactivated, depending on the prices prevailing on the free market. In reality, the marginal cost of power generation is better described by a function based on the efficiency function. The derived marginal cost function gives the supply curve of the power station. The load-level-dependent efficiency function can be used not only for market modelling, but also for determining the pollutant and CO2 emissions of the power station, as well as shedding light on the conditions for successfully entering the market. Based on the measurement data, our paper presents mathematical models that might be used to determine the load-dependent efficiency functions of coal-, oil-, or gas-fuelled power stations (steam turbine, gas turbine, combined cycle) and IC-engine-based combined heat and power stations. These efficiency functions could also contribute to modelling market conditions and determining the

  16. Metabolic remodeling agents show beneficial effects in the dystrophin-deficient mdx mouse model

    Directory of Open Access Journals (Sweden)

    Jahnke Vanessa E

    2012-08-01

    Full Text Available Abstract Background Duchenne muscular dystrophy is a genetic disease involving severe muscle wasting that is characterized by cycles of muscle degeneration/regeneration and culminates in early death in affected boys. Mitochondria are presumed to be involved in the regulation of myoblast proliferation/differentiation; enhancing mitochondrial activity with exercise mimetics (AMPK and PPAR-delta agonists) increases muscle function and inhibits muscle wasting in healthy mice. We therefore asked whether metabolic remodeling agents that increase mitochondrial activity would improve muscle function in mdx mice. Methods Twelve-week-old mdx mice were treated with two different metabolic remodeling agents (GW501516 and AICAR), separately or in combination, for 4 weeks. Extensive systematic behavioral, functional, histological, biochemical, and molecular tests were conducted to assess the drugs' effects. Results We found a gain in body and muscle weight in all treated mice. Histologic examination showed a decrease in muscle inflammation and in the number of fibers with central nuclei, and an increase in fibers with peripheral nuclei, with significantly fewer activated satellite cells and regenerating fibers. Together with an inhibition of FoXO1 signaling, these results indicated that the treatments reduced ongoing muscle damage. Conclusions The three treatments produced significant improvements in disease phenotype, including an increase in overall behavioral activity and significant gains in forelimb and hind limb strength. Our findings suggest that triggering mitochondrial activity with exercise mimetics improves muscle function in dystrophin-deficient mdx mice.

  17. Male Wistar rats show individual differences in an animal model of conformity.

    Science.gov (United States)

    Jolles, Jolle W; de Visser, Leonie; van den Bos, Ruud

    2011-09-01

    Conformity refers to the act of changing one's behaviour to match that of others. Recent studies in humans have shown that individual differences exist in conformity and that these differences are related to differences in neuronal activity. To understand the neuronal mechanisms in more detail, animal tests to assess conformity are needed. Here, we used a test of conformity in rats that has previously been evaluated in female, but not male, rats and assessed the nature of individual differences in conformity. Male Wistar rats were given the opportunity to learn that two diets differed in palatability. They were subsequently exposed to a demonstrator that had consumed the less palatable food. Thereafter, they were exposed to the same diets again. Just like female rats, male rats decreased their preference for the more palatable food after interaction with demonstrator rats that had eaten the less palatable food. Individual differences existed for this shift, which were only weakly related to an interaction between their own initial preference and the amount consumed by the demonstrator rat. The data show that this conformity test in rats is a promising tool to study the neurobiology of conformity.

  18. Efficient Business Service Consumption by Customization with Variability Modelling

    Directory of Open Access Journals (Sweden)

    Michael Stollberg

    2010-07-01

    Full Text Available The establishment of service orientation in industry determines the need for efficient engineering technologies that properly support the whole life cycle of service provision and consumption. A central challenge is adequate support for the efficient employment of complex services in their individual application contexts. This becomes particularly important for large-scale enterprise technologies where generic services are designed for reuse in several business scenarios. In this article we complement our work on Service Variability Modelling presented in a previous publication, where we described an approach for customizing services for individual application contexts by creating simplified variants, based on model-driven variability management. The present article describes our revised service variability metamodel and new features of the variability tools, and reports an applicability study which reveals that substantial improvements in the efficiency of standard business service consumption can be achieved under both usability and economic aspects.

  19. Energetics and efficiency of a molecular motor model

    International Nuclear Information System (INIS)

    Fogedby, Hans C; Svane, Axel

    2013-01-01

    The energetics and efficiency of a linear molecular motor model proposed by Mogilner et al. are analyzed from an analytical point of view. The model, which is based on protein friction with a track, is described by coupled Langevin equations for the motion in combination with coupled master equations for the ATP hydrolysis. Here the energetics and efficiency of the motor are addressed using a many-body scheme with focus on the efficiency at maximum power (EMP). It is found that the EMP is reduced from about 10% in a heuristic description of the motor to about 1 per mille when incorporating the full motor dynamics, owing to the strong dissipation associated with the motor action. (paper)

  20. Efficient Adoption and Assessment of Multiple Process Improvement Reference Models

    Directory of Open Access Journals (Sweden)

    Simona Jeners

    2013-06-01

    Full Text Available A variety of reference models such as CMMI, COBIT or ITIL support IT organizations in improving their processes. These process improvement reference models (IRMs) cover different domains such as IT development, IT services or IT governance, but also share some similarities. As there are organizations that address multiple domains and need to coordinate their processes in their improvement, we present MoSaIC, an approach to support organizations in efficiently adopting and conforming to multiple IRMs. Our solution realizes a semantic integration of IRMs based on common meta-models. The resulting IRM integration model enables organizations to efficiently implement and assess multiple IRMs and to benefit from synergy effects.

  1. Modeling serotonin uptake in the lung shows endothelial transporters dominate over cleft permeation

    Science.gov (United States)

    Bassingthwaighte, James B.

    2013-01-01

    A four-region (capillary plasma, endothelium, interstitial fluid, cell) multipath model was configured to describe the kinetics of blood-tissue exchange for small solutes in the lung, accounting for regional flow heterogeneity, permeation of cell membranes and through interendothelial clefts, and intracellular reactions. Serotonin uptake data from the Multiple indicator dilution “bolus sweep” experiments of Rickaby and coworkers (Rickaby DA, Linehan JH, Bronikowski TA, Dawson CA. J Appl Physiol 51: 405–414, 1981; Rickaby DA, Dawson CA, and Linehan JH. J Appl Physiol 56: 1170–1177, 1984) and Malcorps et al. (Malcorps CM, Dawson CA, Linehan JH, Bronikowski TA, Rickaby DA, Herman AG, Will JA. J Appl Physiol 57: 720–730, 1984) were analyzed to distinguish facilitated transport into the endothelial cells (EC) and the inhibition of tracer transport by nontracer serotonin in the bolus of injectate from the free uninhibited permeation through the clefts into the interstitial fluid space. The permeability-surface area products (PS) for serotonin via the inter-EC clefts were ∼0.3 ml·g−1·min−1, low compared with the transporter-mediated maximum PS of 13 ml·g−1·min−1 (with Km = ∼0.3 μM and Vmax = ∼4 nmol·g−1·min−1). The estimates of serotonin PS values for EC transporters from their multiple data sets were similar and were influenced only modestly by accounting for the cleft permeability in parallel. The cleft PS estimates in these Ringer-perfused lungs are less than half of those for anesthetized dogs (Yipintsoi T. Circ Res 39: 523–531, 1976) with normal hematocrits, but are compatible with passive noncarrier-mediated transport observed later in the same laboratory (Dawson CA, Linehan JH, Rickaby DA, Bronikowski TA. Ann Biomed Eng 15: 217–227, 1987; Peeters FAM, Bronikowski TA, Dawson CA, Linehan JH, Bult H, Herman AG. J Appl Physiol 66: 2328–2337, 1989) The identification and quantitation of the cleft pathway conductance from these
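The reported transporter parameters are internally consistent: in the tracer limit (c ≪ Km) the saturable flux Vmax·c/(Km + c) reduces to (Vmax/Km)·c, so the maximum carrier-mediated PS is Vmax/Km. A quick check, treating the abstract's values (Vmax ≈ 4 nmol·g⁻¹·min⁻¹, Km ≈ 0.3 μM = 0.3 nmol/ml) as given:

```python
def michaelis_menten_flux(c, vmax, km):
    """Carrier-mediated uptake (nmol/g/min) at substrate concentration c (nmol/ml)."""
    return vmax * c / (km + c)

# tracer-limit PS = Vmax/Km ~ 13 ml/g/min, matching the reported maximum PS
ps_max = 4.0 / 0.3
# at c = Km the flux is half of Vmax
half_sat = michaelis_menten_flux(0.3, 4.0, 0.3)
```

The same relation explains the inhibition by nontracer serotonin in the injectate: the effective PS (flux divided by concentration) falls below Vmax/Km as the concentration rises toward and beyond Km.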

  2. Efficient Use of Preisach Hysteresis Model in Computer Aided Design

    Directory of Open Access Journals (Sweden)

    IONITA, V.

    2013-05-01

    Full Text Available The paper presents a practical, detailed analysis regarding the use of the classical Preisach hysteresis model, covering all the steps, from measuring the necessary data for the model identification to the implementation in software code for Computer Aided Design (CAD) in Electrical Engineering. An efficient numerical method is proposed and the hysteresis modeling accuracy is tested on magnetic recording materials. The procedure includes the correction of the experimental data, which are used for the hysteresis model identification, taking into account the demagnetizing effect for a sample that is measured in an open-circuit device (a vibrating sample magnetometer).
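The classical Preisach model superposes elementary relay hysterons distributed over switching thresholds (α, β). A minimal sketch, assuming a uniform hysteron density for illustration (the paper identifies the density from measured data instead):

```python
def preisach_magnetization(history, n=60, hmax=1.0):
    """Net output of a uniform density of relay hysterons after an input history.

    Hysterons (alpha, beta) with beta <= alpha all start in the -1 state; an
    input h switches a relay up when h >= alpha and down when h <= beta.
    """
    grid = [-hmax + 2.0 * hmax * i / (n - 1) for i in range(n)]
    state = {(a, b): -1.0 for a in grid for b in grid if b <= a}
    for h in history:
        for (a, b) in state:
            if h >= a:
                state[(a, b)] = 1.0
            elif h <= b:
                state[(a, b)] = -1.0
    return sum(state.values()) / len(state)

# history dependence: the same final input (0.0) gives different outputs
m_desc = preisach_magnetization([1.0, 0.0])   # coming down from +saturation
m_asc = preisach_magnetization([-1.0, 0.0])   # coming up from -saturation
```

The two branch values at zero input differ in sign, which is exactly the remanence behavior the model is identified to reproduce for a measured material.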

  3. Receipts Assay Monitor: deadtime correction model and efficiency profile

    International Nuclear Information System (INIS)

    Weingardt, J.J.; Stewart, J.E.

    1986-08-01

    Experiments were performed at Los Alamos National Laboratory to characterize the operating parameters and flatten the axial efficiency profile of a neutron coincidence counter called the Receipts Assay Monitor (RAM). Optimum electronic settings determined by conventional methods included operating voltage (1680 V) and gate width (64 μs). Also determined were electronic characteristics such as bias and deadtime. Neutronic characteristics determined using a 252Cf neutron source included axial efficiency profiles and axial die-away time profiles. The RAM electronics showed virtually no bias for coincidence count rate; it was measured as -4.6 x 10^-5 % with a standard deviation of 3.3 x 10^-4 %. Electronic deadtime was measured by two methods. The first method expresses the coincidence-rate deadtime as a linear function of the measured totals rate, and the second method treats deadtime as a constant. Initially, axial coincidence efficiency profiles yielded normalized efficiencies at the bottom and top of a 17-in. mockup UF6 sample of 68.9% and 40.4%, respectively, with an average relative efficiency across the sample of 86.1%. Because the nature of the measurements performed with the RAM favors a much flatter efficiency profile, 3-mil cadmium sheets were wrapped around the 3He tubes in selected locations to flatten the efficiency profile. Use of the cadmium sheets resulted in relative coincidence efficiencies at the bottom and top of the sample of 82.3% and 57.4%, respectively, with an average relative efficiency of 93.5%.
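The abstract names the two deadtime treatments but not their functional forms. The sketch below assumes the multiplicative exponential correction commonly used in neutron coincidence counting, with the deadtime parameter either constant or a linear function of the measured totals rate; both the formula and the parameter values are illustrative assumptions:

```python
import math

def corrected_rate_constant(reals, totals, tau):
    """Constant-deadtime model: R_true = R_meas * exp(tau * T_meas) (assumed form)."""
    return reals * math.exp(tau * totals)

def corrected_rate_linear(reals, totals, a, b):
    """Deadtime taken as a linear function of the measured totals rate."""
    return reals * math.exp((a + b * totals) * totals)

# hypothetical rates: 100 reals/s, 5e4 totals/s, 2 microsecond deadtime
r_const = corrected_rate_constant(100.0, 5.0e4, 2.0e-6)
r_lin = corrected_rate_linear(100.0, 5.0e4, 2.0e-6, 0.0)  # b = 0 recovers the constant model
```

Setting the slope b to zero makes the two methods coincide, which is how the two fitted descriptions in the abstract can be compared against the same data.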

  4. Operator-based linearization for efficient modeling of geothermal processes

    OpenAIRE

    Khait, M.; Voskov, D.V.

    2018-01-01

    Numerical simulation is one of the most important tools required for financial and operational management of geothermal reservoirs. The modern geothermal industry is challenged to run large ensembles of numerical models for uncertainty analysis, causing simulation performance to become a critical issue. Geothermal reservoir modeling requires the solution of governing equations describing the conservation of mass and energy. The robust, accurate and computationally efficient implementation of ...

  5. Energetics and efficiency of a molecular motor model

    DEFF Research Database (Denmark)

    C. Fogedby, Hans; Svane, Axel

    2013-01-01

    The energetics and efficiency of a linear molecular motor model proposed by Mogilner et al. (Phys. Lett. 237, 297 (1998)) are analyzed from an analytical point of view. The model, which is based on protein friction with a track, is described by coupled Langevin equations for the motion in combination ... when incorporating the full motor dynamics, owing to the strong dissipation associated with the motor action. ...

  6. Mathematical modelling of a steam boiler room to research thermal efficiency

    International Nuclear Information System (INIS)

    Bujak, J.

    2008-01-01

    This paper introduces a mathematical model of a boiler room for researching its thermal efficiency. The model is regarded as an open thermodynamic system exchanging mass, energy, and heat with the atmosphere. On those grounds, the mass and energy balances were calculated. Here I show several possibilities for how this model may be applied. Test results for the coefficient of thermal efficiency were compared with a real object, i.e. a steam boiler room of the Provincial Hospital in Wloclawek (Poland). The tests were carried out over 18 months. The results obtained in the boiler room were used for verification of the mathematical model.
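At its core, the coefficient of thermal efficiency compared against the Wloclawek measurements is a first-law ratio of useful heat output to fuel energy input. A minimal sketch with hypothetical numbers (the paper's balance for an open system includes further terms, e.g. mass and heat exchange with the atmosphere):

```python
def thermal_efficiency(q_useful_mj, fuel_kg, lhv_mj_per_kg):
    """Coefficient of thermal efficiency: useful heat out over fuel energy in."""
    return q_useful_mj / (fuel_kg * lhv_mj_per_kg)

# hypothetical inputs: 1000 kg of fuel at 40 MJ/kg yielding 34,000 MJ in steam
eta = thermal_efficiency(34000.0, 1000.0, 40.0)
```

Comparing such computed values against 18 months of plant measurements is what verifies the model in the study.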

  7. A Network DEA Model with Super Efficiency and Undesirable Outputs: An Application to Bank Efficiency in China

    Directory of Open Access Journals (Sweden)

    Jianhuan Huang

    2014-01-01

    Full Text Available There are two typical subprocesses in bank production: deposit generation and loan generation. Aiming to open the black box of banks' input-output production and to provide a comprehensive and accurate assessment of the efficiency of each stage, this paper proposes a two-stage network model with bad outputs and super efficiency (US-NSBM). Empirical comparisons show that the US-NSBM may be promising and practical for taking nonperforming loans into account and being able to rank all samples. Applying it to measure the efficiency of Chinese commercial banks from 2008 to 2012, this paper explores the characteristics of overall and divisional efficiency, as well as their determinants. Some interesting results are discovered. The polarization of efficiency occurs at the bank level and in deposit generation, yet not in loan generation. Five hypotheses work as expected at the bank level, but not all of them are supported at the stage level. Our results extend and complement some earlier empirical publications at the bank level.

  8. Efficient Parallel Statistical Model Checking of Biochemical Networks

    Directory of Open Access Journals (Sweden)

    Paolo Ballarini

    2009-12-01

    Full Text Available We consider the problem of verifying stochastic models of biochemical networks against behavioral properties expressed in temporal logic terms. Exact probabilistic verification approaches, such as CSL/PCTL model checking, are undermined by a huge computational demand that rules them out for most real case studies. Less demanding approaches, such as statistical model checking, estimate the likelihood that a property is satisfied by sampling executions of the stochastic model. We propose a methodology for efficiently estimating the likelihood that an LTL property P holds of a stochastic model of a biochemical network. As with other statistical verification techniques, the methodology we propose uses a stochastic simulation algorithm for generating execution samples; however, there are three key aspects that improve the efficiency: first, the sample generation is driven by on-the-fly verification of P, which results in optimal overall simulation time. Second, the confidence interval estimation for the probability that P holds is based on an efficient variant of the Wilson method, which ensures faster convergence. Third, the whole methodology is designed in a parallel fashion, and a prototype software tool has been implemented that performs the sampling/verification process in parallel over an HPC architecture.
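For reference, the standard Wilson score interval that the paper's efficient variant builds on can be computed directly from the number of satisfying simulation runs; unlike the normal-approximation interval, it never leaves [0, 1]:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion.

    This is the standard form; the paper uses an efficient variant of it.
    """
    p = successes / n
    denom = 1.0 + z * z / n
    center = (p + z * z / (2.0 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1.0 - p) / n + z * z / (4.0 * n * n))
    return center - half, center + half

# e.g. the property held in 50 of 100 sampled executions
low, high = wilson_interval(50, 100)
```

In statistical model checking, sampling stops once the interval at the chosen confidence level is narrow enough to answer the query.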

  9. Mathematics model of filtration efficiency of moisture separator for nuclear reactors

    International Nuclear Information System (INIS)

    Zhang Zhenzhong; Jiang Feng; Huang Yunfeng

    2010-01-01

    In order to study the filtration mechanism of the moisture separator for water droplets of 5-10 μm, this paper sets up a physical model. The mixed meshes can be classified into three types: standard meshes, bur meshes and middle meshes. For all fibers of the wire meshes and the vertical fibers of standard mixed meshes, a Kuwabara flow field is used to track the particles, obtain the inertial impaction efficiency, and then calculate the total filtration efficiency of the meshes. For the other fibers, a flow field around a flat plate is added to the Kuwabara flow field to calculate the efficiency. Lastly, the total efficiency of the moisture separator, computed from the equation for the filtration efficiency of filters in series, is compared with the experimental data. The result shows that, under the standard condition, the calculated value is consistent with the experimental efficiency data. (authors)
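The filters-in-series combination rule used for the total separator efficiency follows from each stage removing its fraction of whatever the previous stage let through; a short sketch:

```python
def series_efficiency(stage_etas):
    """Total collection efficiency of filter stages in series.

    Each stage removes a fraction eta_i of the droplets that reach it,
    so eta_total = 1 - prod(1 - eta_i).
    """
    passed = 1.0
    for eta in stage_etas:
        passed *= 1.0 - eta
    return 1.0 - passed

# e.g. two stages of 50% each pass 0.5 * 0.5 = 25%, i.e. 75% total efficiency
total = series_efficiency([0.5, 0.5])
```

The stage efficiencies themselves come from the inertial impaction calculations in the flow fields described above.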

  10. Crop modelling and water use efficiency of protected cucumber

    International Nuclear Information System (INIS)

    El Moujabber, M.; Atallah, Th.; Darwish, T.

    2002-01-01

    Crop modelling is considered an essential planning tool. Automating irrigation scheduling using crop models would contribute to optimising the water and fertiliser use of protected crops. To achieve this purpose, two experiments were carried out. The first aimed at determining water requirements and irrigation scheduling using climatic data. The second was to establish the influence of irrigation interval and fertigation regime on water use efficiency. The results gave a simple model for determining the water requirements of protected cucumber from climatic data: ETc = K * Ep, where K and Ep are calculated using climatic data outside the greenhouse. As for water use efficiency, the second experiment highlighted that high-frequency irrigation and continuous feeding are highly recommended for maximising yield. (author)
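The ETc = K * Ep relation is simple enough to schedule irrigation directly from an evaporation reading taken outside the greenhouse. A one-line sketch; the coefficient and evaporation values below are hypothetical, not the paper's estimates:

```python
def crop_evapotranspiration(k, ep_mm_per_day):
    """ETc = K * Ep: daily crop water requirement (mm/day) from the coefficient K
    and the evaporation Ep measured outside the greenhouse."""
    return k * ep_mm_per_day

# assumed K = 0.7 and Ep = 6 mm/day, for illustration only
etc = crop_evapotranspiration(0.7, 6.0)
```

An irrigation controller would convert ETc to an applied volume per bed area and split it across the chosen irrigation intervals.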

  11. Modeling of detective quantum efficiency considering scatter-reduction devices

    Energy Technology Data Exchange (ETDEWEB)

    Park, Ji Woong; Kim, Dong Woon; Kim, Ho Kyung [Pusan National University, Busan (Korea, Republic of)

    2016-05-15

    The reduction of signal-to-noise ratio (SNR) caused by scattered photons cannot be restored and has thus become a severe issue in digital mammography.1 Therefore, antiscatter grids are typically used in mammography. The scatter-cleanup performance of various scatter-reduction devices, such as air gaps,2 linear (1D) or cellular (2D) grids,3,4 and slot-scanning devices,5 has been extensively investigated by many research groups. At the present time, a digital mammography system with the slot-scanning geometry is also commercially available.6 In this study, we theoretically investigate the effect of scattered photons on the detective quantum efficiency (DQE) performance of digital mammography detectors by using the cascaded-systems analysis (CSA) approach. We show a simple DQE formalism describing digital mammography detector systems equipped with scatter-reduction devices by regarding the scattered photons as additive noise sources. The low-frequency drop (LFD) in the measured MTF increased with increasing PMMA thickness, and the amount of LFD indicated the corresponding scatter fraction (SF). The estimated SFs were 0.13, 0.21, and 0.29 for PMMA thicknesses of 10, 20, and 30 mm, respectively. While the solid line describing the measured MTF for PMMA of 0 mm was the result of a least-squares regression fit using Eq. (14), the other lines simply resulted from multiplying the fit result (for PMMA of 0 mm) by the (1-SF) estimated from the LFDs in the measured MTFs. Measured spectral noise-power densities over the entire frequency range did not change much with increasing scatter. On the other hand, the calculation results showed that the spectral noise-power densities increased with increasing scatter. This discrepancy may be explained by the fact that the model developed in this study does not account for changes in x-ray interaction parameters for varying spectral shapes due to beam hardening with increasing PMMA thickness.
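The construction described in the abstract (scaling the scatter-free MTF fit by 1 - SF) can be sketched directly; the sample MTF values below are illustrative, while the SF value is the abstract's estimate for 30 mm PMMA:

```python
def degraded_mtf(mtf_scatter_free, sf):
    """Scatter-degraded MTF: unity at f = 0 but scaled by (1 - SF) at f > 0,
    so the low-frequency drop (LFD) equals the scatter fraction SF."""
    return [1.0] + [(1.0 - sf) * m for m in mtf_scatter_free[1:]]

mtf0 = [1.0, 0.90, 0.60, 0.30]     # illustrative scatter-free MTF samples
mtf_30 = degraded_mtf(mtf0, 0.29)  # SF = 0.29 for 30 mm PMMA, per the abstract
lfd = 1.0 - mtf_30[1] / mtf0[1]    # recovering SF from the drop
```

Inverting this relation on measured MTFs is how the SFs of 0.13, 0.21, and 0.29 are estimated from the LFDs.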

  12. An efficient energy response model for liquid scintillator detectors

    Science.gov (United States)

    Lebanowski, Logan; Wan, Linyan; Ji, Xiangpan; Wang, Zhe; Chen, Shaomin

    2018-05-01

    Liquid scintillator detectors are playing an increasingly important role in low-energy neutrino experiments. In this article, we describe a generic energy response model of liquid scintillator detectors that provides energy estimations of sub-percent accuracy. This model fits a minimal set of physically-motivated parameters that capture the essential characteristics of scintillator response and that can naturally account for changes in scintillator over time, helping to avoid associated biases or systematic uncertainties. The model employs a one-step calculation and look-up tables, yielding an immediate estimation of energy and an efficient framework for quantifying systematic uncertainties and correlations.
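The one-step, look-up-table estimation described above can be illustrated with a minimal sketch. All table values here are invented for the example; the paper's actual tables relate detector observables to energy through its fitted response parameters.

```python
import bisect

# Illustrative look-up table: deposited energy (MeV) -> mean visible energy (MeV).
# The mild nonlinearity mimics scintillator quenching; the numbers are invented.
DEPOSITED = [0.5, 1.0, 2.0, 4.0, 8.0]
VISIBLE   = [0.46, 0.94, 1.92, 3.90, 7.86]

def visible_energy(e_dep):
    """Linearly interpolate the look-up table; clamp outside the tabulated range."""
    if e_dep <= DEPOSITED[0]:
        return VISIBLE[0]
    if e_dep >= DEPOSITED[-1]:
        return VISIBLE[-1]
    i = bisect.bisect_right(DEPOSITED, e_dep)
    x0, x1 = DEPOSITED[i - 1], DEPOSITED[i]
    y0, y1 = VISIBLE[i - 1], VISIBLE[i]
    return y0 + (y1 - y0) * (e_dep - x0) / (x1 - x0)

print(visible_energy(1.5))  # midway between the 1.0 and 2.0 MeV entries -> 1.43
```

Because the table is precomputed, each energy estimate is a single interpolation rather than a full detector simulation, which is what makes the framework efficient for propagating systematic uncertainties.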

  13. Building Information Model: advantages, tools and adoption efficiency

    Science.gov (United States)

    Abakumov, R. G.; Naumov, A. E.

    2018-03-01

    The paper expands definition and essence of Building Information Modeling. It describes content and effects from application of Information Modeling at different stages of a real property item. Analysis of long-term and short-term advantages is given. The authors included an analytical review of Revit software package in comparison with Autodesk with respect to: features, advantages and disadvantages, cost and pay cutoff. A prognostic calculation is given for efficiency of adoption of the Building Information Modeling technology, with examples of its successful adoption in Russia and worldwide.

  14. The Effectiveness of the Show-Not-Tell and Mind-Map Models in Teaching Exposition Text Writing Based on the Interest of Grade X Vocational School (SMK) Students

    Directory of Open Access Journals (Sweden)

    Wiwit Lili Sokhipah

    2015-03-01

    The aims of this study were to (1) determine the effectiveness of the show-not-tell model in teaching exposition text writing based on the interest of grade X vocational school (SMK) students, (2) determine the effectiveness of the mind-map model in teaching exposition text writing based on the interest of grade X SMK students, and (3) determine the effectiveness of the interaction of the show-not-tell and mind-map models in teaching exposition text writing based on the interest of grade X SMK students. The study used a quasi-experimental design (pretest-posttest control group design) with two experimental groups: the show-not-tell model applied to teaching exposition text writing to students with high interest, and the mind-map model applied to students with low interest. The results show that (1) the show-not-tell model is effective for teaching exposition text writing to students with high interest, (2) the mind-map model is effective for students with low interest, and (3) the show-not-tell model is more effective for teaching exposition text writing to students with high interest, while the mind-map model is effective for students with low interest.

  15. An Efficient Dynamic Trust Evaluation Model for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Zhengwang Ye

    2017-01-01

    Trust evaluation is an effective method to detect malicious nodes and ensure security in wireless sensor networks (WSNs). In this paper, an efficient dynamic trust evaluation model (DTEM) for WSNs is proposed, which implements accurate, efficient, and dynamic trust evaluation by dynamically adjusting the weights of direct and indirect trust and the parameters of the update mechanism. To achieve accurate trust evaluation, direct trust is calculated from multiple trust dimensions, including communication trust, data trust, and energy trust, with a punishment factor and a regulating function. Indirect trust is evaluated conditionally from trusted third-party recommendations. The integrated trust is then measured by assigning dynamic weights to direct and indirect trust and combining them. Finally, we propose an update mechanism using a sliding window based on an induced ordered weighted averaging (IOWA) operator to enhance flexibility. The parameters and the number of interaction-history windows can be adapted dynamically to the actual needs of the network to realize dynamic updating of the direct trust value. Simulation results indicate that the proposed model is an efficient, dynamic, and attack-resistant trust evaluation model. Compared with existing approaches, it performs better in defending against multiple malicious attacks.
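A highly simplified sketch of the weighted direct/indirect combination and sliding-window update described above. The weight function, window policy, and all numbers are invented for illustration; the DTEM's punishment factor, regulating function, and IOWA operator are omitted.

```python
def sliding_window_trust(observations, window=5):
    """Direct trust as the success rate over the last `window` interactions
    (1 = successful interaction, 0 = failed)."""
    recent = observations[-window:]
    return sum(recent) / len(recent)

def integrated_trust(direct, indirect, n_interactions, k=5.0):
    """Combine direct and indirect trust with a dynamic weight: the weight on
    direct trust grows with the number of first-hand interactions; k controls
    how fast (both invented for this sketch)."""
    w_direct = n_interactions / (n_interactions + k)
    return w_direct * direct + (1.0 - w_direct) * indirect

obs = [1, 1, 0, 1, 1, 1, 0, 1]
direct = sliding_window_trust(obs)              # last 5: 1,1,1,0,1 -> 0.8
print(integrated_trust(direct, 0.6, len(obs)))  # blend with a 0.6 recommendation
```

The sliding window gives recent behavior more influence than old history, which is what lets a trust score recover (or collapse) dynamically as a node's behavior changes.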

  16. Energy efficiency model for small/medium geothermal heat pump systems

    Directory of Open Access Journals (Sweden)

    Staiger Robert

    2015-06-01

    Heating application efficiency is a crucial point for saving energy and reducing greenhouse gas emissions. Today, the EU legal framework clearly defines how heating systems should perform, how buildings should be designed in an energy-efficient manner, and how renewable energy sources should be used. Using heat pumps (HP) as an alternative "Renewable Energy System" could be one solution for increasing efficiency, using less energy, reducing energy dependency, and reducing greenhouse gas emissions. This article takes a closer look at the different efficiency dependencies of geothermal HP (GHP) systems for domestic buildings (small/medium HP). Manufacturers of HP appliances in the EU must document the efficiency, the so-called COP (Coefficient of Performance), under certain standards. In the technical datasheets of HP appliances, these COP parameters give a clear indication of the performance quality of a HP device. However, HP efficiency (COP) and the efficiency of a working HP system can differ significantly. For this reason, an annual efficiency statistic named the Seasonal Performance Factor (SPF) has been defined to give an overall efficiency for comparing HP systems. With this indicator, conclusions can be drawn from the installation, economic, environmental, performance, and risk points of view. A technical and economic HP model shows the dependence of energy efficiency problems in HP systems. To reduce the complexity of the HP model, only the important factors for efficiency dependencies are used. Dynamic and static situations involving HPs and their efficiency are considered. The latest data from field tests of HP systems and the practical experience of the last 10 years are compared with one of the latest simulation programs with the help of two practical geothermal HP system calculations. The gathered empirical data allow for a better estimate of HP system efficiency.

  17. The technology gap and efficiency measure in WEC countries: Application of the hybrid meta frontier model

    International Nuclear Information System (INIS)

    Chiu, Yung-Ho; Lee, Jen-Hui; Lu, Ching-Cheng; Shyu, Ming-Kuang; Luo, Zhengying

    2012-01-01

    This study develops a hybrid meta-frontier DEA model, in which inputs are distinguished into radial inputs that change proportionally and non-radial inputs that change non-proportionally, in order to measure the technical efficiency and technology gap ratios (TGR) of four regions: Asia, Africa, America, and Europe. The paper selects 87 countries that were members of the World Energy Council from 2005 to 2007. The input variables are industry and population, while the output variables are gross domestic product (GDP) and fossil-fuel CO2 emissions. The results show that countries' efficiency rankings within their own regions exhibit considerable volatility. In terms of the technology gap ratio, Europe is the most efficient region, while Asia had lower efficiency than the other regions over the same period. Finally, regions with higher industry (or GDP) did not necessarily have higher efficiency from 2005 to 2007, and higher CO2 emissions or population did not necessarily mean lower efficiency. In addition, although Brazil is not an OECD member, it was more efficient than the OECD members among the emerging countries. OECD countries were more efficient than non-OECD countries, and Europe controlled CO2 emissions better than Asia. For non-OECD or Asian countries to reach the best efficiency score, they should try to control CO2 emissions. - Highlights: ► A new meta-frontier model for evaluating efficiency and technology gap ratios. ► Higher CO2 emissions do not necessarily imply lower efficiency than other regions, as Europe shows. ► Asia's output and CO2 emissions increased simultaneously, lowering its efficiency. ► Non-OECD or Asian countries should control CO2 emissions to reach the best efficiency score.

  18. Increased Statistical Efficiency in a Lognormal Mean Model

    Directory of Open Access Journals (Sweden)

    Grant H. Skrepnek

    2014-01-01

    Within the context of clinical and other scientific research, a substantial need exists for an accurate determination of the point estimate in a lognormal mean model, given that highly skewed data are often present. As such, logarithmic transformations are often advocated to achieve the assumptions of parametric statistical inference. Despite this, existing approaches that utilize only a sample's mean and variance may not necessarily yield the most efficient estimator. The current investigation developed and tested an improved efficient point estimator for a lognormal mean by capturing more complete information via the sample's coefficient of variation. Results of an empirical simulation study across varying sample sizes and population standard deviations indicated relative improvements in efficiency of up to 129.47 percent compared to the usual maximum likelihood estimator and up to 21.33 absolute percentage points above the efficient estimator presented by Shen and colleagues (2006). The relative efficiency of the proposed estimator increased particularly as a function of decreasing sample size and increasing population standard deviation.
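For reference, the standard maximum-likelihood-based point estimate of a lognormal mean, the benchmark the improved estimator is compared against, can be sketched as follows (the paper's CV-based estimator itself is not reproduced here; the sample data are invented):

```python
import math
from statistics import mean, pvariance

def lognormal_mean_mle(sample):
    """Standard ML-based estimator of a lognormal mean: exp(mu_hat + sigma2_hat / 2),
    where mu_hat and sigma2_hat are the sample mean and (population) variance
    of the log-transformed data."""
    logs = [math.log(x) for x in sample]
    return math.exp(mean(logs) + pvariance(logs) / 2.0)

data = [1.2, 0.8, 3.5, 2.0, 0.5, 6.1]   # skewed, strictly positive sample
print(lognormal_mean_mle(data), mean(data))
```

Note that simply back-transforming the mean of the logs would estimate the geometric mean (the median of a lognormal), which underestimates the arithmetic mean; the sigma2/2 correction term is what targets the mean itself.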

  19. Investigation on the Efficiency of Financial Companies in Malaysia with Data Envelopment Analysis Model

    Science.gov (United States)

    Weng Siew, Lam; Kah Fai, Liew; Weng Hoe, Lam

    2018-04-01

    Financial ratios and risk are important indicators for evaluating the financial performance or efficiency of companies. Therefore, financial ratios and a risk factor need to be taken into consideration when evaluating the efficiency of companies with the Data Envelopment Analysis (DEA) model. In the DEA model, the efficiency of a company is measured as the ratio of the weighted sum of outputs to the weighted sum of inputs. The objective of this paper is to propose a DEA model that incorporates financial ratios and a risk factor in evaluating and comparing the efficiency of the financial companies in Malaysia. In this study, the listed financial companies in Malaysia from 2004 until 2015 are investigated. The results show that AFFIN, ALLIANZ, APEX, BURSA, HLCAP, HLFG, INSAS, LPI, MNRB, OSK, PBBANK, RCECAP and TA are ranked as efficient companies. This implies that these efficient companies have utilized their resources or inputs optimally to generate the maximum outputs. This study is significant because it helps to identify the efficient financial companies as well as determine the optimal input and output weights that maximize the efficiency of financial companies in Malaysia.
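The weighted output/input ratio at the heart of DEA can be sketched as follows. A real CCR model chooses each company's weights by linear programming; this toy version approximates that with a coarse grid search over invented data, keeping only weight choices under which no company's ratio exceeds 1.

```python
from itertools import product

# Illustrative data (invented): company -> ((inputs), (outputs)).
DMUS = {
    "A": ((2.0, 3.0), (4.0, 1.0)),
    "B": ((3.0, 2.0), (3.0, 3.0)),
    "C": ((4.0, 5.0), (2.0, 2.0)),
}

def ratio(w_in, w_out, inputs, outputs):
    """Efficiency ratio: weighted sum of outputs over weighted sum of inputs."""
    return (sum(w * y for w, y in zip(w_out, outputs))
            / sum(w * x for w, x in zip(w_in, inputs)))

def dea_efficiency(target, grid=10):
    """Approximate the DEA (CCR) efficiency of `target` by a coarse grid search
    over weights (a toy stand-in for the usual linear-programming solution)."""
    steps = [i / grid for i in range(1, grid + 1)]
    best = 0.0
    for w in product(steps, repeat=4):
        w_in, w_out = w[:2], w[2:]
        if max(ratio(w_in, w_out, *d) for d in DMUS.values()) <= 1.0:
            best = max(best, ratio(w_in, w_out, *DMUS[target]))
    return best

print({name: round(dea_efficiency(name), 3) for name in DMUS})
```

Company C uses more of every input than B while producing less of every output, so any feasible weighting scores C below B; that dominance is exactly what DEA formalizes.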

  20. FIRE BEHAVIOR PREDICTING MODELS EFFICIENCY IN BRAZILIAN COMMERCIAL EUCALYPT PLANTATIONS

    Directory of Open Access Journals (Sweden)

    Benjamin Leonardo Alves White

    2016-12-01

    Knowing how a wildfire will behave is extremely important for assisting fire suppression and prevention operations. Mathematical models to estimate fire behavior have been developed worldwide since the 1940s; however, until now none of them had been tested for accuracy in Brazilian commercial eucalypt plantations or in other vegetation types in the country. This study aims to verify the accuracy of the Rothermel (1972) fire spread model, the Byram (1959) flame length model, and the fire spread and flame length equations derived from the McArthur (1962) control burn meters. To meet these objectives, 105 experimental laboratory fires were conducted and their results compared with the values predicted by the models tested. The Rothermel and Byram models predicted better than McArthur's; nevertheless, all of them underestimated the fire behavior aspects evaluated and were statistically different from the experimental data.
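Of the models tested, Byram's flame-length relationship is compact enough to sketch directly. The constants below are the commonly quoted metric form of the equation, not values taken from this article, so treat them as an assumption:

```python
def byram_flame_length(intensity_kw_m):
    """Byram (1959) flame length, metric form: L = 0.0775 * I**0.46,
    with L in metres and fireline intensity I in kW/m (commonly quoted
    constants; verify against the original before relying on them)."""
    return 0.0775 * intensity_kw_m ** 0.46

print(round(byram_flame_length(1000.0), 2))  # ~1.86 m at 1000 kW/m
```

The sublinear exponent means flame length grows much more slowly than fireline intensity, which is why intensity, not length, is usually the primary suppression-difficulty metric.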

  1. Radially dependent photopeak efficiency model for Si(Li) detectors

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, D D [Australian Inst. of Nuclear Science and Engineering, Lucas Heights

    1980-12-15

    A simple five-parameter model for the efficiency of a Si(Li) detector has been developed. It was found necessary to include a radially dependent efficiency even for small detectors. The model is an extension of the pioneering work of Hansen et al., but the correction factors include more up-to-date data and explicit equations for the mass attenuation coefficients over a wide range of photon energies. Four of the five parameters needed are generally supplied by most commercial manufacturers of Si(Li) detectors. ⁵⁴Mn and ²⁴¹Am sources have been used to calibrate a Si(Li) detector to approximately ±3% over the energy range 3-60 keV.
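The structure of such an efficiency model, window transmission multiplied by crystal absorption, can be sketched as follows. The attenuation formula and every constant below are illustrative placeholders, not the paper's five fitted parameters or real tabulated coefficients:

```python
import math

def mu_over_rho(e_kev, a, b):
    """Toy mass-attenuation model, mu/rho ~ a * E**-3 + b (cm^2/g).
    The E**-3 term mimics photoelectric absorption; constants are invented."""
    return a * e_kev ** -3 + b

def photopeak_efficiency(e_kev, t_be_cm=25e-4, d_si_cm=0.3):
    """Efficiency = transmission through the Be window * absorption in the
    Si crystal. Thicknesses and coefficients are illustrative only."""
    rho_be, rho_si = 1.85, 2.33                      # densities, g/cm^3
    mu_be = mu_over_rho(e_kev, 300.0, 0.15) * rho_be
    mu_si = mu_over_rho(e_kev, 3.0e4, 0.2) * rho_si
    window = math.exp(-mu_be * t_be_cm)              # Be window transmission
    absorbed = 1.0 - math.exp(-mu_si * d_si_cm)      # photopeak absorption in Si
    return window * absorbed

for e in (1.0, 5.0, 50.0):
    print(e, round(photopeak_efficiency(e), 3))
```

The sketch reproduces the characteristic shape: efficiency falls at low energy (window absorption) and at high energy (the Si crystal becomes transparent), peaking in between. A radially dependent version would additionally vary the effective crystal thickness with radius.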

  2. [Experimental evaluation of the spraying disinfection efficiency on dental models].

    Science.gov (United States)

    Zhang, Yi; Fu, Yuan-fei; Xu, Kan

    2013-08-01

    To evaluate the disinfection effect of spraying a new kind of disinfectant on dental plaster models, germ-free plaster samples smeared with a bacterial compound including Staphylococcus aureus, Escherichia coli, Saccharomyces albicans, Streptococcus mutans and Actinomyces viscosus were sprayed with the disinfectant (CaviCide) or with glutaraldehyde. In one group (5 minutes later) and another group (15 minutes later), colonies were counted for statistical analysis after sampling, inoculation, and culturing, in order to evaluate disinfection efficiency. ANOVA was performed using the SPSS 12.0 software package. All sample bacteria were eradicated within 5 minutes of spraying the disinfectant (CaviCide), and effective bacterial control was retained after 15 minutes. There was a significant difference between the disinfection efficiencies of CaviCide and glutaraldehyde. Disinfection of dental models by spraying CaviCide is quick and effective.

  3. Investigating market efficiency through a forecasting model based on differential equations

    Science.gov (United States)

    de Resende, Charlene C.; Pereira, Adriano C. M.; Cardoso, Rodrigo T. N.; de Magalhães, A. R. Bosco

    2017-05-01

    A new differential-equation-based model for stock price trend forecasting is proposed as a tool to investigate efficiency in an emerging market. Its predictive power was shown statistically to be higher than that of a completely random model, signaling the presence of arbitrage opportunities. Conditions under which accuracy is enhanced are investigated, and application of the model as part of a trading strategy is discussed.

  4. Efficient image duplicated region detection model using sequential block clustering

    Czech Academy of Sciences Publication Activity Database

    Sekeh, M. A.; Maarof, M. A.; Rohani, M. F.; Mahdian, Babak

    2013-01-01

    Roč. 10, č. 1 (2013), s. 73-84 ISSN 1742-2876 Institutional support: RVO:67985556 Keywords : Image forensic * Copy–paste forgery * Local block matching Subject RIV: IN - Informatics, Computer Science Impact factor: 0.986, year: 2013 http://library.utia.cas.cz/separaty/2013/ZOI/mahdian-efficient image duplicated region detection model using sequential block clustering.pdf

  5. Efficient Stochastic Inversion Using Adjoint Models and Kernel-PCA

    Energy Technology Data Exchange (ETDEWEB)

    Thimmisetty, Charanraj A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing; Zhao, Wenju [Florida State Univ., Tallahassee, FL (United States). Dept. of Scientific Computing; Chen, Xiao [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing; Tong, Charles H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing; White, Joshua A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Atmospheric, Earth and Energy Division

    2017-10-18

    Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the ‘nonlinear’ mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, which is characterized by a tight coupling between the gradient-based Langevin Markov Chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). This approach addresses the ‘curse-of-dimensionality’ via KPCA to identify a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We will demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model.

  6. Policy modeling for energy efficiency improvement in US industry

    International Nuclear Information System (INIS)

    Worrell, Ernst; Price, Lynn; Ruth, Michael

    2001-01-01

    We are at the beginning of a process of evaluating and modeling the contribution of policies to improved energy efficiency. Three recent policy studies attempting to assess the impact of energy efficiency policies in the United States are reviewed. The studies represent an important step in the analysis of climate change mitigation strategies. All studies model the estimated policy impact rather than the policy itself. Often the policy impacts are based on assumptions, as the effects of a policy are not certain. Most models incorporate only economic (or price) tools, which recent studies have shown to be insufficient for estimating the impacts, costs, and benefits of mitigation strategies. The reviewed studies are a first effort to capture the effects of non-price policies, and they contribute to a better understanding of the role of policies in improving energy efficiency and mitigating climate change. All policy scenarios result in substantial energy savings compared to the baseline scenario used, as well as substantial net benefits to the U.S. economy.

  7. Modelling and design of high efficiency radiation tolerant indium phosphide space solar cells

    International Nuclear Information System (INIS)

    Goradia, C.; Geier, J.V.; Weinberg, I.

    1987-01-01

    Using a fairly comprehensive model, the authors performed a parametric variation study of the InP shallow-homojunction solar cell with a view to determining the maximum realistically achievable efficiency and an optimum design that would yield this efficiency. Their calculations show that, with good-quality epitaxial material, a BOL efficiency of about 20.3% at AM0 and 25 °C may be possible. The design parameters of the near-optimum cell are given. Also presented are the expected radiation degradation of the performance parameters under 1 MeV electron irradiation and a possible explanation of the high radiation tolerance of InP solar cells.

  8. Economic efficiency versus social equality? The U.S. liberal model versus the European social model.

    Science.gov (United States)

    Navarro, Vicente; Schmitt, John

    2005-01-01

    This article begins by challenging the widely held view in neoliberal discourse that there is a necessary trade-off between higher efficiency and lower reduction of inequalities: the article empirically shows that the liberal, U.S. model has been less efficient economically (slower economic growth, higher unemployment) than the social model in existence in the European Union and in the majority of its member states. Based on the data presented, the authors criticize the adoption of features of the liberal model (such as deregulation of their labor markets, reduction of public social expenditures) by some European governments. The second section analyzes the causes for the slowdown of economic growth and the increase of unemployment in the European Union--that is, the application of monetarist and neoliberal policies in the institutional frame of the European Union, including the Stability Pact, the objectives and modus operandi of the European Central Bank, and the very limited resources available to the European Commission for stimulating and distributive functions. The third section details the reasons for these developments, including (besides historical considerations) the enormous influence of financial capital in the E.U. institutions and the very limited democracy. Proposals for change are included.

  9. Towards an efficient multiphysics model for nuclear reactor dynamics

    Directory of Open Access Journals (Sweden)

    Obaidurrahman K.

    2015-01-01

    The availability of fast computing resources has facilitated more in-depth modeling of complex engineering systems that involve strong multiphysics interactions. Such multiphysics modeling is an important necessity in nuclear reactor safety studies, where efforts are being made worldwide to combine the knowledge of all associated disciplines to accomplish the most realistic simulation of the phenomena involved. Along these lines, coupled modeling of nuclear reactor neutron kinetics, fuel heat transfer and coolant transport is now regular practice for transient analysis of the reactor core. However, optimization between modeling accuracy and computational economy has always been a challenging task in ensuring an adequate degree of reliability in such extensive numerical exercises. Complex reactor core modeling involves estimation of the evolving 3-D core thermal state, which in turn demands an expensive, detailed, multichannel-based core thermal-hydraulics model. The novel approach of power-weighted coupling between core neutronics and thermal hydraulics presented in this work aims to reduce the bulk of core thermal calculations in core dynamics modeling to a significant extent without compromising computational accuracy. The coupled core model has been validated against a series of international benchmarks, and the accuracy and computational efficiency of the proposed multiphysics model are demonstrated by analyzing a reactivity-initiated transient.

  10. Hybrid Building Performance Simulation Models for Industrial Energy Efficiency Applications

    Directory of Open Access Journals (Sweden)

    Peter Smolek

    2018-06-01

    In the challenge of achieving environmental sustainability, industrial production plants, as large contributors to the overall energy demand of a country, are prime candidates for applying energy efficiency measures. A modelling approach using cubes is used to decompose a production facility into manageable modules. All aspects of the facility are considered, classified into the building, energy system, production and logistics. This approach leads to specific challenges for building performance simulations since all parts of the facility are highly interconnected. To meet this challenge, models for the building, thermal zones, energy converters and energy grids are presented and the interfaces to the production and logistics equipment are illustrated. The advantages and limitations of the chosen approach are discussed. In an example implementation, the feasibility of the approach and models is shown. Different scenarios are simulated to highlight the models and the results are compared.

  11. Efficient estimation of feedback effects with application to climate models

    International Nuclear Information System (INIS)

    Cacugi, D.G.; Hall, M.C.G.

    1984-01-01

    This work presents an efficient method for calculating the sensitivity of a mathematical model's result to feedback. Feedback is defined in terms of an operator acting on the model's dependent variables. The sensitivity to feedback is defined as a functional derivative, and a method is presented to evaluate this derivative using adjoint functions. Typically, this method allows the individual effect of many different feedbacks to be estimated with a total additional computing time comparable to only one recalculation. The effects on a CO2-doubling experiment of actually incorporating surface albedo and water vapor feedbacks in a radiative-convective model are compared with sensitivities calculated using adjoint functions. These sensitivities predict the actual effects of feedback with at least the correct sign and order of magnitude. It is anticipated that this method of estimating the effect of feedback will be useful for more complex models, where extensive recalculation for each of a variety of different feedbacks is impractical.
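The core idea, differentiating a model result with respect to a feedback strength, can be illustrated on a toy scalar feedback model x = a + g·x, with closed-form solution x = a/(1-g). The paper's adjoint machinery, which avoids the recalculations a finite-difference approach needs, is not reproduced here.

```python
def model_result(a, g):
    """Toy feedback model: x = a + g*x, solved in closed form (requires g < 1)."""
    return a / (1.0 - g)

def sensitivity_fd(a, g, h=1e-6):
    """Central finite-difference estimate of d(result)/d(gain); each evaluation
    here stands in for one full model recalculation, which is exactly the cost
    the adjoint method avoids."""
    return (model_result(a, g + h) - model_result(a, g - h)) / (2.0 * h)

a, g = 2.0, 0.4
analytic = a / (1.0 - g) ** 2   # exact sensitivity: dx/dg = a / (1-g)^2
print(sensitivity_fd(a, g), analytic)
```

With N independent feedbacks, finite differencing needs on the order of 2N recalculations, whereas the adjoint approach described in the abstract obtains all N sensitivities for roughly the cost of one.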

  12. Business Models, transparency and efficient stock price formation

    DEFF Research Database (Denmark)

    Nielsen, Christian; Vali, Edward; Hvidberg, Rene

    has an impact on a company's price formation. In this respect, we analysed whether those companies that publish a lot of information that may support a business model description tend to have a more efficient price formation. Next, we turned to our sample of companies, and via interview-based case...... studies, we managed to draw conclusions on how to construct a comprehensible business model description. The business model explains how the company intends to compete in its market, and thus it gives an account of the characteristics that make the company unique. The business model constitutes...... the platform from which the company prepares and unfolds its strategy. In order to explain this platform and its particular qualities to external interested parties, the description must provide a clear and explicit account of the main determinants of the company's value creation and explain how...

  13. Efficient model learning methods for actor-critic control.

    Science.gov (United States)

    Grondman, Ivo; Vaandrager, Maarten; Buşoniu, Lucian; Babuska, Robert; Schuitema, Erik

    2012-06-01

    We propose two new actor-critic algorithms for reinforcement learning. Both algorithms use local linear regression (LLR) to learn approximations of the functions involved. A crucial feature of the algorithms is that they also learn a process model, and this, in combination with LLR, provides an efficient policy update for faster learning. The first algorithm uses a novel model-based update rule for the actor parameters. The second algorithm does not use an explicit actor but learns a reference model which represents a desired behavior, from which desired control actions can be calculated using the inverse of the learned process model. The two novel methods and a standard actor-critic algorithm are applied to the pendulum swing-up problem, in which the novel methods achieve faster learning than the standard algorithm.
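The LLR building block the two algorithms share can be sketched in one dimension: fit a least-squares line through the k nearest stored samples and evaluate it at the query point. This is a toy version; the paper applies LLR to the actor, critic, and learned process model in higher dimensions.

```python
def llr_predict(x_query, xs, ys, k=3):
    """Local linear regression, 1-D toy version: closed-form least-squares
    line through the k samples nearest to x_query."""
    nearest = sorted(range(len(xs)), key=lambda i: abs(xs[i] - x_query))[:k]
    px = [xs[i] for i in nearest]
    py = [ys[i] for i in nearest]
    mx = sum(px) / k
    my = sum(py) / k
    sxx = sum((x - mx) ** 2 for x in px)           # assumes distinct local x's
    slope = sum((x - mx) * (y - my) for x, y in zip(px, py)) / sxx
    return my + slope * (x_query - mx)

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.0, 2.0, 4.0, 6.0, 8.0]     # exactly linear memory: y = 2x
print(llr_predict(2.5, xs, ys))    # -> 5.0
```

Because each prediction uses only a local neighborhood, the learned model adapts quickly as new samples arrive, which is what provides the fast policy updates the abstract describes.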

  14. Modeling of hybrid vehicle fuel economy and fuel engine efficiency

    Science.gov (United States)

    Wu, Wei

    "Near-CV" (i.e., near-conventional vehicle) hybrid vehicles, with an internal combustion engine, and a supplementary storage with low-weight, low-energy but high-power capacity, are analyzed. This design avoids the shortcoming of the "near-EV" and the "dual-mode" hybrid vehicles that need a large energy storage system (in terms of energy capacity and weight). The small storage is used to optimize engine energy management and can provide power when needed. The energy advantage of the "near-CV" design is to reduce reliance on the engine at low power, to enable regenerative braking, and to provide good performance with a small engine. The fuel consumption of internal combustion engines, which might be applied to hybrid vehicles, is analyzed by building simple analytical models that reflect the engines' energy loss characteristics. Both diesel and gasoline engines are modeled. The simple analytical models describe engine fuel consumption at any speed and load point by describing the engine's indicated efficiency and friction. The engine's indicated efficiency and heat loss are described in terms of several easy-to-obtain engine parameters, e.g., compression ratio, displacement, bore and stroke. Engine friction is described in terms of parameters obtained by fitting available fuel measurements on several diesel and spark-ignition engines. The engine models developed are shown to conform closely to experimental fuel consumption and motored friction data. A model of the energy use of "near-CV" hybrid vehicles with different storage mechanism is created, based on simple algebraic description of the components. With powertrain downsizing and hybridization, a "near-CV" hybrid vehicle can obtain a factor of approximately two in overall fuel efficiency (mpg) improvement, without considering reductions in the vehicle load.

  15. The composite supply chain efficiency model: A case study of the Sishen-Saldanha supply chain

    Directory of Open Access Journals (Sweden)

    Leila L. Goedhals-Gerber

    2016-01-01

    As South Africa strives to be a major force in global markets, it is essential that South African supply chains achieve and maintain a competitive advantage. One approach to achieving this is to ensure that South African supply chains maximise their levels of efficiency. Consequently, the efficiency levels of South Africa's supply chains must be evaluated. The objective of this article is to propose a model that can assist South African industries in becoming internationally competitive by providing them with a tool for evaluating their levels of efficiency both as individual firms and as components in an overall supply chain. The Composite Supply Chain Efficiency Model (CSCEM) was developed to measure supply chain efficiency across supply chains using variables identified as problem areas experienced by South African supply chains. The CSCEM is tested in this article using the Sishen-Saldanha iron ore supply chain as a case study. The results indicate that all three links or nodes along the Sishen-Saldanha iron ore supply chain performed well. The average efficiency of the rail leg was 97.34%, while the average efficiencies of the mine and the port were 97% and 95.44%, respectively. The results also show that the CSCEM can be used by South African firms to measure their levels of supply chain efficiency. This article concludes with the benefits of the CSCEM.
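Using the reported link efficiencies, a naive composite score can be computed as a plain average; the CSCEM's actual aggregation scheme is not described here in enough detail to reproduce, so this is only an illustration of combining per-link scores.

```python
# Link efficiencies (percent) reported in the Sishen-Saldanha case study.
link_efficiency = {"mine": 97.00, "rail": 97.34, "port": 95.44}

# A naive composite: the unweighted mean of the link efficiencies.
# (The actual CSCEM weighting scheme is not reproduced here.)
composite = sum(link_efficiency.values()) / len(link_efficiency)
print(round(composite, 2))  # -> 96.59
```

A weighted scheme (e.g., by tonnage or cost per link) would generally be preferable, since a bottleneck link matters more than its unweighted share suggests.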

  16. Energy efficiency of selected OECD countries: A slacks based model with undesirable outputs

    International Nuclear Information System (INIS)

    Apergis, Nicholas; Aye, Goodness C.; Barros, Carlos Pestana; Gupta, Rangan; Wanke, Peter

    2015-01-01

    This paper presents an efficiency assessment of selected OECD countries using a Slacks Based Model with undesirable or bad outputs (SBM-Undesirable). SBM-Undesirable is first used in a two-stage approach to assess the relative efficiency of OECD countries using the indicators most frequently adopted in the energy efficiency literature. In the second stage, GLMM–MCMC methods are combined with the SBM-Undesirable results to produce a model for energy performance with effective predictive ability. The results reveal different impacts of contextual variables, such as economic blocks and capital–labor ratio, on energy efficiency levels. - Highlights: • We analyze the energy efficiency of selected OECD countries. • SBM-Undesirable and MCMC–GLMM are combined for this purpose. • Efficiency levels are high but declining over time. • Analysis with contextual variables shows varying efficiency levels across groups. • Capital-intensive countries are more energy efficient than labor-intensive countries.

  17. Computationally efficient model predictive control algorithms a neural network approach

    CERN Document Server

    Ławryńczuk, Maciej

    2014-01-01

    This book thoroughly discusses computationally efficient (suboptimal) Model Predictive Control (MPC) techniques based on neural models. The subjects treated include:
    • A few types of suboptimal MPC algorithms in which a linear approximation of the model or of the predicted trajectory is successively calculated on-line and used for prediction.
    • Implementation details of the MPC algorithms for feedforward perceptron neural models, neural Hammerstein models, neural Wiener models and state-space neural models.
    • The MPC algorithms based on neural multi-models (inspired by the idea of predictive control).
    • The MPC algorithms with neural approximation with no on-line linearization.
    • The MPC algorithms with guaranteed stability and robustness.
    • Cooperation between the MPC algorithms and set-point optimization.
    Thanks to linearization (or neural approximation), the presented suboptimal algorithms do not require d...

  18. An analysis of partial efficiencies of energy utilisation of different macronutrients by barramundi (Lates calcarifer) shows that starch restricts protein utilisation in carnivorous fish.

    Science.gov (United States)

    Glencross, Brett D; Blyth, David; Bourne, Nicholas; Cheers, Susan; Irvin, Simon; Wade, Nicholas M

    2017-02-01

    This study examined the effect of including different dietary proportions of starch, protein and lipid, in diets balanced for digestible energy, on the utilisation efficiencies of dietary energy by barramundi (Lates calcarifer). Each diet was fed at one of three ration levels (satiety, 80 % of initial satiety and 60 % of initial satiety) for a 42-d period. Fish performance measures (weight gain, feed intake and feed conversion ratio) were all affected by dietary energy source. The efficiency of energy utilisation was significantly reduced in fish fed the starch diet relative to the other diets, but there were no significant effects between the other macronutrients. This reduction in efficiency of utilisation was derived from a multifactorial change in both protein and lipid utilisation. The rate of protein utilisation deteriorated as the amount of starch included in the diet increased. Lipid utilisation was most dramatically affected by inclusion levels of lipid in the diet, with diets low in lipid producing component lipid utilisation rates well above 1.3, which indicates substantial lipid synthesis from other energy sources. However, the energetic cost of lipid gain was as low as 0.65 kJ per kJ of lipid deposited, indicating that barramundi very efficiently store energy in the form of lipid, particularly from dietary starch energy. This study defines how the utilisation efficiency of dietary digestible energy by barramundi is influenced by the macronutrient source providing that energy, and that the inclusion of starch causes problems with protein utilisation in this species.

  19. Development of multicriteria models to classify energy efficiency alternatives

    International Nuclear Information System (INIS)

    Neves, Luis Pires; Antunes, Carlos Henggeler; Dias, Luis Candido; Martins, Antonio Gomes

    2005-01-01

    This paper describes a novel constructive approach to developing decision support models for classifying energy efficiency initiatives, including traditional Demand-Side Management and Market Transformation initiatives, overcoming the limitations and drawbacks of Cost-Benefit Analysis. A multicriteria approach based on the ELECTRE-TRI method is used, focusing on four perspectives: an independent agency with the aim of promoting energy efficiency; distribution-only utilities under a regulated framework; the regulator; and supply companies in a competitive liberalized market. These perspectives were chosen after a system analysis of the decision situation regarding the implementation of energy efficiency initiatives, looking at the main roles and power relations, with the purpose of structuring the decision problem by identifying the actors, the decision makers, the decision paradigm, and the relevant criteria. The multicriteria models developed allow different kinds of impacts to be considered while avoiding difficult measurements and unit conversions, owing to the nature of the multicriteria method chosen. The decision is then based on all the significant effects of the initiative, both positive and negative, including ancillary effects often forgotten in Cost-Benefit Analysis. ELECTRE-TRI, like most multicriteria methods, gives the decision maker control over the relevance each impact has on the final decision. The decision support process encompasses a robustness analysis which, together with good documentation of the parameters supplied to the model, should support sound decisions. The models were tested with a set of real-world initiatives and compared with possible decisions based on Cost-Benefit Analysis.

  20. Modeling low cost hybrid tandem photovoltaics with the potential for efficiencies exceeding 20%

    KAUST Repository

    Beiley, Zach M.; McGehee, Michael D.

    2012-01-01

    … that can be printed on top of one of a variety of more traditional inorganic solar cells. Our modeling shows that an organic solar cell may be added on top of a commercial CIGS cell to improve its efficiency from 15.1% to 21.4%, thereby reducing the cost …

  1. FASTSim: A Model to Estimate Vehicle Efficiency, Cost and Performance

    Energy Technology Data Exchange (ETDEWEB)

    Brooker, A.; Gonder, J.; Wang, L.; Wood, E.; Lopp, S.; Ramroth, L.

    2015-05-04

    The Future Automotive Systems Technology Simulator (FASTSim) is a high-level advanced vehicle powertrain systems analysis tool supported by the U.S. Department of Energy’s Vehicle Technologies Office. FASTSim provides a quick and simple approach to compare powertrains and estimate the impact of technology improvements on light- and heavy-duty vehicle efficiency, performance, cost, and battery life across batches of real-world drive cycles. FASTSim’s calculation framework and balance among detail, accuracy, and speed enable it to simulate thousands of driven miles in minutes. The key components and vehicle outputs have been validated by comparing the model outputs to test data for many different vehicles to provide confidence in the results. A graphical user interface makes FASTSim easy and efficient to use. FASTSim is freely available for download from the National Renewable Energy Laboratory’s website (see www.nrel.gov/fastsim).

  2. Plot showing ATLAS limits on Standard Model Higgs production in the mass range 100-600 GeV

    CERN Multimedia

    ATLAS Collaboration

    2011-01-01

    The combined upper limit on the Standard Model Higgs boson production cross section divided by the Standard Model expectation as a function of mH is indicated by the solid line. This is a 95% CL limit using the CLs method in the entire mass range. The dotted line shows the median expected limit in the absence of a signal, and the green and yellow bands reflect the corresponding 68% and 95% expected ranges.

  3. Plot showing ATLAS limits on Standard Model Higgs production in the mass range 110-150 GeV

    CERN Multimedia

    ATLAS Collaboration

    2011-01-01

    The combined upper limit on the Standard Model Higgs boson production cross section divided by the Standard Model expectation as a function of mH is indicated by the solid line. This is a 95% CL limit using the CLs method in the low mass range. The dotted line shows the median expected limit in the absence of a signal, and the green and yellow bands reflect the corresponding 68% and 95% expected ranges.

  4. Deformation data modeling through numerical models: an efficient method for tracking magma transport

    Science.gov (United States)

    Charco, M.; Gonzalez, P. J.; Galán del Sastre, P.

    2017-12-01

    Nowadays, multivariate collected data and robust physical models at volcano observatories are becoming crucial for providing effective volcano monitoring. Nevertheless, the forecast of volcanic eruptions is notoriously difficult. Within this frame, one of the most promising methods to evaluate volcano hazard is the use of surface ground deformation, and in the last decades many developments in the field of deformation modeling have been achieved. In particular, numerical modeling allows realistic media features such as topography and crustal heterogeneities to be included, although it is still very time consuming to solve the inverse problem for near-real-time interpretations. Here, we present a method that can be efficiently used to estimate the location and evolution of magmatic sources based on real-time surface deformation data and Finite Element (FE) models. Generally, the search for the best-fitting magmatic (point) source(s) is conducted over an array of 3-D locations extending below a predefined volume region, and the Green functions for all the array components have to be precomputed. We propose a FE model for the pre-computation of Green functions in a mechanically heterogeneous domain, which eventually will lead to a better description of the status of the volcanic area. The number of Green functions is reduced here to the number of observation points by using their reciprocity relationship. We present and test this methodology with an optimization method based on a Genetic Algorithm. Following synthetic and sensitivity tests to estimate the uncertainty of the model parameters, we apply the tool to magma tracking during the 2007 Kilauea volcano intrusion and eruption. We show how data inversion with numerical models can speed up source parameter estimation for a volcano showing signs of unrest.

  5. Mathematical modeling of efficient protocols to control glioma growth.

    Science.gov (United States)

    Branco, J R; Ferreira, J A; de Oliveira, Paula

    2014-09-01

    In this paper we propose a mathematical model to describe the evolution of glioma cells taking into account the viscoelastic properties of brain tissue. The mathematical model is established considering that the glioma cells are of two phenotypes: migratory and proliferative. The evolution of the migratory cells is described by a diffusion-reaction equation of non-Fickian type, deduced by considering a mass conservation law with a non-Fickian migratory mass flux. The evolution of the proliferative cells is described by a reaction equation. A stability analysis that leads to the design of efficient protocols is presented. Numerical simulations that illustrate the behavior of the mathematical model are included. Copyright © 2014 Elsevier Inc. All rights reserved.
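    A generic form of the two-phenotype system described above, with an exponential memory kernel standing in for the non-Fickian flux, might read as follows (the symbols D, D_v, τ, f, g are placeholders, not the authors' exact equations):

```latex
% Sketch: u = migratory density, v = proliferative density
\frac{\partial u}{\partial t}
  = \nabla\cdot\big(D\,\nabla u\big)
  + \frac{1}{\tau}\int_0^t e^{-(t-s)/\tau}\,
      \nabla\cdot\big(D_v\,\nabla u(s)\big)\,\mathrm{d}s
  + f(u,v),
\qquad
\frac{\partial v}{\partial t} = g(u,v)
```

    Here τ is a relaxation time encoding the viscoelastic memory of the tissue, the integral term is the non-Fickian correction to the classical diffusive flux, and f, g collect proliferation and phenotype-switching reactions.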

  6. An Application on Merton Model in the Non-efficient Market

    Science.gov (United States)

    Feng, Yanan; Xiao, Qingxian

    The Merton model is one of the most famous credit risk models. It presumes that the only source of uncertainty in equity prices is the firm’s net asset value. But this market condition holds only when the market is efficient, which has often been ignored in modern research. Moreover, the original Merton model is based on the assumptions that, in the event of default, absolute priority holds, renegotiation is not permitted, and liquidation of the firm is costless; in the Merton model and most of its modified versions, the default boundary is assumed to be constant, which does not correspond to reality. These assumptions can reduce the predictive power of the model. In this paper, we extend some of the assumptions underlying the original model; the resulting model is a modification of Merton’s. In a non-efficient market, we use stock data to analyze this model. The results show that the modified model can evaluate credit risk well in a non-efficient market.
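    For reference, the textbook (unmodified) Merton setup values equity as a European call on firm assets. A minimal sketch with illustrative inputs, not the paper's modified version:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def merton(V, D, r, sigma_V, T):
    """Equity value and risk-neutral default probability.

    Equity is a European call on firm assets V with strike D (face value
    of debt) maturing at T; default occurs if V_T < D.
    """
    d1 = (math.log(V / D) + (r + 0.5 * sigma_V ** 2) * T) / (sigma_V * math.sqrt(T))
    d2 = d1 - sigma_V * math.sqrt(T)
    equity = V * norm_cdf(d1) - D * math.exp(-r * T) * norm_cdf(d2)
    pd = norm_cdf(-d2)   # P(V_T < D) under the risk-neutral measure
    return equity, pd

# Illustrative firm: assets 120, debt face value 100, 25% asset volatility
equity, pd = merton(V=120.0, D=100.0, r=0.03, sigma_V=0.25, T=1.0)
```

    The modifications discussed in the record (non-constant default boundary, renegotiation) change d2 and hence the implied default probability; the sketch only fixes the baseline being modified.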

  7. Models for electricity market efficiency and bidding strategy analysis

    Science.gov (United States)

    Niu, Hui

    This dissertation studies models for the analysis of market efficiency and bidding behaviors of market participants in electricity markets. Simulation models are developed to estimate how transmission and operational constraints affect the competitive benchmark and market prices based on submitted bids. This research contributes to the literature in three aspects. First, transmission and operational constraints, which have been neglected in most empirical literature, are considered in the competitive benchmark estimation model. Second, the effects of operational and transmission constraints on market prices are estimated through two models based on the submitted bids of market participants. Third, these models are applied to analyze the efficiency of the Electric Reliability Council Of Texas (ERCOT) real-time energy market by simulating its operations for the time period from January 2002 to April 2003. The characteristics and available information for the ERCOT market are considered. In electricity markets, electric firms compete through both spot market bidding and bilateral contract trading. A linear asymmetric supply function equilibrium (SFE) model with transmission constraints is proposed in this dissertation to analyze the bidding strategies with forward contracts. The research contributes to the literature in several aspects. First, we combine forward contracts, transmission constraints, and multi-period strategy (an obligation for firms to bid consistently over an extended time horizon such as a day or an hour) into the linear asymmetric supply function equilibrium framework. As an ex-ante model, it can provide qualitative insights into firms' behaviors. Second, the bidding strategies related to Transmission Congestion Rights (TCRs) are discussed by interpreting TCRs as linear combination of forwards. Third, the model is a general one in the sense that there is no limitation on the number of firms and scale of the transmission network, which can have

  8. Tool Efficiency Analysis model research in SEMI industry

    Directory of Open Access Journals (Sweden)

    Lei Ma

    2018-01-01

    Full Text Available One of the key goals in the SEMI industry is to improve equipment throughput and ensure that equipment production efficiency is maximized. This paper is based on SEMI standards in semiconductor equipment control; it defines the transaction rules between different tool states, and presents a TEA system model that analyzes tool performance automatically based on a finite state machine. The system was applied to fab tools and its effectiveness was verified successfully; it obtained the parameter values used to measure equipment performance, as well as suggestions for improvement.

  9. An Efficient Null Model for Conformational Fluctuations in Proteins

    DEFF Research Database (Denmark)

    Harder, Tim Philipp; Borg, Mikael; Bottaro, Sandro

    2012-01-01

    Protein dynamics play a crucial role in function, catalytic activity, and pathogenesis. Consequently, there is great interest in computational methods that probe the conformational fluctuations of a protein. However, molecular dynamics simulations are computationally costly and therefore are often limited to comparatively short timescales. TYPHON is a probabilistic method to explore the conformational space of proteins under the guidance of a sophisticated probabilistic model of local structure and a given set of restraints that represent nonlocal interactions, such as hydrogen bonds or disulfide bridges. … on conformational fluctuations that is in correspondence with experimental measurements. TYPHON provides a flexible, yet computationally efficient, method to explore possible conformational fluctuations in proteins.

  10. Thermal Efficiency Degradation Diagnosis Method Using Regression Model

    International Nuclear Information System (INIS)

    Jee, Chang Hyun; Heo, Gyun Young; Jang, Seok Won; Lee, In Cheol

    2011-01-01

    This paper proposes an idea for thermal efficiency degradation diagnosis in turbine cycles, based on turbine cycle simulation under abnormal conditions and a linear regression model. The correlation between the inputs representing degradation conditions (normally unmeasured but intrinsic states) and the simulation outputs (normally measured but superficial states) was analyzed with the linear regression model. The regression models can inversely infer the associated intrinsic state from a superficial state observed in a power plant. The diagnosis method proposed herein comprises three processes: 1) simulations of degradation conditions to obtain measured states (referred to as the what-if method), 2) development of the linear model correlating intrinsic and superficial states, and 3) determination of an intrinsic state using the superficial states of the current plant and the linear regression model (referred to as the inverse what-if method). The what-if method generates the outputs for inputs including various root causes and/or boundary conditions, whereas the inverse what-if method calculates the inverse matrix with the given superficial states, that is, component degradation modes. The method suggested in this paper was validated using the turbine cycle model for an operating power plant.
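    The three-step what-if / inverse what-if idea can be illustrated with a toy linear example (all quantities and numbers below are hypothetical, not the paper's plant data):

```python
# 1) "What-if": simulated pairs of (degradation level -> measured deviations
#    [efficiency drop %, exhaust temperature rise K]); numbers are invented.
sims = [(0.0, (0.0, 0.0)), (1.0, (0.8, 5.0)), (2.0, (1.6, 10.0))]

# 2) Fit y_i = a_i * x for each output by least squares through the origin.
xs = [x for x, _ in sims]
a = [sum(x * y[i] for x, y in sims) / sum(x * x for x in xs) for i in (0, 1)]

# 3) "Inverse what-if": recover the degradation level from observed deviations
#    via the pseudo-inverse of the fitted linear map.
def invert(y_obs):
    return sum(ai * yi for ai, yi in zip(a, y_obs)) / sum(ai * ai for ai in a)

x_hat = invert((1.2, 7.5))  # -> 1.5: plant sits between degradation levels 1 and 2
```

    The real method works with a matrix of many intrinsic and superficial states, but the forward-fit-then-invert structure is the same.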

  11. A traction control strategy with an efficiency model in a distributed driving electric vehicle.

    Science.gov (United States)

    Lin, Cheng; Cheng, Xingqun

    2014-01-01

    Both active safety and fuel economy are important issues for vehicles. This paper focuses on a traction control strategy with an efficiency model in a distributed driving electric vehicle. In emergency situations, a sliding mode control algorithm was employed to achieve antislip control by keeping the wheels' slip ratios below 20%. For general longitudinal driving cases, an efficiency model aiming at improving the fuel economy was built through an offline optimization stream within the two-dimensional design space composed of the acceleration pedal signal and the vehicle speed. The sliding mode control strategy for the joint roads and the efficiency model for the typical drive cycles were simulated. Simulation results show that the proposed driving control approach has the potential to apply to different road surfaces. It keeps the wheels' slip ratios within the stable zone and improves the fuel economy on the premise of tracking the driver's intention.
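    A minimal sketch of the antislip idea follows; the sliding surface, gain, and torque-cut form are assumptions for illustration, and the paper's controller is considerably more elaborate:

```python
def slip_ratio(v_wheel, v_vehicle):
    """Driving slip: (wheel speed - vehicle speed) / wheel speed."""
    return (v_wheel - v_vehicle) / max(v_wheel, 1e-6)

def smc_torque(v_wheel, v_vehicle, t_request, slip_target=0.15, k=300.0):
    """Cut the requested torque (Nm) when slip exceeds the target."""
    s = slip_ratio(v_wheel, v_vehicle) - slip_target   # sliding surface
    sign = (s > 0) - (s < 0)                           # signum of s
    return min(t_request, t_request - k * sign)

# Wheel spinning up on a slippery patch: slip = 3/11, about 27% > 15% target
t_cut = smc_torque(v_wheel=11.0, v_vehicle=8.0, t_request=500.0)   # 200.0
# Normal rolling: slip is about 4.8%, the request passes through unchanged
t_ok = smc_torque(v_wheel=10.5, v_vehicle=10.0, t_request=500.0)   # 500.0
```

    Driving the surface s to zero holds slip near the target, which is what keeps the wheels inside the stable zone of the tire curve.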

  13. Efficient dynamic modeling of manipulators containing closed kinematic loops

    Science.gov (United States)

    Ferretti, Gianni; Rocco, Paolo

    An approach to efficiently solve the forward dynamics problem for manipulators containing closed chains is proposed. The two main distinctive features of this approach are: the dynamics of the equivalent open-loop tree structures (any closed loop can in general be modeled by imposing some additional kinematic constraints on a suitable tree structure) are computed through an efficient Newton-Euler formulation; and the constraint equations relative to the most commonly adopted closed chains in industrial manipulators are explicitly solved, thus overcoming the redundancy of Lagrange's multipliers method while avoiding the inefficiency due to a numerical solution of the implicit constraint equations. The constraint equations considered for an explicit solution are those imposed by articulated gear mechanisms and planar closed chains (pantograph-type structures). Articulated gear mechanisms are actually used in all industrial robots to transmit motion from actuators to links, while planar closed chains are usefully employed to increase the stiffness of manipulators and their load capacity, as well as to reduce the kinematic coupling of joint axes. The accuracy and the efficiency of the proposed approach are shown through a simulation test.

  14. Contractual Efficiency of PPP Infrastructure Projects: An Incomplete Contract Model

    Directory of Open Access Journals (Sweden)

    Lei Shi

    2018-01-01

    Full Text Available This study analyses the contractual efficiency of public-private partnership (PPP) infrastructure projects, with a focus on two financial aspects: the nonrecourse principal and incompleteness of debt contracts. The nonrecourse principal releases the sponsoring companies from the debt contract when the special purpose vehicle (SPV) established by the sponsoring companies falls into default. Consequently, all obligations under the debt contract are limited to the liability of the SPV following its default. Because the debt contract is incomplete, a renegotiation of an additional loan between the bank and the SPV might occur to enable project continuation or liquidation, which in turn influences the SPV’s ex ante strategies (moral hazard). Considering these two financial features of PPP infrastructure projects, this study develops an incomplete contract model to investigate how the renegotiation triggers ex ante moral hazard and ex post inefficient liquidation. We derive equilibrium strategies under service fees endogenously determined via bidding and examine the effect of equilibrium strategies on contractual efficiency. Finally, we propose an optimal combination of a performance guarantee, the government’s termination right, and a service fee to improve the contractual efficiency of PPP infrastructure projects.

  15. An efficient Trojan delivery of tetrandrine by poly(N-vinylpyrrolidone-block-poly(ε-caprolactone (PVP-b-PCL nanoparticles shows enhanced apoptotic induction of lung cancer cells and inhibition of its migration and invasion

    Directory of Open Access Journals (Sweden)

    Xu H

    2013-12-01

    Full Text Available Huae Xu,1,2 Zhibo Hou,3 Hao Zhang,4 Hui Kong,2 Xiaolin Li,4 Hong Wang,2 Weiping Xie2; 1Department of Pharmacy, 2Department of Respiratory Medicine, The First Affiliated Hospital of Nanjing Medical University, Nanjing, People's Republic of China; 3First Department of Respiratory Medicine, Nanjing Chest Hospital, Nanjing, People's Republic of China; 4Department of Geriatric Gastroenterology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, People's Republic of China. Abstract: Earlier studies have demonstrated the promising antitumor effect of tetrandrine (Tet) against a series of cancers. However, the poor solubility of Tet limits its application, while its hydrophobicity makes Tet a potential model drug for nanodelivery systems. We report on a simple way of preparing drug-loaded nanoparticles formed by amphiphilic poly(N-vinylpyrrolidone)-block-poly(ε-caprolactone) (PVP-b-PCL) copolymers with Tet as a model drug. The mean diameters of Tet-loaded PVP-b-PCL nanoparticles (Tet-NPs) were between 110 nm and 125 nm with a negative zeta potential slightly below 0 mV. Tet was incorporated into PVP-b-PCL nanoparticles with high loading efficiency. Different feeding ratios showed different influences on the sizes, zeta potentials, and drug loading efficiencies of Tet-NPs. An in vitro release study shows the sustained release pattern of Tet-NPs. It is shown that the uptake of Tet-NPs is mainly mediated by the endocytosis of nanoparticles, which is more efficient than the filtration of free Tet. Further experiments including fluorescence-activated cell sorting and Western blotting indicated that this Trojan strategy of delivering Tet in PVP-b-PCL nanoparticles via endocytosis leads to enhanced induction of apoptosis in the non-small cell lung cancer A549 cell line; enhanced apoptosis is achieved by inhibiting the expression of anti-apoptotic Bcl-2 and Bcl-xL proteins. Moreover, Tet-NPs more efficiently inhibit the ability of cell migration and

  16. Robust and efficient parameter estimation in dynamic models of biological systems.

    Science.gov (United States)

    Gábor, Attila; Banga, Julio R

    2015-10-29

    Dynamic modelling provides a systematic framework to understand function in biological systems. Parameter estimation in nonlinear dynamic models remains a very challenging inverse problem due to its nonconvexity and ill-conditioning. Associated issues like overfitting and local solutions are usually not properly addressed in the systems biology literature despite their importance. Here we present a method for robust and efficient parameter estimation which uses two main strategies to surmount the aforementioned difficulties: (i) efficient global optimization to deal with nonconvexity, and (ii) proper regularization methods to handle ill-conditioning. In the case of regularization, we present a detailed critical comparison of methods and guidelines for properly tuning them. Further, we show how regularized estimations ensure the best trade-offs between bias and variance, reducing overfitting, and allowing the incorporation of prior knowledge in a systematic way. We illustrate the performance of the presented method with seven case studies of different nature and increasing complexity, considering several scenarios of data availability, measurement noise and prior knowledge. We show how our method ensures improved estimations with faster and more stable convergence. We also show how the calibrated models are more generalizable. Finally, we give a set of simple guidelines to apply this strategy to a wide variety of calibration problems. Here we provide a parameter estimation strategy which combines efficient global optimization with a regularization scheme. This method is able to calibrate dynamic models in an efficient and robust way, effectively fighting overfitting and allowing the incorporation of prior information.
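    The regularization idea described above can be sketched on a toy calibration problem; the model, data, prior and weights below are invented for illustration, and a crude grid search stands in for the paper's efficient global optimizer:

```python
import math

# Noisy observations of x(t) = exp(-k_true * t) with k_true near 0.5 (invented)
t_obs = [0.0, 1.0, 2.0, 3.0]
x_obs = [1.00, 0.62, 0.36, 0.23]

k_prior, lam = 0.4, 0.05  # prior guess for the decay rate and its weight

def cost(k):
    """Data misfit plus an L2 (Tikhonov-style) penalty toward the prior."""
    misfit = sum((math.exp(-k * t) - x) ** 2 for t, x in zip(t_obs, x_obs))
    return misfit + lam * (k - k_prior) ** 2

# Global search over a bounded parameter range
k_hat = min((0.01 * i for i in range(1, 200)), key=cost)
```

    The penalty pulls the estimate slightly toward the prior, trading a little bias for lower variance, which is the overfitting control the record describes.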

  17. Efficient Modeling and Migration in Anisotropic Media Based on Prestack Exploding Reflector Model and Effective Anisotropy

    KAUST Repository

    Wang, Hui

    2014-01-01

    This thesis addresses the efficiency improvement of seismic wave modeling and migration in anisotropic media. This improvement becomes crucial in practice as the process of imaging complex geological structures of the Earth's subsurface requires …

  18. More efficient evolutionary strategies for model calibration with watershed model for demonstration

    Science.gov (United States)

    Baggett, J. S.; Skahill, B. E.

    2008-12-01

    Evolutionary strategies allow automatic calibration of more complex models than traditional gradient-based approaches, but they are more computationally intensive. We present several efficiency enhancements for evolution strategies, many of which are not new, but which when combined have been shown to dramatically decrease the number of model runs required for calibration of synthetic problems. To reduce the number of expensive model runs we employ a surrogate objective function for an adaptively determined fraction of the population at each generation (Kern et al., 2006). We demonstrate improvements to the adaptive ranking strategy that increase its efficiency while sacrificing little reliability, and further reduce the number of model runs required in densely sampled parts of parameter space. Furthermore, we include a gradient individual in each generation that is usually not selected when the search is in a global phase or when the derivatives are poorly approximated, but which, when selected near a smooth local minimum, can dramatically increase convergence speed (Tahk et al., 2007). Finally, the selection of the gradient individual is used to adapt the size of the population near local minima. We show, by incorporating these enhancements into the Covariance Matrix Adaptation Evolution Strategy (CMAES; Hansen, 2006), that their synergistic effect is greater than the sum of their individual parts. This hybrid evolutionary strategy exploits smooth structure when it is present but degrades to an ordinary evolutionary strategy, at worst, if smoothness is not present. Calibration of 2D-3D synthetic models with the modified CMAES requires approximately 10%-25% of the model runs of ordinary CMAES. A preliminary demonstration of this hybrid strategy will be shown for watershed model calibration problems. Hansen, N. (2006). The CMA Evolution Strategy: A Comparing Review. In J.A. Lozano, P. Larrañaga, I. Inza and E. Bengoetxea (Eds.), Towards a New Evolutionary Computation: Advances in Estimation of Distribution Algorithms.

  19. Rice growing farmers efficiency measurement using a slack based interval DEA model with undesirable outputs

    Science.gov (United States)

    Khan, Sahubar Ali Mohd. Nadhar; Ramli, Razamin; Baten, M. D. Azizul

    2017-11-01

    It is also found that the average efficiency values of all farmers for the deterministic case are always lower than in the optimistic scenario and higher than in the pessimistic scenario. The results confirm the hypothesis, since farmers who operate in the optimistic scenario are in the best production situation, whereas in the pessimistic scenario they operate in the worst production situation. The results show that the proposed model can be applied when data uncertainty is present in the production environment.

  20. Plectasin shows intracellular activity against Staphylococcus aureus in human THP-1 monocytes and in a mouse peritonitis model

    DEFF Research Database (Denmark)

    Brinch, Karoline Sidelmann; Sandberg, Anne; Baudoux, Pierre

    2009-01-01

    … was maintained (maximal relative efficacy [E(max)], 1.0- to 1.3-log reduction in CFU) even though efficacy was inferior to that of extracellular killing (E(max), >4.5-log CFU reduction). Animal studies included a novel use of the mouse peritonitis model, exploiting extra- and intracellular differentiation assays … concentration. These findings stress the importance of performing studies of extra- and intracellular activity since these features cannot be predicted from traditional MIC and killing kinetic studies. Application of both the THP-1 and the mouse peritonitis models showed that the in vitro results were similar …

  1. Performance of Linear and Nonlinear Two-Leaf Light Use Efficiency Models at Different Temporal Scales

    DEFF Research Database (Denmark)

    Wu, Xiaocui; Ju, Weimin; Zhou, Yanlian

    2015-01-01

    The reliable simulation of gross primary productivity (GPP) at various spatial and temporal scales is of significance to quantifying the net exchange of carbon between terrestrial ecosystems and the atmosphere. This study aimed to verify the ability of a nonlinear two-leaf model (TL-LUEn), a linear...... two-leaf model (TL-LUE), and a big-leaf light use efficiency model (MOD17) to simulate GPP at half-hourly, daily and 8-day scales using GPP derived from 58 eddy-covariance flux sites in Asia, Europe and North America as benchmarks. Model evaluation showed that the overall performance of TL...

  2. Efficient Vaccine Distribution Based on a Hybrid Compartmental Model.

    Directory of Open Access Journals (Sweden)

    Zhiwen Yu

    Full Text Available To effectively and efficiently reduce the morbidity and mortality that may be caused by outbreaks of emerging infectious diseases, it is very important for public health agencies to make informed decisions for controlling the spread of the disease. Such decisions must incorporate various kinds of intervention strategies, such as vaccinations, school closures and border restrictions. Recently, researchers have paid increased attention to searching for effective vaccine distribution strategies for reducing the effects of pandemic outbreaks when resources are limited. Most of the existing research work has been focused on how to design an effective age-structured epidemic model and to select a suitable vaccine distribution strategy to prevent the propagation of an infectious virus. Models that evaluate age structure effects are common, but models that additionally evaluate geographical effects are less common. In this paper, we propose a new SEIR (susceptible-exposed-infectious-recovered) model, named the hybrid SEIR-V model (HSEIR-V), which considers not only the dynamics of infection prevalence in several age-specific host populations, but also seeks to characterize the dynamics by which a virus spreads in various geographic districts. Several vaccination strategies such as different kinds of vaccine coverage, different vaccine releasing times and different vaccine deployment methods are incorporated into the HSEIR-V compartmental model. We also design four hybrid vaccination distribution strategies (based on population size, contact pattern matrix, infection rate and infectious risk) for controlling the spread of viral infections. Based on data from the 2009-2010 H1N1 influenza epidemic, we evaluate the effectiveness of our proposed HSEIR-V model and study the effects of different types of human behaviour in responding to epidemics.
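
The compartmental backbone of such models can be sketched in a few lines. The following is a minimal, single-population SEIR integration by forward Euler, not the paper's age- and geography-structured HSEIR-V model; all parameter values are illustrative.

```python
def seir_step(s, e, i, r, beta, sigma, gamma, dt):
    """One forward-Euler step of a basic SEIR compartmental model
    (fractions of a closed population; no age or spatial structure)."""
    n = s + e + i + r
    inf = beta * s * i / n * dt      # susceptible -> exposed
    onset = sigma * e * dt           # exposed -> infectious
    rec = gamma * i * dt             # infectious -> recovered
    return s - inf, e + inf - onset, i + onset - rec, r + rec

def simulate(days=200, dt=0.05, beta=0.5, sigma=1 / 3, gamma=1 / 7):
    # beta: contact rate; 1/sigma: latent period; 1/gamma: infectious period
    s, e, i, r = 0.99, 0.0, 0.01, 0.0
    peak = i
    for _ in range(int(days / dt)):
        s, e, i, r = seir_step(s, e, i, r, beta, sigma, gamma, dt)
        peak = max(peak, i)
    return s, e, i, r, peak

s, e, i, r, peak = simulate()
```

Vaccination strategies of the kind compared in the paper would enter as an additional flow from `s` to `r` (or to a separate vaccinated compartment), with the allocation rule — by population size, contact matrix, infection rate, or risk — deciding how that flow is split across subpopulations.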

  3. Electrodynamical Model of Quasi-Efficient Financial Markets

    Science.gov (United States)

    Ilinski, Kirill N.; Stepanenko, Alexander S.

    The modelling of financial markets presents a problem which is both theoretically challenging and practically important. The theoretical aspects concern the issue of market efficiency which may even have political implications [1], whilst the practical side of the problem has clear relevance to portfolio management [2] and derivative pricing [3]. Up till now all market models contain "smart money" traders and "noise" traders whose joint activity constitutes the market [4, 5]. On a short time scale this traditional separation does not seem to be realistic, and is hardly acceptable since all high-frequency market participants are professional traders and cannot be separated into "smart" and "noisy." In this paper we present a "microscopic" model with homogeneous quasi-rational behaviour of traders, aiming to describe short time market behaviour. To construct the model we use an analogy between "screening" in quantum electrodynamics and an equilibration process in a market with temporal mispricing [6, 7]. As a result, we obtain the time-dependent distribution function of the returns which is in quantitative agreement with real market data and obeys the anomalous scaling relations recently reported for high-frequency exchange rates [8], the S&P500 [9] and other stock market indices [10, 11].

  4. Experimental results showing the internal three-component velocity field and outlet temperature contours for a model gas turbine combustor

    CSIR Research Space (South Africa)

    Meyers, BC

    2011-09-01

    Full Text Available by the American Institute of Aeronautics and Astronautics Inc. All rights reserved. ISABE-2011-1129. [Nomenclature: c, position identifier; F, fuel; i, index; L, (combustor) liner; OP, orifice plate.] There are often inconsistencies when comparing experimental and Computational Fluid Dynamics (CFD) simulations for gas turbine combustors [1...

  5. Relative efficiency of joint-model and full-conditional-specification multiple imputation when conditional models are compatible: The general location model.

    Science.gov (United States)

    Seaman, Shaun R; Hughes, Rachael A

    2018-06-01

    Estimating the parameters of a regression model of interest is complicated by missing data on the variables in that model. Multiple imputation is commonly used to handle these missing data. Joint model multiple imputation and full-conditional specification multiple imputation are known to yield imputed data with the same asymptotic distribution when the conditional models of full-conditional specification are compatible with that joint model. We show that this asymptotic equivalence of imputation distributions does not imply that joint model multiple imputation and full-conditional specification multiple imputation will also yield asymptotically equally efficient inference about the parameters of the model of interest, nor that they will be equally robust to misspecification of the joint model. When the conditional models used by full-conditional specification multiple imputation are linear, logistic and multinomial regressions, these are compatible with a restricted general location joint model. We show that multiple imputation using the restricted general location joint model can be substantially more asymptotically efficient than full-conditional specification multiple imputation, but this typically requires very strong associations between variables. When associations are weaker, the efficiency gain is small. Moreover, full-conditional specification multiple imputation is shown to be potentially much more robust than joint model multiple imputation using the restricted general location model to misspecification of that model when there is substantial missingness in the outcome variable.

  6. A Computational Model of Pattern Separation Efficiency in the Dentate Gyrus with Implications in Schizophrenia

    Directory of Open Access Journals (Sweden)

    Faramarz eFaghihi

    2015-03-01

    Full Text Available Information processing in the hippocampus begins by transferring spiking activity of the Entorhinal Cortex (EC) into the Dentate Gyrus (DG). Activity patterns in the EC are separated by the DG such that it plays an important role in hippocampal functions including memory. The structural and physiological parameters of these neural networks enable the hippocampus to be efficient in encoding the large number of inputs that animals receive and process in their lifetime. The neural encoding capacity of the DG depends on its single-neuron encoding and pattern separation efficiency. In this study, encoding by the DG is modelled such that single-neuron and pattern separation efficiency are measured using simulations of different parameter values. For this purpose, a probabilistic model of single-neuron efficiency is presented to study the role of structural and physiological parameters. The known numbers of neurons in the EC and the DG are used to construct a neural network based on the electrophysiological features of neurons in the DG. Separated inputs, as activated neurons in the EC with different firing probabilities, are presented to the DG. For different connectivity rates between the EC and DG, pattern separation efficiency of the DG is measured. The results show that in the absence of feedback inhibition on the DG neurons, the DG demonstrates low separation efficiency and high firing frequency. Feedback inhibition can increase separation efficiency while resulting in very low single-neuron encoding efficiency in the DG and very low firing frequency of neurons in the DG (sparse spiking). This work presents a mechanistic explanation for experimental observations in the hippocampus, in combination with theoretical measures. Moreover, the model predicts a critical role for impaired inhibitory neurons in schizophrenia, where deficiency in pattern separation of the DG has been observed.

  7. A computational model of pattern separation efficiency in the dentate gyrus with implications in schizophrenia

    Science.gov (United States)

    Faghihi, Faramarz; Moustafa, Ahmed A.

    2015-01-01

    Information processing in the hippocampus begins by transferring spiking activity of the entorhinal cortex (EC) into the dentate gyrus (DG). Activity patterns in the EC are separated by the DG such that it plays an important role in hippocampal functions including memory. The structural and physiological parameters of these neural networks enable the hippocampus to be efficient in encoding the large number of inputs that animals receive and process in their lifetime. The neural encoding capacity of the DG depends on its single-neuron encoding and pattern separation efficiency. In this study, encoding by the DG is modeled such that single-neuron and pattern separation efficiency are measured using simulations of different parameter values. For this purpose, a probabilistic model of single-neuron efficiency is presented to study the role of structural and physiological parameters. The known numbers of neurons in the EC and the DG are used to construct a neural network based on the electrophysiological features of granule cells of the DG. Separated inputs, as activated neurons in the EC with different firing probabilities, are presented to the DG. For different connectivity rates between the EC and DG, pattern separation efficiency of the DG is measured. The results show that in the absence of feedback inhibition on the DG neurons, the DG demonstrates low separation efficiency and high firing frequency. Feedback inhibition can increase separation efficiency while resulting in very low single-neuron encoding efficiency in the DG and very low firing frequency of neurons in the DG (sparse spiking). This work presents a mechanistic explanation for experimental observations in the hippocampus, in combination with theoretical measures. Moreover, the model predicts a critical role for impaired inhibitory neurons in schizophrenia, where deficiency in pattern separation of the DG has been observed. PMID:25859189
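
The EC-to-DG architecture described in these two records can be caricatured in a few lines: a sparse random projection from a small EC layer to a much larger DG layer, with feedback inhibition modelled crudely as k-winners-take-all. This is a toy sketch with invented sizes and parameters, not the authors' probabilistic model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ec, n_dg, p_conn = 200, 1000, 0.1          # toy layer sizes and connectivity
W = (rng.random((n_dg, n_ec)) < p_conn).astype(float)

def dg_response(ec_pattern, k_winners=None, threshold=6.0):
    """DG output for an EC input pattern. k_winners=None means no feedback
    inhibition (plain threshold); otherwise only the k most driven granule
    cells fire (k-winners-take-all as a stand-in for feedback inhibition)."""
    drive = W @ ec_pattern
    if k_winners is None:
        return (drive > threshold).astype(float)
    out = np.zeros(n_dg)
    out[np.argsort(drive)[-k_winners:]] = 1.0
    return out

def overlap(a, b):
    # Normalized overlap (cosine similarity) between two binary patterns.
    return float(a @ b) / max(1.0, float(np.sqrt((a @ a) * (b @ b))))

# Two similar EC input patterns: the second shares ~80% of active units.
base = (rng.random(n_ec) < 0.2).astype(float)
noisy = base.copy()
flip = rng.choice(np.flatnonzero(base), size=int(0.2 * base.sum()), replace=False)
noisy[flip] = 0.0
noisy[rng.choice(np.flatnonzero(base == 0), size=len(flip), replace=False)] = 1.0

in_sim = overlap(base, noisy)
out_no_inh = overlap(dg_response(base), dg_response(noisy))
out_inh = overlap(dg_response(base, k_winners=50),
                  dg_response(noisy, k_winners=50))
```

Pattern separation corresponds to the DG output overlap falling below the input overlap `in_sim`; the abstract's observation is that inhibition buys separation at the price of very sparse, low-frequency firing, which here shows up as only 50 of 1000 units active.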

  8. Efficient semiparametric estimation in generalized partially linear additive models for longitudinal/clustered data

    KAUST Repository

    Cheng, Guang

    2014-02-01

    We consider efficient estimation of the Euclidean parameters in generalized partially linear additive models for longitudinal/clustered data when multiple covariates need to be modeled nonparametrically, and propose an estimation procedure based on a spline approximation of the nonparametric part of the model and the generalized estimating equations (GEE). Although the model in consideration is natural and useful in many practical applications, the literature on this model is very limited because of challenges in dealing with dependent data for nonparametric additive models. We show that the proposed estimators are consistent and asymptotically normal even if the covariance structure is misspecified. An explicit consistent estimate of the asymptotic variance is also provided. Moreover, we derive the semiparametric efficiency score and information bound under general moment conditions. By showing that our estimators achieve the semiparametric information bound, we effectively establish their efficiency in a stronger sense than what is typically considered for GEE. The derivation of our asymptotic results relies heavily on the empirical processes tools that we develop for the longitudinal/clustered data. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2014 ISI/BS.

  9. Integer Representations towards Efficient Counting in the Bit Probe Model

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Greve, Mark; Pandey, Vineet

    2011-01-01

    Abstract We consider the problem of representing numbers in close to optimal space and supporting increment, decrement, addition and subtraction operations efficiently. We study the problem in the bit probe model and analyse the number of bits read and written to perform the operations, both...... in the worst-case and in the average-case. A counter is space-optimal if it represents any number in the range [0,...,2 n  − 1] using exactly n bits. We provide a space-optimal counter which supports increment and decrement operations by reading at most n − 1 bits and writing at most 3 bits in the worst......-case. To the best of our knowledge, this is the first such representation which supports these operations by always reading strictly less than n bits. For redundant counters where we only need to represent numbers in the range [0,...,L] for some integer L bits, we define the efficiency...
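
As a point of comparison for the bit-probe bounds in this abstract, the ordinary binary counter reads and writes up to n bits on a single increment (when a long carry chain is flipped), whereas the paper's space-optimal representation reads at most n − 1 bits and writes at most 3. The baseline is easy to instrument; this sketch only counts probes for the standard counter and is not the paper's construction.

```python
def increment(bits):
    """Increment a little-endian binary counter in place, counting bit
    probes: one read per bit inspected, one write per bit changed."""
    reads = writes = 0
    for i in range(len(bits)):
        reads += 1
        if bits[i] == 0:
            bits[i] = 1
            writes += 1
            return reads, writes
        bits[i] = 0          # propagate the carry
        writes += 1
    return reads, writes     # all-ones input wraps around to zero

c = [1, 1, 1, 0]             # represents 7 in 4 bits (little-endian)
reads, writes = increment(c) # carry ripples through three ones
```

Although the worst case touches every bit, the amortized cost over a full counting cycle is constant, since bit i flips only every 2^i increments.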

  10. A CAD model for energy efficient offshore structures for desalination and energy generation

    Directory of Open Access Journals (Sweden)

    R. Rahul Dev,

    2016-09-01

    Full Text Available This paper presents a Computer Aided Design (CAD) model for the energy-efficient design of offshore structures. In the CAD model, preliminary dimensions and geometric details of an offshore structure (i.e., a semi-submersible) are optimized to achieve a favorable range of motion and thus reduce the energy consumed by the Dynamic Positioning System (DPS). The presented model allows the designer to select a configuration satisfying the user requirements, and integrates Computer Aided Design (CAD) with Computational Fluid Dynamics (CFD). The integration of CAD with CFD computes a hydrodynamically and energy-efficient hull form. Our results show that the implementation of the present model results in a design that can serve the user-specified requirements with lower cost and energy consumption.

  11. Food pattern modeling shows that the 2010 Dietary Guidelines for sodium and potassium cannot be met simultaneously

    Science.gov (United States)

    Maillot, Matthieu; Monsivais, Pablo; Drewnowski, Adam

    2013-01-01

    The 2010 US Dietary Guidelines recommended limiting intake of sodium to 1500 mg/d for people older than 50 years, African Americans, and those suffering from chronic disease. The guidelines recommended that all other people consume less than 2300 mg sodium and 4700 mg of potassium per day. The theoretical feasibility of meeting the sodium and potassium guidelines while simultaneously maintaining nutritional adequacy of the diet was tested using food pattern modeling based on linear programming. Dietary data from the National Health and Nutrition Examination Survey 2001-2002 were used to create optimized food patterns for 6 age-sex groups. Linear programming models determined the boundary conditions for the potassium and sodium content of the modeled food patterns that would also be compatible with other nutrient goals. Linear programming models also sought to determine the amounts of sodium and potassium that both would be consistent with the ratio of Na to K of 0.49 and would cause the least deviation from the existing food habits. The 6 sets of food patterns were created before and after an across-the-board 10% reduction in sodium content of all foods in the Food and Nutrition Database for Dietary Studies. Modeling analyses showed that the 2010 Dietary Guidelines for sodium were incompatible with potassium guidelines and with nutritionally adequate diets, even after reducing the sodium content of all US foods by 10%. Feasibility studies should precede or accompany the issuing of dietary guidelines to the public. PMID:23507224
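
The feasibility question at the heart of this study — can any food pattern satisfy a sodium ceiling and a potassium floor simultaneously — is a linear programming problem. A minimal sketch of such a feasibility check is below, using `scipy.optimize.linprog` with two invented foods; the nutrient values and caps are illustrative, not the NHANES data or the authors' full model.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical per-serving nutrient contents (mg); servings must be >= 0.
sodium = np.array([300.0, 100.0])      # food_a, food_b
potassium = np.array([400.0, 200.0])

def feasible(sodium_cap, potassium_target):
    """Is there a non-negative serving vector meeting the potassium
    target without exceeding the sodium cap? (minimize 0 s.t. constraints)"""
    res = linprog(c=[0.0, 0.0],
                  A_ub=np.vstack([sodium, -potassium]),
                  b_ub=[sodium_cap, -potassium_target],
                  bounds=[(0, None)] * 2)
    return res.status == 0             # 0 = optimal/feasible, 2 = infeasible

strict = feasible(1500, 4700)   # 1500 mg Na cap with the 4700 mg K target
relaxed = feasible(2400, 4700)  # a hypothetical looser sodium cap
```

With these toy foods the best achievable ratio is 2 mg of potassium per mg of sodium, so the 1500 mg cap makes 4700 mg of potassium unreachable; the study's finding is the real-food analogue of this incompatibility, with full nutrient-adequacy constraints added.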

  12. Building an Efficient Model for Afterburn Energy Release

    Energy Technology Data Exchange (ETDEWEB)

    Alves, S; Kuhl, A; Najjar, F; Tringe, J; McMichael, L; Glascoe, L

    2012-02-03

    Many explosives will release additional energy after detonation as the detonation products mix with the ambient environment. This additional energy release, referred to as afterburn, is due to combustion of undetonated fuel with ambient oxygen. While the detonation energy release occurs on a time scale of microseconds, the afterburn energy release occurs on a time scale of milliseconds with a potentially varying energy release rate depending upon the local temperature and pressure. This afterburn energy release is not accounted for in typical equations of state, such as the Jones-Wilkins-Lee (JWL) model, used for modeling the detonation of explosives. Here we construct a straightforward and efficient approach, based on experiments and theory, to account for this additional energy release in a way that is tractable for large finite element fluid-structure problems. Barometric calorimeter experiments have been executed in both nitrogen and air environments to investigate the characteristics of afterburn for C-4 and other materials. These tests, which provide pressure time histories, along with theoretical and analytical solutions provide an engineering basis for modeling afterburn with numerical hydrocodes. It is toward this end that we have constructed a modified JWL equation of state to account for afterburn effects on the response of structures to blast. The modified equation of state includes a two phase afterburn energy release to represent variations in the energy release rate and an afterburn energy cutoff to account for partial reaction of the undetonated fuel.
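
The modification described here — augmenting the JWL internal energy with a time-dependent afterburn release — can be sketched as follows. The JWL constants below are typical published values for TNT (pressures in GPa, V the relative volume); the two-phase release function, its time scales, and the afterburn energy are hypothetical stand-ins, not the authors' calibrated model.

```python
import math

# Typical published JWL parameters for TNT (illustrative; GPa units).
A, B, R1, R2, OMEGA = 371.2, 3.231, 4.15, 0.95, 0.30

def jwl_pressure(V, E):
    """Standard JWL pressure as a function of relative volume V and
    internal energy per unit volume E (GPa)."""
    return (A * (1 - OMEGA / (R1 * V)) * math.exp(-R1 * V)
            + B * (1 - OMEGA / (R2 * V)) * math.exp(-R2 * V)
            + OMEGA * E / V)

def afterburn_energy(t, q_ab=3.0, tau_fast=1e-4, tau_slow=1e-2, frac_fast=0.5):
    """Cumulative afterburn energy released by time t (s): a fast and a slow
    phase, each a saturating exponential (hypothetical functional form)."""
    fast = frac_fast * (1 - math.exp(-t / tau_fast))
    slow = (1 - frac_fast) * (1 - math.exp(-t / tau_slow))
    return q_ab * (fast + slow)

def modified_jwl_pressure(V, E0, t):
    """JWL pressure with detonation energy augmented by afterburn."""
    return jwl_pressure(V, E0 + afterburn_energy(t))

p0 = modified_jwl_pressure(V=2.0, E0=6.0, t=0.0)   # no afterburn yet
p1 = modified_jwl_pressure(V=2.0, E0=6.0, t=1.0)   # afterburn complete
```

An energy cutoff of the kind the abstract mentions would simply cap `afterburn_energy` at the fraction of undetonated fuel that actually reacts.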

  13. Efficiency assessment models of higher education institution staff activity

    Directory of Open Access Journals (Sweden)

    K. A. Dyusekeyev

    2016-01-01

    Full Text Available The paper substantiates the necessity of improving the university staff incentive system under conditions of competition in the field of higher education, and the need to develop a separate model for evaluating the effectiveness of department heads. The authors analysed methods for assessing the production function of units, and show the advantage of applying frontier methods to assess the effectiveness of economic structures in the field of higher education. The choice of the data envelopment analysis (DEA) method to solve the problem is justified. A model for evaluating the activity of university departments on the basis of the DEA methodology has been developed. On the basis of the staff pay systems of universities operating in Russia, Kazakhstan and other countries, the structure of the criteria system for evaluating university staff activity has been designed. To clarify and specify the criteria of department activity efficiency, a strategic map has been developed that allowed us to determine the input and output parameters of the model. The DEA methodology takes into account a large number of input and output parameters, increases the objectivity of the assessment by excluding experts, and yields interim data to identify the strengths and weaknesses of the evaluated object.
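
The DEA machinery the paper builds on reduces, for each evaluated unit, to a small linear program. The sketch below implements the standard input-oriented CCR model on invented toy data (one input, one output, three departments); it is a generic DEA illustration, not the authors' criteria system.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(inputs, outputs, o):
    """Input-oriented CCR DEA efficiency of unit o:
    minimize theta  s.t.  sum_j lam_j x_j <= theta * x_o,
                          sum_j lam_j y_j >= y_o,  lam >= 0."""
    x, y = np.atleast_2d(inputs), np.atleast_2d(outputs)
    n = x.shape[1]                       # number of units
    c = np.r_[1.0, np.zeros(n)]          # decision vars: [theta, lam_1..lam_n]
    A_ub = np.vstack([
        np.hstack([-x[:, [o]], x]),                      # input constraints
        np.hstack([np.zeros((y.shape[0], 1)), -y]),      # output constraints
    ])
    b_ub = np.r_[np.zeros(x.shape[0]), -y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.fun                       # optimal theta in (0, 1]

# Toy departments: one input (e.g. staff hours), one output (e.g. publications).
x = [[2.0, 4.0, 3.0]]
y = [[2.0, 2.0, 3.0]]
effs = [ccr_efficiency(x, y, o) for o in range(3)]
```

Units on the efficient frontier score 1; here the second department, producing the same output as the first from twice the input, scores 0.5. In the paper's setting the inputs and outputs would be the strategic-map criteria rather than a single pair.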

  14. Efficient transfer of sensitivity information in multi-component models

    International Nuclear Information System (INIS)

    Abdel-Khalik, Hany S.; Rabiti, Cristian

    2011-01-01

    In support of adjoint-based sensitivity analysis, this manuscript presents a new method to efficiently transfer adjoint information between components in a multi-component model, in which the output of one component is passed as input to the next component. Often, one is interested in evaluating the sensitivities of the responses calculated by the last component to the inputs of the first component in the overall model. The presented method has two advantages over existing methods, which may be classified into two broad categories: brute force-type methods and amalgamated-type methods. First, the presented method determines the minimum number of adjoint evaluations for each component, as opposed to the brute force-type methods, which require full evaluation of all sensitivities for all responses calculated by each component in the overall model, which proves computationally prohibitive for realistic problems. Second, the new method treats each component as a black box, as opposed to amalgamated-type methods, which require explicit knowledge of the system of equations associated with each component in order to reach the minimum number of adjoint evaluations. (author)
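
The chain structure described above is the familiar reverse-mode chain rule: for a scalar response, one adjoint (vector-Jacobian) pass per component suffices, instead of forming each component's full Jacobian. A minimal two-component sketch, with toy maps invented for illustration:

```python
import numpy as np

# Two chained components: u -> v = f(u) -> R = g(v).
def f(u):        # component 1 (toy nonlinear map)
    return np.array([u[0] * u[1], u[0] + u[1] ** 2])

def jac_f(u):    # Jacobian of component 1
    return np.array([[u[1], u[0]],
                     [1.0, 2 * u[1]]])

def g(v):        # component 2: scalar response
    return v[0] ** 2 + 3 * v[1]

def grad_g(v):
    return np.array([2 * v[0], 3.0])

u = np.array([1.5, -2.0])
v = f(u)
# Adjoint (reverse) pass: one vector-Jacobian product per response,
# rather than the full Jacobian of every component output.
v_bar = grad_g(v)
u_bar = jac_f(u).T @ v_bar          # dR/du, sensitivities to first inputs

# Finite-difference check of the adjoint sensitivities.
eps = 1e-6
fd = np.array([(g(f(u + eps * np.eye(2)[i])) - g(f(u))) / eps
               for i in range(2)])
```

With m responses and many more intermediate outputs than responses, this costs m adjoint sweeps per component, which is the kind of minimum evaluation count the abstract's method seeks while still treating each component as a black box.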

  15. An efficient method for model refinement in diffuse optical tomography

    Science.gov (United States)

    Zirak, A. R.; Khademi, M.

    2007-11-01

    Diffuse optical tomography (DOT) is a non-linear, ill-posed, boundary-value optimization problem that necessitates regularization. Bayesian methods are also suitable, since the measurement data are sparse and correlated. In such problems, which are solved with iterative methods, the solution space must be kept small for stabilization and better convergence. These constraints lead to an extensive, overdetermined system of equations, whose model error must be refined by model-retrieval criteria, especially total least squares (TLS). The use of TLS, however, is limited to linear systems, which is not achievable when applying traditional Bayesian methods. This paper presents an efficient method for model refinement using regularized total least squares (RTLS) applied to the linearized DOT problem, with a maximum a posteriori (MAP) estimator and a Tikhonov regulator. This is done by combining Bayesian and regularization tools as preconditioner matrices, applying them to the equations, and then applying RTLS to the resulting linear equations. The preconditioning matrices are guided by patient-specific information as well as a priori knowledge gained from the training set. Simulation results illustrate that the proposed method improves image reconstruction performance and localizes abnormalities well.

  16. Efficient modeling for pulsed activation in inertial fusion energy reactors

    International Nuclear Information System (INIS)

    Sanz, J.; Yuste, P.; Reyes, S.; Latkowski, J.F.

    2000-01-01

    First structural wall (FSW) materials in inertial fusion energy (IFE) power reactors will be irradiated under typical repetition rates of 1-10 Hz, for an operation time as long as the total reactor lifetime. The main objective of the present work is to determine whether a continuous-pulsed (CP) approach can be an efficient method for modeling the pulsed activation process under the operating conditions of FSW materials. The accuracy and practicability of this method was investigated both analytically and (for reaction/decay chains of two and three nuclides) by computational simulation. It was found that CP modeling is an accurate and practical method for calculating the neutron activation of FSW materials. Its use is recommended instead of the equivalent steady-state method or exact pulsed modeling. Moreover, the applicability of this method to components of an IFE power plant subject to repetition rates lower than those of the FSW is still being studied. The analytical investigation was performed for 0.05 Hz, which could be typical for the coolant. Conclusions seem to be similar to those obtained for the FSW. However, further work is needed for a final answer

  17. Efficient algorithms for multiscale modeling in porous media

    KAUST Repository

    Wheeler, Mary F.; Wildey, Tim; Xue, Guangri

    2010-01-01

    We describe multiscale mortar mixed finite element discretizations for second-order elliptic and nonlinear parabolic equations modeling Darcy flow in porous media. The continuity of flux is imposed via a mortar finite element space on a coarse grid scale, while the equations in the coarse elements (or subdomains) are discretized on a fine grid scale. We discuss the construction of multiscale mortar basis and extend this concept to nonlinear interface operators. We present a multiscale preconditioning strategy to minimize the computational cost associated with construction of the multiscale mortar basis. We also discuss the use of appropriate quadrature rules and approximation spaces to reduce the saddle point system to a cell-centered pressure scheme. In particular, we focus on multiscale mortar multipoint flux approximation method for general hexahedral grids and full tensor permeabilities. Numerical results are presented to verify the accuracy and efficiency of these approaches. © 2010 John Wiley & Sons, Ltd.

  18. Efficient algorithms for multiscale modeling in porous media

    KAUST Repository

    Wheeler, Mary F.

    2010-09-26

    We describe multiscale mortar mixed finite element discretizations for second-order elliptic and nonlinear parabolic equations modeling Darcy flow in porous media. The continuity of flux is imposed via a mortar finite element space on a coarse grid scale, while the equations in the coarse elements (or subdomains) are discretized on a fine grid scale. We discuss the construction of multiscale mortar basis and extend this concept to nonlinear interface operators. We present a multiscale preconditioning strategy to minimize the computational cost associated with construction of the multiscale mortar basis. We also discuss the use of appropriate quadrature rules and approximation spaces to reduce the saddle point system to a cell-centered pressure scheme. In particular, we focus on multiscale mortar multipoint flux approximation method for general hexahedral grids and full tensor permeabilities. Numerical results are presented to verify the accuracy and efficiency of these approaches. © 2010 John Wiley & Sons, Ltd.

  19. The Efficiency of Split Panel Designs in an Analysis of Variance Model

    Science.gov (United States)

    Wang, Wei-Guo; Liu, Hai-Jun

    2016-01-01

    We consider split panel design efficiency in analysis of variance models, that is, the determination of the optimal proportion of the cross-section series in all samples, so as to minimize the variances of best linear unbiased estimators of linear combinations of parameters. An orthogonal matrix is constructed to obtain a manageable expression of the variances. On this basis, we derive a theorem for analyzing split panel design efficiency irrespective of the interest and budget parameters. Additionally, the efficiency of an estimator based on the split panel relative to an estimator based on a pure panel or a pure cross-section is presented. The analysis shows that the gains from a split panel can be quite substantial. We further consider the efficiency of a split panel design under a given budget, and transform the problem into a constrained nonlinear integer program. Specifically, an efficient algorithm is designed to solve the constrained nonlinear integer program. Moreover, we combine one-at-a-time designs and factorial designs to illustrate the algorithm's efficiency with an empirical example concerning monthly consumer expenditure on food in 1985 in the Netherlands, and the efficient ranges of the algorithm parameters are given to ensure a good solution. PMID:27163447

  20. Incorporating C60 as Nucleation Sites Optimizing PbI2 Films To Achieve Perovskite Solar Cells Showing Excellent Efficiency and Stability via Vapor-Assisted Deposition Method.

    Science.gov (United States)

    Chen, Hai-Bin; Ding, Xi-Hong; Pan, Xu; Hayat, Tasawar; Alsaedi, Ahmed; Ding, Yong; Dai, Song-Yuan

    2018-01-24

    To achieve high-quality perovskite solar cells (PSCs), the morphology and carrier transportation of perovskite films need to be optimized. Herein, C60 is employed as nucleation sites in the PbI2 precursor solution to optimize the morphology of perovskite films via a vapor-assisted deposition process. Accompanying the homogeneous nucleation of PbI2, the incorporation of C60 as heterogeneous nucleation sites can lower the nucleation free energy of PbI2, which facilitates the diffusion and reaction between PbI2 and the organic source. Meanwhile, C60 can enhance carrier transportation and reduce charge recombination in the perovskite layer due to its high electron mobility and conductivity. In addition, the grain sizes of the perovskite get larger with C60 incorporation, which reduces the grain boundaries and voids in the perovskite and prevents corrosion due to moisture. As a result, we obtain PSCs with a power conversion efficiency (PCE) of 18.33% and excellent stability. The PCEs of unsealed devices drop less than 10% in a dehumidification cabinet after 100 days and remain at 75% of the initial PCE during exposure to ambient air (humidity > 60% RH, temperature > 30 °C) for 30 days.

  1. A model to improve efficiency and effectiveness of safeguards measures

    International Nuclear Information System (INIS)

    D'Amato, Eduardo; Llacer, Carlos; Vicens, Hugo

    2001-01-01

    Full text: The main purpose of our current studies is to analyse the measures to be adopted tending to integrate the traditional safeguard measures with the ones stated in the Additional Protocol (AP). A simplified nuclear fuel cycle model is considered to draw some conclusions on the application of integrated safeguard measures. This paper includes a briefing describing the historical review that gave birth to the AP, and proposes a model to help the control bodies in the decision-making process. In May 1997, the Board of Governors approved the Model Additional Protocol (MAP), which aimed at strengthening the effectiveness and improving the efficiency of safeguard measures. For States under a comprehensive safeguard agreement, the measures adopted provide credible assurance on the absence of undeclared nuclear material and activities. In September 1999, the governments of Argentina and Brazil formally announced in the Board of Governors that both countries would start preliminary consultations on an adapted MAP applied to the Agreement between the Republic of Argentina, the Federative Republic of Brazil, the Brazilian-Argentine Agency for Accounting and Control of Nuclear Materials and the International Atomic Energy Agency for the Application of Safeguards (Quatripartite Agreement/INFCIRC 435). In December 1999, a first draft of the above-mentioned document was provided as a starting point of discussion. During the year 2000 some modifications to the original draft took place. These were the initial steps in the process of reaching adequate conditions to adhere to the AP in each country in the future. Having in mind the future AP implementation, the safeguards officers of the Regulatory Body of Argentina (ARN) began to think about the future simultaneous application of the two types of safeguards measures, the traditional and the non-traditional ones, which should converge into an integrated system. By traditional safeguards it is understood quantitative

  2. Computationally efficient models of neuromuscular recruitment and mechanics.

    Science.gov (United States)

    Song, D; Raphael, G; Lan, N; Loeb, G E

    2008-06-01

    We have improved the stability and computational efficiency of a physiologically realistic, virtual muscle (VM 3.*) model (Cheng et al 2000 J. Neurosci. Methods 101 117-30) by a simpler structure of lumped fiber types and a novel recruitment algorithm. In the new version (VM 4.0), the mathematical equations are reformulated into state-space representation and structured into a CMEX S-function in SIMULINK. A continuous recruitment scheme approximates the discrete recruitment of slow and fast motor units under physiological conditions. This makes it possible to predict force output during smooth recruitment and derecruitment without having to simulate explicitly a large number of independently recruited units. We removed the intermediate state variable, effective length (Leff), which had been introduced to model the delayed length dependency of the activation-frequency relationship, but which had little effect and could introduce instability under physiological conditions of use. Both of these changes greatly reduce the number of state variables with little loss of accuracy compared to the original VM. The performance of VM 4.0 was validated by comparison with VM 3.1.5 for both single-muscle force production and a multi-joint task. The improved VM 4.0 model is more suitable for the analysis of neural control of movements and for design of prosthetic systems to restore lost or impaired motor functions. VM 4.0 is available via the internet and includes options to use the original VM model, which remains useful for detailed simulations of single motor unit behavior.

  4. Climate Modelling Shows Increased Risk to Eucalyptus sideroxylon on the Eastern Coast of Australia Compared to Eucalyptus albens

    Directory of Open Access Journals (Sweden)

    Farzin Shabani

    2017-11-01

    Full Text Available Aim: To identify the extent and direction of range shift of Eucalyptus sideroxylon and E. albens in Australia by 2050 through an ensemble forecast of four species distribution models (SDMs), each generated using four global climate models (GCMs) under two representative concentration pathways (RCPs). Location: Australia. Methods: We used four SDMs, (i) a generalized linear model, (ii) MaxEnt, (iii) random forest, and (iv) boosted regression trees, to model the distributions of E. sideroxylon and E. albens under four GCMs, (a) MRI-CGCM3, (b) MIROC5, (c) HadGEM2-AO and (d) CCSM4, under RCPs 4.5 and 6.0. The true skill statistic (TSS) index was used to assess the accuracy of each SDM. Results: Results showed that E. albens and E. sideroxylon will lose large areas of their current suitable range by 2050, and E. sideroxylon is projected to gain range in eastern and southeastern Australia. Some areas were also projected to remain suitable for each species between now and 2050. Our modelling showed that E. sideroxylon will lose suitable habitat on the western side and will not gain any on the eastern side, because this region is one of the most heavily populated areas in the country and the populated areas are moving westward. The predicted decrease in E. sideroxylon’s distribution suggests that land managers should monitor its population closely and evaluate whether it meets the criteria for a protected legal status. Main conclusions: Both Eucalyptus sideroxylon and E. albens will be negatively affected by climate change, and E. sideroxylon is projected to be at greater risk of losing habitat than E. albens.
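    The true skill statistic used to score each SDM is simply sensitivity plus specificity minus one, computed from a presence/absence confusion matrix. A minimal sketch with made-up counts (not the study's data):

    ```python
    def true_skill_statistic(tp, fp, fn, tn):
        """TSS = sensitivity + specificity - 1 for a species distribution
        model evaluated against presence/absence observations. Ranges from
        -1 to +1; 0 means no better than random."""
        sensitivity = tp / (tp + fn)   # presences correctly predicted
        specificity = tn / (tn + fp)   # absences correctly predicted
        return sensitivity + specificity - 1.0

    tss = true_skill_statistic(tp=80, fp=10, fn=20, tn=90)  # sensitivity 0.8, specificity 0.9
    ```

    Unlike raw accuracy, TSS is insensitive to the prevalence of presences in the evaluation data, which is why it is a common choice for SDM assessment.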

  5. Coadministration of doxorubicin and etoposide loaded in camel milk phospholipids liposomes showed increased antitumor activity in a murine model

    Directory of Open Access Journals (Sweden)

    Maswadeh HM

    2015-04-01

    Full Text Available Hamzah M Maswadeh,1 Ahmed N Aljarbou,1 Mohammed S Alorainy,2 Arshad H Rahmani,3 Masood A Khan3 1Department of Pharmaceutics, College of Pharmacy, 2Department of Pharmacology and Therapeutics, College of Medicine, 3College of Applied Medical Sciences, Qassim University, Buraydah, Kingdom of Saudi Arabia Abstract: Small unilamellar vesicles were prepared from a camel milk phospholipids (CML) mixture or from 1,2-dipalmitoyl-sn-glycero-3-phosphatidylcholine (DPPC) and loaded with the anticancer drugs doxorubicin (Dox) or etoposide (ETP). The liposomal formulations were used against fibrosarcoma in a murine model. Results showed a very high percentage of Dox encapsulation (~98%) in liposomes (Lip) prepared from CML (CML-Lip) or DPPC (DPPC-Lip), whereas the percentage of encapsulation of ETP was lower: 22% for CML-Lip and 18% for DPPC-Lip. Differential scanning calorimetry curves show that Dox enhances lamellar formation in CML-Lip, whereas ETP enhances nonlamellar formation. They also showed that the presence of Dox and ETP together in DPPC-Lip produced an interdigitation effect. The in vivo anticancer activity of liposomal formulations of Dox, ETP, or a combination of both was assessed against benzopyrene (BAP)-induced fibrosarcoma in a murine model. Tumor-bearing mice treated with a combination of Dox and ETP loaded into CML-Lip showed increased survival and reduced tumor growth compared to other groups, including the combination of Dox and ETP in DPPC-Lip. Fibrosarcoma-bearing mice treated with a combination of free (Dox + ETP) showed much higher tumor growth than the groups treated with CML-Lip-(Dox + ETP) or DPPC-Lip-(Dox + ETP). An immunohistochemical study was also performed to show the expression of tumor-suppressor PTEN, and it was found that tumor tissues from the group of mice treated with a combination of free (Dox + ETP) showed greater loss of cytoplasmic PTEN than tumor tissues obtained from the

  6. Experimentally infected domestic ducks show efficient transmission of Indonesian H5N1 highly pathogenic avian influenza virus, but lack persistent viral shedding.

    Science.gov (United States)

    Wibawa, Hendra; Bingham, John; Nuradji, Harimurti; Lowther, Sue; Payne, Jean; Harper, Jenni; Junaidi, Akhmad; Middleton, Deborah; Meers, Joanne

    2014-01-01

    Ducks are important maintenance hosts for avian influenza, including H5N1 highly pathogenic avian influenza (HPAI) viruses. A previous study indicated that persistence of H5N1 viruses in ducks after the development of humoral immunity may drive viral evolution following immune selection. As H5N1 HPAI is endemic in Indonesia, this mechanism may be important in understanding H5N1 evolution in that region. To determine the capability of domestic ducks to maintain prolonged shedding of Indonesian clade 2.1 H5N1 virus, two groups of Pekin ducks were inoculated through the eyes, nostrils and oropharynx, and viral shedding and transmission were investigated. Inoculated ducks (n = 15), which were mostly asymptomatic, shed infectious virus from the oral route from 1 to 8 days post inoculation (dpi) and from the cloacal route from 2 to 8 dpi. Viral ribonucleic acid was detected from 1 to 15 dpi from the oral route and from 1 to 24 dpi from the cloacal route. Ducks seroconverted in a range of serological tests by 15 dpi. Virus was efficiently transmitted during acute infection (from 5 inoculation-infected ducks to all 5 contact ducks). However, no evidence of transmission, as determined by seroconversion and viral shedding, was found between an inoculation-infected group (n = 10) and contact ducks (n = 9) when the two groups only had contact after 10 dpi. Clinical disease was more frequent and more severe in contact-infected (2 of 5) than inoculation-infected ducks (1 of 15). We conclude that Indonesian clade 2.1 H5N1 HPAI virus does not persist in individual ducks after acute infection.

  7. Replaceable Substructures for Efficient Part-Based Modeling

    KAUST Repository

    Liu, Han; Vimont, Ulysse; Wand, Michael; Cani, Marie Paule; Hahmann, Stefanie; Rohmer, Damien; Mitra, Niloy J.

    2015-01-01

    A popular mode of shape synthesis involves mixing and matching parts from different objects to form a coherent whole. The key challenge is to efficiently synthesize shape variations that are plausible, both locally and globally. A major obstacle is to assemble the objects with local consistency, i.e., all the connections between parts are valid with no dangling open connections. The combinatorial complexity of this problem limits existing methods in geometric and/or topological variations of the synthesized models. In this work, we introduce replaceable substructures as arrangements of parts that can be interchanged while ensuring boundary consistency. The consistency information is extracted from part labels and connections in the original source models. We present a polynomial time algorithm that discovers such substructures by working on a dual of the original shape graph that encodes inter-part connectivity. We demonstrate the algorithm on a range of test examples producing plausible shape variations, both from a geometric and from a topological viewpoint. © 2015 The Author(s) Computer Graphics Forum © 2015 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.

  9. Efficient Analysis of Systems Biology Markup Language Models of Cellular Populations Using Arrays.

    Science.gov (United States)

    Watanabe, Leandro; Myers, Chris J

    2016-08-19

    The Systems Biology Markup Language (SBML) has been widely used for modeling biological systems. Although SBML has been successful in representing a wide variety of biochemical models, the core standard lacks the structure for representing large complex regular systems in a standard way, such as whole-cell and cellular population models. These models require a large number of variables to represent certain aspects of these types of models, such as the chromosome in the whole-cell model and the many identical cell models in a cellular population. While SBML core is not designed to handle these types of models efficiently, the proposed SBML arrays package can represent such regular structures more easily. However, in order to take full advantage of the package, analysis needs to be aware of the arrays structure. When expanding the array constructs within a model, some of the advantages of using arrays are lost. This paper describes a more efficient way to simulate arrayed models. To illustrate the proposed method, this paper uses a population of repressilator and genetic toggle switch circuits as examples. Results show that there are memory benefits using this approach with a modest cost in runtime.
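    The memory benefit of the arrays approach comes from keeping one set of equations and one state array over the whole population, rather than expanding N identical submodels. A toy, NumPy-only illustration of that idea (this is not the SBML arrays package or any libsbml API, and the dynamics are an invented mutual-repression caricature, not the repressilator model from the paper):

    ```python
    import numpy as np

    # A population of N identical cells, each with two repressor species,
    # stored as one (N, 2) array instead of N expanded submodels.
    N = 1000
    x = np.ones((N, 2))    # one row per cell, one column per species
    k_deg = 0.1            # first-order degradation rate (illustrative)
    dt = 0.01              # Euler time step

    def step(x):
        # Toy mutual repression: each species' production is inhibited by
        # the other one, evaluated for all N cells at once (vectorized).
        prod = 1.0 / (1.0 + x[:, ::-1] ** 2)
        return x + dt * (prod - k_deg * x)

    for _ in range(100):
        x = step(x)
    ```

    One vectorized update over the array replaces N separate model evaluations, which is the same structural regularity the arrays package lets analysis tools exploit.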

  10. THE MODEL FOR POWER EFFICIENCY ASSESSMENT OF CONDENSATION HEATING INSTALLATIONS

    Directory of Open Access Journals (Sweden)

    D. Kovalchuk

    2017-11-01

    Full Text Available Most heating and domestic hot water systems are based on natural gas boilers. To increase the overall performance of such heating systems, condensing gas boilers were developed and are now in use. However, even this type of boiler does not use all the energy released by fuel combustion. The main factors lowering the overall performance of condensing gas boilers operating in real conditions are considered. The structure of a mathematical model developed to estimate the overall performance of condensing gas boilers (CGB) under real operating conditions is described. Computer experiments evaluating the performance of such a CGB over a heating season were carried out for the real weather conditions of two regions of Ukraine. Graphic dependences of the temperature conditions and the change in heating system effectiveness throughout a heating season are given. It was shown that a typical CGB does not completely use the calorific value of the fuel and is therefore not fully effective, and that its efficiency changes significantly over a heating season depending on weather conditions, never reaching the greatest possible value. The possibility of increasing CGB efficiency through hydraulic separation of the heating and condensation sections, and through the use of a vapor-compression heat pump for deeper cooling of the combustion gases to remove the largest possible amount of thermal energy from them, is considered. A scheme is provided for connecting the heat pump to a heating system with a conventional gas boiler and a separate condensation economizer, allowing the combustion gases to be cooled deeply below the dew point and the return heat carrier to be warmed before the boiler inlet. A technological diagram is offered for year-round use of the heat pump for hot water heating after the end of the heating season, without the use of gas.

  11. Spatial extrapolation of light use efficiency model parameters to predict gross primary production

    Directory of Open Access Journals (Sweden)

    Karsten Schulz

    2011-12-01

    Full Text Available To capture the spatial and temporal variability of gross primary production as a key component of the global carbon cycle, the light use efficiency modeling approach in combination with remote sensing data has been shown to be well suited. Typically, the model parameters, such as the maximum light use efficiency, are either set to a universal constant or to land-class-dependent values stored in look-up tables. In this study, we employ the machine learning technique support vector regression to explicitly relate the model parameters of a light use efficiency model calibrated at several FLUXNET sites to site-specific characteristics obtained from meteorological measurements, ecological estimations and remote sensing data. A feature selection algorithm extracts the relevant site characteristics in a cross-validation, and leads to an individual set of characteristic attributes for each parameter. With this set of attributes, the model parameters can be estimated at sites where a parameter calibration is not possible due to the absence of eddy covariance flux measurement data. This will finally allow a spatially continuous model application. The performance of the spatial extrapolation scheme is evaluated with a cross-validation approach, which shows the methodology to be well suited to recapture the variability of gross primary production across the study sites.
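    The extrapolation scheme amounts to regressing a calibrated parameter on site attributes and then predicting that parameter at sites without flux measurements. A minimal sketch on synthetic data, with plain least squares standing in for the support vector regression used in the study (site attributes, weights and values are all invented):

    ```python
    import numpy as np

    # Synthetic "calibrated" sites: 12 FLUXNET-like sites, 3 attributes
    # each (e.g. mean temperature, LAI, precipitation - hypothetical).
    rng = np.random.default_rng(0)
    attrs = rng.uniform(size=(12, 3))
    true_w = np.array([0.5, -0.2, 0.3])
    eps_max = attrs @ true_w + 1.0            # calibrated max light use efficiency per site

    # Fit a linear map from attributes to the parameter (OLS with intercept).
    X = np.hstack([attrs, np.ones((12, 1))])
    w, *_ = np.linalg.lstsq(X, eps_max, rcond=None)

    # Predict the parameter at an uncalibrated site from its attributes alone.
    new_site = np.array([0.4, 0.6, 0.2, 1.0])
    predicted = new_site @ w
    ```

    In the paper the regressor is a support vector regression and the attribute set is chosen by cross-validated feature selection; the data flow, attributes in, calibrated parameter out, is the same.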

  12. 68Ga/177Lu-labeled DOTA-TATE shows similar imaging and biodistribution in neuroendocrine tumor model.

    Science.gov (United States)

    Liu, Fei; Zhu, Hua; Yu, Jiangyuan; Han, Xuedi; Xie, Qinghua; Liu, Teli; Xia, Chuanqin; Li, Nan; Yang, Zhi

    2017-06-01

    Somatostatin receptors are overexpressed in neuroendocrine tumors; their endogenous ligand is somatostatin. DOTA-TATE is an analogue of somatostatin that shows high binding affinity to somatostatin receptors. We aimed to evaluate a 68Ga/177Lu-labeling DOTA-TATE kit in a neuroendocrine tumor model for molecular imaging, and to perform human positron emission tomography/computed tomography (PET/CT) imaging of 68Ga-DOTA-TATE in neuroendocrine tumor patients. DOTA-TATE kits were formulated and radiolabeled with 68Ga/177Lu to give 68Ga/177Lu-DOTA-TATE (M-DOTA-TATE). The in vitro and in vivo stability of 177Lu-DOTA-TATE was assessed. Nude mice bearing human tumors were injected with 68Ga-DOTA-TATE or 177Lu-DOTA-TATE for micro-PET and micro-single-photon emission computed tomography (SPECT)/CT imaging, respectively, and clinical PET/CT images of 68Ga-DOTA-TATE were obtained at 1 h post intravenous injection from patients with neuroendocrine tumors. Micro-PET and micro-SPECT/CT imaging of 68Ga-DOTA-TATE and 177Lu-DOTA-TATE both showed clear tumor uptake, which could be blocked by excess DOTA-TATE. In addition, 68Ga-DOTA-TATE PET/CT imaging in neuroendocrine tumor patients could show primary and metastatic lesions. 68Ga-DOTA-TATE and 177Lu-DOTA-TATE accumulate in tumors in animal models, paving the way for better clinical peptide receptor radionuclide therapy for neuroendocrine tumor patients in the Asian population.

  13. An efficient background modeling approach based on vehicle detection

    Science.gov (United States)

    Wang, Jia-yan; Song, Li-mei; Xi, Jiang-tao; Guo, Qing-hua

    2015-10-01

    The Gaussian mixture model (GMM) widely used in vehicle detection is inefficient at detecting the foreground during the modeling phase, because it needs a long time to blend shadows into the background. To overcome this problem, an improved method is proposed in this paper. First, each frame is divided into several areas (A, B, C and D), decided by the frequency and scale of vehicle access. For each area, a different learning rate for the weight, mean and variance is applied to accelerate the elimination of shadows. At the same time, the number of Gaussian distributions is adapted per area, which reduces the total number of distributions and saves memory effectively. With this method, a different threshold value and a different number of Gaussian distributions are adopted for each area. The results show that the learning speed and model accuracy of the proposed algorithm surpass the traditional GMM: by about the 50th frame, interference from vehicles has essentially been eliminated, the number of model distributions is only 35% to 43% of the standard GMM, and the per-frame processing speed is approximately 20% higher. The proposed algorithm performs well in terms of shadow elimination and processing speed for vehicle detection, can promote the development of intelligent transportation, and is relevant to other background modeling methods.
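    The core of the proposed modification is a region-dependent learning rate for the background statistics, so that areas with heavy vehicle traffic adapt faster. A toy sketch of that idea with a single running Gaussian per pixel (the paper uses a mixture) and invented region rates:

    ```python
    import numpy as np

    # Tiny 4x4 "frame" divided into two regions with different learning
    # rates; rates, sizes and threshold are illustrative, not the paper's.
    H, W = 4, 4
    region_rate = {"A": 0.05, "B": 0.02, "C": 0.01, "D": 0.005}
    regions = np.full((H, W), "A", dtype="<U1")
    regions[2:, :] = "C"                      # bottom half learns more slowly

    mu = np.zeros((H, W))                     # background mean per pixel
    var = np.ones((H, W))                     # background variance per pixel

    def update(mu, var, frame):
        # Exponential update of mean/variance, rate chosen per region.
        rho = np.vectorize(region_rate.get)(regions)
        diff = frame - mu
        mu = mu + rho * diff
        var = (1.0 - rho) * var + rho * diff ** 2
        return mu, var

    frame = np.full((H, W), 10.0)             # constant scene
    for _ in range(50):
        mu, var = update(mu, var, frame)

    # Pixels far from the background model are flagged as foreground.
    foreground = np.abs(frame - mu) > 2.5 * np.sqrt(var)
    ```

    After 50 identical frames, the fast-learning region has absorbed the scene into its background mean much more than the slow region, which is exactly the behavior the per-area rates are designed to tune.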

  14. Efficient Estimation of Non-Linear Dynamic Panel Data Models with Application to Smooth Transition Models

    DEFF Research Database (Denmark)

    Gørgens, Tue; Skeels, Christopher L.; Wurtz, Allan

    This paper explores estimation of a class of non-linear dynamic panel data models with additive unobserved individual-specific effects. The models are specified by moment restrictions. The class includes the panel data AR(p) model and panel smooth transition models. We derive an efficient set of moment restrictions for estimation and apply the results to estimation of panel smooth transition models with fixed effects, where the transition may be determined endogenously. The performance of the GMM estimator, both in terms of estimation precision and forecasting performance, is examined in a Monte...

  15. The BACHD Rat Model of Huntington Disease Shows Specific Deficits in a Test Battery of Motor Function.

    Science.gov (United States)

    Manfré, Giuseppe; Clemensson, Erik K H; Kyriakou, Elisavet I; Clemensson, Laura E; van der Harst, Johanneke E; Homberg, Judith R; Nguyen, Huu Phuc

    2017-01-01

    Rationale: Huntington disease (HD) is a progressive neurodegenerative disorder characterized by motor, cognitive and neuropsychiatric symptoms. HD is usually diagnosed by the appearance of motor deficits, resulting in skilled hand use disruption, gait abnormality, muscle wasting and choreatic movements. The BACHD transgenic rat model for HD represents a well-established transgenic rodent model of HD, offering the prospect of an in-depth characterization of the motor phenotype. Objective: The present study aims to characterize different aspects of motor function in BACHD rats, combining classical paradigms with novel high-throughput behavioral phenotyping. Methods: Wild-type (WT) and transgenic animals were tested longitudinally from 2 to 12 months of age. To measure fine motor control, rats were challenged with the pasta handling test and the pellet reaching test. To evaluate gross motor function, animals were assessed by using the holding bar and the grip strength tests. Spontaneous locomotor activity and circadian rhythmicity were assessed in an automated home-cage environment, namely the PhenoTyper. We then integrated existing classical methodologies to test motor function with automated home-cage assessment of motor performance. Results: BACHD rats showed strong impairment in muscle endurance at 2 months of age. Altered circadian rhythmicity and locomotor activity were observed in transgenic animals. On the other hand, reaching behavior, forepaw dexterity and muscle strength were unaffected. Conclusions: The BACHD rat model exhibits certain features of HD patients, like muscle weakness and changes in circadian behavior. We have observed modest but clear-cut deficits in distinct motor phenotypes, thus confirming the validity of this transgenic rat model for treatment and drug discovery purposes.

  16. Thermodynamic modelling and efficiency analysis of a class of real indirectly fired gas turbine cycles

    Directory of Open Access Journals (Sweden)

    Ma Zheshu

    2009-01-01

    Full Text Available Indirectly or externally fired gas turbines (IFGT or EFGT) are a novel technology under development for small and medium scale combined power and heat supply, in combination with micro gas turbine technologies, mainly for the utilization of the waste heat from the turbine in a recuperative process and for the possibility of burning biomass or 'dirty' fuel, by employing a high temperature heat exchanger to avoid the combustion gases passing through the turbine. In this paper, by assuming that all fluid friction losses in the compressor and turbine are quantified by a corresponding isentropic efficiency and all global irreversibilities in the high temperature heat exchanger are taken into account by an effective efficiency, a one-dimensional model including power output and cycle efficiency formulation is derived for a class of real IFGT cycles. To illustrate and analyze the effect of operational parameters on IFGT efficiency, detailed numerical analysis and figures are produced. The results, summarized in figures, show that IFGT cycles are most efficient in the low compression ratio range (3.0-6.0) and are suited to low power output applications integrated with micro gas turbine technology. The derived model can be used to analyze and forecast the performance of real IFGT configurations.
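    The kind of trend reported, peak efficiency at low compression ratios for a recuperated cycle, can be reproduced with textbook air-standard relations in which component losses enter as isentropic efficiencies and the heat exchanger as an effective efficiency. A sketch with assumed parameter values, not the paper's exact one-dimensional model:

    ```python
    def ifgt_efficiency(r, T1=300.0, T3=1100.0, eta_c=0.85, eta_t=0.88,
                        eta_hx=0.90, eps_rec=0.85, gamma=1.4):
        """Air-standard efficiency estimate for a recuperated indirectly
        fired gas turbine. All parameter values are illustrative."""
        k = (gamma - 1.0) / gamma
        T2 = T1 * (1.0 + (r ** k - 1.0) / eta_c)        # after real compression
        T4 = T3 * (1.0 - eta_t * (1.0 - r ** (-k)))     # after real expansion
        T2r = T2 + eps_rec * max(T4 - T2, 0.0)          # recuperator preheat
        w_net = (T3 - T4) - (T2 - T1)                   # net work per unit cp
        q_in = (T3 - T2r) / eta_hx                      # fuel heat via the HT heat exchanger
        return w_net / q_in

    # Efficiency falls as the compression ratio leaves the low range,
    # because recuperation potential (T4 - T2) shrinks:
    effs = {r: ifgt_efficiency(r) for r in (3, 6, 12)}
    ```

    With these assumed values the recuperator is what makes low ratios win: at high compression ratios the compressor exit is hotter than the turbine exit, so no exhaust heat can be recovered.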

  17. An inducible transgenic mouse model for immune mediated hepatitis showing clearance of antigen expressing hepatocytes by CD8+ T cells.

    Directory of Open Access Journals (Sweden)

    Marcin Cebula

    Full Text Available The liver has the ability to prime immune responses against neo-antigens introduced upon infection. However, T cell immunity in the liver is uniquely modulated by the complex tolerogenic property of this organ, which also has to cope with foreign agents such as endotoxins or food antigens. In this respect, the nature of intrahepatic T cell responses remains to be fully characterized. To gain deeper insight into the mechanisms that regulate CD8+ T cell responses in the liver, we established a novel OVA_X_CreER(T2) mouse model. Upon tamoxifen administration, OVA antigen expression is observed in a fraction of hepatocytes, resulting in a mosaic expression pattern. To elucidate the cross-talk of CD8+ T cells with antigen-expressing hepatocytes, we adoptively transferred K(b)/OVA257-264-specific OT-I T cells to OVA_X_CreER(T2) mice or generated triple transgenic OVA_X_CreER(T2)_X_OT-I mice. OT-I T cells become activated in OVA_X_CreER(T2) mice and induce an acute and transient hepatitis accompanied by liver damage. In OVA_X_CreER(T2)_X_OT-I mice, OVA induction triggers an OT-I T cell mediated, fulminant hepatitis resulting in 50% mortality. Surviving mice manifest a long-lasting hepatitis and recover after 9 weeks. In these experimental settings, recovery from hepatitis correlates with a complete loss of OVA expression, indicating efficient clearance of the antigen-expressing hepatocytes. Moreover, a relapse of hepatitis can be induced upon re-induction of cured OVA_X_CreER(T2)_X_OT-I mice, indicating the absence of tolerogenic mechanisms. This pathogen-free, conditional mouse model has the advantage of tamoxifen-inducible, tissue-specific antigen expression that reflects the heterogeneity of viral antigen expression, and it enables the study of intrahepatic immune responses to both de novo and persistent antigen. It allows following the course of intrahepatic immune responses: initiation, the acute phase and antigen clearance.

  18. Hong Kong Hospital Authority resource efficiency evaluation: Via a novel DEA-Malmquist model and Tobit regression model.

    Science.gov (United States)

    Guo, Hainan; Zhao, Yang; Niu, Tie; Tsui, Kwok-Leung

    2017-01-01

    The Hospital Authority (HA) is a statutory body managing all the public hospitals and institutes in Hong Kong (HK). In recent decades, the Hong Kong Hospital Authority (HKHA) has been making efforts to improve its healthcare services, but there still exist some problems like unfair resource allocation and poor management, as reported by the Hong Kong medical legislative committee. One critical consequence of these problems is the low healthcare efficiency of hospitals, leading to low satisfaction among patients. Moreover, HKHA also suffers from the conflict between limited resources and growing demand. An effective evaluation of HA is important for resource planning and healthcare decision making. In this paper, we propose a two-phase method to evaluate HA efficiency for reducing healthcare expenditure and improving healthcare service. Specifically, in Phase I, we measure the HKHA efficiency changes from 2000 to 2013 by applying a novel DEA-Malmquist index with undesirable factors. In Phase II, we further explore the impact of some exogenous factors (e.g., population density) on HKHA efficiency by a Tobit regression model. Empirical results show that there are significant differences between the efficiencies of different hospitals and clusters. In particular, it is found that public hospitals serving richer districts have relatively lower efficiency. To a certain extent, this reflects the socioeconomic reality in HK that people with better economic conditions prefer receiving higher-quality service from private hospitals.

  19. A framework for fuzzy model of thermoradiotherapy efficiency

    International Nuclear Information System (INIS)

    Kosterev, V.V.; Averkin, A.N.

    2005-01-01

    Full text: The use of hyperthermia as an adjuvant to radiation in the treatment of local and regional disease currently offers the most significant advantages. For processing information on thermoradiotherapy efficiency, it is expedient to use a fuzzy-logic-based decision-support system, the fuzzy system (FS). FSs are widely used in various application areas of control and decision making. Their popularity is due to the following reasons. First, an FS with triangular membership functions is a universal approximator. Second, designing an FS does not require an exact model of the process, only qualitative linguistic dependences between the parameters. Third, there are many software and hardware realizations of FSs with very high calculation speed. Fourth, the accuracy of decisions based on an FS is usually no worse, and sometimes better, than that of decisions obtained by traditional methods. Moreover, the dependence between input and output variables can easily be expressed in linguistic scales. The goal of this research is to choose data fusion rule operators suited to the experimental results and taking the uncertainty factor into consideration. Methods of aggregation and data fusion may be used which provide a methodology to extract comprehensible rules from data. Several data fusion algorithms have been developed and applied, individually and in combination, providing users with various levels of informational detail. In reviewing this emerging technology, three basic categories (levels) of data fusion have been developed. These fusion levels are differentiated according to the amount of information they provide. Refs. 2 (author)
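    The triangular membership function, the building block behind the universal-approximation property cited above, is straightforward to write down. A minimal sketch (the linguistic term and temperature scale are hypothetical examples, not from the paper):

    ```python
    def triangular(x, a, b, c):
        """Triangular membership function with feet at a and c and peak at b.
        Returns the degree (0..1) to which x belongs to the fuzzy set."""
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)   # rising edge
        return (c - x) / (c - b)       # falling edge

    # e.g. membership of a measured temperature in a hypothetical
    # linguistic term "moderate hyperthermia" spanning 40-45 degrees C:
    mu = triangular(42.0, 40.0, 43.0, 45.0)
    ```

    A rule base then combines such membership degrees with fuzzy operators (min/max, products, or other aggregation operators), which is where the choice of data fusion rule operators discussed above enters.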

  20. An efficient and effective teaching model for ambulatory education.

    Science.gov (United States)

    Regan-Smith, Martha; Young, William W; Keller, Adam M

    2002-07-01

    Teaching and learning in the ambulatory setting have been described as inefficient, variable, and unpredictable. A model of ambulatory teaching that combines a system of education and a system of patient care, piloted in three settings (1973-1981 in a university-affiliated outpatient clinic in Portland, Oregon; 1996-2000 in a community outpatient clinic; and 2000-2001 in an outpatient clinic serving Dartmouth Medical School's teaching hospital), is presented. Fully integrating learners into the office practice using creative scheduling, pre-rotation learning, and learner competence certification enabled the learners to provide care in roles traditionally fulfilled by physicians and nurses. Practice redesign made learners active members of the patient care team by involving them in such tasks as patient intake, histories and physicals, patient education, and monitoring of patient progress between visits. So that learners can be active members of the patient care team on the first day of clinic, pre-training is provided by the clerkship or residency, enabling them to provide competent care in the time available. To assure effective education, teaching and learning times are explicitly scheduled through parallel booking of patients for the learner and the preceptor at the same time. In the pilot settings this teaching model maintained or improved preceptor productivity and on-time efficiency compared with the outcomes of traditional scheduling. The time spent alone with patients, in direct observation by preceptors, and in scheduled case discussion was appreciated by learners. Increased satisfaction was enjoyed by learners, teachers, clinic staff, and patients. Barriers to implementation include too few examining rooms, inability to manipulate patient appointment schedules, and learners' not being present in a teaching clinic all the time.

  1. Partial-factor Energy Efficiency Model of Indonesia

    OpenAIRE

    Nugroho Fathul; Syaifudin Noor

    2018-01-01

    This study employs partial-factor energy efficiency to reveal the relationships between energy efficiency and the consumption of both renewable and non-renewable energy in Indonesia. The findings confirm that consumption of non-renewable energy increases inefficiency in energy consumption. On the other hand, the use of renewable energy increases energy efficiency in Indonesia. As a result, the Government of Indonesia may address this issue by providing more s...

  2. Lixisenatide, a drug developed to treat type 2 diabetes, shows neuroprotective effects in a mouse model of Alzheimer's disease.

    Science.gov (United States)

    McClean, Paula L; Hölscher, Christian

    2014-11-01

    Type 2 diabetes is a risk factor for developing Alzheimer's disease (AD). In the brains of AD patients, insulin signalling is desensitised. The incretin hormone glucagon-like peptide-1 (GLP-1) facilitates insulin signalling, and analogues such as liraglutide are on the market as treatments for type 2 diabetes. We have previously shown that liraglutide has neuroprotective effects in the APPswe/PS1ΔE9 mouse model of AD. Here, we test the GLP-1 receptor agonist lixisenatide in the same mouse model and compare its effects to liraglutide. After ten weeks of daily i.p. injections of liraglutide (2.5 or 25 nmol/kg), lixisenatide (1 or 10 nmol/kg) or saline in APP/PS1 mice, at an age when amyloid plaques had already formed, performance in an object recognition task was improved by both drugs at all doses tested. When analysing synaptic plasticity in the hippocampus, LTP was strongly increased in APP/PS1 mice by either drug; lixisenatide (1 nmol/kg) was most effective. The reduction of synapse numbers seen in APP/PS1 mice was prevented by the drugs. The amyloid plaque load and dense-core Congo red-positive plaque load in the cortex were reduced by both drugs at all doses. The chronic inflammation response (microglial activation) was also reduced by all treatments. The results demonstrate that the GLP-1 receptor agonists liraglutide and lixisenatide, which are on the market as treatments for type 2 diabetes, show promise as potential drug treatments for AD. Lixisenatide was equally effective at a lower dose compared to liraglutide in some of the parameters measured. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Novel AAV-based rat model of forebrain synucleinopathy shows extensive pathologies and progressive loss of cholinergic interneurons.

    Directory of Open Access Journals (Sweden)

    Patrick Aldrin-Kirk

    Full Text Available Synucleinopathies, characterized by intracellular aggregation of α-synuclein protein, share a number of features in pathology and disease progression. However, the vulnerable cell population differs significantly between the disorders, despite being caused by the same protein. While the vulnerability of dopamine cells in the substantia nigra to α-synuclein over-expression, and its link to Parkinson's disease, is well studied, animal models recapitulating the cortical degeneration in dementia with Lewy bodies (DLB) are much less mature. The aim of this study was to develop a first rat model of widespread progressive synucleinopathy throughout the forebrain using adeno-associated viral (AAV) vector-mediated gene delivery. Through bilateral injection of an AAV6 vector expressing human wild-type α-synuclein into the forebrain of neonatal rats, we were able to achieve widespread, robust α-synuclein expression with preferential expression in the frontal cortex. These animals displayed a progressive emergence of hyper-locomotion and a dysregulated response to the dopaminergic agonist apomorphine. The animals receiving the α-synuclein vector displayed significant α-synuclein pathology, including intra-cellular inclusion bodies, axonal pathology and elevated levels of phosphorylated α-synuclein, accompanied by significant loss of cortical neurons and a progressive reduction in both cortical and striatal ChAT-positive interneurons. Furthermore, we found evidence of α-synuclein sequestered by IBA-1-positive microglia, which was coupled with a distinct change in morphology. In areas of most prominent pathology, the total α-synuclein levels were increased, on average, two-fold, which is similar to the levels observed in patients with SNCA gene triplication, associated with cortical Lewy body pathology. This study provides a novel rat model of progressive cortical synucleinopathy, showing for the first time that cholinergic interneurons are vulnerable

  4. In vitro and in vivo models of cerebral ischemia show discrepancy in therapeutic effects of M2 macrophages.

    Directory of Open Access Journals (Sweden)

    Virginie Desestret

    Full Text Available The inflammatory response following ischemic stroke is dominated by innate immune cells: resident microglia and blood-derived macrophages. The ambivalent role of these cells in stroke outcome might be explained in part by the acquisition of distinct functional phenotypes: classically activated (M1) and alternatively activated (M2) macrophages. To shed light on the crosstalk between hypoxic neurons and macrophages, an in vitro model was set up in which bone marrow-derived macrophages were co-cultured with hippocampal slices subjected to oxygen and glucose deprivation. The results showed that macrophages provided potent protection against neuron cell loss through a paracrine mechanism, and that they expressed M2-type alternative polarization. These findings raised the possibility of using bone marrow-derived M2 macrophages in cellular therapy for stroke. Therefore, 2 million M2 macrophages (or vehicle) were intravenously administered during the subacute stage of ischemia (D4) in a model of transient middle cerebral artery occlusion. Functional neuroscores and magnetic resonance imaging endpoints (infarct volumes, blood-brain barrier integrity, phagocytic activity assessed by iron oxide uptake) were longitudinally monitored for 2 weeks. This cell-based treatment did not significantly improve any outcome measure compared with vehicle, suggesting that this strategy is not relevant to stroke therapy.

  5. Image-based multiscale mechanical modeling shows the importance of structural heterogeneity in the human lumbar facet capsular ligament.

    Science.gov (United States)

    Zarei, Vahhab; Liu, Chao J; Claeson, Amy A; Akkin, Taner; Barocas, Victor H

    2017-08-01

    The lumbar facet capsular ligament (FCL) primarily consists of aligned type I collagen fibers that are mainly oriented across the joint. The aim of this study was to characterize and incorporate in-plane local fiber structure into a multiscale finite element model to predict the mechanical response of the FCL during in vitro mechanical tests, accounting for the heterogeneity in different scales. Characterization was accomplished by using entire-domain polarization-sensitive optical coherence tomography to measure the fiber structure of cadaveric lumbar FCLs ([Formula: see text]). Our imaging results showed that fibers in the lumbar FCL have a highly heterogeneous distribution and are neither isotropic nor completely aligned. The averaged fiber orientation was [Formula: see text] ([Formula: see text] in the inferior region and [Formula: see text] in the middle and superior regions), with respect to lateral-medial direction (superior-medial to inferior-lateral). These imaging data were used to construct heterogeneous structural models, which were then used to predict experimental gross force-strain behavior and the strain distribution during equibiaxial and strip biaxial tests. For equibiaxial loading, the structural model fit the experimental data well but underestimated the lateral-medial forces by [Formula: see text]16% on average. We also observed pronounced heterogeneity in the strain field, with stretch ratios for different elements along the lateral-medial axis of sample typically ranging from about 0.95 to 1.25 during a 12% strip biaxial stretch in the lateral-medial direction. This work highlights the multiscale structural and mechanical heterogeneity of the lumbar FCL, which is significant both in terms of injury prediction and microstructural constituents' (e.g., neurons) behavior.

  6. Efficient Model Selection for Sparse Least-Square SVMs

    Directory of Open Access Journals (Sweden)

    Xiao-Lei Xia

    2013-01-01

    Full Text Available The Forward Least-Squares Approximation (FLSA) SVM is a newly-emerged Least-Square SVM (LS-SVM) whose solution is extremely sparse. The algorithm uses the number of support vectors as the regularization parameter and ensures the linear independency of the support vectors which span the solution. This paper proposes a variant of the FLSA-SVM, namely the Reduced FLSA-SVM (RFLSA-SVM), which has reduced computational complexity and memory requirements. The strategy of “contexts inheritance” is introduced to improve the efficiency of tuning the regularization parameter for both the FLSA-SVM and the RFLSA-SVM algorithms. Experimental results on benchmark datasets showed that, compared to the SVM and a number of its variants, the RFLSA-SVM solutions contain a reduced number of support vectors, while maintaining competitive generalization abilities. With respect to the time cost of tuning the regularization parameter, the RFLSA-SVM algorithm was empirically demonstrated to be the fastest, compared to the FLSA-SVM, the LS-SVM, and the SVM algorithms.

  7. STEPS: efficient simulation of stochastic reaction–diffusion models in realistic morphologies

    Directory of Open Access Journals (Sweden)

    Hepburn Iain

    2012-05-01

    Full Text Available Abstract Background Models of cellular molecular systems are built from components such as biochemical reactions (including interactions between ligands and membrane-bound proteins), conformational changes, and active and passive transport. A discrete, stochastic description of the kinetics is often essential to capture the behavior of the system accurately. Where spatial effects play a prominent role the complex morphology of cells may have to be represented, along with aspects such as chemical localization and diffusion. This high level of detail makes efficiency a particularly important consideration for software that is designed to simulate such systems. Results We describe STEPS, a stochastic reaction–diffusion simulator developed with an emphasis on simulating biochemical signaling pathways accurately and efficiently. STEPS supports all the above-mentioned features, and well-validated support for SBML allows many existing biochemical models to be imported reliably. Complex boundaries can be represented accurately in externally generated 3D tetrahedral meshes imported by STEPS. The powerful Python interface facilitates model construction and simulation control. STEPS implements the composition and rejection method, a variation of the Gillespie SSA, supporting diffusion between tetrahedral elements within an efficient search and update engine. Additional support for well-mixed conditions and for deterministic model solution is implemented. Solver accuracy is confirmed with an original and extensive validation set consisting of isolated reaction, diffusion and reaction–diffusion systems. Accuracy imposes upper and lower limits on tetrahedron sizes, which are described in detail. By comparing to Smoldyn, we show how the voxel-based approach in STEPS is often faster than particle-based methods, with increasing advantage in larger systems, and by comparing to MesoRD we show the efficiency of the STEPS implementation. Conclusion STEPS simulates
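STEPS's solvers are variants of the Gillespie SSA that the abstract mentions. A minimal direct-method sketch for a single well-mixed reaction A + B → C (rate constant and counts are made-up toy values; the spatial tetrahedral-mesh machinery that is the point of STEPS is omitted):

```python
import numpy as np

def gillespie(a0, b0, k=0.01, t_end=10.0, seed=1):
    """Direct-method SSA for the single reaction A + B -> C."""
    rng = np.random.default_rng(seed)
    t, A, B, C = 0.0, a0, b0, 0
    while True:
        prop = k * A * B                   # propensity of the reaction
        if prop == 0:
            break                          # one reactant exhausted
        dt = rng.exponential(1.0 / prop)   # time to next firing
        if t + dt > t_end:
            break
        t += dt
        A, B, C = A - 1, B - 1, C + 1      # apply the stoichiometry
    return A, B, C
```

Each iteration draws an exponential waiting time from the current propensity and fires the reaction; with several reactions the direct method would additionally pick which reaction fires in proportion to its propensity.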

  8. The relative efficiency of Iranian's rural traffic police: a three-stage DEA model.

    Science.gov (United States)

    Rahimi, Habibollah; Soori, Hamid; Nazari, Seyed Saeed Hashemi; Motevalian, Seyed Abbas; Azar, Adel; Momeni, Eskandar; Javartani, Mehdi

    2017-10-13

    Road traffic injuries (RTIs) are a health problem that compels governments to implement different interventions. Achieving targets in this area requires effective and efficient measures. Efficiency evaluation of the traffic police, as one of the responsible administrations, is necessary for resource management. Therefore, this study was conducted to measure the efficiency of Iran's rural traffic police. This was an ecological study. To obtain a pure efficiency score, a three-stage DEA model was applied with seven input and three output variables. At the first stage, a crude efficiency score was measured with the BCC-O model. Next, to extract the effects of socioeconomic, demographic, traffic count and road infrastructure factors (the environmental variables) and statistical noise, a Stochastic Frontier Analysis (SFA) model was applied and the output values were modified to reflect similar environmental and statistical-noise conditions. Then, the pure efficiency score was measured using the modified outputs and the BCC-O model. In total, the efficiency scores of 198 police stations from 24 of 31 provinces were measured. The annual means (standard deviations) of damage, injury and fatal accidents were 247.7 (258.4), 184.9 (176.9), and 28.7 (19.5), respectively. Input averages were 5.9 (3.0) patrol teams, 0.5% (0.2) manpower proportion, 7.5 (2.9) patrol cars, 0.5 (1.3) motorcycles, 77,279.1 (46,794.7) penalties, 90.9 (2.8) cultural and educational activity score, and 0.7 (2.4) speed cameras. The SFA model showed non-significant differences between police station performances, with most of the differences attributed to the environment and random error. One-way main road, by road, traffic count and the number of households owning a motorcycle had significant positive relations with the inefficiency score. The length of freeway/highway and the literacy rate had significant negative relations. The pure efficiency score had a mean of 0.95 and SD of 0.09.
Iran's traffic police has potential opportunity to reduce

  9. Demographical history and palaeodistribution modelling show range shift towards Amazon Basin for a Neotropical tree species in the LGM.

    Science.gov (United States)

    Vitorino, Luciana Cristina; Lima-Ribeiro, Matheus S; Terribile, Levi Carina; Collevatti, Rosane G

    2016-10-13

    We studied the phylogeography and demographical history of Tabebuia serratifolia (Bignoniaceae) to understand the disjunct geographical distribution of South American seasonally dry tropical forests (SDTFs). We specifically tested whether the multiple and isolated patches of SDTFs are current climatic relicts of a widespread and continuously distributed dry forest during the last glacial maximum (LGM), the so-called South American dry forest refugia hypothesis, using ecological niche modelling (ENM) and statistical phylogeography. We sampled 235 individuals of T. serratifolia in 17 populations in Brazil and analysed the polymorphisms at three intergenic chloroplast regions and the ITS nuclear ribosomal DNA. Coalescent analyses showed a demographical expansion in the last c. 130 ka (thousand years before present). Simulations and ENM also showed that the current spatial pattern of genetic diversity is most likely due to a scenario of range expansion and range shift towards the Amazon Basin during the colder and more arid climatic conditions associated with the LGM, matching what is expected under the South American dry forest refugia hypothesis, although contrasting with the Pleistocene Arc hypothesis. Populations in more stable areas or with higher suitability through time showed higher genetic diversity. Postglacial range shift towards the Southeast and Atlantic coast may have led to spatial genome assortment due to leading-edge colonization as the species tracked suitable environments, leading to lower genetic diversity in populations at greater distance from the distribution centroid at 21 ka. Haplotype sharing or common ancestry among populations from the Caatinga in Northeast Brazil, the Atlantic Forest in the Southeast and the Cerrado biome, together with the ENM, evinces the past connection among these biomes.

  10. Model based design of efficient power take-off systems for wave energy converters

    DEFF Research Database (Denmark)

    Hansen, Rico Hjerm; Andersen, Torben Ole; Pedersen, Henrik C.

    2011-01-01

    The Power Take-Off (PTO) is the core of a Wave Energy Converter (WEC), being the technology that converts wave-induced oscillations from mechanical energy to electricity. The induced oscillations are characterized by being slow, with varying frequency and amplitude. As a result, fluid power is often...... an essential part of the PTO, being the only technology having the required force densities. The focus of this paper is to show the achievable efficiency of a PTO system based on a conventional hydro-static transmission topology. The design is performed using a model-based approach. Generic component models...

  11. Evaluation on the efficiency of the construction sector companies in Malaysia with data envelopment analysis model

    Science.gov (United States)

    Weng Hoe, Lam; Jinn, Lim Shun; Weng Siew, Lam; Hai, Tey Kim

    2018-04-01

    In Malaysia, the construction sector is an essential part of driving the development of the Malaysian economy. The construction industry is an economic investment, and its relationship with economic development is well posited. However, the efficiency of the construction sector companies listed on the Kuala Lumpur Stock Exchange (KLSE) has not been actively studied with the Data Envelopment Analysis (DEA) model by past researchers. Hence the purpose of this study is to examine the financial performance of the listed construction sector companies in Malaysia in 2015. The results of this study show that the efficiency of construction sector companies can be obtained by using the DEA model through ratio analysis, where efficiency is defined as the ratio of total outputs to total inputs. This study is significant because the inefficient companies are identified for potential improvement.
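The output-to-input ratio view of efficiency described above can be sketched in a few lines (the company data below are made-up toy numbers; a full DEA model instead solves one linear program per company, choosing the input/output weights most favourable to it):

```python
import numpy as np

def ratio_efficiency(inputs, outputs):
    """Score each company by (total outputs) / (total inputs),
    rescaled so the best performer gets 1.0."""
    ratio = outputs.sum(axis=1) / inputs.sum(axis=1)
    return ratio / ratio.max()

# Hypothetical data: rows = companies, columns = input/output measures.
inputs = np.array([[100.0, 20.0], [80.0, 25.0], [120.0, 30.0]])
outputs = np.array([[60.0, 10.0], [70.0, 12.0], [50.0, 9.0]])
scores = ratio_efficiency(inputs, outputs)
```

A score of 1.0 marks the efficient frontier; companies scoring below it are the candidates for improvement that the study identifies.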

  12. Studies Show Curricular Efficiency Can Be Attained.

    Science.gov (United States)

    Walberg, Herbert J.

    1987-01-01

    Reviews the nine factors contributing to educational productivity, the effectiveness of instructional techniques (mastery learning ranks high and Skinnerian reinforcement has the largest overall effect), and the effects of psychological environments on learning. Includes references and a table. (MD)

  13. Xiao-Qing-Long-Tang shows preventive effect of asthma in an allergic asthma mouse model through neurotrophin regulation

    Science.gov (United States)

    2013-01-01

    Background This study investigates the effect of Xiao-Qing-Long-Tang (XQLT) on neurotrophin in an established mouse model of Dermatophagoides pteronyssinus (Der p)-induced acute allergic asthma and in a LA4 cell line model of lung adenoma. The effects of XQLT on the regulation of nerve growth factor (NGF) and brain-derived neurotrophic factor (BDNF), airway hyper-responsiveness (AHR) and immunoglobulin E were measured. Methods LA4 cells were stimulated with 100 μg/ml Der p for 24 h and the supernatant was collected for ELISA analysis. Der p-stimulated LA4 cells with either XQLT pre-treatment or XQLT co-treatment were used to evaluate the effect of XQLT on neurotrophin. Balb/c mice were sensitized on days 0 and 7 with a base-tail injection of 50 μg Dermatophagoides pteronyssinus (Der p) emulsified in 50 μl incomplete Freund’s adjuvant (IFA). On day 14, mice received an intra-tracheal challenge of 50 μl Der p (2 mg/ml). XQLT (1 g/kg) was administered orally to mice either on days 2, 4, 6, 8, 10 and 12 as a preventive strategy or on day 15 as a therapeutic strategy. Results XQLT inhibited the expression of NGF, BDNF and thymus- and activation-regulated cytokine (TARC) in LA4 cells that were exposed to the Der p allergen. Both preventive and therapeutic treatments with XQLT in mice reduced AHR. Preventive treatment with XQLT markedly decreased NGF in broncho-alveolar lavage fluids (BALF) and BDNF in serum, whereas therapeutic treatment reduced only the serum BDNF level. The reduced NGF levels corresponded to a decrease in AHR under XQLT treatment. Reduced BALF NGF and TARC and serum BDNF levels may have been responsible for decreased eosinophil infiltration into lung tissue. Immunohistochemistry showed that p75NTR and TrkA levels were reduced in the lungs of mice under both XQLT treatment protocols, and this reduction may have been correlated with the prevention of the asthmatic reaction by XQLT. Conclusion XQLT alleviated allergic inflammation including AHR, Ig

  14. The Small Heat Shock Protein α-Crystallin B Shows Neuroprotective Properties in a Glaucoma Animal Model

    Directory of Open Access Journals (Sweden)

    Fabian Anders

    2017-11-01

    Full Text Available Glaucoma is a neurodegenerative disease that leads to irreversible retinal ganglion cell (RGC) loss and is one of the main causes of blindness worldwide. The pathogenesis of glaucoma remains unclear, and novel approaches for neuroprotective treatments are urgently needed. Previous studies have revealed significant down-regulation of α-crystallin B as an initial reaction to elevated intraocular pressure (IOP), followed by a clear but delayed up-regulation, suggesting that this small heat-shock protein plays a pathophysiological role in the disease. This study analyzed the neuroprotective effect of α-crystallin B in an experimental animal model of glaucoma. Significant IOP elevation induced by episcleral vein cauterization resulted in a considerable impairment of the RGCs and the retinal nerve fiber layer. An intravitreal injection of α-crystallin B at the time of the IOP increase was able to rescue the RGCs, as measured by a functional photopic electroretinogram, retinal nerve fiber layer thickness, and RGC counts. Mass-spectrometry-based proteomics and antibody-microarray measurements indicated that an α-crystallin injection distinctly up-regulated all of the subclasses (α, β, and γ) of the crystallin protein family. The creation of an interactive protein network revealed clear correlations between individual proteins, which showed a regulatory shift resulting from the crystallin injection. The neuroprotective properties of α-crystallin B further demonstrate the potential importance of crystallin proteins in developing therapeutic options for glaucoma.

  15. Fourier transform infrared imaging showing reduced unsaturated lipid content in the hippocampus of a mouse model of Alzheimer's disease.

    Science.gov (United States)

    Leskovjan, Andreana C; Kretlow, Ariane; Miller, Lisa M

    2010-04-01

    Polyunsaturated fatty acids are essential to brain functions such as membrane fluidity, signal transduction, and cell survival. It is also thought that low levels of unsaturated lipid in the brain may contribute to Alzheimer's disease (AD) risk or severity. However, it is not known how accumulation of unsaturated lipids is affected in different regions of the hippocampus, which is a central target of AD plaque pathology, during aging. In this study, we used Fourier transform infrared imaging (FTIRI) to visualize the unsaturated lipid content in specific regions of the hippocampus in the PSAPP mouse model of AD as a function of plaque formation. Specifically, the unsaturated lipid content was imaged using the olefinic =CH stretching mode at 3012 cm⁻¹. The axonal, dendritic, and somatic layers of the hippocampus were examined in the mice at 13, 24, 40, and 56 weeks old. Results showed that lipid unsaturation in the axonal layer was significantly increased with normal aging in control (CNT) mice (p avoiding progression of the disease.

  16. Optimizing lengths of confidence intervals: fourth-order efficiency in location models

    NARCIS (Netherlands)

    Klaassen, C.; Venetiaan, S.

    2010-01-01

    Under regularity conditions the maximum likelihood estimator of the location parameter in a location model is asymptotically efficient among translation equivariant estimators. Additional regularity conditions warrant third- and even fourth-order efficiency, in the sense that no translation

  17. Feed Forward Artificial Neural Network Model to Estimate the TPH Removal Efficiency in Soil Washing Process

    Directory of Open Access Journals (Sweden)

    Hossein Jafari Mansoorian

    2017-01-01

    Full Text Available Background & Aims of the Study: A feed-forward artificial neural network (FFANN) was developed to predict the efficiency of total petroleum hydrocarbon (TPH) removal from a contaminated soil, using a soil washing process with Tween 80. The main objective of this study was to assess the performance of the developed FFANN model for the estimation of TPH removal. Materials and Methods: Several independent regressors, including pH, shaking speed, surfactant concentration and contact time, were used to describe the removal of TPH as a dependent variable in a FFANN model. Approximately 85% of the data set observations were used for training the model and the remaining 15% for model testing. The performance of the model was compared with linear regression and assessed using Root Mean Square Error (RMSE) as the goodness-of-fit measure. Results: For the prediction of TPH removal efficiency, a FFANN model with a three-layer structure of 4-3-1 and a learning rate of 0.01 showed the best predictive results. The RMSE and R2 for the training and testing steps of the model were 2.596, 0.966, 10.70 and 0.78, respectively. Conclusion: About 80% of the TPH removal efficiency can be described by the assessed regressors in the developed model. Thus, focusing on the optimization of the soil washing process with regard to shaking speed, contact time, surfactant concentration and pH can improve TPH removal performance from polluted soils. The results of this study could be the basis for the application of FFANN to the assessment of soil washing processes and the control of petroleum hydrocarbon emission into the environment.
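A 4-3-1 network (4 inputs, 3 hidden units, 1 output) with the abstract's learning rate of 0.01 can be sketched directly, together with the RMSE goodness-of-fit measure. This is a generic toy on synthetic data, not the paper's soil-washing model (the epoch count and tanh activation are assumptions):

```python
import numpy as np

def rmse(y, yhat):
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def train_ffann(X, y, hidden=3, lr=0.01, epochs=4000, seed=0):
    """Tiny 4-3-1 feed-forward net (tanh hidden layer, linear output),
    trained by plain batch gradient descent on mean squared error."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1));          b2 = np.zeros(1)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)             # forward pass: hidden activations
        yhat = (H @ W2 + b2).ravel()
        err = (yhat - y)[:, None] / len(y)   # d(0.5*MSE)/d(yhat)
        dH = (err @ W2.T) * (1 - H ** 2)     # backprop through tanh
        W2 -= lr * H.T @ err;  b2 -= lr * err.sum(0)
        W1 -= lr * X.T @ dH;   b1 -= lr * dH.sum(0)
    return lambda Xn: (np.tanh(Xn @ W1 + b1) @ W2 + b2).ravel()
```

With four regressors (e.g. pH, shaking speed, surfactant concentration, contact time) the trained predictor should beat a constant mean predictor on RMSE.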

  18. Efficient scatter model for simulation of ultrasound images from computed tomography data

    Science.gov (United States)

    D'Amato, J. P.; Lo Vercio, L.; Rubi, P.; Fernandez Vera, E.; Barbuzza, R.; Del Fresno, M.; Larrabide, I.

    2015-12-01

    Background and motivation: Real-time ultrasound simulation refers to the process of computationally creating fully synthetic ultrasound images instantly. Due to the high value of specialized low-cost training for healthcare professionals, there is a growing interest in the use of this technology and the development of high-fidelity systems that simulate the acquisition of echographic images. The objective is to create an efficient and reproducible simulator that can run either on notebooks or desktops using low-cost devices. Materials and methods: We present an interactive ultrasound simulator based on CT data. This simulator is based on ray-casting and provides real-time interaction capabilities. The simulation of scattering that is coherent with the transducer position in real time is also introduced. Such noise is produced using a simplified model of multiplicative noise and convolution with point spread functions (PSF) tailored for this purpose. Results: The computational efficiency of scattering map generation was revised with improved performance. This allowed a more efficient simulation of coherent scattering in the synthetic echographic images while providing highly realistic results. We describe some quality and performance metrics to validate these results, where a performance of up to 55 fps was achieved. Conclusion: The proposed technique for real-time scattering modeling provides realistic yet computationally efficient scatter distributions. The error between the original image and the simulated scattering image was compared for the proposed method and the state of the art, showing negligible differences in its distribution.
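The "multiplicative noise plus PSF convolution" scatter model described above can be sketched in a few lines (the Rayleigh-distributed speckle and Gaussian PSF are common textbook choices and an assumption here, not the paper's tailored PSFs):

```python
import numpy as np

def simulate_scatter(image, psf_sigma=1.5, seed=0):
    """Speckle-like multiplicative noise followed by FFT convolution
    with a unit-sum Gaussian point-spread function."""
    rng = np.random.default_rng(seed)
    noisy = image * rng.rayleigh(scale=1.0, size=image.shape)  # multiplicative noise
    # Build a centered Gaussian PSF on the same grid, then convolve via FFT.
    ny, nx = image.shape
    y, x = np.mgrid[-ny // 2:ny // 2, -nx // 2:nx // 2]
    psf = np.exp(-(x ** 2 + y ** 2) / (2 * psf_sigma ** 2))
    psf /= psf.sum()
    # ifftshift moves the PSF peak to the origin for circular convolution.
    return np.real(np.fft.ifft2(np.fft.fft2(noisy) * np.fft.fft2(np.fft.ifftshift(psf))))
```

FFT-based convolution keeps the per-frame cost low, which is the kind of efficiency the abstract's real-time (55 fps) figure depends on.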

  19. A resource allocation model to support efficient air quality ...

    African Journals Online (AJOL)

    Efficient implementation of policies and strategies requires that ... and source, emissions, air quality and meteorological data reporting.

  20. Efficient anisotropic wavefield extrapolation using effective isotropic models

    KAUST Repository

    Alkhalifah, Tariq Ali; Ma, X.; Waheed, Umair bin; Zuberi, Mohammad

    2013-01-01

    Isotropic wavefield extrapolation is more efficient than anisotropic extrapolation, and this is especially true when the anisotropy of the medium is tilted (from the vertical). We use the kinematics of the wavefield, appropriately represented

  1. Computer-aided modeling framework for efficient model development, analysis and identification

    DEFF Research Database (Denmark)

    Heitzig, Martina; Sin, Gürkan; Sales Cruz, Mauricio

    2011-01-01

    Model-based computer aided product-process engineering has attained increased importance in a number of industries, including pharmaceuticals, petrochemicals, fine chemicals, polymers, biotechnology, food, energy, and water. This trend is set to continue due to the substantial benefits computer-aided...... methods introduce. The key prerequisite of computer-aided product-process engineering is however the availability of models of different types, forms, and application modes. The development of the models required for the systems under investigation tends to be a challenging and time-consuming task....... The methodology has been implemented into a computer-aided modeling framework, which combines expert skills, tools, and database connections that are required for the different steps of the model development work-flow with the goal to increase the efficiency of the modeling process. The framework has two main...

  2. Modeling light use efficiency in a subtropical mangrove forest equipped with CO2 eddy covariance

    Directory of Open Access Journals (Sweden)

    J. G. Barr

    2013-03-01

    Full Text Available Despite the importance of mangrove ecosystems in the global carbon budget, the relationships between environmental drivers and carbon dynamics in these forests remain poorly understood. This limited understanding is partly a result of the challenges associated with in situ flux studies. Tower-based CO2 eddy covariance (EC) systems are installed in only a few mangrove forests worldwide, and the longest EC record from the Florida Everglades contains less than 9 years of observations. A primary goal of the present study was to develop a methodology to estimate canopy-scale photosynthetic light use efficiency in this forest. These tower-based observations represent a basis for associating CO2 fluxes with canopy light use properties, and thus provide the means for utilizing satellite-based reflectance data for larger scale investigations. We present a model for mangrove canopy light use efficiency utilizing the enhanced green vegetation index (EVI) derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) that is capable of predicting changes in mangrove forest CO2 fluxes caused by a hurricane disturbance and changes in regional environmental conditions, including temperature and salinity. Model parameters are solved for in a Bayesian framework. The model structure requires estimates of ecosystem respiration (RE), and we present the first ever tower-based estimates of mangrove forest RE derived from nighttime CO2 fluxes. Our investigation is also the first to show the effects of salinity on mangrove forest CO2 uptake, which declines 5% per each 10 parts per thousand (ppt) increase in salinity. Light use efficiency in this forest declines with increasing daily photosynthetic active radiation, which is an important departure from the assumption of constant light use efficiency typically applied in satellite-driven models. The model developed here provides a framework for estimating CO2 uptake by these forests from reflectance data and
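The skeleton of a light-use-efficiency model like the one described, including the abstract's 5%-per-10-ppt salinity decline, fits in a few lines. Everything except that salinity figure (the base efficiency `eps0` and the simple GPP = ε × EVI × PAR form) is a hypothetical placeholder, not the paper's Bayesian-calibrated model:

```python
def mangrove_gpp(par, evi, salinity_ppt, eps0=0.05):
    """Toy LUE-style flux estimate: GPP = eps * EVI * PAR, where the
    light use efficiency eps is reduced by 5% per 10 ppt of salinity
    (the decline reported in the abstract)."""
    eps = eps0 * (1.0 - 0.05 * salinity_ppt / 10.0)
    return eps * evi * par
```

The paper's actual model additionally lets efficiency fall with increasing daily PAR rather than holding it constant, which is the stated departure from typical satellite-driven LUE models.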

  3. Energy efficiency analysis method based on fuzzy DEA cross-model for ethylene production systems in chemical industry

    International Nuclear Information System (INIS)

    Han, Yongming; Geng, Zhiqiang; Zhu, Qunxiong; Qu, Yixin

    2015-01-01

    DEA (data envelopment analysis) has been widely used for the efficiency analysis of industrial production process. However, the conventional DEA model is difficult to analyze the pros and cons of the multi DMUs (decision-making units). The DEACM (DEA cross-model) can distinguish the pros and cons of the effective DMUs, but it is unable to take the effect of the uncertainty data into account. This paper proposes an efficiency analysis method based on FDEACM (fuzzy DEA cross-model) with Fuzzy Data. The proposed method has better objectivity and resolving power for the decision-making. First we obtain the minimum, the median and the maximum values of the multi-criteria ethylene energy consumption data by the data fuzzification. On the basis of the multi-criteria fuzzy data, the benchmark of the effective production situations and the improvement directions of the ineffective of the ethylene plants under different production data configurations are obtained by the FDEACM. The experimental result shows that the proposed method can improve the ethylene production conditions and guide the efficiency of energy utilization during ethylene production process. - Highlights: • This paper proposes an efficiency analysis method based on FDEACM (fuzzy DEA cross-model) with data fuzzification. • The proposed method is more efficient and accurate than other methods. • We obtain an energy efficiency analysis framework and process based on FDEACM in ethylene production industry. • The proposed method is valid and efficient in improvement of energy efficiency in the ethylene plants

  4. Efficient Output Solution for Nonlinear Stochastic Optimal Control Problem with Model-Reality Differences

    Directory of Open Access Journals (Sweden)

    Sie Long Kek

    2015-01-01

    Full Text Available A computational approach is proposed for solving the discrete time nonlinear stochastic optimal control problem. Our aim is to obtain the optimal output solution of the original optimal control problem through solving the simplified model-based optimal control problem iteratively. In our approach, the adjusted parameters are introduced into the model used such that the differences between the real system and the model used can be computed. Particularly, system optimization and parameter estimation are integrated interactively. On the other hand, the output is measured from the real plant and is fed back into the parameter estimation problem to establish a matching scheme. During the calculation procedure, the iterative solution is updated in order to approximate the true optimal solution of the original optimal control problem despite model-reality differences. For illustration, a wastewater treatment problem is studied and the results show the efficiency of the approach proposed.

  5. Efficient Symmetry Reduction and the Use of State Symmetries for Symbolic Model Checking

    Directory of Open Access Journals (Sweden)

    Christian Appold

    2010-06-01

    Full Text Available One technique to reduce the state-space explosion problem in temporal logic model checking is symmetry reduction. The combination of symmetry reduction and symbolic model checking by using BDDs suffered a long time from the prohibitively large BDD for the orbit relation. Dynamic symmetry reduction calculates representatives of equivalence classes of states dynamically and thus avoids the construction of the orbit relation. In this paper, we present a new efficient model checking algorithm based on dynamic symmetry reduction. Our experiments show that the algorithm is very fast and allows the verification of larger systems. We additionally implemented the use of state symmetries for symbolic symmetry reduction. To our knowledge we are the first who investigated state symmetries in combination with BDD based symbolic model checking.
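The core idea of dynamic symmetry reduction — mapping each state to a canonical representative of its equivalence class on the fly, instead of building an orbit relation — can be shown with a toy example. This is an illustrative sketch assuming full symmetry between identical processes, not the BDD-based algorithm of the paper:

```python
# Toy sketch (assumption): the canonical representative of a state under
# full process symmetry is the lexicographically smallest permutation of
# the process-local states.
from itertools import permutations

def representative(state):
    """Canonical representative: minimum over all permutations of the state."""
    return min(permutations(state))

# Two states that differ only by swapping identical processes collapse to
# the same representative, so the model checker explores only one of them.
s1 = ("critical", "idle", "waiting")
s2 = ("idle", "waiting", "critical")
assert representative(s1) == representative(s2)
print(representative(s1))  # ('critical', 'idle', 'waiting')
```

In practice only the symmetries valid for the system are used, and the minimization is encoded symbolically rather than by enumerating permutations.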

  6. Modeling energy efficiency to improve air quality and health effects of China’s cement industry

    International Nuclear Information System (INIS)

    Zhang, Shaohui; Worrell, Ernst; Crijns-Graus, Wina; Krol, Maarten; Bruine, Marco de; Geng, Guangpo; Wagner, Fabian; Cofala, Janusz

    2016-01-01

    Highlights: • An integrated model was used to assess the co-benefits for China's cement industry. • PM2.5 would decrease by 2–4% by 2030 through improved energy efficiency. • 10,000 premature deaths would be avoided per year relative to the baseline scenario. • Total benefits are about two times higher than the energy efficiency costs. - Abstract: Actions to reduce the combustion of fossil fuels often decrease GHG emissions as well as air pollutants and bring multiple benefits for the improvement of energy efficiency, climate change, and air quality, with associated human health benefits. China's cement industry is the second largest energy consumer and a key emitter of CO2 and air pollutants, accounting for 7% of China's total energy consumption, 15% of CO2 emissions, and 14% of PM2.5 emissions, respectively. In this study, a state-of-the-art modeling framework is developed that comprises a number of different methods and tools within the same platform (i.e. provincial energy conservation supply curves, the Greenhouse Gases and Air Pollution Interactions and Synergies model, ArcGIS, the global chemistry Transport Model version 5, and Health Impact Assessment) to assess the potential for energy savings and emission mitigation of CO2 and PM2.5, as well as the health impacts of pollution arising from China's cement industry. The results show significant heterogeneity across provinces in terms of the potential for PM2.5 emission reduction and PM2.5 concentration, as well as health impacts caused by PM2.5. Implementation of selected energy efficiency measures would decrease total PM2.5 emissions by 2% (range: 1–4%) in 2020 and 4% (range: 2–8%) by 2030, compared to the baseline scenario. The reduction potential of provincial annual PM2.5 concentrations ranges from 0.03% to 2.21% by 2030, when compared to the baseline scenario. 10,000 premature deaths are avoided by 2020 and 2030 respectively relative to the baseline scenario. The

  7. Balancing accuracy, efficiency, and flexibility in a radiative transfer parameterization for dynamical models

    Science.gov (United States)

    Pincus, R.; Mlawer, E. J.

    2017-12-01

    Radiation is a key process in numerical models of the atmosphere. The problem is well-understood and the parameterization of radiation has seen relatively few conceptual advances in the past 15 years. It is nonetheless often the single most expensive component of all physical parameterizations despite being computed less frequently than other terms. This combination of cost and maturity suggests value in a single radiation parameterization that could be shared across models; devoting effort to a single parameterization might allow for fine tuning for efficiency. The challenge lies in the coupling of this parameterization to many disparate representations of clouds and aerosols. This talk will describe RRTMGP, a new radiation parameterization that seeks to balance efficiency and flexibility. This balance is struck by isolating computational tasks in "kernels" that expose as much fine-grained parallelism as possible. These have simple interfaces and are interoperable across programming languages so that they might be replaced by alternative implementations in domain-specific languages. Coupling to the host model makes use of object-oriented features of Fortran 2003, minimizing branching within the kernels and the amount of data that must be transferred. We will show accuracy and efficiency results for a globally-representative set of atmospheric profiles using a relatively high-resolution spectral discretization.

  8. Intercomparison of terrestrial carbon fluxes and carbon use efficiency simulated by CMIP5 Earth System Models

    Science.gov (United States)

    Kim, Dongmin; Lee, Myong-In; Jeong, Su-Jong; Im, Jungho; Cha, Dong Hyun; Lee, Sanggyun

    2017-12-01

    This study compares historical simulations of the terrestrial carbon cycle produced by 10 Earth System Models (ESMs) that participated in the fifth phase of the Coupled Model Intercomparison Project (CMIP5). Using MODIS satellite estimates, this study validates the simulation of gross primary production (GPP), net primary production (NPP), and carbon use efficiency (CUE), which depend on plant function types (PFTs). The models show noticeable deficiencies compared to the MODIS data in the simulation of the spatial patterns of GPP and NPP and large differences among the simulations, although the multi-model ensemble (MME) mean provides a realistic global mean value and spatial distributions. The larger model spreads in GPP and NPP compared to those of surface temperature and precipitation suggest that the differences among simulations in terms of the terrestrial carbon cycle are largely due to uncertainties in the parameterization of terrestrial carbon fluxes by vegetation. The models also exhibit large spatial differences in their simulated CUE values and at locations where the dominant PFT changes, primarily due to differences in the parameterizations. While the MME-simulated CUE values show a strong dependence on surface temperatures, the observed CUE values from MODIS show greater complexity, as well as non-linear sensitivity. This leads to the overall underestimation of CUE using most of the PFTs incorporated into current ESMs. The results of this comparison suggest that more careful and extensive validation is needed to improve the terrestrial carbon cycle in terms of ecosystem-level processes.
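Carbon use efficiency as compared above is the ratio of net to gross primary production, CUE = NPP / GPP. A minimal sketch of that definition; the flux values below are made-up illustrative numbers, not CMIP5 or MODIS data:

```python
# Sketch: carbon use efficiency, the fraction of photosynthetically fixed
# carbon retained as growth. Inputs are illustrative (gC m^-2 yr^-1).

def carbon_use_efficiency(npp, gpp):
    """CUE = NPP / GPP."""
    if gpp <= 0:
        raise ValueError("GPP must be positive")
    return npp / gpp

print(carbon_use_efficiency(600.0, 1200.0))  # 0.5
```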

  9. A physiological foundation for the nutrition-based efficiency wage model

    DEFF Research Database (Denmark)

    Dalgaard, Carl-Johan Lars; Strulik, Holger

    2011-01-01

    Drawing on recent research on allometric scaling and energy consumption, the present paper develops a nutrition-based efficiency wage model from first principles. The biologically micro-founded model allows us to address empirical criticism of the original nutrition-based efficiency wage model...

  10. Efficient modeling of chiral media using SCN-TLM method

    Directory of Open Access Journals (Sweden)

    Yaich M.I.

    2004-01-01

    Full Text Available An efficient approach that allows the inclusion of linear bi-isotropic chiral materials in time-domain transmission line matrix (TLM) calculations, by employing recursive evaluation of the convolution of the electric and magnetic fields with the susceptibility functions, is presented. The new technique consists of adding both voltage and current sources in supplementary stubs of the symmetrical condensed node (SCN) of the TLM method. In this article, the details and the complete description of this approach are given. A comparison of the obtained numerical results with those of the literature confirms its validity and efficiency.
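The recursive-convolution trick mentioned above exploits the fact that, for an exponentially decaying susceptibility, the running convolution with the field history can be updated in O(1) per time step instead of re-summing the whole history. The sketch below is a generic illustration under that assumption; all symbols and values are illustrative, not the SCN-TLM formulation:

```python
# Sketch (assumption): for chi(t) = chi0 * exp(-t/tau), the discrete
# convolution accumulator obeys psi_{n+1} = exp(-dt/tau)*psi_n + chi0*dt*E_n.
import math

def recursive_convolution(field_samples, chi0, tau, dt):
    """O(1)-per-step update of the field/susceptibility convolution."""
    decay = math.exp(-dt / tau)
    psi, history = 0.0, []
    for e in field_samples:
        psi = decay * psi + chi0 * dt * e
        history.append(psi)
    return history

def direct_convolution(field_samples, chi0, tau, dt):
    """Reference O(n^2) discrete convolution for comparison."""
    return [sum(chi0 * dt * field_samples[k] * math.exp(-(n - k) * dt / tau)
                for k in range(n + 1))
            for n in range(len(field_samples))]

samples = [1.0, 0.5, -0.25, 0.75]
fast = recursive_convolution(samples, chi0=2.0, tau=1e-9, dt=1e-10)
slow = direct_convolution(samples, chi0=2.0, tau=1e-9, dt=1e-10)
assert all(abs(a - b) < 1e-18 for a, b in zip(fast, slow))
```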

  11. Efficient modeling of photonic crystals with local Hermite polynomials

    International Nuclear Information System (INIS)

    Boucher, C. R.; Li, Zehao; Albrecht, J. D.; Ram-Mohan, L. R.

    2014-01-01

    Developing compact algorithms for accurate electrodynamic calculations with minimal computational cost is an active area of research given the increasing complexity in the design of electromagnetic composite structures such as photonic crystals, metamaterials, optical interconnects, and on-chip routing. We show that electric and magnetic (EM) fields can be calculated using scalar Hermite interpolation polynomials as the numerical basis functions without having to invoke edge-based vector finite elements to suppress spurious solutions or to satisfy boundary conditions. This approach offers several fundamental advantages as evidenced through band structure solutions for periodic systems and through waveguide analysis. Compared with reciprocal space (plane wave expansion) methods for periodic systems, advantages are shown in computational costs, the ability to capture spatial complexity in the dielectric distributions, the demonstration of numerical convergence with scaling, and variational eigenfunctions free of numerical artifacts that arise from mixed-order real space basis sets or the inherent aberrations from transforming reciprocal space solutions of finite expansions. The photonic band structure of a simple crystal is used as a benchmark comparison and the ability to capture the effects of spatially complex dielectric distributions is treated using a complex pattern with highly irregular features that would stress spatial transform limits. This general method is applicable to a broad class of physical systems, e.g., to semiconducting lasers which require simultaneous modeling of transitions in quantum wells or dots together with EM cavity calculations, to modeling plasmonic structures in the presence of EM field emissions, and to on-chip propagation within monolithic integrated circuits
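The appeal of scalar Hermite interpolation polynomials as basis functions is that they interpolate both nodal values and nodal derivatives, giving C^1 continuity across elements. A one-dimensional toy sketch of the cubic Hermite basis (not the paper's 2D/3D electromagnetic formulation):

```python
# 1D toy sketch: cubic Hermite interpolation on [0, 1] from endpoint values
# and endpoint slopes, using the standard basis h00, h10, h01, h11.

def hermite_cubic(t, f0, f1, df0, df1):
    """Interpolate at t in [0, 1] from values f0, f1 and slopes df0, df1."""
    h00 = 2*t**3 - 3*t**2 + 1
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    return h00*f0 + h10*df0 + h01*f1 + h11*df1

# Endpoint data of f(t) = t^2 is reproduced exactly, since the cubic
# Hermite space contains all quadratics: f(0)=0, f(1)=1, f'(0)=0, f'(1)=2.
print(hermite_cubic(0.5, 0.0, 1.0, 0.0, 2.0))  # 0.25
```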

  12. Efficient Multi-Valued Bounded Model Checking for LTL over Quasi-Boolean Algebras

    Science.gov (United States)

    Andrade, Jefferson O.; Kameyama, Yukiyoshi

    Multi-valued Model Checking extends classical, two-valued model checking to multi-valued logic such as Quasi-Boolean logic. The added expressivity is useful in dealing with such concepts as incompleteness and uncertainty in target systems, while it comes with the cost of time and space. Chechik and others proposed an efficient reduction from multi-valued model checking problems to two-valued ones, but to the authors' knowledge, no study was done for multi-valued bounded model checking. In this paper, we propose a novel, efficient algorithm for multi-valued bounded model checking. A notable feature of our algorithm is that it is not based on reduction of multi-values into two-values; instead, it generates a single formula which represents multi-valuedness by a suitable encoding, and asks a standard SAT solver to check its satisfiability. Our experimental results show a significant improvement in the number of variables and clauses and also in execution time compared with the reduction-based one.

  13. Validated biomechanical model for efficiency and speed of rowing.

    Science.gov (United States)

    Pelz, Peter F; Vergé, Angela

    2014-10-17

    The speed of a competitive rowing crew depends on the number of crew members, their body mass, sex and the type of rowing (sweep rowing or sculling). The time-averaged speed is proportional to the rower's body mass to the 1/36th power, to the number of crew members to the 1/9th power and to the physiological efficiency (accounted for by the rower's sex) to the 1/3rd power. The quality of the rowing shell and propulsion system is captured by one dimensionless parameter that takes the mechanical efficiency, the shape and drag coefficient of the shell and the Froude propulsion efficiency into account. We derive the biomechanical equation for the speed of rowing by two independent methods and further validate it by successfully predicting race times. We derive the theoretical upper limit of the Froude propulsion efficiency for low viscous flows. This upper limit is shown to be a function solely of the velocity ratio of blade to boat speed (i.e., it is completely independent of the blade shape), a result that may also be of interest for other repetitive propulsion systems. Copyright © 2014 Elsevier Ltd. All rights reserved.
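The stated scaling law, v ∝ m^(1/36) · n^(1/9) · η^(1/3), can be used to compare two crews directly, since the boat-specific proportionality constant cancels in the ratio. A sketch under that assumption, with illustrative crew values:

```python
# Sketch: speed ratio v1/v2 implied by the scaling law
# v ∝ m^(1/36) * n^(1/9) * eta^(1/3)
# (rower body mass m, crew size n, physiological efficiency eta).

def relative_speed(m1, n1, eta1, m2, n2, eta2):
    """Speed ratio of crew 1 to crew 2; the shell constant cancels."""
    return ((m1 / m2) ** (1 / 36)) * ((n1 / n2) ** (1 / 9)) * ((eta1 / eta2) ** (1 / 3))

# An eight versus a single sculler, equal body mass and efficiency:
# the eight is predicted to be 8^(1/9) ≈ 1.26 times faster.
print(relative_speed(90, 8, 1.0, 90, 1, 1.0))
```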

  14. Modeling the irradiance dependency of the quantum efficiency of photosynthesis

    NARCIS (Netherlands)

    Silsbe, G.M.; Kromkamp, J.C.

    2012-01-01

    Measures of the quantum efficiency of photosynthesis (phi(PSII)) across an irradiance (E) gradient are an increasingly common physiological assay and alternative to traditional photosynthetic-irradiance (PE) assays. Routinely, the analysis and interpretation of these data are analogous to PE

  15. Modeling Vertical Flow Treatment Wetland Hydraulics to Optimize Treatment Efficiency

    Science.gov (United States)

    2011-03-24

    be forced to flow in a 90° serpentine manner back and forth as it moves upward through the wetland (think waiting in line at Disneyland). This...

  16. Modeling efficient resource allocation patterns for arable crop ...

    African Journals Online (AJOL)

    optimum plans. Complementing this with strong financial support, farm advisory services and an adequate supply of modern inputs at fairly competitive prices would enhance the prospects of smallholder farmers. Keywords: efficient, resource allocation, optimization, linear programming, gross margin ...

  17. A resource allocation model to support efficient air quality ...

    African Journals Online (AJOL)

    Research into management interventions that create the required enabling environment for growth and development in South Africa is both timely and appropriate. In the research reported in this paper, the authors investigated the level of efficiency of the Air Quality Units within the three spheres of government viz.

  18. Cost efficiency of Japanese steam power generation companies: A Bayesian comparison of random and fixed frontier models

    Energy Technology Data Exchange (ETDEWEB)

    Assaf, A. George [Isenberg School of Management, University of Massachusetts-Amherst, 90 Campus Center Way, Amherst 01002 (United States); Barros, Carlos Pestana [Instituto Superior de Economia e Gestao, Technical University of Lisbon, Rua Miguel Lupi, 20, 1249-078 Lisbon (Portugal); Managi, Shunsuke [Graduate School of Environmental Studies, Tohoku University, 6-6-20 Aramaki-Aza Aoba, Aoba-Ku, Sendai 980-8579 (Japan)

    2011-04-15

    This study analyses and compares the cost efficiency of Japanese steam power generation companies using fixed and random Bayesian frontier models. We show that it is essential to account for heterogeneity in modelling the performance of energy companies. Results from the model estimation also indicate that restricting CO2 emissions can lead to a decrease in total cost. The study finally discusses the efficiency variations between the energy companies under analysis, and elaborates on the managerial and policy implications of the results.

  19. Efficient ECG Signal Compression Using Adaptive Heart Model

    National Research Council Canada - National Science Library

    Szilagyi, S

    2001-01-01

    This paper presents an adaptive, heart-model-based electrocardiography (ECG) compression method. After conventional pre-filtering, the waves from the signal are localized and the model's parameters are determined...

  20. ESTIMATION OF EFFICIENCY OF THE COMPETITIVE COOPERATION MODEL

    Directory of Open Access Journals (Sweden)

    Natalia N. Liparteliani

    2014-01-01

    Full Text Available A competitive cooperation model of regional travel agencies and travel market participants is considered. An evaluation of the model using mathematical and statistical methods was carried out. Relationship marketing provides a travel company with certain economic advantages.

  1. An adaptive grid to improve the efficiency and accuracy of modelling underwater noise from shipping

    Science.gov (United States)

    Trigg, Leah; Chen, Feng; Shapiro, Georgy; Ingram, Simon; Embling, Clare

    2017-04-01

    represents a 2 to 5-fold increase in efficiency. The 5 km grid reduces the number of model executions further to 1024. However, over the first 25 km the 5 km grid produces errors of up to 13.8 dB when compared to the highly accurate but inefficient 1 km grid. The newly developed adaptive grid generates much smaller errors of less than 0.5 dB while demonstrating high computational efficiency. Our results show that the adaptive grid provides the ability to retain the accuracy of noise level predictions and improve the efficiency of the modelling process. This can help safeguard sensitive marine ecosystems from noise pollution by improving the underwater noise predictions that inform management activities. References Shapiro, G., Chen, F., Thain, R., 2014. The Effect of Ocean Fronts on Acoustic Wave Propagation in a Shallow Sea, Journal of Marine System, 139: 217 - 226. http://dx.doi.org/10.1016/j.jmarsys.2014.06.007.

  2. Uncertainty quantification in Rothermel's Model using an efficient sampling method

    Science.gov (United States)

    Edwin Jimenez; M. Yousuff Hussaini; Scott L. Goodrick

    2007-01-01

    The purpose of the present work is to quantify parametric uncertainty in Rothermel’s wildland fire spread model (implemented in software such as BehavePlus3 and FARSITE), which is undoubtedly among the most widely used fire spread models in the United States. This model consists of a nonlinear system of equations that relates environmental variables (input parameter...

  3. Efficient Modelling, Generation and Analysis of Markov Automata

    NARCIS (Netherlands)

    Timmer, Mark

    2013-01-01

    Quantitative model checking is concerned with the verification of both quantitative and qualitative properties over models incorporating quantitative information. Increases in expressivity of the models involved allow more types of systems to be analysed, but also raise the difficulty of their

  4. Oracle Efficient Variable Selection in Random and Fixed Effects Panel Data Models

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl

    This paper generalizes the results for the Bridge estimator of Huang et al. (2008) to linear random and fixed effects panel data models which are allowed to grow in both dimensions. In particular we show that the Bridge estimator is oracle efficient. It can correctly distinguish between relevant and irrelevant variables and the asymptotic distribution of the estimators of the coefficients of the relevant variables is the same as if only these had been included in the model, i.e. as if an oracle had revealed the true model prior to estimation. In the case of more explanatory variables than observations, we prove that the Marginal Bridge estimator can asymptotically correctly distinguish between relevant and irrelevant explanatory variables. We do this without restricting the dependence between covariates and without assuming sub-Gaussianity of the error terms, thereby generalizing the results...

  5. Efficiently Synchronized Spread-Spectrum Audio Watermarking with Improved Psychoacoustic Model

    Directory of Open Access Journals (Sweden)

    Xing He

    2008-01-01

    Full Text Available This paper presents an audio watermarking scheme which is based on an efficiently synchronized spread-spectrum technique and a new psychoacoustic model computed using the discrete wavelet packet transform. The psychoacoustic model takes advantage of the multiresolution analysis of a wavelet transform, which closely approximates the standard critical band partition. The goal of this model is to include an accurate time-frequency analysis and to calculate both the frequency and temporal masking thresholds directly in the wavelet domain. Experimental results show that this watermarking scheme can successfully embed watermarks into digital audio without introducing audible distortion. Several common watermark attacks were applied and the results indicate that the method is very robust to those attacks.

  6. Reliability and Efficiency of Generalized Rumor Spreading Model on Complex Social Networks

    International Nuclear Information System (INIS)

    Naimi, Yaghoob; Naimi, Mohammad

    2013-01-01

    We introduce the generalized rumor spreading model and investigate some properties of this model on different complex social networks. Unlike previous rumor models, in which both the spreader-spreader (SS) and the spreader-stifler (SR) interactions have the same rate α, we define α(1) and α(2) for SS and SR interactions, respectively. The effect of variation of α(1) and α(2) on the final density of stiflers is investigated. Furthermore, the influence of the topological structure of the network on rumor spreading is studied by analyzing the behavior of several global parameters such as reliability and efficiency. Our results show that while networks with homogeneous connectivity patterns reach a higher reliability, scale-free topologies need less time to reach a steady state with respect to the rumor. (interdisciplinary physics and related areas of science and technology)
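A homogeneous-mixing (mean-field) version of such a rumor model, with separate stifling rates for spreader-spreader and spreader-stifler contacts, can be integrated in a few lines. This is an illustrative approximation, not the network model of the paper; all parameter values are assumptions:

```python
# Mean-field sketch (assumption): ignorant/spreader/stifler densities i, s, r
# on a network with mean degree k; lam is the spreading rate, alpha1 the
# spreader-spreader stifling rate, alpha2 the spreader-stifler stifling rate.

def simulate(lam, alpha1, alpha2, k=6.0, dt=1e-3, steps=20000):
    """Euler-integrate the homogeneous mean-field rumor equations."""
    i, s, r = 0.999, 0.001, 0.0   # ignorant, spreader, stifler densities
    for _ in range(steps):
        spread = lam * k * i * s                          # ignorant -> spreader
        stifle = alpha1 * k * s * s + alpha2 * k * s * r  # spreader -> stifler
        i -= spread * dt
        s += (spread - stifle) * dt
        r += stifle * dt
    return i, s, r

i, s, r = simulate(lam=1.0, alpha1=0.5, alpha2=0.5)
print(f"final stifler density: {r:.3f}")
```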

  7. Robust and efficient solution procedures for association models

    DEFF Research Database (Denmark)

    Michelsen, Michael Locht

    2006-01-01

    Equations of state that incorporate the Wertheim association expression are more difficult to apply than conventional pressure explicit equations, because the association term is implicit and requires solution for an internal set of composition variables. In this work, we analyze the convergence behavior of different solution methods and demonstrate how a simple and efficient, yet globally convergent, procedure for the solution of the equation of state can be formulated.

  8. Hybrid liposomes showing enhanced accumulation in tumors as theranostic agents in the orthotopic graft model mouse of colorectal cancer.

    Science.gov (United States)

    Okumura, Masaki; Ichihara, Hideaki; Matsumoto, Yoko

    2018-11-01

    Hybrid liposomes (HLs) can be prepared by simply sonicating a mixture of vesicular and micellar molecules in a buffer solution. This study aimed to elucidate the therapeutic effects and ability of HLs to detect (diagnosis) cancer in an orthotopic graft mouse model of colorectal cancer with HCT116 cells for the use of HLs as theranostic agents. In the absence of a chemotherapeutic drug, HLs exhibited therapeutic effects by inhibiting the growth of HCT116 colorectal cancer cells in vitro, possibly through an increase in apoptosis. Intravenously administered HLs also caused a remarkable reduction in the relative cecum weight in an orthotopic graft mouse model of colorectal cancer. A decrease in tumor size in the cecal sections was confirmed by histological analysis using HE staining. TUNEL staining indicated an induction of apoptosis in HCT116 cells in the orthotopic graft mouse model of colorectal cancer. For the detection (diagnosis) of colorectal cancer by HLs, the accumulation of HLs encapsulating a fluorescent probe (ICG) was observed in HCT116 cells in the in vivo colorectal cancer model following intravenous administration. These data indicate that HLs can accumulate in tumor cells in the cecum of the orthotopic graft mouse model of colorectal cancer for a prolonged period of time, and inhibit the growth of HCT116 cells.

  9. Spatially Explicit Estimation of Optimal Light Use Efficiency for Improved Satellite Data Driven Ecosystem Productivity Modeling

    Science.gov (United States)

    Madani, N.; Kimball, J. S.; Running, S. W.

    2014-12-01

    Remote sensing based light use efficiency (LUE) models, including the MODIS (MODerate resolution Imaging Spectroradiometer) MOD17 algorithm, are commonly used for regional estimation and monitoring of vegetation gross primary production (GPP) and photosynthetic carbon (CO2) uptake. A common model assumption is that plants in a biome matrix operate at their photosynthetic capacity under optimal climatic conditions. A prescribed biome maximum light use efficiency parameter defines the maximum photosynthetic carbon conversion rate under prevailing climate conditions and is a large source of model uncertainty. Here, we used tower (FLUXNET) eddy covariance measurement based carbon flux data for estimating optimal LUE (LUEopt) over a North American domain. LUEopt was first estimated using tower observed daily carbon fluxes, meteorology and satellite (MODIS) observed fraction of photosynthetically active radiation (FPAR). LUEopt was then spatially interpolated over the domain using empirical models derived from independent geospatial data including global plant traits, surface soil moisture, terrain aspect, land cover type and percent tree cover. The derived LUEopt maps were then used as primary inputs to the MOD17 LUE algorithm for regional GPP estimation; these results were evaluated against tower observations and alternate MOD17 GPP estimates determined using biome-specific LUEopt constants. Estimated LUEopt shows large spatial variability within and among different land cover classes indicated from a sparse North American tower network. Leaf nitrogen content and soil moisture are two important factors explaining LUEopt spatial variability. GPP estimated from spatially explicit LUEopt inputs shows significantly improved model accuracy against independent tower observations (R2 = 0.76; mean RMSE …). Plant trait information can explain spatial heterogeneity in LUEopt, leading to improved GPP estimates from satellite-based LUE models.
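The MOD17-style LUE logic referenced above computes GPP as an optimal efficiency down-regulated by environmental scalars: GPP = LUE_opt · f(Tmin) · f(VPD) · FPAR · PAR. The sketch below assumes simple linear ramp scalars; the ramp endpoints and input values are illustrative, not the MOD17 biome parameters:

```python
# Sketch (assumption): MOD17-style light use efficiency GPP estimate with
# linear ramp scalars for minimum temperature and vapor pressure deficit.

def ramp(x, lo, hi):
    """Linear scalar: 0 at/below lo, 1 at/above hi."""
    if hi == lo:
        return 1.0
    return min(max((x - lo) / (hi - lo), 0.0), 1.0)

def gpp(lue_opt, tmin_c, vpd_pa, fpar, par):
    """GPP = LUE_opt * f(Tmin) * f(VPD) * FPAR * PAR (units illustrative)."""
    f_t = ramp(tmin_c, -8.0, 12.0)              # cold-temperature down-regulation
    f_vpd = 1.0 - ramp(vpd_pa, 650.0, 4600.0)   # dryness down-regulation
    return lue_opt * f_t * f_vpd * fpar * par

# Optimal conditions: both scalars are 1, so GPP = 1.2 * 0.8 * 10 ≈ 9.6
print(gpp(lue_opt=1.2, tmin_c=12.0, vpd_pa=650.0, fpar=0.8, par=10.0))
```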

  10. An Efficient Interactive Model for On-Demand Sensing-As-A-Services of Sensor-Cloud

    Directory of Open Access Journals (Sweden)

    Thanh Dinh

    2016-06-01

    Full Text Available This paper proposes an efficient interactive model for the sensor-cloud to enable the sensor-cloud to efficiently provide on-demand sensing services for multiple applications with different requirements at the same time. The interactive model is designed for both the cloud and sensor nodes to optimize the resource consumption of physical sensors, as well as the bandwidth consumption of sensing traffic. In the model, the sensor-cloud plays a key role in aggregating application requests to minimize the workloads required for constrained physical nodes while guaranteeing that the requirements of all applications are satisfied. Physical sensor nodes perform their sensing under the guidance of the sensor-cloud. Based on the interactions with the sensor-cloud, physical sensor nodes adapt their scheduling accordingly to minimize their energy consumption. Comprehensive experimental results show that our proposed system achieves a significant improvement in terms of the energy consumption of physical sensors, the bandwidth consumption from the sink node to the sensor-cloud, the packet delivery latency, reliability and scalability, compared to current approaches. Based on the obtained results, we discuss the economical benefits and how the proposed system enables a win-win model in the sensor-cloud.

  11. Analysis of financing efficiency of big data industry in Guizhou province based on DEA models

    Science.gov (United States)

    Li, Chenggang; Pan, Kang; Luo, Cong

    2018-03-01

    Taking 20 listed enterprises of the big data industry in Guizhou province as samples, this paper uses the DEA method to evaluate the financing efficiency of the big data industry in Guizhou province. The results show that the pure technical efficiency of big data enterprises in Guizhou province is high, with a mean value of 0.925. The mean value of scale efficiency is 0.749 and the mean value of comprehensive efficiency is 0.693, so the comprehensive financing efficiency is low. Based on these results, this paper puts forward policy recommendations to improve the financing efficiency of the big data industry in Guizhou.
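In the special case of one input and one output, the DEA (CCR) efficiency score reduces to each unit's output/input productivity normalized by the best productivity in the sample, with no LP solver needed. A sketch of that special case; the firm data below are made-up illustrative numbers, not the Guizhou sample:

```python
# Sketch (assumption): single-input, single-output DEA efficiency as
# normalized productivity, the solver-free special case of the CCR model.

def dea_single_ratio(inputs, outputs):
    """Efficiency per DMU: (y/x) divided by the best (y/x) in the sample."""
    productivity = [y / x for x, y in zip(inputs, outputs)]
    best = max(productivity)
    return [p / best for p in productivity]

# Example: financing input (capital raised) vs. output (revenue) per firm
capital = [100.0, 80.0, 120.0]
revenue = [50.0, 48.0, 45.0]
print([round(e, 3) for e in dea_single_ratio(capital, revenue)])  # [0.833, 1.0, 0.625]
```

With multiple inputs and outputs the score requires solving one linear program per DMU; the ratio above is the degenerate case.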

  12. The Kallikrein Inhibitor from Bauhinia bauhinioides (BbKI) shows antithrombotic properties in venous and arterial thrombosis models.

    Science.gov (United States)

    Brito, Marlon V; de Oliveira, Cleide; Salu, Bruno R; Andrade, Sonia A; Malloy, Paula M D; Sato, Ana C; Vicente, Cristina P; Sampaio, Misako U; Maffei, Francisco H A; Oliva, Maria Luiza V

    2014-05-01

    The Bauhinia bauhinioides Kallikrein Inhibitor (BbKI) is a Kunitz-type serine peptidase inhibitor of plant origin that has been shown to impair the viability of some tumor cells and to feature a potent inhibitory activity against human and rat plasma kallikrein (Kiapp = 2.4 nmol/L and 5.2 nmol/L, respectively). This inhibitory activity is possibly responsible for an effect on hemostasis, prolonging the activated partial thromboplastin time (aPTT). Because the association between cancer and thrombosis is well established, we evaluated the possible antithrombotic activity of this protein in venous and arterial thrombosis models. Vein thrombosis was studied in the vena cava ligature model in Wistar rats, and arterial thrombosis in the photochemically induced endothelium lesion model in the carotid artery of C57 black 6 mice. BbKI at a concentration of 2.0 mg/kg reduced the venous thrombus weight by 65% in treated rats in comparison to rats in the control group. The inhibitor prolonged the time to total artery occlusion in the carotid artery model mice, indicating that this potent plasma kallikrein inhibitor prevented thrombosis. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Improved quantum efficiency models of CZTSe:Ge nanolayer solar cells with a linear electric field.

    Science.gov (United States)

    Lee, Sanghyun; Price, Kent J; Saucedo, Edgardo; Giraldo, Sergio

    2018-02-08

    We fabricated and characterized CZTSe:Ge nanolayer (quantum efficiency for Ge doped CZTSe devices. The linear electric field model is developed with the incomplete gamma function of the quantum efficiency as compared to the empirical data at forward bias conditions. This model is characterized by a consistent set of parameters from a series of measurements and the literature. Using the analytical modelling method, the carrier collection profile in the absorber is calculated and closely fitted by the developed mathematical expressions to identify the carrier dynamics during the quantum efficiency measurement of the device. The analytical calculation is compared with the measured quantum efficiency data at various bias conditions.

  14. Efficient semiparametric estimation in generalized partially linear additive models for longitudinal/clustered data

    KAUST Repository

    Cheng, Guang; Zhou, Lan; Huang, Jianhua Z.

    2014-01-01

    We consider efficient estimation of the Euclidean parameters in generalized partially linear additive models for longitudinal/clustered data when multiple covariates need to be modeled nonparametrically, and propose an estimation procedure based

  15. Efficient Modeling and Migration in Anisotropic Media Based on Prestack Exploding Reflector Model and Effective Anisotropy

    KAUST Repository

    Wang, Hui

    2014-05-01

    This thesis addresses the efficiency improvement of seismic wave modeling and migration in anisotropic media. This improvement becomes crucial in practice as the process of imaging complex geological structures of the Earth's subsurface requires modeling and migration as building blocks. The challenge comes from two aspects. First, the underlying governing equations for seismic wave propagation in anisotropic media are far more complicated than those in isotropic media and demand higher computational costs to solve. Second, the usage of whole prestack seismic data still remains a burden considering its storage volume and the existing wave equation solvers. In this thesis, I develop two approaches to tackle the challenges. In the first part, I adopt the concept of the prestack exploding reflector model to handle the whole prestack data and bridge the data space directly to image space in a single kernel. I formulate the extrapolation operator in a two-way fashion to remove the restriction on the directions in which waves propagate. I also develop a generic method for phase velocity evaluation within anisotropic media used in this extrapolation kernel. The proposed method provides a tool for generating prestack images without wavefield cross-correlations. In the second part of this thesis, I approximate the anisotropic models using effective isotropic models. The wave phenomena in these effective models match those in the anisotropic models both kinematically and dynamically. I obtain the effective models by equating the eikonal and transport equations of the anisotropic and isotropic models, i.e., in the high-frequency asymptotic approximation sense. The wavefield extrapolation costs are thus reduced by using isotropic wave equation solvers while the anisotropic effects are maintained. I benchmark the two proposed methods using synthetic datasets. Tests on the anisotropic Marmousi model and the anisotropic BP2007 model demonstrate the applicability of my

  16. Advanced imaging techniques show progressive arthropathy following experimentally induced knee bleeding in a factor VIII-/- rat model

    DEFF Research Database (Denmark)

    Sorensen, K. R.; Roepstorff, K.; Petersen, M.

    2015-01-01

    Background: Joint pathology is most commonly assessed by radiography, but ultrasonography (US) is increasingly recognized for its accessibility, safety and ability to show soft tissue changes, the earliest indicators of haemophilic arthropathy (HA). US, however, lacks the ability to visualize...

  17. Evaluation of the energy efficiency of enzyme fermentation by mechanistic modeling

    DEFF Research Database (Denmark)

    Albaek, Mads O.; Gernaey, Krist V.; Hansen, Morten S.

    2012-01-01

    Modeling biotechnological processes is key to obtaining increased productivity and efficiency. Particularly crucial to successful modeling of such systems is the coupling of the physical transport phenomena and the biological activity in one model. We have applied a model for the expression of cellulosic enzymes by the filamentous fungus Trichoderma reesei and found excellent agreement with experimental data. The most influential factor was demonstrated to be viscosity and its influence on mass transfer; not surprisingly, the biological model was also shown to have a high influence. This modeling approach can be used by manufacturers to evaluate the enzyme fermentation process for a range of different process conditions with regard to energy efficiency.

  18. Automated home cage assessment shows behavioral changes in a transgenic mouse model of spinocerebellar ataxia type 17.

    Science.gov (United States)

    Portal, Esteban; Riess, Olaf; Nguyen, Huu Phuc

    2013-08-01

    Spinocerebellar Ataxia type 17 (SCA17) is an autosomal dominantly inherited, neurodegenerative disease characterized by ataxia, involuntary movements, and dementia. A novel SCA17 mouse model having a 71 polyglutamine repeat expansion in the TATA-binding protein (TBP) has shown age related motor deficit using a classic motor test, yet concomitant weight increase might be a confounding factor for this measurement. In this study we used an automated home cage system to test several motor readouts for this same model to confirm pathological behavior results and evaluate benefits of automated home cage in behavior phenotyping. Our results confirm motor deficits in the Tbp/Q71 mice and present previously unrecognized behavioral characteristics obtained from the automated home cage, indicating its use for high-throughput screening and testing, e.g. of therapeutic compounds. Copyright © 2013 Elsevier B.V. All rights reserved.

  19. A Mouse Model of Hyperproliferative Human Epithelium Validated by Keratin Profiling Shows an Aberrant Cytoskeletal Response to Injury

    Directory of Open Access Journals (Sweden)

    Samal Zhussupbekova

    2016-07-01

    Full Text Available A validated animal model would assist with research on the immunological consequences of the chronic expression of stress keratins KRT6, KRT16, and KRT17, as observed in human pre-malignant hyperproliferative epithelium. Here we examine the keratin gene expression profile in skin from mice expressing the E7 oncoprotein of HPV16 (K14E7), which demonstrates persistently hyperproliferative epithelium, in nontransgenic mouse skin, and in hyperproliferative actinic keratosis lesions from human skin. We demonstrate that K14E7 mouse skin overexpresses stress keratins in a similar manner to human actinic keratoses, that overexpression is a consequence of epithelial hyperproliferation induced by E7, and that overexpression further increases in response to injury. As stress keratins modify local immunity and epithelial cell function and differentiation, the K14E7 mouse model should permit study of how continued overexpression of stress keratins impacts on epithelial tumor development and on local innate and adaptive immunity.

  20. Betting on change: Tenet deal with Vanguard shows it's primed to try ACO effort, new payment model.

    Science.gov (United States)

    Kutscher, Beth

    2013-07-01

    Tenet Healthcare Corp.'s acquisition of Vanguard Health Systems is a sign the investor-owned chain is willing to take a chance on alternative payment models such as accountable care organizations. There's no certainty that ACOs will deliver the improvements on quality or cost savings, but Vanguard Vice Chairman Keith Pitts, left, says his system's Pioneer ACO in Detroit has already achieved some cost savings.

  1. Mathematical modelling as basis for efficient enterprise management

    Directory of Open Access Journals (Sweden)

    Kalmykova Svetlana

    2017-01-01

    Full Text Available The choice of the most effective HR-management style at an enterprise is based on modeling various socio-economic situations. The article describes the formalization of management processes aimed at the interaction between the allocated management subsystems. Mathematical modelling tools are used to determine the time spent on recruiting personnel for key positions in the management hierarchy.

  2. Tests of control in the Audit Risk Model : Effective? Efficient?

    NARCIS (Netherlands)

    Blokdijk, J.H. (Hans)

    2004-01-01

    Lately, the Audit Risk Model has been subject to criticism. To gauge its validity, this paper confronts the Audit Risk Model as incorporated in International Standard on Auditing No. 400, with the real life situations faced by auditors in auditing financial statements. This confrontation exposes

  3. Modeling Large Time Series for Efficient Approximate Query Processing

    DEFF Research Database (Denmark)

    Perera, Kasun S; Hahmann, Martin; Lehner, Wolfgang

    2015-01-01

    query statistics derived from experiments and when running the system. Our approach can also reduce communication load by exchanging models instead of data. To allow seamless integration of model-based querying into traditional data warehouses, we introduce a SQL compatible query terminology. Our...

  4. Industrial Sector Energy Efficiency Modeling (ISEEM) Framework Documentation

    Energy Technology Data Exchange (ETDEWEB)

    Karali, Nihan [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Xu, Tengfang [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sathaye, Jayant [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2012-12-12

    The goal of this study is to develop a new bottom-up industry-sector energy-modeling framework addressing least-cost regional and global carbon reduction strategies, and improving on the capabilities and limitations of existing models by allowing trading across regions and countries as an alternative.

  5. Efficient probabilistic model checking on general purpose graphic processors

    NARCIS (Netherlands)

    Bosnacki, D.; Edelkamp, S.; Sulewski, D.; Pasareanu, C.S.

    2009-01-01

    We present algorithms for parallel probabilistic model checking on general purpose graphic processing units (GPGPUs). For this purpose we exploit the fact that some of the basic algorithms for probabilistic model checking rely on matrix vector multiplication. Since this kind of linear algebraic

  6. An efficient visual saliency detection model based on Ripplet transform

    Indian Academy of Sciences (India)

    A Diana Andrushia

    human visual attention models is still not well investigated. ... Ripplet transform; visual saliency model; Receiver Operating Characteristics (ROC); .... proposed method has the same resolution as that of an input ... regions are obtained, which are independent of their sizes. ..... impact than those far away from the attention.

  7. Short ensembles: An Efficient Method for Discerning Climate-relevant Sensitivities in Atmospheric General Circulation Models

    Energy Technology Data Exchange (ETDEWEB)

    Wan, Hui; Rasch, Philip J.; Zhang, Kai; Qian, Yun; Yan, Huiping; Zhao, Chun

    2014-09-08

    This paper explores the feasibility of an experimentation strategy for investigating sensitivities in fast components of atmospheric general circulation models. The basic idea is to replace the traditional serial-in-time long-term climate integrations by representative ensembles of shorter simulations. The key advantage of the proposed method lies in its efficiency: since fewer days of simulation are needed, the computational cost is less, and because individual realizations are independent and can be integrated simultaneously, the new dimension of parallelism can dramatically reduce the turnaround time in benchmark tests, sensitivity studies, and model tuning exercises. The strategy is not appropriate for exploring sensitivity of all model features, but it is very effective in many situations. Two examples are presented using the Community Atmosphere Model version 5. The first example demonstrates that the method is capable of characterizing the model cloud and precipitation sensitivity to time step length. A nudging technique is also applied to an additional set of simulations to help understand the contribution of physics-dynamics interaction to the detected time step sensitivity. In the second example, multiple empirical parameters related to cloud microphysics and aerosol lifecycle are perturbed simultaneously in order to explore which parameters have the largest impact on the simulated global mean top-of-atmosphere radiation balance. Results show that in both examples, short ensembles are able to correctly reproduce the main signals of model sensitivities revealed by traditional long-term climate simulations for fast processes in the climate system. The efficiency of the ensemble method makes it particularly useful for the development of high-resolution, costly and complex climate models.
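    The short-ensemble idea can be illustrated with a toy model (an AR(1) process standing in for a fast model component; all names and parameters here are illustrative, not from the paper): many independent short runs, each of which could be integrated in parallel, target the same statistic as one long serial integration.

    ```python
    import random

    def ar1_step(x, phi, sigma, rng):
        # One step of an AR(1) process: x' = phi*x + Gaussian noise.
        return phi * x + rng.gauss(0.0, sigma)

    def long_run_mean(phi, sigma, steps, seed):
        # Traditional approach: one serial long integration.
        rng = random.Random(seed)
        x, total = 0.0, 0.0
        for _ in range(steps):
            x = ar1_step(x, phi, sigma, rng)
            total += x
        return total / steps

    def short_ensemble_mean(phi, sigma, members, steps, seed):
        # Short-ensemble approach: many independent short runs;
        # each member is embarrassingly parallel in principle.
        means = []
        for m in range(members):
            rng = random.Random(seed + m)
            x, total = 0.0, 0.0
            for _ in range(steps):
                x = ar1_step(x, phi, sigma, rng)
                total += x
            means.append(total / steps)
        return sum(means) / members

    # Both estimates target the same stationary mean (zero here).
    long_est = long_run_mean(0.8, 1.0, 50_000, seed=1)
    ens_est = short_ensemble_mean(0.8, 1.0, members=500, steps=100, seed=2)
    ```

    The total number of simulated steps is the same, but the ensemble version exposes 500 independent units of work instead of one serial chain.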

  8. An Efficient Explicit-time Description Method for Timed Model Checking

    Directory of Open Access Journals (Sweden)

    Hao Wang

    2009-12-01

    Full Text Available Timed model checking, the method to formally verify real-time systems, is attracting increasing attention from both the model checking community and the real-time community. Explicit-time description methods verify real-time systems using general model constructs found in standard un-timed model checkers. Lamport proposed an explicit-time description method using a clock-ticking process (Tick to simulate the passage of time together with a group of global variables to model time requirements. Two methods, the Sync-based Explicit-time Description Method using rendezvous synchronization steps and the Semaphore-based Explicit-time Description Method using only one global variable were proposed; they both achieve better modularity than Lamport's method in modeling the real-time systems. In contrast to timed automata based model checkers like UPPAAL, explicit-time description methods can access and store the current time instant for future calculations necessary for many real-time systems, especially those with pre-emptive scheduling. However, the Tick process in the above three methods increments the time by one unit in each tick; the state spaces therefore grow relatively fast as the time parameters increase, a problem when the system's time period is relatively long. In this paper, we propose a more efficient method which enables the Tick process to leap multiple time units in one tick. Preliminary experimental results in a high performance computing environment show that this new method significantly reduces the state space and improves both the time and memory efficiency.
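    The leaping-Tick idea can be sketched outside any model checker (a hypothetical Python toy, not the actual model-checker constructs the paper targets): the Tick process jumps directly to the next pending time event instead of incrementing time by one unit, so the step count scales with the number of events rather than the length of the time period.

    ```python
    import heapq

    def simulate_unit_tick(timers, horizon):
        # Baseline: Tick advances time by one unit per step.
        # 'timers' maps a name to its expiry time; the step count is
        # a proxy for the explored state space.
        now, fired, steps = 0, [], 0
        while now < horizon:
            now += 1
            steps += 1
            fired += [n for n, t in timers.items() if t == now]
        return fired, steps

    def simulate_leaping_tick(timers, horizon):
        # Improved method: Tick leaps to the next pending expiry,
        # so steps depend on the number of events, not the horizon.
        heap = [(t, n) for n, t in timers.items() if t <= horizon]
        heapq.heapify(heap)
        fired, steps = [], 0
        while heap:
            t, n = heapq.heappop(heap)
            steps += 1
            fired.append(n)
        return fired, steps

    timers = {"req_timeout": 30, "retry": 250, "watchdog": 900}
    f1, s1 = simulate_unit_tick(timers, horizon=1000)
    f2, s2 = simulate_leaping_tick(timers, horizon=1000)
    # Same events fire in the same order, but the leaping Tick
    # needs 3 steps instead of 1000.
    ```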

  9. Fiber-coupling efficiency of Gaussian-Schell model beams through an ocean to fiber optical communication link

    Science.gov (United States)

    Hu, Beibei; Shi, Haifeng; Zhang, Yixin

    2018-06-01

    We theoretically study the fiber-coupling efficiency of Gaussian-Schell model beams propagating through oceanic turbulence. The expression of the fiber-coupling efficiency is derived based on the spatial power spectrum of oceanic turbulence and the cross-spectral density function. Our work shows that the salinity fluctuation has a greater impact on the fiber-coupling efficiency than the temperature fluctuation does. We can select a longer λ in the "ocean window" and a higher spatial coherence of the light source to improve the fiber-coupling efficiency of the communication link. We can also achieve the maximum fiber-coupling efficiency by choosing the design parameters according to the specific oceanic turbulence conditions. Our results can help the design of ocean-to-fiber optical communication links and sensors.

  10. The Spatial Mechanism and Drive Mechanism Study of Chinese Urban Efficiency - Based on the Spatial Panel Data Model

    Directory of Open Access Journals (Sweden)

    Yuan Xiaoling

    2016-08-01

    Full Text Available In this article, the urban efficiency factors of 285 Chinese prefecture-level cities in the period from 2003 to 2012 are analyzed using a spatial econometric model. The results show that urban efficiency is positively spatially correlated across cities. We also find that industrial structure, openness and infrastructure promote the development of urban efficiency, whereas urban agglomeration scale, government control, fixed-asset investment and other factors inhibit it to a certain degree. Therefore, we conclude that, in the new urbanization construction process, cities need to achieve cross-regional coordination from the perspective of urban agglomerations and metropolitan development. The efficiency of the city, together with the scientific and rational flow of factors, should also be improved.

  11. Efficient stochastic approaches for sensitivity studies of an Eulerian large-scale air pollution model

    Science.gov (United States)

    Dimov, I.; Georgieva, R.; Todorov, V.; Ostromsky, Tz.

    2017-10-01

    Reliability of large-scale mathematical models is an important issue when such models are used to support decision makers. Sensitivity analysis of model outputs to variation or natural uncertainties of model inputs is crucial for improving the reliability of mathematical models. A comprehensive experimental study of Monte Carlo algorithms based on Sobol sequences for multidimensional numerical integration has been done. A comparison with Latin hypercube sampling and a particular quasi-Monte Carlo lattice rule based on generalized Fibonacci numbers has been presented. The algorithms have been successfully applied to compute global Sobol sensitivity measures corresponding to the influence of several input parameters (six chemical reaction rates and four different groups of pollutants) on the concentrations of important air pollutants. The concentration values have been generated by the Unified Danish Eulerian Model. The sensitivity study has been done for the areas of several European cities with different geographical locations. The numerical tests show that the stochastic algorithms under consideration are efficient for multidimensional integration, and especially for computing sensitivity indices of small value. This is a crucial element, since even small indices may need to be estimated in order to achieve a more accurate distribution of input influence and a more reliable interpretation of the mathematical model results.
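    The quasi-Monte Carlo integration underlying such sensitivity studies can be sketched with a stdlib-only stand-in (a Halton sequence here instead of the Sobol sequences used in the paper; the integrand and sample size are illustrative):

    ```python
    def halton(index, base):
        # Radical-inverse (van der Corput) value of 'index' in 'base';
        # one prime base per dimension yields a Halton point set.
        f, result = 1.0, 0.0
        while index > 0:
            f /= base
            result += f * (index % base)
            index //= base
        return result

    def quasi_mc_integral(f, n, bases=(2, 3)):
        # Quasi-Monte Carlo estimate of the integral of f over [0,1]^d.
        # Low-discrepancy points converge roughly as O((log n)^d / n),
        # versus O(1/sqrt(n)) for plain Monte Carlo.
        total = 0.0
        for i in range(1, n + 1):
            point = tuple(halton(i, b) for b in bases)
            total += f(*point)
        return total / n

    # The integral of x*y over the unit square is exactly 0.25.
    estimate = quasi_mc_integral(lambda x, y: x * y, 4096)
    ```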

  12. Real-time probabilistic covariance tracking with efficient model update.

    Science.gov (United States)

    Wu, Yi; Cheng, Jian; Wang, Jinqiao; Lu, Hanqing; Wang, Jun; Ling, Haibin; Blasch, Erik; Bai, Li

    2012-05-01

    The recently proposed covariance region descriptor has been proven robust and versatile for a modest computational cost. The covariance matrix enables efficient fusion of different types of features, where the spatial and statistical properties, as well as their correlation, are characterized. The similarity between two covariance descriptors is measured on Riemannian manifolds. Based on the same metric but with a probabilistic framework, we propose a novel tracking approach on Riemannian manifolds with a novel incremental covariance tensor learning (ICTL). To address the appearance variations, ICTL incrementally learns a low-dimensional covariance tensor representation and efficiently adapts online to appearance changes of the target with only O(1) computational complexity, resulting in a real-time performance. The covariance-based representation and the ICTL are then combined with the particle filter framework to allow better handling of background clutter, as well as the temporary occlusions. We test the proposed probabilistic ICTL tracker on numerous benchmark sequences involving different types of challenges including occlusions and variations in illumination, scale, and pose. The proposed approach demonstrates excellent real-time performance, both qualitatively and quantitatively, in comparison with several previously proposed trackers.
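    A minimal sketch of the covariance region descriptor itself (plain Python with hypothetical per-pixel feature vectors; the tracker's Riemannian metric and ICTL update are out of scope here): a region is summarized by the covariance matrix of its feature vectors, fusing different feature types in one compact matrix.

    ```python
    def covariance_descriptor(features):
        # 'features' is a list of per-pixel feature vectors, e.g.
        # (x, y, intensity, |Ix|, |Iy|). The region descriptor is the
        # sample covariance matrix of these vectors.
        n = len(features)
        d = len(features[0])
        mean = [sum(f[k] for f in features) / n for k in range(d)]
        cov = [[0.0] * d for _ in range(d)]
        for f in features:
            for i in range(d):
                for j in range(d):
                    cov[i][j] += (f[i] - mean[i]) * (f[j] - mean[j])
        for i in range(d):
            for j in range(d):
                cov[i][j] /= n - 1
        return cov

    # Tiny 2-D example: two perfectly correlated features.
    desc = covariance_descriptor([(0, 0), (1, 2), (2, 4), (3, 6)])
    ```

    The resulting matrix is symmetric positive semi-definite, which is why comparing descriptors calls for a metric on the Riemannian manifold of such matrices rather than plain Euclidean distance.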

  13. Programming strategy for efficient modeling of dynamics in a population of heterogeneous cells.

    Science.gov (United States)

    Hald, Bjørn Olav; Garkier Hendriksen, Morten; Sørensen, Preben Graae

    2013-05-15

    Heterogeneity is a ubiquitous property of biological systems. Even in a genetically identical population of a single cell type, cell-to-cell differences are observed. Although the functional behavior of a given population is generally robust, the consequences of heterogeneity are fairly unpredictable. In heterogeneous populations, synchronization of events becomes a cardinal problem, particularly for phase coherence in oscillating systems. The present article presents a novel strategy for the construction of large-scale simulation programs of heterogeneous biological entities. The strategy is designed to be tractable, to handle heterogeneity and to handle computational cost issues simultaneously, primarily by writing a generator of the 'model to be simulated'. We apply the strategy to model glycolytic oscillations among thousands of yeast cells coupled through the extracellular medium. The usefulness is illustrated through (i) benchmarking, showing an almost linear relationship between model size and run time, and (ii) analysis of the resulting simulations, showing that, contrary to the experimental situation, synchronous oscillations are surprisingly hard to achieve, underpinning the need for tools to study heterogeneity. Thus, we present an efficient strategy to model the biological heterogeneity neglected by ordinary mean-field models. This tool is well placed to facilitate the elucidation of the physiologically vital problem of synchronization. The complete python code is available as Supplementary Information. bjornhald@gmail.com or pgs@kiku.dk Supplementary data are available at Bioinformatics online.
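    The generator idea can be sketched as follows (a toy relaxation model with per-cell heterogeneous rates and mean-field coupling through the "medium"; this illustrates the strategy, not the paper's glycolysis model): a builder function emits the population model, so heterogeneity is injected at construction time rather than hard-coded into the equations.

    ```python
    import random

    def build_population_model(n_cells, coupling, seed=0):
        # Generator of the 'model to be simulated': instead of one
        # mean-field equation, emit a right-hand side with one term
        # per cell, each with its own heterogeneous parameter.
        rng = random.Random(seed)
        rates = [rng.uniform(0.9, 1.1) for _ in range(n_cells)]

        def rhs(state):
            # Each cell relaxes toward 1 at its own rate and couples
            # to the others through the population mean (the medium).
            mean = sum(state) / len(state)
            return [r * (1.0 - x) + coupling * (mean - x)
                    for r, x in zip(rates, state)]
        return rhs

    def euler(rhs, state, dt, steps):
        # Simple explicit Euler integration of the generated model.
        for _ in range(steps):
            deriv = rhs(state)
            state = [x + dt * dx for x, dx in zip(state, deriv)]
        return state

    rhs = build_population_model(n_cells=100, coupling=0.5)
    final = euler(rhs, [0.0] * 100, dt=0.01, steps=2000)
    # All cells relax toward the fixed point x = 1 despite heterogeneity.
    ```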

  14. An efficient model for auxiliary diagnosis of hepatocellular carcinoma based on gene expression programming.

    Science.gov (United States)

    Zhang, Li; Chen, Jiasheng; Gao, Chunming; Liu, Chuanmiao; Xu, Kuihua

    2018-03-16

    Hepatocellular carcinoma (HCC) is a leading cause of cancer-related death worldwide. Early diagnosis of HCC is greatly helpful for achieving long-term disease-free survival. However, HCC is usually difficult to diagnose at an early stage. The aim of this study was to create a prediction model to diagnose HCC based on gene expression programming (GEP). GEP is an evolutionary algorithm and a domain-independent problem-solving technique. Clinical data show that six serum biomarkers, including gamma-glutamyl transferase, C-reactive protein, carcinoembryonic antigen, alpha-fetoprotein, carbohydrate antigen 153, and carbohydrate antigen 199, are related to HCC characteristics. In this study, the prediction of HCC was made based on these six biomarkers (195 HCC patients and 215 non-HCC controls) by setting up optimal joint models with GEP. The GEP model correctly discriminated 353 out of 410 subjects, representing an accuracy of 86.28% (283/328) and 85.37% (70/82) for the training and test sets, respectively. Compared to the results from the support vector machine, the artificial neural network, and the multilayer perceptron, GEP showed a better outcome. The results suggested that GEP modeling is a promising and excellent tool for the diagnosis of hepatocellular carcinoma and could be widely used in HCC auxiliary diagnosis. Graphical abstract: The process to establish an efficient model for auxiliary diagnosis of hepatocellular carcinoma.

  15. A network model shows the importance of coupled processes in the microbial N cycle in the Cape Fear River Estuary

    Science.gov (United States)

    Hines, David E.; Lisa, Jessica A.; Song, Bongkeun; Tobias, Craig R.; Borrett, Stuart R.

    2012-06-01

    Estuaries serve important ecological and economic functions including habitat provision and the removal of nutrients. Eutrophication can overwhelm the nutrient removal capacity of estuaries and poses a widely recognized threat to the health and function of these ecosystems. Denitrification and anaerobic ammonium oxidation (anammox) are microbial processes responsible for the removal of fixed nitrogen and diminish the effects of eutrophication. Both of these microbial removal processes can be influenced by direct inputs of dissolved inorganic nitrogen substrates or supported by microbial interactions with other nitrogen transforming pathways such as nitrification and dissimilatory nitrate reduction to ammonium (DNRA). The coupling of nitrogen removal pathways to other transformation pathways facilitates the removal of some forms of inorganic nitrogen; however, differentiating between direct and coupled nitrogen removal is difficult. Network modeling provides a tool to examine interactions among microbial nitrogen cycling processes and to determine the within-system history of nitrogen involved in denitrification and anammox. To examine the coupling of nitrogen cycling processes, we built a nitrogen budget mass balance network model in two adjacent 1 cm3 sections of bottom water and sediment in the oligohaline portion of the Cape Fear River Estuary, NC, USA. Pathway, flow, and environ ecological network analyses were conducted to characterize the organization of nitrogen flow in the estuary and to estimate the coupling of nitrification to denitrification and of nitrification and DNRA to anammox. Centrality analysis indicated NH4+ is the most important form of nitrogen involved in removal processes. The model analysis further suggested that direct denitrification and coupled nitrification-denitrification had similar contributions to nitrogen removal while direct anammox was dominant to coupled forms of anammox. Finally, results also indicated that partial

  16. Efficient Bayesian parameter estimation with implicit sampling and surrogate modeling for a vadose zone hydrological problem

    Science.gov (United States)

    Liu, Y.; Pau, G. S. H.; Finsterle, S.

    2015-12-01

    Parameter inversion involves inferring the model parameter values based on sparse observations of some observables. To infer the posterior probability distributions of the parameters, Markov chain Monte Carlo (MCMC) methods are typically used. However, the large number of forward simulations needed and limited computational resources limit the complexity of the hydrological model we can use in these methods. In view of this, we studied the implicit sampling (IS) method, an efficient importance sampling technique that generates samples in the high-probability region of the posterior distribution and thus reduces the number of forward simulations that we need to run. For a pilot-point inversion of a heterogeneous permeability field based on a synthetic ponded infiltration experiment simulated with TOUGH2 (a subsurface modeling code), we showed that IS with linear map provides an accurate Bayesian description of the parameterized permeability field at the pilot points with just approximately 500 forward simulations. We further studied the use of surrogate models to improve the computational efficiency of parameter inversion. We implemented two reduced-order models (ROMs) for the TOUGH2 forward model. One is based on polynomial chaos expansion (PCE), of which the coefficients are obtained using the sparse Bayesian learning technique to mitigate the "curse of dimensionality" of the PCE terms. The other model is Gaussian process regression (GPR) for which different covariance, likelihood and inference models are considered. Preliminary results indicate that ROMs constructed based on the prior parameter space perform poorly. It is thus impractical to replace this hydrological model by a ROM directly in a MCMC method. However, the IS method can work with a ROM constructed for parameters in the close vicinity of the maximum a posteriori probability (MAP) estimate. We will discuss the accuracy and computational efficiency of using ROMs in the implicit sampling procedure.

  17. Functional Testing Protocols for Commercial Building Efficiency Baseline Modeling Software

    Energy Technology Data Exchange (ETDEWEB)

    Jump, David; Price, Phillip N.; Granderson, Jessica; Sohn, Michael

    2013-09-06

    This document describes procedures for testing and validating proprietary baseline energy modeling software accuracy in predicting energy use over the period of interest, such as a month or a year. The procedures are designed according to the methodology used for public domain baselining software in another LBNL report that was (like the present report) prepared for Pacific Gas and Electric Company: "Commercial Building Energy Baseline Modeling Software: Performance Metrics and Method Testing with Open Source Models and Implications for Proprietary Software Testing Protocols" (referred to here as the "Model Analysis Report"). The test procedure focuses on the quality of the software's predictions rather than on the specific algorithms used to predict energy use. In this way the software vendor is not required to divulge or share proprietary information about how their software works, while enabling stakeholders to assess its performance.

  18. Molecular Simulation towards Efficient and Representative Subsurface Reservoirs Modeling

    KAUST Repository

    Kadoura, Ahmad Salim

    2016-01-01

    This dissertation focuses on the application of Monte Carlo (MC) molecular simulation and Molecular Dynamics (MD) in modeling thermodynamics and flow of subsurface reservoir fluids. At first, MC molecular simulation is proposed as a promising method

  19. Efficient Delivery of Scalable Video Using a Streaming Class Model

    Directory of Open Access Journals (Sweden)

    Jason J. Quinlan

    2018-03-01

    Full Text Available When we couple the rise in video streaming with the growing number of portable devices (smart phones, tablets, laptops), we see an ever-increasing demand for high-definition video online while on the move. Wireless networks are inherently characterised by restricted shared bandwidth and relatively high error loss rates, thus presenting a challenge for the efficient delivery of high quality video. Additionally, mobile devices can support/demand a range of video resolutions and qualities. This demand for mobile streaming highlights the need for adaptive video streaming schemes that can adjust to available bandwidth and heterogeneity, and can provide graceful changes in video quality, all while respecting viewing satisfaction. In this context, the use of well-known scalable/layered media streaming techniques, commonly known as scalable video coding (SVC), is an attractive solution. SVC encodes a number of video quality levels within a single media stream. This has been shown to be an especially effective and efficient solution, but it fares badly in the presence of datagram losses. While multiple description coding (MDC) can reduce the effects of packet loss on scalable video delivery, the increased delivery cost is counterproductive for constrained networks. This situation is accentuated in cases where only the lower quality level is required. In this paper, we assess these issues and propose a new approach called Streaming Classes (SC) through which we can define a key set of quality levels, each of which can be delivered in a self-contained manner. This facilitates efficient delivery, yielding reduced transmission byte-cost for devices requiring lower quality, relative to MDC and Adaptive Layer Distribution (ALD) (42% and 76% respective reductions for layer 2), while also maintaining high levels of consistent quality. We also illustrate how a selective packetisation technique can further reduce the effects of packet loss on viewable quality by
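    The streaming-class selection described above might look like this in outline (hypothetical class names and bitrates; each class is assumed self-contained, unlike cumulative SVC layers, which is the property the paper exploits):

    ```python
    def select_streaming_class(bandwidth_kbps, classes):
        # 'classes' maps a class name to the bitrate needed to decode
        # it on its own. We pick the highest-quality class that fits
        # within the currently available bandwidth.
        best = None
        for name, rate in sorted(classes.items(), key=lambda kv: kv[1]):
            if rate <= bandwidth_kbps:
                best = name
        return best

    # Illustrative classes: base quality, SD and HD.
    classes = {"base": 500, "sd": 1500, "hd": 4000}
    choice = select_streaming_class(2000, classes)  # → "sd"
    ```

    Because each class is self-contained, a device that only ever needs the base class never pays the delivery cost of the higher layers, which is where the reported byte-cost savings come from.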

  20. Effective Elliptic Models for Efficient Wavefield Extrapolation in Anisotropic Media

    KAUST Repository

    Waheed, Umair bin

    2014-05-01

    The wavefield extrapolation operator for elliptically anisotropic media offers significant cost reduction compared to that for transversely isotropic media (TI), especially when the medium exhibits tilt in the symmetry axis (TTI). However, elliptical anisotropy does not provide accurate focusing for TI media. Therefore, we develop effective elliptically anisotropic models that correctly capture the kinematic behavior of the TTI wavefield. Specifically, we use an iterative elliptically anisotropic eikonal solver that provides the accurate traveltimes for a TI model. The resultant coefficients of the elliptical eikonal provide the effective models. These effective models allow us to use the cheaper wavefield extrapolation operator for elliptic media to obtain approximate wavefield solutions for TTI media. Despite the fact that the effective elliptic models are obtained by kinematic matching using high-frequency asymptotics, the resulting wavefield contains most of the critical wavefield components, including the frequency dependency and caustics, if present, with reasonable accuracy. The methodology developed here offers a much better cost versus accuracy tradeoff for wavefield computations in TTI media, considering the cost-prohibitive nature of the problem. We demonstrate the applicability of the proposed approach on the BP TTI model.

  1. Effective Elliptic Models for Efficient Wavefield Extrapolation in Anisotropic Media

    KAUST Repository

    Waheed, Umair bin; Alkhalifah, Tariq Ali

    2014-01-01

    The wavefield extrapolation operator for elliptically anisotropic media offers significant cost reduction compared to that for transversely isotropic media (TI), especially when the medium exhibits tilt in the symmetry axis (TTI). However, elliptical anisotropy does not provide accurate focusing for TI media. Therefore, we develop effective elliptically anisotropic models that correctly capture the kinematic behavior of the TTI wavefield. Specifically, we use an iterative elliptically anisotropic eikonal solver that provides the accurate traveltimes for a TI model. The resultant coefficients of the elliptical eikonal provide the effective models. These effective models allow us to use the cheaper wavefield extrapolation operator for elliptic media to obtain approximate wavefield solutions for TTI media. Despite the fact that the effective elliptic models are obtained by kinematic matching using high-frequency asymptotics, the resulting wavefield contains most of the critical wavefield components, including the frequency dependency and caustics, if present, with reasonable accuracy. The methodology developed here offers a much better cost versus accuracy tradeoff for wavefield computations in TTI media, considering the cost-prohibitive nature of the problem. We demonstrate the applicability of the proposed approach on the BP TTI model.
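    For reference, the elliptically anisotropic eikonal equation on which such an iterative solver operates can be written in 2-D (a standard textbook form with horizontal and vertical phase velocities v_x and v_z, not quoted from the paper) as:

    ```latex
    v_x^2\left(\frac{\partial \tau}{\partial x}\right)^2
    + v_z^2\left(\frac{\partial \tau}{\partial z}\right)^2 = 1 ,
    ```

    where \tau(x,z) is the traveltime. Choosing the coefficients v_x and v_z so that \tau reproduces the TI traveltimes is what yields the effective elliptic model described above.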

  2. Earth system model simulations show different feedback strengths of the terrestrial carbon cycle under glacial and interglacial conditions

    Directory of Open Access Journals (Sweden)

    M. Adloff

    2018-04-01

    Full Text Available In simulations with the MPI Earth System Model, we study the feedback between the terrestrial carbon cycle and atmospheric CO2 concentrations under ice age and interglacial conditions. We find different sensitivities of terrestrial carbon storage to rising CO2 concentrations in the two settings. This result is obtained by comparing the transient response of the terrestrial carbon cycle to a fast and strong atmospheric CO2 concentration increase (roughly 900 ppm) in Coupled Climate Carbon Cycle Model Intercomparison Project (C4MIP)-type simulations starting from climates representing the Last Glacial Maximum (LGM) and pre-industrial times (PI). In this set-up we disentangle terrestrial contributions to the feedback from the carbon-concentration effect, acting biogeochemically via enhanced photosynthetic productivity when CO2 concentrations increase, and the carbon–climate effect, which affects the carbon cycle via greenhouse warming. We find that the carbon-concentration effect is larger under LGM than PI conditions because photosynthetic productivity is more sensitive when starting from the lower, glacial CO2 concentration and CO2 fertilization saturates later. This leads to a larger productivity increase in the LGM experiment. Concerning the carbon–climate effect, it is the PI experiment in which land carbon responds more sensitively to the warming under rising CO2 because at the already initially higher temperatures, tropical plant productivity deteriorates more strongly and extratropical carbon is respired more effectively. Consequently, land carbon losses increase faster in the PI than in the LGM case. Separating the carbon–climate and carbon-concentration effects, we find that they are almost additive for our model set-up; i.e. their synergy is small in the global sum of carbon changes. Together, the two effects result in an overall strength of the terrestrial carbon cycle feedback that is almost twice as large in the LGM experiment as in the PI experiment.

  3. Earth system model simulations show different feedback strengths of the terrestrial carbon cycle under glacial and interglacial conditions

    Science.gov (United States)

    Adloff, Markus; Reick, Christian H.; Claussen, Martin

    2018-04-01

    In simulations with the MPI Earth System Model, we study the feedback between the terrestrial carbon cycle and atmospheric CO2 concentrations under ice age and interglacial conditions. We find different sensitivities of terrestrial carbon storage to rising CO2 concentrations in the two settings. This result is obtained by comparing the transient response of the terrestrial carbon cycle to a fast and strong atmospheric CO2 concentration increase (roughly 900 ppm) in Coupled Climate Carbon Cycle Model Intercomparison Project (C4MIP)-type simulations starting from climates representing the Last Glacial Maximum (LGM) and pre-industrial times (PI). In this set-up we disentangle terrestrial contributions to the feedback from the carbon-concentration effect, acting biogeochemically via enhanced photosynthetic productivity when CO2 concentrations increase, and the carbon-climate effect, which affects the carbon cycle via greenhouse warming. We find that the carbon-concentration effect is larger under LGM than PI conditions because photosynthetic productivity is more sensitive when starting from the lower, glacial CO2 concentration and CO2 fertilization saturates later. This leads to a larger productivity increase in the LGM experiment. Concerning the carbon-climate effect, it is the PI experiment in which land carbon responds more sensitively to the warming under rising CO2 because at the already initially higher temperatures, tropical plant productivity deteriorates more strongly and extratropical carbon is respired more effectively. Consequently, land carbon losses increase faster in the PI than in the LGM case. Separating the carbon-climate and carbon-concentration effects, we find that they are almost additive for our model set-up; i.e. their synergy is small in the global sum of carbon changes. Together, the two effects result in an overall strength of the terrestrial carbon cycle feedback that is almost twice as large in the LGM experiment as in the PI experiment.
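    In the conventional C4MIP feedback-metric notation (the sensitivity parameters \beta_L and \gamma_L are the community-standard symbols, not values taken from this record), the separation of effects described above amounts to:

    ```latex
    \Delta C_L \;\approx\;
    \underbrace{\beta_L\,\Delta\mathrm{CO_2}}_{\text{carbon-concentration}}
    \;+\;
    \underbrace{\gamma_L\,\Delta T}_{\text{carbon-climate}}
    \;+\; \text{synergy},
    ```

    with the finding being that the synergy term is small, i.e. the two contributions to the land carbon change \Delta C_L are nearly additive in the global sum.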

  4. Actinobacteria from Termite Mounds Show Antiviral Activity against Bovine Viral Diarrhea Virus, a Surrogate Model for Hepatitis C Virus

    Directory of Open Access Journals (Sweden)

    Marina Aiello Padilla

    2015-01-01

    Full Text Available Extracts from termite-associated bacteria were evaluated for in vitro antiviral activity against bovine viral diarrhea virus (BVDV). Two bacterial strains were identified as active, with percentages of inhibition (IP) equal to 98%. Both strains were subjected to functional analysis via the addition of virus and extract at different time points in cell culture; the results showed that they were effective as posttreatments. Moreover, we performed MTT colorimetric assays to identify the CC50, IC50, and SI values of these strains, and strain CDPA27 was considered the most promising. In parallel, the isolates were identified as Streptomyces through 16S rRNA gene sequencing analysis. Specifically, CDPA27 was identified as S. chartreusis. The CDPA27 extract was fractionated on a C18-E SPE cartridge, and the fractions were reevaluated. A 100% methanol fraction was identified to contain the compound(s) responsible for antiviral activity, which had an SI of 262.41. GC-MS analysis showed that this activity was likely associated with the compound(s) that had a peak retention time of 5 min. Taken together, the results of the present study provide new information for antiviral research using natural sources, demonstrate the antiviral potential of Streptomyces chartreusis compounds isolated from termite mounds against BVDV, and lay the foundation for further studies on the treatment of HCV infection.

  5. Modelled seasonal influenza mortality shows marked differences in risk by age, sex, ethnicity and socioeconomic position in New Zealand.

    Science.gov (United States)

    Khieu, Trang Q T; Pierse, Nevil; Telfar-Barnard, Lucy Frances; Zhang, Jane; Huang, Q Sue; Baker, Michael G

    2017-09-01

    Influenza is responsible for a large number of deaths, which can only be estimated using modelling methods. Such methods have rarely been applied to describe the major socio-demographic characteristics of this disease burden. We used quasi-Poisson regression models with weekly counts of deaths and isolates of influenza A, B and respiratory syncytial virus for the period 1994 to 2008. The estimated average mortality rate was 13.5 per 100,000 people, which was 1.8% of all deaths in New Zealand. Influenza mortality differed markedly by age, sex, ethnicity and socioeconomic position. Relatively vulnerable groups were males aged 65-79 years (rate ratio (RR) = 1.9, 95% CI: 1.9, 1.9 compared with females), Māori (RR = 3.6, 95% CI: 3.6, 3.7 compared with European/Others aged 65-79 years), Pacific (RR = 2.4, 95% CI: 2.4, 2.4 compared with European/Others aged 65-79 years) and those living in the most deprived areas (RR = 1.8, 95% CI: 1.3, 2.4) for New Zealand Deprivation (NZDep) 9&10 (the most deprived) compared with NZDep 1&2 (the least deprived). These results support targeting influenza vaccination and other interventions to the most vulnerable groups, in particular Māori and Pacific people, men aged 65-79 years and those living in the most deprived areas. Copyright © 2017 The British Infection Association. Published by Elsevier Ltd. All rights reserved.
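    The modelling approach described here (regressing weekly death counts on weekly virus isolate counts) can be sketched with a plain Poisson GLM fitted by iteratively reweighted least squares. This is a simplified stand-in for the study's quasi-Poisson models, and all data, covariates and coefficient values below are synthetic:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic weekly data: deaths ~ Poisson(exp(b0 + b1 * isolates)).
    n = 500
    isolates = rng.uniform(0, 2, n)          # scaled weekly virus isolate counts
    X = np.column_stack([np.ones(n), isolates])
    y = rng.poisson(np.exp(X @ np.array([0.5, 1.0])))   # true b0=0.5, b1=1.0

    # Start near the solution, then fit the Poisson GLM (log link) by
    # iteratively reweighted least squares (equivalent to Newton's method).
    beta = np.linalg.lstsq(X, np.log(y + 1.0), rcond=None)[0]
    for _ in range(25):
        mu = np.exp(X @ beta)                 # fitted means
        # IRLS step: beta += (X' W X)^{-1} X' (y - mu), with W = diag(mu)
        beta += np.linalg.solve(X.T @ (X * mu[:, None]), X.T @ (y - mu))

    rate_ratio = np.exp(beta[1])   # multiplicative change in deaths per unit covariate
    ```

    The exponentiated coefficient plays the same role as the rate ratios quoted in the abstract; a quasi-Poisson fit would additionally scale the standard errors by an overdispersion estimate.
    
    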

  6. Andrographis paniculata shows anti-nociceptive effects in an animal model of sensory hypersensitivity associated with migraine.

    Science.gov (United States)

    Greco, Rosaria; Siani, Francesca; Demartini, Chiara; Zanaboni, Annamaria; Nappi, Giuseppe; Davinelli, Sergio; Scapagnini, Giovanni; Tassorelli, Cristina

    2016-01-01

    Administration of nitroglycerin (NTG) to rats induces a hyperalgesic condition and neuronal activation of central structures involved in migraine pain. In order to identify therapeutic strategies for migraine pain, we evaluated the anti-nociceptive activity of Andrographis paniculata (AP), a herbaceous plant, in the hyperalgesia induced by NTG administration in the formalin test. We also analyzed mRNA expression of cytokines in specific brain areas after AP treatment. Male Sprague-Dawley rats were pre-treated with AP extract 30 minutes before NTG or vehicle injection. The data show that AP extract significantly reduced NTG-induced hyperalgesia in phase II of the test, 4 hours after NTG injection. In addition, AP extract reduced IL-6 mRNA expression in the medulla and mesencephalon and also mRNA levels of TNF-alpha in the mesencephalic region. These findings suggest that AP extract may be a potential therapeutic approach in the treatment of general pain, and possibly of migraine.

  7. Inviscid Wall-Modeled Large Eddy Simulations for Improved Efficiency

    Science.gov (United States)

    Aikens, Kurt; Craft, Kyle; Redman, Andrew

    2015-11-01

    The accuracy of an inviscid flow assumption for wall-modeled large eddy simulations (LES) is examined because of its ability to reduce simulation costs. This assumption is not generally applicable for wall-bounded flows due to the high velocity gradients found near walls. In wall-modeled LES, however, neither the viscous near-wall region nor the viscous length scales in the outer flow are resolved. Therefore, the viscous terms in the Navier-Stokes equations have little impact on the resolved flowfield. Zero-pressure-gradient flat plate boundary layer results are presented for both viscous and inviscid simulations using a wall model developed previously. The results are very similar and compare favorably to those from another wall model methodology and experimental data. Furthermore, the inviscid assumption reduces simulation costs by about 25% and 39% for supersonic and subsonic flows, respectively. Future research directions are discussed, as are preliminary efforts to extend the wall model to include the effects of unresolved wall roughness. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575. Computational resources on TACC Stampede were provided under XSEDE allocation ENG150001.

  8. Efficient Model Order Reduction for the Dynamics of Nonlinear Multilayer Sheet Structures with Trial Vector Derivatives

    Directory of Open Access Journals (Sweden)

    Wolfgang Witteveen

    2014-01-01

    Full Text Available The mechanical response of multilayer sheet structures, such as leaf springs or car bodies, is largely determined by the nonlinear contact and friction forces between the sheets involved. Conventional computational approaches based on classical reduction techniques or the direct finite element approach have an inefficient balance between computational time and accuracy. In the present contribution, the method of trial vector derivatives is applied and extended in order to obtain a priori trial vectors for the model reduction which are suitable for determining the nonlinearities in the joints of the reduced system. Findings show that the result quality in terms of displacements and contact forces is comparable to the direct finite element method but the computational effort is extremely low due to the model order reduction. Two numerical studies are presented to underline the method’s accuracy and efficiency. In conclusion, this approach is discussed with respect to the existing body of literature.

  9. Steam injection for heavy oil recovery: Modeling of wellbore heat efficiency and analysis of steam injection performance

    International Nuclear Information System (INIS)

    Gu, Hao; Cheng, Linsong; Huang, Shijun; Li, Bokai; Shen, Fei; Fang, Wenchao; Hu, Changhao

    2015-01-01

    Highlights: • A comprehensive mathematical model was established to estimate wellbore heat efficiency of steam injection wells. • A simplified approach of predicting steam pressure in wellbores was proposed. • High wellhead injection rate and wellhead steam quality can improve wellbore heat efficiency. • High wellbore heat efficiency does not necessarily mean good performance of heavy oil recovery. • Using excellent insulation materials is a good way to save water and fuels. - Abstract: The aims of this work are to present a comprehensive mathematical model for estimating wellbore heat efficiency and to analyze the performance of steam injection for heavy oil recovery. In this paper, we first introduce the steam injection process briefly. Secondly, a simplified approach for predicting steam pressure in wellbores is presented and a complete expression for steam quality is derived. More importantly, both direct and indirect methods are adopted to determine the wellbore heat efficiency. Then, the mathematical model is solved using an iterative technique. After the model is validated with measured field data, we study the effects of wellhead injection rate and wellhead steam quality on steam injection performance reflected in wellbores. Next, taking cyclic steam stimulation as an example, we analyze steam injection performance reflected in reservoirs with a numerical reservoir simulation method. Finally, the significant role of improving wellbore heat efficiency in saving water and fuels is discussed in detail. The results indicate that we can improve the wellbore heat efficiency by enhancing wellhead injection rate or steam quality. However, high wellbore heat efficiency does not necessarily mean satisfactory steam injection performance reflected in reservoirs or good performance of heavy oil recovery. Moreover, the paper shows that using excellent insulation materials is a good way to save water and fuels due to enhancement of wellbore heat efficiency.

  10. Zonulin transgenic mice show altered gut permeability and increased morbidity/mortality in the DSS colitis model.

    Science.gov (United States)

    Sturgeon, Craig; Lan, Jinggang; Fasano, Alessio

    2017-06-01

    Increased small intestinal permeability (IP) has been proposed to be an integral element, along with genetic makeup and environmental triggers, in the pathogenesis of chronic inflammatory diseases (CIDs). We identified zonulin as a master regulator of intercellular tight junctions linked to the development of several CIDs. We aim to study the role of zonulin-mediated IP in the pathogenesis of CIDs. Zonulin transgenic Hp2 mice (Ztm) were subjected to dextran sodium sulfate (DSS) treatment for 7 days, followed by 4-7 days' recovery, and compared to C57Bl/6 (wild-type (WT)) mice. IP was measured in vivo and ex vivo, and weight, histology, and survival were monitored. To mechanistically link zonulin-dependent impairment of small intestinal barrier function with clinical outcome, Ztm were treated with the zonulin inhibitor AT1001 added to drinking water in addition to DSS. We observed increased morbidity (more pronounced weight loss and colitis) and mortality (40-70% compared with 0% in WT) at 11 days post-DSS treatment in Ztm compared with WT mice. Both in vivo and ex vivo measurements showed an increased IP at baseline in Ztm compared to WT mice, which was exacerbated by DSS treatment and was associated with upregulation of zonulin gene expression (fourfold in the duodenum, sixfold in the jejunum). Treatment with AT1001 prevented the DSS-induced increase in IP both in vivo and ex vivo without changing zonulin gene expression and completely reverted morbidity and mortality in Ztm. Our data show that zonulin-dependent small intestinal barrier impairment is an early step leading to the break of tolerance with subsequent development of CIDs. © 2017 New York Academy of Sciences.

  11. System convergence in transport models: algorithms efficiency and output uncertainty

    DEFF Research Database (Denmark)

    Rich, Jeppe; Nielsen, Otto Anker

    2015-01-01

    The aim of this paper is to analyse convergence performance for the external loop and to illustrate how an improper linkage between the converging parts can lead to substantial uncertainty in the final output. Although this loop is crucial for the performance of large-scale transport models, it has not been analysed much in the literature. The paper first investigates several variants of the Method of Successive Averages (MSA) by simulation experiments on a toy network. It is found that the simulation experiments produce support for a weighted MSA approach. The weighted MSA approach is then analysed at large scale in the Danish National Transport Model (DNTM). It is revealed that system convergence requires that either demand or supply is without random noise, but not both. In that case, if MSA is applied to the model output with random noise, it will converge effectively as the random effects are gradually dampened.
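    The core of a weighted MSA loop is easy to sketch. The toy fixed-point map and the linearly growing weights below are illustrative choices, not the exact scheme analysed in the paper:

    ```python
    # Weighted Method of Successive Averages (MSA) on a toy fixed-point
    # problem x = f(x), standing in for the demand/supply equilibrium loop.
    def f(x):
        return 0.5 * x + 3.0                 # contraction; fixed point x* = 6

    def weighted_msa(x0, steps=200):
        x, weight_sum = x0, 0.0
        for k in range(1, steps + 1):
            w = float(k)                     # weight grows with k, so recent
            weight_sum += w                  # iterates count for more
            x += (w / weight_sum) * (f(x) - x)   # averaged update towards f(x)
        return x

    x_star = weighted_msa(0.0)               # approaches the equilibrium 6.0
    ```

    With uniform weights (w = 1) the step sizes shrink as 1/k, which dampens noise strongly but converges slowly; weighting recent iterates more heavily is the trade-off the paper's experiments examine.
    
    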

  12. Computationally efficient statistical differential equation modeling using homogenization

    Science.gov (United States)

    Hooten, Mevin B.; Garlick, Martha J.; Powell, James A.

    2013-01-01

    Statistical models using partial differential equations (PDEs) to describe dynamically evolving natural systems are appearing in the scientific literature with some regularity in recent years. Often such studies seek to characterize the dynamics of temporal or spatio-temporal phenomena such as invasive species, consumer-resource interactions, community evolution, and resource selection. Specifically, in the spatial setting, data are often available at varying spatial and temporal scales. Additionally, the necessary numerical integration of a PDE may be computationally infeasible over the spatial support of interest. We present an approach to impose computationally advantageous changes of support in statistical implementations of PDE models and demonstrate its utility through simulation using a form of PDE known as “ecological diffusion.” We also apply a statistical ecological diffusion model to a data set involving the spread of mountain pine beetle (Dendroctonus ponderosae) in Idaho, USA.

  13. A branch-heterogeneous model of protein evolution for efficient inference of ancestral sequences.

    Science.gov (United States)

    Groussin, M; Boussau, B; Gouy, M

    2013-07-01

    Most models of nucleotide or amino acid substitution used in phylogenetic studies assume that the evolutionary process has been homogeneous across lineages and that composition of nucleotides or amino acids has remained the same throughout the tree. These oversimplified assumptions are refuted by the observation that compositional variability characterizes extant biological sequences. Branch-heterogeneous models of protein evolution that account for compositional variability have been developed, but are not yet in common use because of the large number of parameters required, leading to high computational costs and potential overparameterization. Here, we present a new branch-nonhomogeneous and nonstationary model of protein evolution that captures more accurately the high complexity of sequence evolution. This model, henceforth called Correspondence and likelihood analysis (COaLA), makes use of a correspondence analysis to reduce the number of parameters to be optimized through maximum likelihood, focusing on most of the compositional variation observed in the data. The model was thoroughly tested on both simulated and biological data sets to show its high performance in terms of data fitting and CPU time. COaLA efficiently estimates ancestral amino acid frequencies and sequences, making it relevant for studies aiming at reconstructing and resurrecting ancestral amino acid sequences. Finally, we applied COaLA on a concatenate of universal amino acid sequences to confirm previous results obtained with a nonhomogeneous Bayesian model regarding the early pattern of adaptation to optimal growth temperature, supporting the mesophilic nature of the Last Universal Common Ancestor.

  14. Computationally efficient thermal-mechanical modelling of selective laser melting

    Science.gov (United States)

    Yang, Yabin; Ayas, Can

    2017-10-01

    Selective laser melting (SLM) is a powder-based additive manufacturing (AM) method to produce high density metal parts with complex topology. However, part distortions and accompanying residual stresses deteriorate the mechanical reliability of SLM products. Modelling of the SLM process is anticipated to be instrumental for understanding and predicting the development of the residual stress field during the build process. However, SLM process modelling requires determination of the heat transients within the part being built, which is coupled to a mechanical boundary value problem to calculate displacement and residual stress fields. Thermal models associated with SLM are typically complex and computationally demanding. In this paper, we present a simple semi-analytical thermal-mechanical model, developed for SLM, that represents the effect of laser scanning vectors with line heat sources. The temperature field within the part being built is attained by superposition of the temperature field associated with line heat sources in a semi-infinite medium and a complementary temperature field which accounts for the actual boundary conditions. An analytical solution of a line heat source in a semi-infinite medium is first described, followed by the numerical procedure used for finding the complementary temperature field. This analytical description of the line heat sources is able to capture the steep temperature gradients in the vicinity of the laser spot, which is typically tens of micrometers across. In turn, the semi-analytical thermal model allows for a relatively coarse discretisation of the complementary temperature field. The temperature history determined is used to calculate the thermal strain induced in the SLM part. Finally, a mechanical model governed by an elastic-plastic constitutive rule with isotropic hardening is used to predict the residual stresses.

  15. Efficient Lattice-Based Signcryption in Standard Model

    Directory of Open Access Journals (Sweden)

    Jianhua Yan

    2013-01-01

    Full Text Available Signcryption is a cryptographic primitive that can perform digital signature and public-key encryption simultaneously at a significantly reduced cost. This advantage makes it highly useful in many applications. However, most existing signcryption schemes are seriously challenged by the rise of quantum computation. As an interesting stepping stone for the post-quantum cryptographic community, two lattice-based signcryption schemes were proposed recently. But both of them were merely proved to be secure in the random oracle model. Therefore, the main contribution of this paper is to propose a new lattice-based signcryption scheme that can be proved to be secure in the standard model.

  16. Efficient Out of Core Sorting Algorithms for the Parallel Disks Model.

    Science.gov (United States)

    Kundeti, Vamsi; Rajasekaran, Sanguthevar

    2011-11-01

    In this paper we present efficient algorithms for sorting on the Parallel Disks Model (PDM). Numerous asymptotically optimal algorithms have been proposed in the literature. However, many of these merge-based algorithms have large underlying constants in the time bounds, because they suffer from a lack of read parallelism on the PDM. The irregular consumption of the runs during the merge affects the read parallelism and contributes to the increased sorting time. In this paper we first introduce a novel idea called dirty sequence accumulation that improves the read parallelism. Secondly, we show analytically that this idea can reduce the number of parallel I/Os required to sort the input close to the lower bound of [Formula: see text]. We experimentally verify our dirty sequence idea with the standard R-way merge and show that our idea can reduce the number of parallel I/Os to sort on the PDM significantly.
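    The baseline R-way merge that the dirty-sequence idea improves upon can be sketched in a few lines. The runs here are in-memory lists standing in for the disk-resident runs of the PDM, and the dirty-sequence accumulation itself is not reproduced:

    ```python
    import heapq

    def r_way_merge(runs):
        """Merge R sorted runs into one sorted sequence.

        On the Parallel Disks Model each run would live on disk and be
        consumed block by block; the merge order below determines which
        run's next block is needed, which is the source of the irregular
        run consumption discussed in the abstract.
        """
        return list(heapq.merge(*runs))

    runs = [[1, 4, 7], [2, 5, 8], [3, 6, 9]]
    merged = r_way_merge(runs)
    ```

    Because the merge draws unpredictably from the R runs, prefetching one block per disk in parallel can stall; the paper's contribution is a buffering scheme that restores that read parallelism.
    
    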

  17. Combining climate and energy policies: synergies or antagonism? Modeling interactions with energy efficiency instruments

    International Nuclear Information System (INIS)

    Lecuyer, Oskar; Bibas, Ruben

    2012-01-01

    In addition to the already present Climate and Energy package, the European Union (EU) plans to include a binding target to reduce energy consumption. We analyze the rationales the EU invokes to justify such an overlapping and develop a minimal common framework to study interactions arising from the combination of instruments reducing emissions, promoting renewable energy (RE) production and reducing energy demand through energy efficiency (EE) investments. We find that although all instruments tend to reduce GHG emissions and although a price on carbon tends also to give the right incentives for RE and EE, the combination of more than one instrument leads to significant antagonisms regarding major objectives of the policy package. The model allows to show in a single framework and to quantify the antagonistic effects of the joint promotion of RE and EE. We also show and quantify the effects of this joint promotion on ETS permit price, on wholesale market price and on energy production levels. (authors)

  18. Individual Diet Modeling Shows How to Balance the Diet of French Adults with or without Excessive Free Sugar Intakes.

    Science.gov (United States)

    Lluch, Anne; Maillot, Matthieu; Gazan, Rozenn; Vieux, Florent; Delaere, Fabien; Vaudaine, Sarah; Darmon, Nicole

    2017-02-20

    Dietary changes needed to achieve nutritional adequacy for 33 nutrients were determined for 1719 adults from a representative French national dietary survey. For each individual, an iso-energy nutritionally adequate diet was generated using diet modeling, staying as close as possible to the observed diet. The French food composition table was completed with free sugar (FS) content. Results were analyzed separately for individuals with FS intakes in their observed diets ≤10% or >10% of their energy intake (named below FS-ACCEPTABLE and FS-EXCESS, respectively). The FS-EXCESS group represented 41% of the total population (average energy intake of 14.2% from FS). Compared with FS-ACCEPTABLE individuals, FS-EXCESS individuals had diets of lower nutritional quality and consumed more energy (2192 vs. 2123 kcal/day), particularly during snacking occasions (258 vs. 131 kcal/day) (all p-values significant). The main changes in the optimized diets were significant increases in fresh fruits, starchy foods, water, hot beverages and plain yogurts, and significant decreases in mixed dishes/sandwiches, meat/eggs/fish and cheese. For FS-EXCESS individuals only, the optimization process significantly increased vegetables and significantly decreased sugar-sweetened beverages, sweet products and fruit juices. The diets of French adults with excessive intakes of FS are of lower nutritional quality, but can be optimized via specific dietary changes.
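    Individual diet modeling of this kind is typically posed as a linear program: minimise the departure from the observed diet subject to nutrient-adequacy and iso-energy constraints. A minimal sketch with three made-up foods and invented nutrient values (this is not the study's food list or its constraint set):

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Three illustrative foods (columns): fresh fruit, plain yogurt,
    # sugar-sweetened beverage. Nutrient contents per 100 g are made up.
    energy = np.array([50.0, 60.0, 40.0])   # kcal
    fibre  = np.array([ 2.0,  0.2,  0.0])   # g
    fsugar = np.array([ 0.0,  0.0, 10.5])   # free sugar, g
    observed = np.array([1.0, 1.0, 3.0])    # observed intake, 100 g units

    n = len(observed)
    # Decision variables: [x, d_plus, d_minus]; minimising the total
    # deviation sum(d_plus + d_minus) linearises sum |x - observed|.
    c = np.r_[np.zeros(n), np.ones(2 * n)]

    # x - d_plus + d_minus = observed, plus the iso-energy constraint.
    A_eq = np.vstack([
        np.hstack([np.eye(n), -np.eye(n), np.eye(n)]),
        np.r_[energy, np.zeros(2 * n)],
    ])
    b_eq = np.r_[observed, energy @ observed]

    # Nutrient adequacy: fibre >= 6 g; moderation: free sugar <= 20 g.
    A_ub = np.vstack([np.r_[-fibre, np.zeros(2 * n)],
                      np.r_[fsugar, np.zeros(2 * n)]])
    b_ub = np.array([-6.0, 20.0])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  method="highs")
    optimised = res.x[:n]   # the modeled diet closest to the observed one
    ```

    The full model works the same way, only with 33 nutrient constraints and the complete food composition table.
    
    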

  19. Landscape evolution models using the stream power incision model show unrealistic behavior when m ∕ n equals 0.5

    Directory of Open Access Journals (Sweden)

    J. S. Kwang

    2017-12-01

    Full Text Available Landscape evolution models often utilize the stream power incision model to simulate river incision: E = KAmSn, where E is the vertical incision rate, K is the erodibility constant, A is the upstream drainage area, S is the channel gradient, and m and n are exponents. This simple but useful law has been employed with an imposed rock uplift rate to gain insight into steady-state landscapes. The most common choice of exponents satisfies m ∕ n = 0.5. Yet all models have limitations. Here, we show that when hillslope diffusion (which operates only on small scales) is neglected, the choice m ∕ n = 0.5 yields a curiously unrealistic result: the predicted landscape is invariant to horizontal stretching. That is, the steady-state landscape for a 10 km2 horizontal domain can be stretched so that it is identical to the corresponding landscape for a 1000 km2 domain.
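    The stretching invariance follows directly from the incision law: under a horizontal rescaling x → λx with elevations held fixed, drainage area scales as A → λ²A and slope as S → S/λ, so E = KAᵐSⁿ picks up a factor λ^(2m−n), which equals 1 exactly when m/n = 0.5. A quick numerical check with illustrative parameter values:

    ```python
    # Stream power incision law E = K * A**m * S**n with m/n = 0.5.
    K, m, n = 1e-5, 0.5, 1.0

    def incision(A, S):
        return K * A**m * S**n

    A, S = 1.0e6, 0.02          # drainage area (m^2) and slope at some point
    lam = 10.0                  # stretch the landscape horizontally by 10x

    E_original = incision(A, S)
    # Under x -> lam*x with elevations fixed: A -> lam**2 * A, S -> S/lam.
    E_stretched = incision(lam**2 * A, S / lam)
    # The lam**(2*m - n) factor is 1 when m/n = 0.5: incision is unchanged,
    # so the steady-state landscape cannot "see" the horizontal scale.
    ```

    For any other exponent ratio the factor λ^(2m−n) differs from 1 and the degeneracy disappears, which is the paper's point about this particular choice.
    
    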

  20. An empirical investigation of the efficiency effects of integrated care models in Switzerland

    Directory of Open Access Journals (Sweden)

    Oliver Reich

    2012-01-01

    Full Text Available Introduction: This study investigates the efficiency gains of integrated care models in Switzerland, since these models are regarded as cost containment options in national social health insurance. These plans generate much lower average health care expenditure than the basic insurance plan. The question is, however, to what extent these total savings are due to the effects of selection and efficiency. Methods: The empirical analysis is based on data from 399,274 Swiss residents who were continuously enrolled in compulsory health insurance with the Helsana Group, the largest health insurer in Switzerland, covering the years 2006 to 2009. In order to evaluate the efficiency of the different integrated care models, we apply an econometric approach with a mixed-effects model. Results: Our estimations indicate that the efficiency effects of integrated care models on health care expenditure are significant. However, the different insurance plans vary, revealing the following efficiency gains per model: contracted capitated model 21.2%, contracted non-capitated model 15.5% and telemedicine model 3.7%. The remaining 8.5%, 5.6% and 22.5% respectively of the variation in total health care expenditure can be attributed to the effects of selection. Conclusions: Integrated care models have the potential to improve care for patients with chronic diseases and concurrently have a positive impact on health care expenditure. We suggest policy makers improve the incentives for patients with chronic diseases within the existing regulations, providing further potential for cost-efficiency of medical care.

  2. Transport Modeling Analysis to Test the Efficiency of Fish Markets in Oman

    Directory of Open Access Journals (Sweden)

    Khamis S. Al-Abri

    2009-01-01

    Full Text Available Oman’s fish exports have shown an increasing trend while supplies to the domestic market have declined, despite increased domestic demand caused by population and income growth. This study hypothesized that declining fish supplies to domestic markets were due to inefficiency of the transport function of the fish marketing system in Oman. The hypothesis was tested by comparing the observed prices of several fish species at several markets with optimal prices. The optimal prices were estimated by the dual of a fish transport cost-minimizing linear programming model. Primary data on market prices, transportation costs and quantities transported were gathered through a survey of a sample of fish transporters. The quantity demanded at market sites was estimated using secondary data. The analysis indicated that the differences between the observed prices and the estimated optimal prices were not statistically significant, showing that the transport function of fish markets in Oman is efficient. This implies that the increasing trend of fish exports vis-à-vis the decreasing trend of supplies to domestic markets is rational and will continue. This may not be equitable, but it is efficient; it may have long-term implications for national food security and an adverse impact on the nutritional and health status of the rural poor. Policy makers may have to recognize the trade-off between the efficiency and equity implications of the fish markets in Oman and make policy decisions accordingly in order to ensure national food security.
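The efficiency test described above amounts to a no-arbitrage condition: the transport function is efficient when no price gap between two markets exceeds the unit transport cost between them, so no profitable re-shipment remains. A hedged sketch of that check (market names, prices, and costs are illustrative, not data from the study):

```python
# Spatial market efficiency check: markets are efficient when, for every pair
# of connected markets, |price_a - price_b| <= unit transport cost a<->b.
from itertools import combinations

def is_spatially_efficient(prices, transport_cost):
    """prices: {market: price}; transport_cost: {(a, b): cost per unit}."""
    for a, b in combinations(prices, 2):
        cost = transport_cost.get((a, b)) or transport_cost.get((b, a))
        if cost is None:
            continue  # no transport route between these two markets
        if abs(prices[a] - prices[b]) > cost:
            return False  # arbitrage remains: shipping would more than pay for itself
    return True

# Hypothetical fish prices (per kg) and per-kg transport costs between markets
prices = {"Muscat": 2.0, "Salalah": 1.4, "Sohar": 1.8}
cost = {("Muscat", "Salalah"): 0.7, ("Muscat", "Sohar"): 0.3,
        ("Salalah", "Sohar"): 0.6}
print(is_spatially_efficient(prices, cost))  # True: no gap exceeds its cost
```

The study's actual test compares observed prices with dual (shadow) prices from a cost-minimizing linear program; the pairwise check above is the simpler equilibrium condition that those dual prices encode.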

  3. Particle capture efficiency in a multi-wire model for high gradient magnetic separation

    KAUST Repository

    Eisenträger, Almut

    2014-07-21

    High gradient magnetic separation (HGMS) is an efficient way to remove magnetic and paramagnetic particles, such as heavy metals, from waste water. As the suspension flows through a magnetized filter mesh, high magnetic gradients around the wires attract and capture the particles, removing them from the fluid. We model such a system by considering the motion of a paramagnetic tracer particle through a periodic array of magnetized cylinders. We show that there is a critical Mason number (ratio of viscous to magnetic forces) below which the particle is captured irrespective of its initial position in the array. Above this threshold, particle capture is only partially successful and depends on the particle's entry position. We determine the relationship between the critical Mason number and the system geometry using numerical and asymptotic calculations. If a capture efficiency below 100% is sufficient, our results demonstrate how operating the HGMS system above the critical Mason number but with multiple separation cycles may increase efficiency. © 2014 AIP Publishing LLC.

  4. Proteasomes remain intact, but show early focal alteration in their composition in a mouse model of amyotrophic lateral sclerosis.

    Science.gov (United States)

    Kabashi, Edor; Agar, Jeffrey N; Hong, Yu; Taylor, David M; Minotti, Sandra; Figlewicz, Denise A; Durham, Heather D

    2008-06-01

    In amyotrophic lateral sclerosis caused by mutations in Cu/Zn-superoxide dismutase (SOD1), altered solubility and aggregation of the mutant protein implicates failure of pathways for detecting and catabolizing misfolded proteins. Our previous studies demonstrated early reduction of proteasome-mediated proteolytic activity in lumbar spinal cord of SOD1(G93A) transgenic mice, tissue particularly vulnerable to disease. The purpose of this study was to identify any underlying abnormalities in proteasomal structure. In lumbar spinal cord of pre-symptomatic mice [postnatal day 45 (P45) and P75], normal levels of structural 20S alpha subunits were incorporated into 20S/26S proteasomes; however, proteasomal complexes separated by native gel electrophoresis showed decreased immunoreactivity with antibodies to beta3, a structural subunit of the 20S proteasome core, and beta5, the subunit with chymotrypsin-like activity. This occurred prior to increase in beta5i immunoproteasomal subunit. mRNA levels were maintained and no association of mutant SOD1 with proteasomes was identified, implicating post-transcriptional mechanisms. mRNAs also were maintained in laser captured motor neurons at a later stage of disease (P100) in which multiple 20S proteins are reduced relative to the surrounding neuropil. Increase in detergent-insoluble, ubiquitinated proteins at P75 provided further evidence of stress on mechanisms of protein quality control in multiple cell types prior to significant motor neuron death.

  5. An orally available, small-molecule polymerase inhibitor shows efficacy against a lethal morbillivirus infection in a large animal model.

    Science.gov (United States)

    Krumm, Stefanie A; Yan, Dan; Hovingh, Elise S; Evers, Taylor J; Enkirch, Theresa; Reddy, G Prabhakar; Sun, Aiming; Saindane, Manohar T; Arrendale, Richard F; Painter, George; Liotta, Dennis C; Natchus, Michael G; von Messling, Veronika; Plemper, Richard K

    2014-04-16

    Measles virus is a highly infectious morbillivirus responsible for major morbidity and mortality in unvaccinated humans. The related, zoonotic canine distemper virus (CDV) induces morbillivirus disease in ferrets with 100% lethality. We report an orally available, shelf-stable pan-morbillivirus inhibitor that targets the viral RNA polymerase. Prophylactic oral treatment of ferrets infected intranasally with a lethal CDV dose reduced viremia and prolonged survival. Ferrets infected with the same dose of virus that received post-infection treatment at the onset of viremia showed low-grade viral loads, remained asymptomatic, and recovered from infection, whereas control animals succumbed to the disease. Animals that recovered also mounted a robust immune response and were protected against rechallenge with a lethal CDV dose. Drug-resistant viral recombinants were generated and found to be attenuated and transmission-impaired compared to the genetic parent virus. These findings may pioneer a path toward an effective morbillivirus therapy that could aid measles eradication by synergizing with vaccination to close gaps in herd immunity due to vaccine refusal.

  6. Efficiency of Motivation Development Models for Hygienic Skills

    Directory of Open Access Journals (Sweden)

    Alexander V. Tscymbalystov

    2017-09-01

    Full Text Available The combined influence of a family and a state plays an important role in the development of an individual. This study evaluates the effectiveness of models for developing oral hygiene skills among children living in families (n = 218) and children under state care (n = 229). The groups were created among the children who took part in the study: preschoolers of 5-7 years, schoolchildren of 8-11 years and adolescents of 12-15 years. During the initial examination, the hygienic status of the oral cavity before and after tooth brushing was evaluated. After that, subgroups were formed in each age group according to three models of hygienic skills training: 1) a computer presentation lesson; 2) one of the students acting as a demonstrator of the skill; 3) individual training by a hygienist. During the next 48 hours the children did not take hygienic measures. The children were then invited to a control session to demonstrate the acquired skills of oral care and to evaluate the effectiveness of each model for developing individual oral hygiene skills. During the control examination, the hygienic status was determined before and after tooth cleaning, which made it possible to determine the regimes of hygienic measures for children with different social status and the effectiveness of the hygiene training models.

  7. Islamic vs. conventional banks : Business models, efficiency and stability

    NARCIS (Netherlands)

    Beck, T.H.L.; Demirgüc-Kunt, A.; Merrouche, O.

    2013-01-01

    How different are Islamic banks from conventional banks? Does the recent crisis justify a closer look at the Sharia-compliant business model for banking? When comparing conventional and Islamic banks, controlling for time-variant country-fixed effects, we find few significant differences in business

  8. Operator-based linearization for efficient modeling of geothermal processes

    NARCIS (Netherlands)

    Khait, M.; Voskov, D.V.

    2018-01-01

    Numerical simulation is one of the most important tools required for financial and operational management of geothermal reservoirs. The modern geothermal industry is challenged to run large ensembles of numerical models for uncertainty analysis, causing simulation performance to become a critical

  9. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2009-01-01

    In this paper two kernel-based nonparametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a viable alternative to the method of De Gooijer and Zerom (2003). By

  10. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2010-01-01

    In this paper two kernel-based nonparametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a viable alternative to the method of De Gooijer and Zerom (2003). By

  11. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2011-01-01

    In this paper, two non-parametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a more viable alternative to existing kernel-based approaches. The second estimator

  12. A model for store handling : potential for efficiency improvement

    NARCIS (Netherlands)

    Zelst, van S.M.; Donselaar, van K.H.; Woensel, van T.; Broekmeulen, R.A.C.M.; Fransoo, J.C.

    2005-01-01

    In retail stores, handling of products typically forms the largest share of the operational costs. The handling activities are mainly the stacking of the products on the shelves. While the impact of these costs on the profitability of a store is substantial, there are no models available of the

  13. Efficient Beam-Type Structural Modeling of Rotor Blades

    DEFF Research Database (Denmark)

    Couturier, Philippe; Krenk, Steen

    2015-01-01

    The present paper presents two recently developed numerical formulations which enable accurate representation of the static and dynamic behaviour of wind turbine rotor blades using little modeling and computational effort. The first development consists of an intuitive method to extract fully...... by application to a composite section with bend-twist coupling and a real wind turbine blade....

  14. Computationally efficient thermal-mechanical modelling of selective laser melting

    NARCIS (Netherlands)

    Yang, Y.; Ayas, C.; Brabazon, Dermot; Naher, Sumsun; Ul Ahad, Inam

    2017-01-01

    Selective laser melting (SLM) is a powder-based additive manufacturing (AM) method to produce high-density metal parts with complex topology. However, part distortions and accompanying residual stresses deteriorate the mechanical reliability of SLM products. Modelling of the SLM process is

  15. Efficient Proof Engines for Bounded Model Checking of Hybrid Systems

    DEFF Research Database (Denmark)

    Fränzle, Martin; Herde, Christian

    2005-01-01

    In this paper we present HySat, a new bounded model checker for linear hybrid systems, incorporating a tight integration of a DPLL-based pseudo-Boolean SAT solver and a linear programming routine as core engine. In contrast to related tools like MathSAT, ICS, or CVC, our tool exploits all...

  16. Practice What You Preach: Microfinance Business Models and Operational Efficiency

    NARCIS (Netherlands)

    Bos, J.W.B.; Millone, M.M.

    The microfinance sector has room for pure for-profit microfinance institutions (MFIs), non-profit organizations, and “social” for-profit firms that aim to pursue a double bottom line. Depending on their business model, these institutions target different types of borrowers, change the size of their

  17. Practice what you preach: Microfinance business models and operational efficiency

    NARCIS (Netherlands)

    Bos, J.W.B.; Millone, M.M.

    2013-01-01

    The microfinance sector is an example of a sector in which firms with different business models coexist. Next to pure for-profit microfinance institutions (MFIs), the sector has room for non-profit organizations, and includes 'social' for-profit firms that aim to maximize a double bottom line and

  18. Modelling the Italian household sector at the municipal scale: Micro-CHP, renewables and energy efficiency

    International Nuclear Information System (INIS)

    Comodi, Gabriele; Cioccolanti, Luca; Renzi, Massimiliano

    2014-01-01

    This study investigates the potential of energy efficiency, renewables, and micro-cogeneration to reduce household consumption in a medium Italian town and analyses the scope for municipal local policies. The study also investigates the effects of tourist flows on the town's energy consumption by modelling energy scenarios for permanent and summer homes. Two long-term energy scenarios (to 2030) were modelled using the MarkAL-TIMES generator model: BAU (business as usual), which is the reference scenario, and EHS (exemplary household sector), which involves penetration targets for renewables and micro-cogeneration. The analysis demonstrated the critical role of end-use energy efficiency in curbing residential consumption. Cogeneration and renewables (PV (photovoltaic) and solar thermal panels) were proven to be valuable solutions to reduce the energetic and environmental burden of the household sector (−20% in 2030). Because most household energy demand is ascribable to space-heating or hot water production, this study finds that micro-CHP technologies with lower power-to-heat ratios (mainly Stirling engines and microturbines) show a higher diffusion, as do solar thermal devices. The spread of micro-cogeneration implies a global reduction of primary energy but involves the internalisation of the primary energy, and consequently CO2 emissions, previously consumed in a centralised power plant within the municipality boundaries. - Highlights: • Energy consumption in permanent homes can be reduced by 20% in 2030. • High-efficiency appliances have different effects according to their market penetration. • Use of electrical heat pumps shifts consumption from natural gas to electricity. • Micro-CHP entails a global reduction of energy consumption but greater local emissions. • The main CHP technologies entering the residential market are Stirling and μ-turbines

  19. Amniotic fluid stem cells with low γ-interferon response showed behavioral improvement in Parkinsonism rat model.

    Directory of Open Access Journals (Sweden)

    Yu-Jen Chang

    Full Text Available Amniotic fluid stem cells (AFSCs) are multipotent stem cells that may be used in transplantation medicine. In this study, AFSCs established from amniocentesis were characterized on the basis of surface marker expression and differentiation potential. To further investigate the properties of AFSCs for translational applications, we examined the cell surface expression of human leukocyte antigens (HLA) of these cells and estimated the therapeutic effect of AFSCs in parkinsonian rats. The expression profiles of HLA-II and transcription factors were compared between AFSCs and bone marrow-derived mesenchymal stem cells (BMMSCs) following treatment with γ-IFN. We found that stimulation of AFSCs with γ-IFN prompted only a slight increase in the expression of HLA-Ia and HLA-E, and rare HLA-II expression could also be observed in most AFSC samples. Consequently, the expression of CIITA and RFX5 was weakly induced by γ-IFN stimulation of AFSCs compared to that of BMMSCs. In the transplantation test, Sprague Dawley rats with 6-hydroxydopamine lesioning of the substantia nigra were used as a parkinsonian animal model. Following injection of AFSCs with a low γ-IFN response, apomorphine-induced rotation was reduced by 75% in AFSC-engrafted parkinsonian rats but was increased by 53% in the control group at 12 weeks post-transplantation. The implanted AFSCs were viable, were able to migrate into the brain's circuitry, and expressed specific proteins of dopamine neurons, such as tyrosine hydroxylase and dopamine transporter. In conclusion, the relative insensitivity of AFSCs to γ-IFN implies that AFSCs might be immune-tolerant under γ-IFN inflammatory conditions. Furthermore, the effective improvement in apomorphine-induced rotation after AFSC transplantation paves the way for clinical application in parkinsonian therapy.

  20. Alternative Approaches to Technical Efficiency Estimation in the Stochastic Frontier Model

    OpenAIRE

    Acquah, H. de-Graft; Onumah, E. E.

    2014-01-01

    Estimating the stochastic frontier model and calculating technical efficiency of decision making units are of great importance in applied production economic works. This paper estimates technical efficiency from the stochastic frontier model using Jondrow, and Battese and Coelli approaches. In order to compare alternative methods, simulated data with sample sizes of 60 and 200 are generated from stochastic frontier model commonly applied to agricultural firms. Simulated data is employed to co...

  1. Development of an empirical model of turbine efficiency using the Taylor expansion and regression analysis

    International Nuclear Information System (INIS)

    Fang, Xiande; Xu, Yu

    2011-01-01

    The empirical model of turbine efficiency is necessary for the control- and/or diagnosis-oriented simulation and useful for the simulation and analysis of dynamic performances of the turbine equipment and systems, such as air cycle refrigeration systems, power plants, turbine engines, and turbochargers. Existing empirical models of turbine efficiency are insufficient because there is no suitable form available for air cycle refrigeration turbines. This work performs a critical review of empirical models (called mean value models in some literature) of turbine efficiency and develops an empirical model in the desired form for air cycle refrigeration, the dominant cooling approach in aircraft environmental control systems. The Taylor series and regression analysis are used to build the model, with the Taylor series being used to expand functions with the polytropic exponent and the regression analysis to finalize the model. The measured data of a turbocharger turbine and two air cycle refrigeration turbines are used for the regression analysis. The proposed model is compact and able to present the turbine efficiency map. Its predictions agree with the measured data very well, with the corrected coefficient of determination Rc² ≥ 0.96 and the mean absolute percentage deviation = 1.19% for the three turbines. -- Highlights: → Performed a critical review of empirical models of turbine efficiency. → Developed an empirical model in the desired form for air cycle refrigeration, using the Taylor expansion and regression analysis. → Verified the method for developing the empirical model. → Verified the model.
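The fit statistic quoted in the abstract, the mean absolute percentage deviation, is straightforward to reproduce. A minimal sketch (the efficiency values below are illustrative, not the paper's measurements):

```python
def mape(measured, predicted):
    """Mean absolute percentage deviation between measured and predicted values."""
    return 100.0 * sum(abs((m - p) / m)
                       for m, p in zip(measured, predicted)) / len(measured)

# Hypothetical turbine efficiencies: measured vs. model-predicted
measured  = [0.80, 0.75, 0.70]
predicted = [0.81, 0.74, 0.70]
print(mape(measured, predicted))  # about 0.86 (percent)
```

A MAPE of 1.19% across three turbines, as reported, means the fitted efficiency map deviates from measurement by roughly one part in a hundred on average.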

  2. Management Model for efficient quality control in new buildings

    Directory of Open Access Journals (Sweden)

    C. E. Rodríguez-Jiménez

    2017-09-01

    Full Text Available In Spain, the quality control of each building process is usually set up at different levels of demand. This work aims to obtain a reference model against which the quality control of the building process of a specific product (a building) can be compared, so that its warranty level can be evaluated. To this end, specialized sources and 153 real quality-control cases were carefully reviewed using a multi-judgment method. The input parameters were given an impartial valuation through the Delphi method (a query of 17 experts), and the resulting matrix was processed with the Fuzzy-QFD tool, which condenses the numerical references through a weighted distribution of the selected functions and their corresponding conditioning factors. The model thus obtained (M153) provides a useful quality-control reference for meeting quality expectations.

  3. Efficient Transdermal Delivery of Benfotiamine in an Animal Model

    OpenAIRE

    Varadi, Gyula; Zhu, Zhen; G. Carter, Stephen

    2015-01-01

    We designed a transdermal system to serve as a delivery platform for benfotiamine, utilizing the attributes of passive penetration-enhancing molecules to penetrate the outer layers of skin, combined with the incorporation of various peripherally acting vasodilators to enhance drug uptake. Benfotiamine, incorporated into this transdermal formulation, was applied to skin in an animal model in order to determine the ability to deliver this thiamine pro-drug effectively to the sub-...

  4. Short ensembles: an efficient method for discerning climate-relevant sensitivities in atmospheric general circulation models

    Directory of Open Access Journals (Sweden)

    H. Wan

    2014-09-01

    Full Text Available This paper explores the feasibility of an experimentation strategy for investigating sensitivities in fast components of atmospheric general circulation models. The basic idea is to replace the traditional serial-in-time long-term climate integrations by representative ensembles of shorter simulations. The key advantage of the proposed method lies in its efficiency: since fewer days of simulation are needed, the computational cost is less, and because individual realizations are independent and can be integrated simultaneously, the new dimension of parallelism can dramatically reduce the turnaround time in benchmark tests, sensitivity studies, and model tuning exercises. The strategy is not appropriate for exploring sensitivity of all model features, but it is very effective in many situations. Two examples are presented using the Community Atmosphere Model, version 5. In the first example, the method is used to characterize sensitivities of the simulated clouds to time-step length. Results show that 3-day ensembles of 20 to 50 members are sufficient to reproduce the main signals revealed by traditional 5-year simulations. A nudging technique is applied to an additional set of simulations to help understand the contribution of physics–dynamics interaction to the detected time-step sensitivity. In the second example, multiple empirical parameters related to cloud microphysics and aerosol life cycle are perturbed simultaneously in order to find out which parameters have the largest impact on the simulated global mean top-of-atmosphere radiation balance. It turns out that 12-member ensembles of 10-day simulations are able to reveal the same sensitivities as seen in 4-year simulations performed in a previous study. In both cases, the ensemble method reduces the total computational time by a factor of about 15, and the turnaround time by a factor of several hundred. The efficiency of the method makes it particularly useful for the development of

  5. Resource competition model predicts zonation and increasing nutrient use efficiency along a wetland salinity gradient

    Science.gov (United States)

    Schoolmaster, Donald; Stagg, Camille L.

    2018-01-01

    A trade-off between competitive ability and stress tolerance has been hypothesized and empirically supported to explain the zonation of species across stress gradients for a number of systems. Since stress often reduces plant productivity, one might expect a pattern of decreasing productivity across the zones of the stress gradient. However, this pattern is often not observed in coastal wetlands that show patterns of zonation along a salinity gradient. To address the potentially complex relationship between stress, zonation, and productivity in coastal wetlands, we developed a model of plant biomass as a function of resource competition and salinity stress. Analysis of the model confirms the conventional wisdom that a trade-off between competitive ability and stress tolerance is a necessary condition for zonation. It also suggests that a negative relationship between salinity and production can be overcome if (1) the supply of the limiting resource increases with greater salinity stress or (2) nutrient use efficiency increases with increasing salinity. We fit the equilibrium solution of the dynamic model to data from Louisiana coastal wetlands to test its ability to explain patterns of production across the landscape gradient and derive predictions that could be tested with independent data. We found support for a number of the model predictions, including patterns of decreasing competitive ability and increasing nutrient use efficiency across a gradient from freshwater to saline wetlands. In addition to providing a quantitative framework to support the mechanistic hypotheses of zonation, these results suggest that this simple model is a useful platform to further build upon, simulate and test mechanistic hypotheses of more complex patterns and phenomena in coastal wetlands.

  6. Assessing Green Development Efficiency of Municipalities and Provinces in China Integrating Models of Super-Efficiency DEA and Malmquist Index

    Directory of Open Access Journals (Sweden)

    Qing Yang

    2015-04-01

    Full Text Available In order to realize economic and social green development, to pave a pathway towards China’s green regional development and to develop effective scientific policy to assist in building green cities and countries, it is necessary to put forward a relatively accurate, scientific and concise green assessment method. This research uses the CCR (A. Charnes, W. W. Cooper & E. Rhodes) Data Envelopment Analysis (DEA) model to obtain the green development frontier surface based on 31 regions’ annual cross-section data from 2008–2012. Furthermore, in order to classify the regions whose assessment values equal 1 in the CCR model, we chose the Super-Efficiency DEA model for further sorting. Meanwhile, according to the five-year panel data, the green development efficiency changes of the 31 regions can be shown by the Malmquist index. Finally, the study assesses the reasons for regional differences; analyzing and discussing the results may point to a superior green development pathway for China.
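In the general case each region's CCR score comes from solving a linear program, but the single-input, single-output special case collapses to a ratio comparison, which conveys the idea behind the frontier. A sketch with made-up regional data (not values from the study):

```python
# CCR efficiency, single-input single-output special case: each unit's score is
# its output/input ratio divided by the best ratio, so the frontier unit scores
# 1.0. (The general multi-input, multi-output case requires one LP per unit.)

def ccr_efficiency_1d(inputs, outputs):
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Illustrative regions: resource input vs. a "green output" index
inputs  = [10.0, 20.0, 15.0]
outputs = [ 8.0, 10.0, 15.0]
scores = ccr_efficiency_1d(inputs, outputs)
print(scores)  # the frontier region scores 1.0, the others are below it
```

Several units typically tie at a score of 1.0 on the frontier, which is exactly why the study turns to the Super-Efficiency DEA model to rank them further.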

  7. Efficient Use of Video for 3d Modelling of Cultural Heritage Objects

    Science.gov (United States)

    Alsadik, B.; Gerke, M.; Vosselman, G.

    2015-03-01

    Currently, there is a rapid development in the techniques of the automated image based modelling (IBM), especially in advanced structure-from-motion (SFM) and dense image matching methods, and camera technology. One possibility is to use video imaging to create 3D reality based models of cultural heritage architectures and monuments. Practically, video imaging is much easier to apply when compared to still image shooting in IBM techniques because the latter needs a thorough planning and proficiency. However, one is faced with mainly three problems when video image sequences are used for highly detailed modelling and dimensional survey of cultural heritage objects. These problems are: the low resolution of video images, the need to process a large number of short baseline video images and blur effects due to camera shake on a significant number of images. In this research, the feasibility of using video images for efficient 3D modelling is investigated. A method is developed to find the minimal significant number of video images in terms of object coverage and blur effect. This reduction in video images is convenient to decrease the processing time and to create a reliable textured 3D model compared with models produced by still imaging. Two experiments for modelling a building and a monument are tested using a video image resolution of 1920×1080 pixels. Internal and external validations of the produced models are applied to find out the final predicted accuracy and the model level of details. Related to the object complexity and video imaging resolution, the tests show an achievable average accuracy between 1 - 5 cm when using video imaging, which is suitable for visualization, virtual museums and low detailed documentation.

  8. EFFICIENT USE OF VIDEO FOR 3D MODELLING OF CULTURAL HERITAGE OBJECTS

    Directory of Open Access Journals (Sweden)

    B. Alsadik

    2015-03-01

    Full Text Available Currently, there is a rapid development in the techniques of the automated image based modelling (IBM), especially in advanced structure-from-motion (SFM) and dense image matching methods, and camera technology. One possibility is to use video imaging to create 3D reality based models of cultural heritage architectures and monuments. Practically, video imaging is much easier to apply when compared to still image shooting in IBM techniques because the latter needs a thorough planning and proficiency. However, one is faced with mainly three problems when video image sequences are used for highly detailed modelling and dimensional survey of cultural heritage objects. These problems are: the low resolution of video images, the need to process a large number of short baseline video images and blur effects due to camera shake on a significant number of images. In this research, the feasibility of using video images for efficient 3D modelling is investigated. A method is developed to find the minimal significant number of video images in terms of object coverage and blur effect. This reduction in video images is convenient to decrease the processing time and to create a reliable textured 3D model compared with models produced by still imaging. Two experiments for modelling a building and a monument are tested using a video image resolution of 1920×1080 pixels. Internal and external validations of the produced models are applied to find out the final predicted accuracy and the model level of details. Related to the object complexity and video imaging resolution, the tests show an achievable average accuracy between 1 – 5 cm when using video imaging, which is suitable for visualization, virtual museums and low detailed documentation.

  9. Efficient occupancy model-fitting for extensive citizen-science data

    Science.gov (United States)

    Morgan, Byron J. T.; Freeman, Stephen N.; Ridout, Martin S.; Brereton, Tom M.; Fox, Richard; Powney, Gary D.; Roy, David B.

    2017-01-01

    Appropriate large-scale citizen-science data present important new opportunities for biodiversity modelling, due in part to the wide spatial coverage of information. Recently proposed occupancy modelling approaches naturally incorporate random effects in order to account for annual variation in the composition of sites surveyed. In turn, this leads to Bayesian analysis and model fitting, which are typically extremely time consuming. Motivated by presence-only records of occurrence from the UK Butterflies for the New Millennium database, we present an alternative approach, in which site variation is described in a standard way through logistic regression on relevant environmental covariates. This allows efficient occupancy model-fitting using classical inference, which is easily achieved using standard computers. This is especially important when models need to be fitted each year, typically for many different species, as with British butterflies for example. Using both real and simulated data we demonstrate that the two approaches, with and without random effects, can result in similar conclusions regarding trends. There are many advantages to classical model-fitting, including the ability to compare a range of alternative models, identify appropriate covariates and assess model fit, using standard tools of maximum likelihood. In addition, modelling in terms of covariates provides opportunities for understanding the ecological processes that are in operation. We show that there is even greater potential; the classical approach allows us to construct regional indices simply, which indicate how changes in occupancy typically vary over a species’ range. In addition, we are able to construct dynamic occupancy maps, which provide a novel, modern tool for examining temporal changes in species distribution. These new developments may be applied to a wide range of taxa, and are valuable at a time of climate change. They also have the potential to motivate citizen
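The classical covariate-based alternative can be illustrated with a minimal maximum-likelihood logistic-regression fit; this sketch ignores the detection process and uses hypothetical names, so it is not the authors' exact likelihood:

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Maximum-likelihood logistic regression via Newton-Raphson.
    X: (n, p) design matrix (include an intercept column of ones);
    y: 0/1 occupancy records. Returns the fitted coefficient vector."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ beta)))   # fitted occupancy probabilities
        w = p * (1.0 - p)                        # IRLS weights
        grad = X.T @ (y - p)                     # score vector
        hess = X.T @ (X * w[:, None])            # observed information
        beta += np.linalg.solve(hess, grad)
    return beta
```

Because the fit is a standard GLM, likelihood-based model comparison and covariate selection come essentially for free, which is the efficiency argument made in the abstract.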

  10. A hybrid model for the computationally-efficient simulation of the cerebellar granular layer

    Directory of Open Access Journals (Sweden)

    Anna Cattani

    2016-04-01

    Full Text Available The aim of the present paper is to efficiently describe the membrane potential dynamics of neural populations formed by species having a high density difference in specific brain areas. We propose a hybrid model whose main ingredients are a conductance-based model (ODE system) and its continuous counterpart (PDE system) obtained through a limit process in which the number of neurons confined in a bounded region of the brain tissue is sent to infinity. Specifically, in the discrete model, each cell is described by a set of time-dependent variables, whereas in the continuum model, cells are grouped into populations that are described by a set of continuous variables. Communications between populations, which translate into interactions among the discrete and the continuous models, are the essence of the hybrid model we present here. The cerebellum and cerebellum-like structures show in their granular layer a large difference in the relative density of neuronal species, making them a natural testing ground for our hybrid model. By reconstructing the ensemble activity of the cerebellar granular layer network and by comparing our results to a more realistic computational network, we demonstrate that our description of the network activity, even though it is not biophysically detailed, is still capable of reproducing salient features of neural network dynamics. Our modeling approach yields a significant computational cost reduction by increasing the simulation speed at least 270 times. The hybrid model reproduces interesting dynamics such as local microcircuit synchronization, traveling waves, center-surround and time-windowing.

  11. Model-based efficiency evaluation of combine harvester traction drives

    Directory of Open Access Journals (Sweden)

    Steffen Häberle

    2015-08-01

    Full Text Available As part of this research, the drive train of combine harvesters is investigated in detail, with an explicit focus on load and power distribution, energy consumption and usage distribution, explored on two test machines. Based on the lessons learned during field operations, the energy-saving potential in the traction train of combine harvesters can now be quantified in model-based studies. Beyond that, the virtual machine trial provides an opportunity to compare innovative drivetrain architectures and control solutions under reproducible conditions. As a result, an evaluation method is presented and generically used to draw comparisons under locally representative operating conditions.

  12. Study on the Technical Efficiency of Creative Human Capital in China by Three-Stage Data Envelopment Analysis Model

    Directory of Open Access Journals (Sweden)

    Jian Ma

    2014-01-01

    Full Text Available Previous research has demonstrated the positive effect of creative human capital and its development on economic development. Yet the technical efficiency of creative human capital and its effects are still under research. The authors estimate the technical efficiency value in the Chinese context, adjusted for environmental variables and statistical noise, by establishing a three-stage data envelopment analysis model using data from 2003 to 2010. The research results indicate that, in this period, the technical efficiency of creative human capital in China as a whole, and in its different regions and provinces, is still at a low level and could be improved. Moreover, technical inefficiency derives mostly from scale inefficiency and is rarely affected by pure technical inefficiency. The research also examines the marked effects of environmental variables on technical efficiency, and it shows that different environmental variables differ in their effects. The expansion of the scale of education, development of a healthy environment, growth of GDP, development of skill training, and population migration could reduce the input of creative human capital and promote technical efficiency, while development of trade and institutional change, on the contrary, would block the input of creative human capital and the promotion of technical efficiency.

  13. Personalization of models with many model parameters: an efficient sensitivity analysis approach.

    Science.gov (United States)

    Donders, W P; Huberts, W; van de Vosse, F N; Delhaas, T

    2015-10-01

    Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of individual input parameters or their interactions, are considered the gold standard. The variance portions are called the Sobol sensitivity indices and can be estimated by a Monte Carlo (MC) approach (e.g., Saltelli's method [1]) or by employing a metamodel (e.g., the (generalized) polynomial chaos expansion (gPCE) [2, 3]). All these methods require a large number of model evaluations when estimating the Sobol sensitivity indices for models with many parameters [4]. To reduce the computational cost, we introduce a two-step approach. In the first step, a subset of important parameters is identified for each output of interest using the screening method of Morris [5]. In the second step, a quantitative variance-based sensitivity analysis is performed using gPCE. Efficient sampling strategies are introduced to minimize the number of model runs required to obtain the sensitivity indices for models considering multiple outputs. The approach is tested using a model that was developed for predicting post-operative flows after creation of a vascular access for renal failure patients. We compare the sensitivity indices obtained with the novel two-step approach with those obtained from a reference analysis that applies Saltelli's MC method. The two-step approach was found to yield accurate estimates of the sensitivity indices at two orders of magnitude lower computational cost. Copyright © 2015 John Wiley & Sons, Ltd.
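The first (screening) step of the two-step approach can be sketched with a minimal Morris elementary-effects routine; this is an illustrative implementation under simplified assumptions (unit-hypercube inputs, fixed step `delta`), not the authors' code:

```python
import numpy as np

def morris_mu_star(f, n_params, n_traj=20, delta=0.1, seed=0):
    """Morris screening: mean absolute elementary effect (mu*) per parameter,
    from one-at-a-time steps along random trajectories in the unit hypercube.
    Parameters with small mu* can be dropped before the gPCE step."""
    rng = np.random.default_rng(seed)
    ee = np.zeros((n_traj, n_params))
    for t in range(n_traj):
        x = rng.uniform(0.0, 1.0 - delta, size=n_params)
        fx = f(x)
        for i in rng.permutation(n_params):       # random order within a trajectory
            x_new = x.copy()
            x_new[i] += delta                      # perturb one parameter at a time
            f_new = f(x_new)
            ee[t, i] = (f_new - fx) / delta        # elementary effect
            x, fx = x_new, f_new                   # walk along the trajectory
    return np.abs(ee).mean(axis=0)
```

Each trajectory costs only `n_params + 1` model runs, which is why screening scales well to models with many parameters.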

  14. An Efficient Implementation of Track-Oriented Multiple Hypothesis Tracker Using Graphical Model Approaches

    Directory of Open Access Journals (Sweden)

    Jinping Sun

    2017-01-01

    Full Text Available The multiple hypothesis tracker (MHT) is currently the preferred method for addressing the data association problem in multitarget tracking (MTT) applications. MHT seeks the most likely global hypothesis by enumerating all possible associations over time, which is equivalent to calculating the maximum a posteriori (MAP) estimate over the report data. Despite being a well-studied method, MHT remains challenging mostly because of the computational complexity of data association. In this paper, we describe an efficient method for solving the data association problem using graphical model approaches. The proposed method uses the graph representation to model the global hypothesis formation and subsequently applies an efficient message passing algorithm to obtain the MAP solution. Specifically, the graph representation of the data association problem is formulated as a maximum weight independent set problem (MWISP), which translates the best global hypothesis formation into finding the maximum weight independent set on the graph. Then, a max-product belief propagation (MPBP) inference algorithm is applied to seek the most likely global hypotheses with the purpose of avoiding a brute force hypothesis enumeration procedure. The simulation results show that the proposed MPBP-MHT method can achieve better tracking performance than other algorithms in challenging tracking situations.
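The MWISP formulation can be illustrated on a toy graph; the sketch below uses exhaustive search in place of max-product belief propagation, so it conveys only the objective (the heaviest set of mutually compatible track hypotheses), not the paper's efficient inference:

```python
from itertools import combinations

def max_weight_independent_set(weights, edges):
    """Exhaustive MWIS: nodes are track hypotheses, weights their likelihood
    scores, and an edge marks two hypotheses that share a report and are
    therefore incompatible. Feasible only for small graphs."""
    n = len(weights)
    adj = {tuple(sorted(e)) for e in edges}
    best, best_w = set(), 0.0
    for r in range(1, n + 1):
        for subset in combinations(range(n), r):
            # skip subsets containing any incompatible (adjacent) pair
            if any(tuple(sorted(p)) in adj for p in combinations(subset, 2)):
                continue
            w = sum(weights[i] for i in subset)
            if w > best_w:
                best, best_w = set(subset), w
    return best, best_w
```

The exponential cost of this enumeration is exactly what motivates replacing it with message passing in the paper.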

  15. Measurement and decomposition of energy efficiency of Northeast China-based on super efficiency DEA model and Malmquist index.

    Science.gov (United States)

    Ma, Xiaojun; Liu, Yan; Wei, Xiaoxue; Li, Yifan; Zheng, Mengchen; Li, Yudong; Cheng, Chaochao; Wu, Yumei; Liu, Zhaonan; Yu, Yuanbo

    2017-08-01

    Nowadays, environmental problems have become a pressing international issue, and experts and scholars pay increasing attention to energy efficiency. Unlike most studies, which analyze changes in total-factor energy efficiency (TFEE) across provinces or regional cities, here TFEE is calculated as the ratio of target energy value to actual energy input based on data for prefecture-level cities, which is more accurate. Many studies treat total factor productivity (TFP) as TFEE in provincial-level analyses. This paper calculates TFEE more reliably by super-efficiency DEA, observes the changes in TFEE, analyzes its relation with TFP, and proves that TFP is not equal to TFEE. Additionally, the internal influences on TFEE are obtained via the Malmquist index decomposition. The external influences on TFEE are then analyzed using Tobit models. Analysis results demonstrate that Heilongjiang has the highest TFEE, followed by Jilin, and Liaoning has the lowest TFEE. Finally, some policy suggestions are proposed based on the influences on energy efficiency and the study results.

  16. IS CAPM AN EFFICIENT MODEL? ADVANCED VERSUS EMERGING MARKETS

    Directory of Open Access Journals (Sweden)

    Iulian IHNATOV

    2015-10-01

    Full Text Available CAPM is one of the financial models most widely used by investors all over the world for analyzing the correlation between risk and return, and is considered a milestone in the financial literature. However, in recent years it has been criticized for the unrealistic assumptions it is based on and for the fact that the expected returns it forecasts are wrong. The aim of this paper is to test CAPM statistically for a set of shares listed on the New York Stock Exchange, Nasdaq, Warsaw Stock Exchange and Bucharest Stock Exchange (developed markets vs. emerging markets) and to compare the expected returns resulting from CAPM with the actual returns. Thereby, we intend to verify whether the model holds for the Central and Eastern European capital market, mostly dominated by Poland, and whether the Polish and Romanian stock market indices may faithfully be represented as market portfolios. Moreover, we intend to make a comparison between the results for Poland and Romania. After carrying out the analysis, the results confirm that the CAPM is statistically verified for all three capital markets, but it fails to correctly forecast the expected returns. This means that investors may make poor investment decisions and incur large losses.
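The core CAPM computation being tested can be sketched as follows; this is an illustrative beta-by-OLS estimate and the CAPM expected return, with a hypothetical function name and interface:

```python
import numpy as np

def capm(asset_returns, market_returns, risk_free):
    """Estimate beta as the OLS slope of excess asset returns on excess
    market returns, then form the CAPM expected return
    E[r_i] = r_f + beta * (E[r_m] - r_f)."""
    r_i = np.asarray(asset_returns) - risk_free
    r_m = np.asarray(market_returns) - risk_free
    beta = np.cov(r_i, r_m, ddof=1)[0, 1] / np.var(r_m, ddof=1)
    expected = risk_free + beta * r_m.mean()
    return beta, expected
```

A test of the kind the paper performs would compare `expected` against the realized returns of each share and check the statistical significance of the discrepancy.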

  17. Energy efficiency and integrated resource planning - lessons drawn from the Californian model

    International Nuclear Information System (INIS)

    Baudry, P.

    2008-01-01

    The principle of integrated resource planning (IRP) is to consider, on the same level, investments which aim to produce energy and those which enable energy requirements to be reduced. According to this principle, energy efficiency programmes, which help to reduce energy demand and CO2 emissions, are considered as an economically valued resource. The costs and gains of this resource are evaluated and compared to those relating to energy production. California has adopted IRP since 1990 and ranks energy efficiency highest among the available energy resources, since economic evaluations show that the cost of saving one kWh is lower than the cost of producing it. Yet this energy policy model is not universally widespread over the world. This can be explained by several reasons. Firstly, a reliable economic valuation of energy savings presupposes resolving great uncertainties linked to the measurement of energy savings, which stem in particular from the different possible choices of baseline reference. The lack of interest in IRP in Europe can also be explained by an institutional context of energy market liberalization which does not promote this type of regulation, as well as by the concern of making energy supply security the policies' top priority. Lastly, the remuneration of economic players investing in energy efficiency programmes is an indispensable condition for its quantitative recognition in national investment planning. In France, the process of multi-annual investment programming is a mechanism which could lead to energy efficiency being included as a resource with economically valued investments. (author)

  18. Modeling low cost hybrid tandem photovoltaics with the potential for efficiencies exceeding 20%

    KAUST Repository

    Beiley, Zach M.

    2012-01-01

    It is estimated that for photovoltaics to reach grid parity around the planet, they must be made with costs under $0.50 per Wp and must also achieve power conversion efficiencies above 20% in order to keep installation costs down. In this work we explore a novel solar cell architecture, a hybrid tandem photovoltaic (HTPV), and show that it is capable of meeting these targets. HTPV is composed of an inexpensive, low-temperature-processed solar cell, such as an organic or dye-sensitized solar cell, that can be printed on top of one of a variety of more traditional inorganic solar cells. Our modeling shows that an organic solar cell may be added on top of a commercial CIGS cell to improve its efficiency from 15.1% to 21.4%, thereby reducing the cost of the modules by ∼15% to 20% and the cost of installation by up to 30%. This suggests that HTPV is a promising option for producing solar power that matches the cost of existing grid energy. © 2012 The Royal Society of Chemistry.

  19. Simple, Efficient Estimators of Treatment Effects in Randomized Trials Using Generalized Linear Models to Leverage Baseline Variables

    Science.gov (United States)

    Rosenblum, Michael; van der Laan, Mark J.

    2010-01-01

    Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation. PMID:20628636

  20. Simple, efficient estimators of treatment effects in randomized trials using generalized linear models to leverage baseline variables.

    Science.gov (United States)

    Rosenblum, Michael; van der Laan, Mark J

    2010-04-01

    Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation.
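The special case discussed above can be checked numerically: with a main-terms Poisson working model containing only an intercept and the treatment indicator, the MLE of the treatment coefficient equals the log ratio of the group means (the marginal log rate ratio), whatever the true outcome distribution. A minimal Newton-Raphson sketch with hypothetical names, not the authors' code:

```python
import numpy as np

def poisson_mle(X, y, iters=50):
    """Newton-Raphson MLE for a Poisson log-linear working model.
    X: (n, p) design matrix; y: nonnegative outcomes. Returns coefficients."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)                 # fitted means under the working model
        grad = X.T @ (y - mu)                 # score vector
        hess = X.T @ (X * mu[:, None])        # information matrix
        beta += np.linalg.solve(hess, grad)
    return beta
```

With design columns (intercept, treatment), the fitted treatment coefficient reproduces log(mean of treated / mean of controls) even when the outcomes are not Poisson, illustrating the robustness claim.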

  1. Pipeline for Efficient Mapping of Transcription Factor Binding Sites and Comparison of Their Models

    KAUST Repository

    Ba alawi, Wail

    2011-06-01

    The control of genes in every living organism is based on the activities of transcription factor (TF) proteins. These TFs interact with DNA by binding to TF binding sites (TFBSs) and in that way create conditions for genes to activate. Of the approximately 1500 TFs in humans, TFBSs have been experimentally derived for fewer than 300 TFs, and generally only for limited portions of the genome. To associate TFs with the genes they control, we need to know whether a TF has the potential to interact with the control region of the gene. For this we need models of TFBS families. The existing models are not sufficiently accurate, or they are too complex for use by ordinary biologists. To remove some of the deficiencies of these models, in this study we developed a pipeline through which we achieved the following: 1. Through a comparative performance analysis, we identified the best models with optimized thresholds among four different types of models of TFBS families. 2. Using the best models, we mapped TFBSs to the human genome in an efficient way. The study shows that a new scoring function used with TFBS models based on the position weight matrix of dinucleotides with remote dependency results in better accuracy than the other three types of TFBS models. The speed of mapping has been improved by developing parallelized code, showing a significant speed-up of 4x when going from 1 CPU to 8 CPUs. To verify whether the predicted TFBSs are more accurate than what can be expected with conventional models, we identified the most frequent pairs of TFBSs (for TFs E4F1 and ATF6) that appeared close to each other (within a distance of 200 nucleotides) over the human genome. We show, unexpectedly, that the genes closest to the multiple pairs of E4F1/ATF6 binding sites have a co-expression of over 90%. This indirectly supports our hypothesis that the TFBS models we use are more accurate and also suggests that the E4F1/ATF6 pair is exerting the
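The dinucleotide position-weight-matrix idea can be sketched as follows; this is a simplified illustration without the remote-dependency component, using hypothetical names, additive pseudocounts, and a uniform background:

```python
import math

def dinuc_pwm(sites, alpha=0.5):
    """Build a position weight matrix over adjacent dinucleotides from
    aligned binding sites of equal length; alpha is a pseudocount."""
    length = len(sites[0])
    pwm = []
    for pos in range(length - 1):
        counts = {}
        for s in sites:
            d = s[pos:pos + 2]
            counts[d] = counts.get(d, 0) + 1
        total = len(sites) + 16 * alpha  # pseudocounts spread over 16 dinucleotides
        pwm.append({d: math.log((counts.get(d, 0) + alpha) / total / (1.0 / 16.0))
                    for d in (a + b for a in "ACGT" for b in "ACGT")})
    return pwm

def score(pwm, seq):
    """Log-likelihood ratio of seq under the model vs a uniform background;
    higher scores indicate a more site-like sequence."""
    return sum(pwm[i][seq[i:i + 2]] for i in range(len(pwm)))
```

Scanning a genome then amounts to sliding `score` along the sequence and thresholding, which is the step the pipeline parallelizes.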

  2. Modeling and optimization of processes for clean and efficient pulverized coal combustion in utility boilers

    Directory of Open Access Journals (Sweden)

    Belošević Srđan V.

    2016-01-01

    Full Text Available Pulverized coal-fired power plants should provide higher efficiency of energy conversion, flexibility in terms of boiler loads and fuel characteristics and emission reduction of pollutants like nitrogen oxides. Modification of combustion process is a cost-effective technology for NOx control. For optimization of complex processes, such as turbulent reactive flow in coal-fired furnaces, mathematical modeling is regularly used. The NOx emission reduction by combustion modifications in the 350 MWe Kostolac B boiler furnace, tangentially fired by pulverized Serbian lignite, is investigated in the paper. Numerical experiments were done by an in-house developed three-dimensional differential comprehensive combustion code, with fuel- and thermal-NO formation/destruction reactions model. The code was developed to be easily used by engineering staff for process analysis in boiler units. A broad range of operating conditions was examined, such as fuel and preheated air distribution over the burners and tiers, operation mode of the burners, grinding fineness and quality of coal, boiler loads, cold air ingress, recirculation of flue gases, water-walls ash deposition and combined effect of different parameters. The predictions show that the NOx emission reduction of up to 30% can be achieved by a proper combustion organization in the case-study furnace, with the flame position control. Impact of combustion modifications on the boiler operation was evaluated by the boiler thermal calculations suggesting that the facility was to be controlled within narrow limits of operation parameters. Such a complex approach to pollutants control enables evaluating alternative solutions to achieve efficient and low emission operation of utility boiler units. [Projekat Ministarstva nauke Republike Srbije, br. TR-33018: Increase in energy and ecology efficiency of processes in pulverized coal-fired furnace and optimization of utility steam boiler air preheater by using in

  3. Backtracking search algorithm in CVRP models for efficient solid waste collection and route optimization.

    Science.gov (United States)

    Akhtar, Mahmuda; Hannan, M A; Begum, R A; Basri, Hassan; Scavino, Edgar

    2017-03-01

    Waste collection is an important part of waste management that involves different issues, including environmental, economic, and social, among others. Waste collection optimization can reduce the waste collection budget and environmental emissions by reducing the collection route distance. This paper presents a modified Backtracking Search Algorithm (BSA) in capacitated vehicle routing problem (CVRP) models with the smart bin concept to find the best optimized waste collection route solutions. The objective function minimizes the sum of the waste collection route distances. The study introduces the concept of the threshold waste level (TWL) of waste bins to reduce the number of bins to be emptied by finding an optimal range, thus minimizing the distance. A scheduling model is also introduced to compare the feasibility of the proposed model with that of the conventional collection system in terms of travel distance, collected waste, fuel consumption, fuel cost, efficiency and CO2 emission. The optimal TWL was found to be between 70% and 75% of the fill level of waste collection nodes and had the maximum tightness value for different problem cases. The obtained results for four days show a 36.80% distance reduction for 91.40% of the total waste collection, which eventually increases the average waste collection efficiency by 36.78% and reduces the fuel consumption, fuel cost and CO2 emission by 50%, 47.77% and 44.68%, respectively. Thus, the proposed optimization model can be considered a viable tool for optimizing waste collection routes to reduce economic costs and environmental impacts. Copyright © 2017 Elsevier Ltd. All rights reserved.
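The threshold-waste-level filter and the route-distance objective can be sketched as follows; this illustrative fragment uses hypothetical names and plain Euclidean distances, not the paper's BSA optimizer:

```python
import math

def bins_to_collect(fill_levels, twl=0.70):
    """Smart-bin filter: only bins at or above the threshold waste level (TWL)
    are scheduled for collection; returns their indices."""
    return [i for i, f in enumerate(fill_levels) if f >= twl]

def route_distance(depot, coords, order):
    """Objective evaluated by the optimizer: total tour length
    depot -> selected bins in `order` -> depot (Euclidean)."""
    pts = [depot] + [coords[i] for i in order] + [depot]
    return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))
```

Raising the TWL shrinks the set returned by `bins_to_collect`, which is exactly the mechanism by which the optimal 70-75% range shortens routes while still capturing most of the waste.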

  4. A Flexible, Efficient Binomial Mixed Model for Identifying Differential DNA Methylation in Bisulfite Sequencing Data

    Science.gov (United States)

    Lea, Amanda J.

    2015-01-01

    Identifying sources of variation in DNA methylation levels is important for understanding gene regulation. Recently, bisulfite sequencing has become a popular tool for investigating DNA methylation levels. However, modeling bisulfite sequencing data is complicated by dramatic variation in coverage across sites and individual samples, and because of the computational challenges of controlling for genetic covariance in count data. To address these challenges, we present a binomial mixed model and an efficient, sampling-based algorithm (MACAU: Mixed model association for count data via data augmentation) for approximate parameter estimation and p-value computation. This framework allows us to simultaneously account for both the over-dispersed, count-based nature of bisulfite sequencing data, as well as genetic relatedness among individuals. Using simulations and two real data sets (whole genome bisulfite sequencing (WGBS) data from Arabidopsis thaliana and reduced representation bisulfite sequencing (RRBS) data from baboons), we show that our method provides well-calibrated test statistics in the presence of population structure. Further, it improves power to detect differentially methylated sites: in the RRBS data set, MACAU detected 1.6-fold more age-associated CpG sites than a beta-binomial model (the next best approach). Changes in these sites are consistent with known age-related shifts in DNA methylation levels, and are enriched near genes that are differentially expressed with age in the same population. Taken together, our results indicate that MACAU is an efficient, effective tool for analyzing bisulfite sequencing data, with particular salience to analyses of structured populations. MACAU is freely available at www.xzlab.org/software.html. PMID:26599596

  5. Modeling efficiency and water balance in PEM fuel cell systems with liquid fuel processing and hydrogen membranes

    Science.gov (United States)

    Pearlman, Joshua B.; Bhargav, Atul; Shields, Eric B.; Jackson, Gregory S.; Hearn, Patrick L.

    Integrating PEM fuel cells effectively with liquid hydrocarbon reforming requires careful system analysis to assess trade-offs associated with H2 production, purification, and overall water balance. To this end, a model of a PEM fuel cell system integrated with an autothermal reformer for liquid hydrocarbon fuels (modeled as C12H23) and with H2 purification in a water-gas-shift/membrane reactor is developed to do iterative calculations for mass, species, and energy balances at a component and system level. The model evaluates system efficiency with parasitic loads (from compressors, pumps, and cooling fans), system water balance, and component operating temperatures/pressures. Model results for a 5-kW fuel cell generator show that with state-of-the-art PEM fuel cell polarization curves, thermal efficiencies >30% can be achieved when power densities are low enough for operating voltages >0.72 V per cell. Efficiency can be increased by operating the reformer at steam-to-carbon ratios as high as constraints related to stable reactor temperatures allow. Decreasing ambient temperature improves system water balance and increases efficiency through parasitic load reduction. The baseline configuration studied herein sustained water balance for ambient temperatures ≤35 °C at full power and ≤44 °C at half power with efficiencies approaching ∼27 and ∼30%, respectively.
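The net-efficiency bookkeeping described above can be sketched as a one-line energy balance; the function and its inputs are hypothetical and the numbers in the usage below are illustrative, not the paper's results:

```python
def system_efficiency(stack_power_w, parasitic_power_w, fuel_flow_kg_s, fuel_lhv_j_kg):
    """Net thermal efficiency: stack electrical output minus parasitic loads
    (compressors, pumps, cooling fans) divided by fuel LHV energy input."""
    return (stack_power_w - parasitic_power_w) / (fuel_flow_kg_s * fuel_lhv_j_kg)
```

This makes the abstract's point explicit: at lower ambient temperature the fan load (part of `parasitic_power_w`) drops, so net efficiency rises even with an unchanged stack output.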

  6. Efficient Transdermal Delivery of Benfotiamine in an Animal Model

    Directory of Open Access Journals (Sweden)

    Gyula Varadi

    2015-01-01

    Full Text Available We designed a transdermal system to serve as a delivery platform for benfotiamine, utilizing the attributes of passive penetration-enhancing molecules to penetrate through the outer layers of skin, combined with the advance of incorporating various peripherally-acting vasodilators to enhance drug uptake. Benfotiamine, incorporated into this transdermal formulation, was applied to skin in an animal model in order to determine the ability to deliver this thiamine pro-drug effectively to the sub-epithelial layers. In this proof-of-concept study in guinea pigs, we found that a single topical application of either a solubilized form of benfotiamine (15 mg) or a microcrystalline suspension form (25 mg) resulted in considerable increases of the dephosphorylated benfotiamine (S-benzoylthiamine) in the skin tissue as well as in significant increases in the thiamine and thiamine phosphate pools compared to control animals. The presence of a ~8000x increase in thiamine and increases in its phosphorylated derivatives in the epidermis and dermis tissue of the test animals strongly indicates that topical treatment with benfotiamine achieves the desired outcome of producing an intracellular increase of the activating cofactor pool for the transketolase enzyme, which is implicated in the pathophysiology of diabetic neuropathy.

  7. LEADERSHIP MODELS AND EFFICIENCY IN DECISION CRISIS SITUATIONS, DURING DISASTERS

    Directory of Open Access Journals (Sweden)

    JAIME RIQUELME CASTAÑEDA

    2017-09-01

    Full Text Available This article explains how effective leadership is exercised by a team during an emergency, in a decision crisis in the context of a disaster. From the process approach, we analyze variables such as flexibility, value congruence, rationality, politicization, and quality of design. To achieve that, we performed field work with the information obtained from the three emergency headquarters deployed by the Chilean Armed Forces in response to the effects of the 8.8-magnitude earthquake of February 27th, 2010. The data are analyzed through econometric techniques. The results suggest that original ideas and rigorous analysis are the keys to securing the quality of the decision. They also reveal that operational efficiency in a disaster requires a strong presence of vision, mission, and inspiration on a solid, pre-existing base of goals and motivations. Finally, we find support for the relationship between leadership styles and efficiency in crisis decision-making during disasters, which opens a space to build a theoretical model of decision-making.

  8. Energy efficiency optimisation for distillation column using artificial neural network models

    International Nuclear Information System (INIS)

    Osuolale, Funmilayo N.; Zhang, Jie

    2016-01-01

    This paper presents a neural network based strategy for the modelling and optimisation of energy efficiency in distillation columns incorporating the second law of thermodynamics. Real-time optimisation of distillation columns based on mechanistic models is often infeasible due to the effort in model development and the large computation effort associated with mechanistic model computation. This issue can be addressed by using neural network models which can be quickly developed from process operation data. The computation time in neural network model evaluation is very short making them ideal for real-time optimisation. Bootstrap aggregated neural networks are used in this study for enhanced model accuracy and reliability. Aspen HYSYS is used for the simulation of the distillation systems. Neural network models for exergy efficiency and product compositions are developed from simulated process operation data and are used to maximise exergy efficiency while satisfying products qualities constraints. Applications to binary systems of methanol-water and benzene-toluene separations culminate in a reduction of utility consumption of 8.2% and 28.2% respectively. Application to multi-component separation columns also demonstrate the effectiveness of the proposed method with a 32.4% improvement in the exergy efficiency. - Highlights: • Neural networks can accurately model exergy efficiency in distillation columns. • Bootstrap aggregated neural network offers improved model prediction accuracy. • Improved exergy efficiency is obtained through model based optimisation. • Reductions of utility consumption by 8.2% and 28.2% were achieved for binary systems. • The exergy efficiency for multi-component distillation is increased by 32.4%.
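The bootstrap aggregation idea itself can be sketched generically; the fragment below bags simple least-squares models rather than neural networks, so it illustrates only the resampling-and-averaging principle that improves prediction reliability, not the paper's exergy models:

```python
import numpy as np

def bagged_fit(X, y, n_models=20, seed=0):
    """Bootstrap aggregation: fit one least-squares model per bootstrap
    resample of the training data; returns the list of coefficient vectors."""
    rng = np.random.default_rng(seed)
    n = len(y)
    return [np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]
            for idx in (rng.integers(0, n, size=n) for _ in range(n_models))]

def bagged_predict(models, X):
    """Aggregated prediction: the mean over all ensemble members, which
    reduces the variance of any single fitted model."""
    return np.mean([X @ c for c in models], axis=0)
```

In the paper's setting each ensemble member would be a neural network trained on a resample of the simulated operation data, with the same averaging step at prediction time.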

  9. Research on CO2 ejector component efficiencies by experiment measurement and distributed-parameter modeling

    International Nuclear Information System (INIS)

    Zheng, Lixing; Deng, Jianqiang

    2017-01-01

    Highlights: • The ejector distributed-parameter model is developed to study ejector efficiencies. • Feasible component and total efficiency correlations of the ejector are established. • New efficiency correlations are applied to obtain dynamic characteristics of the EERC. • More suitable fixed efficiency values can be determined by the proposed correlations. - Abstract: In this study we combine experimental measurement data with a theoretical model of the ejector to determine CO2 ejector component efficiencies, including those of the motive nozzle, suction chamber, mixing section and diffuser, as well as the total ejector efficiency. The ejector is modelled using the distributed-parameter method: the flow passage is divided into a number of elements, and the governing equations are formulated from the differential equations of mass, momentum and energy conservation. The ejector efficiencies are investigated under different ejector geometric parameters and operating conditions, and the corresponding empirical correlations are established. Moreover, the correlations are incorporated into a transient model of the transcritical CO2 ejector expansion refrigeration cycle (EERC), and dynamic simulations are performed with both variable component efficiencies and fixed values. The motive nozzle, suction chamber, mixing section and diffuser efficiencies vary from 0.74 to 0.89, 0.86 to 0.96, 0.73 to 0.9 and 0.75 to 0.95 under the studied conditions, respectively. The responses of suction flow pressure and discharge pressure differ markedly between the variable efficiencies and fixed efficiencies taken from previous studies, whereas when the fixed values are determined by the presented correlations, the responses are essentially the same.
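A component efficiency such as the motive nozzle's is conventionally defined as the actual enthalpy drop across the component divided by the ideal (isentropic) drop. A minimal sketch of that definition (the enthalpy values below are hypothetical round numbers, not measured CO2 states from the study):

```python
def nozzle_efficiency(h_in, h_out_actual, h_out_isentropic):
    """Isentropic efficiency of a motive nozzle:
    actual enthalpy drop over the ideal (isentropic) enthalpy drop."""
    return (h_in - h_out_actual) / (h_in - h_out_isentropic)

# Hypothetical inlet/outlet specific enthalpies in kJ/kg (illustrative only)
eta = nozzle_efficiency(h_in=430.0, h_out_actual=398.0, h_out_isentropic=390.0)
print(f"motive nozzle efficiency: {eta:.2f}")  # (430-398)/(430-390) = 0.80
```

A value of 0.80 falls inside the 0.74 to 0.89 motive-nozzle range reported in the abstract; the paper's contribution is correlating such efficiencies with geometry and operating conditions rather than assuming one fixed value.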

  10. Model Orlando regionally efficient travel management coordination center (MORE TMCC), phase II : final report.

    Science.gov (United States)

    2012-09-01

    The final report for the Model Orlando Regionally Efficient Travel Management Coordination Center (MORE TMCC) presents the details of the 2-year process of the partial deployment of the original MORE TMCC design created in Phase I of this project...

  11. Molecular Simulation towards Efficient and Representative Subsurface Reservoirs Modeling

    KAUST Repository

    Kadoura, Ahmad

    2016-09-01

    This dissertation focuses on the application of Monte Carlo (MC) molecular simulation and Molecular Dynamics (MD) in modeling the thermodynamics and flow of subsurface reservoir fluids. At first, MC molecular simulation is proposed as a promising method to replace correlations and equations of state in subsurface flow simulators. In order to accelerate MC simulations, a set of early rejection schemes (conservative, hybrid, and non-conservative) was developed, in addition to extrapolation methods based on reweighting and reconstruction of pre-generated MC Markov chains. Furthermore, an extensive study was conducted to investigate the sorption and transport of methane, carbon dioxide, water, and their mixtures in the inorganic part of shale using both MC and MD simulations. These simulations covered a wide range of thermodynamic conditions, pore sizes, and fluid compositions, shedding light on several interesting findings, for example the possibility of adsorbing more carbon dioxide at higher preadsorbed water concentrations in relatively large basal spaces. The dissertation is divided into four chapters. The first chapter is the introduction, giving a brief background on molecular simulation and the motivation for the work. The second chapter discusses the theoretical aspects and methodology of the proposed MC speed-up techniques, together with the corresponding results, leading to the successful multi-scale simulation of a compressible single-phase flow scenario. In chapter 3, the results of our extensive study of shale gas at laboratory conditions are reported. In the fourth and last chapter, we close the dissertation with a few concluding remarks highlighting the key findings and summarizing future directions.

  12. A Traction Control Strategy with an Efficiency Model in a Distributed Driving Electric Vehicle

    OpenAIRE

    Lin, Cheng; Cheng, Xingqun

    2014-01-01

    Both active safety and fuel economy are important issues for vehicles. This paper focuses on a traction control strategy with an efficiency model in a distributed driving electric vehicle. In emergency situations, a sliding mode control algorithm was employed to achieve antislip control by keeping the wheels' slip ratios below 20%. For general longitudinal driving cases, an efficiency model aiming at improving the fuel economy was built through an offline optimization stream within the tw...

  13. Four-shell atomic model to compute the counting efficiency of electron-capture nuclides

    International Nuclear Information System (INIS)

    Grau Malonda, A.; Fernandez Martinez, A.

    1985-01-01

    The present paper develops a four-shell atomic model in order to obtain the detection efficiency in liquid scintillation counting. Mathematical expressions are given to calculate the probabilities of the 229 different atomic rearrangements as well as the corresponding effective energies. This new model will permit the study of the influence of the different parameters upon the counting efficiency for nuclides of high atomic number. (Author) 7 refs

  14. Numerical modeling of positive streamer in air in nonuniform fields: Efficiency of radicals production

    International Nuclear Information System (INIS)

    Kulikovsky, A.A.

    2001-01-01

    The efficiency of a streamer corona depends on a number of factors, such as the geometry of the electrodes, the voltage pulse parameters, the gas pressure, etc. In the past 5 years, two-dimensional models of streamers in nonuniform fields in air have been developed. These models make it possible to simulate streamer dynamics and the generation of species, and to investigate the influence of external parameters on species production. In this work the influence of the Laplacian field on the efficiency of radical generation is investigated

  15. Efficient finite element modelling for the investigation of the dynamic behaviour of a structure with bolted joints

    Science.gov (United States)

    Omar, R.; Rani, M. N. Abdul; Yunus, M. A.; Mirza, W. I. I. Wan Iskandar; Zin, M. S. Mohd

    2018-04-01

    A simple structure with bolted joints consists of the structural components, bolts and nuts. There are several methods for modelling structures with bolted joints; however, there is no reliable, efficient and economical modelling method that can accurately predict their dynamic behaviour. This paper describes an investigation conducted to identify an appropriate modelling method for bolted joints. Four different finite element (FE) models of the assembled plates and bolts were evaluated, namely the solid plates-bolts model, the plates-without-bolts model, the hybrid plates-bolts model and the simplified plates-bolts model. FE modal analysis was conducted for all four initial FE models of the bolted joints, and the results were compared with experimental modal analysis (EMA) results. EMA was performed to extract the natural frequencies and mode shapes of the physical test structure with bolted joints. The evaluation compared the number of nodes, the number of elements, the elapsed central processing unit (CPU) time, and the total percentage error of each initial FE model relative to the EMA results. The evaluation showed that the simplified plates-bolts model most accurately predicted the dynamic behaviour of the structure with bolted joints. This study proved that reliable, efficient and economical modelling of bolted joints, mainly the representation of the bolting, plays a crucial role in ensuring the accuracy of the dynamic behaviour prediction.

  16. AN INTEGRATED MODELING FRAMEWORK FOR ENVIRONMENTALLY EFFICIENT CAR OWNERSHIP AND TRIP BALANCE

    Directory of Open Access Journals (Sweden)

    Tao FENG

    2008-01-01

    Urban transport emissions generated by automobile trips are largely responsible for atmospheric pollution in both developed and developing countries. To meet the long-term target of sustainable development, it is important to specify the feasible level of car ownership and travel demand from environmental considerations. This research proposes an integrated modeling framework for the optimal construction of a comprehensive transportation system that takes environmental constraints into consideration. The modeling system combines multiple essential models and is formulated using a bi-level programming approach. In the upper level, the maximization of both total car ownership and the total number of trips by private and public travel modes is set as the objective function, with the constraint that the total emission level in each zone must not exceed the corresponding environmental capacity. Maximizing the total trips by private and public travel modes allows policy makers to take trip balance into account, meeting both the mobility levels required by travelers and the goal of an environmentally friendly transportation system. The lower-level problem is a combined trip distribution and assignment model incorporating travelers' route choice behavior. A logit-type aggregate modal split model connects the two levels. A genetic algorithm is applied to solve the integrated model. A case study is conducted using road network data and person-trip (PT) data collected in Dalian city, China. The analysis results showed that the amount of environmentally efficient car ownership and the number of trips by different travel modes could be obtained simultaneously when considering the zonal control of environmental capacity within the proposed integrated framework. The observed car ownership in zones could be increased or decreased towards the macroscopic optimization

  17. Analysis of the coupling efficiency of a tapered space receiver with a calculus mathematical model

    Science.gov (United States)

    Hu, Qinggui; Mu, Yining

    2018-03-01

    We establish a calculus-based mathematical model to study the coupling characteristics of tapered optical fibers in a space communications system and obtain the coupling efficiency equation. The solution was then calculated using MATLAB software. After this, a sample was produced by the mature flame-brush technique, and the experiment was performed; the results were in accordance with the theoretical analysis. This shows that the theoretical analysis was correct and indicates that a tapered structure can improve tolerance to misalignment. Project supported by the National Natural Science Foundation of China (grant no. 61275080); the 2017 Jilin Province Science and Technology Development Plan-Science and Technology Innovation Fund for Small and Medium Enterprises (20170308029HJ); and the ‘thirteen five’ science and technology research project of the Department of Education of Jilin, 2016 (16JK009).

  18. A Fuzzy Logic Model to Classify Design Efficiency of Nursing Unit Floors

    Directory of Open Access Journals (Sweden)

    Tuğçe KAZANASMAZ

    2010-01-01

    This study was conducted to determine classifications for the planimetric design efficiency of certain public hospitals by developing a fuzzy logic algorithm. Utilizing primary areas and circulation areas from nursing unit floor plans, the study employed triangular membership functions for the fuzzy subsets. The input variables, primary area per bed and circulation area per bed, were fuzzified in this model. The relationship between the input variables and the output variable, design efficiency, was expressed through fuzzy rules. To test existing nursing unit floors, efficiency output values were obtained and efficiency classes were constructed by this model in accordance with general norms, guidelines and previous studies. The classification of efficiency resulted from the comparison of hospitals.
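The triangular membership functions used for fuzzification can be sketched in a few lines. The breakpoints and the "primary area per bed" sets below are hypothetical, chosen only to illustrate how a crisp input value acquires partial membership in overlapping fuzzy subsets:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical fuzzy sets for "primary area per bed" (m^2) -- illustrative breakpoints
low    = lambda x: tri(x, 0.0, 10.0, 20.0)
medium = lambda x: tri(x, 10.0, 20.0, 30.0)
high   = lambda x: tri(x, 20.0, 30.0, 40.0)

x = 24.0
memberships = {"low": low(x), "medium": medium(x), "high": high(x)}
print(memberships)  # x=24 is partly "medium" (0.6) and partly "high" (0.4)
```

Fuzzy rules then combine such memberships for the two inputs (primary area per bed, circulation area per bed) to grade the design-efficiency output.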

  19. A method to identify energy efficiency measures for factory systems based on qualitative modeling

    CERN Document Server

    Krones, Manuela

    2017-01-01

    Manuela Krones develops a method that supports factory planners in generating energy-efficient planning solutions. The method provides qualitative description concepts for factory planning tasks and energy efficiency knowledge, as well as an algorithm-based linkage between these measures and the respective planning tasks. Its application is guided by a procedure model which allows general applicability in the manufacturing sector. The results contain energy efficiency measures that are suitable for a specific planning task and reveal the roles of various actors in the measures’ implementation. Contents: Driving Concerns for and Barriers against Energy Efficiency; Approaches to Increase Energy Efficiency in Factories; Socio-Technical Description of Factory Planning Tasks; Description of Energy Efficiency Measures; Case Studies on Welding Processes and Logistics Systems. Target Groups: Lecturers and Students of Industrial Engineering, Production Engineering, Environmental Engineering, Mechanical Engineering; Practi...

  20. The evaluation model of the enterprise energy efficiency based on DPSR.

    Science.gov (United States)

    Wei, Jin-Yu; Zhao, Xiao-Yu; Sun, Xue-Shan

    2017-05-08

    The reasonable evaluation of enterprise energy efficiency is important work in the effort to reduce energy consumption. In this paper, an effective energy efficiency evaluation index system is proposed based on DPSR (Driving forces-Pressure-State-Response), with consideration of the actual situation of enterprises. This index system, which covers multi-dimensional indexes of enterprise energy efficiency, can reveal the complete causal chain: the "driving forces" and "pressure" behind the enterprise energy efficiency "state" caused by the internal and external environment, and the ultimate enterprise energy-saving "response" measures. Furthermore, the ANP (Analytic Network Process) and a cloud model are used to calculate the weight of each index and to evaluate the energy efficiency level. The analysis of BL Company verifies the feasibility of this index system and provides an effective way to improve energy efficiency.

  1. Applied Distributed Model Predictive Control for Energy Efficient Buildings and Ramp Metering

    Science.gov (United States)

    Koehler, Sarah Muraoka

    Industrial large-scale control problems present an interesting algorithmic design challenge. A number of controllers must cooperate in real-time on a network of embedded hardware with limited computing power in order to maximize system efficiency while respecting constraints and despite communication delays. Model predictive control (MPC) can automatically synthesize a centralized controller which optimizes an objective function subject to a system model, constraints, and predictions of disturbance. Unfortunately, the computations required by model predictive controllers for large-scale systems often limit its industrial implementation only to medium-scale slow processes. Distributed model predictive control (DMPC) enters the picture as a way to decentralize a large-scale model predictive control problem. The main idea of DMPC is to split the computations required by the MPC problem amongst distributed processors that can compute in parallel and communicate iteratively to find a solution. Some popularly proposed solutions are distributed optimization algorithms such as dual decomposition and the alternating direction method of multipliers (ADMM). However, these algorithms ignore two practical challenges: substantial communication delays present in control systems and also problem non-convexity. This thesis presents two novel and practically effective DMPC algorithms. The first DMPC algorithm is based on a primal-dual active-set method which achieves fast convergence, making it suitable for large-scale control applications which have a large communication delay across its communication network. In particular, this algorithm is suited for MPC problems with a quadratic cost, linear dynamics, forecasted demand, and box constraints. We measure the performance of this algorithm and show that it significantly outperforms both dual decomposition and ADMM in the presence of communication delay. 
The second DMPC algorithm is based on an inexact interior point method which is
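The consensus form of ADMM mentioned above can be illustrated on a toy problem: each agent i holds a private quadratic cost (x - a_i)^2, and the agents alternate parallel local updates, a gather/average step, and dual updates until they agree on a common value. A minimal sketch (the data, penalty rho, and iteration count are arbitrary choices, not from the thesis):

```python
import numpy as np

# Local data held by each of 3 agents; the consensus optimum of
# min sum_i 0.5*(x - a_i)^2 is simply the mean of the a_i.
a = np.array([1.0, 4.0, 7.0])
rho = 1.0
x = np.zeros_like(a)  # local copies of the decision variable
u = np.zeros_like(a)  # scaled dual variables
z = 0.0               # consensus variable

for _ in range(100):
    x = (a + rho * (z - u)) / (1.0 + rho)  # local x-updates (run in parallel)
    z = np.mean(x + u)                     # gather/average step (one round of communication)
    u = u + x - z                          # dual updates

print(f"consensus value: {z:.4f}")  # converges to mean(a) = 4.0
```

Each iteration of the z-update is one communication round, which is exactly why the thesis's concern with communication delay matters: methods needing many rounds degrade badly on slow networks.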

  2. Improving actuation efficiency through variable recruitment hydraulic McKibben muscles: modeling, orderly recruitment control, and experiments.

    Science.gov (United States)

    Meller, Michael; Chipka, Jordan; Volkov, Alexander; Bryant, Matthew; Garcia, Ephrahim

    2016-11-03

    Hydraulic control systems have become increasingly popular as the means of actuation for human-scale legged robots and assistive devices. One of the biggest limitations to these systems is their run time untethered from a power source. One way to increase endurance is by improving actuation efficiency. We investigate reducing servovalve throttling losses by using a selective recruitment artificial muscle bundle comprised of three motor units. Each motor unit is made up of a pair of hydraulic McKibben muscles connected to one servovalve. The pressure and recruitment state of the artificial muscle bundle can be adjusted to match the load in an efficient manner, much like the firing rate and total number of recruited motor units is adjusted in skeletal muscle. A volume-based effective initial braid angle is used in the model of each recruitment level. This semi-empirical model is utilized to predict the efficiency gains of the proposed variable recruitment actuation scheme versus a throttling-only approach. A real-time orderly recruitment controller with pressure-based thresholds is developed. This controller is used to experimentally validate the model-predicted efficiency gains of recruitment on a robot arm. The results show that utilizing variable recruitment allows for much higher efficiencies over a broader operating envelope.
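The orderly recruitment logic can be sketched abstractly: engage the fewest motor units whose combined capacity covers the load, adding a unit once the active set approaches a threshold fraction of its capacity (analogous to the pressure-based thresholds in the paper). The unit forces and threshold below are illustrative, not the experimental values:

```python
def recruit(load, unit_max_force, n_units=3, threshold=0.9):
    """Orderly recruitment sketch: return the smallest number of motor units
    whose combined capacity, derated by `threshold`, can carry the load."""
    for n in range(1, n_units + 1):
        if load <= threshold * n * unit_max_force:
            return n
    return n_units  # saturated: all units recruited

# Hypothetical bundle: 3 motor units, 100 N capacity per unit
print([recruit(f, 100.0) for f in (50.0, 150.0, 280.0)])  # [1, 2, 3]
```

Running lightly loaded units near their rated pressure, instead of heavily throttling a fully recruited bundle, is what reduces the servovalve throttling losses described above.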

  3. Guidelines for developing efficient thermal conduction and storage models within building energy simulations

    International Nuclear Information System (INIS)

    Hillary, Jason; Walsh, Ed; Shah, Amip; Zhou, Rongliang; Walsh, Pat

    2017-01-01

    Improving building energy efficiency is of paramount importance due to the large proportion of energy consumed by thermal operations. Consequently, simulating a building's environment has gained popularity for assessing thermal comfort and design. The extended timeframes and large physical scales involved necessitate compact modelling approaches. The accuracy of such simulations is of chief concern, yet there is little guidance offered on achieving accurate solutions whilst mitigating prohibitive computational costs. Therefore, the present study addresses this deficit by providing clear guidance on discretisation levels required for achieving accurate but computationally inexpensive models. This is achieved by comparing numerical models of varying discretisation levels to benchmark analytical solutions with prediction accuracy assessed and reported in terms of governing dimensionless parameters, Biot and Fourier numbers, to ensure generality of findings. Furthermore, spatial and temporal discretisation errors are separated and assessed independently. Contour plots are presented to intuitively determine the optimal discretisation levels and time-steps required to achieve accurate thermal response predictions. Simulations derived from these contour plots were tested against various building conditions with excellent agreement observed throughout. Additionally, various scenarios are highlighted where the classical single lumped capacitance model can be applied for Biot numbers much greater than 0.1 without reducing accuracy. - Highlights: • Addressing the problems of inadequate discretisation within building energy models. • Accuracy of numerical models assessed against analytical solutions. • Fourier and Biot numbers used to provide generality of results for any material. • Contour plots offer intuitive way to interpret results for manual discretisation. • Results show proposed technique promising for automation of discretisation process.

  4. Higher-fidelity yet efficient modeling of radiation energy transport through three-dimensional clouds

    International Nuclear Information System (INIS)

    Hall, M.L.; Davis, A.B.

    2005-01-01

    Accurate modeling of radiative energy transport through cloudy atmospheres is necessary for both climate modeling with GCMs (Global Climate Models) and remote sensing. Previous modeling efforts have taken advantage of extreme aspect ratios (cells that are very wide horizontally) by assuming a 1-D treatment vertically - the Independent Column Approximation (ICA). Recent attempts to resolve radiation transport through the clouds have drastically changed the aspect ratios of the cells, moving them closer to unity, such that the ICA model is no longer valid. We aim to provide a higher-fidelity atmospheric radiation transport model which increases accuracy while maintaining efficiency. To that end, this paper describes the development of an efficient 3-D-capable radiation code that can be easily integrated into cloud resolving models as an alternative to the resident 1-D model. Applications to test cases from the Intercomparison of 3-D Radiation Codes (I3RC) protocol are shown

  5. Practical Validation of Economic Efficiency Modelling Method for Multi-Boiler Heating System

    Directory of Open Access Journals (Sweden)

    Aleksejs Jurenoks

    2017-12-01

    In up-to-date conditions, information technology is frequently associated with the modelling process, using computer technology as well as information networks. Statistical modelling is one of the most widespread methods for the research of economic systems. The selection of methods for modelling economic systems depends on a great number of conditions of the system being researched. Modelling is frequently associated with the factor of uncertainty (or risk), whose description goes outside the confines of traditional statistical modelling, which in turn complicates the modelling process. This article describes the modelling process for assessing the economic efficiency of a multi-boiler adaptive heating system in real time, which allows for dynamic change in the operation scenarios of system service installations while enhancing the economic efficiency of the system in consideration.

  6. Efficient modeling of interconnects and capacitive discontinuities in high-speed digital circuits. Thesis

    Science.gov (United States)

    Oh, K. S.; Schutt-Aine, J.

    1995-01-01

    Modeling of interconnects and associated discontinuities has gained considerable interest over the last decade with the advances in high-speed digital circuits, although the theoretical bases for analyzing these structures were well established as early as the 1960s. Ongoing research is focused on devising methods which can be applied to more general geometries than those considered in earlier days while, at the same time, improving the computational efficiency and accuracy of these methods. In this thesis, numerically efficient methods to compute the transmission line parameters of a multiconductor system and the equivalent capacitances of various strip discontinuities are presented based on the quasi-static approximation. The presented techniques are applicable to conductors embedded in an arbitrary number of dielectric layers, with two possible locations of ground planes at the top and bottom of the dielectric layers. The cross-sections of the conductors can be arbitrary as long as they can be described with polygons. An integral equation approach in conjunction with the collocation method is used. A closed-form Green's function is derived based on weighted real images, thus avoiding the nested infinite summations of the exact Green's function; the closed-form Green's function is therefore numerically more efficient than the exact one. All elements of the moment matrix are computed using closed-form formulas. Various numerical examples are considered to verify the presented methods, and a comparison of the computed results with other published results shows good agreement.

  7. How efficiently do corn- and soybean-based cropping systems use water? A systems modeling analysis.

    Science.gov (United States)

    Dietzel, Ranae; Liebman, Matt; Ewing, Robert; Helmers, Matt; Horton, Robert; Jarchow, Meghann; Archontoulis, Sotirios

    2016-02-01

    Agricultural systems are being challenged to decrease water use and increase production while climate becomes more variable and the world's population grows. Low water use efficiency is traditionally characterized by high water use relative to low grain production and usually occurs under dry conditions. However, when a cropping system fails to take advantage of available water during wet conditions, this is also an inefficiency and is often detrimental to the environment. Here, we provide a systems-level definition of water use efficiency (sWUE) that addresses both production and environmental quality goals through incorporating all major system water losses (evapotranspiration, drainage, and runoff). We extensively calibrated and tested the Agricultural Production Systems sIMulator (APSIM) using 6 years of continuous crop and soil measurements in corn- and soybean-based cropping systems in central Iowa, USA. We then used the model to determine water use, loss, and grain production in each system and calculated sWUE in years that experienced drought, flood, or historically average precipitation. Systems water use efficiency was found to be greatest during years with average precipitation. Simulation analysis using 28 years of historical precipitation data, plus the same dataset with ± 15% variation in daily precipitation, showed that in this region, 430 mm of seasonal (planting to harvesting) rainfall resulted in the optimum sWUE for corn, and 317 mm for soybean. Above these precipitation levels, the corn and soybean yields did not increase further, but the water loss from the system via runoff and drainage increased substantially, leading to a high likelihood of soil, nutrient, and pesticide movement from the field to waterways. As the Midwestern United States is predicted to experience more frequent drought and flood, inefficiency of cropping systems water use will also increase. This work provides a framework to concurrently evaluate production and
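The systems-level definition can be sketched as grain output per unit of total system water loss, rather than per unit evapotranspiration alone. The exact formulation in the paper may differ, and the numbers below are hypothetical:

```python
def systems_wue(grain_yield, evapotranspiration, drainage, runoff):
    """Systems-level water use efficiency (sWUE) sketch: grain produced per
    unit of total water loss (ET + drainage + runoff). Counting drainage and
    runoff penalizes a system that fails to use available water in wet years."""
    total_loss = evapotranspiration + drainage + runoff
    return grain_yield / total_loss

# Hypothetical seasons (yield in kg/ha, water terms in mm)
wet = systems_wue(10000.0, 500.0, 200.0, 100.0)  # large drainage/runoff losses
avg = systems_wue(10000.0, 500.0, 50.0, 20.0)    # losses mostly productive ET
print(f"wet year: {wet:.1f} kg/ha per mm, average year: {avg:.1f}")
```

With identical yields, the wet year scores lower because more of its water left the field unproductively, which is precisely the environmental inefficiency the abstract highlights.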

  8. Marker encoded fringe projection profilometry for efficient 3D model acquisition.

    Science.gov (United States)

    Budianto, B; Lun, P K D; Hsung, Tai-Chiu

    2014-11-01

    This paper presents a novel marker-encoded fringe projection profilometry (FPP) scheme for efficient 3-dimensional (3D) model acquisition. Traditional FPP schemes can introduce large errors into the reconstructed 3D model when the target object has an abruptly changing height profile. In the proposed scheme, markers are encoded in the projected fringe pattern to resolve the ambiguities that this problem causes in the fringe images. Using the analytic complex wavelet transform, the marker cue information can be extracted from the fringe image and used to restore the order of the fringes. A series of simulations and experiments has been carried out to verify the proposed scheme. They show that the proposed method can greatly improve the accuracy over traditional FPP schemes when reconstructing the 3D model of objects with abruptly changing height profiles. Since the scheme works directly in our recently proposed complex wavelet FPP framework, it retains that framework's suitability for real-time applications with color objects.

  9. Efficient parallel implementation of active appearance model fitting algorithm on GPU.

    Science.gov (United States)

    Wang, Jinwei; Ma, Xirong; Zhu, Yuanping; Sun, Jizhou

    2014-01-01

    The active appearance model (AAM) is one of the most powerful model-based object detecting and tracking methods which has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine grain parallelism in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on the Nvidia's GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experiment results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures.

  11. A mathematical model of capacious and efficient memory that survives trauma

    Science.gov (United States)

    Srivastava, Vipin; Edwards, S. F.

    2004-02-01

    The brain's memory system can store without any apparent constraint, it recalls stored information efficiently, and it is robust against lesion. Existing models of memory do not fully account for all of these features. The model due to Hopfield (Proc. Natl. Acad. Sci. USA 79 (1982) 2554), based on Hebbian learning (The Organization of Behaviour, Wiley, New York, 1949), shows an early saturation of memory, with retrieval becoming slow and unreliable before collapsing at this limit. Our hypothesis (Physica A 276 (2000) 352) that the brain might store orthogonalized information improved the situation in many ways, but was still constrained in that the information to be stored had to be linearly independent, i.e., signals that could be expressed as linear combinations of others had to be excluded. Here we present a model that attempts to address the problem comprehensively against the background of the above attributes of the brain. We demonstrate that if the brain devolves incoming signals in analogy with Fourier analysis, the noise created by the interference of stored signals diminishes systematically (which yields prompt retrieval) and, most importantly, the memory can withstand partial damage to the brain.
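The benefit of storing orthogonalized rather than raw patterns can be shown in a toy numpy sketch: correlated patterns stored Hebbian-style interfere with each other (cross-talk noise), while orthogonalized versions do not. Gram-Schmidt stands in here for the "devolution" step; the pattern size, count, and random seed are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Three correlated +-1 patterns: raw Hebbian storage suffers cross-talk noise
patterns = np.sign(rng.normal(size=(3, 64)) + 0.5)  # bias makes them correlated

def gram_schmidt(vs):
    """Orthogonalize the stored patterns so their mutual interference vanishes."""
    basis = []
    for v in vs:
        w = v - sum((v @ b) * b for b in basis)  # remove components along earlier patterns
        basis.append(w / np.linalg.norm(w))
    return np.array(basis)

ortho = gram_schmidt(patterns.astype(float))
off_diag = ~np.eye(3, dtype=bool)
raw_crosstalk = np.abs(patterns @ patterns.T / 64.0)[off_diag].max()
ortho_crosstalk = np.abs(ortho @ ortho.T)[off_diag].max()
print(f"max raw cross-talk:   {raw_crosstalk:.3f}")
print(f"max ortho cross-talk: {ortho_crosstalk:.2e}")  # numerically ~0
```

The catch noted in the abstract is that Gram-Schmidt requires linear independence; a Fourier-like devolution of incoming signals is the paper's proposed way around that restriction.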

  12. Energy efficiency and use of natural gas show a changed view of energy consumption in the USA in 2011; Energie efficientie en gasgebruik leidden tot een veranderd beeld van energieverbuik in de VS in 2011

    Energy Technology Data Exchange (ETDEWEB)

    Louzada, K.

    2012-11-15

    The Lawrence Livermore National Laboratory (LLNL) recently published estimates of energy consumption in the USA. Total energy consumption in the USA has decreased by almost 2% since 2008. This decrease is attributed to increasing energy efficiency in households and transport. The increase in renewable energy shows how current policy has promoted the growth of these technologies. In addition, driven by the rise of shale gas extraction, the use of natural gas increased significantly while the use of coal decreased.

  13. Show and Tell: Video Modeling and Instruction Without Feedback Improves Performance but Is Not Sufficient for Retention of a Complex Voice Motor Skill.

    Science.gov (United States)

    Look, Clarisse; McCabe, Patricia; Heard, Robert; Madill, Catherine J

    2018-02-02

    Modeling and instruction are frequent components of both traditional and technology-assisted voice therapy. This study investigated the value of video modeling and instruction in the early acquisition and short-term retention of a complex voice task without external feedback. Thirty participants were randomized to two conditions and trained to produce a vocal siren over 40 trials. One group received a model and verbal instructions; the other group received a model only. Sirens were analyzed for phonation time, vocal intensity, cepstral peak prominence, peak-to-peak time, and root-mean-square error at five time points. The model-and-instruction group showed significant improvement on more outcome measures than the model-only group. There was an interaction effect for vocal intensity, which showed that instructions facilitated greater improvement when they were first introduced. However, neither group reproduced the model's siren performance across all parameters or retained the skill 1 day later. Providing verbal instruction with a model therefore appears more beneficial than providing a model only in the prepractice phase of acquiring a complex voice skill. Improved performance was observed; however, the higher level of performance was not retained after 40 trials in either condition. Other prepractice variables may need to be considered. Findings have implications for traditional and technology-assisted voice therapy. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  14. Rapid Optimization of External Quantum Efficiency of Thin Film Solar Cells Using Surrogate Modeling of Absorptivity.

    Science.gov (United States)

    Kaya, Mine; Hajimirza, Shima

    2018-05-25

    This paper uses surrogate modeling for very fast design of thin film solar cells with improved solar-to-electricity conversion efficiency. We demonstrate that the wavelength-specific optical absorptivity of a thin film multi-layered amorphous-silicon-based solar cell can be modeled accurately with Neural Networks and can be efficiently approximated as a function of cell geometry and wavelength. Consequently, the external quantum efficiency can be computed by averaging surrogate absorption and carrier recombination contributions over the entire irradiance spectrum in an efficient way. Using this framework, we optimize a multi-layer structure consisting of ITO front coating, metallic back-reflector and oxide layers for achieving maximum efficiency. Our required computation time for an entire model fitting and optimization is 5 to 20 times less than the best previous optimization results based on direct Finite Difference Time Domain (FDTD) simulations, therefore proving the value of surrogate modeling. The resulting optimization solution suggests at least 50% improvement in the external quantum efficiency compared to bare silicon, and 25% improvement compared to a random design.
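    The surrogate idea can be sketched in a few lines: fit a cheap approximator to sparse evaluations of the wavelength-dependent absorptivity, then average it over the spectrum. The sketch below uses a synthetic absorptivity curve and a polynomial surrogate in place of the paper's neural network; all values are illustrative:

```python
import numpy as np

# Synthetic absorptivity-vs-wavelength curve (illustrative only; in the
# paper absorptivity depends on cell geometry and wavelength and is
# modeled with a neural network -- a polynomial stands in here).
wl = np.linspace(300.0, 1100.0, 201)          # wavelength grid (nm)
def absorptivity(w):                          # pretend this is expensive
    return 0.9 * np.exp(-((w - 600.0) / 250.0) ** 2)

# Fit the surrogate to a sparse set of "expensive" evaluations, using a
# rescaled variable for a well-conditioned polynomial fit.
t = (wl - 700.0) / 400.0
coeffs = np.polyfit(t[::20], absorptivity(wl[::20]), deg=6)
surrogate = np.polyval(coeffs, t)             # dense query: nearly free

# Spectrum-averaged absorptivity (uniform grid, so a mean suffices).
# Inside an optimizer, the cheap surrogate replaces the full simulation.
avg_true = absorptivity(wl).mean()
avg_surr = surrogate.mean()
print(f"spectrum average: true={avg_true:.3f} surrogate={avg_surr:.3f}")
```

The speedup in the paper comes from exactly this substitution: the optimizer queries the fitted surrogate thousands of times instead of re-running an FDTD simulation per design candidate.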

  15. Is the Langevin phase equation an efficient model for oscillating neurons?

    Science.gov (United States)

    Ota, Keisuke; Tsunoda, Takamasa; Omori, Toshiaki; Watanabe, Shigeo; Miyakawa, Hiroyoshi; Okada, Masato; Aonishi, Toru

    2009-12-01

    The Langevin phase model is an important canonical model for capturing coherent oscillations of neural populations. However, little attention has been given to verifying its applicability. In this paper, we demonstrate that the Langevin phase equation is an efficient model for neural oscillators by using the machine learning method in two steps: (a) Learning of the Langevin phase model. We estimated the parameters of the Langevin phase equation, i.e., a phase response curve and the intensity of white noise from physiological data measured in the hippocampal CA1 pyramidal neurons. (b) Test of the estimated model. We verified whether a Fokker-Planck equation derived from the Langevin phase equation with the estimated parameters could capture the stochastic oscillatory behavior of the same neurons disturbed by periodic perturbations. The estimated model could predict the neural behavior, so we can say that the Langevin phase equation is an efficient model for oscillating neurons.
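    A minimal simulation of a Langevin phase equation of the form dφ = ω dt + Z(φ)σ dW, with Z the phase response curve, can be written with an Euler-Maruyama step. The sinusoidal Z and all parameters below are illustrative assumptions, not the values estimated from the CA1 data:

```python
import numpy as np

rng = np.random.default_rng(1)

omega = 2.0 * np.pi * 5.0        # natural frequency, rad/s (illustrative)
sigma = 0.5                      # noise intensity (illustrative)
def Z(phi):                      # assumed sinusoidal phase response curve
    return -np.sin(phi)

dt, steps = 1e-3, 5000           # 5 s of simulated time
phi = 0.0
for _ in range(steps):
    # Euler-Maruyama step for d(phi) = omega dt + Z(phi) sigma dW
    phi += omega * dt + Z(phi) * sigma * np.sqrt(dt) * rng.standard_normal()

cycles = phi / (2.0 * np.pi)
print(f"phase advanced ~{cycles:.1f} cycles in {steps * dt:.0f} s")
```

Parameter estimation in the paper works in the opposite direction: given measured phase trajectories like this one, it infers Z and the noise intensity, then checks the fit through the associated Fokker-Planck equation.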

  16. Is the Langevin phase equation an efficient model for oscillating neurons?

    International Nuclear Information System (INIS)

    Ota, Keisuke; Tsunoda, Takamasa; Aonishi, Toru; Omori, Toshiaki; Okada, Masato; Watanabe, Shigeo; Miyakawa, Hiroyoshi

    2009-01-01

    The Langevin phase model is an important canonical model for capturing coherent oscillations of neural populations. However, little attention has been given to verifying its applicability. In this paper, we demonstrate that the Langevin phase equation is an efficient model for neural oscillators by using the machine learning method in two steps: (a) Learning of the Langevin phase model. We estimated the parameters of the Langevin phase equation, i.e., a phase response curve and the intensity of white noise from physiological data measured in the hippocampal CA1 pyramidal neurons. (b) Test of the estimated model. We verified whether a Fokker-Planck equation derived from the Langevin phase equation with the estimated parameters could capture the stochastic oscillatory behavior of the same neurons disturbed by periodic perturbations. The estimated model could predict the neural behavior, so we can say that the Langevin phase equation is an efficient model for oscillating neurons.

  17. A Bioeconomic Foundation for the Nutrition-based Efficiency Wage Model

    DEFF Research Database (Denmark)

    Dalgaard, Carl-Johan Lars; Strulik, Holger

    By extending the model with respect to heterogeneity in worker body size and a physiologically founded impact of body size on productivity, we demonstrate that the nutrition-based efficiency wage model is compatible with the empirical regularity that taller workers simultaneously earn higher wages and are less...

  18. Efficient predictive model-based and fuzzy control for green urban mobility

    NARCIS (Netherlands)

    Jamshidnejad, A.

    2017-01-01

    In this thesis, we develop efficient predictive model-based control approaches, including model-predictive control (MPC) and model-based fuzzy control, for application in urban traffic networks with the aim of reducing a combination of the total time spent by the vehicles within the network and the

  19. An Efficient Constraint Boundary Sampling Method for Sequential RBDO Using Kriging Surrogate Model

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jihoon; Jang, Junyong; Kim, Shinyu; Lee, Tae Hee [Hanyang Univ., Seoul (Korea, Republic of); Cho, Sugil; Kim, Hyung Woo; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Busan (Korea, Republic of)

    2016-06-15

    Reliability-based design optimization (RBDO) requires a high computational cost owing to its reliability analysis. A surrogate model is introduced to reduce this computational cost. In surrogate-model-based RBDO, the accuracy of the reliability estimate depends on the accuracy of the surrogate model of the constraint boundaries. In earlier research, constraint boundary sampling (CBS) was proposed to accurately approximate constraint boundaries by locating sample points on them. However, because CBS uses sample points on all constraint boundaries, it creates superfluous sample points. In this paper, efficient constraint boundary sampling (ECBS) is proposed to enhance the efficiency of CBS. ECBS uses the statistical information of a kriging surrogate model to locate sample points on or near the RBDO solution. The efficiency of ECBS is verified by mathematical examples.

  20. Evaluating the Efficiency of a Multi-core Aware Multi-objective Optimization Tool for Calibrating the SWAT Model

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, X. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Izaurralde, R. C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Zong, Z. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Zhao, K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Thomson, A. M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2012-08-20

    The efficiency of calibrating physically based complex hydrologic models is a major concern in the application of those models to understand and manage natural and human activities that affect watershed systems. In this study, we developed a multi-core aware multi-objective evolutionary optimization algorithm (MAMEOA) to improve the efficiency of calibrating a widely used watershed model, the Soil and Water Assessment Tool (SWAT). The test results show that MAMEOA can save about 1-9%, 26-51%, and 39-56% of the time consumed by calibrating SWAT, as compared with the sequential method, by using dual-core, quad-core, and eight-core machines, respectively. Potential and limitations of MAMEOA for calibrating SWAT are discussed. MAMEOA is open source software.

  1. Design and modeling of an SJ infrared solar cell approaching upper limit of theoretical efficiency

    Science.gov (United States)

    Sahoo, G. S.; Mishra, G. P.

    2018-01-01

    Recent trends in photovoltaics focus on approaching the theoretical conversion efficiency limit in order to make cells more cost effective. Achieving this requires moving beyond silicon toward III-V compound semiconductors, which offer advantages such as bandgap engineering through alloying. In this work we have used the low-bandgap material GaSb and designed a single junction (SJ) cell with a conversion efficiency of 32.98%. The SILVACO ATLAS TCAD simulator has been used to simulate the proposed model using both the Ray Tracing and Transfer Matrix Methods (under 1 sun and 1000 sun of the AM1.5G spectrum). Detailed analyses of the photogeneration rate, spectral response, developed potential, external quantum efficiency (EQE), internal quantum efficiency (IQE), short-circuit current density (JSC), open-circuit voltage (VOC), fill factor (FF), and conversion efficiency (η) are presented. The obtained results are compared with previously reported SJ solar cells.

  2. Efficient Bayesian estimates for discrimination among topologically different systems biology models.

    Science.gov (United States)

    Hagen, David R; Tidor, Bruce

    2015-02-01

    A major effort in systems biology is the development of mathematical models that describe complex biological systems at multiple scales and levels of abstraction. Determining the topology-the set of interactions-of a biological system from observations of the system's behavior is an important and difficult problem. Here we present and demonstrate new methodology for efficiently computing the probability distribution over a set of topologies based on consistency with existing measurements. Key features of the new approach include derivation in a Bayesian framework, incorporation of prior probability distributions of topologies and parameters, and use of an analytically integrable linearization based on the Fisher information matrix that is responsible for large gains in efficiency. The new method was demonstrated on a collection of four biological topologies representing a kinase and phosphatase that operate in opposition to each other with either processive or distributive kinetics, giving 8-12 parameters for each topology. The linearization produced an approximate result very rapidly (CPU minutes) that was highly accurate on its own, as compared to a Monte Carlo method guaranteed to converge to the correct answer but at greater cost (CPU weeks). The Monte Carlo method developed and applied here used the linearization method as a starting point and importance sampling to approach the Bayesian answer in acceptable time. Other inexpensive methods to estimate probabilities produced poor approximations for this system, with likelihood estimation showing its well-known bias toward topologies with more parameters and the Akaike and Schwarz Information Criteria showing a strong bias toward topologies with fewer parameters. These results suggest that this linear approximation may be an effective compromise, providing an answer whose accuracy is near the true Bayesian answer, but at a cost near the common heuristics.
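    The reason a linearization can approach the exact Bayesian answer is easiest to see in a toy linear-Gaussian case, where the Gaussian (Laplace-style) evidence integral is exact and a brute-force Monte Carlo average over the prior converges to the same value. A sketch under those simplifying assumptions, not the paper's kinase/phosphatase topologies:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear-Gaussian model (illustrative, not the paper's topologies):
# y = a*x + noise with known noise variance, prior a ~ N(0, 1).
x = np.linspace(0.0, 1.0, 20)
y = 0.8 * x + 0.1 * rng.standard_normal(x.size)
s2, n = 0.1 ** 2, x.size

def log_lik(a):
    r = y - a * x
    return -0.5 * np.sum(r * r) / s2 - 0.5 * n * np.log(2.0 * np.pi * s2)

# Brute-force Monte Carlo evidence: average the likelihood over the prior
# (the expensive route that the paper's linearization replaces).
a_samples = rng.standard_normal(100_000)
r = y[None, :] - a_samples[:, None] * x[None, :]
lls = -0.5 * np.sum(r * r, axis=1) / s2 - 0.5 * n * np.log(2.0 * np.pi * s2)
m = lls.max()
mc_log_evidence = m + np.log(np.mean(np.exp(lls - m)))

# Gaussian "linearized" evidence: for a linear model with Gaussian prior
# and noise the Laplace-style integral is exact, which is the intuition
# for why a Fisher-information linearization can be highly accurate.
prec = np.sum(x * x) / s2 + 1.0          # posterior precision of a
mean = (np.sum(x * y) / s2) / prec       # posterior mean of a
lin_log_evidence = log_lik(mean) - 0.5 * mean ** 2 - 0.5 * np.log(prec)
print(f"MC={mc_log_evidence:.2f}  linearized={lin_log_evidence:.2f}")
```

For the nonlinear kinetic models in the paper the linearization is only approximate, which is why the authors also run an importance-sampling Monte Carlo method seeded at the linearized answer.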

  3. Improving efficiency assessments using additive data envelopment analysis models: an application to contrasting dairy farming systems

    Directory of Open Access Journals (Sweden)

    Andreas Diomedes Soteriades

    2015-10-01

    Full Text Available Applying holistic indicators to assess dairy farm efficiency is essential for sustainable milk production. Data Envelopment Analysis (DEA) has been instrumental for the calculation of such indicators. However, ‘additive’ DEA models have rarely been used in dairy research. This study presented an additive model known as the slacks-based measure (SBM) of efficiency and its advantages over the DEA models used in most past dairy studies. First, SBM incorporates undesirable outputs as actual outputs of the production process. Second, it identifies the main production factors causing inefficiency. Third, these factors can be ‘priced’ to estimate the cost of inefficiency. The value of SBM for efficiency analyses was demonstrated with a comparison of four contrasting dairy management systems in terms of technical and environmental efficiency. These systems were part of a multiple-year breeding and feeding systems experiment (two genetic lines: select vs. control; and two feeding strategies: high forage vs. low forage, where the latter involved a higher proportion of concentrated feeds), in which detailed data were collected to strict protocols. The select genetic herd was more technically and environmentally efficient than the control herd, regardless of feeding strategy. However, the efficiency performance of the select herd was more volatile from year to year than that of the control herd. Overall, technical and environmental efficiency were strongly and positively correlated, suggesting that when technically efficient, the four systems were also efficient in terms of undesirable output reduction. Detailed data such as those used in this study are increasingly becoming available for commercial herds through precision farming. Therefore, the methods presented in this study are growing in importance.

  4. A Sharable and Efficient Metadata Model for Heterogeneous Earth Observation Data Retrieval in Multi-Scale Flood Mapping

    Directory of Open Access Journals (Sweden)

    Nengcheng Chen

    2015-07-01

    Full Text Available Remote sensing plays an important role in flood mapping and is helping advance flood monitoring and management. Multi-scale flood mapping is necessary for dividing floods into several stages for comprehensive management. However, existing data systems are typically heterogeneous owing to the use of different access protocols and archiving metadata models. In this paper, we proposed a sharable and efficient metadata model (APEOPM) for constructing an Earth observation (EO) data system to retrieve remote sensing data for flood mapping. The proposed model contains two sub-models, an access protocol model and an enhanced encoding model. The access protocol model helps unify heterogeneous access protocols and can achieve intelligent access via a semantic enhancement method. The enhanced encoding model helps unify heterogeneous archiving metadata models. Wuhan city, one of the most important cities in the Yangtze River Economic Belt in China, is selected as a study area for testing the retrieval of heterogeneous EO data and flood mapping. The past torrential rain period from 25 March 2015 to 10 April 2015 is chosen as the temporal range in this study. To aid in comprehensive management, mapping is conducted at different spatial and temporal scales. In addition, the efficiency of data retrieval is analyzed, and validation between the flood maps and actual precipitation was conducted. The results show that the flood maps coincided with the actual precipitation.

  5. Worm gear efficiency model considering misalignment in electric power steering systems

    Directory of Open Access Journals (Sweden)

    S. H. Kim

    2018-05-01

    Full Text Available This study proposes a worm gear efficiency model considering misalignment in electric power steering systems. A worm gear is used in Column-type Electric Power Steering (C-EPS) systems, and an Anti-Rattle Spring (ARS) is employed in C-EPS systems in order to prevent rattling when the vehicle travels on a bumpy road. The ARS prevents rattling by applying a preload to one end of the worm shaft, but it also generates undesirable friction by causing misalignment of the worm shaft. In order to propose the worm gear efficiency model considering misalignment, geometrical and tribological analyses were performed in this study. For the geometrical analysis, the normal load on the gear teeth was calculated from the output torque, pitch diameter of the worm wheel, lead angle, and normal pressure angle, and this normal load was converted to normal pressure at the contact point. Contact points between the tooth flanks of the worm and worm wheel were obtained by mathematically analyzing the geometry, and Hertz's theory was employed to calculate the contact area at each contact point. Finally, the misalignment caused by the ARS was also incorporated into the geometry. Friction coefficients between the tooth flanks were measured under all test conditions with a pin-on-disk tribometer. To validate the model, a worm gear was prepared and its efficiency was predicted by the model; a worm gear efficiency measurement system was then set up, the efficiency was measured, and the results were compared with the predictions. The model considering misalignment gives more accurate results than the model without misalignment.
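    For orientation, the classical no-misalignment baseline that such a model refines is the textbook worm-drive efficiency η = tan λ / tan(λ + ρ), with friction angle ρ = arctan(μ / cos α_n). A small sketch with illustrative lead angle and friction coefficients, where the ARS-induced misalignment is represented only as a higher effective μ:

```python
import math

def worm_gear_efficiency(lead_angle_deg, mu, pressure_angle_deg=20.0):
    """Textbook efficiency of a worm driving its wheel (no misalignment):
    eta = tan(lam) / tan(lam + rho), rho = atan(mu / cos(alpha_n))."""
    lam = math.radians(lead_angle_deg)
    rho = math.atan(mu / math.cos(math.radians(pressure_angle_deg)))
    return math.tan(lam) / math.tan(lam + rho)

# Higher effective friction (e.g. from preload-induced misalignment)
# lowers the efficiency:
eta_low_mu = worm_gear_efficiency(10.0, 0.03)
eta_high_mu = worm_gear_efficiency(10.0, 0.08)
print(f"mu=0.03: {eta_low_mu:.3f}   mu=0.08: {eta_high_mu:.3f}")
```

The paper's contribution is to predict how misalignment changes the contact geometry and hence the effective friction, rather than treating μ as a single hand-tuned number as in this sketch.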

  6. Studies of heating efficiencies and models of RF-sheaths for the JET antennae

    International Nuclear Information System (INIS)

    Hedin, J.

    1996-02-01

    A theoretical model for the appearance of RF-sheaths is developed to see whether this can explain the lower heating efficiencies expected for the new A2 antennae at JET. The equations are solved numerically. A general method for evaluating the experimental heating-efficiency data of the new antennae at JET is developed and applied to discharges with and without the bumpy limiter on the D antennae. 8 refs, 26 figs

  7. Water Use Efficiency and Its Influencing Factors in China: Based on the Data Envelopment Analysis (DEA—Tobit Model

    Directory of Open Access Journals (Sweden)

    Shuqiao Wang

    2018-06-01

    Full Text Available Water resources are important and irreplaceable natural and economic resources. Achieving a balance between economic prosperity and the protection of water resource environments is a major issue in China. This article develops a data envelopment analysis (DEA) approach with undesirable outputs, using Seiford's linear transformation method, to estimate water use efficiencies for 30 provinces in China from 2008 to 2016, and then analyzes the influencing factors using a DEA-Tobit model. The findings show that the overall water use efficiency of the measured Chinese provinces, when considering sewage emissions as the undesirable output, is 0.582. Thus, most regions still need improvement. Provinces with the highest water use efficiency are located in economically developed Eastern China. The spatial pattern of water use efficiency in China is consistent with the general pattern of regional economic development. This study implies that factors such as export dependence, technical progress, and educational attainment have a positive influence on water use efficiency. Further, while industrial structure has had a negative impact, government intervention has had little impact on water use efficiency. These research results provide a scientific basis for government planning of water resource development and may be helpful in improving regional sustainable development.

  8. An analytical model for droplet separation in vane separators and measurements of grade efficiency and pressure drop

    International Nuclear Information System (INIS)

    Koopman, Hans K.; Köksoy, Çağatay; Ertunç, Özgür; Lienhart, Hermann; Hedwig, Heinz; Delgado, Antonio

    2014-01-01

    Highlights: • An analytical model for efficiency is extended with additional geometrical features. • A simplified and a novel vane separator design are investigated experimentally. • Experimental results are significantly affected by re-entrainment effects. • Outlet droplet size spectra are accurately predicted by the model. • The improved grade efficiency doubles the pressure drop. - Abstract: This study investigates the predictive power of analytical models for the droplet separation efficiency of vane separators and compares experimental results of two different vane separator geometries. The ability to predict the separation efficiency of vane separators simplifies their design process, especially when analytical research allows the identification of the most important physical and geometrical parameters and can quantify their contribution. In this paper, an extension of a classical analytical model for separation efficiency is proposed that accounts for the contributions provided by straight wall sections. The extension of the analytical model is benchmarked against experiments performed by Leber (2003) on a single stage straight vane separator. The model is in very reasonable agreement with the experimental values. Results from the analytical model are also compared with experiments performed on a vane separator of simplified geometry (VS-1). The experimental separation efficiencies, computed from the measured liquid mass balances, are significantly below the model predictions, which lie arbitrarily close to unity. This difference is attributed to re-entrainment through film detachment from the last stage of the vane separators. After adjustment for re-entrainment effects, by applying a cut-off filter to the outlet droplet size spectra, the experimental and theoretical outlet Sauter mean diameters show very good agreement. A novel vane separator geometry of patented design (VS-2) is also investigated, comparing experimental results with VS-1

  9. Evaluation of the energy efficiency of enzyme fermentation by mechanistic modeling.

    Science.gov (United States)

    Albaek, Mads O; Gernaey, Krist V; Hansen, Morten S; Stocks, Stuart M

    2012-04-01

    Modeling biotechnological processes is key to obtaining increased productivity and efficiency. Particularly crucial to successful modeling of such systems is the coupling of the physical transport phenomena and the biological activity in one model. We have applied a model for the expression of cellulosic enzymes by the filamentous fungus Trichoderma reesei and found excellent agreement with experimental data. The most influential factor was demonstrated to be viscosity and its influence on mass transfer. Not surprisingly, the biological model is also shown to have high influence on the model prediction. At different rates of agitation and aeration as well as headspace pressure, we can predict the energy efficiency of oxygen transfer, a key process parameter for economical production of industrial enzymes. An inverse relationship between the productivity and energy efficiency of the process was found. This modeling approach can be used by manufacturers to evaluate the enzyme fermentation process for a range of different process conditions with regard to energy efficiency. Copyright © 2011 Wiley Periodicals, Inc.

  10. Efficient Measurement of Shape Dissimilarity between 3D Models Using Z-Buffer and Surface Roving Method

    Directory of Open Access Journals (Sweden)

    In Kyu Park

    2002-10-01

    Full Text Available Estimation of the shape dissimilarity between 3D models is a very important problem in both computer vision and graphics for 3D surface reconstruction, modeling, matching, and compression. In this paper, we propose a novel method called surface roving technique to estimate the shape dissimilarity between 3D models. Unlike conventional methods, our surface roving approach exploits a virtual camera and Z-buffer, which is commonly used in 3D graphics. The corresponding points on different 3D models can be easily identified, and also the distance between them is determined efficiently, regardless of the representation types of the 3D models. Moreover, by employing the viewpoint sampling technique, the overall computation can be greatly reduced so that the dissimilarity is obtained rapidly without loss of accuracy. Experimental results show that the proposed algorithm achieves fast and accurate measurement of shape dissimilarity for different types of 3D object models.
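    A simplified stand-in for the surface roving idea is to sample points from both surfaces and take the symmetric mean nearest-neighbor distance; in the paper, the virtual camera and Z-buffer serve to generate and match such surface samples efficiently for arbitrary model representations. A numpy sketch on synthetic spheres (hypothetical data, not the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_sphere(n, radius):
    """Uniform points on a sphere: a stand-in for the surface points that
    a virtual camera / Z-buffer pass would generate."""
    v = rng.standard_normal((n, 3))
    return radius * v / np.linalg.norm(v, axis=1, keepdims=True)

def dissimilarity(a, b):
    """Symmetric mean nearest-neighbor distance between point clouds."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

sphere_a = sample_sphere(500, 1.0)
sphere_b = sample_sphere(500, 1.2)    # 20% larger sphere
score = dissimilarity(sphere_a, sphere_b)
print(f"shape dissimilarity: {score:.3f}")   # ~0.2 plus sampling effects
```

The brute-force pairwise distance here costs O(N²); the Z-buffer correspondence search and viewpoint sampling in the paper are what make the measurement fast for dense models.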

  11. Evaluating Technical Efficiency of Nursing Care Using Data Envelopment Analysis and Multilevel Modeling.

    Science.gov (United States)

    Min, Ari; Park, Chang Gi; Scott, Linda D

    2016-05-23

    Data envelopment analysis (DEA) is an advantageous non-parametric technique for evaluating relative efficiency of performance. This article describes use of DEA to estimate technical efficiency of nursing care and demonstrates the benefits of using multilevel modeling to identify characteristics of efficient facilities in the second stage of analysis. Data were drawn from LTCFocUS.org, a secondary database including nursing home data from the Online Survey Certification and Reporting System and Minimum Data Set. In this example, 2,267 non-hospital-based nursing homes were evaluated. Use of DEA with nurse staffing levels as inputs and quality of care as outputs allowed estimation of the relative technical efficiency of nursing care in these facilities. In the second stage, multilevel modeling was applied to identify organizational factors contributing to technical efficiency. Use of multilevel modeling avoided biased estimation of findings for nested data and provided comprehensive information on differences in technical efficiency among counties and states. © The Author(s) 2016.

  12. Efficient surrogate models for reliability analysis of systems with multiple failure modes

    International Nuclear Information System (INIS)

    Bichon, Barron J.; McFarland, John M.; Mahadevan, Sankaran

    2011-01-01

    Despite many advances in the field of computational reliability analysis, the efficient estimation of the reliability of a system with multiple failure modes remains a persistent challenge. Various sampling and analytical methods are available, but they typically require accepting a tradeoff between accuracy and computational efficiency. In this work, a surrogate-based approach is presented that simultaneously addresses the issues of accuracy, efficiency, and unimportant failure modes. The method is based on the creation of Gaussian process surrogate models that are required to be locally accurate only in the regions of the component limit states that contribute to system failure. This approach to constructing surrogate models is demonstrated to be both an efficient and accurate method for system-level reliability analysis. - Highlights: → Extends efficient global reliability analysis to systems with multiple failure modes. → Constructs locally accurate Gaussian process models of each response. → Highly efficient and accurate method for assessing system reliability. → Effectiveness is demonstrated on several test problems from the literature.

  13. A Cobb Douglas stochastic frontier model on measuring domestic bank efficiency in Malaysia.

    Science.gov (United States)

    Hasan, Md Zobaer; Kamil, Anton Abdulbasah; Mustafa, Adli; Baten, Md Azizul

    2012-01-01

    The banking system plays an important role in the economic development of any country. Domestic banks, the main components of the banking system, have to be efficient; otherwise, they may create obstacles to development in any economy. This study examines the technical efficiency of the Malaysian domestic banks listed on the Kuala Lumpur Stock Exchange (KLSE) over the period 2005-2010. A parametric approach, the Stochastic Frontier Approach (SFA), is used in this analysis. The findings show that Malaysian domestic banks exhibited an average overall efficiency of 94 percent, implying that the sample banks wasted an average of 6 percent of their inputs. Among the banks, RHBCAP is found to be highly efficient with a score of 0.986, and PBBANK is noted to have the lowest efficiency with a score of 0.918. The results also show that the level of efficiency increased during the period of study, and that the technical efficiency effect fluctuated considerably over time.
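    The estimation idea can be sketched on synthetic data: take logs of the Cobb-Douglas form so it becomes linear, fit it, and shift the fit so it acts as a frontier. The sketch below uses corrected OLS as a simplified stand-in for the maximum-likelihood stochastic frontier estimator, with invented data rather than the Malaysian banks:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "banks": output = A * labor^0.6 * capital^0.3 * exp(v - u),
# with two-sided noise v and one-sided inefficiency u >= 0
# (illustrative data only).
n = 200
labor = rng.uniform(1.0, 10.0, n)
capital = rng.uniform(1.0, 10.0, n)
v = 0.05 * rng.standard_normal(n)
u = np.abs(0.2 * rng.standard_normal(n))
log_y = np.log(2.0) + 0.6 * np.log(labor) + 0.3 * np.log(capital) + v - u

# OLS on the log-linear Cobb-Douglas form -- a simplified stand-in for
# the maximum-likelihood stochastic frontier estimator.
X = np.column_stack([np.ones(n), np.log(labor), np.log(capital)])
beta, *_ = np.linalg.lstsq(X, log_y, rcond=None)

# Corrected OLS: shift the fit up to the best observation so it acts as
# a frontier, then read off per-bank technical efficiency.
resid = log_y - X @ beta
efficiency = np.exp(resid - resid.max())
print(f"elasticities ~ {beta[1]:.2f}, {beta[2]:.2f}; "
      f"mean efficiency ~ {efficiency.mean():.2f}")
```

True SFA additionally models the composed error v - u parametrically and estimates it by maximum likelihood, which separates random noise from inefficiency instead of attributing all residual variation to inefficiency as this sketch does.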

  14. Ultrasound elastography: efficient estimation of tissue displacement using an affine transformation model

    Science.gov (United States)

    Hashemi, Hoda Sadat; Boily, Mathieu; Martineau, Paul A.; Rivaz, Hassan

    2017-03-01

    Ultrasound elastography entails imaging the mechanical properties of tissue and is therefore of significant clinical importance. In elastography, two frames of radio-frequency (RF) ultrasound data are obtained while the tissue is undergoing deformation, and the time-delay estimate (TDE) between the two frames is used to infer mechanical properties of the tissue. TDE is a critical step in elastography and is challenging due to noise and signal decorrelation. This paper presents a novel and robust technique for TDE that uses all samples of the RF data simultaneously. We assume tissue deformation can be approximated by an affine transformation, and hence call our method ATME (Affine Transformation Model Elastography). The affine transformation model is utilized to obtain initial estimates of the axial and lateral displacement fields. The affine transformation has only six degrees of freedom (DOF) and, as such, can be efficiently estimated. A nonlinear cost function that incorporates similarity of RF data intensity and prior information of displacement continuity is formulated to fine-tune the initial affine deformation field. Optimization of this function involves searching for the TDE of all samples of the RF data. The optimization problem is converted to a sparse linear system of equations, which can be solved in real time. Results on simulation data are presented for validation. We further collect RF data from in-vivo patellar tendon and medial collateral ligament (MCL), and show that ATME can be used to accurately track tissue displacement.
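    The six-DOF estimate becomes a linear least-squares problem once point correspondences are available: stack [x y 1] rows and solve for a 3×2 affine parameter matrix. A numpy sketch with synthetic correspondences (illustrative values; ATME itself estimates the transform from RF intensity similarity rather than from given correspondences):

```python
import numpy as np

rng = np.random.default_rng(7)

# Ground-truth affine deformation (mild compression + shear) and noisy
# point correspondences -- all values illustrative.
true_A = np.array([[1.02, 0.01],
                   [0.00, 0.97]])
true_t = np.array([0.5, -0.3])
pts = rng.uniform(0.0, 40.0, (100, 2))               # positions (mm)
moved = pts @ true_A.T + true_t + 0.01 * rng.standard_normal((100, 2))

# Six unknowns: solve [x y 1] @ P = [x' y'] for the 3x2 matrix P.
design = np.column_stack([pts, np.ones(len(pts))])
P, *_ = np.linalg.lstsq(design, moved, rcond=None)
est_A, est_t = P[:2].T, P[2]
print("estimated translation:", est_t.round(2))
```

Because only six parameters are fit from many observations, the estimate is robust to noise in individual samples, which is the efficiency argument the abstract makes for the affine initialization.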

  15. Efficient non-negative constrained model-based inversion in optoacoustic tomography

    International Nuclear Information System (INIS)

    Ding, Lu; Luís Deán-Ben, X; Lutzweiler, Christian; Razansky, Daniel; Ntziachristos, Vasilis

    2015-01-01

    The inversion accuracy in optoacoustic tomography depends on a number of parameters, including the number of detectors employed, discrete sampling issues and imperfections of the forward model. These parameters result in ambiguities in the reconstructed image. A common ambiguity is the appearance of negative values, which have no physical meaning since optical absorption can only be greater than or equal to zero. We investigate herein algorithms that impose non-negativity constraints in model-based optoacoustic inversion. Several state-of-the-art non-negative constrained algorithms are analyzed. Furthermore, an algorithm based on the conjugate gradient method is introduced in this work. We are particularly interested in investigating whether positivity restrictions lead to accurate solutions or drive the appearance of errors and artifacts. It is shown that the computational performance of non-negative constrained inversion is higher for the introduced algorithm than for the other algorithms, while yielding equivalent results. The experimental performance of this inversion procedure is then tested in phantoms and small animals, showing an improvement in image quality and quantitativeness with respect to the unconstrained approach. The study validates the use of non-negativity constraints for improving image accuracy compared to unconstrained methods, while maintaining computational efficiency. (paper)
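A minimal illustration of why such constraints matter: the sketch below compares unconstrained least squares with SciPy's `nnls` solver on a toy linear forward model. The authors' conjugate-gradient scheme is not reproduced here; `nnls` merely stands in for a generic non-negative constrained inversion, and the data are synthetic.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)

# Toy forward model: measurements = M @ absorption, absorption >= 0 physically.
M = rng.normal(size=(60, 20))
x_true = np.clip(rng.normal(0.5, 0.5, 20), 0, None)   # non-negative absorption
b = M @ x_true + rng.normal(0, 0.05, 60)              # noisy measurements

# Unconstrained least squares can return unphysical negative values ...
x_ls, *_ = np.linalg.lstsq(M, b, rcond=None)

# ... while the non-negative constrained inversion cannot.
x_nn, rnorm = nnls(M, b)
print((x_nn < 0).any())   # the constrained solution has no negatives
```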

  16. Progressive sampling-based Bayesian optimization for efficient and automatic machine learning model selection.

    Science.gov (United States)

    Zeng, Xueqiang; Luo, Gang

    2017-12-01

    Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state-of-the-art automatic selection method, our method can significantly reduce search time, classification error rate, and standard deviation of error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.
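The core idea of progressive sampling, training on growing subsamples and stopping once the error curve plateaus, can be sketched as below. The classifier (nearest centroid), data, and tolerance are hypothetical stand-ins; the paper combines this with Bayesian optimization over algorithms and hyper-parameters, which is not shown here.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic two-class data set (a hypothetical stand-in for clinical data).
n = 4000
X = rng.normal(size=(n, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X[y == 1] += 0.5                       # make the classes separable
X_val, y_val = X[3000:], y[3000:]      # hold-out set for the error curve

def val_error(size):
    """Validation error of a nearest-centroid classifier fit on X[:size]."""
    Xt, yt = X[:size], y[:size]
    c0, c1 = Xt[yt == 0].mean(axis=0), Xt[yt == 1].mean(axis=0)
    pred = (np.linalg.norm(X_val - c1, axis=1)
            < np.linalg.norm(X_val - c0, axis=1)).astype(int)
    return (pred != y_val).mean()

# Progressive sampling: double the training subsample until the error
# improvement falls below a tolerance, instead of always fitting on all data.
size, prev, tol = 100, 1.0, 0.005
while size <= 3000:
    err = val_error(size)
    if prev - err < tol:
        break
    prev, size = err, size * 2
print(size, round(err, 3))
```

Stopping early on a subsample is what makes the search cheap on large data sets: each candidate configuration is evaluated only on as much data as it needs.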

  17. Energy efficiency in the industrial sector. Model based analysis of the efficient use of energy in the EU-27 with focus on the industrial sector

    International Nuclear Information System (INIS)

    Kuder, Ralf

    2014-01-01

    of the industry could be split into energy-intensive subsectors, where single production processes dominate the energy consumption, and non-energy-intensive subsectors. Ways to reduce the energy consumption of the industrial sector are the use of alternative or improved production or cross-cutting technologies and the use of energy-saving measures to reduce the demand for useful energy. Based on the analysis within this study, 21 % of the current energy consumption of the EU industrial sector and 17 % in Germany could be avoided. Based on the extended understanding of energy efficiency, the model-based scenario analysis of the European energy system with the further developed energy system model TIMES PanEU shows that the efficient use of energy at an emission reduction level of 75 % is compatible with a slightly increasing primary energy consumption, characterised by a diversified mix of energy carriers and technologies. Renewable energy sources, nuclear energy and CCS play a key role in the long term. In addition, the electricity demand increases constantly, in combination with a strong decarbonisation of electricity generation. In the industrial sector, the emission reduction is driven by the extended use of electricity, CCS and renewables, as well as by improved or alternative process and supply technologies with lower specific energy consumption. Thereby the final energy consumption stays almost constant, with increasing importance of electricity and biomass. Both regulatory interventions in the electricity sector and energy-saving targets on primary energy demand lead to higher energy system costs and thus to a decrease of efficiency in the extended sense: the energy demand is reduced more strongly than is efficient, and the saving targets lead to the extended use of other resources, resulting in higher total costs. The integrated system analysis in this study points out the interactions

  18. Charge collection efficiency degradation induced by MeV ions in semiconductor devices: Model and experiment

    Energy Technology Data Exchange (ETDEWEB)

    Vittone, E., E-mail: ettore.vittone@unito.it [Department of Physics, NIS Research Centre and CNISM, University of Torino, via P. Giuria 1, 10125 Torino (Italy); Pastuovic, Z. [Centre for Accelerator Science (ANSTO), Locked bag 2001, Kirrawee DC, NSW 2234 (Australia); Breese, M.B.H. [Centre for Ion Beam Applications (CIBA), Department of Physics, National University of Singapore, Singapore 117542 (Singapore); Garcia Lopez, J. [Centro Nacional de Aceleradores (CNA), Sevilla University, J. Andalucia, CSIC, Av. Thomas A. Edison 7, 41092 Sevilla (Spain); Jaksic, M. [Department for Experimental Physics, Ruder Boškovic Institute (RBI), P.O. Box 180, 10002 Zagreb (Croatia); Raisanen, J. [Department of Physics, University of Helsinki, Helsinki 00014 (Finland); Siegele, R. [Centre for Accelerator Science (ANSTO), Locked bag 2001, Kirrawee DC, NSW 2234 (Australia); Simon, A. [International Atomic Energy Agency (IAEA), Vienna International Centre, P.O. Box 100, 1400 Vienna (Austria); Institute of Nuclear Research of the Hungarian Academy of Sciences (ATOMKI), Debrecen (Hungary); Vizkelethy, G. [Sandia National Laboratories (SNL), PO Box 5800, Albuquerque, NM (United States)

    2016-04-01

    Highlights: • We study the electronic degradation of semiconductors induced by ion irradiation. • The experimental protocol is based on MeV ion microbeam irradiation. • The radiation induced damage is measured by IBIC. • The general model fits the experimental data in the low level damage regime. • Key parameters relevant to the intrinsic radiation hardness are extracted. - Abstract: This paper investigates both theoretically and experimentally the charge collection efficiency (CCE) degradation in silicon diodes induced by energetic ions. Ion Beam Induced Charge (IBIC) measurements carried out on n- and p-type silicon diodes which were previously irradiated with MeV He ions show evidence that the CCE degradation depends not only on the mass, energy and fluence of the damaging ion, but also on the ion probe species and on the polarization state of the device. A general one-dimensional model is derived, which accounts for the ion-induced defect distribution, the ionization profile of the probing ion and the charge induction mechanism. Using the ionizing and non-ionizing energy loss profiles resulting from simulations based on the binary collision approximation and on the electrostatic/transport parameters of the diode under study as input, the model is able to accurately reproduce the experimental CCE degradation curves without introducing any phenomenological additional term or formula. Although limited to low levels of damage, the model is quite general: it includes the displacement damage approach as a special case and can be applied to any semiconductor device. It provides a method to measure the capture coefficients of the radiation induced recombination centres. These coefficients can be considered indexes that contribute to assessing the relative radiation hardness of semiconductor materials.

  19. Numerical flow simulation and efficiency prediction for axial turbines by advanced turbulence models

    International Nuclear Information System (INIS)

    Jošt, D; Škerlavaj, A; Lipej, A

    2012-01-01

    Numerical prediction of the efficiency of a 6-blade Kaplan turbine is presented. First, the results of steady-state analyses performed with different turbulence models for different operating regimes are compared to measurements. For small and optimal runner blade angles the efficiency was predicted quite accurately, but for the maximal blade angle the discrepancy between calculated and measured values was quite large. With transient analysis, especially when the Scale Adaptive Simulation Shear Stress Transport (SAS SST) model with zonal Large Eddy Simulation (ZLES) in the draft tube was used, the efficiency prediction improved significantly. The improvement occurred at all operating points, but it was largest for maximal discharge; the reason was better flow simulation in the draft tube. Details of the turbulent structure in the draft tube obtained by SST, SAS SST and SAS SST with ZLES are illustrated in order to explain the reasons for the differences in flow energy losses obtained by the different turbulence models.

  20. Numerical flow simulation and efficiency prediction for axial turbines by advanced turbulence models

    Science.gov (United States)

    Jošt, D.; Škerlavaj, A.; Lipej, A.

    2012-11-01

    Numerical prediction of the efficiency of a 6-blade Kaplan turbine is presented. First, the results of steady-state analyses performed with different turbulence models for different operating regimes are compared to measurements. For small and optimal runner blade angles the efficiency was predicted quite accurately, but for the maximal blade angle the discrepancy between calculated and measured values was quite large. With transient analysis, especially when the Scale Adaptive Simulation Shear Stress Transport (SAS SST) model with zonal Large Eddy Simulation (ZLES) in the draft tube was used, the efficiency prediction improved significantly. The improvement occurred at all operating points, but it was largest for maximal discharge; the reason was better flow simulation in the draft tube. Details of the turbulent structure in the draft tube obtained by SST, SAS SST and SAS SST with ZLES are illustrated in order to explain the reasons for the differences in flow energy losses obtained by the different turbulence models.

  1. Efficient Actor-Critic Algorithm with Hierarchical Model Learning and Planning

    Science.gov (United States)

    Fu, QiMing

    2016-01-01

    To improve the convergence rate and sample efficiency, two learning methods, AC-HMLP and RAC-HMLP (AC-HMLP with ℓ2-regularization), are proposed by combining the actor-critic algorithm with hierarchical model learning and planning. The hierarchical models, consisting of a local and a global model, which are learned at the same time as the value function and the policy, are approximated by local linear regression (LLR) and linear function approximation (LFA), respectively. Both the local and the global model are applied to generate samples for planning: the former is used only if the state-prediction error does not surpass the threshold at each time step, while the latter is utilized at the end of each episode. The purpose of using both models is to improve sample efficiency and accelerate the convergence rate of the whole algorithm by fully utilizing local and global information. Experimentally, AC-HMLP and RAC-HMLP are compared with three representative algorithms on two Reinforcement Learning (RL) benchmark problems. The results demonstrate that they perform best in terms of convergence rate and sample efficiency. PMID:27795704

  2. Advanced interface modelling of n-Si/HNO3 doped graphene solar cells to identify pathways to high efficiency

    Science.gov (United States)

    Zhao, Jing; Ma, Fa-Jun; Ding, Ke; Zhang, Hao; Jie, Jiansheng; Ho-Baillie, Anita; Bremner, Stephen P.

    2018-03-01

    In graphene/silicon solar cells, it is crucial to understand the transport mechanism of the graphene/silicon interface in order to further improve power conversion efficiency. Until now, the transport mechanism has predominantly been simplified as an ideal Schottky junction. However, such an ideal Schottky contact is never realised experimentally. According to the literature, doped graphene shows the properties of a semiconductor; it is therefore physically more accurate to model the graphene/silicon junction as a heterojunction. In this work, HNO3-doped graphene/silicon solar cells were fabricated with a power conversion efficiency of 9.45%. Extensive characterization and first-principles calculations were carried out to establish an advanced technology computer-aided design (TCAD) model, in which p-doped graphene forms a straddling heterojunction with the n-type silicon. In comparison with simple Schottky junction models, our TCAD model paves the way for a thorough investigation of the sensitivity of solar cell performance to graphene properties such as electron affinity. According to the TCAD heterojunction model, the cell efficiency can be improved to up to 22.5% after optimization of the antireflection coatings and the rear structure, highlighting the great potential for fabricating high-efficiency graphene/silicon solar cells and other optoelectronic devices.

  3. Empirical Study on Total Factor Productive Energy Efficiency in Beijing-Tianjin-Hebei Region-Analysis based on Malmquist Index and Window Model

    Science.gov (United States)

    Xu, Qiang; Ding, Shuai; An, Jingwen

    2017-12-01

    This paper studies the energy efficiency of the Beijing-Tianjin-Hebei region and identifies its trend in order to improve the quality of economic development in the region. Based on the Malmquist index and a window analysis model, this paper empirically estimates total factor energy efficiency in the Beijing-Tianjin-Hebei region using panel data from 1991 to 2014, and provides corresponding policy recommendations. The empirical results show that total factor energy efficiency in the Beijing-Tianjin-Hebei region increased from 1991 to 2014, relying mainly on advances in energy technology or innovation, and that obvious regional differences in energy efficiency exist. Over the 24-year window period, the regional differences in energy efficiency in the Beijing-Tianjin-Hebei region shrank. There has been a significant convergent trend in energy efficiency after 2000, which depends mainly on the diffusion and spillover of energy technologies.

  4. Analysis of Low-Carbon Economy Efficiency of Chinese Industrial Sectors Based on a RAM Model with Undesirable Outputs

    Directory of Open Access Journals (Sweden)

    Ming Meng

    2017-03-01

    Full Text Available Industrial energy and environment efficiency evaluation becomes especially crucial as industrial sectors play a key role in CO2 emission reduction and energy consumption. This study adopts the additive range-adjusted measure data envelopment analysis (RAM-DEA) model to estimate the low-carbon economy efficiency of Chinese industrial sectors in 2001–2013. In addition, a CO2 emission intensity mitigation target is assigned to each industrial sector. Results show that, first, most sectors are not completely efficient but improved greatly during the period. These sectors can be divided into four categories: mining; light; heavy; and electricity, gas, and water supply industries. Efficiency is diverse among the four categories: the average efficiency of the light industry is the highest, followed by those of the mining and the electricity, gas, and water supply industries, while that of the heavy industry is the lowest. Second, the electricity, gas, and water supply industry shows the biggest potential for CO2 emission reduction, containing most of the sectors with large CO2 emission intensity mitigation targets (more than 45%), followed by the mining and the light industries. Therefore, the Chinese government should formulate diverse and flexible policy implementations according to the actual situation of the different sectors. Specifically, the sectors with low efficiency should be provided with additional policy support (such as technology and finance aids) to improve their industrial efficiency, whereas the electricity, gas, and water supply industry should maximize CO2 emission reduction.

  5. ARCH Models Efficiency Evaluation in Prediction and Poultry Price Process Formation

    Directory of Open Access Journals (Sweden)

    Behzad Fakari Sardehae

    2016-09-01

    . This study shows that heteroskedasticity exists in the error term, as indicated by the LM test. Results and Discussion: Results showed that the poultry price series has a unit root and is stationary at one lag difference; the first difference of the poultry price was therefore used in the study. The main results showed that ARCH is the best model for predicting fluctuations. Moreover, news has an asymmetric effect on poultry price fluctuations: good news has a stronger effect than bad news, and a leverage effect does not exist in the poultry price. Current fluctuations are also not transmitted to the future. One of the main assumptions of time series models is constant variance of the error term; if this assumption does not hold, the estimated coefficients for the serial correlation of the data will be biased, resulting in wrong interpretations. The results showed that ARCH effects exist in the error terms of the poultry price, so the ARCH family with a Student's t distribution should be used. A normality test of the error term and an examination of heteroskedasticity are needed, and a lack of attention to them causes false conclusions. Results showed that ARCH models have good predictive power and that ARMA models are less efficient than ARCH models, indicating that nonlinear predictions are better than linear predictions. According to the results, the Student's t distribution should be used as the target distribution in the estimated models. Conclusion: The huge need for poultry requires the creation of infrastructure to respond to demand. Results showed that poultry price volatility changes over time and may intensify at any time. The asymmetric effect of good and bad news on the poultry price leads to consumers' reactions. The good news had significant effects on the poultry market and created positive changes in the poultry price, but the bad news did not result in significant effects.
In fact, because the poultry product in the household portfolio is essential, it should not
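As a toy illustration of the ARCH machinery discussed above, the sketch below simulates an ARCH(1) series and recovers its parameters by Gaussian maximum likelihood. The study favours a Student's t distribution; a Gaussian is used here only to keep the sketch short, and all numbers are synthetic rather than poultry price data.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

# Simulate an ARCH(1) return series: sigma2_t = omega + alpha * e_{t-1}^2.
omega, alpha, T = 0.2, 0.5, 4000
e = np.zeros(T)
sig2 = np.full(T, omega / (1 - alpha))       # unconditional variance start
e[0] = np.sqrt(sig2[0]) * rng.standard_normal()
for t in range(1, T):
    sig2[t] = omega + alpha * e[t - 1] ** 2
    e[t] = np.sqrt(sig2[t]) * rng.standard_normal()

def neg_loglik(params):
    """Gaussian ARCH(1) negative log-likelihood (constants dropped)."""
    w, a = params
    s2 = np.empty(T)
    s2[0] = e.var()
    s2[1:] = w + a * e[:-1] ** 2
    return 0.5 * np.sum(np.log(s2) + e ** 2 / s2)

res = minimize(neg_loglik, x0=[0.1, 0.1],
               bounds=[(1e-6, None), (1e-6, 0.999)])
w_hat, a_hat = res.x
```

With a few thousand observations the estimated `alpha` lands close to the true 0.5, which is the sense in which ARCH models "have good predictive power" for volatility.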

  6. Complex networks-based energy-efficient evolution model for wireless sensor networks

    Energy Technology Data Exchange (ETDEWEB)

    Zhu Hailin [Beijing Key Laboratory of Intelligent Telecommunications Software and Multimedia, Beijing University of Posts and Telecommunications, P.O. Box 106, Beijing 100876 (China)], E-mail: zhuhailin19@gmail.com; Luo Hong [Beijing Key Laboratory of Intelligent Telecommunications Software and Multimedia, Beijing University of Posts and Telecommunications, P.O. Box 106, Beijing 100876 (China); Peng Haipeng; Li Lixiang; Luo Qun [Information Secure Center, State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, P.O. Box 145, Beijing 100876 (China)

    2009-08-30

    Based on complex networks theory, we present two self-organized energy-efficient evolution models for wireless sensor networks. The first model constructs the wireless sensor network according to the connectivity and remaining energy of each sensor node; it thus produces scale-free networks that are tolerant of random errors. In the second model, we not only consider the remaining energy but also introduce a constraint on the number of links per node, which makes the energy consumption of the whole network more balanced. Finally, we present numerical experiments for the two models.
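The first model's growth rule, attachment probability proportional to connectivity times remaining energy, can be sketched roughly as follows. The network size, energy levels, and per-link energy cost are invented parameters, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# Grow a sensor network: each new node links to m existing nodes with
# probability proportional to degree * remaining energy (model sketch).
n, m = 200, 2
degree = np.zeros(n, dtype=int)
energy = rng.uniform(0.5, 1.0, n)          # heterogeneous battery levels
edges = [(0, 1)]                           # seed network of two linked nodes
degree[0] = degree[1] = 1

for new in range(2, n):
    w = degree[:new] * energy[:new]
    p = w / w.sum()
    targets = rng.choice(new, size=min(m, new), replace=False, p=p)
    for t in targets:
        edges.append((new, t))
        degree[new] += 1
        degree[t] += 1
        energy[t] -= 0.001                 # accepting a link costs energy
```

Because well-connected, energy-rich nodes attract more links, a heavy-tailed (scale-free-like) degree distribution emerges, while the energy term damps the growth of nearly exhausted hubs.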

  7. Complex networks-based energy-efficient evolution model for wireless sensor networks

    International Nuclear Information System (INIS)

    Zhu Hailin; Luo Hong; Peng Haipeng; Li Lixiang; Luo Qun

    2009-01-01

    Based on complex networks theory, we present two self-organized energy-efficient evolution models for wireless sensor networks. The first model constructs the wireless sensor network according to the connectivity and remaining energy of each sensor node; it thus produces scale-free networks that are tolerant of random errors. In the second model, we not only consider the remaining energy but also introduce a constraint on the number of links per node, which makes the energy consumption of the whole network more balanced. Finally, we present numerical experiments for the two models.

  8. Energy and environment efficiency analysis based on an improved environment DEA cross-model: Case study of complex chemical processes

    International Nuclear Information System (INIS)

    Geng, ZhiQiang; Dong, JunGen; Han, YongMing; Zhu, QunXiong

    2017-01-01

    Highlights: •An improved environment DEA cross-model method is proposed. •An energy and environment efficiency analysis framework for complex chemical processes is obtained. •The proposed method is effective for energy saving and emission reduction in complex chemical processes. -- Abstract: The complex chemical process is an industrial process with high pollution and high energy consumption. It is therefore very important to analyze and evaluate the energy and environment efficiency of complex chemical processes. Data Envelopment Analysis (DEA) is used to evaluate the relative effectiveness of decision-making units (DMUs). However, the traditional DEA method usually cannot genuinely distinguish effective from inefficient DMUs because of its extreme or unreasonable weight distribution over input and output variables. Therefore, this paper proposes an energy and environment efficiency analysis method based on an improved environment DEA cross-model (DEACM). The inputs of the complex chemical process are divided into energy and non-energy inputs, while the outputs are divided into desirable and undesirable outputs. An energy and environment performance index (EEPI) based on cross evaluation is then used to represent the overall performance of each DMU. Moreover, the direction of improvement for energy saving and carbon emission reduction of each inefficient DMU is obtained quantitatively from the self-evaluation model of the improved environment DEACM. The results of analyzing the energy and environment efficiency of the ethylene production process show that the improved environment DEACM discriminates among DMUs more effectively than the original DEA method, and that it can identify the energy-saving and carbon-emission-reduction potential of ethylene plants, especially the direction of improvement for inefficient DMUs.
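The cross-evaluation idea underlying such DEACM-style analyses can be illustrated with a plain CCR cross-efficiency computation: each DMU's optimal input/output weights, found by linear programming, are used to score every other DMU. The data below are toy numbers, and the improved environment DEACM itself (with undesirable outputs) is not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: 5 DMUs (e.g. plants), 2 inputs (energy, non-energy), 1 output.
X = np.array([[4., 3.], [7., 3.], [8., 1.], [4., 2.], [2., 4.]])  # inputs
Y = np.array([[1.], [1.], [1.], [1.], [1.]])                      # outputs
n, m, s = X.shape[0], X.shape[1], Y.shape[1]

def ccr_weights(k):
    """Input-oriented CCR multiplier LP for DMU k:
    max u'y_k  s.t.  v'x_k = 1,  u'y_j - v'x_j <= 0 for all j,  u, v >= 0."""
    c = np.concatenate([-Y[k], np.zeros(m)])               # linprog minimizes
    A_ub = np.hstack([Y, -X])                              # u'y_j - v'x_j <= 0
    A_eq = np.concatenate([np.zeros(s), X[k]]).reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=[1.0], bounds=[(0, None)] * (s + m))
    return res.x[:s], res.x[s:]

# Cross-efficiency: score every DMU j with every DMU k's optimal weights.
cross = np.zeros((n, n))
for k in range(n):
    u, v = ccr_weights(k)
    cross[k] = (Y @ u) / (X @ v)
print(np.round(cross.mean(axis=0), 3))   # peer-appraisal efficiency index
```

Averaging each column gives the peer-appraisal score that, unlike pure self-evaluation, separates DMUs that all rate themselves fully efficient.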

  9. Generalized Efficient Inference on Factor Models with Long-Range Dependence

    DEFF Research Database (Denmark)

    Ergemen, Yunus Emre

    A dynamic factor model is considered that contains stochastic time trends allowing for stationary and nonstationary long-range dependence. The model nests standard I(0) and I(1) behaviour smoothly in common factors and residuals, removing the necessity of a priori unit-root and stationarity testing. Short-memory dynamics are allowed in the common factor structure and a possibly heteroskedastic error term. In the estimation, a generalized version of the principal components (PC) approach is proposed to achieve efficiency. Asymptotics for efficient common factor and factor loading as well as long...

  10. A statistical light use efficiency model explains 85% variations in global GPP

    Science.gov (United States)

    Jiang, C.; Ryu, Y.

    2016-12-01

    Photosynthesis is a complicated process whose modeling requires different levels of assumption, simplification, and parameterization. Among models, the light use efficiency (LUE) model is highly compact yet powerful in monitoring gross primary production (GPP) from satellite data. Most LUE models adopt a multiplicative form of maximum LUE, absorbed photosynthetically active radiation (APAR), and temperature and water stress functions. However, maximum LUE is a fitting parameter with large spatial variations, yet most studies use only a few biome-dependent constants. In addition, the stress functions in the literature are empirical and arbitrary. Moreover, the meteorological data used are usually coarse resolution, e.g., 1°, which can cause large errors. Finally, sunlit and shade canopy have completely different light responses, but this is rarely considered. Targeting these issues, we derived a new statistical LUE model from a process-based and satellite-driven model, the Breathing Earth System Simulator (BESS). We have already derived a set of global radiation (5-km resolution) and carbon and water flux (1-km resolution) products from 2000 to 2015 from BESS. By exploring these datasets, we found strong correlations between APAR and GPP for sunlit (R2=0.84) and shade (R2=0.96) canopy, respectively. A simple model, driven only by sunlit and shade APAR, was thus built on these linear relationships. The slopes of the linear functions act as effective LUE of the global ecosystem, with values of 0.0232 and 0.0128 umol C/umol quanta for sunlit and shade canopy, respectively. When compared with MPI-BGC GPP products, a global proxy of FLUXNET data, BESS-LUE achieved an overall accuracy of R2 = 0.85, whereas the original BESS reached R2 = 0.83 and the MODIS GPP product R2 = 0.76. We investigated spatiotemporal variations of the effective LUE. Spatially, the ratio of sunlit to shade values ranged from 0.1 (wet tropics) to 4.5 (dry inland). By using maps of sunlit and shade effective LUE the accuracy of
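The statistical model described, GPP as a linear zero-intercept function of sunlit and shade APAR, amounts to a two-slope regression. The sketch below recovers such slopes from synthetic fluxes; only the slope values are borrowed from the abstract, everything else is invented.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic fluxes: GPP as a linear function of sunlit and shade APAR,
# with effective-LUE slopes set to the values quoted in the abstract.
lue_sun, lue_shade = 0.0232, 0.0128           # umol C / umol quanta
apar = rng.uniform(100, 1000, size=(500, 2))  # columns: sunlit, shade APAR
gpp = apar @ np.array([lue_sun, lue_shade]) + rng.normal(0, 0.5, 500)

# Zero-intercept least squares recovers the two effective-LUE slopes.
slopes, *_ = np.linalg.lstsq(apar, gpp, rcond=None)
print(np.round(slopes, 4))
```

The appeal of this formulation is exactly its compactness: once the two slopes are mapped spatially, GPP follows from APAR alone.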

  11. Total-Factor Energy Efficiency in BRI Countries: An Estimation Based on Three-Stage DEA Model

    Directory of Open Access Journals (Sweden)

    Changhong Zhao

    2018-01-01

    Full Text Available The Belt and Road Initiative (BRI) is showing its great influence on and leadership of international energy cooperation. Based on the three-stage DEA model, the total-factor energy efficiency (TFEE) of 35 BRI countries in 2015 is measured in this article. It is shown that the three-stage DEA model can eliminate environmental and random errors, which makes the result better than that of the traditional DEA model. When environmental and random errors were eliminated, the mean value of TFEE declined, demonstrating that the TFEE of the whole sample group was overestimated because of external environmental impacts and random errors. The TFEE indicators of high-income countries such as South Korea, Singapore, Israel and Turkey are 1, placing them on the efficiency frontier. The TFEE indicators of Russia, Saudi Arabia, Poland and China are over 0.8, while those of Uzbekistan, Ukraine, South Africa and Bulgaria are at a low level. The potential for energy saving and emissions reduction is great in countries with low TFEE indicators. Because of the gap in energy efficiency, it is necessary to differentiate among countries in energy technology options, development planning and regulation within the BRI.

  12. The Super‑efficiency Model and its Use for Ranking and Identification of Outliers

    Directory of Open Access Journals (Sweden)

    Kristína Kočišová

    2017-01-01

    Full Text Available This paper employs a non‑radial and non‑oriented super‑efficiency SBM model under the assumption of a variable return to scale to analyse the performance of twenty‑two Czech and Slovak domestic commercial banks in 2015. The banks were ranked according to asset‑oriented and profit‑oriented intermediation approaches. We pooled the cross‑country data and used them to define a common best‑practice efficiency frontier, which allowed us to focus on determining relative differences in efficiency across banks. The average efficiency was evaluated separately at the "national" and "international" level. The results of the analysis show that the level of super‑efficiency in the Slovak banking sector was lower than that of the Czech banks, and the number of super‑efficient banks was also lower in the case of Slovakia under both approaches. Boxplot analysis was used to determine the outliers in the dataset. The results suggest that the exclusion of outliers led to better statistical characteristics of the estimated efficiency.

  13. Neural and hybrid modeling: an alternative route to efficiently predict the behavior of biotechnological processes aimed at biofuels obtainment.

    Science.gov (United States)

    Curcio, Stefano; Saraceno, Alessandra; Calabrò, Vincenza; Iorio, Gabriele

    2014-01-01

    The present paper aimed at showing that advanced modeling techniques, based either on artificial neural networks or on hybrid systems, can efficiently predict the behavior of two biotechnological processes designed for the production of second-generation biofuels from waste biomasses. In particular, the enzymatic transesterification of waste-oil glycerides, the key step in biodiesel production, and the anaerobic digestion of agroindustry wastes to produce biogas were modeled. It was shown that the proposed modeling approaches provided very accurate predictions of system behavior. Both neural network and hybrid modeling represent a valid alternative to traditional theoretical models, especially when comprehensive knowledge of the metabolic pathways, the true kinetic mechanisms, and the transport phenomena involved in biotechnological processes is difficult to achieve.

  14. Neural and Hybrid Modeling: An Alternative Route to Efficiently Predict the Behavior of Biotechnological Processes Aimed at Biofuels Obtainment

    Directory of Open Access Journals (Sweden)

    Stefano Curcio

    2014-01-01

    Full Text Available The present paper aimed at showing that advanced modeling techniques, based either on artificial neural networks or on hybrid systems, can efficiently predict the behavior of two biotechnological processes designed for the production of second-generation biofuels from waste biomasses. In particular, the enzymatic transesterification of waste-oil glycerides, the key step in biodiesel production, and the anaerobic digestion of agroindustry wastes to produce biogas were modeled. It was shown that the proposed modeling approaches provided very accurate predictions of system behavior. Both neural network and hybrid modeling represent a valid alternative to traditional theoretical models, especially when comprehensive knowledge of the metabolic pathways, the true kinetic mechanisms, and the transport phenomena involved in biotechnological processes is difficult to achieve.

  15. A model for improving energy efficiency in industrial motor system using multicriteria analysis

    International Nuclear Information System (INIS)

    Herrero Sola, Antonio Vanderley; Mota, Caroline Maria de Miranda; Kovaleski, Joao Luiz

    2011-01-01

    In recent years, several policies have been proposed by governments and global institutions in order to improve the efficient use of energy in industries worldwide. However, projects in industrial motor systems require a new approach, mainly in the decision-making area, considering the organizational barriers to energy efficiency. Despite their wide application elsewhere, multicriteria methods have so far remained unexplored in industrial motor systems. This paper proposes a multicriteria model using the PROMETHEE II method, with the aim of ranking alternatives for induction motor replacement. A comparative analysis of the model, applied to a Brazilian industry, has shown that multicriteria analysis delivers better performance on energy saving, as well as return on investment, than a single criterion. The paper strongly recommends the dissemination of multicriteria decision aiding as a policy to support decision makers in industries and to improve energy efficiency in electric motor systems. - Highlights: → The lack of a decision model for industrial motor systems is the main motivation of the research. → A multicriteria model based on the PROMETHEE method is proposed with the aim of supporting decision makers in industries. → The model can help overcome some barriers within industries, improving energy efficiency in industrial motor systems.
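    The ranking step of PROMETHEE II can be sketched compactly. The sketch below assumes the "usual" (step) preference function and hypothetical motor-replacement alternatives, criteria, and weights; the paper does not publish its preference functions or data.

```python
def promethee_ii(perf, weights):
    """PROMETHEE II net outranking flows with the 'usual' preference
    function (1 if strictly better on a criterion, 0 otherwise).
    All criteria are to be maximised; weights should sum to 1.
    perf: one list of criterion scores per alternative."""
    n = len(perf)

    def pi(a, b):
        # aggregated preference of alternative a over alternative b
        return sum(w for w, x, y in zip(weights, perf[a], perf[b]) if x > y)

    # net flow = (positive outranking flow) - (negative outranking flow)
    return [sum(pi(a, b) - pi(b, a) for b in range(n) if b != a) / (n - 1)
            for a in range(n)]

# Hypothetical alternatives scored on [energy saving, return on investment,
# reliability] -- illustrative values only, not the paper's case study.
perf = [[0.8, 0.4, 0.7],   # A1: replace now with a high-efficiency motor
        [0.6, 0.7, 0.6],   # A2: replace at end of life
        [0.3, 0.9, 0.5]]   # A3: rewind the existing motor
weights = [0.5, 0.3, 0.2]
flows = promethee_ii(perf, weights)
print(flows)  # alternatives ranked by decreasing net flow
```

Alternatives are then ranked by decreasing net flow, which is how the model orders the replacement options.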

  16. A model for improving energy efficiency in industrial motor system using multicriteria analysis

    Energy Technology Data Exchange (ETDEWEB)

    Herrero Sola, Antonio Vanderley, E-mail: sola@utfpr.edu.br [Federal University of Technology, Parana, Brazil (UTFPR)-Campus Ponta Grossa, Av. Monteiro Lobato, Km 4, CEP: 84016-210 (Brazil); Mota, Caroline Maria de Miranda, E-mail: carolmm@ufpe.br [Federal University of Pernambuco, Cx. Postal 7462, CEP 50630-970, Recife (Brazil); Kovaleski, Joao Luiz [Federal University of Technology, Parana, Brazil (UTFPR)-Campus Ponta Grossa, Av. Monteiro Lobato, Km 4, CEP: 84016-210 (Brazil)

    2011-06-15

    In recent years, several policies have been proposed by governments and global institutions in order to improve the efficient use of energy in industries worldwide. However, projects in industrial motor systems require a new approach, mainly in the decision-making area, considering the organizational barriers to energy efficiency. Despite their wide application elsewhere, multicriteria methods have so far remained unexplored in industrial motor systems. This paper proposes a multicriteria model using the PROMETHEE II method, with the aim of ranking alternatives for induction motor replacement. A comparative analysis of the model, applied to a Brazilian industry, has shown that multicriteria analysis delivers better performance on energy saving, as well as return on investment, than a single criterion. The paper strongly recommends the dissemination of multicriteria decision aiding as a policy to support decision makers in industries and to improve energy efficiency in electric motor systems. - Highlights: > The lack of a decision model for industrial motor systems is the main motivation of the research. > A multicriteria model based on the PROMETHEE method is proposed with the aim of supporting decision makers in industries. > The model can help overcome some barriers within industries, improving energy efficiency in industrial motor systems.

  17. A Novel Efficient Graph Model for the Multiple Longest Common Subsequences (MLCS) Problem

    Directory of Open Access Journals (Sweden)

    Zhan Peng

    2017-08-01

    Full Text Available Searching for the Multiple Longest Common Subsequences (MLCS) of multiple sequences is a classical NP-hard problem with many applications. One of the most effective exact approaches for the MLCS problem is based on the dominant-point graph, a kind of directed acyclic graph (DAG). However, the time and space efficiency of the leading dominant-point-graph-based approaches is still unsatisfactory: constructing the dominant-point graph requires a huge amount of time and space, which hinders the application of these approaches to large-scale and long sequences. To address this issue, in this paper, we propose a new time- and space-efficient graph model called the Leveled-DAG for the MLCS problem. The Leveled-DAG eliminates, during construction, all nodes in the graph that cannot contribute to the construction of the MLCS. At any moment, only the current level and some previously generated nodes need to be kept in memory, which greatly reduces memory consumption. Moreover, the final graph contains only one node, in which all of the wanted MLCS are saved; thus, no additional operations for searching the MLCS are needed. Experiments were conducted on real biological sequences with different numbers of sequences and different lengths, and the proposed algorithm was compared with three state-of-the-art algorithms. The experimental results show that the time and space needed for the Leveled-DAG approach are smaller than those for the compared algorithms, especially on large-scale and long sequences.
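    For scale, the direct dynamic programme over the product space illustrates what dominant-point and Leveled-DAG methods avoid: for three sequences of length n, the full table already costs O(n³) time and space. A minimal baseline sketch (not the paper's algorithm):

```python
def lcs3(a, b, c):
    """Length of a longest common subsequence of three sequences via the
    direct cubic dynamic programme (a baseline, not the Leveled-DAG)."""
    la, lb, lc = len(a), len(b), len(c)
    # dp[i][j][k] = LCS length of the prefixes a[:i], b[:j], c[:k]
    dp = [[[0] * (lc + 1) for _ in range(lb + 1)] for _ in range(la + 1)]
    for i in range(1, la + 1):
        for j in range(1, lb + 1):
            for k in range(1, lc + 1):
                if a[i - 1] == b[j - 1] == c[k - 1]:
                    dp[i][j][k] = dp[i - 1][j - 1][k - 1] + 1
                else:
                    dp[i][j][k] = max(dp[i - 1][j][k],
                                      dp[i][j - 1][k],
                                      dp[i][j][k - 1])
    return dp[la][lb][lc]

print(lcs3("ACGGTA", "AGGTCA", "ACGTGA"))  # → 4 (e.g. "AGTA")
```

Dominant-point and Leveled-DAG approaches expand only the useful corners of this table instead of materialising all of it, which is what makes long, many-sequence instances tractable.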

  18. Novel thermal efficiency-based model for determination of thermal conductivity of membrane distillation membranes

    International Nuclear Information System (INIS)

    Vanneste, Johan; Bush, John A.; Hickenbottom, Kerri L.; Marks, Christopher A.; Jassby, David

    2017-01-01

    Development and selection of membranes for membrane distillation (MD) could be accelerated if all performance-determining characteristics of the membrane could be obtained during MD operation, without the need to resort to specialized or cumbersome porosity or thermal conductivity measurement techniques. By redefining the thermal efficiency, the Schofield method could be adapted to describe the flux without prior knowledge of membrane porosity, thickness, or thermal conductivity. A total of 17 commercially available membranes were analyzed in terms of flux and thermal efficiency to assess their suitability for application in MD. The thermal-efficiency-based model described the flux with an average %RMSE of 4.5%, which was in the same range as the standard deviation of the measured flux. The redefinition of the thermal efficiency also enabled MD to be used as a novel thermal conductivity measurement device for thin porous hydrophobic films that cannot be measured with the conventional laser flash diffusivity technique.

  19. Health effects of home energy efficiency interventions in England: a modelling study

    Science.gov (United States)

    Milner, James; Chalabi, Zaid; Das, Payel; Jones, Benjamin; Shrubsole, Clive; Davies, Mike; Wilkinson, Paul

    2015-01-01

    Objective To assess potential public health impacts of changes to indoor air quality and temperature due to energy efficiency retrofits in English dwellings to meet 2030 carbon reduction targets. Design Health impact modelling study. Setting England. Participants English household population. Intervention Three retrofit scenarios were modelled: (1) fabric and ventilation retrofits installed assuming building regulations are met; (2) as with scenario (1) but with additional ventilation for homes at risk of poor ventilation; (3) as with scenario (1) but with no additional ventilation to illustrate the potential risk of weak regulations and non-compliance. Main outcome Primary outcomes were changes in quality adjusted life years (QALYs) over 50 years from cardiorespiratory diseases, lung cancer, asthma and common mental disorders due to changes in indoor air pollutants, including secondhand tobacco smoke, PM2.5 from indoor and outdoor sources, radon, mould, and indoor winter temperatures. Results The modelling study estimates showed that scenario (1) resulted in positive effects on net mortality and morbidity of 2241 (95% credible intervals (CI) 2085 to 2397) QALYs per 10 000 persons over 50 years follow-up due to improved temperatures and reduced exposure to indoor pollutants, despite an increase in exposure to outdoor-generated particulate matter with a diameter of 2.5 μm or less (PM2.5). Scenario (2) resulted in a negative impact of −728 (95% CI −864 to −592) QALYs per 10 000 persons over 50 years due to an overall increase in indoor pollutant exposures. Scenario (3) resulted in −539 (95% CI −678 to -399) QALYs per 10 000 persons over 50 years follow-up due to an increase in indoor exposures despite the targeting of pollutants. Conclusions If properly implemented alongside ventilation, energy efficiency retrofits in housing can improve health by reducing exposure to cold and air pollutants. Maximising the health benefits requires careful

  20. Application of Pareto-efficient combustion modeling framework to large eddy simulations of turbulent reacting flows

    Science.gov (United States)

    Wu, Hao; Ihme, Matthias

    2017-11-01

    The modeling of turbulent combustion requires the consideration of different physico-chemical processes involving a vast range of time and length scales as well as a large number of scalar quantities. To reduce the computational complexity, various combustion models have been developed, many of which can be abstracted using a lower-dimensional manifold representation. A key issue in using such lower-dimensional combustion models is assessing whether a particular combustion model is adequate for representing a certain flame configuration. The Pareto-efficient combustion (PEC) modeling framework was developed to perform dynamic combustion model adaptation based on various existing manifold models. In this work, the PEC model is applied to a turbulent flame simulation, in which a computationally efficient flamelet-based combustion model is used together with a high-fidelity finite-rate chemistry model. The combination of these two models achieves high accuracy in predicting pollutant species at a relatively low computational cost. The relevant numerical methods and parallelization techniques are also discussed in this work.

  1. Inactivated ORF virus shows antifibrotic activity and inhibits human hepatitis B virus (HBV) and hepatitis C virus (HCV) replication in preclinical models.

    Science.gov (United States)

    Paulsen, Daniela; Urban, Andreas; Knorr, Andreas; Hirth-Dietrich, Claudia; Siegling, Angela; Volk, Hans-Dieter; Mercer, Andrew A; Limmer, Andreas; Schumak, Beatrix; Knolle, Percy; Ruebsamen-Schaeff, Helga; Weber, Olaf

    2013-01-01

    Inactivated orf virus (iORFV), strain D1701, is a potent immune modulator in various animal species. We recently demonstrated that iORFV induces strong antiviral activity in animal models of acute and chronic viral infections. In addition, we found D1701-mediated antifibrotic effects in different rat models of liver fibrosis. In the present study, we compare iORFV derived from two different strains of ORFV, D1701 and NZ2, with respect to their antifibrotic potential as well as their potential to induce an antiviral response controlling infections with the hepatotropic pathogens hepatitis C virus (HCV) and hepatitis B virus (HBV). Both strains of ORFV showed antiviral activity against HCV in vitro and against HBV in a transgenic mouse model without signs of necro-inflammation in vivo. Our experiments suggest that the absence of liver damage is potentially mediated by iORFV-induced downregulation of antigen cross-presentation in liver sinusoidal endothelial cells. Furthermore, both strains showed significant antifibrotic activity in rat models of liver fibrosis. iORFV strain NZ2 appeared more potent than strain D1701 with respect to both its antiviral and antifibrotic activity, on the basis of dosages estimated by titration of active virus. These results show a potential therapeutic approach against two important human liver pathogens, HBV and HCV, that independently addresses concomitant liver fibrosis. Further studies are required to characterize the details of the mechanisms involved in this novel therapeutic principle.

  2. Inactivated ORF virus shows antifibrotic activity and inhibits human hepatitis B virus (HBV) and hepatitis C virus (HCV) replication in preclinical models.

    Directory of Open Access Journals (Sweden)

    Daniela Paulsen

    Full Text Available Inactivated orf virus (iORFV), strain D1701, is a potent immune modulator in various animal species. We recently demonstrated that iORFV induces strong antiviral activity in animal models of acute and chronic viral infections. In addition, we found D1701-mediated antifibrotic effects in different rat models of liver fibrosis. In the present study, we compare iORFV derived from two different strains of ORFV, D1701 and NZ2, with respect to their antifibrotic potential as well as their potential to induce an antiviral response controlling infections with the hepatotropic pathogens hepatitis C virus (HCV) and hepatitis B virus (HBV). Both strains of ORFV showed antiviral activity against HCV in vitro and against HBV in a transgenic mouse model without signs of necro-inflammation in vivo. Our experiments suggest that the absence of liver damage is potentially mediated by iORFV-induced downregulation of antigen cross-presentation in liver sinusoidal endothelial cells. Furthermore, both strains showed significant antifibrotic activity in rat models of liver fibrosis. iORFV strain NZ2 appeared more potent than strain D1701 with respect to both its antiviral and antifibrotic activity, on the basis of dosages estimated by titration of active virus. These results show a potential therapeutic approach against two important human liver pathogens, HBV and HCV, that independently addresses concomitant liver fibrosis. Further studies are required to characterize the details of the mechanisms involved in this novel therapeutic principle.

  3. Spatial Heterodyne Observations of Water (SHOW) vapour in the upper troposphere and lower stratosphere from a high altitude aircraft: Modelling and sensitivity analysis

    Science.gov (United States)

    Langille, J. A.; Letros, D.; Zawada, D.; Bourassa, A.; Degenstein, D.; Solheim, B.

    2018-04-01

    A spatial heterodyne spectrometer (SHS) has been developed to measure the vertical distribution of water vapour in the upper troposphere and the lower stratosphere with a high vertical resolution (∼500 m). The Spatial Heterodyne Observations of Water (SHOW) instrument combines an imaging system with a monolithic field-widened SHS to observe limb-scattered sunlight in a vibrational band of water (1363 nm-1366 nm). The instrument has been optimized for observations from NASA's ER-2 aircraft as a proof of concept for a future low-Earth-orbit satellite deployment. A robust model has been developed to simulate SHOW ER-2 limb measurements and retrievals. This paper presents the simulation of the SHOW ER-2 limb measurements along a hypothetical flight track and examines the sensitivity of the measurement and retrieval approach. Water vapour fields from an Environment and Climate Change Canada forecast model are used to represent realistic spatial variability along the flight path. High-spectral-resolution limb-scattered radiances are simulated using the SASKTRAN radiative transfer model. It is shown that the SHOW instrument onboard the ER-2 is capable of resolving the water vapour variability in the UTLS from approximately 12 km to 18 km with ±1 ppm accuracy. Vertical resolutions between 500 m and 1 km are feasible. The along-track sampling capability of the instrument is also discussed.

  4. A Computationally-Efficient Numerical Model to Characterize the Noise Behavior of Metal-Framed Walls

    Directory of Open Access Journals (Sweden)

    Arun Arjunan

    2015-08-01

    Full Text Available Architects, designers, and engineers are making great efforts to design acoustically-efficient metal-framed walls, minimizing acoustic bridging. Efficient simulation models that predict acoustic insulation in compliance with ISO 10140 are therefore needed at the design stage. In order to achieve this, a numerical model consisting of two fluid-filled reverberation chambers, partitioned by a metal-framed wall, is simulated at one-third-octave bands. This produces a large simulation model consisting of several million nodes and elements, so efficient meshing procedures are necessary to obtain better solution times and to effectively utilise computational resources. Such models should also demonstrate effective Fluid-Structure Interaction (FSI) along with acoustic-fluid coupling to simulate a realistic scenario. In this contribution, the development of a finite element frequency-dependent mesh model that can characterize the sound insulation of metal-framed walls is presented. Preliminary results on the application of the proposed model to study the geometric contribution of stud frames to the overall acoustic performance of metal-framed walls are also presented. The presented numerical model can be used to effectively visualize the noise behaviour of advanced materials and multi-material structures.

  5. The field experiments and model of the natural dust deposition effects on photovoltaic module efficiency.

    Science.gov (United States)

    Jaszczur, Marek; Teneta, Janusz; Styszko, Katarzyna; Hassan, Qusay; Burzyńska, Paulina; Marcinek, Ewelina; Łopian, Natalia

    2018-04-20

    The maximisation of the efficiency of a photovoltaic system is crucial in order to increase the competitiveness of this technology. Unfortunately, several environmental factors, in addition to many alterable and unalterable factors, can significantly influence the performance of a PV system. Site-dependent environmental factors include dust, soiling and pollutants. In this study, conducted in the city centre of Kraków, Poland, which is characterised by high pollution and low wind speed, the focus is on the degradation of the efficiency of polycrystalline photovoltaic modules due to natural dust deposition. The experimental results demonstrated that the dust-related efficiency loss gradually increased with the deposited mass, following an exponential trend. The maximum dust deposition density observed for rainless exposure periods of 1 week exceeded 300 mg/m², and the resulting efficiency loss was about 2.1%. Efficiency loss was observed to be not only mass-dependent but also dependent on the dust properties. A small positive effect on module performance of a tiny dust layer, which slightly increases surface roughness, was also observed. The results obtained enable the development of a reliable model for the degradation of PV module efficiency caused by dust deposition. The novelty consists in a model that is easy to apply, depends only on the dust mass, and is valid for low and moderate naturally deposited dust concentrations (up to 1 and 5 g/m², respectively, representative of many geographical regions and of the majority of urban and non-urban polluted areas). It can be used to evaluate the dust-deposition-related derating factor (efficiency loss), which is much sought after by system designers and by tools for computer modelling and system malfunction detection.
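    A saturating-exponential loss curve of the kind described can be sketched as follows. The coefficients A and B below are hypothetical, chosen only so that 300 mg/m² of dust yields roughly the 2.1% loss reported; the paper's fitted values are not given here.

```python
import math

# Hypothetical coefficients (NOT the paper's fitted values):
# A - saturation loss fraction for a heavily soiled module
# B - mass sensitivity, in m^2/g
A, B = 0.05, 1.82

def efficiency_loss(mass_g_per_m2: float) -> float:
    """Relative PV efficiency loss as a saturating exponential function
    of deposited dust mass (illustrative form of the reported trend)."""
    return A * (1.0 - math.exp(-B * mass_g_per_m2))

# 300 mg/m^2, the ~1-week rainless exposure level, gives ~2.1% loss
print(f"{efficiency_loss(0.3):.3%}")
```

A derating factor for a system model would then be 1 minus this loss at the expected soiling level between cleanings or rainfalls.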

  6. Trust models for efficient communication in Mobile Cloud Computing and their applications to e-Commerce

    Science.gov (United States)

    Pop, Florin; Dobre, Ciprian; Mocanu, Bogdan-Costel; Citoteanu, Oana-Maria; Xhafa, Fatos

    2016-11-01

    Managing the large volumes of data processed in distributed systems formed by datacentres and mobile devices has become a challenging issue with an important impact on the end-user. The management of such systems can be achieved efficiently by using uniform overlay networks, interconnected through secure and efficient routing protocols. The aim of this article is to advance our previous work with a novel trust model based on a reputation metric that actively uses the social links between users and the model of interaction between them. We present and evaluate an adaptive model for trust management in structured overlay networks, based on a Mobile Cloud architecture and considering a honeycomb overlay. Such a model can be useful for supporting advanced mobile market-share e-Commerce platforms, where users collaborate and exchange reliable information about, for example, products of interest, and for supporting ad-hoc business campaigns.

  7. Efficient Multi-Valued Bounded Model Checking for LTL over Quasi-Boolean Algebras

    OpenAIRE

    Andrade, Jefferson O.; Kameyama, Yukiyoshi

    2012-01-01

    Multi-valued Model Checking extends classical, two-valued model checking to multi-valued logic such as Quasi-Boolean logic. The added expressivity is useful in dealing with such concepts as incompleteness and uncertainty in target systems, while it comes with the cost of time and space. Chechik and others proposed an efficient reduction from multi-valued model checking problems to two-valued ones, but to the authors' knowledge, no study was done for multi-valued bounded model checking. In thi...

  8. Semiparametric Gaussian copula models : Geometry and efficient rank-based estimation

    NARCIS (Netherlands)

    Segers, J.; van den Akker, R.; Werker, B.J.M.

    2014-01-01

    We propose, for multivariate Gaussian copula models with unknown margins and structured correlation matrices, a rank-based, semiparametrically efficient estimator for the Euclidean copula parameter. This estimator is defined as a one-step update of a rank-based pilot estimator in the direction of

  9. An efficient fluid–structure interaction model for optimizing twistable flapping wings

    NARCIS (Netherlands)

    Wang, Q.; Goosen, J.F.L.; van Keulen, A.

    2017-01-01

    Spanwise twist can dominate the deformation of flapping wings and alters the aerodynamic performance and power efficiency of flapping wings by changing the local angle of attack. Traditional Fluid–Structure Interaction (FSI) models, based on Computational Structural Dynamics (CSD) and

  10. Design, characterization and modelling of high efficient solar powered lighting systems

    DEFF Research Database (Denmark)

    Svane, Frederik; Nymann, Peter; Poulsen, Peter Behrensdorff

    2016-01-01

    This paper discusses some of the major challenges in the development of L2L (Light-2-Light) products. It’s the lack of efficient converter electronics, modelling tools for dimensioning and furthermore, characterization facilities to support the successful development of the products. We report...

  11. 78 FR 35073 - Compass Efficient Model Portfolios, LLC and Compass EMP Funds Trust; Notice of Application

    Science.gov (United States)

    2013-06-11

    ... Balanced Fund, Compass EMP Multi-Asset Growth Fund, Compass EMP Alternative Strategies Fund, Compass EMP Balanced Volatility Weighted Fund, Compass EMP Growth Volatility Weighted Fund, and Compass EMP... Efficient Model Portfolios, LLC and Compass EMP Funds Trust; Notice of Application June 4, 2013. AGENCY...

  12. A hybrid version of swan for fast and efficient practical wave modelling

    NARCIS (Netherlands)

    M. Genseberger (Menno); J. Donners

    2016-01-01

    In the Netherlands, wave modelling with SWAN has become a main ingredient for coastal and inland water applications. However, computational times are relatively high. Therefore we investigated the parallel efficiency of the current MPI and OpenMP versions of SWAN. The MPI version is

  13. Navigational efficiency in a biased and correlated random walk model of individual animal movement.

    Science.gov (United States)

    Bailey, Joseph D; Wallis, Jamie; Codling, Edward A

    2018-01-01

    Understanding how an individual animal is able to navigate through its environment is a key question in movement ecology that can give insight into observed movement patterns and the mechanisms behind them. Efficiency of navigation is important for behavioral processes at a range of different spatio-temporal scales, including foraging and migration. Random walk models provide a standard framework for modeling individual animal movement and navigation. Here we consider a vector-weighted biased and correlated random walk (BCRW) model for directed movement (taxis), where external navigation cues are balanced with forward persistence. We derive a mathematical approximation of the expected navigational efficiency for any BCRW of this form and confirm the model predictions using simulations. We demonstrate how the navigational efficiency is related to the weighting given to forward persistence and external navigation cues, and highlight the counter-intuitive result that for low (but realistic) levels of error on forward persistence, a higher navigational efficiency is achieved by giving more weighting to this indirect navigation cue rather than direct navigational cues. We discuss and interpret the relevance of these results for understanding animal movement and navigation strategies. © 2017 by the Ecological Society of America.
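    The vector-weighted BCRW described above can be simulated directly. The sketch below is an illustrative formulation under stated assumptions (goal direction along the positive x-axis, unit step length, Gaussian angular error), not the paper's exact model; navigational efficiency is measured as net displacement toward the goal divided by total path length.

```python
import math
import random

def simulate_bcrw(steps=1000, w=0.5, sigma=0.3, seed=1):
    """Sketch of a vector-weighted biased and correlated random walk.
    w weights the external navigation cue (toward the goal) against
    forward persistence; sigma is the angular error on each heading.
    Returns navigational efficiency in [-1, 1]."""
    random.seed(seed)
    x = y = 0.0
    heading = random.uniform(-math.pi, math.pi)
    for _ in range(steps):
        # weighted sum of the goal unit vector (1, 0) and the persistence vector
        vx = w + (1.0 - w) * math.cos(heading)
        vy = (1.0 - w) * math.sin(heading)
        heading = math.atan2(vy, vx) + random.gauss(0.0, sigma)
        x += math.cos(heading)
        y += math.sin(heading)
    return x / steps  # unit steps, so path length == steps

for w in (0.2, 0.5, 0.8):
    print(f"w={w}: efficiency = {simulate_bcrw(w=w):.2f}")
```

Sweeping w (and the error levels on each cue) in such a simulation is one way to reproduce the kind of efficiency trade-off the paper analyses mathematically.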

  14. Compilation Of An Econometric Human Resource Efficiency Model For Project Management Best Practices

    OpenAIRE

    G. van Zyl; P. Venier

    2006-01-01

    The aim of the paper is to introduce a human resource efficiency model in order to rank the most important human resource driving forces for project management best practices. The results of the model will demonstrate how the human resource component of project management acts as the primary function to enhance organizational performance, codified through improved logical end-state programmes, work ethics and process contributions. Given the hypothesis that project management best practices i...

  15. Efficient Parameterization for Grey-box Model Identification of Complex Physical Systems

    DEFF Research Database (Denmark)

    Blanke, Mogens; Knudsen, Morten Haack

    2006-01-01

    Grey-box model identification preserves known physical structures in a model, but with limits to the possible excitation, all parameters are rarely identifiable, and different parametrizations give significantly different model quality. Convenient methods to show which parameterizations are the be...... that need be constrained to achieve satisfactory convergence. Identification of nonlinear models for a ship illustrates the concept....

  16. Efficient Calibration of Distributed Catchment Models Using Perceptual Understanding and Hydrologic Signatures

    Science.gov (United States)

    Hutton, C.; Wagener, T.; Freer, J. E.; Duffy, C.; Han, D.

    2015-12-01

    Distributed models offer the potential to resolve catchment systems in more detail, and therefore simulate the hydrological impacts of spatial changes in catchment forcing (e.g. landscape change). Such models may contain a large number of model parameters which are computationally expensive to calibrate. Even when calibration is possible, insufficient data can result in model parameter and structural equifinality. In order to help reduce the space of feasible models and supplement traditional outlet discharge calibration data, semi-quantitative information (e.g. knowledge of relative groundwater levels), may also be used to identify behavioural models when applied to constrain spatially distributed predictions of states and fluxes. The challenge is to combine these different sources of information together to identify a behavioural region of state-space, and efficiently search a large, complex parameter space to identify behavioural parameter sets that produce predictions that fall within this behavioural region. Here we present a methodology to incorporate different sources of data to efficiently calibrate distributed catchment models. Metrics of model performance may be derived from multiple sources of data (e.g. perceptual understanding and measured or regionalised hydrologic signatures). For each metric, an interval or inequality is used to define the behaviour of the catchment system, accounting for data uncertainties. These intervals are then combined to produce a hyper-volume in state space. The state space is then recast as a multi-objective optimisation problem, and the Borg MOEA is applied to first find, and then populate the hyper-volume, thereby identifying acceptable model parameter sets. We apply the methodology to calibrate the PIHM model at Plynlimon, UK by incorporating perceptual and hydrologic data into the calibration problem. Furthermore, we explore how to improve calibration efficiency through search initialisation from shorter model runs.
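    The membership test at the core of this approach, namely whether a model run's signatures fall inside the behavioural hyper-volume defined by per-metric intervals, is simple to express. The signature names and limits below are hypothetical placeholders, not values from the Plynlimon study.

```python
def is_behavioural(signatures: dict, limits: dict) -> bool:
    """True if every hydrologic signature of a model run lies inside its
    behavioural interval (the intervals jointly define a hyper-volume
    in the space of metrics, accounting for data uncertainty)."""
    return all(lo <= signatures[k] <= hi for k, (lo, hi) in limits.items())

# Hypothetical behavioural limits derived from perceptual understanding
# and regionalised signatures (illustrative values only).
limits = {"runoff_ratio": (0.3, 0.6), "baseflow_index": (0.4, 0.8)}

run = {"runoff_ratio": 0.45, "baseflow_index": 0.7}
print(is_behavioural(run, limits))
```

In the full methodology, a multi-objective search (the Borg MOEA in the paper) is then used to find and populate this region with acceptable parameter sets rather than testing runs one at a time.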

  17. An Efficient Algorithm for Modelling Duration in Hidden Markov Models, with a Dramatic Application

    DEFF Research Database (Denmark)

    Hauberg, Søren; Sloth, Jakob

    2008-01-01

    For many years, the hidden Markov model (HMM) has been one of the most popular tools for analysing sequential data. One frequently used special case is the left-right model, in which the order of the hidden states is known. If knowledge of the duration of a state is available it is not possible...... to represent it explicitly with an HMM. Methods for modelling duration with HMM's do exist (Rabiner in Proc. IEEE 77(2):257---286, [1989]), but they come at the price of increased computational complexity. Here we present an efficient and robust algorithm for modelling duration in HMM's, and this algorithm...

  18. Efficient Blind System Identification of Non-Gaussian Auto-Regressive Models with HMM Modeling of the Excitation

    DEFF Research Database (Denmark)

    Li, Chunjian; Andersen, Søren Vang

    2007-01-01

    We propose two blind system identification methods that exploit the underlying dynamics of non-Gaussian signals. The two signal models to be identified are: an Auto-Regressive (AR) model driven by a discrete-state Hidden Markov process, and the same model whose output is perturbed by white Gaussi...... outputs. The signal models are general and suitable to numerous important signals, such as speech signals and base-band communication signals. Applications to speech analysis and blind channel equalization are given to exemplify the efficiency of the new methods....

  19. Shifting Landscapes: The Impact of Centralized and Decentralized Nursing Station Models on the Efficiency of Care.

    Science.gov (United States)

    Fay, Lindsey; Carll-White, Allison; Schadler, Aric; Isaacs, Kathy B; Real, Kevin

    2017-10-01

    The focus of this research was to analyze the impact of decentralized and centralized hospital design layouts on the delivery of efficient care and the resultant level of caregiver satisfaction. An interdisciplinary team conducted a multiphased pre- and postoccupancy evaluation of a cardiovascular service line in an academic hospital that moved from a centralized to decentralized model. This study examined the impact of walkability, room usage, allocation of time, and visibility to better understand efficiency in the care environment. A mixed-methods data collection approach was utilized, which included pedometer measurements of staff walking distances, room usage data, time studies in patient rooms and nurses' stations, visibility counts, and staff questionnaires yielding qualitative and quantitative results. Overall, the data comparing the centralized and decentralized models yielded mixed results. This study's centralized design was rated significantly higher in its ability to support teamwork and efficient patient care with decreased staff walking distances. The decentralized unit design was found to positively influence proximity to patients in a larger design footprint and contribute to increased visits to and time spent in patient rooms. Among the factors contributing to caregiver efficiency and satisfaction are nursing station design, an integrated team approach, and the overall physical layout of the space on walkability, allocation of caregiver time, and visibility. However, unit design alone does not solely impact efficiency, suggesting that designers must consider the broader implications of a culture of care and processes.

  20. A theoretical model for prediction of deposition efficiency in cold spraying

    International Nuclear Information System (INIS)

    Li Changjiu; Li Wenya; Wang Yuyue; Yang Guanjun; Fukanuma, H.

    2005-01-01

    The deposition behavior of a spray particle stream with a particle size distribution was theoretically examined for cold spraying in terms of deposition efficiency as a function of particle parameters and spray angle. A theoretical relation was established between the deposition efficiency and the spray angle. Experiments were conducted by measuring deposition efficiency at different driving gas conditions and different spray angles using gas-atomized copper powder. It was found that the theoretically estimated results agreed reasonably well with the experimental ones. Based on the theoretical model and experimental results, it was revealed that the distribution of particle velocity resulting from the particle size distribution significantly influences the deposition efficiency in cold spraying. For the deposition efficiency to improve, the majority of particles must achieve a velocity higher than the critical velocity. The normal component of particle velocity contributes to the deposition of the particle under off-normal spray conditions. The deposition efficiency of sprayed particles decreased owing to the decrease of the normal velocity component as spraying was performed at an off-normal angle.

  1. Evaluation model of wind energy resources and utilization efficiency of wind farm

    Science.gov (United States)

    Ma, Jie

    2018-04-01

    Due to the large amount of wind energy abandoned by wind farms, the establishment of a wind farm evaluation model is particularly important for their future development. In this essay, considering a wind farm's wind energy situation, a Wind Energy Resource Model (WERM) and a Wind Energy Utilization Efficiency Model (WEUEM) are established to conduct a comprehensive assessment of the wind farm. The Wind Energy Resource Model (WERM) contains average wind speed, average wind power density and turbulence intensity, which together assess the wind energy resource. Based on our model, combined with actual measurement data from a wind farm, we calculated the indicators, and the results are in line with the actual situation. The future development of the wind farm can be planned on the basis of this result. Thus, the proposed approach to establishing a wind farm assessment model has application value.
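
The three indicators named in this abstract can be computed directly from a hub-height wind-speed record. A minimal sketch, assuming the standard definitions (the abstract gives no formulas) and an illustrative sea-level air density:

```python
import numpy as np

# Assumed standard definitions: mean speed, power density 0.5*rho*<v^3>,
# and turbulence intensity sigma/mean. The air density is illustrative.
RHO = 1.225  # air density, kg/m^3 (sea-level standard atmosphere)

def wind_resource_indicators(v):
    """v: 1-D array of wind-speed samples (m/s) at hub height."""
    v = np.asarray(v, dtype=float)
    mean_speed = v.mean()
    power_density = 0.5 * RHO * np.mean(v ** 3)       # W/m^2
    turbulence_intensity = v.std(ddof=1) / mean_speed  # dimensionless
    return mean_speed, power_density, turbulence_intensity

# toy ten-minute averages, not data from the paper
speeds = np.array([5.1, 6.3, 7.0, 5.8, 6.6, 7.4, 6.1, 5.5])
m, p, ti = wind_resource_indicators(speeds)
```

Note that the power density uses the mean of the cubed speeds, not the cube of the mean speed; the two differ whenever turbulence intensity is nonzero.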

  2. Towards Improving the Efficiency of Bayesian Model Averaging Analysis for Flow in Porous Media via the Probabilistic Collocation Method

    Directory of Open Access Journals (Sweden)

    Liang Xue

    2018-04-01

    Full Text Available The characterization of flow in subsurface porous media is associated with high uncertainty. To better quantify the uncertainty of groundwater systems, it is necessary to consider model uncertainty. Multi-model uncertainty analysis can be performed in the Bayesian model averaging (BMA) framework. However, BMA analysis via the Monte Carlo method is time consuming because it requires many forward model evaluations. A computationally efficient BMA analysis framework is proposed that uses the probabilistic collocation method to construct a response surface model, in which the log hydraulic conductivity field and hydraulic head are expanded into polynomials through the Karhunen–Loeve and polynomial chaos methods. A synthetic test is designed to validate the proposed response surface analysis method. The results show that the posterior model weights and the key statistics in the BMA framework can be accurately estimated: the relative errors of the mean and total variance in the BMA analysis results are only approximately 0.013% and 1.18%, respectively, while the proposed method is about 16 times more computationally efficient than the traditional BMA method.
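
The posterior model weights at the heart of BMA reduce to a normalized product of each model's evidence and its prior. A minimal numerical sketch, using illustrative log-evidence values rather than the paper's collocation-based surrogates:

```python
import numpy as np

def bma_weights(log_evidences, priors=None):
    """Posterior model weights w_k ∝ p(D|M_k) p(M_k), computed in log
    space for numerical stability (evidences are typically tiny)."""
    log_ev = np.asarray(log_evidences, dtype=float)
    if priors is None:
        priors = np.full(log_ev.size, 1.0 / log_ev.size)  # uniform prior
    w = np.exp(log_ev - log_ev.max()) * np.asarray(priors)
    return w / w.sum()

# three candidate conceptual models with made-up log evidences
weights = bma_weights([-120.4, -118.9, -125.0])

# the BMA predictive mean is the weight-averaged prediction
preds = np.array([1.8, 2.1, 1.5])   # e.g. hydraulic head at one location
bma_mean = weights @ preds
```

The subtraction of the maximum log evidence before exponentiating is the usual log-sum-exp trick; without it the raw evidences would underflow to zero.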

  3. An efficient modeling of fine air-gaps in tokamak in-vessel components for electromagnetic analyses

    International Nuclear Information System (INIS)

    Oh, Dong Keun; Pak, Sunil; Jhang, Hogun

    2012-01-01

    Highlights: ► A simple and efficient modeling technique is introduced to avoid the undesirable, massive air mesh usually encountered when modeling fine structures in tokamak in-vessel components. ► The modeling method is based on decoupled nodes at the boundary elements mimicking the air gaps. ► We demonstrate the viability and efficacy of this method by comparing it with brute-force modeling of air gaps and with an effective-resistivity approximation in place of detailed modeling. ► Application of the method to the ITER machine was successfully carried out without sacrificing computational resources and speed. - Abstract: A simple and efficient modeling technique is presented for a proper analysis of complicated eddy current flows in conducting structures with fine air gaps. It is based on the idea of replacing a slit with the decoupled boundary of finite elements. The viability and efficacy of the technique are demonstrated on a simple problem. Application of the method to electromagnetic load analyses during plasma disruptions in ITER has been successfully carried out without sacrificing computational resources and speed. This shows the proposed method is applicable to a practical system with complicated geometrical structures.

  4. Efficient Implementation of Solvers for Linear Model Predictive Control on Embedded Devices

    DEFF Research Database (Denmark)

    Frison, Gianluca; Kwame Minde Kufoalor, D.; Imsland, Lars

    2014-01-01

    This paper proposes a novel approach for the efficient implementation of solvers for linear MPC on embedded devices. The main focus is to explain in detail the approach used to optimize the linear algebra for selected low-power embedded devices, and to show how the high-performance implementation...

  5. Reduced Fracture Finite Element Model Analysis of an Efficient Two-Scale Hybrid Embedded Fracture Model

    KAUST Repository

    Amir, Sahar Z.

    2017-06-09

    A Hybrid Embedded Fracture (HEF) model was developed to reduce various computational costs while maintaining physical accuracy (Amir and Sun, 2016). HEF splits the computations into a fine scale and a coarse scale. The fine scale solves analytically for the matrix-fracture flux exchange parameter; the coarse scale solves for the properties of the entire system. In the literature, fractures were assumed to be either vertical or horizontal for simplification (Warren and Root, 1963), and the matrix-fracture flux exchange parameter was given a few equations built on that assumption (Kazemi, 1968; Lemonnier and Bourbiaux, 2010). However, such simplified cases do not apply directly to actual random fracture shapes, directions, orientations, etc. This paper shows that the HEF fine-scale analytic solution (Amir and Sun, 2016) reproduces the flux exchange parameter found in the literature for the vertical and horizontal fracture cases. For other fracture cases, the flux exchange parameter changes according to the angle, slope, direction, etc. This conclusion arises from the analysis of both the Discrete Fracture Network (DFN) and HEF schemes. The behavior of both schemes is analyzed under identical fracture conditions, and the results are shown and discussed. Finally, a generalization is illustrated for any slightly compressible single-phase fluid within fractured porous media, and its results are discussed.

  6. Reduced Fracture Finite Element Model Analysis of an Efficient Two-Scale Hybrid Embedded Fracture Model

    KAUST Repository

    Amir, Sahar Z.; Chen, Huangxin; Sun, Shuyu

    2017-01-01

    A Hybrid Embedded Fracture (HEF) model was developed to reduce various computational costs while maintaining physical accuracy (Amir and Sun, 2016). HEF splits the computations into a fine scale and a coarse scale. The fine scale solves analytically for the matrix-fracture flux exchange parameter; the coarse scale solves for the properties of the entire system. In the literature, fractures were assumed to be either vertical or horizontal for simplification (Warren and Root, 1963), and the matrix-fracture flux exchange parameter was given a few equations built on that assumption (Kazemi, 1968; Lemonnier and Bourbiaux, 2010). However, such simplified cases do not apply directly to actual random fracture shapes, directions, orientations, etc. This paper shows that the HEF fine-scale analytic solution (Amir and Sun, 2016) reproduces the flux exchange parameter found in the literature for the vertical and horizontal fracture cases. For other fracture cases, the flux exchange parameter changes according to the angle, slope, direction, etc. This conclusion arises from the analysis of both the Discrete Fracture Network (DFN) and HEF schemes. The behavior of both schemes is analyzed under identical fracture conditions, and the results are shown and discussed. Finally, a generalization is illustrated for any slightly compressible single-phase fluid within fractured porous media, and its results are discussed.

  7. Model of Efficiency Assessment of Regulation In The Banking Seсtor

    Directory of Open Access Journals (Sweden)

    Irina V. Larionova

    2014-01-01

    Full Text Available In this article, the modern system of regulation of the national banking sector is examined; in the author's view, it requires theoretical analysis, structuring, and an explicit account of the efficiency of its functioning. The system of regulation is considered on a systemic basis, as a set of elements and a mechanism for their interaction, formed with regard to the target reference points of regulation. It is emphasized that regulation contains a contradiction: achieving financial stability of the banking sector, as a rule, constrains economic growth. The need to develop theoretical ideas about the efficiency of banking-sector regulation is especially relevant in light of recent events: the revocation of commercial banks' licenses to conduct banking activity, the high cost of credit resources for economic agents, and the banking sector's insignificant contribution to economic growth. The author proposes criteria for the efficiency of banking-sector regulation: functional, operational, and social-economic efficiency. Functional efficiency reflects the ability of each subsystem of regulation to carry out the functions prescribed by law. Operational efficiency describes the appropriateness of the costs incurred by the regulator and by commercial banks in connection with regulatory influence. Finally, social-economic efficiency concerns the degree to which the banking sector meets the requirements of the national economy, and the responsibility of the banking business to society. For each criterion, a set of quantitative and qualitative indicators is proposed, allowing a corresponding assessment of the working model of regulation, together with an aggregated expert assessment of the Russian system of regulation of the banking sector

  8. Efficiency of lipofection combined with hyperthermia in Lewis lung carcinoma cells and a rodent pleural dissemination model of lung carcinoma.

    Science.gov (United States)

    Okita, Atsushi; Mushiake, Hiroyuki; Tsukuda, Kazunori; Aoe, Motoi; Murakami, Masakazu; Andou, Akio; Shimizu, Nobuyoshi

    2004-06-01

    We have previously reported that hyperthermia at 41 degrees C enhanced lipofection-mediated gene transduction into cultured cells. In this study, we applied the hyperthermia technique to novel cationic liposome (Lipofectamine 2000)-mediated gene transfection into Lewis lung carcinoma cells in vitro and in vivo. In vitro, transfection efficiencies were 38.9+/-3.3% by lipofection alone and 52.1+/-2.6% by lipofection with hyperthermia for 30 min, and 62.5+/-5.5% and 81.4+/-3.2%, respectively, for 1 h. Hyperthermia significantly enhanced gene transfection efficiency, 1.2-1.4-fold relative to lipofection alone. We also evaluated the effect of hyperthermia in a murine pleural dissemination model of lung carcinoma, and developed a model in which the mice tolerated hyperthermia combined with lipofection well. In spite of repeated treatments, transfection efficiencies were very low, and we could not show augmentation of gene transfection by hyperthermia. Although Lipofectamine 2000 showed a strong gene transduction effect and hyperthermia augmented that effect in vitro, further evaluation is needed before both techniques can be applied in vivo.

  9. Monitoring Crop Productivity over the U.S. Corn Belt using an Improved Light Use Efficiency Model

    Science.gov (United States)

    Wu, X.; Xiao, X.; Zhang, Y.; Qin, Y.; Doughty, R.

    2017-12-01

    Large-scale monitoring of crop yield is of great significance for forecasting food production and prices and for ensuring food security. Satellite data provide temporally and spatially continuous information that, by itself or in combination with other data or models, makes it possible to monitor and understand agricultural productivity regionally. In this study, we first used an improved light use efficiency model, the Vegetation Photosynthesis Model (VPM), to simulate gross primary production (GPP). Model evaluation showed that the simulated GPP (GPPVPM) captured well the spatio-temporal variation of GPP derived from FLUXNET sites. We then applied GPPVPM to monitor crop productivity for corn and soybean over the U.S. Corn Belt, benchmarked against county-level crop yield statistics, and found that the VPM-based approach provides good estimates (R2 = 0.88, slope = 1.03). We further show the impacts of climate extremes on crop productivity and carbon use efficiency. The study indicates the great potential of VPM for estimating crop yield and for understanding crop yield responses to climate variability and change.
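
Light-use-efficiency models of this family compute GPP as the product of absorbed light and a realized efficiency that is down-regulated by environmental scalars. A minimal sketch of that structure; the baseline efficiency and scalar values here are illustrative assumptions, not the paper's calibrated parameters:

```python
def vpm_gpp(par, fapar_chl, eps0=0.05, t_scalar=1.0, w_scalar=1.0, p_scalar=1.0):
    """GPP from the light-use-efficiency form GPP = eps_g * FAPAR * PAR.

    par       : photosynthetically active radiation (e.g. mol photons m-2 d-1)
    fapar_chl : fraction of PAR absorbed by chlorophyll (0..1)
    eps0      : maximal light use efficiency (illustrative value)
    *_scalar  : temperature/water/phenology down-regulation factors (0..1)
    """
    eps_g = eps0 * t_scalar * w_scalar * p_scalar  # realized efficiency
    return eps_g * fapar_chl * par

# a warm but moderately water-stressed day (made-up numbers)
gpp = vpm_gpp(par=30.0, fapar_chl=0.6, t_scalar=0.9, w_scalar=0.8)
```

The multiplicative scalars are what let the model express the climate-extreme impacts the abstract mentions: a drought pushes the water scalar down and GPP falls proportionally.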

  10. Numerical modelling of high efficiency InAs/GaAs intermediate band solar cell

    Science.gov (United States)

    Imran, Ali; Jiang, Jianliang; Eric, Debora; Yousaf, Muhammad

    2018-01-01

    Quantum dot (QD) intermediate band solar cells (IBSCs) are among the most attractive candidates for the next generation of photovoltaic applications. In this paper, a theoretical model of an InAs/GaAs device is proposed, in which we calculate the effect of varying the thickness of the intrinsic and IB layers on the efficiency of the solar cell using detailed balance theory. The IB energy has been optimized for different IB layer thicknesses. A maximum efficiency of 46.6% is calculated for the IB material under maximum optical concentration.

  11. Dynamic modeling and verification of an energy-efficient greenhouse with an aquaponic system using TRNSYS

    Science.gov (United States)

    Amin, Majdi Talal

    Currently, there is no integrated dynamic simulation program for an energy efficient greenhouse coupled with an aquaponic system. This research is intended to promote the thermal management of greenhouses in order to provide sustainable food production with the lowest possible energy use and material waste. A brief introduction of greenhouses, passive houses, energy efficiency, renewable energy systems, and their applications are included for ready reference. An experimental working scaled-down energy-efficient greenhouse was built to verify and calibrate the results of a dynamic simulation model made using TRNSYS software. However, TRNSYS requires the aid of Google SketchUp to develop 3D building geometry. The simulation model was built following the passive house standard as closely as possible. The new simulation model was then utilized to design an actual greenhouse with Aquaponics. It was demonstrated that the passive house standard can be applied to improve upon conventional greenhouse performance, and that it is adaptable to different climates. The energy-efficient greenhouse provides the required thermal environment for fish and plant growth, while eliminating the need for conventional cooling and heating systems.

  12. A resource allocation model to support efficient air quality management in South Africa

    Directory of Open Access Journals (Sweden)

    U Govender

    2009-06-01

    Full Text Available Research into management interventions that create the enabling environment required for growth and development in South Africa is both timely and appropriate. In the research reported in this paper, the authors investigated the level of efficiency of the Air Quality Units within the three spheres of government, viz. the National, Provincial, and Local Departments of Environmental Management in South Africa, with a view to developing a resource allocation model. The inputs to the model were calculated from the actual man-hours spent on twelve selected activities relating to project management, knowledge management and change management. The outputs assessed were aligned to the requirements of the mandates of these Departments. Several models were explored using multiple regression and stepwise techniques, and the model that best explained the efficiency of the organisations from the input data was selected. Logistic regression analysis was identified as the most appropriate tool. This model is used to predict the required resources per Air Quality Unit in the different spheres of government, in an attempt to support and empower the air quality regime to achieve improved output efficiency.
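
The logistic-regression step maps man-hour inputs to a binary efficiency outcome. A self-contained sketch of that idea via gradient descent on the log-loss; the single feature, the labels, and all numbers are hypothetical, not the study's data:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Fit a logistic regression by batch gradient descent on log-loss."""
    X = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))        # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)        # average log-loss gradient
    return w

# toy data: tens of man-hours spent vs. whether the unit met its mandate
hours = np.array([[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]])
met = np.array([0, 0, 0, 1, 1, 1])

w = fit_logistic(hours, met)
# predicted probability that a unit logging 55 man-hours meets its mandate
pred = 1.0 / (1.0 + np.exp(-(w[0] + w[1] * 5.5)))
```

Inverting the fitted curve gives the resource-allocation reading: the man-hours at which the predicted probability crosses a chosen threshold is the resourcing level the model recommends.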

  13. A computationally efficient method for full-core conjugate heat transfer modeling of sodium fast reactors

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Rui, E-mail: rhu@anl.gov; Yu, Yiqi

    2016-11-15

    Highlights: • Developed a computationally efficient method for full-core conjugate heat transfer modeling of sodium fast reactors. • Applied fully-coupled JFNK solution scheme to avoid the operator-splitting errors. • The accuracy and efficiency of the method is confirmed with a 7-assembly test problem. • The effects of different spatial discretization schemes are investigated and compared to the RANS-based CFD simulations. - Abstract: For efficient and accurate temperature predictions of sodium fast reactor structures, a 3-D full-core conjugate heat transfer modeling capability is developed for an advanced system analysis tool, SAM. The hexagon lattice core is modeled with 1-D parallel channels representing the subassembly flow, and 2-D duct walls and inter-assembly gaps. The six sides of the hexagon duct wall and near-wall coolant region are modeled separately to account for different temperatures and heat transfer between coolant flow and each side of the duct wall. The Jacobian Free Newton Krylov (JFNK) solution method is applied to solve the fluid and solid field simultaneously in a fully coupled fashion. The 3-D full-core conjugate heat transfer modeling capability in SAM has been demonstrated by a verification test problem with 7 fuel assemblies in a hexagon lattice layout. Additionally, the SAM simulation results are compared with RANS-based CFD simulations. Very good agreements have been achieved between the results of the two approaches.
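
The fully coupled JFNK scheme the abstract describes solves the discretized fluid and solid equations as one nonlinear residual system, with Jacobian-vector products approximated inside a Krylov solver. A toy illustration using SciPy's `newton_krylov` on a 1-D steady conduction problem with temperature-dependent conductivity; the physics is a stand-in, not SAM's full-core model:

```python
import numpy as np
from scipy.optimize import newton_krylov

n = 50
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]

def residual(T):
    """Residual of d/dx(k(T) dT/dx) + q = 0 with fixed end temperatures."""
    k = 1.0 + 0.1 * T                    # nonlinear conductivity (made up)
    r = np.empty_like(T)
    r[0] = T[0] - 0.0                    # left wall held at 0
    r[-1] = T[-1] - 1.0                  # right wall held at 1
    kе = 0.5 * (k[1:-1] + k[2:])         # face conductivities
    kw = 0.5 * (k[1:-1] + k[:-2])
    r[1:-1] = (kе * (T[2:] - T[1:-1]) - kw * (T[1:-1] - T[:-2])) / h**2 + 1.0
    return r

# Newton outer iteration, Krylov inner solve, no explicit Jacobian assembled
T = newton_krylov(residual, np.zeros(n))
```

Because the Jacobian is only ever applied to vectors via finite differences of `residual`, the same pattern scales to coupled multi-physics residuals where assembling the full Jacobian would be prohibitive, which is the point of the fully coupled approach over operator splitting.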

  14. Jet formation and equatorial superrotation in Jupiter's atmosphere: Numerical modelling using a new efficient parallel code

    Science.gov (United States)

    Rivier, Leonard Gilles

    Using an efficient parallel code solving the primitive equations of atmospheric dynamics, the jet structure of a Jupiter-like atmosphere is modeled. In the first part of this thesis, a parallel spectral code solving both the shallow water equations and the multi-level primitive equations of atmospheric dynamics is built. The implementation of this code, called BOB, is done so that it runs effectively on an inexpensive cluster of workstations. A one-dimensional decomposition and transposition method ensuring load balancing among processes is used. The Legendre transform is cache-blocked, and computing the Legendre polynomials used in the spectral method on the fly produces a lower memory footprint and enables high-resolution runs on machines with relatively little memory. Performance studies are done using a cluster of workstations located at the National Center for Atmospheric Research (NCAR). BOB's performance is compared to that of the parallel benchmark code PSTSWM and the dynamical core of NCAR's CCM3.6.6; in both cases, the comparison favors BOB. In the second part of this thesis, the primitive equation version of the code described in part I is used to study the formation of organized zonal jets and equatorial superrotation in a planetary atmosphere whose parameters are chosen to best model the upper atmosphere of Jupiter. Two levels are used in the vertical, and only large-scale forcing is present. The model is forced towards a baroclinically unstable flow, so that eddies are generated by baroclinic instability. We consider several types of forcing, acting on either the temperature or the momentum field. We show that only under very specific parametric conditions do zonally elongated structures form and persist, resembling the jet structure observed near the cloud-top level (1 bar) on Jupiter.
    We also study the effect of an equatorial heat source, meant to be a crude representation of the effect of the deep convective planetary interior on the outer atmospheric layer. We

  15. Modeling of non-linear CHP efficiency curves in distributed energy systems

    DEFF Research Database (Denmark)

    Milan, Christian; Stadler, Michael; Cardoso, Gonçalo

    2015-01-01

    Distributed energy resources gain an increased importance in commercial and industrial building design. Combined heat and power (CHP) units are considered as one of the key technologies for cost and emission reduction in buildings. In order to make optimal decisions on investment and operation...... for these technologies, detailed system models are needed. These models are often formulated as linear programming problems to keep computational costs and complexity in a reasonable range. However, CHP systems involve variations of the efficiency for large nameplate capacity ranges and in case of part load operation......, which can be even of non-linear nature. Since considering these characteristics would turn the models into non-linear problems, in most cases only constant efficiencies are assumed. This paper proposes possible solutions to address this issue. For a mixed integer linear programming problem two...
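
One standard remedy the abstract alludes to is replacing the non-linear part-load efficiency curve with a piecewise-linear fuel-versus-power map, which keeps the dispatch problem linear (mixed-integer). A sketch of the linearization step; the efficiency curve and nameplate capacity are made-up illustrations:

```python
import numpy as np

def efficiency(load_frac):
    """Hypothetical CHP electrical efficiency, poorest at low part load."""
    return 0.20 + 0.15 * np.sqrt(load_frac)

p_max = 500.0                              # kW nameplate (illustrative)
breakpoints = np.linspace(0.2, 1.0, 5)     # allowed part-load range
power = breakpoints * p_max                # kW electric at each breakpoint
fuel = power / efficiency(breakpoints)     # kW fuel at each breakpoint

def fuel_use_linearized(p):
    """Fuel input for power output p, interpolated between breakpoints --
    the same relation an MILP would encode with SOS2/lambda variables."""
    return np.interp(p, power, fuel)

f300 = fuel_use_linearized(300.0)          # exact at a breakpoint
```

Between breakpoints the map under- or over-estimates the true non-linear curve by a bounded amount, so adding breakpoints trades model size for accuracy, which is exactly the tension the paper's formulations address.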

  16. A Power Efficient Exaflop Computer Design for Global Cloud System Resolving Climate Models.

    Science.gov (United States)

    Wehner, M. F.; Oliker, L.; Shalf, J.

    2008-12-01

    Exascale computers would allow routine ensemble modeling of the global climate system at the cloud system resolving scale. Power and cost requirements of traditional architecture systems are likely to delay such capability for many years. We present an alternative route to the exascale using embedded processor technology to design a system optimized for ultra high resolution climate modeling. These power efficient processors, used in consumer electronic devices such as mobile phones, portable music players, cameras, etc., can be tailored to the specific needs of scientific computing. We project that a system capable of integrating a kilometer scale climate model a thousand times faster than real time could be designed and built in a five year time scale for US$75M with a power consumption of 3MW. This is cheaper, more power efficient and sooner than any other existing technology.

  17. True coincidence summing correction and mathematical efficiency modeling of a well detector

    Energy Technology Data Exchange (ETDEWEB)

    Jäderström, H., E-mail: henrik.jaderstrom@canberra.com [CANBERRA Industries Inc., 800 Research Parkway, Meriden, CT 06450 (United States); Mueller, W.F. [CANBERRA Industries Inc., 800 Research Parkway, Meriden, CT 06450 (United States); Atrashkevich, V. [Stroitely St 4-4-52, Moscow (Russian Federation); Adekola, A.S. [CANBERRA Industries Inc., 800 Research Parkway, Meriden, CT 06450 (United States)

    2015-06-01

    True coincidence summing (TCS) occurs when two or more photons emitted from the same decay of a radioactive nuclide are detected within the resolving time of the gamma-ray detector. TCS changes the net peak areas of the affected full-energy peaks in the spectrum, and the nuclide activity is rendered inaccurate if no correction is performed. TCS is independent of the count rate, but it is strongly dependent on the peak and total efficiency, as well as on the characteristics of a given nuclear decay. TCS effects are very prominent for well detectors because of their high efficiencies, which makes accounting for TCS a necessity. For CANBERRA's recently released Small Anode Germanium (SAGe) well detector, an extension of CANBERRA's mathematical efficiency calibration method (In Situ Object Calibration Software, or ISOCS, and Laboratory SOurceless Calibration Software, or LabSOCS) has been developed that allows calculation of peak and total efficiencies for SAGe well detectors. The extension also makes it possible to calculate TCS corrections for well detectors using the standard algorithm provided with CANBERRA's spectroscopy software Genie 2000. The peak and total efficiencies from ISOCS/LabSOCS have been compared to MCNP, with agreement within 3% for peak efficiencies and 10% for total efficiencies for energies above 30 keV. A sample containing Ra-226 daughters was measured within the well and analyzed with and without TCS correction; applying the correction factor shows significant improvement in the activity determination over the energy range 46–2447 keV. The implementation of ISOCS/LabSOCS for well detectors offers a powerful tool for efficiency calibration of these detectors. The automated algorithm to correct for TCS effects in well detectors makes nuclide-specific calibration unnecessary and offers flexibility in carrying out gamma spectral analysis.

  18. An Efficient Data Compression Model Based on Spatial Clustering and Principal Component Analysis in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yihang Yin

    2015-08-01

    Full Text Available Wireless sensor networks (WSNs have been widely used to monitor the environment, and sensors in WSNs are usually power constrained. Because inner-node communication consumes most of the power, efficient data compression schemes are needed to reduce the data transmission to prolong the lifetime of WSNs. In this paper, we propose an efficient data compression model to aggregate data, which is based on spatial clustering and principal component analysis (PCA. First, sensors with a strong temporal-spatial correlation are grouped into one cluster for further processing with a novel similarity measure metric. Next, sensor data in one cluster are aggregated in the cluster head sensor node, and an efficient adaptive strategy is proposed for the selection of the cluster head to conserve energy. Finally, the proposed model applies principal component analysis with an error bound guarantee to compress the data and retain the definite variance at the same time. Computer simulations show that the proposed model can greatly reduce communication and obtain a lower mean square error than other PCA-based algorithms.
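
The compression step described above keeps only the leading principal components needed to retain a target fraction of the variance. A self-contained sketch on synthetic correlated sensor readings; the data and the 95% retention bound are illustrative, not the paper's settings:

```python
import numpy as np

# synthetic cluster: 200 readings from 8 correlated sensors
rng = np.random.default_rng(0)
readings = rng.normal(size=(200, 8)) @ rng.normal(size=(8, 8))

mean = readings.mean(axis=0)
centered = readings - mean

# principal axes from the SVD of the centered data matrix
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
var = s ** 2 / (len(readings) - 1)

# smallest k whose components retain >= 95% of total variance
k = int(np.searchsorted(np.cumsum(var) / var.sum(), 0.95)) + 1

compressed = centered @ Vt[:k].T          # k coefficients per reading (sent)
restored = compressed @ Vt[:k] + mean     # reconstruction at the sink
mse = np.mean((restored - readings) ** 2)
```

Only the `k` coefficients per reading (plus the mean and the `k` basis vectors, sent once) cross the radio link, which is where the energy saving comes from; the variance threshold is the "error bound guarantee" knob.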

  19. An Efficient Data Compression Model Based on Spatial Clustering and Principal Component Analysis in Wireless Sensor Networks.

    Science.gov (United States)

    Yin, Yihang; Liu, Fengzheng; Zhou, Xiang; Li, Quanzhong

    2015-08-07

    Wireless sensor networks (WSNs) have been widely used to monitor the environment, and sensors in WSNs are usually power constrained. Because inner-node communication consumes most of the power, efficient data compression schemes are needed to reduce the data transmission to prolong the lifetime of WSNs. In this paper, we propose an efficient data compression model to aggregate data, which is based on spatial clustering and principal component analysis (PCA). First, sensors with a strong temporal-spatial correlation are grouped into one cluster for further processing with a novel similarity measure metric. Next, sensor data in one cluster are aggregated in the cluster head sensor node, and an efficient adaptive strategy is proposed for the selection of the cluster head to conserve energy. Finally, the proposed model applies principal component analysis with an error bound guarantee to compress the data and retain the definite variance at the same time. Computer simulations show that the proposed model can g