Serena, Elena; Zatti, Susi; Zoso, Alice; Lo Verso, Francesca; Tedesco, F Saverio; Cossu, Giulio; Elvassore, Nicola
2016-12-01
Restoration of the protein dystrophin on the muscle membrane is the goal of many research lines aimed at curing Duchenne muscular dystrophy (DMD). Results of ongoing preclinical and clinical trials suggest that partial restoration of dystrophin might be sufficient to significantly reduce muscle damage. Different myogenic progenitors are candidates for cell therapy of muscular dystrophies, but only satellite cells and pericytes have already entered clinical experimentation. This study aimed to provide in vitro quantitative evidence of the ability of mesoangioblasts to restore dystrophin, in terms of protein accumulation and distribution, within myotubes derived from DMD patients, using a microengineered model. We designed an ad hoc experimental strategy to miniaturize on a chip the standard process of muscle regeneration, independent of variables such as inflammation and fibrosis. It is based on the coculture, at different ratios, of human dystrophin-positive myogenic progenitors and dystrophin-negative myoblasts on a substrate with muscle-like physiological stiffness and cell micropatterns. Results showed that both healthy myoblasts and mesoangioblasts restored dystrophin expression in DMD myotubes. However, mesoangioblasts proved unexpectedly more efficient than myoblasts at dystrophin production, both in the amount of protein produced (40% vs. 15%) and in the length of the dystrophin membrane domain (210-240 µm vs. 40-70 µm). These results show that our microscaled in vitro model of human DMD skeletal muscle validated previous in vivo preclinical work and may be used to predict the efficacy of new methods aimed at enhancing dystrophin accumulation and distribution before they are tested in vivo, reducing the time, costs, and variability of clinical experimentation.
Energy Technology Data Exchange (ETDEWEB)
Bourgeault, Adeline, E-mail: bourgeault@ensil.unilim.fr [Cemagref, Unite de Recherche Hydrosystemes et Bioprocedes, 1 rue Pierre-Gilles de Gennes, 92761 Antony (France); FIRE, FR-3020, 4 place Jussieu, 75005 Paris (France); Gourlay-France, Catherine, E-mail: catherine.gourlay@cemagref.fr [Cemagref, Unite de Recherche Hydrosystemes et Bioprocedes, 1 rue Pierre-Gilles de Gennes, 92761 Antony (France); FIRE, FR-3020, 4 place Jussieu, 75005 Paris (France); Priadi, Cindy, E-mail: cindy.priadi@eng.ui.ac.id [LSCE/IPSL CEA-CNRS-UVSQ, Avenue de la Terrasse, 91198 Gif-sur-Yvette (France); Ayrault, Sophie, E-mail: Sophie.Ayrault@lsce.ipsl.fr [LSCE/IPSL CEA-CNRS-UVSQ, Avenue de la Terrasse, 91198 Gif-sur-Yvette (France); Tusseau-Vuillemin, Marie-Helene, E-mail: Marie-helene.tusseau@ifremer.fr [IFREMER Technopolis 40, 155 rue Jean-Jacques Rousseau, 92138 Issy-Les-Moulineaux (France)
2011-12-15
This study investigates the ability of the biodynamic model to predict the trophic bioaccumulation of cadmium (Cd), chromium (Cr), copper (Cu), nickel (Ni) and zinc (Zn) in a freshwater bivalve. Zebra mussels were transplanted to three sites along the Seine River (France) and collected monthly for 11 months. Measurements of the metal body burdens in mussels were compared with the predictions from the biodynamic model. The exchangeable fraction of metal particles did not account for the bioavailability of particulate metals, since it did not capture the differences between sites. The assimilation efficiency (AE) parameter is necessary to take into account biotic factors influencing particulate metal bioavailability. The biodynamic model, applied with AEs from the literature, overestimated the measured concentrations in zebra mussels, the extent of overestimation being site-specific. Therefore, an original methodology was proposed for in situ AE measurements for each site and metal. - Highlights: > The exchangeable fraction of metal particles did not account for the bioavailability of particulate metals. > Site-specific biodynamic parameters are needed. > Field-determined AEs provide a good fit between the biodynamic model predictions and bioaccumulation measurements. - The interpretation of metal bioaccumulation in transplanted zebra mussels with biodynamic modelling highlights the need for site-specific assimilation efficiencies of particulate metals.
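The biodynamic model referred to above balances dissolved uptake, dietary uptake, and loss. A minimal sketch, assuming the standard steady-state form (all parameter values below are illustrative, not taken from the study):

```python
# Biodynamic bioaccumulation model, steady-state form:
#   dC/dt = k_u*C_w + AE*IR*C_p - (k_e + g)*C
#   =>  C_ss = (k_u*C_w + AE*IR*C_p) / (k_e + g)

def steady_state_burden(k_u, C_w, AE, IR, C_p, k_e, g=0.0):
    """Steady-state metal body burden (e.g. ug/g dry weight).

    k_u: dissolved uptake rate constant (L/g/d), C_w: dissolved metal (ug/L),
    AE: assimilation efficiency (-), IR: ingestion rate (g/g/d),
    C_p: particulate metal (ug/g), k_e: efflux rate (1/d), g: growth rate (1/d).
    """
    return (k_u * C_w + AE * IR * C_p) / (k_e + g)

# Lowering AE from a literature value to a site-specific one lowers the
# predicted burden, which is the abstract's explanation for the overestimation.
burden_lit  = steady_state_burden(k_u=0.2, C_w=0.05, AE=0.3, IR=0.5, C_p=10.0, k_e=0.02)
burden_site = steady_state_burden(k_u=0.2, C_w=0.05, AE=0.1, IR=0.5, C_p=10.0, k_e=0.02)
```

With these invented parameters the literature AE predicts roughly three times the burden of the site-specific AE, illustrating how sensitive the prediction is to that single parameter.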
Distinct iPS Cells Show Different Cardiac Differentiation Efficiency.
Ohno, Yohei; Yuasa, Shinsuke; Egashira, Toru; Seki, Tomohisa; Hashimoto, Hisayuki; Tohyama, Shugo; Saito, Yuki; Kunitomi, Akira; Shimoji, Kenichiro; Onizuka, Takeshi; Kageyama, Toshimi; Yae, Kojiro; Tanaka, Tomofumi; Kaneda, Ruri; Hattori, Fumiyuki; Murata, Mitsushige; Kimura, Kensuke; Fukuda, Keiichi
2013-01-01
Patient-specific induced pluripotent stem (iPS) cells can be generated by introducing transcription factors that are highly expressed in embryonic stem (ES) cells into somatic cells. This opens up new possibilities for cell transplantation-based regenerative medicine by overcoming the ethical issues and immunological problems associated with ES cells. Despite the development of various methods for the generation of iPS cells that have resulted in increased efficiency, safety, and general versatility, it remains unknown which types of iPS cells are suitable for clinical use. Therefore, the aims of the present study were to assess (1) the differentiation potential, time course, and efficiency of different types of iPS cell lines to differentiate into cardiomyocytes in vitro and (2) the properties of the iPS cell-derived cardiomyocytes. We found that high-quality iPS cells exhibited better cardiomyocyte differentiation in terms of the time course and efficiency of differentiation than low-quality iPS cells, which hardly ever differentiated into cardiomyocytes. Because of the different properties of the various iPS cell lines such as cardiac differentiation efficiency and potential safety hazards, newly established iPS cell lines must be characterized prior to their use in cardiac regenerative medicine.
SPAR Model Structural Efficiencies
Energy Technology Data Exchange (ETDEWEB)
John Schroeder; Dan Henry
2013-04-01
The Nuclear Regulatory Commission (NRC) and the Electric Power Research Institute (EPRI) are supporting initiatives aimed at improving the quality of probabilistic risk assessments (PRAs). Included in these initiatives is the resolution of key technical issues that have been judged to have the most significant influence on the baseline core damage frequency of the NRC's Standardized Plant Analysis Risk (SPAR) models and licensee PRA models. Previous work addressed issues associated with support system initiating event analysis and loss of off-site power/station blackout analysis. The key technical issues were: • Development of a standard methodology and implementation of support system initiating events • Treatment of loss of offsite power • Development of a standard approach for emergency core cooling following containment failure. Some of the related issues were not fully resolved, and this project continues the effort to resolve outstanding issues. The work scope was intended to include substantial collaboration with EPRI; however, EPRI has had other, higher-priority initiatives to support. This project has therefore addressed SPAR modeling issues: • SPAR model transparency • Common cause failure modeling deficiencies and approaches • AC and DC modeling deficiencies and approaches • Instrumentation and control system modeling deficiencies and approaches
ShowFlow: A practical interface for groundwater modeling
Energy Technology Data Exchange (ETDEWEB)
Tauxe, J.D.
1990-12-01
ShowFlow was created to provide a user-friendly, intuitive environment for researchers and students who use computer modeling software. What traditionally has been a workplace available only to those familiar with command-line based computer systems is now within reach of almost anyone interested in the subject of modeling. In the case of this edition of ShowFlow, the user can easily experiment with simulations using the steady-state Gaussian plume groundwater pollutant transport model SSGPLUME, though ShowFlow can be rewritten to provide a similar interface for any computer model. Included in this thesis is all the source code for both the ShowFlow application for Microsoft® Windows™ and the SSGPLUME model, a User's Guide, and a Developer's Guide for converting ShowFlow to run other model programs. 18 refs., 13 figs.
Reciprocal Ontological Models Show Indeterminism Comparable to Quantum Theory
Bandyopadhyay, Somshubhro; Banik, Manik; Bhattacharya, Some Sankar; Ghosh, Sibasish; Kar, Guruprasad; Mukherjee, Amit; Roy, Arup
2016-12-01
We show that within the class of ontological models due to Harrigan and Spekkens, those satisfying preparation-measurement reciprocity must allow indeterminism comparable to that in quantum theory. Our result implies that one can design a quantum random number generator for which it is impossible, even in principle, to construct a reciprocal deterministic model.
A Solved Model to Show Insufficiency of Quantitative Adiabatic Condition
Institute of Scientific and Technical Information of China (English)
LIU Long-Jiang; LIU Yu-Zhen; TONG Dian-Min
2009-01-01
The adiabatic theorem is a useful tool in processing slowly evolving quantum systems, but its practical application depends on the quantitative condition expressed through the Hamiltonian's eigenvalues and eigenstates, which is usually taken as a sufficient condition. Recently, the sufficiency of the condition was questioned, and several counterexamples have been reported. Here we present a new solved model to show the insufficiency of the traditional quantitative adiabatic condition.
Directory of Open Access Journals (Sweden)
Yu Liu
Full Text Available BACKGROUND: Esterases with merits suitable for commercial use in the ester production field are still insufficient. The aim of this research is to advance our understanding by seeking more unusual esterases and characterizing them for ester synthesis. METHODOLOGY/PRINCIPAL FINDINGS: A novel esterase-encoding gene from Rhizomucor miehei (RmEstA) was cloned and expressed in Escherichia coli. Sequence analysis revealed a 975-bp ORF encoding a 324-amino-acid polypeptide belonging to the hormone-sensitive lipase (HSL) family IV and showing highest similarity (44%) to the Paenibacillus mucilaginosus esterase/lipase. Recombinant RmEstA was purified to homogeneity: it was 34 kDa by SDS-PAGE and showed optimal pH and temperature of 6.5 and 45°C, respectively. The enzyme was stable up to 50°C and over a broad pH range (5.0-10.6). RmEstA exhibited broad substrate specificity toward p-nitrophenol esters and short-acyl-chain triglycerols, with highest activities (1,480 U mg(-1) and 228 U mg(-1)) for p-nitrophenyl hexanoate and tributyrin, respectively. RmEstA efficiently synthesized butyl butyrate (92% conversion yield) when immobilized on an AOT-based organogel. CONCLUSION: RmEstA, the first reported esterase from Rhizomucor miehei, has great potential for industrial applications.
Efficient Global Aerodynamic Modeling from Flight Data
Morelli, Eugene A.
2012-01-01
A method for identifying global aerodynamic models from flight data in an efficient manner is explained and demonstrated. A novel experiment design technique was used to obtain dynamic flight data over a range of flight conditions with a single flight maneuver. Multivariate polynomials and polynomial splines were used with orthogonalization techniques and statistical modeling metrics to synthesize global nonlinear aerodynamic models directly and completely from flight data alone. Simulation data and flight data from a subscale twin-engine jet transport aircraft were used to demonstrate the techniques. Results showed that global multivariate nonlinear aerodynamic dependencies could be accurately identified using flight data from a single maneuver. Flight-derived global aerodynamic model structures, model parameter estimates, and associated uncertainties were provided for all six nondimensional force and moment coefficients for the test aircraft. These models were combined with a propulsion model identified from engine ground test data to produce a high-fidelity nonlinear flight simulation very efficiently. Prediction testing using a multi-axis maneuver showed that the identified global model accurately predicted aircraft responses.
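The core of the approach above is posing a force coefficient as a multivariate polynomial in the aircraft states and solving for its coefficients by least squares. A hedged sketch, not the author's code: the "flight data" below are synthetic and the true coefficients are invented for illustration; the paper additionally ranks and selects regressors with orthogonalization and statistical metrics, which is omitted here.

```python
import numpy as np

# Synthetic "flight data": lift coefficient sampled over a range of
# angle of attack (alpha) and sideslip (beta), with measurement noise.
rng = np.random.default_rng(0)
alpha = rng.uniform(-5.0, 15.0, 500)   # angle of attack, deg
beta = rng.uniform(-5.0, 5.0, 500)     # sideslip angle, deg
CL = 0.1 + 0.08*alpha - 0.001*alpha**2 + 0.002*beta**2 + rng.normal(0.0, 0.01, 500)

# Candidate multivariate polynomial regressors, fitted by linear least squares.
X = np.column_stack([np.ones_like(alpha), alpha, alpha**2, beta, beta**2, alpha*beta])
coef, *_ = np.linalg.lstsq(X, CL, rcond=None)
```

With enough data the fitted coefficients recover the generating values, and the negligible `alpha*beta` coefficient shows how spurious terms can be identified and pruned.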
Showing that the race model inequality is not violated
DEFF Research Database (Denmark)
Gondan, Matthias; Riehl, Verena; Blurton, Steven Paul
2012-01-01
important being race models and coactivation models. Redundancy gains consistent with the race model have an upper limit, however, which is given by the well-known race model inequality (Miller, 1982). A number of statistical tests have been proposed for testing the race model inequality in single...... participants and groups of participants. All of these tests use the race model as the null hypothesis, and rejection of the null hypothesis is considered evidence in favor of coactivation. We introduce a statistical test in which the race model prediction is the alternative hypothesis. This test controls...... the Type I error if a theory predicts that the race model prediction holds in a given experimental condition. © 2011 Psychonomic Society, Inc....
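The quantity under test above is Miller's (1982) race model inequality, F_AB(t) ≤ F_A(t) + F_B(t), relating the response-time CDF in the redundant condition to the two single-target conditions. A minimal sketch with simulated data (not from the paper): the redundant RTs are generated as the minimum of two independent channels, so the race model holds by construction.

```python
import numpy as np

rng = np.random.default_rng(1)
rt_a = rng.normal(300.0, 30.0, 5000)                   # channel A alone, ms
rt_b = rng.normal(320.0, 30.0, 5000)                   # channel B alone, ms
rt_ab = np.minimum(rng.normal(300.0, 30.0, 5000),
                   rng.normal(320.0, 30.0, 5000))      # redundant condition (race)

def ecdf(sample, t):
    """Empirical CDF of the RT sample at time t."""
    return float(np.mean(sample <= t))

# Check Miller's bound F_AB(t) <= F_A(t) + F_B(t) on a mid-range time grid.
ts = np.linspace(290.0, 360.0, 8)
bound_ok = all(ecdf(rt_ab, t) <= ecdf(rt_a, t) + ecdf(rt_b, t) for t in ts)
```

Coactivation models would predict violations of this bound at some t; the statistical question the paper addresses is which side of the inequality serves as the null hypothesis.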
Showing Automatically Generated Students' Conceptual Models to Students and Teachers
Perez-Marin, Diana; Pascual-Nieto, Ismael
2010-01-01
A student conceptual model can be defined as a set of interconnected concepts associated with an estimation value that indicates how well these concepts are used by the students. It can model just one student or a group of students, and can be represented as a concept map, conceptual diagram or one of several other knowledge representation…
Efficient Turbulence Modeling for CFD Wake Simulations
DEFF Research Database (Denmark)
van der Laan, Paul
, that can accurately and efficiently simulate wind turbine wakes. The linear k-ε eddy viscosity model (EVM) is a popular turbulence model in RANS; however, it underpredicts the velocity wake deficit and cannot predict the anisotropic Reynolds-stresses in the wake. In the current work, nonlinear eddy...... viscosity models (NLEVM) are applied to wind turbine wakes. NLEVMs can model anisotropic turbulence through a nonlinear stress-strain relation, and they can improve the velocity deficit by the use of a variable eddy viscosity coefficient, that delays the wake recovery. Unfortunately, all tested NLEVMs show...... numerically unstable behavior for fine grids, which inhibits a grid dependency study for numerical verification. Therefore, a simpler EVM is proposed, labeled as the k-ε - fp EVM, that has a linear stress-strain relation, but still has a variable eddy viscosity coefficient. The k-ε - fp EVM is numerically...
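The models compared above differ in how they close the RANS equations with an eddy viscosity. A minimal sketch of that closure, assuming the standard linear k-ε form nu_t = C_mu k²/ε with C_mu = 0.09; the variable-coefficient idea of the k-ε-fp model is mimicked here by a scaling factor f_p ≤ 1, whose actual flow-dependent expression is not reproduced.

```python
def eddy_viscosity(k, eps, c_mu=0.09, f_p=1.0):
    """Eddy viscosity nu_t [m^2/s] from turbulent kinetic energy k [m^2/s^2]
    and dissipation rate eps [m^2/s^3]; f_p < 1 reduces nu_t in the wake,
    delaying the predicted wake recovery."""
    return c_mu * f_p * k**2 / eps

nu_standard = eddy_viscosity(k=1.2, eps=0.35)           # standard k-epsilon
nu_wake = eddy_viscosity(k=1.2, eps=0.35, f_p=0.4)      # reduced in the wake
```

A smaller eddy viscosity in the wake means less turbulent mixing, hence a deeper and longer-lived velocity deficit, which is the underprediction the standard model is criticized for.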
Mars, Tomaz; Strazisar, Marusa; Mis, Katarina; Kotnik, Nejc; Pegan, Katarina; Lojk, Jasna; Grubic, Zoran; Pavlin, Mojca
2015-04-01
Transfection of primary human myoblasts offers the possibility to study mechanisms that are important for muscle regeneration and gene therapy of muscle disease. Cultured human myoblasts were selected here because muscle cells still proliferate at this developmental stage, which might have several advantages in gene therapy. Gene therapy is one of the most sought-after tools in modern medicine. Its progress is, however, limited due to the lack of suitable gene transfer techniques. To obtain better insight into the transfection potential of the presently used techniques, two non-viral transfection methods--lipofection and electroporation--were compared. The parameters that can influence transfection efficiency and cell viability were systematically approached and compared. Cultured myoblasts were transfected with the pEGFP-N1 plasmid either using Lipofectamine 2000 or with electroporation. Various combinations for the preparation of the lipoplexes and the electroporation media, and for the pulsing protocols, were tested and compared. Transfection efficiency and cell viability were inversely proportional for both approaches. The appropriate ratio of Lipofectamine and plasmid DNA provides optimal conditions for lipofection, while for electroporation, RPMI medium and a pulsing protocol using eight pulses of 2 ms at E = 0.8 kV/cm proved to be the optimal combination. The transfection efficiencies for the optimal lipofection and optimal electrotransfection protocols were similar (32 vs. 32.5%, respectively). Both of these methods are effective for transfection of primary human myoblasts; however, electroporation might be advantageous for in vivo application to skeletal muscle.
Modelling water uptake efficiency of root systems
Leitner, Daniel; Tron, Stefania; Schröder, Natalie; Bodner, Gernot; Javaux, Mathieu; Vanderborght, Jan; Vereecken, Harry; Schnepf, Andrea
2016-04-01
Water uptake is crucial for plant productivity. Trait-based breeding for more water-efficient crops will enable sustainable agricultural management under specific pedoclimatic conditions and can increase the drought resistance of plants. Mathematical modelling can be used to find suitable root system traits for better water uptake efficiency, defined as the amount of water taken up per unit of root biomass. This approach requires long simulation times and a large number of simulation runs, since we test different root systems under different pedoclimatic conditions. In this work, we model water movement by the 1-dimensional Richards equation, with the soil hydraulic properties described according to the van Genuchten model. Climatic conditions serve as the upper boundary condition. The root system grows during the simulation period and water uptake is calculated via a sink term (after Tron et al. 2015). The goal of this work is to compare different free software tools based on different numerical schemes to solve the model. We compare implementations using DUMUX (based on finite volumes), Hydrus 1D (based on finite elements), and a Matlab implementation of Van Dam, J. C., & Feddes 2000 (based on finite differences). We analyse the methods for accuracy, speed and flexibility. Using this model case study, we can clearly show the impact of various root system traits on water uptake efficiency. Furthermore, we can quantify frequent simplifications that are introduced in the modelling step, like considering a static root system instead of a growing one, or considering a sink term based on root density instead of the full root hydraulic model (Javaux et al. 2008). References Tron, S., Bodner, G., Laio, F., Ridolfi, L., & Leitner, D. (2015). Can diversity in root architecture explain plant water use efficiency? A modeling study. Ecological modelling, 312, 200-210. Van Dam, J. C., & Feddes, R. A. (2000). Numerical simulation of infiltration, evaporation and shallow
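The soil hydraulics referred to above follow the van Genuchten model. A minimal sketch of its water retention curve, θ(h) = θr + (θs − θr)/(1 + (α|h|)^n)^m with m = 1 − 1/n; the default parameters below are typical textbook values for a loam, not the ones used in the study.

```python
def van_genuchten_theta(h, theta_r=0.078, theta_s=0.43, alpha=0.036, n=1.56):
    """Volumetric water content for pressure head h [cm] (negative when unsaturated)."""
    if h >= 0.0:
        return theta_s                      # saturated soil
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * abs(h)) ** n) ** m

# Drier soil (more negative pressure head) holds less water:
theta_wet = van_genuchten_theta(-100.0)
theta_dry = van_genuchten_theta(-10000.0)
```

This curve, together with the van Genuchten conductivity function, supplies the nonlinear coefficients of the Richards equation that each of the compared solvers (DUMUX, Hydrus 1D, the Matlab code) must handle.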
Efficient Model-Based Exploration
Wiering, M.A.; Schmidhuber, J.
1998-01-01
Model-Based Reinforcement Learning (MBRL) can greatly profit from using world models for estimating the consequences of selecting particular actions: an animat can construct such a model from its experiences and use it for computing rewarding behavior. We study the problem of collecting useful exper
Efficient Computational Model of Hysteresis
Shields, Joel
2005-01-01
A recently developed mathematical model of the output (displacement) versus the input (applied voltage) of a piezoelectric transducer accounts for hysteresis. For the sake of computational speed, the model is kept simple by neglecting the dynamic behavior of the transducer. Hence, the model applies to static and quasistatic displacements only. A piezoelectric transducer of the type to which the model applies is used as an actuator in a computer-based control system to effect fine position adjustments. Because the response time of the rest of such a system is usually much greater than that of a piezoelectric transducer, the model remains an acceptably close approximation for the purpose of control computations, even though the dynamics are neglected. The model (see Figure 1) represents an electrically parallel, mechanically series combination of backlash elements, each having a unique deadband width and output gain. The zeroth element in the parallel combination has zero deadband width and, hence, represents a linear component of the input/output relationship. The other elements, which have nonzero deadband widths, are used to model the nonlinear components of the hysteresis loop. The deadband widths and output gains of the elements are computed from experimental displacement-versus-voltage data. The hysteresis curve calculated by use of this model is piecewise linear beyond deadband limits.
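The structure described above, a parallel bank of backlash elements with the zeroth element linear, can be sketched as a Prandtl-Ishlinskii-type play-operator sum. The deadband half-widths and gains below are invented for illustration; in the model they are computed from experimental displacement-versus-voltage data.

```python
def backlash(u_seq, r, y0=0.0):
    """Play (backlash) operator with deadband half-width r; r = 0 is the identity."""
    y, out = y0, []
    for u in u_seq:
        y = max(u - r, min(u + r, y))   # output trails input within the deadband
        out.append(y)
    return out

def hysteresis(u_seq, gains, half_widths):
    """Parallel combination of backlash elements; element 0 has zero
    deadband width and so contributes the linear component."""
    branches = [backlash(u_seq, r) for r in half_widths]
    return [sum(g * b[k] for g, b in zip(gains, branches)) for k in range(len(u_seq))]

# Triangular input sweep: the output differs between the rising and falling
# pass through u = 0.5, tracing out a hysteresis loop.
u = [0.0, 0.5, 1.0, 0.5, 0.0]
y = hysteresis(u, gains=[1.0, 0.5], half_widths=[0.0, 0.2])
```

Because each branch is piecewise linear, the composite output is piecewise linear beyond the deadband limits, matching the model's stated behavior.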
Marine actinobacteria showing phosphate-solubilizing efficiency in Chorao Island, Goa, India.
Dastager, Syed G; Damare, Samir
2013-05-01
The occurrence and distribution of actinobacteria capable of dissolving insoluble phosphates were investigated in this study in marine environments, especially in sediments of Chorao Island, Goa Province, India. A total of 200 bacterial isolates of actinobacteria were obtained. All isolates were screened for phosphate-solubilizing activity on Pikovskaya's agar. Thirteen different isolates exhibiting maximum formation of halos (zones of solubilization) around the bacterial colonies were selected for quantitative estimation of P-solubilization. Quantitative estimations of P-solubilization were analyzed for up to 10 days at intervals of 24 h. Maximum solubilization, from 89.3 ± 3.1 to 164.1 ± 4.1 μg ml(-1), was observed after 6 days of incubation in six of the isolates, with isolate NII-1020 showing the maximum P-solubilization. The increase in solubilization coincided with the drop in pH. Many of these species showed a wide range of tolerance to temperature, pH, and salt concentrations. Further, 16S rRNA gene sequence analyses were carried out to identify the bacterial groups that actively solubilized phosphate in vitro. Gene sequencing results revealed that all isolates clustered into six different actinobacterial genera: Streptomyces, Microbacterium, Angustibacter, Kocuria, Isoptericola, and Agromyces. The presence of phosphate-solubilizing microorganisms and their ability to solubilize phosphate were indicative of the important role played by bacteria in the biogeochemical cycle of phosphorus and in plant growth in coastal ecosystems.
Efficient estimation of semiparametric copula models for bivariate survival data
Cheng, Guang
2014-01-01
A semiparametric copula model for bivariate survival data is characterized by a parametric copula model of dependence and nonparametric models of two marginal survival functions. Efficient estimation for the semiparametric copula model has been recently studied for the complete data case. When the survival data are censored, semiparametric efficient estimation has only been considered for some specific copula models such as the Gaussian copulas. In this paper, we obtain the semiparametric efficiency bound and efficient estimation for general semiparametric copula models for possibly censored data. We construct an approximate maximum likelihood estimator by approximating the log baseline hazard functions with spline functions. We show that our estimates of the copula dependence parameter and the survival functions are asymptotically normal and efficient. Simple consistent covariance estimators are also provided. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2013 Elsevier Inc.
Efficient Estimation in Heteroscedastic Varying Coefficient Models
Directory of Open Access Journals (Sweden)
Chuanhua Wei
2015-07-01
Full Text Available This paper considers statistical inference for the heteroscedastic varying coefficient model. We propose an estimator for the coefficient functions that is more efficient than the conventional local-linear estimator. We establish asymptotic normality for the proposed estimator and conduct simulations to illustrate the performance of the proposed method.
Modeling Fuel Efficiency: MPG or GPHM?
Bartkovich, Kevin G.
2013-01-01
The standard for measuring fuel efficiency in the U.S. has been miles per gallon (mpg). However, the Environmental Protection Agency's (EPA) switch in rating fuel efficiency from miles per gallon to gallons per hundred miles with the 2013 model-year cars leads to interesting and relevant mathematics with real-world connections. By modeling…
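The switch from mpg to gallons per hundred miles is a unit inversion, and the mathematics the abstract alludes to falls out of it directly. A tiny sketch:

```python
def mpg_to_gphm(mpg):
    """Convert miles per gallon to gallons per hundred miles."""
    return 100.0 / mpg

# An upgrade from 10 to 20 mpg saves more fuel over 100 miles than an upgrade
# from 25 to 50 mpg, even though both double the mpg figure. This nonlinearity
# is why consumption per distance is the more informative scale for comparing cars.
saving_low_end = mpg_to_gphm(10) - mpg_to_gphm(20)    # gallons saved per 100 mi
saving_high_end = mpg_to_gphm(25) - mpg_to_gphm(50)
```

Equal increments in mpg correspond to shrinking increments in actual fuel saved, because gphm is proportional to 1/mpg.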
Tenure Profiles and Efficient Separation in a Stochastic Productivity Model
Buhai, I.S.; Teulings, C.N.
2014-01-01
We develop a theoretical model based on efficient bargaining, where both log outside productivity and log productivity in the current job follow a random walk. This setting allows the application of real option theory. We derive the efficient worker-firm separation rule. We show that wage data from
Modelling fluidized catalytic cracking unit stripper efficiency
Directory of Open Access Journals (Sweden)
García-Dopico M.
2015-01-01
Full Text Available This paper presents our modelling of a FCCU stripper, following our earlier research. The model can measure stripper efficiency against the most important variables: pressure, temperature, residence time and steam flow. Few models in the literature address the stripper, and those that do usually consider only one variable. Nevertheless, there is general agreement on the importance of the stripper in the overall process; the scarcity of models may be due to the difficulty of obtaining a comprehensive one. The proposed model, by contrast, uses all the variables of the stripper, calculating efficiency on the basis of steam flow, pressure, residence time and temperature. The correctness of the model is then analysed, and we examine several possible scenarios, such as decreasing the steam flow, which is achieved by increasing the temperature in the stripper.
Efficient Modelling and Generation of Markov Automata
Timmer, Mark; Katoen, Joost-Pieter; Pol, van de Jaco; Stoelinga, Mariëlle; Koutny, M.; Ulidowski, I.
2012-01-01
This paper introduces a framework for the efficient modelling and generation of Markov automata. It consists of (1) the data-rich process-algebraic language MAPA, allowing concise modelling of systems with nondeterminism, probability and Markovian timing; (2) a restricted form of the language, the M
Statistical modelling for ship propulsion efficiency
DEFF Research Database (Denmark)
Petersen, Jóan Petur; Jacobsen, Daniel J.; Winther, Ole
2012-01-01
This paper presents a state-of-the-art systems approach to statistical modelling of fuel efficiency in ship propulsion, and also a novel and publicly available data set of high quality sensory data. Two statistical model approaches are investigated and compared: artificial neural networks...
Modeling plasmonic efficiency enhancement in organic photovoltaics.
Taff, Y; Apter, B; Katz, E A; Efron, U
2015-09-10
Efficiency enhancement of bulk heterojunction (BHJ) organic solar cells by means of the plasmonic effect is investigated by using finite-difference time-domain (FDTD) optical simulations combined with analytical modeling of exciton dissociation and charge transport efficiencies. The proposed method provides an improved analysis of the cell performance compared to previous FDTD studies. The results of the simulations predict an 11.8% increase in the cell's short circuit current with the use of Ag nano-hexagons.
Efficient numerical integrators for stochastic models
De Fabritiis, G; Español, P; Coveney, P V
2006-01-01
The efficient simulation of models defined in terms of stochastic differential equations (SDEs) depends critically on an efficient integration scheme. In this article, we investigate under which conditions the integration schemes for general SDEs can be derived using the Trotter expansion. It follows that, in the stochastic case, some care is required in splitting the stochastic generator. We test the Trotter integrators on an energy-conserving Brownian model and derive a new numerical scheme for dissipative particle dynamics. We find that the stochastic Trotter scheme provides a mathematically correct and easy-to-use method which should find wide applicability.
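The splitting idea discussed above can be illustrated on the simplest dissipative SDE, the Ornstein-Uhlenbeck process dx = −γx dt + σ dW. This is a generic example of a Lie-Trotter-style split step, not one of the schemes derived in the article: the drift flow is solved exactly over each step, then the Wiener increment is applied.

```python
import math
import random

def ou_splitting_step(x, dt, gamma, sigma, rng):
    """One splitting step for dx = -gamma*x dt + sigma dW."""
    x = x * math.exp(-gamma * dt)                            # exact drift sub-step
    return x + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)   # stochastic sub-step

rng = random.Random(42)
x, dt, gamma, sigma = 1.0, 0.01, 1.0, 0.5
xs = []
for _ in range(100000):
    x = ou_splitting_step(x, dt, gamma, sigma, rng)
    xs.append(x)

# The sampled stationary variance should approach sigma^2 / (2*gamma) = 0.125.
tail = xs[20000:]
var_est = sum(v * v for v in tail) / len(tail)
```

Solving the drift exactly rather than by an Euler step is the payoff of splitting: the deterministic sub-flow introduces no discretization error, and only the stochastic sub-step is approximated.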
Efficient Finite Element Modelling of Elastodynamic Scattering
Velichko, A.; Wilcox, P. D.
2010-02-01
A robust and efficient technique for predicting the complete scattering behavior for an arbitrarily-shaped defect is presented that can be implemented in a commercial FE package. The spatial size of the modeling domain around the defect is as small as possible to minimize computational expense and a minimum number of models are executed. Example results for 2D and 3D scattering in isotropic material and guided wave scattering are presented.
Internal quantum efficiency modeling of silicon photodiodes.
Gentile, T R; Brown, S W; Lykke, K R; Shaw, P S; Woodward, J T
2010-04-01
Results are presented for modeling of the shape of the internal quantum efficiency (IQE) versus wavelength for silicon photodiodes in the 400 nm to 900 nm wavelength range. The IQE data are based on measurements of the external quantum efficiencies of three transmission optical trap detectors using an extensive set of laser wavelengths, along with the transmittance of the traps. We find that a simplified version of a previously reported IQE model fits the data with an accuracy of better than 0.01%. These results provide an important validation of the National Institute of Standards and Technology (NIST) spectral radiant power responsivity scale disseminated through the NIST Spectral Comparator Facility, as well as those scales disseminated by other National Metrology Institutes who have employed the same model.
Directory of Open Access Journals (Sweden)
Gyöngyi Munkácsy
2016-01-01
No independent cross-validation of the success rate of studies utilizing small interfering RNA (siRNA) for gene silencing has been completed before. To assess the influence of experimental parameters such as cell line, transfection technique, validation method, and type of control, we validated these in a large set of studies. We utilized gene chip data published for siRNA experiments to assess success rate and to compare the methods used in these experiments. We searched NCBI GEO for samples with whole-transcriptome analysis before and after gene silencing and evaluated the efficiency for the target and off-target genes using the array-based expression data. The Wilcoxon signed-rank test was used to assess silencing efficacy, and Kruskal–Wallis tests and Spearman rank correlation were used to evaluate study parameters. Altogether, 1,643 samples representing 429 experiments published in 207 studies were evaluated. The fold change (FC) of down-regulation of the target gene was above 0.7 in 18.5% and above 0.5 in 38.7% of experiments. Silencing efficiency was lowest in MCF7 and highest in SW480 cells (FC = 0.59 and FC = 0.30, respectively; P = 9.3E−06). Studies utilizing Western blot for validation performed better than those with quantitative polymerase chain reaction (qPCR) or microarray (FC = 0.43, FC = 0.47, and FC = 0.55, respectively; P = 2.8E−04). There was no correlation between type of control, transfection method, publication year, and silencing efficiency. Although gene silencing is a robust feature successfully cross-validated in the majority of experiments, efficiency remained insufficient in a significant proportion of studies. Selection of the cell line model and validation method had the highest influence on silencing proficiency.
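For orientation, the fold-change bookkeeping behind these success-rate figures can be sketched as follows. The study's significance testing used Wilcoxon signed-rank and Kruskal-Wallis tests, which are omitted here; the expression values below are invented for illustration.

```python
# Classify siRNA silencing experiments by target-gene fold change (FC),
# mirroring the thresholds in the study (FC > 0.7: insufficient; FC <= 0.5:
# efficient). The expression values are made up for illustration.

def fold_change(expr_before: float, expr_after: float) -> float:
    """FC of the target gene after silencing relative to before (lower = better)."""
    return expr_after / expr_before

def classify_silencing(fc: float) -> str:
    if fc > 0.7:
        return "insufficient"   # the 18.5% of experiments with FC above 0.7
    if fc > 0.5:
        return "moderate"
    return "efficient"          # FC at or below 0.5

# Hypothetical before/after expression levels matching the reported cell-line FCs.
experiments = {"MCF7": (1000.0, 590.0), "SW480": (1000.0, 300.0)}
results = {cell: classify_silencing(fold_change(before, after))
           for cell, (before, after) in experiments.items()}
# MCF7 (FC = 0.59) -> "moderate"; SW480 (FC = 0.30) -> "efficient"
```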
ACO model should encourage efficient care delivery.
Toussaint, John; Krueger, David; Shortell, Stephen M; Milstein, Arnold; Cutler, David M
2015-09-01
The independent Office of the Actuary for CMS certified that the Pioneer ACO model has met the stringent criteria for expansion to a larger population. Significant savings have accrued and quality targets have been met, so the program as a whole appears to be working. Ironically, 13 of the initial 32 enrollees have left. We attribute this to the design of the ACO models, which inadequately supports efficient care delivery. Using Bellin-ThedaCare Healthcare Partners as an example, we will focus on correctible flaws in four core elements of the ACO payment model: finance spending and targets, attribution, and quality performance.
Connor, E E; Hutchison, J L; Norman, H D; Olson, K M; Van Tassell, C P; Leith, J M; Baldwin, R L
2013-08-01
Improved feed efficiency is a primary goal in dairy production to reduce feed costs and negative impacts of production on the environment. Estimates for efficiency of feed conversion to milk production based on residual feed intake (RFI) in dairy cattle are limited, primarily due to a lack of individual feed intake measurements for lactating cows. Feed intake was measured in Holstein cows during the first 90 d of lactation to estimate the heritability and repeatability of RFI, minimum test duration for evaluating RFI in early lactation, and its association with other production traits. Data were obtained from 453 lactations (214 heifers and 239 multiparous cows) from 292 individual cows from September 2007 to December 2011. Cows were housed in a free-stall barn and monitored for individual daily feed consumption using the GrowSafe 4000 System (GrowSafe Systems, Ltd., Airdrie, AB, Canada). Animals were fed a total mixed ration 3 times daily, milked twice daily, and weighed every 10 to 14 d. Milk yield was measured at each milking. Feed DM percentage was measured daily, and nutrient composition was analyzed from a weekly composite. Milk composition was analyzed weekly, alternating between morning and evening milking periods. Estimates of RFI were determined as the difference between actual energy intake and predicted intake based on a linear model with fixed effects of parity (1, 2, ≥ 3) and regressions on metabolic BW, ADG, and energy-corrected milk yield. Heritability was estimated to be moderate (0.36 ± 0.06), and repeatability was estimated at 0.56 across lactations. A test period through 53 d in milk (DIM) explained 81% of the variation provided by a test through 90 DIM. Multiple regression analysis indicated that high efficiency was associated with less time feeding per day and slower feeding rate, which may contribute to differences in RFI among cows. The heritability and repeatability of RFI suggest an opportunity to improve feed efficiency through genetic
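The RFI construction described above, actual energy intake minus intake predicted from metabolic BW, ADG, and energy-corrected milk yield, is an ordinary least-squares residual. A minimal sketch on synthetic data follows; the coefficients, units, and herd figures are illustrative, not the study's estimates.

```python
import numpy as np

# Residual feed intake (RFI): actual energy intake minus the intake predicted
# by a linear regression on metabolic body weight (MBW), average daily gain
# (ADG), and energy-corrected milk (ECM) yield. All data are synthetic.

rng = np.random.default_rng(0)
n = 200
mbw = rng.normal(130, 10, n)     # metabolic BW, kg^0.75 (illustrative)
adg = rng.normal(0.2, 0.3, n)    # kg/day
ecm = rng.normal(40, 6, n)       # kg/day
noise = rng.normal(0, 2, n)      # between-cow efficiency differences
intake = 5.0 + 0.35 * mbw + 8.0 * adg + 3.0 * ecm + noise  # arbitrary scale

X = np.column_stack([np.ones(n), mbw, adg, ecm])
beta, *_ = np.linalg.lstsq(X, intake, rcond=None)
rfi = intake - X @ beta          # negative RFI = more efficient than predicted
```

Because RFI is an OLS residual from a model with an intercept, it averages zero by construction; ranking animals by RFI separates efficient (low) from inefficient (high) cows.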
An Efficient Multitask Scheduling Model for Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Hongsheng Yin
2014-01-01
The sensor nodes of a multitask wireless network are constrained in performance-driven computation. Theoretical studies on the data processing model of wireless sensor nodes suggest satisfying the requirements for high quality of service (QoS) of multiple application networks, thus improving network efficiency. In this paper, we present a priority-based data processing model for multitask sensor nodes in the architecture of a multitask wireless sensor network. The proposed model is derived from the M/M/1 queuing model of queuing theory, in which the average delay of data packets passing through sensor nodes is estimated. The model is validated with real data from the Huoerxinhe Coal Mine. By applying the proposed priority-based data processing model in the multitask wireless sensor network, the average delay of data packets at a sensor node is reduced by nearly 50%. The simulation results show that the proposed model can efficiently improve the throughput of the network.
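For reference, the average delay an M/M/1 queue predicts for packets at a node is W = 1/(μ − λ). A minimal sketch, with arrival and service rates invented for illustration:

```python
# Average delay of data packets at a sensor node modeled as an M/M/1 queue.
# lam: packet arrival rate (packets/s), mu: service rate (packets/s).
# Rate values below are illustrative, not from the paper.

def mm1_average_delay(lam: float, mu: float) -> float:
    """Mean time in system W = 1 / (mu - lam); requires lam < mu for stability."""
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    return 1.0 / (mu - lam)

def mm1_average_queue_length(lam: float, mu: float) -> float:
    """Little's law: L = lam * W = rho / (1 - rho) with rho = lam / mu."""
    return lam * mm1_average_delay(lam, mu)

# Prioritizing urgent traffic halves the arrival rate those packets contend with:
w_all = mm1_average_delay(lam=80.0, mu=100.0)   # 0.05 s
w_prio = mm1_average_delay(lam=40.0, mu=100.0)  # ~0.0167 s
```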
Directory of Open Access Journals (Sweden)
Yu Sun
2016-01-01
In managerial application, data envelopment analysis (DEA) is used by numerous studies to evaluate performances and solve the allocation problem. As the problem of infrastructure investment becomes more and more important in Chinese cities, it is vitally necessary to evaluate investment efficiency and assign the funds. In practice, there is competition among cities due to the scarcity of investment funds. However, the traditional DEA model is a pure self-evaluation model that does not consider the impacts of the other decision-making units (DMUs). Although the cross-efficiency model can identify the best multiplier bundle for a unit and the other DMUs, its solution is not unique. Therefore, this paper introduces game theory into the DEA cross-efficiency model to evaluate infrastructure investment efficiency when cities compete with each other. We analyze the case of 30 provincial capital cities of China. The result shows that the approach yields a unique and efficient solution for each city (DMU) after the investment fund is allocated as an input variable.
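A minimal illustration of the DEA idea follows, in the degenerate single-input, single-output case where CCR efficiency reduces to a normalized output/input ratio. The paper's game cross-efficiency model instead solves linear programs per DMU; the city figures below are invented.

```python
# Degenerate DEA sketch: with one input and one output, each decision-making
# unit's (DMU's) CCR efficiency is its output/input ratio normalized by the
# best ratio in the sample. Investment and benefit figures are invented.

def dea_single_ratio(inputs, outputs):
    """Efficiency scores in (0, 1]; the best DMU scores exactly 1."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

investment = [120.0, 80.0, 200.0]  # infrastructure investment per city (input)
benefit = [96.0, 80.0, 150.0]      # resulting output measure per city
eff = dea_single_ratio(investment, benefit)
# ratios 0.8, 1.0, 0.75 -> efficiencies [0.8, 1.0, 0.75]
```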
Optimisation Modelling of Efficiency of Enterprise Restructuring
Directory of Open Access Journals (Sweden)
Yefimova Hanna V.
2014-03-01
The article considers the optimised use of resources directed at restructuring a shipbuilding enterprise, which is the main prerequisite of its efficiency. Restructuring is considered as a process of complex and interconnected change in the structure of assets, liabilities and enterprise functions, initiated by a dynamic environment, based on the strategic concept of the enterprise's development and directed at increasing the efficiency of its activity, expressed in the growth of its value. The task of deciding to restructure a shipbuilding enterprise and selecting a specific restructuring project is an optimisation task of prospective planning. Enterprise resources allocated for restructuring serve as constraints of the mathematical model. The main optimisation criteria are maximisation of net discounted income or minimisation of expenditures on restructuring measures. The resulting optimisation model is designed to assess the volumes of own and borrowed funds to be attracted for restructuring. A simulation model generates the cash flows. The solution is achieved with a complex of interrelated optimisation and simulation models and procedures for the formation, selection and coordination of managerial decisions.
Efficient Model-Based Diagnosis Engine
Fijany, Amir; Vatan, Farrokh; Barrett, Anthony; James, Mark; Mackey, Ryan; Williams, Colin
2009-01-01
An efficient diagnosis engine - a combination of mathematical models and algorithms - has been developed for identifying faulty components in a possibly complex engineering system. This model-based diagnosis engine embodies a twofold approach to reducing, relative to prior model-based diagnosis engines, the amount of computation needed to perform a thorough, accurate diagnosis. The first part of the approach involves a reconstruction of the general diagnostic engine to reduce the complexity of the mathematical-model calculations and of the software needed to perform them. The second part of the approach involves algorithms for computing a minimal diagnosis (the term "minimal diagnosis" is defined below). A somewhat lengthy background discussion is prerequisite to a meaningful summary of the innovative aspects of the present efficient model-based diagnosis engine. In model-based diagnosis, the function of each component and the relationships among all the components of the engineering system to be diagnosed are represented as a logical system denoted the system description (SD). Hence, the expected normal behavior of the engineering system is the set of logical consequences of the SD. Faulty components lead to inconsistencies between the observed behaviors of the system and the SD (see figure). Diagnosis - the task of finding faulty components - is reduced to finding those components, the abnormalities of which could explain all the inconsistencies. The solution of the diagnosis problem should be a minimal diagnosis, which is a minimal set of faulty components. A minimal diagnosis stands in contradistinction to the trivial solution, in which all components are deemed to be faulty, and which, therefore, always explains all inconsistencies.
Energy Technology Data Exchange (ETDEWEB)
Hampf, Benjamin
2011-08-15
In this paper we present a new approach to evaluate the environmental efficiency of decision making units. We propose a model that describes a two-stage process consisting of a production and an end-of-pipe abatement stage with the environmental efficiency being determined by the efficiency of both stages. Taking the dependencies between the two stages into account, we show how nonparametric methods can be used to measure environmental efficiency and to decompose it into production and abatement efficiency. For an empirical illustration we apply our model to an analysis of U.S. power plants.
Ma, Xiaohua
2011-03-01
Achiral nonlinear optical (NLO) chromophores based on 1,3-diazaazulene, 2-(4′-aminophenyl)-6-nitro-1,3-diazaazulene (APNA) and 2-(4′-N,N-diphenylaminophenyl)-6-nitro-1,3-diazaazulene (DPAPNA), were synthesized in high yield. Despite the moderate static first hyperpolarizabilities (β0) of both APNA [(136 ± 5) × 10⁻³⁰ esu] and DPAPNA [(263 ± 20) × 10⁻³⁰ esu], only the APNA crystal shows a powder second harmonic generation (SHG) efficiency 23 times that of urea. It is shown that APNA crystallization, driven cooperatively by the strong H-bonding network and dipolar electrostatic interactions, falls into the noncentrosymmetric P2₁2₁2₁ space group, and that the helical supramolecular assembly is solely responsible for the efficient SHG response. In contrast, the DPAPNA crystal, with the centrosymmetric P-1 space group, is packed with antiparallel dimers and is therefore completely SHG-inactive. 1,3-Diazaazulene derivatives are suggested to be potent building blocks for SHG-active chiral crystals, which offer high thermal stability, excellent near-infrared transparency and a high degree of design flexibility. © 2011 Wiley Periodicals, Inc. J Polym Sci Part B: Polym Phys, 2011. Optical crystals based on 1,3-diazaazulene derivatives are reported as the first example of an organic nonlinear optical crystal whose second harmonic generation activity originates solely from the chirality of the helical supramolecular orientation. The strong H-bond network forming between adjacent chromophores acts cooperatively with dipolar electrostatic interactions in driving the chiral crystallization of this material. Copyright © 2011 Wiley Periodicals, Inc.
Satellite-based terrestrial production efficiency modeling
Directory of Open Access Journals (Sweden)
Obersteiner Michael
2009-09-01
Production efficiency models (PEMs) are based on the theory of light use efficiency (LUE), which states that a relatively constant relationship exists between photosynthetic carbon uptake and radiation receipt at the canopy level. Challenges remain, however, in the application of the PEM methodology to global net primary productivity (NPP) monitoring. The objectives of this review are as follows: (1) to describe the general functioning of six PEMs (CASA, GLO-PEM, TURC, C-Fix, MOD17, and BEAMS) identified in the literature; (2) to review each model to determine potential improvements to the general PEM methodology; (3) to review the related literature on satellite-based gross primary productivity (GPP) and NPP modeling for additional possibilities for improvement; and (4) based on this review, to propose items for coordinated research. This review noted a number of possibilities for improvement to the general PEM architecture, ranging from LUE to meteorological and satellite-based inputs. Current PEMs tend to treat the globe similarly in terms of physiological and meteorological factors, often ignoring unique regional aspects. Each of the existing PEMs has developed unique methods to estimate NPP, and the combination of the most successful of these could lead to improvements. It may be beneficial to develop regional PEMs that can be combined under a global framework. The results of this review suggest the creation of a hybrid PEM could bring about a significant enhancement to the PEM methodology and thus terrestrial carbon flux modeling. Key items topping the PEM research agenda identified in this review include the following: LUE should not be assumed constant, but should vary by plant functional type (PFT) or photosynthetic pathway; evidence is mounting that PEMs should consider incorporating diffuse radiation; relationships between satellite-derived variables and LUE, GPP and autotrophic respiration (Ra) should continue to be pursued; there is an urgent need for
Modeling Interconnect Variability Using Efficient Parametric Model Order Reduction
Li, Peng; Li, Xin; Pileggi, Lawrence T; Nassif, Sani R
2011-01-01
Assessing IC manufacturing process fluctuations and their impacts on IC interconnect performance has become unavoidable for modern DSM designs. However, the construction of parametric interconnect models is often hampered by the rapid increase in computational cost and model complexity. In this paper we present an efficient yet accurate parametric model order reduction algorithm for addressing the variability of IC interconnect performance. The efficiency of the approach lies in a novel combination of low-rank matrix approximation and multi-parameter moment matching. The complexity of the proposed parametric model order reduction is as low as that of a standard Krylov subspace method when applied to a nominal system. Under the projection-based framework, our algorithm also preserves the passivity of the resulting parametric models.
Resonant circuit model for efficient metamaterial absorber.
Sellier, Alexandre; Teperik, Tatiana V; de Lustrac, André
2013-11-04
The resonant absorption in a planar metamaterial is studied theoretically. We present a simple physical model describing this phenomenon in terms of an equivalent resonant circuit. We discuss the role of radiative and dissipative damping of the resonant mode supported by a metamaterial in the formation of absorption spectra. We show that the results of rigorous calculations of Maxwell's equations can be fully retrieved with a simple model describing the system in terms of an equivalent resonant circuit. This simple model allows us to explain the total absorption effect observed in the system on a common physical ground by referring it to the impedance matching condition at the resonance.
Quantitative modeling of Cerenkov light production efficiency from medical radionuclides.
Beattie, Bradley J; Thorek, Daniel L J; Schmidtlein, Charles R; Pentlow, Keith S; Humm, John L; Hielscher, Andreas H
2012-01-01
There has been recent and growing interest in applying Cerenkov radiation (CR) for biological applications. Knowledge of the production efficiency and other characteristics of the CR produced by various radionuclides would help in assessing the feasibility of proposed applications and guide the choice of radionuclides. To generate this information we developed models of CR production efficiency based on the Frank-Tamm equation and models of CR distribution based on Monte Carlo simulations of photon and β particle transport. All models were validated against direct measurements using multiple radionuclides and then applied to a number of radionuclides commonly used in biomedical applications. We show that two radionuclides, Ac-225 and In-111, which have been reported to produce CR in water, do not in fact produce CR directly. We also propose a simple means of using this information to calibrate high-sensitivity luminescence imaging systems and show evidence suggesting that this calibration may be more accurate than methods in routine current use.
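The production-efficiency modeling rests on the Frank-Tamm relation, which also explains why sub-threshold emitters such as Ac-225 and In-111 produce no CR directly. A hedged sketch of the photon yield per unit path length follows; the particle energies and wavelength band are illustrative, and this is not the authors' full model.

```python
import math

# Cerenkov photon yield per unit path from the Frank-Tamm relation:
#   dN/dx = 2*pi*alpha*z^2 * (1/lam1 - 1/lam2) * (1 - 1/(beta^2 * n^2)),
# valid only above the threshold beta > 1/n. Numbers are illustrative.

ALPHA = 1 / 137.035999  # fine-structure constant

def cerenkov_photons_per_m(beta: float, n: float, lam1: float, lam2: float,
                           z: int = 1) -> float:
    """Photons emitted per metre in the wavelength band [lam1, lam2] (metres)."""
    if beta * n <= 1.0:
        return 0.0   # below threshold: no Cerenkov light (cf. Ac-225, In-111)
    band = 1.0 / lam1 - 1.0 / lam2
    return 2.0 * math.pi * ALPHA * z**2 * band * (1.0 - 1.0 / (beta**2 * n**2))

# An electron at beta = 0.9 in water (n ~ 1.33), 400-700 nm band:
yield_fast = cerenkov_photons_per_m(0.9, 1.33, 400e-9, 700e-9)
# A slower electron below threshold (0.7 * 1.33 < 1) yields nothing:
yield_slow = cerenkov_photons_per_m(0.7, 1.33, 400e-9, 700e-9)
```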
Efficient Modelling and Generation of Markov Automata
Timmer, Mark; Katoen, Joost P.; van de Pol, Jan Cornelis; Stoelinga, Mariëlle Ida Antoinette
2012-01-01
This presentation introduces a process-algebraic framework with data for modelling and generating Markov automata. We show how an existing linearisation procedure for process-algebraic representations of probabilistic automata can be reused to transform systems in our new framework to a special
Efficient 3D scene modeling and mosaicing
Nicosevici, Tudor
2013-01-01
This book proposes a complete pipeline for monocular (single camera) based 3D mapping of terrestrial and underwater environments. The aim is to provide a solution to large-scale scene modeling that is both accurate and efficient. To this end, we have developed a novel Structure from Motion algorithm that increases mapping accuracy by registering camera views directly with the maps. The camera registration uses a dual approach that adapts to the type of environment being mapped. In order to further increase the accuracy of the resulting maps, a new method is presented, allowing detection of images corresponding to the same scene region (crossovers). Crossovers are then used in conjunction with global alignment methods to greatly reduce estimation errors, especially when mapping large areas. Our method is based on the Visual Bag of Words (BoW) paradigm, offering a more efficient and simpler solution by eliminating the training stage generally required by state-of-the-art BoW algorithms. Also, towards dev...
An Efficient Hydrodynamic Model for Surface Waves
Institute of Scientific and Technical Information of China (English)
WANG Kun; JIN Sheng; LU Gang
2009-01-01
In the present study, a semi-implicit finite difference model for non-hydrostatic, free-surface flows is analyzed and discussed. The governing equations are the three-dimensional free-surface Reynolds-averaged Navier-Stokes equations defined on a general, irregular domain of arbitrary scale. At outflow, a combination of a sponge layer technique and a radiation boundary condition is applied to minimize wave reflection. The equations are solved with the fractional step method, where the hydrostatic pressure component is determined first, while the non-hydrostatic component of the pressure is computed from the pressure Poisson equation, in which the coefficient matrix is positive definite and symmetric. The advection and horizontal viscosity terms are discretized by use of a semi-Lagrangian approach. The resulting model is computationally efficient and unrestricted by the CFL condition. The developed model is verified against analytical solutions and experimental data, with excellent agreement.
Directory of Open Access Journals (Sweden)
Xin Chen
2013-07-01
Background: Melanoma is considered one of the most aggressive and deadliest cancers, and current targeted therapies of melanoma often suffer limited efficacy or drug resistance. Discovery of novel multikinase inhibitors as anti-melanoma drug candidates is still needed. Methods: In this investigation, we assessed the in vitro and in vivo anti-melanoma activities of SC-535, a novel small-molecule multikinase inhibitor we recently discovered. We analyzed the inhibitory effects of SC-535 on various melanoma cell lines and human umbilical vascular endothelial cells (HUVEC) in vitro. Tumor xenografts in athymic mice were used to examine the in vivo activity of SC-535. Results: SC-535 efficiently inhibited vascular endothelial growth factor receptor (VEGFR) 1/2/3, B-RAF, and C-RAF kinases. It showed significant antiangiogenic potency both in vitro and in vivo and considerable anti-proliferative ability against several melanoma cell lines. Oral administration of SC-535 resulted in dose-dependent suppression of tumor growth in WM2664 and C32 xenograft mouse models. Studies of the mechanisms of action indicated that SC-535 suppressed tumor angiogenesis and induced G2/M phase cell cycle arrest in human melanoma cells. SC-535 possesses favorable pharmacokinetic properties. Conclusion: All of these results support SC-535 as a potential candidate for clinical studies in patients with melanoma.
A Model Lesson: Finland Shows Us What Equal Opportunity Looks Like
Sahlberg, Pasi
2012-01-01
International indicators show that Finland has one of the most educated citizenries in the world, provides educational opportunities in an egalitarian manner, and makes efficient use of resources. But at the beginning of the 1990s, education in Finland was nothing special in international terms. The performance of Finnish students on international…
Efficiency of a statistical transport model for turbulent particle dispersion
Litchford, Ron J.; Jeng, San-Mou
1992-01-01
In developing its theory for turbulent dispersion transport, the Litchford and Jeng (1991) statistical transport model for turbulent particle dispersion took a generalized approach in which the perturbing influence of each turbulent eddy on consequent interactions was transported through all subsequent eddies. Nevertheless, examination of this transport relation shows that it can decay rapidly: this implies that additional computational efficiency may be obtained by truncating unnecessary transport terms. Attention is here given to the criterion for truncation, as well as to the expected efficiency gains.
Lünsmann, Vanessa; Kappelmeyer, Uwe; Taubert, Anja; Nijenhuis, Ivonne; von Bergen, Martin; Heipieper, Hermann J; Müller, Jochen A; Jehmlich, Nico
2016-07-15
Constructed wetlands (CWs) are successfully applied for the treatment of waters contaminated with aromatic compounds. In these systems, plants provide oxygen and root exudates to the rhizosphere and thereby stimulate microbial degradation processes. Root exudation of oxygen and organic compounds depends on photosynthetic activity and thus may show day-night fluctuations. While diurnal changes in CW effluent composition have been observed, information on respective fluctuations of bacterial activity is scarce. We investigated microbial processes in a CW model system treating toluene-contaminated water which showed diurnal oscillations of oxygen concentrations using metaproteomics. Quantitative real-time PCR was applied to assess diurnal expression patterns of genes involved in aerobic and anaerobic toluene degradation. We observed stable aerobic toluene turnover by Burkholderiales during the day and night. Polyhydroxyalkanoate synthesis was upregulated in these bacteria during the day, suggesting that they additionally feed on organic root exudates while reutilizing the stored carbon compounds during the night via the glyoxylate cycle. Although mRNA copies encoding the anaerobic enzyme benzylsuccinate synthase (bssA) were relatively abundant and increased slightly at night, the corresponding protein could not be detected in the CW model system. Our study provides insights into diurnal patterns of microbial processes occurring in the rhizosphere of an aquatic ecosystem. Constructed wetlands are a well-established and cost-efficient option for the bioremediation of contaminated waters. While it is commonly accepted knowledge that the function of CWs is determined by the interplay of plants and microorganisms, the detailed molecular processes are considered a black box. Here, we used a well-characterized CW model system treating toluene-contaminated water to investigate the microbial processes influenced by diurnal plant root exudation. Our results indicated stable
A Stochastic Nonlinear Water Wave Model for Efficient Uncertainty Quantification
Bigoni, Daniele; Eskilsson, Claes
2014-01-01
A major challenge in next-generation industrial applications is to improve numerical analysis by quantifying uncertainties in predictions. In this work we present a stochastic formulation of a fully nonlinear and dispersive potential flow water wave model for the probabilistic description of the evolution of waves. This model is discretized using the Stochastic Collocation Method (SCM), which provides an approximate surrogate of the model. This can be used to accurately and efficiently estimate the probability distribution of the unknown time-dependent stochastic solution after the forward propagation of uncertainties. We revisit experimental benchmarks often used for validation of deterministic water wave models. We do this using a fully nonlinear and dispersive model and show how uncertainty in the model input can influence the model output. Based on numerical experiments and assumed uncertainties in boundary data, our analysis reveals that some of the known discrepancies from deterministic simulation in compa...
Multitask Efficiencies in the Decision Tree Model
Drucker, Andrew
2008-01-01
In Direct Sum problems [KRW], one tries to show that for a given computational model, the complexity of computing a collection $F = \{f_i\}$ of functions on independent inputs is approximately the sum of their individual complexities. In this paper, by contrast, we study the diversity of ways in which the joint computational complexity can behave when all the $f_i$ are evaluated on a \textit{common} input. Fixing some model of computational cost, let $C_F(X): \{0, 1\}^l \to \mathbf{R}$ give the cost of computing the subcollection $\{f_i(x): X_i = 1\}$, on common input $x$. What constraints do the functions $C_F(X)$ obey, when $F$ is chosen freely? $C_F(X)$ will, for reasonable models, obey nonnegativity, monotonicity, and subadditivity. We show that, in the deterministic, adaptive query model, these are `essentially' the only constraints: for any function $C(X)$ obeying these properties and any $\epsilon > 0$, there exists a family $F$ of boolean functions and a $T > 0$ such that for all $X \in \{0, 1\}^l$, ...
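The three constraints named above can be checked exhaustively for small $l$. A sketch with invented cost functions follows: the square-root cost shares work across functions and satisfies all three properties, while a quadratic cost is superadditive and fails subadditivity.

```python
from itertools import product

# Check nonnegativity, monotonicity, and subadditivity of a joint cost
# function C(X) over all subset indicators X in {0,1}^l. The example cost
# functions are invented for illustration.

def satisfies_constraints(cost, l):
    masks = list(product([0, 1], repeat=l))
    subset = lambda a, b: all(x <= y for x, y in zip(a, b))
    union = lambda a, b: tuple(x | y for x, y in zip(a, b))
    nonneg = all(cost(m) >= 0 for m in masks)
    mono = all(cost(a) <= cost(b) for a in masks for b in masks if subset(a, b))
    subadd = all(cost(union(a, b)) <= cost(a) + cost(b)
                 for a in masks for b in masks)
    return nonneg and mono and subadd

# Cost that shares work across functions: sqrt of the number selected.
ok = satisfies_constraints(lambda m: sum(m) ** 0.5, 3)
# A superadditive (quadratic) cost violates subadditivity:
bad = satisfies_constraints(lambda m: sum(m) ** 2, 3)
```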
Harvey, J.A.; Wagenaar, R.; Bezemer, T.M.
2009-01-01
Parasitoid wasps are highly efficient organisms at utilizing and assimilating limited resources from their hosts. This study explores interactions over three trophic levels, from the third (primary parasitoid) to the fourth (secondary parasitoid) and terminating in the fifth (tertiary parasitoid). H
A model comparison approach shows stronger support for economic models of fertility decline.
Shenk, Mary K; Towner, Mary C; Kress, Howard C; Alam, Nurul
2013-05-14
The demographic transition is an ongoing global phenomenon in which high fertility and mortality rates are replaced by low fertility and mortality. Despite intense interest in the causes of the transition, especially with respect to decreasing fertility rates, the underlying mechanisms motivating it are still subject to much debate. The literature is crowded with competing theories, including causal models that emphasize (i) mortality and extrinsic risk, (ii) the economic costs and benefits of investing in self and children, and (iii) the cultural transmission of low-fertility social norms. Distinguishing between models, however, requires more comprehensive, better-controlled studies than have been published to date. We use detailed demographic data from recent fieldwork to determine which models produce the most robust explanation of the rapid, recent demographic transition in rural Bangladesh. To rigorously compare models, we use an evidence-based statistical approach using model selection techniques derived from likelihood theory. This approach allows us to quantify the relative evidence the data give to alternative models, even when model predictions are not mutually exclusive. Results indicate that fertility, measured as either total fertility or surviving children, is best explained by models emphasizing economic factors and related motivations for parental investment. Our results also suggest important synergies between models, implicating multiple causal pathways in the rapidity and degree of recent demographic transitions.
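The model-selection approach described, quantifying the relative evidence the data give to non-nested competing models, can be sketched with AIC on synthetic data. The predictors and the "economic" vs "mortality" labels below are illustrative stand-ins, not the study's variables or its exact information criterion.

```python
import numpy as np

# Likelihood-based comparison of competing linear models via AIC.
# Synthetic data: fertility is generated from the "economic" predictor, so
# the economic model should receive more support (lower AIC).

def fit_aic(X, y):
    """OLS fit; AIC from the Gaussian log-likelihood of the residuals."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / n
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return 2 * (k + 1) - 2 * loglik   # parameters: k coefficients + variance

rng = np.random.default_rng(1)
n = 300
education_cost = rng.normal(0, 1, n)   # "economic" predictor (drives the data)
child_mortality = rng.normal(0, 1, n)  # "mortality" predictor (irrelevant here)
fertility = 3.0 - 0.8 * education_cost + rng.normal(0, 0.5, n)

ones = np.ones(n)
aic_econ = fit_aic(np.column_stack([ones, education_cost]), fertility)
aic_mort = fit_aic(np.column_stack([ones, child_mortality]), fertility)
# Lower AIC = more support; differences in AIC quantify relative evidence
# even when the models' predictions are not mutually exclusive.
```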
Roszniowski, Bartosz; Latka, Agnieszka; Maciejewska, Barbara; Vandenheuvel, Dieter; Olszak, Tomasz; Briers, Yves; Holt, Giles S; Valvano, Miguel A; Lavigne, Rob; Smith, Darren L; Drulis-Kawa, Zuzanna
2017-02-01
Burkholderia phage AP3 (vB_BceM_AP3) is a temperate virus of the Myoviridae and the Peduovirinae subfamily (P2likevirus genus). This phage specifically infects multidrug-resistant clinical Burkholderia cenocepacia lineage IIIA strains commonly isolated from cystic fibrosis patients. AP3 exhibits high pairwise nucleotide identity (61.7 %) to Burkholderia phage KS5, specific to the same B. cenocepacia host, and has 46.7-49.5 % identity to phages infecting other species of Burkholderia. The lysis cassette of these related phages has a similar organization (putative antiholin, putative holin, endolysin, and spanins) and shows 29-98 % homology between specific lysis genes, in contrast to Enterobacteria phage P2, the hallmark phage of this genus. The AP3 and KS5 lysis genes have conserved locations and high amino acid sequence similarity. The AP3 bacteriophage particles remain infective up to 5 h at pH 4-10 and are stable at 60 °C for 30 min, but are sensitive to chloroform, with no remaining infective particles after 24 h of treatment. AP3 lysogeny can occur by stable genomic integration and by pseudo-lysogeny. The lysogenic bacterial mutants did not exhibit any significant changes in virulence compared to the wild-type host strain when tested in the Galleria mellonella wax moth model. Moreover, AP3 treatment of larvae infected with B. cenocepacia revealed a significant increase (P < 0.0001) in larval survival in comparison to AP3-untreated infected larvae. AP3 showed robust lytic activity, as evidenced by its broad host range, the absence of increased virulence in lysogenic isolates, the lack of bacterial gene disruption conditioned by the bacterial tRNA downstream integration site, and the absence of detected toxin sequences. These data suggest that the AP3 phage is a promising potent agent against bacteria belonging to the most common B. cenocepacia IIIA lineage strains.
Energy Technology Data Exchange (ETDEWEB)
Usherwood, James R [Structure and Motion Lab., Royal Veterinary College, North Mymms, Hatfield, Herts AL9 7TA (United Kingdom)], E-mail: jusherwood@rvc.ac.uk
2009-03-01
Predictions from aerodynamic theory often match biological observations very poorly. Many insects and several bird species habitually hover, frequently flying at low advance ratios. According to helicopter-based aerodynamic theory, wings functioning predominantly for hovering should operate at low angles of attack, even for quite small insects. However, insect wings operate at very high angles of attack during hovering; a reduction in angle of attack should result in considerable energetic savings. Here, I consider the possibility that the selection of kinematics is constrained from being aerodynamically optimal by the inertial power requirements of flapping. Potential increases in aerodynamic efficiency with lower angles of attack during hovering may be outweighed by increases in inertial power due to the associated increases in flapping frequency. For simple hovering, traditional rotary-winged helicopter-like micro air vehicles would be more efficient than their flapping biomimetic counterparts. However, flapping may confer advantages in terms of top speed and manoeuvrability. If flapping-winged micro air vehicles are required to hover or loiter more efficiently, dragonflies and mayflies suggest biomimetic solutions.
Efficient family-based model checking via variability abstractions
DEFF Research Database (Denmark)
Dimovski, Aleksandar; Al-Sibahi, Ahmad Salim; Brabrand, Claus
2016-01-01
Many software systems are variational: they can be configured to meet diverse sets of requirements. They can produce a (potentially huge) number of related systems, known as products or variants, by systematically reusing common parts. For variational models (variational systems or families of related systems), specialized family-based model checking algorithms allow efficient verification of multiple variants, simultaneously, in a single run. These algorithms, implemented in a tool Snip, scale much better than "the brute force" approach, where all individual systems are verified using the standard version of (single-system) Spin. The variability abstractions are first defined as Galois connections on semantic domains. We then show how to use them for defining abstract family-based model checking, where a variability model is replaced with an abstract version of it...
Efficient Algorithms for Parsing the DOP Model
Goodman, J
1996-01-01
Excellent results have been reported for Data-Oriented Parsing (DOP) of natural language texts (Bod, 1993). Unfortunately, existing algorithms are both computationally intensive and difficult to implement. Previous algorithms are expensive due to two factors: the exponential number of rules that must be generated and the use of a Monte Carlo parsing algorithm. In this paper we solve the first problem by a novel reduction of the DOP model to a small, equivalent probabilistic context-free grammar. We solve the second problem by a novel deterministic parsing strategy that maximizes the expected number of correct constituents, rather than the probability of a correct parse tree. Using the optimizations, experiments yield a 97% crossing brackets rate and 88% zero crossing brackets rate. This differs significantly from the results reported by Bod, and is comparable to results from a duplication of Pereira and Schabes's (1992) experiment on the same data. We show that Bod's results are at least partially due to an e...
A. Zerga; B. Benyoucef; J.-P. Charles
1998-01-01
Single and double exponential models are compared in order to determine the model best suited to optimizing solar cell efficiency. It is shown that the single exponential model (SEM) has some shortcomings for efficiency optimization. The value of the double exponential model for optimizing efficiency and achieving an adequate simulation of solar cell operation is demonstrated by means of plotted I-V characteristics.
Models for efficient integration of solar energy
DEFF Research Database (Denmark)
Bacher, Peder
...Finally, a procedure for identification of a suitable model for the heat dynamics of a building is presented. The applied models are grey-box models based on stochastic differential equations, and the identification is carried out with likelihood-ratio tests. The models can be used for providing detailed...
Siegfried, Robert
2014-01-01
Robert Siegfried presents a framework for efficient agent-based modeling and simulation of complex systems. He compares different approaches for describing structure and dynamics of agent-based models in detail. Based on this evaluation the author introduces the "General Reference Model for Agent-based Modeling and Simulation" (GRAMS). Furthermore he presents parallel and distributed simulation approaches for execution of agent-based models -from small scale to very large scale. The author shows how agent-based models may be executed by different simulation engines that utilize underlying hard
Directory of Open Access Journals (Sweden)
Dolores Pérez
BACKGROUND: Among extremophiles, halophiles are defined as microorganisms adapted to live and thrive in diverse extreme saline environments. These extremophilic microorganisms constitute a source of hydrolases with great biotechnological applications. The interest in using extremozymes from halophiles in industrial applications lies in their resistance to organic solvents and extreme temperatures. Marinobacter lipolyticus SM19 is a moderately halophilic bacterium, previously isolated from a saline habitat in southern Spain, showing lipolytic activity. METHODS AND FINDINGS: A lipolytic enzyme from the halophilic bacterium Marinobacter lipolyticus SM19 was isolated. This enzyme, designated LipBL, was expressed in Escherichia coli. LipBL is a protein of 404 amino acids with a molecular mass of 45.3 kDa and high identity to class C β-lactamases. LipBL was purified and biochemically characterized. The temperature for its maximal activity was 80°C and the pH optimum determined at 25°C was 7.0, showing optimal activity without sodium chloride while maintaining 20% activity over a wide range of NaCl concentrations. This enzyme exhibited high activity against short- to medium-length acyl chain substrates, although it also hydrolyzes olive oil and fish oil. Fish oil hydrolysis using LipBL results in an enrichment of free eicosapentaenoic acid (EPA), but not docosahexaenoic acid (DHA), relative to the levels present in fish oil. To improve its stability for use in industrial processes, LipBL was immobilized on different supports. The derivatives immobilized on CNBr-activated Sepharose were highly selective towards the release of EPA versus DHA. The enzyme is also active towards different chiral and prochiral esters. Exposure of LipBL to buffer-solvent mixtures showed that the enzyme had remarkable activity and stability in all organic solvents tested. CONCLUSIONS: In this study we isolated, purified, biochemically characterized and immobilized a lipolytic enzyme, LipBL, from the moderately halophilic bacterium Marinobacter lipolyticus SM19.
Effective and efficient model clone detection
DEFF Research Database (Denmark)
Störrle, Harald
2015-01-01
Code clones are a major source of software defects. Thus, it is likely that model clones (i.e., duplicate fragments of models) have a significant negative impact on model quality, and thus, on any software created based on those models, irrespective of whether the software is generated fully...... automatically (“MDD-style”) or hand-crafted following the blueprint defined by the model (“MBSD-style”). Unfortunately, however, model clones are much less well studied than code clones. In this paper, we present a clone detection algorithm for UML domain models. Our approach covers a much greater variety...... of model types than existing approaches while providing high clone detection rates at high speed....
Panáček, Aleš; Smékalová, Monika; Kilianová, Martina; Prucek, Robert; Bogdanová, Kateřina; Večeřová, Renata; Kolář, Milan; Havrdová, Markéta; Płaza, Grażyna Anna; Chojniak, Joanna; Zbořil, Radek; Kvítek, Libor
2015-12-28
The resistance of bacteria towards traditional antibiotics currently constitutes one of the most important health care issues with serious negative impacts in practice. Overcoming this issue can be achieved by using antibacterial agents with multimode antibacterial action. Silver nano-particles (AgNPs) are one of the well-known antibacterial substances showing such multimode antibacterial action. Therefore, AgNPs are suitable candidates for use in combinations with traditional antibiotics in order to improve their antibacterial action. In this work, a systematic study quantifying the synergistic effects of antibiotics with different modes of action and different chemical structures in combination with AgNPs against Escherichia coli, Pseudomonas aeruginosa and Staphylococcus aureus was performed. Employing the microdilution method as more suitable and reliable than the disc diffusion method, strong synergistic effects were shown for all tested antibiotics combined with AgNPs at very low concentrations of both antibiotics and AgNPs. No trends were observed for synergistic effects of antibiotics with different modes of action and different chemical structures in combination with AgNPs, indicating non-specific synergistic effects. Moreover, a very low amount of silver is needed for effective antibacterial action of the antibiotics, which represents an important finding for potential medical applications due to the negligible cytotoxic effect of AgNPs towards human cells at these concentration levels.
Quantitative modeling of Cerenkov light production efficiency from medical radionuclides.
Directory of Open Access Journals (Sweden)
Bradley J Beattie
There has been recent and growing interest in applying Cerenkov radiation (CR) for biological applications. Knowledge of the production efficiency and other characteristics of the CR produced by various radionuclides would help in assessing the feasibility of proposed applications and guide the choice of radionuclides. To generate this information, we developed models of CR production efficiency based on the Frank-Tamm equation and models of CR distribution based on Monte Carlo simulations of photon and β particle transport. All models were validated against direct measurements using multiple radionuclides and then applied to a number of radionuclides commonly used in biomedical applications. We show that two radionuclides, Ac-225 and In-111, which have been reported to produce CR in water, do not in fact produce CR directly. We also propose a simple means of using this information to calibrate high-sensitivity luminescence imaging systems, and show evidence suggesting that this calibration may be more accurate than methods in routine current use.
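The threshold behaviour behind these findings follows directly from the Frank-Tamm condition: a charged particle radiates only when its speed exceeds the phase velocity of light in the medium. A minimal sketch, using standard physics constants; the 400-700 nm window and the use of water (n ≈ 1.33) are illustrative assumptions, not the paper's exact model:

```python
import math

N_WATER = 1.33             # refractive index of water (illustrative medium)
ME_C2_KEV = 511.0          # electron rest energy in keV
ALPHA = 1.0 / 137.036      # fine-structure constant

# Threshold: Cerenkov light is emitted only if beta > 1/n.
beta_thr = 1.0 / N_WATER
gamma_thr = 1.0 / math.sqrt(1.0 - beta_thr ** 2)
t_thr_kev = ME_C2_KEV * (gamma_thr - 1.0)   # ~264 keV for electrons in water

def photons_per_cm(beta, lam1=400e-9, lam2=700e-9):
    """Frank-Tamm photon yield per cm of path over the window [lam1, lam2]."""
    if beta * N_WATER <= 1.0:
        return 0.0          # below threshold: no Cerenkov light at all
    per_m = 2 * math.pi * ALPHA * (1 / lam1 - 1 / lam2) \
            * (1 - 1 / (beta ** 2 * N_WATER ** 2))
    return per_m / 100.0

print(f"threshold kinetic energy: {t_thr_kev:.0f} keV")
print(f"yield at beta=1: {photons_per_cm(1.0):.0f} photons/cm")
```

An electron below roughly 264 keV in water emits no Cerenkov light at all, which is consistent with the paper's finding that some radionuclides reported to produce CR in water do not in fact produce it directly.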
Goldhaber, Dan; Chaplin, Duncan Dunbar
2015-01-01
In an influential paper, Jesse Rothstein (2010) shows that standard value-added models (VAMs) suggest implausible and large future teacher effects on past student achievement. This is the basis of a falsification test that "appears" to indicate bias in typical VAM estimates of teacher contributions to student learning on standardized…
A Trade Show Assessment and Selection Model for Creative Industries in the Fashion Sector
Directory of Open Access Journals (Sweden)
Afrin Fauzya Rizana
2017-07-01
The article identifies the criteria for choosing a trade show and develops a basic model of exhibition selection for creative-industry players before they decide to participate in a trade show. It is necessary to ensure that expenses in terms of business, money, and time will be worth the results. Based on a literature review and interviews, six criteria were used, namely location, booth position, organizer reputation, cost estimation, prestige, and reputation of other participants. After the selection criteria were identified, calculations were performed to compute the criteria weights using the AHP approach. Based on the weight calculations, booth position had the highest importance weight, followed by trade show location, organizer reputation, cost estimation, prestige, and reputation of other participants. The weight values are then used to calculate the trade show's predicted value. The predicted values generated by the model were compared to values from past data. The model has an accuracy rate of 89%, with no significant difference between the values generated by the model and the values from the past data.
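The AHP weighting step described above can be sketched as follows. The pairwise comparison matrix is purely illustrative (the study's elicited judgments are not reproduced here); the weights are the principal eigenvector of the matrix, and the consistency ratio checks that the judgments are coherent:

```python
import numpy as np

# Hypothetical pairwise comparison matrix for six criteria
# (booth position, location, organizer reputation, cost estimation,
#  prestige, reputation of other participants). Values are illustrative.
A = np.array([
    [1,   2,   3,   4,   5,   6],
    [1/2, 1,   2,   3,   4,   5],
    [1/3, 1/2, 1,   2,   3,   4],
    [1/4, 1/3, 1/2, 1,   2,   3],
    [1/5, 1/4, 1/3, 1/2, 1,   2],
    [1/6, 1/5, 1/4, 1/3, 1/2, 1],
])

# AHP priority weights = normalized principal eigenvector of A
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency ratio; RI = 1.24 is Saaty's random index for n = 6
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
cr = ci / 1.24

print("weights:", np.round(w, 3), "CR:", round(cr, 3))
```

A CR below 0.1 is conventionally taken to mean the pairwise judgments are consistent enough to trust the resulting weights.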
An Occupant Behavior Model for Building Energy Efficiency and Safety
Pan, L. L.; Chen, T.; Jia, Q. S.; Yuan, R. X.; Wang, H. T.; Ding, R.
2010-05-01
An occupant behavior model is suggested to improve building energy efficiency and safety. This paper provides a generic outline of the model, including the occupancy behavior abstraction, the model framework and primary structure, inputs and outputs, computer simulation results, and a summary and outlook. Using information technology, it is now possible to collect large amounts of occupancy information. Yet this provides only partial, historical information, so it is important to develop a model that gives a full view of the studied building and supports prediction. We used an infrared monitoring system installed at the front door of the Low Energy Demo Building (LEDB) at Tsinghua University in China to record the time variation of the total number of occupants in the LEDB; this information is used as input data for the model. An RFID system installed on the first floor provides the time variation of the occupants' locations in each region, and the collected data are used to validate the model. The simulation results show that the presented model provides a feasible framework for simulating occupants' behavior and predicting the time variation of the number of occupants in the building. Further development and application of the model are also discussed.
EFFICIENT PREDICTIVE MODELLING FOR ARCHAEOLOGICAL RESEARCH
Balla, A.; Pavlogeorgatos, G.; Tsiafakis, D.; Pavlidis, G.
2014-01-01
The study presents a general methodology for designing, developing and implementing predictive modelling for identifying areas of archaeological interest. The methodology is based on documented archaeological data and geographical factors, geospatial analysis and predictive modelling, and has been applied to the identification of possible Macedonian tombs’ locations in Northern Greece. The model was tested extensively and the results were validated using a commonly used predictive gain,...
Information, complexity and efficiency: The automobile model
Energy Technology Data Exchange (ETDEWEB)
Allenby, B. [Lucent Technologies (United States); Lawrence Livermore National Lab., CA (United States)]
1996-08-08
The new, rapidly evolving field of industrial ecology - the objective, multidisciplinary study of industrial and economic systems and their linkages with fundamental natural systems - provides strong ground for believing that a more environmentally and economically efficient economy will be more information intensive and complex. Information and intellectual capital will be substituted for the more traditional inputs of materials and energy in producing a desirable, yet sustainable, quality of life. While at this point this remains a strong hypothesis, the evolution of the automobile industry can be used to illustrate how such substitution may, in fact, already be occurring in an environmentally and economically critical sector.
Warren, Jessica; Owen, A Rhys; Glanvill, Amy; Francis, Asher; Maboni, Grazieli; Nova, Rodrigo J; Wapenaar, Wendela; Rees, Catherine; Tötemeyer, Sabine
2015-08-31
Listerial keratoconjunctivitis ('silage eye') is a widespread problem in ruminants, causing economic losses to farmers and impacting negatively on animal welfare. It results from direct entry of Listeria monocytogenes into the eye, often following consumption of contaminated silage. An isolation protocol for bovine conjunctival swabbing was developed and used to sample both infected and healthy bovine eyes (n=46). L. monocytogenes was isolated from only one healthy eye sample, suggesting that this organism can be present without causing disease. To initiate a study of this disease, an infection model was developed using isolated conjunctiva explants obtained from cattle eyes post slaughter. Conjunctivae were cultured and infected for 20 h with a range of L. monocytogenes isolates (n=11), including the healthy bovine eye isolate and also strains isolated from other bovine sources, such as milk or clinical infections. Two L. monocytogenes isolates (one from a healthy eye and one from a cattle abortion) were markedly less able to invade conjunctiva explants, but one of those was able to efficiently infect Caco-2 cells, indicating that it was fully virulent. These two isolates were also significantly more sensitive to lysozyme than most other isolates tested, suggesting that lysozyme resistance is an important factor when infecting bovine conjunctiva. In conclusion, we present the first bovine conjunctiva explant model for infection studies and demonstrate that clinical L. monocytogenes isolates from cases of bovine keratoconjunctivitis are able to infect these tissues.
EFFICIENCY MODELS OF THE CROSS-FUNCTIONAL TEAMS
Directory of Open Access Journals (Sweden)
Dinca Laura
2013-04-01
Cross-functional teams are a characteristic of the new organizational forms of enterprises, imposed by the complexity of the present environment. Regardless of their kind, the new organizational forms rely on cross-functional teams as a source of innovation and a reunion of competences. Only through cross-functional teams do the new organizational partnerships obtain flexibility in their actions and speed in their reactions. By their features, cross-functional teams should be differentiated from other kinds of work groups: what makes them different is the common effort directed at reaching the team's common objective. The present paper presents two research models for the efficiency of cross-functional teams, the input-process-output and input-mediator-outcome models. These show the factors on which managers of cross-functional teams should act to obtain superior economic performance.
Efficiently adapting graphical models for selectivity estimation
DEFF Research Database (Denmark)
Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.
2013-01-01
Query optimizers rely on statistical models that succinctly describe the underlying data. Models are used to derive cardinality estimates for intermediate relations, which in turn guide the optimizer to choose the best query execution plan. The quality of the resulting plan is highly dependent on the accuracy of these models. Typically, the selectivity of a conjunctive predicate is estimated as the product of the selectivities of the constituent predicates. However, this independence assumption is more often than not wrong, and is considered to be the most common cause of sub-optimal query execution plans chosen by modern query optimizers. We take a step towards a principled and practical approach to performing cardinality estimation without making the independence assumption. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution over all the attributes in the database into small, usually two-dimensional distributions, without a significant loss...
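A toy example of why the independence assumption fails on correlated attributes, and how a small two-dimensional joint distribution (the kind of factor a graphical-model approach maintains) fixes it; the table contents are made up:

```python
from collections import Counter

# Toy table with two perfectly correlated attributes: city determines country.
rows = [("Paris", "France")] * 80 + [("Berlin", "Germany")] * 20

def sel(pred):
    """Exact selectivity of a predicate over the table."""
    return sum(pred(r) for r in rows) / len(rows)

s_city = sel(lambda r: r[0] == "Paris")                       # 0.8
s_country = sel(lambda r: r[1] == "France")                   # 0.8
s_both = sel(lambda r: r[0] == "Paris" and r[1] == "France")  # 0.8 (true answer)

# Independence assumption: multiply per-predicate selectivities -> 0.64, wrong.
independence_estimate = s_city * s_country

# A two-dimensional joint distribution over (city, country) is exact here.
joint = Counter(rows)
joint_estimate = joint[("Paris", "France")] / len(rows)

print(independence_estimate, joint_estimate, s_both)
```

On correlated attributes the multiplicative estimate is systematically biased low, and the error compounds across joins, which is exactly the failure mode the abstract describes.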
Efficient Modelling Methodology for Reconfigurable Underwater Robots
DEFF Research Database (Denmark)
Nielsen, Mikkel Cornelius; Blanke, Mogens; Schjølberg, Ingrid
2016-01-01
This paper considers the challenge of applying reconfigurable robots in an underwater environment. The main result presented is the development of a model for a system comprised of N, possibly heterogeneous, robots dynamically connected to each other and moving with 6 Degrees of Freedom (DOF). This paper presents an application of the Udwadia-Kalaba Equation for modelling the Reconfigurable Underwater Robots. The constraints developed to enforce the rigid connection between robots in the system are derived through restrictions on relative distances and orientations. To avoid singularities in the orientation and, thereby, allow the robots to undertake any relative configuration, the attitude is represented in Euler parameters.
Modelling and analysis of solar cell efficiency distributions
Wasmer, Sven; Greulich, Johannes
2017-08-01
We present an approach to model the distribution of solar cell efficiencies achieved in production lines, based on numerical simulations, metamodeling and Monte Carlo simulations. We validate our methodology using the example of an industrially feasible p-type multicrystalline silicon "passivated emitter and rear cell" process. Applying the metamodel, we investigate the impact of each input parameter on the distribution of cell efficiencies in a variance-based sensitivity analysis, identifying the parameters and processes that need to be improved and controlled most accurately. We show that if these could be optimized, the mean cell efficiency of our examined cell process would increase from 17.62% ± 0.41% to 18.48% ± 0.09%. As the method relies on advanced characterization and simulation techniques, we furthermore introduce a simplification that enhances applicability by requiring only two common measurements of finished cells. The presented approaches can be especially helpful when ramping up production, but can also be applied to enhance established manufacturing.
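The metamodel-plus-Monte-Carlo idea can be sketched in a few lines: sample the production scatter of the input parameters, push each sample through a cheap surrogate of the cell simulation, and read off the resulting efficiency distribution. The response surface and parameter scatter below are invented for illustration and are not the paper's fitted metamodel:

```python
import random
import statistics

random.seed(0)

# Hypothetical metamodel: a toy response surface mapping two process
# parameters (emitter sheet resistance, bulk lifetime) to cell efficiency.
# Coefficients are illustrative, not fitted to any real data.
def efficiency(r_sheet, tau_bulk):
    return 17.6 - 0.004 * (r_sheet - 90) ** 2 + 0.3 * (tau_bulk - 1.0)

# Monte Carlo: sample the assumed production scatter of the inputs
samples = []
for _ in range(10_000):
    r = random.gauss(90, 5)       # sheet resistance, ohm/sq
    tau = random.gauss(1.0, 0.1)  # bulk lifetime, ms
    samples.append(efficiency(r, tau))

print(f"mean = {statistics.mean(samples):.2f}%, "
      f"std = {statistics.stdev(samples):.2f}%")
```

Tightening the input scatter (smaller standard deviations in the sampling step) narrows the output distribution and raises its mean, which is the mechanism behind the improvement the paper reports.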
Efficient CSL Model Checking Using Stratification
DEFF Research Database (Denmark)
Zhang, Lijun; Jansen, David N.; Nielson, Flemming
2012-01-01
For continuous-time Markov chains, the model-checking problem with respect to continuous-time stochastic logic (CSL) has been introduced and shown to be decidable by Aziz, Sanwal, Singhal and Brayton in 1996 [1, 2]. Their proof can be turned into an approximation algorithm with worse than exponential...
Business models for material efficiency services. Conceptualization and application
Energy Technology Data Exchange (ETDEWEB)
Halme, Minna; Anttonen, Markku; Kuisma, Mika; Kontoniemi, Nea [Helsinki School of Economics, Department of Marketing and Management, P.O. Box 1210, 00101 Helsinki (Finland); Heino, Erja [University of Helsinki, Department of Biological and Environmental Sciences (Finland)
2007-06-15
Despite the abundant research on material flows and the growing recognition of the need to dematerialize the economy, business enterprises are still not making the best possible use of the many opportunities for material efficiency improvements. This article proposes one possible solution: material efficiency services provided by outside suppliers. It also introduces a conceptual framework for the analysis of different business models for eco-efficient services and applies the framework to material efficiency services. Four business models are outlined and their feasibility is studied from an empirical vantage point. In contrast to much of the previous research, special emphasis is laid on the financial aspects. It appears that the most promising business models are 'material efficiency as additional service' and 'material flow management service'. Depending on the business model, prominent material efficiency service providers range from large companies that offer multiple products and/or services to smaller, specialized providers. Potential clients (users) typically lack the resources (expertise, management's time or initial funds) to conduct material efficiency improvements themselves. Customers are more likely to use material efficiency services that relate to support materials or side-streams than those at the core of production. Potential client organizations with a strategy of outsourcing support activities, and with experience of outsourcing, are keener to use material efficiency services. (author)
Efficient vector hysteresis modeling using rotationally coupled step functions
Energy Technology Data Exchange (ETDEWEB)
Adly, A.A., E-mail: adlyamr@gmail.com; Abd-El-Hafiz, S.K., E-mail: sabdelhafiz@gmail.com
2012-05-01
Vector hysteresis models are usually used as sub-modules of field computation software tools. When dealing with a massive field computation problem, computational efficiency and practicality of such models become crucial. In this paper, generalization of a recently proposed computationally efficient vector hysteresis model based upon interacting step functions is presented. More specifically, the model is generalized to cover vector hysteresis modeling of both isotropic and anisotropic magnetic media. Model configuration details as well as experimental testing and simulation results are given in the paper.
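A minimal scalar illustration of hysteresis built from step (relay) operators; the vector generalization and the rotational coupling of the paper are omitted, and the switching thresholds are arbitrary:

```python
# Each relay is a step operator that switches to +1 above its "up" threshold
# and to -1 below its "down" threshold, remembering its state in between.
class Relay:
    def __init__(self, up, down):
        self.up, self.down = up, down   # switching fields, up > down
        self.state = -1                 # start negatively saturated

    def apply(self, h):
        if h >= self.up:
            self.state = 1
        elif h <= self.down:
            self.state = -1
        return self.state

# A small ensemble of relays with spread-out thresholds (illustrative values)
relays = [Relay(u, u - 1.0) for u in (0.2, 0.5, 0.8)]

def magnetization(h):
    """Normalized output: average state of the relay ensemble at field h."""
    return sum(r.apply(h) for r in relays) / len(relays)

# Sweep the field up, then back down: the two branches disagree at h = 0,
# which is exactly the memory (remanence) effect of hysteresis.
up_branch = [magnetization(h) for h in (-2, 0, 2)]
down_branch = [magnetization(h) for h in (2, 0, -2)]
print(up_branch, down_branch)
```

Because each relay is a step function, evaluating the whole model is a cheap sum, which is the computational appeal the abstract points to for massive field computations.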
An Efficient Virtual Trachea Deformation Model
Directory of Open Access Journals (Sweden)
Cui Tong
2016-01-01
In this paper, we present a virtual tactile model with a physically based skeleton to simulate force and deformation between a rigid tool and a soft organ. When the virtual trachea is handled, a skeleton model suitable for interactive environments is established, consisting of ligament layers, cartilage rings and muscular bars. In this skeleton, the contact force goes through the ligament layer and produces load effects at the joints, which connect the ligament layer and the cartilage rings. Because the shape deformation is nonlinear inside the local neighbourhood of a contact region, the RBF method is applied to modify the result of the linear global shape deformation by adding the nonlinear local effect. Users are able to handle the virtual trachea, and results from examples with the mechanical properties of the human trachea are given to demonstrate the effectiveness of the approach.
Efficient Electromagnetic Modelling of Complex Structures
Tobon Vasquez, Jorge Alberto
2014-01-01
Part 1. Space vehicles re-entering Earth's atmosphere produce a shock wave, which in turn results in a bow of plasma around the vehicle body. This plasma significantly affects all radio links between the vehicle and the ground, since the electron plasma frequency reaches beyond several GHz. In this work, a model of the propagation-in-plasma problem is developed. The radiofrequency propagation from/to antennae installed aboard the vehicle to the ground stations (or Data Relay Satellites) can be predicted...
Efficient Smoothing for Boundary Value Models
1989-12-29
Xu, Meiyu; Li, Lina; Ohtsu, Hiroshi; Pittenger, Christopher
2015-05-19
Tics, such as are seen in Tourette syndrome (TS), are common and can cause profound morbidity, but they are poorly understood. Tics are potentiated by psychostimulants, stress, and sleep deprivation. Mutations in the gene histidine decarboxylase (Hdc) have been implicated as a rare genetic cause of TS, and Hdc knockout mice have been validated as a genetic model that recapitulates phenomenological and pathophysiological aspects of the disorder. Tic-like stereotypies in this model have not been observed at baseline but emerge after acute challenge with the psychostimulant d-amphetamine. We tested the ability of an acute stressor to stimulate stereotypies in this model, using tone fear conditioning. Hdc knockout mice acquired conditioned fear normally, as manifested by freezing during the presentation of a tone 48 h after it had been paired with a shock. During the 30 min following tone presentation, knockout mice showed increased grooming. Heterozygotes exhibited normal freezing and intermediate grooming. These data validate a new paradigm for the examination of tic-like stereotypies in animals without pharmacological challenge and enhance the face validity of the Hdc knockout mouse as a pathophysiologically grounded model of tic disorders.
Small GSK-3 Inhibitor Shows Efficacy in a Motor Neuron Disease Murine Model Modulating Autophagy.
de Munck, Estefanía; Palomo, Valle; Muñoz-Sáez, Emma; Perez, Daniel I; Gómez-Miguel, Begoña; Solas, M Teresa; Gil, Carmen; Martínez, Ana; Arahuetes, Rosa M
2016-01-01
Amyotrophic lateral sclerosis (ALS) is a progressive motor neuron degenerative disease that has no effective treatment to date. Drug discovery has been hampered by the lack of knowledge of its molecular etiology, together with the limited animal models available for research. Recently, a motor neuron disease animal model was developed using β-N-methylamino-L-alanine (L-BMAA), a neurotoxic amino acid linked to the appearance of ALS. In the present work, the neuroprotective role of VP2.51, a small heterocyclic GSK-3 inhibitor, is analysed in this novel murine model, together with an analysis of autophagy. Daily administration of VP2.51 for two weeks, starting the first day after L-BMAA treatment, leads to total recovery of neurological symptoms and prevents the activation of autophagic processes in rats. These results show that the L-BMAA murine model can be used to test the efficacy of new drugs. In addition, the results confirm the therapeutic potential of GSK-3 inhibitors, and especially VP2.51, for future disease-modifying treatment of motor neuron disorders like ALS.
PMID:27631495
An Empirical Study of Efficiency and Accuracy of Probabilistic Graphical Models
DEFF Research Database (Denmark)
Nielsen, Jens Dalgaard; Jaeger, Manfred
2006-01-01
In this paper we compare Naïve Bayes (NB) models, general Bayes Net (BN) models and Probabilistic Decision Graph (PDG) models w.r.t. accuracy and efficiency. As the basis for our analysis we use graphs of size vs. likelihood that show the theoretical capabilities of the models. We also measure...
MTO1-deficient mouse model mirrors the human phenotype showing complex I defect and cardiomyopathy.
Directory of Open Access Journals (Sweden)
Lore Becker
Recently, mutations in the mitochondrial translation optimization factor 1 gene (MTO1) were identified as causative in children with hypertrophic cardiomyopathy, lactic acidosis and respiratory chain defect. Here, we describe an MTO1-deficient mouse model generated by gene trap mutagenesis that mirrors the human phenotype remarkably well. As in patients, the most prominent signs and symptoms were cardiovascular and included bradycardia and cardiomyopathy. In addition, the mutant mice showed a marked worsening of arrhythmias during induction and reversal of anaesthesia. The detailed morphological and biochemical workup of murine hearts indicated that the myocardial damage was due to complex I deficiency and mitochondrial dysfunction. In contrast, neurological examination was largely normal in Mto1-deficient mice. A translational consequence of this mouse model may be to caution against anaesthesia-related cardiac arrhythmias, which may be fatal in patients.
Razavi, S.; Anderson, D.; Martin, P.; MacMillan, G.; Tolson, B.; Gabriel, C.; Zhang, B.
2012-12-01
A computationally intensive groundwater modelling case study developed with the FEFLOW software is used to evaluate the proposed methodology. Multiple surrogates of this computationally intensive model, with different levels of fidelity, are created and applied. The dynamically dimensioned search (DDS) optimization algorithm is used as the search engine in the surrogate-enabled calibration framework. Results show that this framework can substantially reduce the number of original model evaluations required for calibration by intelligently utilizing faster-to-run surrogates in the course of optimization. Results also demonstrate that the compromise between efficiency (reduced run time) and fidelity of a surrogate model is critically important to the success of the framework, as a surrogate with unreasonably low fidelity, despite being fast, might be quite misleading in calibration of the original model of interest.
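The surrogate-screening idea described in this abstract can be sketched as a simple loop: a cheap low-fidelity surrogate filters candidate parameter sets, and only promising candidates are passed to the expensive model. A minimal illustration, with hypothetical objective functions standing in for the FEFLOW model (the perturbation schedule here is a simplification of the published DDS rule):

```python
import random

def expensive_model(x):
    # Stand-in for the slow, high-fidelity simulation (hypothetical objective).
    return sum((xi - 0.3) ** 2 for xi in x)

def surrogate(x):
    # Cheap low-fidelity approximation: fast to evaluate, slightly biased.
    return sum((xi - 0.28) ** 2 for xi in x) + 0.01

def dds_with_surrogate(dim=4, budget=200, full_evals=20, r=0.2, seed=1):
    """DDS-style search that screens candidates on the surrogate and spends
    expensive evaluations only on the most promising ones."""
    rng = random.Random(seed)
    best = [rng.random() for _ in range(dim)]
    best_f = expensive_model(best)
    spent = 1  # expensive evaluations used so far
    for i in range(budget):
        p = 1.0 - i / budget  # perturb fewer dimensions as the search matures
        cand = [min(1.0, max(0.0, xi + rng.gauss(0.0, r)))
                if rng.random() < p else xi for xi in best]
        # Only candidates that look better on the surrogate earn a full run.
        if surrogate(cand) < surrogate(best) and spent < full_evals:
            f = expensive_model(cand)
            spent += 1
            if f < best_f:
                best, best_f = cand, f
    return best_f, spent

best_f, spent = dds_with_surrogate()
```

Here at most 20 of the 200 proposals ever reach the expensive model; the abstract's fidelity caveat corresponds to the bias term built into `surrogate`.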
Efficient Approach for Semantic Web Searching Using Markov Model
Directory of Open Access Journals (Sweden)
Pradeep Salve
2012-09-01
Semantic search scans web pages for the required information and filters out unnecessary pages using advanced algorithms. Web pages are unreliable in answering intelligent semantic queries from the user, because the confidence of their answers depends on the information available within the pages themselves. To obtain trusted results, semantic web search engines must locate pages that hold such information, including domain knowledge. The layered model of the Semantic Web addresses this problem by supporting semantic search based on a hidden Markov model (HMM) for optimizing search-engine tasks, focusing in particular on constructing a new model structure to improve the extraction of web pages. We classify the search results using several search engines and different search keywords, obtaining a significant improvement in search accuracy. The semantic web is segmented from the information elicited from various websites, exploiting their semi-structured character, in order to improve the accuracy and efficiency of the transition matrix. The approach also optimizes the observation probability distribution and the estimation accuracy of the state transition sequence by adopting a "voting strategy" and a modified Viterbi algorithm. In this paper, we present a hybrid system that combines hidden Markov models with rich Markov models, showing the effectiveness of combining implicit search with rich Markov models in a recommender system.
Improving hospital efficiency: a process model of organizational change commitments.
Nigam, Amit; Huising, Ruthanne; Golden, Brian R
2014-02-01
Improving hospital efficiency is a critical goal for managers and policy makers. We draw on participant observation of the perioperative coaching program in seven Ontario hospitals to develop knowledge of the process by which the content of change initiatives to increase hospital efficiency is defined. The coaching program was a change initiative involving the use of external facilitators with the goal of increasing perioperative efficiency. Focusing on the role of subjective understandings in shaping initiatives to improve efficiency, we show that physicians, nurses, administrators, and external facilitators all have differing frames of the problems that limit efficiency, and propose different changes that could enhance efficiency. Dynamics of strategic and contested framing ultimately shaped hospital change commitments. We build on work identifying factors that enhance the success of change efforts to improve hospital efficiency, highlighting the importance of subjective understandings and the politics of meaning-making in defining what hospitals change.
Directory of Open Access Journals (Sweden)
Cotter Finbarr E
2009-08-01
Background: Down syndrome (DS), caused by trisomy of human chromosome 21 (HSA21), is the most common genetic birth defect. Congenital heart defects (CHD) are seen in 40% of DS children, and >50% of all atrioventricular canal defects in infancy are caused by trisomy 21, but the causative genes remain unknown. Results: Here we show that aberrant adhesion and proliferation of DS cells can be reproduced using a transchromosomic model of DS (mouse fibroblasts bearing supernumerary HSA21). We also demonstrate a decrease in cell migration in transchromosomic cells, independent of their adhesion properties. We show that the cell-autonomous proteome response to the presence of collagen VI in the extracellular matrix is strongly affected by trisomy 21. Conclusion: This set of experiments establishes a new model system for genetic dissection of the specific HSA21 gene-overdose contributions to aberrant cell migration, adhesion, proliferation and the specific proteome response to collagen VI, cellular phenotypes linked to the pathogenesis of CHD.
Estimating carbon and showing impacts of drought using satellite data in regression-tree models
Boyte, Stephen; Wylie, Bruce K.; Howard, Danny; Dahal, Devendra; Gilmanov, Tagir G.
2018-01-01
Integrating spatially explicit biogeophysical and remotely sensed data into regression-tree models enables the spatial extrapolation of training data over large geographic spaces, allowing a better understanding of broad-scale ecosystem processes. The current study presents annual gross primary production (GPP) and annual ecosystem respiration (RE) for 2000–2013 in several short-statured vegetation types using carbon flux data from towers that are located strategically across the conterminous United States (CONUS). We calculate carbon fluxes (annual net ecosystem production [NEP]) for each year in our study period, which includes 2012 when drought and higher-than-normal temperatures influence vegetation productivity in large parts of the study area. We present and analyse carbon flux dynamics in the CONUS to better understand how drought affects GPP, RE, and NEP. Model accuracy metrics show strong correlation coefficients (r ≥ 94%) between training and estimated data for both GPP and RE. Overall, average annual GPP, RE, and NEP are relatively constant throughout the study period except during 2012 when almost 60% less carbon is sequestered than normal. These results allow us to conclude that this modelling method effectively estimates carbon dynamics through time and allows the exploration of impacts of meteorological anomalies and vegetation types on carbon dynamics.
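The carbon balance reported above reduces to NEP = GPP − RE, with model accuracy judged by the correlation coefficient r. A minimal sketch with illustrative numbers (not the paper's data; the regression-tree machinery itself is omitted):

```python
def nep(gpp, re):
    """Annual net ecosystem production as the balance NEP = GPP - RE."""
    return [g - r for g, r in zip(gpp, re)]

def pearson_r(a, b):
    """Correlation coefficient used as the model-accuracy metric."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

# Illustrative annual values (g C m^-2 yr^-1); the last year mimics a drought.
gpp = [900.0, 880.0, 910.0, 620.0]
re = [700.0, 690.0, 705.0, 600.0]
annual_nep = nep(gpp, re)
```

The drought year shows a sharp NEP drop even though both GPP and RE decline, mirroring the 2012 pattern described in the abstract.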
Modeling solar cells: A method for improving their efficiency
Energy Technology Data Exchange (ETDEWEB)
Morales-Acevedo, Arturo, E-mail: amorales@solar.cinvestav.mx [Centro de Investigacion y de Estudios Avanzados del IPN, Electrical Engineering Department, Avenida IPN No. 2508, 07360 Mexico, D.F. (Mexico); Hernandez-Como, Norberto; Casados-Cruz, Gaspar [Centro de Investigacion y de Estudios Avanzados del IPN, Electrical Engineering Department, Avenida IPN No. 2508, 07360 Mexico, D.F. (Mexico)
2012-09-20
After a brief discussion of the theoretical basis for simulating solar cells and of the available programs for doing so, we discuss two examples that show the importance of numerical simulation of solar cells. We concentrate on silicon Heterojunction Intrinsic Thin film aSi/cSi (HIT) and CdS/CuInGaSe2 (CIGS) solar cells. In the first case, numerical simulation indicates that there is an optimum transparent conducting oxide (TCO) to be used in contact with the p-type aSi:H emitter layer, although many experimental researchers might assume that the results are similar regardless of the TCO film used. It is shown that high work function TCO materials such as ZnO:Al are much better than lower work function films such as ITO: HIT solar cells made with low work function TCO layers (<4.8 eV) will never be able to reach the high efficiencies already reported experimentally. We also discuss that simulations of CIGS solar cells by different groups predict efficiencies around 18-19% or even less, i.e. below the record efficiency reported experimentally (20.3%). In addition, the experimental band-gap which is optimum in this case is around 1.2 eV, while several theoretical results predict a higher optimum band-gap (1.4-1.5 eV). This means that there are other effects not included in most of the simulation models developed to date. One of them is the possible presence of an interfacial (inversion) layer between CdS and CIGS. It is shown that this inversion layer might explain the smaller observed optimum band-gap, though some efficiency is lost. Another possible explanation for the higher experimental efficiency is a variation of Ga concentration in the CIGS film causing a gradual variation of the band-gap. This band-gap grading might help improve the open-circuit voltage and, if appropriately done, can also enhance the photo-current density.
Efficiency of Iranian forest industry based on DEA models
Institute of Scientific and Technical Information of China (English)
Soleiman Mohammadi Limaei
2013-01-01
Data Envelopment Analysis (DEA) is a mathematical technique to assess the relative efficiencies of decision making units (DMUs). The efficiency of 14 Iranian forest companies and forest management units was investigated in 2010. Efficiency of the companies was estimated by using a traditional DEA model and a two-stage DEA model. Traditional DEA models consider all DMU activities as a black box and ignore the intermediate products, while two-stage models address intermediate processes. LINGO software was used for the analysis. Overall production was divided into two processes for analysis by the two-stage model: timber harvesting and marketing. Wilcoxon's signed-rank test was used to identify differences in average efficiency between the harvesting and marketing sub-processes. Weak performance in the harvesting sub-process was the cause of low efficiency in 2010. Companies such as Neka Chob and Kelardasht proved efficient at timber harvest, and the Neka Chob forest company scored highest in overall efficiency. Finally, the reference units were identified according to the results of the two-stage DEA analysis.
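The relative efficiency score that a traditional DEA model assigns to the unit under evaluation (DMU o, with m inputs x and s outputs y over n units) is commonly computed from the input-oriented envelopment program; a standard CCR formulation reads:

```latex
\min_{\theta,\,\lambda}\ \theta
\quad \text{s.t.} \quad
\sum_{j=1}^{n} \lambda_j x_{ij} \le \theta\, x_{io} \quad (i = 1,\dots,m), \qquad
\sum_{j=1}^{n} \lambda_j y_{rj} \ge y_{ro} \quad (r = 1,\dots,s), \qquad
\lambda_j \ge 0 .
```

A unit is efficient when the optimal θ equals 1; the two-stage variant applies such a program to each sub-process, with intermediate products linking the stages.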
Ozonolysis of Model Olefins-Efficiency of Antiozonants
Huntink, N.M.; Datta, Rabin; Talma, Auke; Noordermeer, Jacobus W.M.
2006-01-01
In this study, the efficiency of several potential long lasting antiozonants was studied by ozonolysis of model olefins. 2-Methyl-2-pentene was selected as a model for natural rubber (NR) and 5-phenyl-2-hexene as a model for styrene butadiene rubber (SBR). A comparison was made between the
Efficient modelling, generation and analysis of Markov automata
Timmer, Mark
2013-01-01
Quantitative model checking is concerned with the verification of both quantitative and qualitative properties over models incorporating quantitative information. Increases in expressivity of these models allow more types of systems to be analysed, but also raise the difficulty of their efficient analysis.
Urban eco-efficiency and system dynamics modelling
Energy Technology Data Exchange (ETDEWEB)
Hradil, P., Email: petr.hradil@vtt.fi
2012-06-15
Assessment of urban development is generally based on static models of economic, social or environmental impacts. More advanced dynamic models have been used mostly for prediction of population and employment changes as well as for other macro-economic issues. This feasibility study was arranged to test the potential of system dynamic modelling in assessing eco-efficiency changes during urban development. (orig.)
Models for estimation of land remote sensing satellites operational efficiency
Kurenkov, Vladimir I.; Kucherov, Alexander S.
2017-01-01
The paper deals with the problem of estimation of land remote sensing satellites operational efficiency. Appropriate mathematical models have been developed. Some results obtained with the help of the software worked out in Delphi programming support environment are presented.
Directory of Open Access Journals (Sweden)
Michael J. Pelosi
2014-12-01
Development teams and programmers must retain critical information about their work during work intervals and gaps in order to improve future performance when work resumes. Despite time lapses, project managers want to maximize coding efficiency and effectiveness. By developing a mathematically justified, practically useful, and computationally tractable quantitative and cognitive model of learning and memory retention, this study establishes calculations designed to maximize scheduling payoff and optimize developer efficiency and effectiveness.
Efficiency Of Different Teaching Models In Teaching Of Frisbee Ultimate
Directory of Open Access Journals (Sweden)
Žuffová Zuzana
2015-05-01
The aim of the study was to verify the efficiency of two frisbee ultimate teaching models at 8-year grammar schools relative to age. The experimental group used a game-based model (Teaching Games for Understanding, TGfU) and the control group the traditional model based on teaching techniques. Six groups of female students took part in the experiment: experimental group 1 (n=10, age=11.6), experimental group 2 (n=12, age=13.8), experimental group 3 (n=14, age=15.8), control group 1 (n=11, age=11.7), control group 2 (n=10, age=13.8) and control group 3 (n=9, age=15.8). Efficiency of the teaching models was evaluated based on game performance and special knowledge results. Game performance was evaluated using the Game Performance Assessment Instrument (GPAI) through video recording. To verify the level of knowledge, we used a knowledge test consisting of questions on the rules and tactics of frisbee ultimate. The Mann-Whitney U-test was used for statistical evaluation. Game performance assessment and knowledge level indicated higher efficiency of TGfU in general, though mostly statistically insignificant. Experimental groups 1 and 2 were significantly better in the indicator that evaluates the tactical aspect of game performance, decision making (p<0.05). Experimental group 3 was better in the indicator that evaluates skill execution, disc catching. The results showed that the students of the classes taught by the game-based model reached partially better game performance in general. Experimental groups achieved from 79.17% to 80% of correct answers relating to the rules and from 75% to 87.5% of correct answers relating to tactical knowledge in the knowledge test. Control groups achieved from 57.69% to 72.22% of correct answers relating to the rules and from 51.92% to 72.22% of correct answers relating to tactical knowledge in the knowledge test.
Evaluating Energy Efficiency Policies with Energy-Economy Models
Energy Technology Data Exchange (ETDEWEB)
Mundaca, Luis; Neij, Lena; Worrell, Ernst; McNeil, Michael A.
2010-08-01
The growing complexities of energy systems, environmental problems and technology markets are driving and testing most energy-economy models to their limits. To further advance bottom-up models from a multidisciplinary energy efficiency policy evaluation perspective, we review and critically analyse bottom-up energy-economy models and corresponding evaluation studies on energy efficiency policies to induce technological change. We use the household sector as a case study. Our analysis focuses on decision frameworks for technology choice, type of evaluation being carried out, treatment of market and behavioural failures, evaluated policy instruments, and key determinants used to mimic policy instruments. Although the review confirms criticism related to energy-economy models (e.g. unrealistic representation of decision-making by consumers when choosing technologies), they provide valuable guidance for policy evaluation related to energy efficiency. Different areas to further advance models remain open, particularly related to modelling issues, techno-economic and environmental aspects, behavioural determinants, and policy considerations.
An Efficient and Simplified Model for Forecasting using SRM
Directory of Open Access Journals (Sweden)
Hafiz Muhammad Shahzad Asif
2014-01-01
Learning from continuous financial systems plays a vital role in enterprise operations. One of the most sophisticated non-parametric supervised learning classifiers, the SVM (Support Vector Machine), provides robust and accurate results, but it may require intense computation and other resources. The heart of SLT (Statistical Learning Theory), the SRM (Structural Risk Minimization) principle, can also be used for model selection. In this paper, we focus on comparing the performance of model estimation using SRM with SVR (Support Vector Regression) for forecasting the retail sales of consumer products. The potential benefits of an accurate sales forecasting technique in businesses are immense. Retail sales forecasting is an integral part of strategic business planning in areas such as sales planning, marketing research, pricing, production planning and scheduling. Performance comparison of support vector regression with model selection using SRM shows results comparable to SVR but in a computationally efficient manner. This research targeted real-life data to draw its conclusions, after investigating computer-generated datasets for different types of model building.
Efficient Cluster Algorithm for CP(N-1) Models
Beard, B. B.; Pepe, M.; Riederer, S.; Wiese, U.-J.
2006-01-01
Despite several attempts, no efficient cluster algorithm has been constructed for CP(N-1) models in the standard Wilson formulation of lattice field theory. In fact, there is a no-go theorem that prevents the construction of an efficient Wolff-type embedding algorithm. In this paper, we construct an efficient cluster algorithm for ferromagnetic SU(N)-symmetric quantum spin systems. Such systems provide a regularization for CP(N-1) models in the framework of D-theory. We present detailed studies of the autocorrelations and find a dynamical critical exponent that is consistent with z = 0.
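The paper's algorithm targets SU(N) quantum spin systems in the D-theory framework; as a simpler illustration of the Wolff cluster idea it builds on, here is the classical version for the 2D Ising model (a sketch for illustration, not the authors' algorithm):

```python
import math
import random

def wolff_update(spins, L, beta, rng):
    """Grow and flip one Wolff cluster on an L x L periodic Ising lattice.
    An aligned neighbour joins the cluster with probability 1 - exp(-2*beta)."""
    p_add = 1.0 - math.exp(-2.0 * beta)
    start = rng.randrange(L * L)
    sign = spins[start]
    cluster = {start}
    stack = [start]
    while stack:
        site = stack.pop()
        x, y = site % L, site // L
        for nbr in ((x + 1) % L + y * L, (x - 1) % L + y * L,
                    x + ((y + 1) % L) * L, x + ((y - 1) % L) * L):
            if nbr not in cluster and spins[nbr] == sign and rng.random() < p_add:
                cluster.add(nbr)
                stack.append(nbr)
    for site in cluster:  # flip the whole cluster in one move
        spins[site] = -sign
    return len(cluster)

# Short run near the 2D Ising critical coupling beta_c ~ 0.4407.
rng = random.Random(0)
L = 16
spins = [rng.choice((-1, 1)) for _ in range(L * L)]
sizes = [wolff_update(spins, L, 0.4407, rng) for _ in range(200)]
```

Flipping large correlated regions in a single move is what suppresses critical slowing down, i.e. what drives the dynamical critical exponent toward z ≈ 0 as measured in the paper.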
Modeling of Methods to Control Heat-Consumption Efficiency
Tsynaeva, E. A.; Tsynaeva, A. A.
2016-11-01
In this work, consideration has been given to thermophysical processes in automated heat consumption control systems (AHCCSs) of buildings, flow diagrams of these systems, and mathematical models describing the thermophysical processes during the systems' operation; an analysis of adequacy of the mathematical models has been presented. A comparison has been made of the operating efficiency of the systems and the methods to control the efficiency. It has been determined that the operating efficiency of an AHCCS depends on its diagram and the temperature chart of central quality control (CQC) and also on the temperature of a low-grade heat source for the system with a heat pump.
Management Index Systems and Energy Efficiency Diagnosis Model for Power Plant: Cases in China
Directory of Open Access Journals (Sweden)
Jing-Min Wang
2016-01-01
In recent years, the energy efficiency of thermal power plants has contributed largely to that of the industry. A thorough understanding of influencing factors, as well as the establishment of a scientific and comprehensive diagnosis model, plays a key role in the operational efficiency and competitiveness of a thermal power plant. Drawing on domestic and international research on energy efficiency management, and based on the Cloud model and the data envelopment analysis (DEA) model, a qualitative and quantitative index system and a comprehensive diagnostic model (CDM) are constructed. To test the rationality and usability of the CDM, case studies of large-scale Chinese thermal power plants were conducted. In these cases, the CDM captures qualitative factors such as technology and management. The results show that, compared with a conventional model that considers only production operating parameters, the CDM adapts better to reality. It can provide entities with efficient instruments for energy efficiency diagnosis.
Rubber particle proteins, HbREF and HbSRPP, show different interactions with model membranes.
Berthelot, Karine; Lecomte, Sophie; Estevez, Yannick; Zhendre, Vanessa; Henry, Sarah; Thévenot, Julie; Dufourc, Erick J; Alves, Isabel D; Peruch, Frédéric
2014-01-01
The biomembrane surrounding rubber particles from Hevea latex is well known for its content of numerous allergenic proteins. HbREF (Hevb1) and HbSRPP (Hevb3) are major components bound to rubber particles, and they have been shown to be involved in rubber synthesis or quality (mass regulation), but their exact function remains to be determined. In this study we highlighted the different modes of interaction of both recombinant proteins with various membrane models (lipid monolayers, liposomes or supported bilayers, and multilamellar vesicles) that mimic the latex particle membrane. We combined various biophysical methods (polarization-modulation infrared reflection-absorption spectroscopy (PM-IRRAS)/ellipsometry, attenuated total reflectance Fourier-transform infrared (ATR-FTIR) spectroscopy, solid-state nuclear magnetic resonance (NMR), plasmon waveguide resonance (PWR), and fluorescence spectroscopy) to elucidate their interactions. The small rubber particle protein (SRPP) shows less affinity than the rubber elongation factor (REF) for the membranes but displays a kind of "covering" effect on the lipid headgroups without disturbing membrane integrity. Its structure is conserved in the presence of lipids. In contrast, REF demonstrates higher membrane affinity with changes in its aggregation properties: the amyloid nature of REF, which we previously reported, is not favored in the presence of lipids. REF binds and inserts into membranes. The membrane integrity is highly perturbed, and we suspect that REF is even able to remove lipids from the membrane, leading to the formation of mixed micelles. These two homologous proteins thus show affinity for all membrane models tested but differ markedly in their interaction features. This could imply differential roles on the surface of rubber particles.
Antiparasitic mebendazole shows survival benefit in 2 preclinical models of glioblastoma multiforme.
Bai, Ren-Yuan; Staedtke, Verena; Aprhys, Colette M; Gallia, Gary L; Riggins, Gregory J
2011-09-01
Glioblastoma multiforme (GBM) is the most common and aggressive brain cancer, and despite treatment advances, patient prognosis remains poor. During routine animal studies, we serendipitously observed that fenbendazole, a benzimidazole antihelminthic used to treat pinworm infection, inhibited brain tumor engraftment. Subsequent in vitro and in vivo experiments with benzimidazoles identified mebendazole as the more promising drug for GBM therapy. In GBM cell lines, mebendazole displayed cytotoxicity, with half-maximal inhibitory concentrations ranging from 0.1 to 0.3 µM. Mebendazole disrupted microtubule formation in GBM cells, and in vitro activity was correlated with reduced tubulin polymerization. Subsequently, we showed that mebendazole significantly extended mean survival up to 63% in syngeneic and xenograft orthotopic mouse glioma models. Mebendazole has been approved by the US Food and Drug Administration for parasitic infections, has a long track-record of safe human use, and was effective in our animal models with doses documented as safe in humans. Our findings indicate that mebendazole is a possible novel anti-brain tumor therapeutic that could be further tested in clinical trials.
Keeney, Paula M; Dunham, Lisa D; Quigley, Caitlin K; Morton, Stephanie L; Bergquist, Kristen E; Bennett, James P
2009-12-01
Sporadic Parkinson's disease (sPD) is a nervous system-wide disease that presents with a bradykinetic movement disorder and frequently progresses to include depression and cognitive impairment. Cybrid models of sPD are based on expression of sPD platelet mitochondrial DNA (mtDNA) in neural cells and demonstrate some similarities to sPD brains. In sPD and CTL cybrids we characterized aspects of mitochondrial biogenesis, mtDNA genomics, composition of the respirasome and the relationships among isolated mitochondrial and intact cell respiration. Cybrid mtDNA levels varied and correlated with expression of PGC-1 alpha, a transcriptional co-activator that regulates mitochondrial biogenesis. Levels of mtDNA heteroplasmic mutations were asymmetrically distributed across the mitochondrial genome; numbers of heteroplasmies were more evenly distributed. Neither levels nor numbers of heteroplasmies distinguished sPD from CTL. sPD cybrid mitochondrial ETC subunit protein levels were not altered. Isolated mitochondrial complex I respiration rates showed limited correlation with whole-cell complex I respiration rates in both sPD and CTL cybrids. Intact cell respiration during the normoxic-anoxic transition yielded K(m) values for oxygen that related directly to respiration rates in CTL but not in sPD cell lines. Both sPD and CTL cybrid cells are substantially heterogeneous in mitochondrial genomic and physiologic properties. Our results suggest that mtDNA depletion may occur in sPD neurons and could reflect impairment of mitochondrial biogenesis. Cybrids remain a valuable model for some aspects of sPD, but their heterogeneity militates against a simple designation of sPD phenotype in this cell model.
Efficient decoding algorithms for generalized hidden Markov model gene finders
Directory of Open Access Journals (Sweden)
Delcher Arthur L
2005-01-01
Background: The Generalized Hidden Markov Model (GHMM) has proven a useful framework for the task of computational gene prediction in eukaryotic genomes, due to its flexibility and probabilistic underpinnings. As the focus of the gene-finding community shifts toward the use of homology information to improve prediction accuracy, extensions to the basic GHMM model are being explored as possible ways to integrate this homology information into the prediction process. Particularly prominent among these extensions are techniques which call for the simultaneous prediction of genes in two or more genomes at once, thereby increasing significantly the computational cost of prediction and highlighting the importance of speed and memory efficiency in the implementation of the underlying GHMM algorithms. Unfortunately, the task of implementing an efficient GHMM-based gene finder is already a nontrivial one, and it can be expected that this task will only grow more onerous as our models increase in complexity. Results: As a first step toward addressing the implementation challenges of these next-generation systems, we describe in detail two software architectures for GHMM-based gene finders, one comprising the common array-based approach, and the other a highly optimized algorithm which requires significantly less memory while achieving virtually identical speed. We then show how both of these architectures can be accelerated by a factor of two by optimizing their content sensors. We finish with a brief illustration of the impact these optimizations have had on the feasibility of our new homology-based gene finder, TWAIN. Conclusions: In describing a number of optimizations for GHMM-based gene finders and making available two complete open-source software systems embodying these methods, it is our hope that others will be more enabled to explore promising extensions to the GHMM framework, thereby improving the state-of-the-art in gene prediction.
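The recursion at the core of the array-based architecture is Viterbi decoding; for a plain HMM (which the GHMM generalizes to states emitting variable-length strings) it can be sketched as follows, with hypothetical two-state probabilities standing in for a trained gene model:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Array-based Viterbi decoding for a plain HMM: fill a table of
    (best probability, predecessor) pairs, then backtrack."""
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][ps][0] * trans_p[ps][s] * emit_p[s][obs[t]], ps)
                for ps in states)
            V[t][s] = (prob, prev)
    # Backtrack from the most probable final state.
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))

# Hypothetical two-state toy model, not trained on real genomic data.
states = ("exon", "intron")
start = {"exon": 0.5, "intron": 0.5}
trans = {"exon": {"exon": 0.9, "intron": 0.1},
         "intron": {"exon": 0.1, "intron": 0.9}}
emit = {"exon": {"A": 0.2, "C": 0.3, "G": 0.3, "T": 0.2},
        "intron": {"A": 0.3, "C": 0.2, "G": 0.2, "T": 0.3}}
labels = viterbi("GGCAAT", states, start, trans, emit)
```

The full table kept here is exactly the memory cost the paper's optimized architecture reduces; real gene finders also work in log space to avoid underflow on long sequences.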
Moarefian, Maryam; Pascal, Jennifer A
2016-02-01
Biobarriers imposed by the tumor microenvironment create a challenge to deliver chemotherapeutics effectively. Electric fields can be used to overcome these biobarriers in the form of electrochemotherapy, or by applying an electric field to tissue after chemotherapy has been delivered systemically. A fundamental understanding of the underlying physical phenomena governing tumor response to an applied electrical field is lacking. Building upon the work of Pascal et al. [1], a mathematical model that predicts the fraction of tumor killed due to a direct current (DC) applied electrical field and chemotherapy is developed here for tumor tissue surrounding a single, straight, cylindrical blood vessel. Results show the typical values of various parameters related to properties of the electrical field, tumor tissue and chemotherapy drug that have the most significant influence on the fraction of tumor killed. We show that the applied electrical field enhances tumor death due to chemotherapy and that the direction and magnitude of the applied electrical field have a significant impact on the fraction of tumor killed. Published by Elsevier Inc.
Model-based control of fuel cells (2): Optimal efficiency
Energy Technology Data Exchange (ETDEWEB)
Golbert, Joshua; Lewin, Daniel R. [PSE Research Group, Wolfson Department of Chemical Engineering, Technion IIT, Haifa 32000 (Israel)
2007-11-08
A dynamic PEM fuel cell model has been developed, taking into account spatial dependencies of voltage, current, material flows, and temperatures. The voltage, current, and therefore, the efficiency are dependent on the temperature and other variables, which can be optimized on the fly to achieve optimal efficiency. In this paper, we demonstrate that a model predictive controller, relying on a reduced-order approximation of the dynamic PEM fuel cell model can satisfy setpoint changes in the power demand, while at the same time, minimize fuel consumption to maximize the efficiency. The main conclusion of the paper is that by appropriate formulation of the objective function, reliable optimization of the performance of a PEM fuel cell can be performed in which the main tunable parameter is the prediction and control horizons, V and U, respectively. We have demonstrated that increased fuel efficiency can be obtained at the expense of slower responses, by increasing the values of these parameters. (author)
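The receding-horizon idea behind such a controller can be sketched on a toy system: minimize tracking error plus a fuel penalty over a prediction horizon, apply only the first input, then re-plan. The first-order plant, weights, and candidate grid below are assumptions for illustration, not the reduced-order fuel cell model of the paper.

```python
# Sketch: a toy model predictive controller that tracks a power setpoint
# while penalizing fuel use. Plant model and weights are assumptions.

def simulate(p, u, horizon):
    # Toy plant: delivered power p relaxes toward the fuel input u.
    traj = []
    for _ in range(horizon):
        p = 0.7 * p + 0.3 * u
        traj.append(p)
    return traj

def mpc_step(p, setpoint, horizon=5, fuel_weight=0.01):
    # Choose the input minimizing tracking error plus fuel cost over the
    # horizon (brute force over a candidate grid); only the first move
    # would be applied before re-planning -- the receding-horizon principle.
    candidates = [i * 0.1 for i in range(0, 21)]  # u in [0, 2]
    def cost(u):
        traj = simulate(p, u, horizon)
        return sum((x - setpoint) ** 2 for x in traj) + fuel_weight * u * horizon
    return min(candidates, key=cost)

# Starting from zero power, the controller overdrives above the setpoint
# to speed convergence, then re-plans at the next step.
u = mpc_step(p=0.0, setpoint=1.0)
print(round(u, 1))  # → 1.5
```

Increasing `horizon` (the paper's prediction/control horizons) or `fuel_weight` trades response speed for fuel efficiency, the same tradeoff the abstract reports.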
Modeling adaptation of carbon use efficiency in microbial communities
Directory of Open Access Journals (Sweden)
Steven D Allison
2014-10-01
In new microbial-biogeochemical models, microbial carbon use efficiency (CUE) is often assumed to decline with increasing temperature. Under this assumption, soil carbon losses under warming are small because microbial biomass declines. Yet there is also empirical evidence that CUE may adapt (i.e., become less sensitive) to warming, thereby mitigating negative effects on microbial biomass. To analyze potential mechanisms of CUE adaptation, I used two theoretical models to implement a tradeoff between microbial uptake rate and CUE. This rate-yield tradeoff is based on thermodynamic principles and suggests that microbes with greater investment in resource acquisition should have lower CUE. Microbial communities or individuals could adapt to warming by reducing investment in enzymes and uptake machinery. Consistent with this idea, a simple analytical model predicted that adaptation can offset 50% of the warming-induced decline in CUE. To assess the ecosystem implications of the rate-yield tradeoff, I quantified CUE adaptation in a spatially-structured simulation model with 100 microbial taxa and 12 soil carbon substrates. This model predicted much lower CUE adaptation, likely due to additional physiological and ecological constraints on microbes. In particular, specific resource acquisition traits are needed to maintain stoichiometric balance, and taxa with high CUE and low enzyme investment rely on low-yield, high-enzyme neighbors to catalyze substrate degradation. In contrast to published microbial models, simulations with greater CUE adaptation also showed greater carbon storage under warming. This pattern occurred because microbial communities with stronger CUE adaptation produced fewer degradative enzymes, despite increases in biomass. Thus the rate-yield tradeoff prevents CUE adaptation from driving ecosystem carbon loss under climate warming.
DEFF Research Database (Denmark)
Wu, Xiaocui; Ju, Weimin; Zhou, Yanlian;
2015-01-01
two-leaf model (TL-LUE), and a big-leaf light use efficiency model (MOD17) to simulate GPP at half-hourly, daily and 8-day scales using GPP derived from 58 eddy-covariance flux sites in Asia, Europe and North America as benchmarks. Model evaluation showed that the overall performance of TL...
Energy technologies and energy efficiency in economic modelling
DEFF Research Database (Denmark)
Klinge Jacobsen, Henrik
1998-01-01
This paper discusses different approaches to incorporating energy technologies and technological development in energy-economic models. Technological development is a very important issue in long-term energy demand projections and in environmental analyses. Different assumptions on technological development are one of the main causes of the very diverging results which have been obtained using bottom-up and top-down models for analysing the costs of greenhouse gas mitigation. One of the objectives for studies comparing model results has been to create comparable model assumptions regarding […] of renewable energy and especially wind power will increase the rate of efficiency improvement. A technologically based model in this case indirectly makes the energy efficiency endogenous in the aggregate energy-economy model.
Comparison of different efficiency criteria for hydrological model assessment
Directory of Open Access Journals (Sweden)
P. Krause
2005-01-01
The evaluation of hydrologic model behaviour and performance is commonly made and reported through comparisons of simulated and observed variables. Frequently, comparisons are made between simulated and measured streamflow at the catchment outlet. In distributed hydrological modelling approaches, additional comparisons of simulated and observed measurements for multi-response validation may be integrated into the evaluation procedure to assess overall modelling performance. In both approaches, single and multi-response, efficiency criteria are commonly used by hydrologists to provide an objective assessment of the "closeness" of the simulated behaviour to the observed measurements. While there are a few efficiency criteria such as the Nash-Sutcliffe efficiency, coefficient of determination, and index of agreement that are frequently used in hydrologic modeling studies and reported in the literature, there are a large number of other efficiency criteria to choose from. The selection and use of specific efficiency criteria and the interpretation of the results can be a challenge for even the most experienced hydrologist since each criterion may place different emphasis on different types of simulated and observed behaviours. In this paper, the utility of several efficiency criteria is investigated in three examples using a simple observed streamflow hydrograph.
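The three criteria named in the abstract can be computed directly from paired simulated and observed series. A minimal pure-Python sketch follows; the streamflow values are made-up illustrative data, not from the paper's examples.

```python
# Sketch: three common efficiency criteria for comparing simulated (sim)
# against observed (obs) streamflow. Data values are illustrative.

def nash_sutcliffe(obs, sim):
    """NSE = 1 - sum((obs-sim)^2) / sum((obs-mean(obs))^2)."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def coefficient_of_determination(obs, sim):
    """Squared Pearson correlation between obs and sim."""
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
    vo = sum((o - mo) ** 2 for o in obs)
    vs = sum((s - ms) ** 2 for s in sim)
    return cov * cov / (vo * vs)

def index_of_agreement(obs, sim):
    """Willmott's d = 1 - sum((obs-sim)^2) / sum((|sim-mo| + |obs-mo|)^2)."""
    mo = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((abs(s - mo) + abs(o - mo)) ** 2 for o, s in zip(obs, sim))
    return 1.0 - num / den

obs = [2.1, 3.4, 5.0, 4.2, 3.0, 2.5]  # hypothetical hydrograph
sim = [2.0, 3.6, 4.7, 4.5, 2.8, 2.6]
print(round(nash_sutcliffe(obs, sim), 3))  # → 0.952
```

Note the different emphasis each criterion places on behaviour: NSE weights deviations from the observed mean, so it rewards matching peaks more than matching low flows.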
Efficient Modelling and Generation of Markov Automata (extended version)
Timmer, Mark; Katoen, Joost-Pieter; Pol, van de Jaco; Stoelinga, Mariëlle
2012-01-01
This paper introduces a framework for the efficient modelling and generation of Markov automata. It consists of (1) the data-rich process-algebraic language MAPA, allowing concise modelling of systems with nondeterminism, probability and Markovian timing; (2) a restricted form of the language, the M
Evaluating energy efficiency policies with energy-economy models
Mundaca, L.; Neij, L.; Worrell, E.; McNeil, M.
2010-01-01
The growing complexities of energy systems, environmental problems, and technology markets are driving and testing most energy-economy models to their limits. To further advance bottom-up models from a multidisciplinary energy efficiency policy evaluation perspective, we review and critically analyze …
Vortexlet models of flapping flexible wings show tuning for force production and control
Energy Technology Data Exchange (ETDEWEB)
Mountcastle, A M [Department of Organismic and Evolutionary Biology, Harvard University, Concord Field Station, Bedford, MA 01730 (United States); Daniel, T L, E-mail: mtcastle@u.washington.ed [Department of Biology, University of Washington, Seattle, WA 98195 (United States)
2010-12-15
Insect wings are compliant structures that experience deformations during flight. Such deformations have recently been shown to substantially affect induced flows, with appreciable consequences to flight forces. However, there are open questions related to the aerodynamic mechanisms underlying the performance benefits of wing deformation, as well as the extent to which such deformations are determined by the boundary conditions governing wing actuation together with mechanical properties of the wing itself. Here we explore aerodynamic performance parameters of compliant wings under periodic oscillations, subject to changes in phase between wing elevation and pitch, and magnitude and spatial pattern of wing flexural stiffness. We use a combination of computational structural mechanics models and a 2D computational fluid dynamics approach to ask how aerodynamic force production and control potential are affected by pitch/elevation phase and variations in wing flexural stiffness. Our results show that lift and thrust forces are highly sensitive to flexural stiffness distributions, with performance optima that lie in different phase regions. These results suggest a control strategy for both flying animals and engineering applications of micro-air vehicles.
Directory of Open Access Journals (Sweden)
Rastafa I Geddes
We recently showed that progesterone treatment can reduce lesion size and behavioral deficits after moderate-to-severe bilateral injury to the medial prefrontal cortex in immature male rats. Whether there are important sex differences in response to injury and progesterone treatment in very young subjects has not been given sufficient attention. Here we investigated progesterone's effects in the same model of brain injury but with pre-pubescent females. Twenty-eight-day-old female Sprague-Dawley rats received sham (n = 14) or controlled cortical impact (CCI; n = 21) injury, were given progesterone (8 mg/kg body weight) or vehicle injections on post-injury days (PID) 1-7, and underwent behavioral testing from PID 9-27. Brains were evaluated for lesion size at PID 28. Lesion size in vehicle-treated female rats with CCI injury was smaller than that previously reported for similarly treated age-matched male rats. Treatment with progesterone reduced the effect of CCI on extent of damage and behavioral deficits. Pre-pubescent female rats with midline CCI injury to the frontal cortex have reduced morphological and functional deficits following progesterone treatment. While gender differences in susceptibility to this injury were observed, progesterone treatment produced beneficial effects in young rats of both sexes following CCI.
Institute of Scientific and Technical Information of China (English)
2016-01-01
Visitors look at plane models of the Commercial Aircraft Corp. of China, developer of the country's first homegrown large passenger jet C919, during the Singapore Airshow on February 16. The biennial event is the largest airshow in Asia and one of the most important aviation and defense shows worldwide. A number of Chinese companies took part in the event, during which Okay Airways, the first privately owned airline in China, signed a deal to acquire 12 Boeing 737 jets.
Kawaguchi, Hiroyuki; Tone, Kaoru; Tsutsui, Miki
2014-06-01
The purpose of this study was to perform an interim evaluation of the policy effect of the current reform of Japan's municipal hospitals. We focused on efficiency improvements both within hospitals and within two separate internal hospital organizations. Hospitals have two heterogeneous internal organizations: the medical examination division and administration division. The administration division carries out business management and the medical-examination division provides medical care services. We employed a dynamic-network data envelopment analysis model (DN model) to perform the evaluation. The model makes it possible to simultaneously estimate both the efficiencies of separate organizations and the dynamic changes of the efficiencies. This study is the first empirical application of the DN model in the healthcare field. Results showed that the average overall efficiency obtained with the DN model was 0.854 for 2007. The dynamic change in efficiency scores from 2007 to 2009 was slightly lower. The average efficiency score was 0.862 for 2007 and 0.860 for 2009. The average estimated efficiency of the administration division decreased from 0.867 for 2007 to 0.8508 for 2009. In contrast, the average efficiency of the medical-examination division increased from 0.858 for 2007 to 0.870 for 2009. We were unable to find any significant improvement in efficiency despite the reform policy. Thus, there are no positive policy effects despite the increased financial support from the central government.
PEB: thermal oriented architectural modeling for building energy efficiency regulations
Leclercq, Pierre; Juchmes, Roland; Delfosse, Vincent; Safin, Stéphane; Dawans, Arnaud; Dawans, Adrien
2011-01-01
As part of the overhauling of the building energy efficiency regulations (following European directive 2002/91/CE), the Wallonia and Brussels-Capital Region commissioned the LUCID to develop an optional 3D graphic encoding module to be integrated with the core energy efficiency computation engine developed by Altran Europe. Our contribution consisted mostly in analyzing the target users’ needs and representations (ergonomics, UI, interactions) and implementing a bespoke 3D CAD modeler dedicat...
Robust and efficient designs for the Michaelis-Menten model
Dette, Holger; Biedermann, Stefanie
2002-01-01
For the Michaelis-Menten model, we determine designs that maximize the minimum of the D-efficiencies over a certain interval for the nonlinear parameter. The best two point designs can be found explicitly, and a characterization is given when these designs are optimal within the class of all designs. In most cases of practical interest, the determined designs are highly efficient and robust with respect to misspecification of the nonlinear parameter. The results are illustrated and applied in...
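The D-criterion underlying such designs can be evaluated numerically: for the Michaelis-Menten model v(x) = Vx/(K + x), the information matrix of a design is the weighted sum of outer products of the gradient of v with respect to (V, K). The parameter values and candidate designs below are illustrative assumptions, not the paper's optimal designs.

```python
# Sketch: evaluating the D-criterion of two-point designs for the
# Michaelis-Menten model v(x) = V*x/(K + x). Parameters and design
# points are illustrative assumptions.

def gradient(x, V, K):
    # Partial derivatives of v(x) with respect to V and K.
    return (x / (K + x), -V * x / (K + x) ** 2)

def d_criterion(points, V, K):
    # Information matrix M = sum_i w_i * g_i g_i^T with equal weights;
    # the D-criterion is det(M) for this 2-parameter model.
    w = 1.0 / len(points)
    m11 = m12 = m22 = 0.0
    for x in points:
        g1, g2 = gradient(x, V, K)
        m11 += w * g1 * g1
        m12 += w * g1 * g2
        m22 += w * g2 * g2
    return m11 * m22 - m12 * m12  # determinant of the 2x2 matrix

V, K = 1.0, 2.0          # assumed nominal parameter values
design_a = (1.0, 10.0)   # well-spread two-point design
design_b = (9.0, 10.0)   # clustered design: nearly collinear gradients
print(d_criterion(design_a, V, K) > d_criterion(design_b, V, K))  # → True
```

Maximizing the minimum of D-efficiencies over an interval of K values, as in the paper, amounts to repeating this evaluation across the interval and optimizing the worst case.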
MULTIFACTOR ECONOMETRIC MODELS FOR ENERGY EFFICIENCY IN THE EU
Directory of Open Access Journals (Sweden)
Gheorghe ZAMAN
2007-06-01
The present paper approaches the topic of energy efficiency from the viewpoint of its trends and influence factors, in the context of the requirements, criteria and principles of sustainable development. Energy efficiency is measured as the ratio of GDP to energy use, and its multiple factors of influence are considered. With a view to drawing conclusions of both a theoretical-methodological and a practical-applicative character, we investigate the variation in energy efficiency in the European Union, but also in the case of the new candidate countries and other countries, by means of multifactor econometric modeling.
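The headline indicator defined above is simple to compute and compare across countries. The sketch below uses made-up country figures purely for illustration; the units and values are assumptions, not EU data from the paper.

```python
# Sketch: energy efficiency as the GDP-to-energy-use ratio, computed and
# ranked for hypothetical country data. All figures are assumptions.

def energy_efficiency(gdp, energy_use):
    # GDP produced per unit of energy consumed (e.g., USD per kgoe).
    return gdp / energy_use

countries = {          # name: (GDP, energy use), hypothetical units
    "A": (1000.0, 250.0),
    "B": (800.0, 160.0),
    "C": (1200.0, 400.0),
}
ranked = sorted(countries,
                key=lambda c: energy_efficiency(*countries[c]),
                reverse=True)
print(ranked)  # → ['B', 'A', 'C']
```

A multifactor econometric model would then regress this ratio (or its logarithm) on the influence factors; the ratio itself is the dependent variable the paper studies.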
Showing a model's eye movements in examples does not improve learning of problem-solving tasks
van Marlen, Tim; van Wermeskerken, Margot; Jarodzka, Halszka; van Gog, Tamara
2016-01-01
Eye movement modeling examples (EMME) are demonstrations of a computer-based task by a human model (e.g., a teacher), with the model's eye movements superimposed on the task to guide learners' attention. EMME have been shown to enhance learning of perceptual classification tasks; however, it is an
What Can the Bohr-Sommerfeld Model Show Students of Chemistry in the 21st Century?
Niaz, Mansoor; Cardellini, Liberato
2011-01-01
Bohr's model of the atom is considered to be important by general chemistry textbooks. A shortcoming of this model was that it could not explain the spectra of atoms containing more than one electron. To increase the explanatory power of the model, Sommerfeld hypothesized the existence of elliptical orbits. This study aims to elaborate a framework…
Efficient modeling of vector hysteresis using fuzzy inference systems
Energy Technology Data Exchange (ETDEWEB)
Adly, A.A. [Electrical Power and Machines Department, Faculty of Engineering, Cairo University, Giza 12211 (Egypt)], E-mail: adlyamr@gmail.com; Abd-El-Hafiz, S.K. [Engineering Mathematics Department, Faculty of Engineering, Cairo University, Giza 12211 (Egypt)], E-mail: sabdelhafiz@gmail.com
2008-10-01
Vector hysteresis models have always been regarded as important tools by which multi-dimensional magnetic field-media interactions may be predicted. In the past, considerable efforts have been focused on mathematical modeling methodologies of vector hysteresis. This paper presents an efficient approach based upon fuzzy inference systems for modeling vector hysteresis. Computational efficiency of the proposed approach stems from the fact that the basic non-local memory Preisach-type hysteresis model is approximated by a local memory model. The proposed low-cost computational methodology can be easily integrated in field calculation packages involving massive multi-dimensional discretizations. Details of the modeling methodology and its experimental testing are presented.
Modeling Dynamic Systems with Efficient Ensembles of Process-Based Models.
Simidjievski, Nikola; Todorovski, Ljupčo; Džeroski, Sašo
2016-01-01
Ensembles are a well established machine learning paradigm, leading to accurate and robust models, predominantly applied to predictive modeling tasks. Ensemble models comprise a finite set of diverse predictive models whose combined output is expected to yield an improved predictive performance as compared to an individual model. In this paper, we propose a new method for learning ensembles of process-based models of dynamic systems. The process-based modeling paradigm employs domain-specific knowledge to automatically learn models of dynamic systems from time-series observational data. Previous work has shown that ensembles based on sampling observational data (i.e., bagging and boosting), significantly improve predictive performance of process-based models. However, this improvement comes at the cost of a substantial increase of the computational time needed for learning. To address this problem, the paper proposes a method that aims at efficiently learning ensembles of process-based models, while maintaining their accurate long-term predictive performance. This is achieved by constructing ensembles with sampling domain-specific knowledge instead of sampling data. We apply the proposed method to and evaluate its performance on a set of problems of automated predictive modeling in three lake ecosystems using a library of process-based knowledge for modeling population dynamics. The experimental results identify the optimal design decisions regarding the learning algorithm. The results also show that the proposed ensembles yield significantly more accurate predictions of population dynamics as compared to individual process-based models. Finally, while their predictive performance is comparable to the one of ensembles obtained with the state-of-the-art methods of bagging and boosting, they are substantially more efficient.
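The core ensemble idea, combining the outputs of a finite set of diverse models, can be sketched very compactly. The "base models" below are hypothetical one-parameter growth curves, not the process-based lake-ecosystem models of the paper; sampling the rate parameter stands in for the paper's sampling of domain-specific knowledge.

```python
# Sketch: an ensemble as the average of diverse base-model outputs.
# The base models and their parameters are illustrative assumptions.
import math

def make_model(rate):
    # Each base model predicts saturating growth with its own rate.
    return lambda t: 100.0 * (1.0 - math.exp(-rate * t))

# Three diverse variants, e.g. obtained by sampling model knowledge.
ensemble = [make_model(r) for r in (0.08, 0.10, 0.12)]

def ensemble_predict(t):
    # Combined output: simple average of the base-model predictions.
    return sum(m(t) for m in ensemble) / len(ensemble)

print(round(ensemble_predict(10.0), 1))  # → 62.7
```

The averaged prediction always lies within the spread of the base models, which is why diverse but individually reasonable models tend to yield a more robust combined output.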
Toward efficient riparian restoration: integrating economic, physical, and biological models.
Watanabe, Michio; Adams, Richard M; Wu, Junjie; Bolte, John P; Cox, Matt M; Johnson, Sherri L; Liss, William J; Boggess, William G; Ebersole, Joseph L
2005-04-01
This paper integrates economic, biological, and physical models to explore the efficient combination and spatial allocation of conservation efforts to protect water quality and increase salmonid populations in the Grande Ronde basin, Oregon. We focus on the effects of shade on water temperatures and the subsequent impacts on endangered juvenile salmonid populations. The integrated modeling system consists of a physical model that links riparian conditions and hydrological characteristics to water temperature; a biological model that links water temperature and riparian conditions to salmonid abundance, and an economic model that incorporates both physical and biological models to estimate minimum cost allocations of conservation efforts. Our findings indicate that conservation alternatives such as passive and active riparian restoration, the width of riparian restoration zones, and the types of vegetation used in restoration activities should be selected based on the spatial distribution of riparian characteristics in the basin. The relative effectiveness of passive and active restoration plays an important role in determining the efficient allocations of conservation efforts. The time frame considered in the restoration efforts and the magnitude of desired temperature reductions also affect the efficient combinations of restoration activities. If the objective of conservation efforts is to maximize fish populations, then fishery benefits should be directly targeted. Targeting other criterion such as water temperatures would result in different allocations of conservation efforts, and therefore are not generally efficient.
ASYMPTOTIC EFFICIENT ESTIMATION IN SEMIPARAMETRIC NONLINEAR REGRESSION MODELS
Institute of Scientific and Technical Information of China (English)
ZhuZhongyi; WeiBocheng
1999-01-01
In this paper, the estimation method based on the "generalized profile likelihood" for the conditionally parametric models in the paper given by Severini and Wong (1992) is extended to fixed design semiparametric nonlinear regression models. For these semiparametric nonlinear regression models, the resulting estimator of the parametric component of the model is shown to be asymptotically efficient, and the strong convergence rate of the nonparametric component is investigated. Many results (for example, Chen (1988), Gao & Zhao (1993), Rice (1986), etc.) are extended to fixed design semiparametric nonlinear regression models.
Development of a computationally efficient urban modeling approach
DEFF Research Database (Denmark)
Wolfs, Vincent; Murla, Damian; Ntegeka, Victor;
2016-01-01
This paper presents a parsimonious and data-driven modelling approach to simulate urban floods. Flood levels simulated by detailed 1D-2D hydrodynamic models can be emulated using the presented conceptual modelling approach with a very short calculation time. In addition, the model detail can be adjusted, allowing the modeller to focus on flood-prone locations. This results in efficiently parameterized models that can be tailored to applications. The simulated flood levels are transformed into flood extent maps using a high resolution (0.5-meter) digital terrain model in GIS. To illustrate the developed methodology, a case study for the city of Ghent in Belgium is elaborated. The configured conceptual model mimics the flood levels of a detailed 1D-2D hydrodynamic InfoWorks ICM model accurately, while the calculation time is about 10^6 times shorter than that of the original highly detailed model.
A ranking efficiency unit by restrictions using DEA models
Arsad, Roslah; Abdullah, Mohammad Nasir; Alias, Suriana
2014-12-01
In this paper, a comparison of the efficiency of shares of companies listed on Bursa Malaysia was made through the application of Data Envelopment Analysis (DEA). In this study, DEA is used to measure the efficiency of shares listed on Bursa Malaysia in terms of financial performance. It is believed that only good financial performers will give a good return to investors in the long run. The main objectives were to compute the relative efficiency scores of the shares on Bursa Malaysia and to rank the shares based on the Balance Index with regard to relative efficiency. Alirezaee and Afsharian's model was employed in this study, in which the original Charnes, Cooper and Rhodes (CCR) model with the assumption of constant returns to scale (CRS) still holds. This method of ranking the relative efficiency of decision making units (DMUs) was augmented by the Balance Index. From the results, the companies recommended to investors based on ranking were NATWIDE, YTL and MUDA. These were the top three efficient companies with good performance in 2011, whereas in 2012 the top three were NATWIDE, MUDA and BERNAS.
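In general, CCR efficiency scores are obtained by solving a linear program per decision making unit. In the special single-input/single-output case, however, the constant-returns-to-scale score reduces to each unit's output/input ratio divided by the best observed ratio, which is enough to sketch the idea. The company data below are hypothetical, not the Bursa Malaysia figures.

```python
# Sketch: CCR (constant returns to scale) DEA efficiency in the special
# single-input/single-output case, where the score is each DMU's
# output/input ratio normalized by the best ratio. Data are hypothetical.

def ccr_efficiency_1in_1out(units):
    ratios = {name: out / inp for name, (inp, out) in units.items()}
    best = max(ratios.values())
    return {name: r / best for name, r in ratios.items()}

units = {  # name: (input, e.g. capital employed; output, e.g. profit)
    "DMU1": (100.0, 50.0),
    "DMU2": (80.0, 48.0),
    "DMU3": (120.0, 48.0),
}
scores = ccr_efficiency_1in_1out(units)
print(max(scores, key=scores.get))  # → DMU2
```

With multiple inputs and outputs the normalization is no longer a single ratio, and each DMU's optimal input/output weights come from the CCR linear program instead.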
EFFICIENCY AND COST MODELLING OF THERMAL POWER PLANTS
Directory of Open Access Journals (Sweden)
Péter Bihari
2010-01-01
The proper characterization of energy suppliers is one of the most important components in the modelling of the supply/demand relations of the electricity market. Power generation capacity, i.e. power plants, constitutes the supply side of the relation in the electricity market. The supply of power stations develops as the power stations attempt to achieve the greatest profit possible with the given prices and other limitations. The cost of operation and the cost of load increment are thus the most important characteristics of their behaviour on the market. In most electricity market models, however, it is not taken into account that the efficiency of a power station also depends on the level of the load, on the type and age of the power plant, and on environmental considerations. The trade in electricity on the free market cannot rely on models where these essential parameters are omitted. Such an incomplete model could lead to a situation where a particular power station would be run either only at its full capacity or else be entirely deactivated depending on the prices prevailing on the free market. The reality is rather that the marginal cost of power generation might also be described by a function using the efficiency function. The derived marginal cost function gives the supply curve of the power station. The load level dependent efficiency function can be used not only for market modelling, but also for determining the pollutant and CO2 emissions of the power station, as well as shedding light on the conditions for successfully entering the market. Based on the measurement data, our paper presents mathematical models that might be used for the determination of the load dependent efficiency functions of coal, oil, or gas fuelled power stations (steam turbine, gas turbine, combined cycle) and IC engine based combined heat and power stations. These efficiency functions could also contribute to modelling market conditions and determining the …
Energetics and efficiency of a molecular motor model
DEFF Research Database (Denmark)
C. Fogedby, Hans; Svane, Axel
2013-01-01
The energetics and efficiency of a linear molecular motor model proposed by Mogilner et al. (Phys. Lett. 237, 297 (1998)) are analyzed from an analytical point of view. The model, which is based on protein friction with a track, is described by coupled Langevin equations for the motion in combination with coupled master equations for the ATP hydrolysis. Here the energetics and efficiency of the motor are addressed using a many-body scheme with focus on the efficiency at maximum power (EMP). It is found that the EMP is reduced from about 10 per cent in a heuristic description of the motor to about 1 per mille when incorporating the full motor dynamics, owing to the strong dissipation associated with the motor action.
Efficient topological compilation for a weakly integral anyonic model
Bocharov, Alex; Cui, Xingshan; Kliuchnikov, Vadym; Wang, Zhenghan
2016-01-01
A class of anyonic models for universal quantum computation based on weakly-integral anyons has been recently proposed. While a universal set of gates cannot be obtained in this context by anyon braiding alone, designing a certain type of sector charge measurement provides universality. In this paper we develop a compilation algorithm to approximate arbitrary n-qutrit unitaries with asymptotically efficient circuits over the metaplectic anyon model. One flavor of our algorithm produces efficient circuits with upper complexity bound asymptotically in O(3^{2n} log(1/ε)) and entanglement cost that is exponential in n. Another flavor of the algorithm produces efficient circuits with upper complexity bound in O(n 3^{2n} log(1/ε)) and no additional entanglement cost.
An Efficient Cluster Algorithm for CP(N-1) Models
Beard, B B; Riederer, S; Wiese, U J
2005-01-01
We construct an efficient cluster algorithm for ferromagnetic SU(N)-symmetric quantum spin systems. Such systems provide a new regularization for CP(N-1) models in the framework of D-theory, which is an alternative non-perturbative approach to quantum field theory formulated in terms of discrete quantum variables instead of classical fields. Despite several attempts, no efficient cluster algorithm has been constructed for CP(N-1) models in the standard formulation of lattice field theory. In fact, there is even a no-go theorem that prevents the construction of an efficient Wolff-type embedding algorithm. We present various simulations for different correlation lengths, couplings and lattice sizes. We have simulated correlation lengths up to 250 lattice spacings on lattices as large as 640x640 and we detect no evidence for critical slowing down.
Efficient Adoption and Assessment of Multiple Process Improvement Reference Models
Directory of Open Access Journals (Sweden)
Simona Jeners
2013-06-01
A variety of reference models such as CMMI, COBIT or ITIL support IT organizations in improving their processes. These process improvement reference models (IRMs) cover different domains such as IT development, IT services or IT governance, but also share some similarities. As there are organizations that address multiple domains and need to coordinate their processes in their improvement, we present MoSaIC, an approach to support organizations in efficiently adopting and conforming to multiple IRMs. Our solution realizes a semantic integration of IRMs based on common meta-models. The resulting IRM integration model enables organizations to efficiently implement and assess multiple IRMs and to benefit from synergy effects.
Efficient Use of Preisach Hysteresis Model in Computer Aided Design
Directory of Open Access Journals (Sweden)
IONITA, V.
2013-05-01
The paper presents a practical detailed analysis regarding the use of the classical Preisach hysteresis model, covering all the steps, from measuring the necessary data for the model identification to the implementation in a software code for Computer Aided Design (CAD) in Electrical Engineering. An efficient numerical method is proposed and the hysteresis modeling accuracy is tested on magnetic recording materials. The procedure includes the correction of the experimental data, which are used for the hysteresis model identification, taking into account the demagnetizing effect for the sample that is measured in an open-circuit device (a vibrating sample magnetometer).
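The classical Preisach model represents hysteresis as a weighted superposition of elementary two-state switches (hysterons), each with an "up" threshold alpha and a "down" threshold beta. A minimal scalar sketch follows; the thresholds and weights are illustrative assumptions, not identified from measurement data as in the paper.

```python
# Sketch: a minimal scalar Preisach model. Each hysteron switches up at
# alpha and down at beta (alpha >= beta); the output is the weighted sum
# of hysteron states. Thresholds and weights are illustrative.

class Preisach:
    def __init__(self, hysterons):
        # hysterons: list of (alpha, beta, weight); initial state -1 ("down").
        self.hysterons = hysterons
        self.states = [-1.0] * len(hysterons)

    def apply(self, h):
        # Update each hysteron for applied field h. States persist
        # between calls -- this is the model's non-local memory.
        for i, (alpha, beta, _) in enumerate(self.hysterons):
            if h >= alpha:
                self.states[i] = 1.0
            elif h <= beta:
                self.states[i] = -1.0
        return sum(w * s
                   for (_, _, w), s in zip(self.hysterons, self.states))

model = Preisach([(0.5, -0.5, 1.0), (1.0, -1.0, 1.0)])
up = model.apply(0.7)    # switches only the first hysteron
back = model.apply(0.0)  # field removed: the state is retained (remanence)
print(up == back)  # → True
```

Identification, the step the paper details, amounts to choosing the hysteron weights so that simulated loops match measured (and demagnetization-corrected) data.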
Efficient robust nonparametric estimation in a semimartingale regression model
Konev, Victor
2010-01-01
The paper considers the problem of robustly estimating a periodic function in a continuous time regression model with dependent disturbances given by a general square integrable semimartingale with unknown distribution. An example of such noise is a non-Gaussian Ornstein-Uhlenbeck process with a Lévy process subordinator, which is used to model financial Black-Scholes type markets with jumps. An adaptive model selection procedure, based on the weighted least squares estimates, is proposed. Under general moment conditions on the noise distribution, sharp non-asymptotic oracle inequalities for the robust risks are derived, and the robust efficiency of the model selection procedure is shown.
EFFICIENT ESTIMATION OF FUNCTIONAL-COEFFICIENT REGRESSION MODELS WITH DIFFERENT SMOOTHING VARIABLES
Institute of Scientific and Technical Information of China (English)
Zhang Riquan; Li Guoying
2008-01-01
In this article, a procedure is defined for estimating the coefficient functions of functional-coefficient regression models with different smoothing variables in different coefficient functions. In the first step, initial estimates of the coefficient functions are obtained by the local linear technique and the averaging method. In the second step, efficient estimates of the coefficient functions are obtained from the initial estimates by a one-step backfitting procedure. The efficient estimators share the same asymptotic normality as the local linear estimators for functional-coefficient models with a single smoothing variable in different functions. Two simulated examples show that the procedure is effective.
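The local linear technique behind the first-step estimates can be sketched as a kernel-weighted least-squares fit. This is a minimal illustration only: the Gaussian kernel, bandwidth, and test function are assumptions, and the paper's averaging and one-step backfitting refinements are not reproduced.

```python
# Minimal local linear smoother: fit y on (1, x - x0) with kernel weights;
# the intercept is the fitted value of the regression function at x0.

import numpy as np

def local_linear(x, y, x0, h):
    # Gaussian kernel weights centred at x0 with bandwidth h
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    # Weighted least squares via sqrt-weight rescaling
    beta, *_ = np.linalg.lstsq(X * w[:, None] ** 0.5, y * w ** 0.5, rcond=None)
    return float(beta[0])

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=200)
fit = local_linear(x, y, x0=0.25, h=0.05)  # true value sin(pi/2) = 1
```

A useful property visible in the sketch: because the local model is linear, the estimator reproduces an exactly linear regression function without bias, whatever the kernel weights.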
Efficient Algorithms for Parsing the DOP Model? A Reply to Joshua Goodman
Bod, R
1996-01-01
This note is a reply to Joshua Goodman's paper "Efficient Algorithms for Parsing the DOP Model" (Goodman, 1996; cmp-lg/9604008). In his paper, Goodman makes a number of claims about (my work on) the Data-Oriented Parsing model (Bod, 1992-1996). This note shows that some of these claims must be mistaken.
Model based design of efficient power take-off systems for wave energy converters
DEFF Research Database (Denmark)
Hansen, Rico Hjerm; Andersen, Torben Ole; Pedersen, Henrik C.
2011-01-01
an essential part of the PTO, being the only technology having the required force densities. The focus of this paper is to show the achievable efficiency of a PTO system based on a conventional hydro-static transmission topology. The design is performed using a model based approach. Generic component models...
Modeling and design of energy efficient variable stiffness actuators
Visser, L.C.; Carloni, Raffaella; Ünal, Ramazan; Stramigioli, Stefano
In this paper, we provide a port-based mathematical framework for analyzing and modeling variable stiffness actuators. The framework provides important insights in the energy requirements and, therefore, it is an important tool for the design of energy efficient variable stiffness actuators. Based
Energy efficiency in nonprofit agencies: Creating effective program models
Energy Technology Data Exchange (ETDEWEB)
Brown, M.A.; Prindle, B.; Scherr, M.I.; White, D.L.
1990-08-01
Nonprofit agencies are a critical component of the health and human services system in the US. Programs that offer energy efficiency services to nonprofits have clearly demonstrated that, with minimal investment, nonprofits can reduce their energy consumption by ten to thirty percent. This energy conservation potential motivated the Department of Energy and Oak Ridge National Laboratory to conceive a project to help states develop energy efficiency programs for nonprofits. The purpose of the project was two-fold: (1) to analyze existing programs to determine which design and delivery mechanisms are particularly effective, and (2) to create model programs for states to follow in tailoring their own plans for helping nonprofits with energy efficiency programs. Twelve existing programs were reviewed, and three model programs were devised and put into operation. The model programs provide various forms of financial assistance to nonprofits and serve as a source of information on energy efficiency as well. After examining the results from the model programs (which are still ongoing) and from the existing programs, several "replicability factors" were developed for use in the implementation of programs by other states. These factors -- some concrete and practical, others more generalized -- serve as guidelines for states devising programs based on their own particular needs and resources.
Business Models, transparency and efficient stock price formation
DEFF Research Database (Denmark)
Nielsen, Christian; Vali, Edward; Hvidberg, Rene
of this, our hypothesis is that if it is possible to improve, simplify and define the way a company communicates its business model to the market, then it must be possible for the company to create a more efficient price formation of its share. To begin with, we decided to investigate whether transparency...
A new efficient Cluster Algorithm for the Ising Model
Nyfeler, Matthias; Pepe, Michele; Wiese, Uwe-Jens
2005-01-01
Using D-theory we construct a new efficient cluster algorithm for the Ising model. The construction is very different from the standard Swendsen-Wang algorithm and related to worm algorithms. With the new algorithm we have measured the correlation function with high precision over a surprisingly large number of orders of magnitude.
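For context, the standard Swendsen-Wang cluster update that the new D-theory construction departs from can be sketched as follows. This is an illustrative baseline only (arbitrary lattice size, coupling J = 1, and temperature chosen deep in the ordered phase); it is not the algorithm proposed in the paper.

```python
# Standard Swendsen-Wang update for the 2D Ising model: activate bonds
# between aligned neighbours with probability 1 - exp(-2*beta), find
# clusters with union-find, and flip each cluster with probability 1/2.

import numpy as np

def swendsen_wang_sweep(spins, beta, rng):
    L = spins.shape[0]
    parent = np.arange(L * L)

    def find(i):  # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    p_bond = 1.0 - np.exp(-2.0 * beta)  # bond probability for J = 1
    for x in range(L):
        for y in range(L):
            i = x * L + y
            for j in (((x + 1) % L) * L + y, x * L + (y + 1) % L):
                # Bonds are only placed between aligned neighbours
                if spins.flat[i] == spins.flat[j] and rng.random() < p_bond:
                    parent[find(i)] = find(j)

    flip = rng.random(L * L) < 0.5  # flip each cluster with probability 1/2
    for i in range(L * L):
        if flip[find(i)]:
            spins.flat[i] *= -1
    return spins

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(16, 16))
for _ in range(30):
    spins = swendsen_wang_sweep(spins, beta=1.0, rng=rng)  # ordered phase
```

Because whole clusters flip at once, the update decorrelates configurations far faster than single-spin dynamics; measuring correlation functions over many orders of magnitude, as the abstract describes, relies on exactly this property.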
Naumis, Gerardo G
2012-06-01
When a liquid melt is cooled, a glass or phase transition can be obtained depending on the cooling rate. Yet, this behavior has not been clearly captured in energy-landscape models. Here, a model is provided in which two key ingredients are considered in the landscape, metastable states and their multiplicity. Metastable states are considered as in two level system models. However, their multiplicity and topology allows a phase transition in the thermodynamic limit for slow cooling, while a transition to the glass is obtained for fast cooling. By solving the corresponding master equation, the minimal speed of cooling required to produce the glass is obtained as a function of the distribution of metastable states.
Patrick, Christopher J; Yuan, Lester L
2017-07-01
Flow alteration is widespread in streams, but current understanding of the effects of differences in flow characteristics on stream biological communities is incomplete. We tested hypotheses about the effect of variation in hydrology on stream communities by using generalized additive models to relate watershed information to the values of different flow metrics at gauged sites. Flow models accounted for 54-80% of the spatial variation in flow metric values among gauged sites. We then used these models to predict flow metrics in 842 ungauged stream sites in the mid-Atlantic United States that were sampled for fish, macroinvertebrates, and environmental covariates. Fish and macroinvertebrate assemblages were characterized in terms of a suite of metrics that quantified aspects of community composition, diversity, and functional traits that were expected to be associated with differences in flow characteristics. We related modeled flow metrics to biological metrics in a series of stressor-response models. Our analyses identified both drying and base flow instability as explaining 30-50% of the observed variability in fish and invertebrate community composition. Variations in community composition were related to variations in the prevalence of dispersal traits in invertebrates and trophic guilds in fish. The results demonstrate that we can use statistical models to predict hydrologic conditions at bioassessment sites, which, in turn, we can use to estimate relationships between flow conditions and biological characteristics. This analysis provides an approach to quantify the effects of spatial variation in flow metrics using readily available biomonitoring data. © 2017 by the Ecological Society of America.
Reexamination of the State of the Art Cloud Modeling Shows Real Improvements
Energy Technology Data Exchange (ETDEWEB)
Muehlbauer, Andreas D.; Grabowski, Wojciech W.; Malinowski, S. P.; Ackerman, Thomas P.; Bryan, George; Lebo, Zachary; Milbrandt, Jason; Morrison, H.; Ovchinnikov, Mikhail; Tessendorf, Sarah; Theriault, Julie M.; Thompson, Gregory
2013-05-25
Following up on an almost thirty-year history of International Cloud Modeling Workshops that began with a meeting in Irsee, Germany in 1985, the 8th International Cloud Modeling Workshop was held in July 2012 in Warsaw, Poland. The workshop, hosted by the Institute of Geophysics at the University of Warsaw, was organized by Szymon Malinowski and his local team of students and co-chaired by Wojciech Grabowski (NCAR/MMM) and Andreas Muhlbauer (University of Washington). International Cloud Modeling Workshops have traditionally been held every four years, typically during the week before the International Conference on Clouds and Precipitation (ICCP). Rooted in the World Meteorological Organization’s (WMO) weather modification program, the core objectives of the Cloud Modeling Workshop have centered on the numerical modeling of clouds, cloud microphysics, and the interactions between cloud microphysics and cloud dynamics. In particular, the goal of the workshop is to provide insight into the pertinent problems of today’s state-of-the-art cloud modeling and to identify key deficiencies in the microphysical representation of clouds in numerical models and cloud parameterizations. In recent years, the workshop has increasingly shifted its focus toward modeling the interactions between aerosols and clouds, and has provided case studies to investigate both the effects of aerosols on clouds and precipitation and the impact of cloud and precipitation processes on aerosols. This time, about 60 (?) scientists from about 10 (?) different countries participated in the workshop and contributed discussions, oral and poster presentations to the workshop’s plenary and breakout sessions. Several case leaders contributed to the workshop by setting up five observationally-based case studies covering a wide range of cloud types, namely, marine stratocumulus, mid-latitude squall lines, mid-latitude cirrus clouds, Arctic stratus and winter-time orographic
DEFF Research Database (Denmark)
Nørskov, Natalja; Hedemann, Mette Skou; Theil, Peter Kappel
2013-01-01
The concentration and absorption of the nine phenolic acids of wheat were measured in a model experiment with catheterized pigs fed whole grain wheat and wheat aleurone diets. Six pigs in a repeated crossover design were fitted with catheters in the portal vein and mesenteric artery to study the ...
Energy efficient engine: Turbine transition duct model technology report
Leach, K.; Thurlin, R.
1982-01-01
The Low-Pressure Turbine Transition Duct Model Technology Program was directed toward substantiating the aerodynamic definition of a turbine transition duct for the Energy Efficient Engine. This effort was successful in demonstrating an aerodynamically viable compact duct geometry and the performance benefits associated with a low camber low-pressure turbine inlet guide vane. The transition duct design for the flight propulsion system was tested and the pressure loss goal of 0.7 percent was verified. Also, strut fairing pressure distributions, as well as wall pressure coefficients, were in close agreement with analytical predictions. Duct modifications for the integrated core/low spool were also evaluated. The total pressure loss was 1.59 percent. Although the increase in exit area in this design produced higher wall loadings, reflecting a more aggressive aerodynamic design, pressure profiles showed no evidence of flow separation. Overall, the results acquired have provided pertinent design and diagnostic information for the design of a turbine transition duct for both the flight propulsion system and the integrated core/low spool.
Fourrage, Cécile; Swann, Karl; Gonzalez Garcia, Jose Raul; Campbell, Anthony K; Houliston, Evelyn
2014-04-09
Green fluorescent proteins (GFPs) and calcium-activated photoproteins of the aequorin/clytin family, now widely used as research tools, were originally isolated from the hydrozoan jellyfish Aequorea victoria. It is known that bioluminescence resonance energy transfer (BRET) is possible between these proteins to generate flashes of green light, but the native function and significance of this phenomenon are unclear. Using the hydrozoan Clytia hemisphaerica, we characterized differential expression of three clytin and four GFP genes in distinct tissues at larva, medusa and polyp stages, corresponding to the major in vivo sites of bioluminescence (medusa tentacles and eggs) and fluorescence (these sites plus medusa manubrium, gonad and larval ectoderms). Potential physiological functions at these sites include UV protection of stem cells for fluorescence alone, and prey attraction and camouflaging counter-illumination for bioluminescence. Remarkably, the clytin2 and GFP2 proteins, co-expressed in eggs, show particularly efficient BRET and co-localize to mitochondria, owing to parallel acquisition by the two genes of mitochondrial targeting sequences during hydrozoan evolution. Overall, our results indicate that endogenous GFPs and photoproteins can play diverse roles even within one species and provide a striking and novel example of protein coevolution, which could have facilitated efficient or brighter BRET flashes through mitochondrial compartmentalization.
Animal Models for Muscular Dystrophy Show Different Patterns of Sarcolemmal Disruption
1997-01-01
Genetic defects in a number of components of the dystrophin–glycoprotein complex (DGC) lead to distinct forms of muscular dystrophy. However, little is known about how alterations in the DGC are manifested in the pathophysiology present in dystrophic muscle tissue. One hypothesis is that the DGC protects the sarcolemma from contraction-induced damage. Using tracer molecules, we compared sarcolemmal integrity in animal models for muscular dystrophy and in muscular dystrophy patient samples. Ev...
Nash, Evelyn E.; Peters, Brian M.; Lilly, Elizabeth A.; Noverr, Mairi C.; Fidel, Paul L.
2016-01-01
Candida glabrata is the second most common organism isolated from women with vulvovaginal candidiasis (VVC), particularly in women with uncontrolled diabetes mellitus. However, mechanisms involved in the pathogenesis of C. glabrata-associated VVC are unknown and have not been studied at any depth in animal models. The objective of this study was to evaluate host responses to infection following efforts to optimize a murine model of C. glabrata VVC. For this, various designs were evaluated for consistent experimental vaginal colonization (i.e., type 1 and type 2 diabetic mice, exogenous estrogen, varying inocula, and co-infection with C. albicans). Upon model optimization, vaginal fungal burden and polymorphonuclear neutrophil (PMN) recruitment were assessed longitudinally over 21 days post-inoculation, together with vaginal concentrations of IL-1β, S100A8 alarmin, lactate dehydrogenase (LDH), and in vivo biofilm formation. Consistent and sustained vaginal colonization with C. glabrata was achieved in estrogenized streptozotocin-induced type 1 diabetic mice. Vaginal PMN infiltration was consistently low, with IL-1β, S100A8, and LDH concentrations similar to uninoculated mice. Biofilm formation was not detected in vivo, and co-infection with C. albicans did not induce synergistic immunopathogenic effects. This data suggests that experimental vaginal colonization of C. glabrata is not associated with an inflammatory immunopathogenic response or biofilm formation. PMID:26807975
Hussey, Peter S; Ridgely, M Susan; Rosenthal, Meredith B
2011-11-01
Fee-for-service payment is blamed for many of the problems observed in the US health care system. One of the leading alternative payment models proposed in the Affordable Care Act of 2010 is bundled payment, which provides payment for all of the care a patient needs over the course of a defined clinical episode, instead of paying for each discrete service. We evaluated the initial "road test" of PROMETHEUS Payment, one of several bundled payment pilot projects. The project has faced substantial implementation challenges, and none of the three pilot sites had executed contracts or made bundled payments as of May 2011. The pilots have taken longer to set up than expected, primarily because of the complexity of the payment model and the fact that it builds on the existing fee-for-service payment system and other complexities of health care. Participants continue to see promise and value in the bundled payment model, but the pilot results suggest that the desired benefits of this and other payment reforms may take time and considerable effort to materialize.
Model and Calculation of Container Port Logistics Enterprises Efficiency Indexes
Directory of Open Access Journals (Sweden)
Xiao Hong
2013-04-01
Full Text Available The throughput of China’s container ports is growing fast, but the earnings of inland port enterprises are not so good. First, the initial efficiency evaluation indexes of port logistics are reduced and screened by a rough set model, and then weights are assigned to the logistics performance indexes by a rough-set calculation method. The indexes are then ranked and the important indexes identified in combination with the ABC management method, so that port logistics enterprises can monitor the key indexes to reduce cost and improve the efficiency of their logistics operations.
Efficient Parallel Statistical Model Checking of Biochemical Networks
Directory of Open Access Journals (Sweden)
Paolo Ballarini
2009-12-01
Full Text Available We consider the problem of verifying stochastic models of biochemical networks against behavioral properties expressed in temporal logic terms. Exact probabilistic verification approaches, such as CSL/PCTL model checking, are undermined by a huge computational demand which rules them out for most real case studies. Less demanding approaches, such as statistical model checking, estimate the likelihood that a property is satisfied by sampling executions of the stochastic model. We propose a methodology for efficiently estimating the likelihood that an LTL property P holds for a stochastic model of a biochemical network. As with other statistical verification techniques, the proposed methodology uses a stochastic simulation algorithm for generating execution samples; however, three key aspects improve its efficiency: first, the sample generation is driven by on-the-fly verification of P, which results in optimal overall simulation time. Second, the confidence interval estimation for the probability that P holds is based on an efficient variant of the Wilson method, which ensures faster convergence. Third, the whole methodology is designed in a parallel fashion, and a prototype software tool has been implemented that performs the sampling/verification process in parallel on an HPC architecture.
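The Wilson interval underlying the second point can be sketched in its standard textbook form. The sample counts below are invented for illustration, and the paper's efficient sequential variant is not reproduced.

```python
# Wilson score interval for a Bernoulli success probability: here, the
# probability that an LTL property P holds, estimated from the pass/fail
# outcomes of simulated executions of the stochastic model.

import math

def wilson_interval(successes, n, z=1.96):
    # z = 1.96 gives an approximately 95% confidence interval
    p_hat = successes / n
    denom = 1.0 + z * z / n
    centre = (p_hat + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z * z / (4 * n * n))
    return centre - half, centre + half

lo, hi = wilson_interval(870, 1000)  # e.g. 870 of 1000 sampled runs satisfied P
```

Unlike the simpler Wald interval, the Wilson interval stays inside [0, 1] and behaves well for probabilities near 0 or 1, which is one reason it converges faster in this setting.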
Efficient Multigrid Preconditioners for Anisotropic Problems in Geophysical Modelling
Dedner, Andreas; Scheichl, Robert
2014-01-01
Many problems in geophysical modelling require the efficient solution of highly anisotropic elliptic partial differential equations (PDEs) in "flat" domains. For example, in numerical weather- and climate-prediction an elliptic PDE for the pressure correction has to be solved at every time step in a thin spherical shell representing the global atmosphere. This elliptic solve can be one of the computationally most demanding components in semi-implicit semi-Lagrangian time stepping methods which are very popular as they allow for larger model time steps and better overall performance. With increasing model resolution, algorithmically efficient and scalable algorithms are essential to run the code under tight operational time constraints. We discuss the theory and practical application of bespoke geometric multigrid preconditioners for equations of this type. The algorithms deal with the strong anisotropy in the vertical direction by using the tensor-product approach originally analysed by Börm and Hiptmair ...
Simulation modeling of reliability and efficiency of mine ventilation systems
Energy Technology Data Exchange (ETDEWEB)
Ushakov, V.K. (Moskovskii Gornyi Institut (USSR))
1991-06-01
Discusses a method developed by the MGI institute for computerized simulation of operation of ventilation systems used in deep underground coal mines. The modeling is aimed at assessment of system reliability and efficiency (probability of failure-free operation and stable air distribution). The following stages of the simulation procedure are analyzed: development of a scheme of the ventilation system (type, aerodynamic characteristics and parameters that describe system elements, e.g. ventilation tunnels, ventilation equipment, main blowers etc., dynamics of these parameters depending among others on mining and geologic conditions), development of mathematical models that describe system characteristics as well as external factors and their effects on the system, development of a structure of the simulated ventilation system, development of an algorithm, development of the final computer program for simulation of a mine ventilation system. Use of the model for forecasting reliability of air supply and efficiency of mine ventilation is discussed. 2 refs.
Efficiency of Evolutionary Algorithms for Calibration of Watershed Models
Ahmadi, M.; Arabi, M.
2009-12-01
Since the promulgation of the Clean Water Act in the U.S. and other similar legislation around the world over the past three decades, watershed management programs have focused on the nexus of pollution prevention and mitigation. In this context, hydrologic/water quality models have been increasingly embedded in the decision making process. Simulation models are now commonly used to investigate the hydrologic response of watershed systems under varying climatic and land use conditions, and also to study the fate and transport of contaminants at various spatiotemporal scales. Adequate calibration and corroboration of models for various outputs at varying scales is an essential component of watershed modeling. The parameter estimation process can be challenging when multiple objectives are important. For example, improving streamflow predictions of the model at a stream location may degrade model predictions for sediments and/or nutrients at the same location or other outlets. This paper aims to evaluate the applicability and efficiency of single- and multi-objective evolutionary algorithms for parameter estimation of complex watershed models. To this end, the Shuffled Complex Evolution (SCE-UA) algorithm, a single-objective genetic algorithm (GA), and a multi-objective genetic algorithm (i.e., NSGA-II) were coupled with the Soil and Water Assessment Tool (SWAT) to calibrate the model at various locations within the Wildcat Creek Watershed, Indiana. The efficiency of these methods was investigated using different error statistics, including root mean square error, coefficient of determination and Nash-Sutcliffe efficiency coefficient, for the output variables as well as the baseflow component of the stream discharge. A sensitivity analysis was carried out to screen model parameters that bear significant uncertainties. Results indicated that while flow processes can be reasonably ascertained, parameterization of nutrient and pesticide processes
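The error statistics named above have standard definitions that can be sketched as follows. The observed/simulated series are invented for illustration and are not from the Wildcat Creek calibration.

```python
# Standard calibration error statistics: root mean square error (RMSE)
# and the Nash-Sutcliffe efficiency (NSE), where NSE = 1 indicates a
# perfect fit and NSE <= 0 means the model is no better than the
# observed mean.

import numpy as np

def rmse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

def nash_sutcliffe(obs, sim):
    # NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2))

obs = [2.0, 3.5, 5.0, 4.0, 3.0]  # illustrative observed discharges
sim = [2.2, 3.4, 4.6, 4.1, 3.2]  # illustrative simulated discharges
```

An evolutionary calibrator would maximize NSE (or minimize RMSE) over the model's parameter space, with the multi-objective variants trading off such statistics computed for different outputs (flow, sediment, nutrients) simultaneously.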
Turn-Taking Model in the Chinese Recruitment Reality show-BelongtoYou
Institute of Scientific and Technical Information of China (English)
AI Fan-qing
2014-01-01
Based on the theories of conversational analysis proposed by Sacks et al., this paper chooses excerpts of candidates’ interviews from the Chinese recruitment reality TV show BelongtoYou on Tianjin TV. Through analyzing the excerpts, how the rules of turn-taking are applied in this program is demonstrated, and the features of the turn-taking strategies used by the host, candidates and bosses are concluded.
Global thermal niche models of two European grasses show high invasion risks in Antarctica.
Pertierra, Luis R; Aragón, Pedro; Shaw, Justine D; Bergstrom, Dana M; Terauds, Aleks; Olalla-Tárraga, Miguel Ángel
2016-12-14
The two non-native grasses that have established long-term populations in Antarctica (Poa pratensis and Poa annua) were studied from a global multidimensional thermal niche perspective to address the biological invasion risk to Antarctica. These two species exhibit contrasting introduction histories and reproductive strategies and represent two referential case studies of biological invasion processes. We used a multistep process with a range of species distribution modelling techniques (ecological niche factor analysis, multidimensional envelopes, distance/entropy algorithms) together with a suite of thermoclimatic variables, to characterize the potential ranges of these species. Their native bioclimatic thermal envelopes in Eurasia, together with the different naturalized populations across continents, were compared next. The potential niche of P. pratensis was wider at the cold extremes; however, P. annua life history attributes enable it to be a more successful colonizer. We observe that particularly cold summers are a key aspect of the unique Antarctic environment. In consequence, ruderals such as P. annua can quickly expand under such harsh conditions, whereas the more stress-tolerant P. pratensis endures and persist through steady growth. Compiled data on human pressure at the Antarctic Peninsula allowed us to provide site-specific biosecurity risk indicators. We conclude that several areas across the region are vulnerable to invasions from these and other similar species. This can only be visualized in species distribution models (SDMs) when accounting for founder populations that reveal nonanalogous conditions. Results reinforce the need for strict management practices to minimize introductions. Furthermore, our novel set of temperature-based bioclimatic GIS layers for ice-free terrestrial Antarctica provide a mechanism for regional and global species distribution models to be built for other potentially invasive species.
An efficient method for solving fractional Hodgkin–Huxley model
Energy Technology Data Exchange (ETDEWEB)
Nagy, A.M., E-mail: abdelhameed_nagy@yahoo.com [Department of Mathematics, Faculty of Science, Benha University, 13518 Benha (Egypt); Sweilam, N.H., E-mail: n_sweilam@yahoo.com [Department of Mathematics, Faculty of Science, Cairo University, 12613 Giza (Egypt)
2014-06-13
In this paper, we present an accurate numerical method for solving the fractional Hodgkin–Huxley model. A non-standard finite difference method (NSFDM) is implemented to study the dynamic behaviors of the proposed model. The Grünwald–Letnikov definition is used to approximate the fractional derivatives. Numerical results, presented graphically, reveal that the NSFDM is easy to implement, effective and convenient for solving the proposed model. - Highlights: • An accurate numerical method for solving the fractional Hodgkin–Huxley model is given. • A non-standard finite difference method (NSFDM) is implemented for the proposed model. • The NSFDM can solve differential equations involving derivatives of non-integer order. • The NSFDM is a very powerful and efficient technique for solving the proposed model.
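The Grünwald–Letnikov approximation referenced above can be sketched as follows for a fractional derivative of order 0 < a < 1. This is a minimal illustration with an assumed step size and test function; the paper's non-standard finite difference scheme built on top of it is not reproduced.

```python
# Grünwald-Letnikov approximation: D^a f(t) ~ h^{-a} * sum_k w_k f(t - k h),
# where w_k = (-1)^k * binomial(a, k), computed by a stable recurrence.

import math

def gl_weights(a, n):
    # w_0 = 1, w_k = w_{k-1} * (1 - (a + 1) / k)
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (1.0 - (a + 1.0) / k))
    return w

def gl_derivative(f, a, t, h=1e-3):
    n = int(t / h)
    w = gl_weights(a, n)
    return sum(w[k] * f(t - k * h) for k in range(n + 1)) / h ** a

# Check against the exact result D^a t = t^(1-a) / Gamma(2 - a)
a, t = 0.5, 1.0
approx = gl_derivative(lambda x: x, a, t)
exact = t ** (1 - a) / math.gamma(2 - a)
```

The recurrence avoids evaluating binomial coefficients of non-integer order directly, and the approximation converges to the exact fractional derivative as the step size h shrinks.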
Modeling of detective quantum efficiency considering scatter-reduction devices
Energy Technology Data Exchange (ETDEWEB)
Park, Ji Woong; Kim, Dong Woon; Kim, Ho Kyung [Pusan National University, Busan (Korea, Republic of)
2016-05-15
The reduction of the signal-to-noise ratio (SNR) caused by scattered photons cannot be restored and has thus become a severe issue in digital mammography. Therefore, antiscatter grids are typically used in mammography. The scatter-cleanup performance of various scatter-reduction devices, such as air gaps, linear (1D) or cellular (2D) grids, and slot-scanning devices, has been extensively investigated by many research groups. At present, a digital mammography system with the slot-scanning geometry is also commercially available. In this study, we theoretically investigate the effect of scattered photons on the detective quantum efficiency (DQE) performance of digital mammography detectors by using the cascaded-systems analysis (CSA) approach. We show a simple DQE formalism describing digital mammography detector systems equipped with scatter-reduction devices by regarding the scattered photons as additive noise sources. The LFD increased with increasing PMMA thickness, and the amount of LFD indicated the corresponding SF. The estimated SFs were 0.13, 0.21, and 0.29 for PMMA thicknesses of 10, 20, and 30 mm, respectively. While the solid line describing the measured MTF for PMMA of 0 mm was the result of a least-squares regression fit using Eq. (14), the other lines simply resulted from multiplying the fit result (for PMMA of 0 mm) by the (1-SF) estimated from the LFDs in the measured MTFs. The measured spectral noise-power densities over the entire frequency range did not change much with increasing scatter. On the other hand, the calculation results showed that the spectral noise-power densities increased with increasing scatter. This discrepancy may be explained by the fact that the model developed in this study does not account for changes in x-ray interaction parameters for varying spectral shapes due to beam hardening with increasing PMMA thickness.
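The (1-SF) scaling described above can be sketched as follows. The scatter-free MTF shape and frequency grid below are invented for illustration; only the SF value for 30 mm PMMA is taken from the text.

```python
# Scatter adds a near-constant background, so the measured MTF at nonzero
# frequencies is the scatter-free MTF scaled by (1 - SF); the value at zero
# frequency stays 1 by normalization, producing the low-frequency drop (LFD).

import numpy as np

def scatter_degraded_mtf(mtf0, sf):
    mtf = (1.0 - sf) * np.asarray(mtf0, float)
    mtf[0] = 1.0  # normalization at zero frequency
    return mtf

f = np.linspace(0, 5, 6)           # spatial frequency grid (cycles/mm), arbitrary
mtf0 = np.exp(-0.3 * f)            # a made-up scatter-free MTF
mtf_30mm = scatter_degraded_mtf(mtf0, sf=0.29)  # SF for 30 mm PMMA (from text)
sf_est = 1.0 - mtf_30mm[1] / mtf0[1]            # SF recovered from the LFD
```

Inverting the scaling at low frequencies is exactly how the SF values quoted above (0.13, 0.21, 0.29) are read off from the measured low-frequency drops.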
Directory of Open Access Journals (Sweden)
Rastafa I Geddes
Full Text Available PURPOSE: Controlled cortical impact (CCI) models in adult and aged Sprague-Dawley (SD) rats have been used extensively to study medial prefrontal cortex (mPFC) injury and the effects of post-injury progesterone treatment, but the hormone's effects after traumatic brain injury (TBI) in juvenile animals have not been determined. In the present proof-of-concept study we investigated whether progesterone had neuroprotective effects in a pediatric model of moderate to severe bilateral brain injury. METHODS: Twenty-eight-day-old (PND 28) male Sprague-Dawley rats received sham (n = 24) or CCI (n = 47) injury and were given progesterone (4, 8, or 16 mg/kg per 100 g body weight) or vehicle injections on post-injury days (PID) 1-7, subjected to behavioral testing from PID 9-27, and analyzed for lesion size at PID 28. RESULTS: The 8 and 16 mg/kg doses of progesterone were observed to be most beneficial in reducing the effect of CCI on lesion size and behavior in PND 28 male SD rats. CONCLUSION: Our findings suggest that a midline CCI injury to the frontal cortex will reliably produce a moderate TBI comparable to what is seen in the adult male rat and that progesterone can ameliorate the injury-induced deficits.
A model SN2 reaction ‘on water’ does not show rate enhancement
Nelson, Katherine V.; Benjamin, Ilan
2011-05-01
Molecular dynamics calculations of the benchmark nucleophilic substitution reaction (SN2) Cl- + CH3Cl are carried out at the water liquid/vapor interface. The reaction free energy profile and the activation free energy are determined as a function of the reactants' location normal to the surface. The activation free energy remains almost constant relative to that in bulk water, despite the fact that the barrier is expected to significantly decrease as the reaction is carried out near the vapor phase. We show that this is due to the combined effects of a clustering of water molecules around the nucleophile and a relatively weak hydration of the transition state.
Efficient anisotropic wavefield extrapolation using effective isotropic models
Alkhalifah, Tariq Ali
2013-06-10
Isotropic wavefield extrapolation is more efficient than anisotropic extrapolation, especially when the anisotropy of the medium is tilted (from the vertical). We use the kinematics of the wavefield, appropriately represented in the high-frequency asymptotic approximation by the eikonal equation, to develop effective isotropic models, which are used to efficiently and approximately extrapolate anisotropic wavefields using the relatively cheaper isotropic operators. These effective velocity models are source dependent and tend to embed the anisotropy in the inhomogeneity. Although this isotropically generated wavefield theoretically shares the same kinematic behavior as the first-arrival anisotropic wavefield, it can also include all the arrivals resulting from complex wavefield propagation. In fact, the effective models reduce to the original isotropic model in the limit of isotropy, so the difference between the effective model and, for example, the vertical velocity depends on the strength of anisotropy. For reverse time migration (RTM), effective models are developed for the source and receiver fields by computing the traveltime for a plane-wave source stretching along our source and receiver lines in a delayed-shot migration implementation. Application to the BP TTI model demonstrates the effectiveness of the approach.
In vitro and in vivo models of Huntington's disease show alterations in the endocannabinoid system.
Bari, Monica; Battista, Natalia; Valenza, Marta; Mastrangelo, Nicolina; Malaponti, Marinella; Catanzaro, Giuseppina; Centonze, Diego; Finazzi-Agrò, Alessandro; Cattaneo, Elena; Maccarrone, Mauro
2013-07-01
In this study, we analyzed the components of the endocannabinoid system (ECS) in R6/2 mice, a widely used model of Huntington's disease (HD). We measured the endogenous content of N-arachidonoylethanolamine and 2-arachidonoylglycerol and the activity of their biosynthetic enzymes (N-acyl-phosphatidylethanolamine-hydrolyzing phospholipase D and diacylglycerol lipase, respectively) and hydrolytic enzymes [fatty acid amide hydrolase (FAAH) and monoacylglycerol lipase, respectively] and of their target receptors (type 1 cannabinoid receptor, type 2 cannabinoid receptor, and transient receptor potential vanilloid-1) in the brains of wild-type and R6/2 mice of different ages, as well as in the striatum and cortex of 12-week-old animals. In addition, we measured FAAH activity in lymphocytes of R6/2 mice. In the whole brains of 12-week-old R6/2 mice, we found reductions in N-acyl-phosphatidylethanolamine-hydrolyzing phospholipase D activity, diacylglycerol lipase activity and cannabinoid receptor binding, mostly associated with changes in the striatum but not in the cortex, as well as an increase in 2-arachidonoylglycerol content as compared with wild-type littermates, without any other change in ECS elements. Then, our analysis was extended to HD43 cells, an inducible cellular model of HD derived from rat ST14A cells. In both induced and noninduced conditions, we demonstrated a fully functional ECS. Overall, our data suggest that the ECS is differently affected in mouse and human HD, and that HD43 cells are suitable for high-throughput screening of FAAH-oriented drugs affecting HD progression.
Higgins, R M; Johnson, R; Jones, M N A; Rudge, C
2005-03-01
It is proposed that equity is a trade-off, or compromise, between equality and efficiency. The kidney transplant allocation algorithm currently used in the United Kingdom (NAT) was tested in the efficiency-equity model. In an exercise of 2000 past UK donors and a dynamic waiting list of 5000 potential recipients, 4000 transplants were allocated either by NAT, by equal allocation (EQ, a lottery), or by efficiency (EF). Diabetic recipients received 7.4% of transplants under NAT, 8.6% under EQ, and 0% under EF; paediatric recipients received 6.8% under NAT, 0.6% under EQ, and 0.7% under EF. For HLA matching, there were 77.9% favourable or 000 matches under NAT, 3.0% under EQ, and 53.1% under EF. Predicted survival showed better outcomes for EF versus NAT (P < .0001) and for NAT versus EQ (P = .05). The NAT allocation system favours paediatric recipients and does not deny diabetics the chance of a transplant, broadly in line with published public and professional opinions. The NAT scheme achieves better HLA matching than the EF model, which suggests that the rationale for allocation based primarily on HLA matching could be reexamined.
Energy efficiency model for small/medium geothermal heat pump systems
Directory of Open Access Journals (Sweden)
Staiger Robert
2015-06-01
Heating application efficiency is a crucial point for saving energy and reducing greenhouse gas emissions. Today, EU legal framework conditions clearly define how heating systems should perform, how buildings should be designed in an energy-efficient manner, and how renewable energy sources should be used. Using heat pumps (HP) as an alternative “Renewable Energy System” could be one solution for increasing efficiency, using less energy, reducing energy dependency, and reducing greenhouse gas emissions. This article takes a closer look at the different efficiency dependencies of such geothermal HP (GHP) systems for domestic buildings (small/medium HP). Manufacturers of HP appliances must document the efficiency, the so-called COP (Coefficient of Performance), in the EU under certain standards. In the technical datasheets of HP appliances, these COP parameters give a clear indication of the performance quality of an HP device. HP efficiency (COP) and the efficiency of a working HP system can vary significantly. For this reason, an annual efficiency statistic named the “Seasonal Performance Factor” (SPF) has been defined to give an overall efficiency for comparing HP systems. With this indicator, conclusions can be drawn from installation, economic, environmental, performance, and risk points of view. A technical and economic HP model shows the dependence of energy efficiency problems in HP systems. To reduce the complexity of the HP model, only the important factors for efficiency dependencies are used. Both dynamic and static situations of HPs and their efficiency are considered. The latest data from field tests of HP systems and practical experience over the last 10 years are compared with one of the latest simulation programs, with the help of two practical geothermal HP system calculations. The gathered empirical data allow for a better estimate of the HP system efficiency, their
Increased Statistical Efficiency in a Lognormal Mean Model
Directory of Open Access Journals (Sweden)
Grant H. Skrepnek
2014-01-01
Within the context of clinical and other scientific research, a substantial need exists for an accurate determination of the point estimate in a lognormal mean model, given that highly skewed data are often present. As such, logarithmic transformations are often advocated to achieve the assumptions of parametric statistical inference. Despite this, existing approaches that utilize only a sample’s mean and variance may not necessarily yield the most efficient estimator. The current investigation developed and tested an improved efficient point estimator for a lognormal mean by capturing more complete information via the sample’s coefficient of variation. Results of an empirical simulation study across varying sample sizes and population standard deviations indicated relative improvements in efficiency of up to 129.47 percent compared to the usual maximum likelihood estimator and up to 21.33 absolute percentage points above the efficient estimator presented by Shen and colleagues (2006). The relative efficiency of the proposed estimator increased particularly as a function of decreasing sample size and increasing population standard deviation.
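As a point of reference for the estimators compared above, the usual maximum likelihood estimator of a lognormal mean can be sketched in a few lines. This is an illustrative simulation only; the function name, parameter values, and sample size are hypothetical, and the improved coefficient-of-variation estimator itself is not reproduced here:

```python
import math
import random

def lognormal_mean_mle(sample):
    """Back-transformed MLE of a lognormal mean, exp(mu_hat + var_hat / 2),
    using only the log-scale sample mean and variance."""
    logs = [math.log(x) for x in sample]
    n = len(logs)
    mu_hat = sum(logs) / n
    var_hat = sum((v - mu_hat) ** 2 for v in logs) / n  # MLE variance (divide by n)
    return math.exp(mu_hat + var_hat / 2.0)

random.seed(0)
mu_true, sigma_true = 1.0, 0.8
true_mean = math.exp(mu_true + sigma_true ** 2 / 2.0)  # population lognormal mean
sample = [random.lognormvariate(mu_true, sigma_true) for _ in range(5000)]
rel_error = abs(lognormal_mean_mle(sample) - true_mean) / true_mean
print(rel_error)  # small relative error at this sample size
```

The paper’s proposed estimator augments this baseline with the sample coefficient of variation; the sketch shows only the baseline against which the reported efficiency gains are measured.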
I.P. van Staveren (Irene)
2009-01-01
The dominant economic theory, neoclassical economics, employs a single economic evaluative criterion: efficiency. Moreover, it assigns this criterion a very specific meaning. Other – heterodox – schools of thought in economics tend to use more open concepts of efficiency, related to comm
A zebrafish model of glucocorticoid resistance shows serotonergic modulation of the stress response
Directory of Open Access Journals (Sweden)
Brian eGriffiths
2012-10-01
One function of glucocorticoids is to restore homeostasis after an acute stress response by providing negative feedback to stress circuits in the brain. Loss of this negative feedback leads to elevated physiological stress and may contribute to depression, anxiety, and post-traumatic stress disorder. We investigated the early, developmental effects of glucocorticoid signaling deficits on stress physiology and related behaviors using a mutant zebrafish, grs357, with non-functional glucocorticoid receptors. These mutants are morphologically inconspicuous and adult-viable. A previous study of adult grs357 mutants showed loss of glucocorticoid-mediated negative feedback and elevated physiological and behavioral stress markers. Already at five days post-fertilization, mutant larvae had elevated whole-body cortisol, increased expression of pro-opiomelanocortin (POMC), the precursor of adrenocorticotropic hormone (ACTH), and failed to show normal suppression of stress markers after dexamethasone treatment. Mutant larvae had larger auditory-evoked startle responses compared to wildtype sibling controls (grwt), despite having lower spontaneous activity levels. Fluoxetine (Prozac) treatment in mutants decreased startle responding and increased spontaneous activity, making them behaviorally similar to wildtype. This result mirrors known effects of selective serotonin reuptake inhibitors (SSRIs) in modifying glucocorticoid signaling and alleviating stress disorders in human patients. Our results suggest that larval grs357 zebrafish can be used to study behavioral, physiological, and molecular aspects of stress disorders. Most importantly, interactions between glucocorticoid and serotonin signaling appear to be highly conserved among vertebrates, suggesting deep homologies at the neural circuit level and opening up new avenues for research into psychiatric conditions.
The atherogenic Scarb1 null mouse model shows a high bone mass phenotype.
Martineau, Corine; Martin-Falstrault, Louise; Brissette, Louise; Moreau, Robert
2014-01-01
Scavenger receptor class B, type I (SR-BI), the Scarb1 gene product, is a receptor associated with cholesteryl ester uptake from high-density lipoproteins (HDL), which drives cholesterol movement from peripheral tissues toward the liver for excretion; consequently, Scarb1 null mice are prone to atherosclerosis. Because studies have linked atherosclerosis incidence with osteoporosis, we characterized bone metabolism in these mice. Bone morphometry was assessed through microcomputed tomography and histology. Marrow stromal cells (MSCs) were used to characterize the influence of endogenous SR-BI on cell functions. Total and HDL-associated cholesterol in null mice were increased by 32-60%, consistent with the receptor's role in lipoprotein metabolism. Distal metaphyses from 2- and 4-mo-old null mice showed, respectively, 46 and 37% higher bone volume fraction associated with a higher number of trabeculae. Histomorphometric analyses in 2-mo-old null male mice revealed 1.42-fold greater osteoblast surface, 1.37-fold higher percent mineralizing surface, and 1.69-fold enhanced bone formation rate. In vitro assays of MSCs from null mice revealed a 37% higher proliferation rate, 48% more alkaline phosphatase activity, 70% greater mineralization potential, and 2-fold higher osterix (Sp7) expression, yet a 0.5-fold decrease in caveolin-1 (Cav1) expression. Selective uptake levels of HDL-associated cholesteryl oleate and estradiol were similar between MSCs from wild-type and Scarb1 null mice, suggesting that contribution to this process is not SR-BI's main role in these cells. However, Scarb1 knockout stunted the HDL-dependent regulation of Cav1 gene expression. Scarb1 null mice are not prone to osteoporosis but show higher bone mass associated with enhanced bone formation.
Selen, Ebru Selin; Bolandnazar, Zeinab; Tonelli, Marco; Bütz, Daniel E; Haviland, Julia A; Porter, Warren P; Assadi-Porter, Fariba M
2015-08-07
Polycystic ovary syndrome (PCOS) is associated with metabolic and endocrine disorders in women of reproductive age. The etiology of PCOS is still unknown. Mice prenatally treated with glucocorticoids exhibit metabolic disturbances that are similar to those seen in women with PCOS. We used an untargeted nuclear magnetic resonance (NMR)-based metabolomics approach to understand the metabolic changes occurring in the plasma and kidney over time in female glucocorticoid-treated (GC-treated) mice. There are significant changes in plasma amino acid levels (valine, tyrosine, and proline) and their intermediates (2-hydroxybutyrate, 4-aminobutyrate, and taurine), whereas in kidneys, TCA cycle metabolites (citrate, fumarate, and succinate) and pentose phosphate (PP) pathway products (inosine and uracil) are significantly altered. These changes in metabolic substrates in the plasma and kidneys of treated mice are associated with altered amino acid metabolism, increased cytoplasmic PP, and increased mitochondrial activity, leading to a more oxidized state. This study identifies biomarkers associated with metabolic dysfunction in kidney mitochondria of a prenatal glucocorticoid-treated mouse model of PCOS that may be used as early predictive biomarkers of oxidative stress in the PCOS metabolic disorder in women.
Efficient multilevel brain tumor segmentation with integrated bayesian model classification.
Corso, J J; Sharon, E; Dube, S; El-Saden, S; Sinha, U; Yuille, A
2008-05-01
We present a new method for automatic segmentation of heterogeneous image data that takes a step toward bridging the gap between bottom-up affinity-based segmentation methods and top-down generative model based approaches. The main contribution of the paper is a Bayesian formulation for incorporating soft model assignments into the calculation of affinities, which are conventionally model free. We integrate the resulting model-aware affinities into the multilevel segmentation by weighted aggregation algorithm, and apply the technique to the task of detecting and segmenting brain tumor and edema in multichannel magnetic resonance (MR) volumes. The computationally efficient method runs orders of magnitude faster than current state-of-the-art techniques giving comparable or improved results. Our quantitative results indicate the benefit of incorporating model-aware affinities into the segmentation process for the difficult case of glioblastoma multiforme brain tumor.
Verification of Embedded Memory Systems using Efficient Memory Modeling
Ganai, Malay K; Ashar, Pranav
2011-01-01
We describe verification techniques for embedded memory systems using efficient memory modeling (EMM), without explicitly modeling each memory bit. We extend our previously proposed EMM approach in Bounded Model Checking (BMC) from a single-memory system with a single read/write port to the more commonly occurring systems with multiple memories having multiple read and write ports. More importantly, we augment EMM to provide correctness proofs, in addition to finding real bugs as before. The novelties of our verification approach are in (a) combining EMM with proof-based abstraction that preserves the correctness of a property up to a certain analysis depth of SAT-based BMC, and (b) modeling arbitrary initial memory state precisely, thereby providing inductive proofs using SAT-based BMC for embedded memory systems. As in the previous approach, we construct a verification model by eliminating memory arrays but retaining the memory interface signals with their control logic and adding constraints on tho...
Metabolic remodeling agents show beneficial effects in the dystrophin-deficient mdx mouse model
Directory of Open Access Journals (Sweden)
Jahnke Vanessa E
2012-08-01
Abstract Background Duchenne muscular dystrophy is a genetic disease involving severe muscle wasting that is characterized by cycles of muscle degeneration/regeneration and culminates in early death in affected boys. Mitochondria are presumed to be involved in the regulation of myoblast proliferation/differentiation; enhancing mitochondrial activity with exercise mimetics (AMPK and PPAR-delta agonists) increases muscle function and inhibits muscle wasting in healthy mice. We therefore asked whether metabolic remodeling agents that increase mitochondrial activity would improve muscle function in mdx mice. Methods Twelve-week-old mdx mice were treated with two different metabolic remodeling agents (GW501516 and AICAR), separately or in combination, for 4 weeks. Extensive systematic behavioral, functional, histological, biochemical, and molecular tests were conducted to assess the drugs' effects. Results We found a gain in body and muscle weight in all treated mice. Histologic examination showed a decrease in muscle inflammation and in the number of fibers with central nuclei and an increase in fibers with peripheral nuclei, with significantly fewer activated satellite cells and regenerating fibers. Together with an inhibition of FoXO1 signaling, these results indicated that the treatments reduced ongoing muscle damage. Conclusions The three treatments produced significant improvements in disease phenotype, including an increase in overall behavioral activity and significant gains in forelimb and hind limb strength. Our findings suggest that triggering mitochondrial activity with exercise mimetics improves muscle function in dystrophin-deficient mdx mice.
Male Wistar rats show individual differences in an animal model of conformity.
Jolles, Jolle W; de Visser, Leonie; van den Bos, Ruud
2011-09-01
Conformity refers to the act of changing one's behaviour to match that of others. Recent studies in humans have shown that individual differences exist in conformity and that these differences are related to differences in neuronal activity. To understand the neuronal mechanisms in more detail, animal tests to assess conformity are needed. Here, we used a test of conformity in rats that has previously been evaluated in female, but not male, rats and assessed the nature of individual differences in conformity. Male Wistar rats were given the opportunity to learn that two diets differed in palatability. They were subsequently exposed to a demonstrator that had consumed the less palatable food. Thereafter, they were exposed to the same diets again. Just like female rats, male rats decreased their preference for the more palatable food after interaction with demonstrator rats that had eaten the less palatable food. Individual differences existed for this shift, which were only weakly related to an interaction between their own initial preference and the amount consumed by the demonstrator rat. The data show that this conformity test in rats is a promising tool to study the neurobiology of conformity.
Semantic Web Based Efficient Search Using Ontology and Mathematical Model
Directory of Open Access Journals (Sweden)
K.Palaniammal
2014-01-01
The semantic web is the forthcoming technology in the world of search engines. It is focused mainly on search that is meaningful rather than the syntactic search prevailing now. This proposed work concerns semantic search with respect to the educational domain. In this paper, we propose a semantic-web-based efficient search using ontology and a mathematical model that takes into account misleading or unmatched service information, lack of relevant domain knowledge, and wrong service queries. To address these issues, the framework is designed to make three major contributions: an ontology knowledge base, Natural Language Processing (NLP) techniques, and a search model. The ontology knowledge base stores domain-specific service ontologies and service description entity (SDE) metadata. The search model, which incorporates the mathematical model, retrieves SDE metadata efficiently for users in the education domain. Natural language processing techniques are used for spell-check and synonym-based search. The results are retrieved and stored in an ontology, which prevents data redundancy. The results are more accurate, robust to spelling errors, and aware of synonymous context. This approach reduces the user’s time and effort in finding correct results for a search query, and our model provides more accurate results. A series of experiments is conducted to evaluate the mechanism and the employed mathematical model, respectively.
On efficient Bayesian inference for models with stochastic volatility
Griffin, Jim E.; Sakaria, Dhirendra Kumar
2016-01-01
An efficient method for Bayesian inference in stochastic volatility models uses a linear state space representation to define a Gibbs sampler in which the volatilities are jointly updated. This method involves the choice of an offset parameter and we illustrate how its choice can have an important effect on the posterior inference. A Metropolis-Hastings algorithm is developed to robustify this approach to choice of the offset parameter. The method is illustrated on simulated data with known p...
Detailed models for timing and efficiency in resistive plate chambers
Riegler, Werner
2003-01-01
We discuss detailed models for detector physics processes in Resistive Plate Chambers, in particular including the effect of attachment on the avalanche statistics. In addition, we present analytic formulas for average charges and intrinsic RPC time resolution. Using a Monte Carlo simulation including all the steps from primary ionization to the front-end electronics we discuss the dependence of efficiency and time resolution on parameters like primary ionization, avalanche statistics and threshold.
Modeling, Control and Energy Efficiency of Underwater Snake Robots
Kelasidi, Eleni
2015-01-01
This thesis is mainly motivated by the ability of snake robots to move over land as well as underwater while the physiology of the robot remains the same. This adaptability to different motion demands, depending on the environment, is one of the main characteristics of snake robots. In particular, this thesis targets several interesting aspects regarding the modeling, control, and energy efficiency of underwater snake robots. This thesis address...
Investigating market efficiency through a forecasting model based on differential equations
de Resende, Charlene C.; Pereira, Adriano C. M.; Cardoso, Rodrigo T. N.; de Magalhães, A. R. Bosco
2017-05-01
A new differential-equation-based model for stock price trend forecasting is proposed as a tool to investigate efficiency in an emerging market. Its predictive power was shown statistically to be higher than that of a completely random model, signaling the presence of arbitrage opportunities. Conditions under which accuracy is enhanced are investigated, and application of the model as part of a trading strategy is discussed.
Efficiency at maximum power and efficiency fluctuations in a linear Brownian heat-engine model
Park, Jong-Min; Chun, Hyun-Myung; Noh, Jae Dong
2016-07-01
We investigate the stochastic thermodynamics of a two-particle Langevin system. Each particle is in contact with a heat bath at a different temperature, T1 or T2, and the system operates as an autonomous heat engine performing work against the external driving force. Linearity of the system enables us to examine the thermodynamic properties of the engine analytically. We find that the efficiency of the engine at maximum power, ηMP, is given by ηMP = 1 - √(T2/T1). This universal form has been known as a characteristic of endoreversible heat engines; our result extends the universal behavior of ηMP to nonendoreversible engines. We also obtain the large deviation function of the probability distribution for the stochastic efficiency in the overdamped limit. The large deviation function takes its minimum value at the macroscopic efficiency η = η̄ and increases monotonically until it reaches plateaus for η ≤ ηL and η ≥ ηR, with model-dependent parameters ηL and ηR.
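The closed-form efficiency quoted above is the Curzon-Ahlborn expression, and it is easy to check numerically. A minimal sketch, with hypothetical bath temperatures T1 = 400 and T2 = 300, also confirms that ηMP lies below the Carnot bound for these temperatures:

```python
import math

def eta_mp(t_cold, t_hot):
    """Efficiency at maximum power, eta_MP = 1 - sqrt(T2/T1)
    (the Curzon-Ahlborn form quoted in the abstract)."""
    return 1.0 - math.sqrt(t_cold / t_hot)

def eta_carnot(t_cold, t_hot):
    """Carnot efficiency, the upper bound for any heat engine."""
    return 1.0 - t_cold / t_hot

# Hypothetical bath temperatures with T1 > T2.
t1, t2 = 400.0, 300.0
print(eta_mp(t2, t1))      # ~0.134
print(eta_carnot(t2, t1))  # 0.25
```

For any T2 < T1, 1 - √(T2/T1) < 1 - T2/T1, so the maximum-power efficiency never exceeds Carnot; the abstract's contribution is that this form also holds beyond endoreversible engines.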
Increasing Computational Efficiency of Cochlear Models Using Boundary Layers
Alkhairy, Samiya A.; Shera, Christopher A.
2016-01-01
Our goal is to develop methods to improve the efficiency of computational models of the cochlea for applications that require the solution accurately only within a basal region of interest, specifically by decreasing the number of spatial sections needed for simulation of the problem with good accuracy. We design algebraic spatial and parametric transformations to computational models of the cochlea. These transformations are applied after the basal region of interest and allow for spatial preservation, driven by the natural characteristics of approximate spatial causality of cochlear models. The project is of foundational nature and hence the goal is to design, characterize and develop an understanding and framework rather than optimization and globalization. Our scope is as follows: designing the transformations; understanding the mechanisms by which computational load is decreased for each transformation; development of performance criteria; characterization of the results of applying each transformation to a specific physical model and discretization and solution schemes. In this manuscript, we introduce one of the proposed methods (complex spatial transformation) for a case study physical model that is a linear, passive, transmission line model in which the various abstraction layers (electric parameters, filter parameters, wave parameters) are clearer than other models. This is conducted in the frequency domain for multiple frequencies using a second order finite difference scheme for discretization and direct elimination for solving the discrete system of equations. The performance is evaluated using two developed simulative criteria for each of the transformations. In conclusion, the developed methods serve to increase efficiency of a computational traveling wave cochlear model when spatial preservation can hold, while maintaining good correspondence with the solution of interest and good accuracy, for applications in which the interest is in the solution
Efficient estimation of moments in linear mixed models
Wu, Ping; Zhu, Li-Xing; 10.3150/10-BEJ330
2012-01-01
In the linear random effects model, when distributional assumptions such as normality of the error variables cannot be justified, moments may serve as alternatives to describe relevant distributions in neighborhoods of their means. Generally, estimators may be obtained as solutions of estimating equations. It turns out that there may be several equations, each of them leading to consistent estimators, in which case finding the efficient estimator becomes a crucial problem. In this paper, we systematically study estimation of moments of the errors and random effects in linear mixed models.
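A concrete instance of estimating variance components from moment equations is the balanced one-way random-effects model, where the classical ANOVA estimators have closed form. The sketch below is illustrative only (the function name and simulated parameters are hypothetical) and is not the estimator analysis from the paper:

```python
import random

def anova_moment_estimates(groups):
    """Method-of-moments (ANOVA) estimators for the balanced one-way
    random-effects model y_ij = mu + b_i + e_ij: the within-group mean
    square estimates sigma_e^2, and (MSB - MSW)/n estimates sigma_b^2."""
    k = len(groups)      # number of groups
    n = len(groups[0])   # observations per group (balanced design)
    means = [sum(g) / n for g in groups]
    grand = sum(means) / k
    msw = sum((y - m) ** 2 for g, m in zip(groups, means) for y in g) / (k * (n - 1))
    msb = n * sum((m - grand) ** 2 for m in means) / (k - 1)
    return msw, max((msb - msw) / n, 0.0)

random.seed(1)
sigma_b, sigma_e = 2.0, 1.0  # hypothetical true standard deviations
effects = [random.gauss(0.0, sigma_b) for _ in range(200)]
groups = [[b + random.gauss(0.0, sigma_e) for _ in range(10)] for b in effects]
se2_hat, sb2_hat = anova_moment_estimates(groups)
print(se2_hat, sb2_hat)  # close to 1.0 and 4.0
```

When several such moment equations are available, each yields a consistent estimator, which is exactly the situation the paper studies: choosing the efficient one among them.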
The Marine Virtual Laboratory: enabling efficient ocean model configuration
Directory of Open Access Journals (Sweden)
P. R. Oke
2015-11-01
The technical steps involved in configuring a regional ocean model are analogous for all community models. All require the generation of a model grid and the preparation and interpolation of topography, initial conditions, and forcing fields. Each task in configuring a regional ocean model is straightforward, but the process of downloading and reformatting data can be time-consuming. For an experienced modeller, the configuration of a new model domain can take as little as a few hours; for an inexperienced modeller, it can take much longer. In pursuit of technical efficiency, the Australian ocean modelling community has developed the Web-based MARine Virtual Laboratory (WebMARVL). WebMARVL allows a user to quickly and easily configure an ocean general circulation or wave model through a simple interface, reducing the time to configure a regional model to a few minutes. Through WebMARVL, a user is prompted to define the basic options needed for a model configuration, including the model, run duration, spatial extent, and input data. Once all aspects of the configuration are selected, a series of data extraction, reprocessing, and repackaging services are run, and a "take-away bundle" is prepared for download. Building on the capabilities developed under Australia's Integrated Marine Observing System, WebMARVL also extracts all of the available observations for the chosen time-space domain. The user is able to download the take-away bundle and use it to run the model of their choice. Models supported by WebMARVL include three community ocean general circulation models and two community wave models. The model configuration from the take-away bundle is intended to be a starting point for scientific research. The user may subsequently refine the details of the model set-up to improve the model performance for the given application. In this study, WebMARVL is described along with a series of results from test cases comparing Web
A Multi-objective Procedure for Efficient Regression Modeling
Sinha, Ankur; Kuosmanen, Timo
2012-01-01
Variable selection is recognized as one of the most critical steps in statistical modeling. The problems encountered in engineering and the social sciences are commonly characterized by an over-abundance of explanatory variables, non-linearities, and unknown interdependencies between the regressors. An added difficulty is that analysts may have little or no prior knowledge of the relative importance of the variables. To provide a robust method for model selection, this paper introduces a technique called the Multi-objective Genetic Algorithm for Variable Selection (MOGA-VS), which provides the user with an efficient set of regression models for a given data set. The algorithm treats the regression problem as a two-objective task, where the purpose is to prefer models that have fewer regression coefficients and better goodness of fit. In MOGA-VS, the model selection procedure is implemented in two steps. First, we generate the frontier of all efficient or non-dominated regression m...
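The non-dominated (efficient) set the algorithm searches for can be illustrated with a plain Pareto filter over the two stated objectives: fewer regression coefficients and better goodness of fit. The candidate models and R-squared values below are hypothetical:

```python
def pareto_front(models):
    """Return the non-dominated models among (n_coefficients, r_squared)
    pairs: minimize the first objective, maximize the second."""
    front = []
    for i, (k_i, r2_i) in enumerate(models):
        dominated = any(
            k_j <= k_i and r2_j >= r2_i and (k_j < k_i or r2_j > r2_i)
            for j, (k_j, r2_j) in enumerate(models)
            if j != i
        )
        if not dominated:
            front.append((k_i, r2_i))
    return front

# Hypothetical candidate models: (number of coefficients, R^2).
candidates = [(2, 0.70), (3, 0.80), (3, 0.75), (5, 0.80), (4, 0.90)]
print(pareto_front(candidates))  # [(2, 0.7), (3, 0.8), (4, 0.9)]
```

MOGA-VS explores this frontier with a genetic algorithm rather than by exhaustive pairwise comparison, which matters when the set of candidate variable subsets grows exponentially with the number of regressors.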
Efficiently parallelized modeling of tightly focused, large bandwidth laser pulses
Dumont, Joey; Lefebvre, Catherine; Gagnon, Denis; MacLean, Steve
2016-01-01
The Stratton-Chu integral representation of electromagnetic fields is used to study the spatio-temporal properties of large bandwidth laser pulses focused by high numerical aperture mirrors. We review the formal aspects of the derivation of diffraction integrals from the Stratton-Chu representation and discuss the use of the Hadamard finite part in the derivation of the physical optics approximation. By analyzing the formulation we show that, for the specific case of a parabolic mirror, the integrands involved in the description of the reflected field near the focal spot do not possess the strong oscillations characteristic of diffraction integrals. Consequently, the integrals can be evaluated with simple and efficient quadrature methods rather than with specialized, more costly approaches. We report on the development of an efficiently parallelized algorithm that evaluates the Stratton-Chu diffraction integrals for incident fields of arbitrary temporal and spatial dependence. We use our method to show that t...
Towards an efficient multiphysics model for nuclear reactor dynamics
Directory of Open Access Journals (Sweden)
Obaidurrahman K.
2015-01-01
Availability of fast computer resources has facilitated more in-depth modeling of complex engineering systems that involve strong multiphysics interactions. Such multiphysics modeling is a necessity in nuclear reactor safety studies, where efforts are being made worldwide to combine the knowledge from all associated disciplines in one place to accomplish the most realistic simulation of the phenomena involved. Along these lines, coupled modeling of nuclear reactor neutron kinetics, fuel heat transfer and coolant transport is now regular practice for transient analysis of a reactor core. However, optimization between modeling accuracy and computational economy has always been a challenging task in ensuring an adequate degree of reliability in such extensive numerical exercises. Complex reactor core modeling involves estimation of the evolving 3-D core thermal state, which in turn demands an expensive multichannel-based detailed core thermal hydraulics model. A novel approach of power-weighted coupling between core neutronics and thermal hydraulics presented in this work aims to reduce the bulk of core thermal calculations in core dynamics modeling to a significant extent without compromising accuracy of computation. The coupled core model has been validated against a series of international benchmarks. Accuracy and computational efficiency of the proposed multiphysics model have been demonstrated by analyzing a reactivity-initiated transient.
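The abstract does not spell out the power-weighted coupling, but its core idea, collapsing many per-channel thermal quantities into one effective feedback value weighted by channel power, can be sketched as follows (a hypothetical illustration; the function name and weighting form are assumptions, not the paper's formulation):

```python
def power_weighted_average(channel_values, channel_powers):
    """Collapse per-channel thermal feedback (e.g. coolant temperatures)
    into a single effective value, weighting each channel by its power.
    Channels producing more power contribute more to the core-average
    feedback seen by the neutronics model."""
    total_power = sum(channel_powers)
    return sum(v * p for v, p in zip(channel_values, channel_powers)) / total_power
```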
Modeling of hybrid vehicle fuel economy and fuel engine efficiency
Wu, Wei
"Near-CV" (i.e., near-conventional vehicle) hybrid vehicles, with an internal combustion engine, and a supplementary storage with low-weight, low-energy but high-power capacity, are analyzed. This design avoids the shortcoming of the "near-EV" and the "dual-mode" hybrid vehicles that need a large energy storage system (in terms of energy capacity and weight). The small storage is used to optimize engine energy management and can provide power when needed. The energy advantage of the "near-CV" design is to reduce reliance on the engine at low power, to enable regenerative braking, and to provide good performance with a small engine. The fuel consumption of internal combustion engines, which might be applied to hybrid vehicles, is analyzed by building simple analytical models that reflect the engines' energy loss characteristics. Both diesel and gasoline engines are modeled. The simple analytical models describe engine fuel consumption at any speed and load point by describing the engine's indicated efficiency and friction. The engine's indicated efficiency and heat loss are described in terms of several easy-to-obtain engine parameters, e.g., compression ratio, displacement, bore and stroke. Engine friction is described in terms of parameters obtained by fitting available fuel measurements on several diesel and spark-ignition engines. The engine models developed are shown to conform closely to experimental fuel consumption and motored friction data. A model of the energy use of "near-CV" hybrid vehicles with different storage mechanism is created, based on simple algebraic description of the components. With powertrain downsizing and hybridization, a "near-CV" hybrid vehicle can obtain a factor of approximately two in overall fuel efficiency (mpg) improvement, without considering reductions in the vehicle load.
The composite supply chain efficiency model: A case study of the Sishen-Saldanha supply chain
Directory of Open Access Journals (Sweden)
Leila L. Goedhals-Gerber
2016-01-01
As South Africa strives to be a major force in global markets, it is essential that South African supply chains achieve and maintain a competitive advantage. One approach to achieving this is to ensure that South African supply chains maximise their levels of efficiency. Consequently, the efficiency levels of South Africa’s supply chains must be evaluated. The objective of this article is to propose a model that can assist South African industries in becoming internationally competitive by providing them with a tool for evaluating their levels of efficiency both as individual firms and as components in an overall supply chain. The Composite Supply Chain Efficiency Model (CSCEM) was developed to measure supply chain efficiency across supply chains using variables identified as problem areas experienced by South African supply chains. The CSCEM is tested in this article using the Sishen-Saldanha iron ore supply chain as a case study. The results indicate that all three links or nodes along the Sishen-Saldanha iron ore supply chain performed well. The average efficiency of the rail leg was 97.34%, while the average efficiencies of the mine and the port were 97% and 95.44%, respectively. The results also show that the CSCEM can be used by South African firms to measure their levels of supply chain efficiency. This article concludes with the benefits of the CSCEM.
FASTSim: A Model to Estimate Vehicle Efficiency, Cost and Performance
Energy Technology Data Exchange (ETDEWEB)
Brooker, A.; Gonder, J.; Wang, L.; Wood, E.; Lopp, S.; Ramroth, L.
2015-05-04
The Future Automotive Systems Technology Simulator (FASTSim) is a high-level advanced vehicle powertrain systems analysis tool supported by the U.S. Department of Energy’s Vehicle Technologies Office. FASTSim provides a quick and simple approach to compare powertrains and estimate the impact of technology improvements on light- and heavy-duty vehicle efficiency, performance, cost, and battery life over real-world drive cycles. FASTSim’s calculation framework and balance among detail, accuracy, and speed enable it to simulate thousands of driven miles in minutes. The key components and vehicle outputs have been validated by comparing the model outputs to test data for many different vehicles to provide confidence in the results. A graphical user interface makes FASTSim easy and efficient to use. FASTSim is freely available for download from the National Renewable Energy Laboratory’s website (see www.nrel.gov/fastsim).
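FASTSim itself models components in far more detail, but the backward-facing tractive-energy calculation at the heart of such drive-cycle tools can be sketched as follows (all parameter values below are illustrative assumptions, not FASTSim defaults):

```python
def cycle_energy_kwh(speeds_mps, dt_s=1.0, mass_kg=1500.0,
                     cd_a=0.6, crr=0.009, eta=0.9, regen=0.6):
    """Backward-facing tractive-energy estimate over a speed-vs-time drive
    cycle: inertial + aerodynamic + rolling forces give wheel power, scaled
    by drivetrain efficiency when driving and a regen fraction when braking."""
    rho = 1.2   # air density, kg/m^3
    g = 9.81    # gravity, m/s^2
    e_j = 0.0
    for v0, v1 in zip(speeds_mps, speeds_mps[1:]):
        v = 0.5 * (v0 + v1)
        accel = (v1 - v0) / dt_s
        force = mass_kg * accel + 0.5 * rho * cd_a * v**2 + crr * mass_kg * g
        power = force * v
        e_j += (power / eta if power > 0 else power * regen) * dt_s
    return e_j / 3.6e6  # joules -> kWh
```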
Lippa, Richard A
2013-02-01
Do self-identified bisexual men and women actually show bisexual patterns of sexual attraction and interest? To answer this question, I studied bisexual men's and women's sexual attraction to photographed male and female "swimsuit models" that varied in attractiveness. Participants (663 college students and gay pride attendees, including 14 self-identified bisexual men and 17 self-identified bisexual women) rated their degree of sexual attraction to 34 male and 34 female swimsuit models. Participants' viewing times to models were unobtrusively assessed. Results showed that bisexual men and women showed bisexual patterns of attraction and viewing times to photo models, which strongly distinguished them from same-sex heterosexual and homosexual participants. In contrast to other groups, which showed evidence of greater male than female category specificity, bisexual men and women did not differ in category specificity. Results suggest that there are subsets of men and women who display truly bisexual patterns of sexual attraction and interest.
An efficient Cellular Potts Model algorithm that forbids cell fragmentation
Durand, Marc; Guesnet, Etienne
2016-11-01
The Cellular Potts Model (CPM) is a lattice-based modeling technique which is widely used for simulating cellular patterns such as foams or biological tissues. Despite its realism and generality, the standard Monte Carlo algorithm used in the scientific literature to evolve this model preserves the connectivity of cells over a limited range of simulation temperatures only. We present a new algorithm in which cell fragmentation is forbidden at all simulation temperatures. This significantly enhances the realism of the simulated patterns. It also increases the computational efficiency compared with the standard CPM algorithm, even at the same simulation temperature, thanks to the time saved by not attempting unrealistic moves. Moreover, our algorithm restores the detailed balance equation, ensuring that the long-term state is independent of the chosen acceptance rate and chosen path in the temperature space.
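The forbidden-fragmentation rule amounts to rejecting any copy attempt that would disconnect a cell. A global breadth-first check makes the idea concrete (the paper's algorithm uses a cheaper local test; this sketch and its function name are mine):

```python
from collections import deque

def stays_connected(cell_sites, removed):
    """Return True if the cell remains 4-connected after losing `removed`.
    A CPM copy attempt that takes `removed` away from the cell would be
    rejected when this returns False, preventing fragmentation."""
    rest = set(cell_sites) - {removed}
    if not rest:
        return False
    start = next(iter(rest))
    seen = {start}
    q = deque([start])
    while q:
        x, y = q.popleft()
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb in rest and nb not in seen:
                seen.add(nb)
                q.append(nb)
    return len(seen) == len(rest)
```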
Semi-parametric regression: Efficiency gains from modeling the nonparametric part
Yu, Kyusang; Park, Byeong U; 10.3150/10-BEJ296
2011-01-01
It is widely acknowledged that structured nonparametric modeling that circumvents the curse of dimensionality is important in nonparametric estimation. In this paper we show that the same holds for semi-parametric estimation. We argue that estimation of the parametric component of a semi-parametric model can be improved substantially when more structure is put into the nonparametric part of the model. We illustrate this for the partially linear model, and investigate efficiency gains when the nonparametric part of the model has an additive structure. We present the semi-parametric Fisher information bound for estimating the parametric part of the partially linear additive model and provide semi-parametric efficient estimators, for which we use a smooth backfitting technique to deal with the additive nonparametric part. We also present the finite-sample performances of the proposed estimators and analyze Boston housing data as an illustration.
Efficient Monte Carlo Methods for the Potts Model at Low Temperature
Molkaraie, Mehdi
2015-01-01
We consider the problem of estimating the partition function of the ferromagnetic $q$-state Potts model. We propose an importance sampling algorithm in the dual of the normal factor graph representing the model. The algorithm can efficiently compute an estimate of the partition function in a wide range of parameters; in particular, when the coupling parameters of the model are strong (corresponding to models at low temperature) or when the model contains a mixture of strong and weak couplings. We show that, in this setting, the proposed algorithm significantly outperforms the state-of-the-art methods in the primal and in the dual domains.
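For intuition, the quantity being estimated can be computed exactly on toy systems by brute force, which is what importance sampling replaces once the state space becomes too large (a minimal sketch; the function name and sign convention are mine):

```python
import itertools
import math

def potts_partition_exact(edges, n_sites, q, J):
    """Exact partition function of a ferromagnetic q-state Potts model,
    Z = sum over all configurations of exp(J * number of aligned edges).
    Brute force over q**n_sites states: tiny systems only."""
    z = 0.0
    for s in itertools.product(range(q), repeat=n_sites):
        aligned = sum(1 for i, j in edges if s[i] == s[j])
        z += math.exp(J * aligned)
    return z
```

Large J (low temperature) concentrates the sum on the few aligned configurations, which is exactly the regime where naive sampling fails and the dual-domain importance sampler above is designed to work.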
An Application on Merton Model in the Non-efficient Market
Feng, Yanan; Xiao, Qingxian
Merton Model is one of the famous credit risk models. This model presumes that the only source of uncertainty in equity prices is the firm’s net asset value. But this market condition holds only when the market is efficient, which has often been ignored in modern research. Moreover, the original Merton Model is based on the assumptions that, in the event of default, absolute priority holds, renegotiation is not permitted and liquidation of the firm is costless; also, in the Merton Model and most of its modified versions the default boundary is assumed to be constant, which does not correspond to reality. These assumptions can limit the predictive power of the model. In this paper, we have made some extensions to some of the assumptions underlying the original model. The model is virtually a modification of Merton’s model. In a non-efficient market, we use stock data to analyze this model. The result shows that the modified model can evaluate credit risk well in a non-efficient market.
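For reference, the textbook Merton default probability that the paper modifies can be written in a few lines (the standard formula under lognormal firm-value dynamics; the paper's extensions to the default boundary are not reproduced here):

```python
from math import log, sqrt
from statistics import NormalDist

def merton_default_probability(V, D, mu, sigma, T):
    """Probability that firm value falls below the debt face value D at
    horizon T, given current value V, drift mu and volatility sigma, under
    Merton's lognormal dynamics: P(default) = N(-d2)."""
    d2 = (log(V / D) + (mu - 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return NormalDist().cdf(-d2)
```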
Models for electricity market efficiency and bidding strategy analysis
Niu, Hui
This dissertation studies models for the analysis of market efficiency and bidding behaviors of market participants in electricity markets. Simulation models are developed to estimate how transmission and operational constraints affect the competitive benchmark and market prices based on submitted bids. This research contributes to the literature in three aspects. First, transmission and operational constraints, which have been neglected in most empirical literature, are considered in the competitive benchmark estimation model. Second, the effects of operational and transmission constraints on market prices are estimated through two models based on the submitted bids of market participants. Third, these models are applied to analyze the efficiency of the Electric Reliability Council Of Texas (ERCOT) real-time energy market by simulating its operations for the time period from January 2002 to April 2003. The characteristics and available information for the ERCOT market are considered. In electricity markets, electric firms compete through both spot market bidding and bilateral contract trading. A linear asymmetric supply function equilibrium (SFE) model with transmission constraints is proposed in this dissertation to analyze the bidding strategies with forward contracts. The research contributes to the literature in several aspects. First, we combine forward contracts, transmission constraints, and multi-period strategy (an obligation for firms to bid consistently over an extended time horizon such as a day or an hour) into the linear asymmetric supply function equilibrium framework. As an ex-ante model, it can provide qualitative insights into firms' behaviors. Second, the bidding strategies related to Transmission Congestion Rights (TCRs) are discussed by interpreting TCRs as linear combination of forwards. Third, the model is a general one in the sense that there is no limitation on the number of firms and scale of the transmission network, which can have
A Computationally Efficient Aggregation Optimization Strategy of Model Predictive Control
Institute of Scientific and Technical Information of China (English)
(no author listed)
2002-01-01
Model Predictive Control (MPC) is a popular technique and has been successfully used in various industrial applications. However, a major drawback of MPC, its formidable on-line computational effort, limits its applicability to relatively slow and/or small processes with a moderate number of inputs. This paper develops an aggregation optimization strategy for MPC that can improve the computational efficiency of MPC. For the regulation problem, an input-decaying aggregation optimization algorithm is presented that aggregates all the original optimized variables on the control horizon into a decaying sequence anchored at the current control action.
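The aggregation idea, replacing the full horizon of free control moves with a single variable generating a decaying input sequence, can be sketched on a scalar system (a hypothetical illustration with a grid search standing in for the QP solver; all names and values are mine):

```python
import numpy as np

def aggregated_mpc_cost(a, b, x0, horizon, c, lam):
    """Quadratic regulation cost of x+ = a*x + b*u with the aggregated
    input sequence u_t = c * lam**t: one decision variable (c) instead
    of `horizon` free moves."""
    x, cost = x0, 0.0
    for t in range(horizon):
        u = c * lam**t
        cost += x * x + 0.1 * u * u
        x = a * x + b * u
    return cost + x * x  # terminal penalty

def best_aggregated_input(a, b, x0, horizon=20, lam=0.6):
    """Pick the aggregated variable c by 1-D grid search (a real MPC
    would solve this tiny problem analytically or with a QP)."""
    grid = np.linspace(-5.0, 5.0, 2001)
    costs = [aggregated_mpc_cost(a, b, x0, horizon, c, lam) for c in grid]
    return float(grid[int(np.argmin(costs))])
```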
Institute of Scientific and Technical Information of China (English)
2004-01-01
Story: Show Time! The whole class presents the story "Under the Sea". Everyone is so excited and happy. Both Leo and Kathy show their parents the characters of the play. "Who’s he?" asks Kathy’s mom. "He’s the prince." Kathy replies. "Who’s she?" asks Leo’s dad. "She’s the queen." Leo replies with a smile.
Institute of Scientific and Technical Information of China (English)
YIN PUMIN
2010-01-01
The State Administration of Radio, Film and Television (SARFT), China's media watchdog, issued a new set of rules on June 9 that strictly regulate TV match-making shows, which have been sweeping the country's primetime programming. "Improper social and love values such as money worship should not be presented in these shows. Humiliation, verbal attacks and sex-implied vulgar content are not allowed," the new rules said.
A Traction Control Strategy with an Efficiency Model in a Distributed Driving Electric Vehicle
Lin, Cheng
2014-01-01
Both active safety and fuel economy are important issues for vehicles. This paper focuses on a traction control strategy with an efficiency model in a distributed driving electric vehicle. In emergency situations, a sliding mode control algorithm was employed to achieve antislip control by keeping the wheels' slip ratios below 20%. For general longitudinal driving cases, an efficiency model aiming at improving the fuel economy was built through an offline optimization stream within the two-dimensional design space composed of the acceleration pedal signal and the vehicle speed. The sliding mode control strategy for the joint roads and the efficiency model for the typical drive cycles were simulated. Simulation results show that the proposed driving control approach has the potential to be applied to different road surfaces. It keeps the wheels' slip ratios within the stable zone and improves the fuel economy on the premise of tracking the driver's intention. PMID:25197697
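The 20% slip-ratio threshold can be made concrete with a toy traction clamp (the paper's sliding mode law is not reproduced here; the proportional torque reduction below is an assumed stand-in, and all names are mine):

```python
def slip_ratio(wheel_speed_mps, vehicle_speed_mps):
    """Longitudinal slip ratio during traction: (v_wheel - v_vehicle) / v_wheel."""
    if wheel_speed_mps <= 0:
        return 0.0
    return (wheel_speed_mps - vehicle_speed_mps) / wheel_speed_mps

def torque_command(requested_nm, wheel_speed_mps, vehicle_speed_mps,
                   slip_limit=0.20, k_reduce=0.5):
    """Crude antislip clamp: pass the requested torque through while slip is
    below the limit, and scale it down proportionally to the excess slip
    otherwise (a stand-in for the sliding mode controller)."""
    s = slip_ratio(wheel_speed_mps, vehicle_speed_mps)
    if s <= slip_limit:
        return requested_nm
    factor = 1.0 - k_reduce * (s - slip_limit) / (1.0 - slip_limit)
    return requested_nm * max(0.0, factor)
```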
Glencross, Brett D; Blyth, David; Bourne, Nicholas; Cheers, Susan; Irvin, Simon; Wade, Nicholas M
2017-02-01
This study examined the effect of including different dietary proportions of starch, protein and lipid, in diets balanced for digestible energy, on the utilisation efficiencies of dietary energy by barramundi (Lates calcarifer). Each diet was fed at one of three ration levels (satiety, 80 % of initial satiety and 60 % of initial satiety) for a 42-d period. Fish performance measures (weight gain, feed intake and feed conversion ratio) were all affected by dietary energy source. The efficiency of energy utilisation was significantly reduced in fish fed the starch diet relative to the other diets, but there were no significant effects between the other macronutrients. This reduction in efficiency of utilisation was derived from a multifactorial change in both protein and lipid utilisation. The rate of protein utilisation deteriorated as the amount of starch included in the diet increased. Lipid utilisation was most dramatically affected by inclusion levels of lipid in the diet, with diets low in lipid producing component lipid utilisation rates well above 1·3, which indicates substantial lipid synthesis from other energy sources. However, the energetic cost of lipid gain was as low as 0·65 kJ per kJ of lipid deposited, indicating that barramundi very efficiently store energy in the form of lipid, particularly from dietary starch energy. This study defines how the utilisation efficiency of dietary digestible energy by barramundi is influenced by the macronutrient source providing that energy, and that the inclusion of starch causes problems with protein utilisation in this species.
Directory of Open Access Journals (Sweden)
Comlan Hervé Sossou
2014-01-01
This paper analyses farmers’ credit allocation behaviors and their effects on technical efficiency. Data were collected from 476 farmers using a multistage sampling procedure. The stochastic frontier truncated-normal model with conditional mean is used to assess the effects of allocation schemes on technical efficiency. A Tobit model reveals the impact of farmers’ sociodemographic characteristics on efficiency scores. Results reveal that farm revenue (about 2,262,566 Fcfa on average) is positively correlated with land acreage, quantity of labour, and costs of fertilizers and insecticides. Farmers’ behaviors follow six schemes, which are categorized in two allocation contexts: out-farm and in-farm allocations. The model shows that only scheme (e) positively impacts technical efficiency. This scheme refers to the decision to invest credit to purchase better-quality pesticides, herbicides, fertilizers, and so forth. The positive effect of scheme (c) may be significant under conditions of improvement in farmers’ education level. Thus, scheme (e) is a better investment for all farmers, but the effect of credit allocation to buy agricultural materials is positive only for educated farmers. Efficiency scores are reduced by household size and the gender of the household head. Therefore a household with more than 10 members and a woman as head is likely not to be technically efficient.
A language for easy and efficient modeling of Turing machines
Institute of Scientific and Technical Information of China (English)
Pinaki Chakraborty
2007-01-01
A Turing Machine Description Language (TMDL) is developed for easy and efficient modeling of Turing machines. TMDL supports formal symbolic representation of Turing machines. The grammar for the language is also provided. Then a fast single-pass compiler is developed for TMDL. The scope of code optimization in the compiler is examined. An interpreter is used to simulate the exact behavior of the compiled Turing machines. A dynamically allocated and resizable array is used to simulate the infinite tape of a Turing machine. The procedure for simulating composite Turing machines is also explained. In this paper, two sample Turing machines have been designed in TMDL and their simulations are discussed. TMDL can be extended to model the different variations of the standard Turing machine.
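The interpreter behavior described, a transition table driving a head over a tape that grows on demand, can be sketched in a few lines (TMDL's actual syntax is not shown in the abstract, so the dictionary encoding below is an assumption):

```python
def run_turing(transitions, tape, state="q0", accept="qa", max_steps=10_000):
    """Minimal Turing machine interpreter: `transitions` maps
    (state, symbol) -> (new_state, written_symbol, move), move in {"L","R"};
    the tape is a resizable list that grows on demand (blank = "_").
    Returns the halting state and the tape contents (blanks stripped)."""
    tape = list(tape)
    head = 0
    for _ in range(max_steps):
        if state == accept or (state, tape[head]) not in transitions:
            return state, "".join(tape).strip("_")
        state, tape[head], move = transitions[(state, tape[head])]
        head += 1 if move == "R" else -1
        if head == len(tape):
            tape.append("_")       # grow to the right
        elif head < 0:
            tape.insert(0, "_")    # grow to the left
            head = 0
    return state, "".join(tape).strip("_")
```

As a usage example, a bit-flipping machine that inverts its input and accepts on the first blank: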
Efficiency of model selection criteria in flood frequency analysis
Calenda, G.; Volpi, E.
2009-04-01
The estimation of high flood quantiles requires the extrapolation of the probability distributions far beyond the usual sample length, involving high estimation uncertainties. The choice of the probability law, traditionally based on hypothesis testing, is critical to this point. In this study the efficiency of different model selection criteria, seldom applied in flood frequency analysis, is investigated. The efficiency of each criterion in identifying the probability distribution of the hydrological extremes is evaluated by numerical simulations for different parent distributions, coefficients of variation and skewness, and sample sizes. The compared model selection procedures are the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), the Anderson-Darling Criterion (ADC) recently discussed by Di Baldassarre et al. (2008) and the Sample Quantile Criterion (SQC), recently proposed by the authors (Calenda et al., 2009). The SQC is based on the principle of maximising the probability density of the elements of the sample that are considered relevant to the problem, and takes into account both the accuracy and the uncertainty of the estimate. Since the stress is mainly on extreme events, the SQC involves upper-tail probabilities, where the effect of the model assumption is more critical. The proposed index is equal to the sum of logarithms of the inverse of the sample probability density of the observed quantiles. The definition of this index is based on the principle that the more centred the sample value is with respect to its density distribution (accuracy of the estimate) and the less spread this distribution is (uncertainty of the estimate), the greater the probability density of the sample quantile. Thus, lower values of the index indicate a better performance of the distribution law. This criterion can operate the selection of the optimum distribution among competing probability models that are estimated using different samples. The
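The SQC index as described, the sum of logarithms of the inverse sample probability density at the observed quantiles, is a one-liner once those densities are available (a sketch; evaluating the densities from each fitted candidate distribution is the real work and is not shown):

```python
import math

def sample_quantile_criterion(densities):
    """SQC index: sum of log(1/f) over the model densities f evaluated at
    the observed upper-tail quantiles. Lower values indicate a better-
    performing distribution law (higher density at the observations)."""
    return sum(math.log(1.0 / f) for f in densities)
```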
Fruit fly optimization algorithm based high efficiency and low NOx combustion modeling for a boiler
Institute of Scientific and Technical Information of China (English)
ZHANG Zhenxing; SUN Baomin; XIN Jing
2014-01-01
In order to control NOx emissions and enhance boiler efficiency in coal-fired boilers, thermal operating data from an ultra-supercritical 1,000 MW unit boiler were analyzed. On the basis of the support vector regression machine (SVM), the fruit fly optimization algorithm (FOA) was applied to optimize the penalty parameter C, kernel parameter g and insensitive loss coefficient of the model. Then, the FOA-SVM model was established to predict the NOx emissions and boiler efficiency, and the performance of this model was compared with that of the GA-SVM model optimized by a genetic algorithm (GA). The results show the FOA-SVM model has better prediction accuracy and generalization capability; the maximum average relative error on the testing set, found in the NOx emissions model, is only 3.59%. The above models can predict the NOx emissions and boiler efficiency accurately, so they are very suitable for on-line modeling and prediction, which provides a good model foundation for further optimization of the operation of large-capacity boilers.
Efficient Vaccine Distribution Based on a Hybrid Compartmental Model.
Directory of Open Access Journals (Sweden)
Zhiwen Yu
To effectively and efficiently reduce the morbidity and mortality that may be caused by outbreaks of emerging infectious diseases, it is very important for public health agencies to make informed decisions for controlling the spread of the disease. Such decisions must incorporate various kinds of intervention strategies, such as vaccinations, school closures and border restrictions. Recently, researchers have paid increased attention to searching for effective vaccine distribution strategies for reducing the effects of pandemic outbreaks when resources are limited. Most of the existing research work has been focused on how to design an effective age-structured epidemic model and to select a suitable vaccine distribution strategy to prevent the propagation of an infectious virus. Models that evaluate age structure effects are common, but models that additionally evaluate geographical effects are less common. In this paper, we propose a new SEIR (susceptible-exposed-infectious-recovered) model, named the hybrid SEIR-V model (HSEIR-V), which considers not only the dynamics of infection prevalence in several age-specific host populations, but also seeks to characterize the dynamics by which a virus spreads in various geographic districts. Several vaccination strategies, such as different kinds of vaccine coverage, different vaccine release times and different vaccine deployment methods, are incorporated into the HSEIR-V compartmental model. We also design four hybrid vaccination distribution strategies (based on population size, contact pattern matrix, infection rate and infectious risk) for controlling the spread of viral infections. Based on data from the 2009-2010 H1N1 influenza epidemic, we evaluate the effectiveness of our proposed HSEIR-V model and study the effects of different types of human behaviour in responding to epidemics.
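The SEIR core that HSEIR-V extends with age and district structure integrates four coupled compartment rates; a forward-Euler sketch (the parameter values used in the example are illustrative, not fitted to the H1N1 data):

```python
def simulate_seir(beta, sigma, gamma, s0, e0, i0, r0, days, dt=0.1):
    """Forward-Euler integration of the classic SEIR equations:
    S' = -beta*S*I/N, E' = beta*S*I/N - sigma*E,
    I' = sigma*E - gamma*I, R' = gamma*I."""
    s, e, i, r = s0, e0, i0, r0
    n = s + e + i + r  # total population, conserved by the dynamics
    for _ in range(int(days / dt)):
        new_exposed = beta * s * i / n
        ds = -new_exposed
        de = new_exposed - sigma * e
        di = sigma * e - gamma * i
        dr = gamma * i
        s += ds * dt
        e += de * dt
        i += di * dt
        r += dr * dt
    return s, e, i, r
```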
Institute of Scientific and Technical Information of China (English)
ZHOU Jiannong; PENG Aidong; CUI Jing; HUANG Shuiqing
2012-01-01
Purpose: This paper aims to compare and rank the allocative efficiency of information resources in rural areas by taking 13 rural areas in Jiangsu Province, China as the research sample. Design/methodology/approach: We designed input and output indicators for the allocation of rural information resources and conducted a quantitative evaluation of the allocative efficiency of rural information resources based on the cross-efficiency model in combination with the classical CCR model in data envelopment analysis (DEA). Findings: The cross-efficiency DEA model can be used for our research with the objective to evaluate quantitatively and objectively whether the allocation of information resources in various rural areas is reasonable and whether the output is commensurate with the input. Research limitations: We had to give up using some indicators because of limited data availability. There is a need to further improve the cross-efficiency DEA model because it cannot identify the specific factors influencing the efficiency of decision-making units (DMUs). Practical implications: The evaluation results will help us understand the present allocative efficiency levels of information resources in various rural areas so as to provide a decision-making basis for the formulation of policies aimed at promoting the circulation of information resources in rural areas. Originality/value: Little or no research has been published about the allocative efficiency of rural information resources. The value of this research lies in its focus on studying rural informatization from the perspective of allocative efficiency of rural information resources and in the application of the cross-efficiency DEA model to evaluate allocative efficiency of rural information resources as well.
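Full CCR and cross-efficiency DEA require solving a linear program per decision-making unit, but in the single-input, single-output special case the CCR score reduces to a normalized output/input ratio, which conveys the idea (a deliberate simplification, not the multi-indicator model used in the paper):

```python
def ccr_efficiency_single(inputs, outputs):
    """CCR efficiency scores for the single-input, single-output special
    case: each DMU's output/input ratio normalized by the best performer,
    so the most productive unit scores 1.0. The multi-indicator case (and
    cross-efficiency) needs a linear-programming solver instead."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]
```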
Efficiently parallelized modeling of tightly focused, large bandwidth laser pulses
Dumont, Joey; Fillion-Gourdeau, François; Lefebvre, Catherine; Gagnon, Denis; MacLean, Steve
2017-02-01
The Stratton-Chu integral representation of electromagnetic fields is used to study the spatio-temporal properties of large bandwidth laser pulses focused by high numerical aperture mirrors. We review the formal aspects of the derivation of diffraction integrals from the Stratton-Chu representation and discuss the use of the Hadamard finite part in the derivation of the physical optics approximation. By analyzing the formulation we show that, for the specific case of a parabolic mirror, the integrands involved in the description of the reflected field near the focal spot do not possess the strong oscillations characteristic of diffraction integrals. Consequently, the integrals can be evaluated with simple and efficient quadrature methods rather than with specialized, more costly approaches. We report on the development of an efficiently parallelized algorithm that evaluates the Stratton-Chu diffraction integrals for incident fields of arbitrary temporal and spatial dependence. This method has the advantage that its input is the unfocused field coming from the laser chain, which is experimentally known with high accuracy. We use our method to show that the reflection of a linearly polarized Gaussian beam of femtosecond duration off a high numerical aperture parabolic mirror induces ellipticity in the dominant field components and generates strong longitudinal components. We also estimate that future high-power laser facilities may reach intensities of 10^24 W cm^-2.
Plot showing ATLAS limits on Standard Model Higgs production in the mass range 110-150 GeV
ATLAS Collaboration
2011-01-01
The combined upper limit on the Standard Model Higgs boson production cross section divided by the Standard Model expectation as a function of mH is indicated by the solid line. This is a 95% CL limit using the CLs method in the low mass range. The dotted line shows the median expected limit in the absence of a signal and the green and yellow bands reflect the corresponding 68% and 95% expected
Plot showing ATLAS limits on Standard Model Higgs production in the mass range 100-600 GeV
ATLAS Collaboration
2011-01-01
The combined upper limit on the Standard Model Higgs boson production cross section divided by the Standard Model expectation as a function of mH is indicated by the solid line. This is a 95% CL limit using the CLs method in the entire mass range. The dotted line shows the median expected limit in the absence of a signal and the green and yellow bands reflect the corresponding 68% and 95% expected
Directory of Open Access Journals (Sweden)
Xu H
2013-12-01
Huae Xu,1,2 Zhibo Hou,3 Hao Zhang,4 Hui Kong,2 Xiaolin Li,4 Hong Wang,2 Weiping Xie2. Affiliations: 1Department of Pharmacy, 2Department of Respiratory Medicine, The First Affiliated Hospital of Nanjing Medical University, Nanjing, People's Republic of China; 3First Department of Respiratory Medicine, Nanjing Chest Hospital, Nanjing, People's Republic of China; 4Department of Geriatric Gastroenterology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, People's Republic of China. Abstract: Earlier studies have demonstrated the promising antitumor effect of tetrandrine (Tet) against a series of cancers. However, the poor solubility of Tet limits its application, while its hydrophobicity makes Tet a potential model drug for nanodelivery systems. We report on a simple way of preparing drug-loaded nanoparticles formed by amphiphilic poly(N-vinylpyrrolidone)-block-poly(ε-caprolactone) (PVP-b-PCL) copolymers with Tet as a model drug. The mean diameters of Tet-loaded PVP-b-PCL nanoparticles (Tet-NPs) were between 110 nm and 125 nm with a negative zeta potential slightly below 0 mV. Tet was incorporated into PVP-b-PCL nanoparticles with high loading efficiency. Different feeding ratios showed different influences on the sizes, zeta potentials, and drug loading efficiencies of Tet-NPs. An in vitro release study shows the sustained release pattern of Tet-NPs. It is shown that the uptake of Tet-NPs is mainly mediated by the endocytosis of nanoparticles, which is more efficient than the filtration of free Tet. Further experiments including fluorescence-activated cell sorting and Western blotting indicated that this Trojan strategy of delivering Tet in PVP-b-PCL nanoparticles via endocytosis leads to enhanced induction of apoptosis in the non-small cell lung cancer A549 cell line; enhanced apoptosis is achieved by inhibiting the expression of the anti-apoptotic Bcl-2 and Bcl-xL proteins. Moreover, Tet-NPs more efficiently inhibit the ability of cell migration and
Kai, Bo; Li, Runze; Zou, Hui
2011-02-01
The complexity of semiparametric models poses new challenges to statistical inference and model selection that frequently arise from real applications. In this work, we propose new estimation and variable selection procedures for the semiparametric varying-coefficient partially linear model. We first study quantile regression estimates for the nonparametric varying-coefficient functions and the parametric regression coefficients. To achieve desirable efficiency properties, we further develop a semiparametric composite quantile regression procedure. We establish the asymptotic normality of the proposed estimators for both the parametric and nonparametric parts and show that the estimators achieve the best convergence rate. Moreover, we show that the proposed method is much more efficient than the least-squares-based method for many non-normal errors and that it only loses a small amount of efficiency for normal errors. In addition, it is shown that the loss in efficiency is at most 11.1% for estimating varying coefficient functions and is no greater than 13.6% for estimating parametric components. To achieve sparsity with high-dimensional covariates, we propose adaptive penalization methods for variable selection in the semiparametric varying-coefficient partially linear model and prove that the methods possess the oracle property. Extensive Monte Carlo simulation studies are conducted to examine the finite-sample performance of the proposed procedures. Finally, we apply the new methods to analyze the plasma beta-carotene level data.
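The composite quantile regression idea in the abstract above can be illustrated in a stripped-down form. The sketch below is a hypothetical toy: a single linear slope shared across quantile levels, estimated by grid search, with quantile-specific intercepts profiled out as residual quantiles. It is not the authors' semiparametric varying-coefficient procedure, and all data and parameter values are invented.

```python
import numpy as np

def check_loss(u, tau):
    """Quantile check loss rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0))

def cqr_slope(x, y, taus, grid):
    """Composite quantile regression for y = b*x + intercepts:
    for each candidate slope b, the optimal tau-specific intercept
    is the tau-quantile of the residuals y - b*x; pick the slope
    minimizing the summed check loss over all quantile levels."""
    best_b, best_loss = None, np.inf
    for b in grid:
        r = y - b * x
        loss = sum(check_loss(r - np.quantile(r, t), t).sum() for t in taus)
        if loss < best_loss:
            best_b, best_loss = b, loss
    return best_b

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 2.0 * x + rng.standard_t(df=3, size=500)   # heavy-tailed noise
taus = [0.1, 0.3, 0.5, 0.7, 0.9]
b_hat = cqr_slope(x, y, taus, np.linspace(1.0, 3.0, 201))
```

With heavy-tailed (Student-t) noise, the composite loss pools information across quantiles, which is the intuition behind the efficiency gains over least squares reported above.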
Emulation Modeling with Bayesian Networks for Efficient Decision Support
Fienen, M. N.; Masterson, J.; Plant, N. G.; Gutierrez, B. T.; Thieler, E. R.
2012-12-01
Bayesian decision networks (BDN) have long been used to provide decision support in systems that require explicit consideration of uncertainty; applications range from ecology to medical diagnostics and terrorism threat assessments. Until recently, however, few studies have applied BDNs to the study of groundwater systems. BDNs are particularly useful for representing real-world system variability by synthesizing a range of hydrogeologic situations within a single simulation. Because BDN output is cast in terms of probability—an output desired by decision makers—they explicitly incorporate the uncertainty of a system. BDNs can thus serve as a more efficient alternative to other uncertainty characterization methods such as computationally demanding Monte Carlo analyses and other methods restricted to linear model analyses. We present a unique application of a BDN to a groundwater modeling analysis of the hydrologic response of Assateague Island, Maryland to sea-level rise. Using both input and output variables of the modeled groundwater response to different sea-level rise (SLR) scenarios, the BDN predicts the probability of changes in the depth to fresh water, which exerts an important influence on physical and biological island evolution. Input variables included barrier-island width, maximum island elevation, and aquifer recharge. The variability of these inputs and their corresponding outputs is sampled along cross sections in a single model run to form an ensemble of input/output pairs. The BDN outputs, which are the posterior distributions of water-table conditions for the sea-level rise scenarios, are evaluated through error analysis and cross-validation to assess both fit to training data and predictive power. The key benefit of using BDNs in groundwater modeling analyses is that they provide a method for distilling complex model results into predictions with associated uncertainty, which is useful to decision makers. Future efforts incorporate
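The core mechanic of a Bayesian network — discrete nodes with conditional probability tables, queried by enumeration — can be sketched in a few lines. The node names, discretizations, and probabilities below are invented for illustration; they are not the study's trained network.

```python
# Hypothetical discretization: all names and probabilities are illustrative.
p_width = {"narrow": 0.4, "wide": 0.6}        # prior on barrier-island width
p_recharge = {"low": 0.5, "high": 0.5}        # prior on aquifer recharge
# P(water-table rise | width, recharge) — a conditional probability table
p_rise = {("narrow", "low"): 0.2, ("narrow", "high"): 0.6,
          ("wide", "low"): 0.4, ("wide", "high"): 0.8}

def posterior_rise():
    """Marginal probability of a water-table rise, by full enumeration."""
    return sum(p_width[w] * p_recharge[r] * p_rise[(w, r)]
               for w in p_width for r in p_recharge)

def posterior_width_given_rise():
    """P(width | rise) via Bayes' rule over the same enumeration."""
    joint = {w: p_width[w] * sum(p_recharge[r] * p_rise[(w, r)]
                                 for r in p_recharge)
             for w in p_width}
    z = sum(joint.values())
    return {w: v / z for w, v in joint.items()}
```

The same enumeration pattern scales (up to a point) to any discrete network, which is why BDN queries are cheap compared with rerunning a groundwater model under Monte Carlo sampling.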
Directory of Open Access Journals (Sweden)
Faramarz eFaghihi
2015-03-01
Full Text Available Information processing in the hippocampus begins by transferring spiking activity of the Entorhinal Cortex (EC) into the Dentate Gyrus (DG). Activity patterns in the EC are separated by the DG such that it plays an important role in hippocampal functions including memory. The structural and physiological parameters of these neural networks enable the hippocampus to be efficient in encoding the large number of inputs that animals receive and process in their lifetime. The neural encoding capacity of the DG depends on its single-neuron encoding and pattern separation efficiency. In this study, encoding by the DG is modelled such that single-neuron and pattern separation efficiency are measured using simulations over different parameter values. For this purpose, a probabilistic model of single-neuron efficiency is presented to study the role of structural and physiological parameters. The known numbers of neurons in the EC and the DG are used to construct a neural network based on the electrophysiological features of DG neurons. Separated inputs, represented as activated EC neurons with different firing probabilities, are presented to the DG. For different connectivity rates between the EC and DG, pattern separation efficiency of the DG is measured. The results show that in the absence of feedback inhibition on the DG neurons, the DG demonstrates low separation efficiency and high firing frequency. Feedback inhibition can increase separation efficiency while resulting in very low single-neuron encoding efficiency in the DG and very low firing frequency of neurons in the DG (sparse spiking). This work presents a mechanistic explanation for experimental observations in the hippocampus, in combination with theoretical measures. Moreover, the model predicts a critical role for impaired inhibitory neurons in schizophrenia, where deficiency in pattern separation of the DG has been observed.
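A minimal sketch of the modelling idea: project sparse EC activity through random connectivity into a larger DG layer and impose winner-take-all feedback inhibition, so only the k most strongly driven DG cells fire. All sizes, connectivity rates, and patterns are illustrative, not the paper's calibrated parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ec, n_dg, k_active = 200, 1000, 50   # illustrative sizes, not biological counts
connectivity = 0.1                      # EC -> DG connection probability

W = (rng.random((n_dg, n_ec)) < connectivity).astype(float)

def dg_response(ec_pattern, k):
    """Feedforward drive plus winner-take-all feedback inhibition:
    only the k most strongly driven DG cells fire (sparse coding)."""
    drive = W @ ec_pattern
    out = np.zeros(n_dg)
    out[np.argsort(drive)[-k:]] = 1.0
    return out

def overlap(a, b):
    """Fraction of shared active units between two binary patterns."""
    return (a @ b) / max(a.sum(), b.sum())

# Two EC input patterns sharing most of their active neurons
p1 = np.zeros(n_ec); p1[:40] = 1.0
p2 = np.zeros(n_ec); p2[10:50] = 1.0
o_in = overlap(p1, p2)                      # 30/40 = 0.75 at the input
o_out = overlap(dg_response(p1, k_active), dg_response(p2, k_active))
```

Comparing `o_in` with `o_out` for different `connectivity` and `k` values mirrors the paper's parameter sweeps; removing the top-k constraint corresponds to the no-inhibition regime with dense, overlapping DG firing.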
Cheng, Guang
2014-02-01
We consider efficient estimation of the Euclidean parameters in a generalized partially linear additive models for longitudinal/clustered data when multiple covariates need to be modeled nonparametrically, and propose an estimation procedure based on a spline approximation of the nonparametric part of the model and the generalized estimating equations (GEE). Although the model in consideration is natural and useful in many practical applications, the literature on this model is very limited because of challenges in dealing with dependent data for nonparametric additive models. We show that the proposed estimators are consistent and asymptotically normal even if the covariance structure is misspecified. An explicit consistent estimate of the asymptotic variance is also provided. Moreover, we derive the semiparametric efficiency score and information bound under general moment conditions. By showing that our estimators achieve the semiparametric information bound, we effectively establish their efficiency in a stronger sense than what is typically considered for GEE. The derivation of our asymptotic results relies heavily on the empirical processes tools that we develop for the longitudinal/clustered data. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2014 ISI/BS.
Integer Representations towards Efficient Counting in the Bit Probe Model
DEFF Research Database (Denmark)
Brodal, Gerth Stølting; Greve, Mark; Pandey, Vineet
2011-01-01
Abstract We consider the problem of representing numbers in close to optimal space and supporting increment, decrement, addition and subtraction operations efficiently. We study the problem in the bit probe model and analyse the number of bits read and written to perform the operations, both...... in the worst-case and in the average-case. A counter is space-optimal if it represents any number in the range [0,...,2^n − 1] using exactly n bits. We provide a space-optimal counter which supports increment and decrement operations by reading at most n − 1 bits and writing at most 3 bits in the worst...... of the counter as the ratio between L + 1 and 2^n. We present various representations that achieve different trade-offs between the read and write complexities and the efficiency. We also give another representation of integers that uses n + O(log n) bits to represent integers in the range [0,...,2^n − 1...
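For contrast with the space-optimal counters above, here is the baseline they improve on: a plain binary counter whose increment must scan the carry chain, reading and writing up to n bits in the worst case (the abstract's representation cuts the worst-case reads to n − 1 and writes to 3).

```python
def increment(bits):
    """Increment an LSB-first binary counter in place, returning
    (bits_read, bits_written). A standard counter scans the carry
    chain, so it may read and write up to n bits in the worst case —
    the cost the bit-probe representations above improve on."""
    reads = writes = 0
    for i in range(len(bits)):
        reads += 1
        if bits[i] == 0:
            bits[i] = 1
            writes += 1
            return reads, writes
        bits[i] = 0          # propagate the carry
        writes += 1
    return reads, writes      # wraps around from 2**n - 1 to 0

def value(bits):
    return sum(b << i for i, b in enumerate(bits))

n = 4
bits = [0] * n
worst_reads = 0
for expected in range(1, 2 ** n):
    r, w = increment(bits)
    worst_reads = max(worst_reads, r)
    assert value(bits) == expected   # counter stays arithmetically correct
```

Counting from 0 to 2^n − 1, the transition 0111 → 1000 forces the full n-bit scan, so `worst_reads` hits n.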
Building an Efficient Model for Afterburn Energy Release
Energy Technology Data Exchange (ETDEWEB)
Alves, S; Kuhl, A; Najjar, F; Tringe, J; McMichael, L; Glascoe, L
2012-02-03
Many explosives will release additional energy after detonation as the detonation products mix with the ambient environment. This additional energy release, referred to as afterburn, is due to combustion of undetonated fuel with ambient oxygen. While the detonation energy release occurs on a time scale of microseconds, the afterburn energy release occurs on a time scale of milliseconds with a potentially varying energy release rate depending upon the local temperature and pressure. This afterburn energy release is not accounted for in typical equations of state, such as the Jones-Wilkins-Lee (JWL) model, used for modeling the detonation of explosives. Here we construct a straightforward and efficient approach, based on experiments and theory, to account for this additional energy release in a way that is tractable for large finite element fluid-structure problems. Barometric calorimeter experiments have been executed in both nitrogen and air environments to investigate the characteristics of afterburn for C-4 and other materials. These tests, which provide pressure time histories, along with theoretical and analytical solutions provide an engineering basis for modeling afterburn with numerical hydrocodes. It is toward this end that we have constructed a modified JWL equation of state to account for afterburn effects on the response of structures to blast. The modified equation of state includes a two phase afterburn energy release to represent variations in the energy release rate and an afterburn energy cutoff to account for partial reaction of the undetonated fuel.
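A sketch of how an afterburn term can be bolted onto a JWL equation of state: the ωE/V term grows as afterburn releases additional energy on a millisecond time scale. The constants below are illustrative placeholders loosely in the range quoted for C-4, not the calibrated values from these experiments, and the first-order release law is an assumption for the sketch.

```python
import math

# Illustrative JWL constants (GPa, relative volume); placeholders only.
A, B = 609.77, 12.95
R1, R2, omega = 4.50, 1.40, 0.25
E0 = 9.0                      # detonation energy density, GPa (= GJ/m^3)

def jwl_pressure(V, E):
    """Standard JWL equation of state p(V, E); V is relative volume,
    E is internal energy per unit initial volume."""
    return (A * (1 - omega / (R1 * V)) * math.exp(-R1 * V)
            + B * (1 - omega / (R2 * V)) * math.exp(-R2 * V)
            + omega * E / V)

def afterburn_energy(t, dE=5.0, tau=1e-3):
    """Hypothetical afterburn energy added by time t: first-order
    release with a millisecond time constant, capped at dE."""
    return dE * (1.0 - math.exp(-t / tau))

V = 2.0
p_detonation = jwl_pressure(V, E0)                      # JWL alone
p_afterburn = jwl_pressure(V, E0 + afterburn_energy(5e-3))
```

Because afterburn only enters through the energy argument, the modification leaves the early-time (microsecond) detonation response untouched while raising late-time pressures, which is the behaviour the modified equation of state is meant to capture.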
An efficient algorithm for corona simulation with complex chemical models
Villa, Andrea; Barbieri, Luca; Gondola, Marco; Leon-Garzon, Andres R.; Malgesini, Roberto
2017-05-01
The simulation of cold plasma discharges is a leading field of applied sciences with many applications ranging from pollutant control to surface treatment. Many of these applications call for the development of novel numerical techniques to implement fully three-dimensional corona solvers that can utilize complex and physically detailed chemical databases. This is a challenging task since it multiplies the difficulties inherent to a three-dimensional approach by the complexity of databases comprising tens of chemical species and hundreds of reactions. In this paper a novel approach, capable of reducing significantly the computational burden, is developed. The proposed method is based on a proper time stepping algorithm capable of decomposing the original problem into simpler ones: each of them has then been tackled with either finite element, finite volume or ordinary differential equations solvers. This last solver deals with the chemical model and its efficient implementation is one of the main contributions of this work.
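The time-stepping decomposition described above can be illustrated on a 1-D caricature: each step first advances transport (an upwind finite-volume-style step) and then integrates the chemistry as a separate ODE, here a single linear sink integrated exactly. Grid sizes and rate constants are invented; a real corona solver couples many species and reactions in the chemistry sub-step.

```python
import numpy as np

# Operator-splitting sketch: 1-D advection of a charged-species density
# plus a stiff linear "chemistry" sink, solved in two sub-steps per time
# step, mirroring the decomposition into simpler sub-problems.
nx, dx, dt = 100, 0.01, 0.005
u_speed, k_loss = 1.0, 50.0            # drift speed, reaction rate
c = np.zeros(nx)
c[10:20] = 1.0                          # initial charge pulse

def transport(c):
    """First-order upwind advection step (CFL = u*dt/dx = 0.5)."""
    out = c.copy()
    out[1:] -= u_speed * dt / dx * (c[1:] - c[:-1])
    out[0] = 0.0                        # no inflow at the left boundary
    return out

def chemistry(c):
    """Exact integration of dc/dt = -k*c over one split step; in a full
    model this is where the stiff ODE solver for the chemistry sits."""
    return c * np.exp(-k_loss * dt)

mass0 = c.sum()
for _ in range(40):
    c = chemistry(transport(c))
```

The split lets each sub-problem use the solver best suited to it (finite volume for transport, a dedicated ODE integrator for chemistry), which is the efficiency lever the paper develops for databases with hundreds of reactions.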
Efficient algorithms for multiscale modeling in porous media
Wheeler, Mary F.
2010-09-26
We describe multiscale mortar mixed finite element discretizations for second-order elliptic and nonlinear parabolic equations modeling Darcy flow in porous media. The continuity of flux is imposed via a mortar finite element space on a coarse grid scale, while the equations in the coarse elements (or subdomains) are discretized on a fine grid scale. We discuss the construction of multiscale mortar basis and extend this concept to nonlinear interface operators. We present a multiscale preconditioning strategy to minimize the computational cost associated with construction of the multiscale mortar basis. We also discuss the use of appropriate quadrature rules and approximation spaces to reduce the saddle point system to a cell-centered pressure scheme. In particular, we focus on multiscale mortar multipoint flux approximation method for general hexahedral grids and full tensor permeabilities. Numerical results are presented to verify the accuracy and efficiency of these approaches. © 2010 John Wiley & Sons, Ltd.
Modelling and Fuzzy Control of an Efficient Swimming Ionic Polymer-metal Composite Actuated Robot
Directory of Open Access Journals (Sweden)
Qi Shen
2013-10-01
Full Text Available In this study, analytical techniques and fuzzy logic methods are applied to the dynamic modelling and efficient swimming control of a biomimetic robotic fish, which is actuated by an ionic polymer-metal composite (IPMC). A physics-based model for the biomimetic robotic fish is proposed. The model incorporates both the hydrodynamics of the IPMC tail and the actuation dynamics of the IPMC. The comparison of the results of the simulations and experiments shows the feasibility of the dynamic model. By using this model, we found that the harmonic control of the actuation frequency and voltage amplitude of the IPMC is a principal mechanism through which the robotic fish can obtain high thrust efficiency while swimming. The fuzzy control method, which is based on the knowledge of the IPMC fish's dynamic behaviour, successfully utilized this principal mechanism. By comparing the thrust performance of the robotic fish with other control methods via simulation, we established that the fuzzy controller was able to achieve faster acceleration than a conventional PID controller. The thrust efficiency during a steady state was superior to that with conventional control methods. We also found that when using the fuzzy control method the robotic fish can always swim near a higher actuation frequency, which yields both the desired speed and high thrust efficiency.
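A minimal fuzzy-rule sketch of the control idea: map a normalized speed error to a voltage-amplitude adjustment through triangular memberships and weighted-average defuzzification. The membership breakpoints, rule consequents, and voltage steps are hypothetical, not the paper's tuned controller.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b on support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_voltage_step(speed_error):
    """Map speed error (desired - actual, normalized to [-1, 1]) to a
    voltage-amplitude adjustment via three rules and weighted-average
    defuzzification (singleton consequents)."""
    mu = {
        "neg":  tri(speed_error, -2.0, -1.0, 0.0),   # too fast -> lower voltage
        "zero": tri(speed_error, -1.0,  0.0, 1.0),
        "pos":  tri(speed_error,  0.0,  1.0, 2.0),   # too slow -> raise voltage
    }
    dv = {"neg": -0.5, "zero": 0.0, "pos": 0.5}      # volts, illustrative
    num = sum(mu[k] * dv[k] for k in mu)
    den = sum(mu.values())
    return num / den if den > 0 else 0.0
```

A full controller of this kind would also take the actuation frequency as an input and interpolate over a larger rule base, which is how the paper's controller exploits the frequency/amplitude mechanism.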
Directory of Open Access Journals (Sweden)
Fuyou Guo
2016-12-01
Full Text Available Eco-efficiency is an important sustainable development and circular economy construct that conceptualizes the relationship between industrial output, resource utilization, and environmental impacts. This paper conducts an eco-efficiency analysis for basin industrial systems using the decomposition model approach. Using data on 10 cities in China’s Songhua River basin, we illustrate the evolutionary characteristics and influencing factors of industrial systems’ eco-efficiency. The results indicate that cities in upstream and midstream areas focus on improving resource efficiency, whereas cities in downstream areas focus on improving terminal control efficiency. The results also show that the government plays an increasingly important role in promoting eco-efficiency and that significant differences in the influencing factors exist among the upstream area, midstream area, and downstream area. Our results offer deeper insights into the eco-efficiency of industrial systems and give further hints on how policy-making can help achieve sustainable development, balancing between economic activities and environmental protection.
The Efficiency of Split Panel Designs in an Analysis of Variance Model
Wang, Wei-Guo; Liu, Hai-Jun
2016-01-01
We consider split panel design efficiency in analysis of variance models, that is, the determination of the optimal proportion of cross-section series in all samples, so as to minimize the variances of the best linear unbiased estimators of linear combinations of parameters. An orthogonal matrix is constructed to obtain a manageable expression of the variances. On this basis, we derive a theorem for analyzing split panel design efficiency irrespective of interest and budget parameters. Additionally, the efficiency of an estimator based on the split panel relative to an estimator based on a pure panel or a pure cross-section is presented. The analysis shows that the gains from a split panel can be quite substantial. We further consider the efficiency of split panel design, given a budget, and transform it into a constrained nonlinear integer programming problem. Specifically, an efficient algorithm is designed to solve this constrained nonlinear integer program. Moreover, we combine one-at-a-time designs and factorial designs to illustrate the algorithm's efficiency with an empirical example concerning monthly consumer expenditure on food in 1985, in the Netherlands, and the efficient ranges of the algorithm parameters are given to ensure a good solution. PMID:27163447
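The budget-constrained design step can be illustrated by brute-force enumeration over integer designs. The cost and variance formulas below are stand-ins chosen for clarity; the paper's actual objective and its efficient algorithm are more involved.

```python
import itertools

# Hypothetical cost/variance model for choosing how many panel units (p)
# and cross-section units (c) to sample under a budget; the variance
# formula is a stand-in, not the paper's estimator.
COST_PANEL, COST_CROSS, BUDGET = 5.0, 2.0, 100.0

def variance(p, c):
    """Illustrative variance of the target estimator: here a panel unit
    is worth two cross-section units of information."""
    if p == 0 and c == 0:
        return float("inf")
    return 1.0 / (2.0 * p + c)

def best_design(max_units=60):
    """Enumerate all feasible integer designs and keep the minimizer."""
    feasible = [(p, c) for p, c in itertools.product(range(max_units), repeat=2)
                if COST_PANEL * p + COST_CROSS * c <= BUDGET]
    return min(feasible, key=lambda pc: variance(*pc))

p_opt, c_opt = best_design()
```

With these made-up numbers a cross-section unit delivers more information per unit cost (0.5 vs. 0.4), so enumeration spends the whole budget on cross-sections; changing the cost ratio flips the design toward the panel, which is the trade-off the constrained program formalizes.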
Blümel, Marcus; Guschlbauer, Christoph; Daun-Gruhn, Silvia; Hooper, Scott L; Büschges, Ansgar
2012-11-01
Models built using mean data can represent only a very small percentage, or none, of the population being modeled, and produce different activity than any member of it. Overcoming this "averaging" pitfall requires measuring, in single individuals in single experiments, all of the system's defining characteristics. We have developed protocols that allow all the parameters in the curves used in typical Hill-type models (passive and active force-length, series elasticity, force-activation, force-velocity) to be determined from experiments on individual stick insect muscles (Blümel et al. 2012a). Mean data fail to represent the population well only when the population shows large variation in its defining characteristics. We therefore used these protocols to measure extensor muscle defining parameters in multiple animals. Across-animal variability in these parameters can be very large, ranging from 1.3- to 17-fold. This large variation is consistent with earlier data in which extensor muscle responses to identical motor neuron driving showed large animal-to-animal variability (Hooper et al. 2006), and suggests that accurate modeling of extensor muscles requires modeling individual-by-individual. These complete characterizations of individual muscles also allowed us to test for parameter correlations. Two parameter pairs significantly co-varied, suggesting that a simpler model could reproduce muscle responses equally well.
Yousuf, Peerzada Y.; Ganie, Arshid H.; Khan, Ishrat; Qureshi, Mohammad I.; Ibrahim, Mohamed M.; Sarwat, Maryam; Iqbal, Muhammad; Ahmad, Altaf
2016-01-01
Carbon (C) and nitrogen (N) are two essential elements that influence plant growth and development. The C and N metabolic pathways influence each other to affect gene expression, but little is known about which genes are regulated by interaction between C and N or the mechanisms by which the pathways interact. In the present investigation, proteome analysis of N-efficient and N-inefficient Indian mustard, grown under varied combinations of low-N, sufficient-N, ambient [CO2], and elevated [CO2] was carried out to identify proteins and the encoding genes of the interactions between C and N. Two-dimensional gel electrophoresis (2-DE) revealed 158 candidate protein spots. Among these, 72 spots were identified by matrix-assisted laser desorption ionization-time of flight/time of flight mass spectrometry (MALDI-TOF/TOF). The identified proteins are related to various molecular processes including photosynthesis, energy metabolism, protein synthesis, transport and degradation, signal transduction, nitrogen metabolism and defense to oxidative, water and heat stresses. Identification of proteins like PII-like protein, cyclophilin, elongation factor-TU, oxygen-evolving enhancer protein and rubisco activase offers a peculiar overview of changes elicited by elevated [CO2], providing clues about how N-efficient cultivar of Indian mustard adapt to low N supply under elevated [CO2] conditions. This study provides new insights and novel information for a better understanding of adaptive responses to elevated [CO2] under N deficiency in Indian mustard. PMID:27524987
Time efficient 3-D electromagnetic modeling on massively parallel computers
Energy Technology Data Exchange (ETDEWEB)
Alumbaugh, D.L.; Newman, G.A.
1995-08-01
A numerical modeling algorithm has been developed to simulate the electromagnetic response of a three-dimensional earth to a dipole source for frequencies ranging from 100 Hz to 100 MHz. The numerical problem is formulated in terms of a frequency-domain modified vector Helmholtz equation for the scattered electric fields. The resulting differential equation is approximated using a staggered finite difference grid, which results in a linear system of equations for which the matrix is sparse and complex symmetric. The system of equations is solved using a preconditioned quasi-minimum-residual method. Dirichlet boundary conditions are employed at the edges of the mesh by setting the tangential electric fields equal to zero. At frequencies less than 1 MHz, normal grid stretching is employed to mitigate unwanted reflections off the grid boundaries. For frequencies greater than this, absorbing boundary conditions must be employed by making the stretching parameters of the modified vector Helmholtz equation complex, which introduces loss at the boundaries. To allow for faster calculation of realistic models, the original serial version of the code has been modified to run on a massively parallel architecture. This modification involves three distinct tasks: (1) mapping the finite difference stencil to a processor stencil, which allows for the necessary information to be exchanged between processors that contain adjacent nodes in the model, (2) determining the most efficient method to input the model, which is accomplished by dividing the input into "global" and "local" data and then reading the two sets in differently, and (3) deciding how to output the data, which is an inherently nonparallel process.
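A 1-D caricature of the discretization above: a scattered-field Helmholtz equation with a lossy (complex) wavenumber and Dirichlet boundaries yields a complex symmetric tridiagonal system, solved here with a dense direct solver in place of the preconditioned quasi-minimum-residual iteration. All parameter values are illustrative.

```python
import numpy as np

# d2E/dx2 + k^2 E = -s with E = 0 at both ends (tangential field zeroed),
# discretized by central differences; the complex k^2 plays the role of
# the lossy stretching that damps boundary reflections.
n, dx = 200, 1.0
k2 = (0.05 + 0.002j) ** 2                # complex wavenumber^2, illustrative

main = np.full(n, -2.0 / dx**2 + k2, dtype=complex)
off = np.full(n - 1, 1.0 / dx**2, dtype=complex)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)   # complex symmetric

s = np.zeros(n, dtype=complex)
s[n // 2] = 1.0                          # dipole-like point source

E = np.linalg.solve(A, -s)               # stand-in for the QMR iteration
residual = np.linalg.norm(A @ E + s)
```

Note that `A` equals its plain transpose but not its conjugate transpose; that complex symmetry (rather than Hermitian symmetry) is exactly the structure that makes quasi-minimum-residual methods attractive for the 3-D problem.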
Directory of Open Access Journals (Sweden)
A.-J. Tinet
2014-07-01
Full Text Available In agricultural management, good timing of operations such as irrigation or sowing is essential to enhance both economic and environmental performance. To improve such timing, predictive software is of particular interest. An optimal decision-making software would require process modules which provide robust, efficient and accurate predictions while being based on a minimal amount of easily available parameters. This paper develops a coupled soil–atmosphere model based on the Ross fast solution for Richards' equation, heat transfer and a detailed surface energy balance. In this paper, the developed model, FHAVeT (Fast Hydro Atmosphere Vegetation Temperature), has been evaluated in bare soil conditions against the coupled model based on the De Vries description, TEC. The two models were compared for different climatic and soil conditions. Moreover, the model allows the use of various pedotransfer functions. The FHAVeT model showed better performance with regard to mass balance. In order to allow a more precise comparison, 6 time windows were selected. The study demonstrated that the FHAVeT behaviour is quite similar to the TEC behaviour except under some dry conditions. An evaluation of day detection with regard to moisture thresholds is performed.
Efficient Analysis of Systems Biology Markup Language Models of Cellular Populations Using Arrays.
Watanabe, Leandro; Myers, Chris J
2016-08-19
The Systems Biology Markup Language (SBML) has been widely used for modeling biological systems. Although SBML has been successful in representing a wide variety of biochemical models, the core standard lacks the structure for representing large complex regular systems in a standard way, such as whole-cell and cellular population models. These models require a large number of variables to represent certain aspects of these types of models, such as the chromosome in the whole-cell model and the many identical cell models in a cellular population. While SBML core is not designed to handle these types of models efficiently, the proposed SBML arrays package can represent such regular structures more easily. However, in order to take full advantage of the package, analysis needs to be aware of the arrays structure. When expanding the array constructs within a model, some of the advantages of using arrays are lost. This paper describes a more efficient way to simulate arrayed models. To illustrate the proposed method, this paper uses a population of repressilator and genetic toggle switch circuits as examples. Results show that there are memory benefits using this approach with a modest cost in runtime.
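The payoff of array-aware simulation can be sketched without SBML: keep a population of identical genetic toggle switches as one vectorized state array rather than expanding each cell into its own copy of the model. The rate constants, population size, and forward-Euler integrator below are illustrative choices, not the paper's benchmark setup.

```python
import numpy as np

# One state array for the whole population: u and v are the two mutually
# repressing proteins in every cell, updated in lockstep.
n_cells, n_steps, dt = 100, 2000, 0.01
alpha, n_hill, deg = 10.0, 2.0, 1.0      # illustrative toggle-switch rates

rng = np.random.default_rng(42)
u = rng.random(n_cells)      # repressor 1 level in each cell
v = rng.random(n_cells)      # repressor 2 level in each cell

for _ in range(n_steps):
    du = alpha / (1 + v**n_hill) - deg * u
    dv = alpha / (1 + u**n_hill) - deg * v
    u, v = u + dt * du, v + dt * dv

# Each cell settles into one of the two stable states (u high or v high).
high_u = (u > v).sum()
```

Expanding the arrays package constructs would instead create `n_cells` separate variable sets and rate rules; the vectorized form does the same arithmetic with a handful of array operations, which is the efficiency the arrays-aware simulator exploits.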
Wang, Lily; Jia, Peilin; Wolfinger, Russell D; Chen, Xi; Grayson, Britney L; Aune, Thomas M; Zhao, Zhongming
2011-03-01
In genome-wide association studies (GWAS) of complex diseases, genetic variants having real but weak associations often fail to be detected at the stringent genome-wide significance level. Pathway analysis, which tests disease association with combined association signals from a group of variants in the same pathway, has become increasingly popular. However, because of the complexities in genetic data and the large sample sizes in typical GWAS, pathway analysis remains to be challenging. We propose a new statistical model for pathway analysis of GWAS. This model includes a fixed effects component that models mean disease association for a group of genes, and a random effects component that models how each gene's association with disease varies about the gene group mean, thus belongs to the class of mixed effects models. The proposed model is computationally efficient and uses only summary statistics. In addition, it corrects for the presence of overlapping genes and linkage disequilibrium (LD). Via simulated and real GWAS data, we showed our model improved power over currently available pathway analysis methods while preserving type I error rate. Furthermore, using the WTCCC Type 1 Diabetes (T1D) dataset, we demonstrated mixed model analysis identified meaningful biological processes that agreed well with previous reports on T1D. Therefore, the proposed methodology provides an efficient statistical modeling framework for systems analysis of GWAS. The software code for mixed models analysis is freely available at http://biostat.mc.vanderbilt.edu/LilyWang.
DEFF Research Database (Denmark)
Brinch, Karoline Sidelmann; Sandberg, Anne; Baudoux, Pierre
2009-01-01
was maintained (maximal relative efficacy [E(max)], 1.0- to 1.3-log reduction in CFU) even though efficacy was inferior to that of extracellular killing (E(max), >4.5-log CFU reduction). Animal studies included a novel use of the mouse peritonitis model, exploiting extra- and intracellular differentiation assays...... concentration. These findings stress the importance of performing studies of extra- and intracellular activity since these features cannot be predicted from traditional MIC and killing kinetic studies. Application of both the THP-1 and the mouse peritonitis models showed that the in vitro results were similar...
Replaceable Substructures for Efficient Part-Based Modeling
Liu, Han
2015-05-01
A popular mode of shape synthesis involves mixing and matching parts from different objects to form a coherent whole. The key challenge is to efficiently synthesize shape variations that are plausible, both locally and globally. A major obstacle is to assemble the objects with local consistency, i.e., all the connections between parts are valid with no dangling open connections. The combinatorial complexity of this problem limits existing methods in geometric and/or topological variations of the synthesized models. In this work, we introduce replaceable substructures as arrangements of parts that can be interchanged while ensuring boundary consistency. The consistency information is extracted from part labels and connections in the original source models. We present a polynomial time algorithm that discovers such substructures by working on a dual of the original shape graph that encodes inter-part connectivity. We demonstrate the algorithm on a range of test examples producing plausible shape variations, both from a geometric and from a topological viewpoint. © 2015 The Author(s) Computer Graphics Forum © 2015 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.
Efficient computerized model for dynamic analysis of energy conversion systems
Hughes, R. D.; Lansing, F. L.; Khan, I. R.
1983-02-01
In searching for the optimum parameters that minimize the total life cycle cost of an energy conversion system, various combinations of components are examined and the resulting system performance and associated economics are studied. The systems performance and economics simulation computer program (SPECS) was developed to fill this need. The program simulates the fluid flow, thermal, and electrical characteristics of a system of components on a quasi-steady state basis for a variety of energy conversion systems. A unique approach is used in which the set of characteristic equations is solved by the Newton-Raphson technique. This approach eliminates the tedious iterative loops which are found in comparable programs such as TRNSYS or SOLTES-1. Several efficient features were also incorporated such as the centralized control and energy management scheme, and analogous treatment of energy flow in electrical and mechanical components, and the modeling of components of similar fundamental characteristics using generic subroutines. Initial tests indicate that this model can be used effectively with a relatively small number of time steps and low computer cost.
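The Newton-Raphson device described above can be sketched on a hypothetical two-component loop: a pump-curve/pipe-loss pressure balance and an energy balance solved simultaneously as one system, rather than with nested iteration loops. The component models and coefficients are invented for illustration; they are not SPECS component equations.

```python
import numpy as np

# Hypothetical loop: pump curve dp = a - b*q^2 must match pipe loss
# dp = r*q^2, while an energy balance fixes the outlet temperature.
a_pump, b_pump, r_pipe = 50.0, 2.0, 3.0     # illustrative coefficients
m_cp, q_heat, t_in = 4.0, 120.0, 20.0       # flow heat capacity, duty, inlet T

def residuals(x):
    q, t_out = x
    return np.array([
        (a_pump - b_pump * q**2) - r_pipe * q**2,     # pressure balance
        m_cp * q * (t_out - t_in) - q_heat,           # energy balance
    ])

def jacobian(x):
    q, t_out = x
    return np.array([
        [-2.0 * (b_pump + r_pipe) * q, 0.0],
        [m_cp * (t_out - t_in),        m_cp * q],
    ])

x = np.array([1.0, 30.0])                    # initial guess (q, t_out)
for _ in range(20):                          # Newton-Raphson iterations
    x = x - np.linalg.solve(jacobian(x), residuals(x))

q_sol, t_sol = x                             # q -> sqrt(10), t_out -> ~29.49
```

Because all characteristic equations sit in one residual vector, a single Newton loop converges for the whole network, which is the mechanism the abstract credits for avoiding the tedious iterative loops of comparable programs.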
Efficient and Robust Feature Model for Visual Tracking
Institute of Scientific and Technical Information of China (English)
WANG Lu; ZHUO Qing; WANG Wenyuan
2009-01-01
Long duration visual tracking of targets is quite challenging for computer vision, because the environments may be cluttered and distracting. Illumination variations and partial occlusions are two main difficulties in real world visual tracking. Existing methods based on holistic appearance information cannot solve these problems effectively. This paper proposes a feature-based dynamic tracking approach that can track objects with partial occlusions and varying illumination. The method represents the tracked object by an invariant feature model. During tracking, a new pyramid matching algorithm is used to match the object template with the observations to determine the observation likelihood. This matching is quite efficient in calculation, and the spatial constraints among these features are also embedded. Instead of complicated optimization methods, the whole model is incorporated into a Bayesian filtering framework. The experiments on real world sequences demonstrate that the method can track objects accurately and robustly even with illumination variations and partial occlusions.
Tamagnone, Michele
2014-01-01
An analytical circuit model able to predict the input impedance of reconfigurable graphene plasmonic dipoles is presented. A suitable definition of plasmonic characteristic impedance, employing natural currents, is used for consistent modeling of the antenna-load connection in the circuit. In its purely analytical form, the model shows good agreement with full-wave simulations, and explains the remarkable tuning properties of graphene antennas. Furthermore, using a single full-wave simulation and scaling laws, additional parasitic elements can be determined for a vast parametric space, leading to very accurate modeling. Finally, we also show that the modeling approach allows fair estimation of radiation efficiency as well. The approach also applies to thin plasmonic antennas realized using noble metals or semiconductors.
Energy Technology Data Exchange (ETDEWEB)
Tusscher, K H W J Ten; Panfilov, A V [Department of Theoretical Biology, Utrecht University, Padualaan 8, 3584 CH Utrecht (Netherlands)
2006-12-07
In this paper, we formulate a model for human ventricular cells that is efficient enough for whole organ arrhythmia simulations yet detailed enough to capture the effects of cell level processes such as current blocks and channelopathies. The model is obtained from our detailed human ventricular cell model by using mathematical techniques to reduce the number of variables from 19 to nine. We carefully compare our full and reduced model at the single cell, cable and 2D tissue level and show that the reduced model has a very similar behaviour. Importantly, the new model correctly produces the effects of current blocks and channelopathies on AP and spiral wave behaviour, processes at the core of current day arrhythmia research. The new model is well over four times more efficient than the full model. We conclude that the new model can be used for efficient simulations of the effects of current changes on arrhythmias in the human heart.
Efficient modelling of droplet dynamics on complex surfaces
Karapetsas, George; Chamakos, Nikolaos T.; Papathanasiou, Athanasios G.
2016-03-01
This work investigates the dynamics of droplet interaction with smooth or structured solid surfaces using a novel sharp-interface scheme which allows the efficient modelling of multiple dynamic contact lines. The liquid-gas and liquid-solid interfaces are treated in a unified context and the dynamic contact angle emerges simply due to the combined action of the disjoining and capillary pressure, and viscous stresses, without the need of an explicit boundary condition or any requirement for the predefinition of the number and position of the contact lines. The latter, as is shown, renders the model able to handle interfacial flows with topological changes, e.g. in the case of an impinging droplet on a structured surface. It is then possible to predict, depending on the impact velocity, whether the droplet will fully or partially impregnate the structures of the solid, or will result in a ‘fakir’, i.e. suspended, state. In the case of a droplet sliding on an inclined substrate, we also demonstrate the built-in capability of our model to provide a prediction for either static or dynamic contact angle hysteresis. We focus our study on hydrophobic surfaces and examine the effect of the geometrical characteristics of the solid surface. It is shown that the presence of air inclusions trapped in the micro-structure of a hydrophobic substrate (Cassie-Baxter state) results in a decrease of contact angle hysteresis and an increase of the droplet migration velocity, in agreement with experimental observations for super-hydrophobic surfaces. Moreover, we perform 3D simulations which are in line with the 2D ones regarding the droplet mobility and also indicate that the contact angle hysteresis may be significantly affected by the directionality of the structures with respect to the droplet motion.
Spatial extrapolation of light use efficiency model parameters to predict gross primary production
Directory of Open Access Journals (Sweden)
Karsten Schulz
2011-12-01
To capture the spatial and temporal variability of the gross primary production as a key component of the global carbon cycle, the light use efficiency modeling approach in combination with remote sensing data has been shown to be well suited. Typically, the model parameters, such as the maximum light use efficiency, are either set to a universal constant or to land class dependent values stored in look-up tables. In this study, we employ the machine learning technique support vector regression to explicitly relate the model parameters of a light use efficiency model calibrated at several FLUXNET sites to site-specific characteristics obtained from meteorological measurements, ecological estimations and remote sensing data. A feature selection algorithm extracts the relevant site characteristics in a cross-validation and leads to an individual set of characteristic attributes for each parameter. With this set of attributes, the model parameters can be estimated at sites where a parameter calibration is not possible due to the absence of eddy covariance flux measurement data. This will finally allow a spatially continuous model application. The performance of the spatial extrapolation scheme is evaluated with a cross-validation approach, which shows the methodology to be well suited to recapture the variability of gross primary production across the study sites.
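The paper uses support vector regression for the attribute-to-parameter mapping; as a runnable sketch of the general workflow (fit calibrated per-site parameters against site attributes, then predict the parameter at an uncalibrated site), the example below substitutes ordinary least squares for SVR. All attribute and parameter values are made up:

```python
import numpy as np

# Hypothetical site attributes (rows: calibrated FLUXNET sites; columns:
# e.g. mean temperature, leaf area index) and per-site calibrated maximum
# light use efficiency values.
attrs = np.array([[8.0, 2.1], [12.0, 3.5], [15.0, 4.0], [10.0, 2.8]])
eps_max = np.array([1.1, 1.6, 1.9, 1.35])

# Fit a linear attribute -> parameter mapping (least squares here as a
# simple stand-in for the support vector regression used in the paper).
X = np.column_stack([attrs, np.ones(len(attrs))])  # add intercept column
coef, *_ = np.linalg.lstsq(X, eps_max, rcond=None)

# Extrapolate the parameter to a site lacking eddy covariance data.
new_site = np.array([11.0, 3.0, 1.0])  # attributes + intercept term
eps_pred = float(new_site @ coef)
```

Replacing the least-squares fit with a kernel regressor and wrapping the whole pipeline in cross-validated feature selection would bring the sketch closer to the method the abstract describes.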
Modeling and analyzing of nuclear power peer review on enterprise operational efficiency
Institute of Scientific and Technical Information of China (English)
[None listed]
2006-01-01
Based on the practice and analysis of peer review in nuclear power plants, models of the Pareto improvement and the governance entropy decrease of peer review are set up and discussed. The results show that peer review in nuclear power is actually a process of Pareto improvement and of governance entropy decrease, and accordingly a process that improves enterprise operational efficiency.
Directory of Open Access Journals (Sweden)
Ma Zheshu
2009-01-01
Indirectly or externally fired gas turbines (IFGT or EFGT) are a novel technology under development for small and medium scale combined power and heat supply, in combination with micro gas turbine technologies. They exploit the waste heat from the turbine in a recuperative process and allow the burning of biomass or 'dirty' fuel by employing a high temperature heat exchanger, so that the combustion gases do not pass through the turbine. In this paper, by assuming that all fluid friction losses in the compressor and turbine are quantified by a corresponding isentropic efficiency and all global irreversibilities in the high temperature heat exchanger are taken into account by an effective efficiency, a one-dimensional model including power output and cycle efficiency formulation is derived for a class of real IFGT cycles. To illustrate and analyze the effect of operational parameters on IFGT efficiency, detailed numerical analysis and figures are produced. The results, summarized in figures, show that IFGT cycles are most efficient in low compression ratio ranges (3.0-6.0) and fit low power output circumstances integrating with micro gas turbine technology. The model derived can be used to analyze and forecast the performance of real IFGT configurations.
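A minimal sketch of such a cycle-efficiency formulation, using a simple (non-recuperated) Brayton cycle with isentropic component efficiencies as a stand-in for the paper's one-dimensional IFGT model; all parameter values below are assumed, not taken from the paper:

```python
# Simple Brayton-type cycle efficiency with isentropic component
# efficiencies -- an illustrative stand-in for the paper's 1-D IFGT model.
gamma = 1.4                  # specific heat ratio of air
eta_c, eta_t = 0.80, 0.85    # compressor / turbine isentropic efficiencies
t = 4.0                      # turbine-inlet to compressor-inlet temperature ratio

def cycle_efficiency(r):
    """Thermal efficiency of a simple Brayton cycle at pressure ratio r."""
    x = r ** ((gamma - 1.0) / gamma)    # isentropic temperature ratio
    w_c = (x - 1.0) / eta_c             # compressor work (per cp*T1)
    w_t = eta_t * t * (1.0 - 1.0 / x)   # turbine work (per cp*T1)
    q_in = t - 1.0 - w_c                # heat added (per cp*T1)
    return (w_t - w_c) / q_in

# Evaluate across the low compression ratio range highlighted in the abstract.
effs = {r: cycle_efficiency(r) for r in (3.0, 4.5, 6.0)}
```

Adding a recuperator effectiveness term to `q_in` would move the sketch closer to the IFGT configuration studied in the paper, where the heat exchanger irreversibilities dominate.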
An efficient visual saliency detection model based on Ripplet transform
Indian Academy of Sciences (India)
A DIANA ANDRUSHIA; R THANGARAJAN
2017-05-01
Even though there have been great advancements in computer vision tasks, the development of human visual attention models is still not well investigated. In day-to-day life, one can find ample applications of saliency detection in image and video processing. This paper presents an efficient visual saliency detection model based on the Ripplet transform, which aims at detecting the salient region and achieving higher Receiver Operating Characteristics (ROC). Initially the feature maps are obtained from the Ripplet transform in different scales and different directions of the image. The global and local saliency maps are computed based on the global probability density distribution and the feature distribution of local areas, and are combined together to get the final saliency map. Ripplet-transform-based visual saliency detection is the novel approach carried out in this paper. Experimental results indicate that the proposed method based on the Ripplet transform gives excellent performance in terms of precision, recall, F-measure and Mean Absolute Error (MAE) when compared with 10 state-of-the-art methods on five benchmark datasets.
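The abstract does not state the fusion rule used to combine the global and local saliency maps into the final map; a simple normalized average is assumed below purely for illustration, with tiny made-up maps:

```python
import numpy as np

def norm01(m):
    """Rescale a saliency map to [0, 1] (a common pre-fusion step)."""
    m = m.astype(float)
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng else np.zeros_like(m)

# Toy 2x2 global and local saliency maps (values invented).
global_map = np.array([[0.1, 0.9], [0.2, 0.8]])
local_map = np.array([[0.3, 0.7], [0.1, 0.6]])

# Assumed fusion rule: equal-weight average of the normalized maps.
final = 0.5 * (norm01(global_map) + norm01(local_map))
```

In practice the weighting between global and local evidence is a design choice; the paper's actual combination may differ.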
Modeling of Glass Making Processes for Improved Efficiency
Energy Technology Data Exchange (ETDEWEB)
Thomas P. Seward III
2003-03-31
The overall goal of this project was to develop a high-temperature melt properties database with sufficient reliability to allow mathematical modeling of glass melting and forming processes for improved product quality, improved efficiency and lessened environmental impact. It was initiated by the United States glass industry through the NSF Industry/University Center for Glass Research (CGR) at Alfred University [1]. Because of their important commercial value, six different types/families of glass were studied: container, float, fiberglass (E- and wool-types), low-expansion borosilicate, and color TV panel glasses. CGR member companies supplied production-quality glass from all six families upon which we measured, as a function of temperature in the molten state, density, surface tension, viscosity, electrical resistivity, infrared transmittance (to determine high temperature radiative conductivity), non-Newtonian flow behavior, and oxygen partial pressure. With CGR cost sharing, we also studied gas solubility and diffusivity in each of these glasses. Because knowledge of the compositional dependencies of melt viscosity and electrical resistivity is extremely important for glass melting furnace design and operation, these properties were studied more fully. Composition variations were statistically designed for all six types/families of glass. About 140 different glasses were then melted on a laboratory scale and their viscosity and electrical resistivity measured as a function of temperature. The measurements were completed in February 2003 and are reported on here. The next steps will be (1) to statistically analyze the compositional dependencies of viscosity and electrical resistivity and develop composition-property response surfaces, (2) to submit all the data to CGR member companies to evaluate the usefulness in their models, and (3) to publish the results in technical journals and most likely in book form.
Parallel processing for efficient 3D slope stability modelling
Marchesini, Ivan; Mergili, Martin; Alvioli, Massimiliano; Metz, Markus; Schneider-Muntau, Barbara; Rossi, Mauro; Guzzetti, Fausto
2014-05-01
We test the performance of the GIS-based, three-dimensional slope stability model r.slope.stability. The model was developed as a C- and python-based raster module of the GRASS GIS software. It considers the three-dimensional geometry of the sliding surface, adopting a modification of the model proposed by Hovland (1977), and revised and extended by Xie and co-workers (2006). Given a terrain elevation map and a set of relevant thematic layers, the model evaluates the stability of slopes for a large number of randomly selected potential slip surfaces, ellipsoidal or truncated in shape. Any single raster cell may be intersected by multiple sliding surfaces, each associated with a value of the factor of safety, FS. For each pixel, the minimum value of FS and the depth of the associated slip surface are stored. This information is used to obtain a spatial overview of the potentially unstable slopes in the study area. We test the model in the Collazzone area, Umbria, central Italy, an area known to be susceptible to landslides of different type and size. Availability of a comprehensive and detailed landslide inventory map allowed for a critical evaluation of the model results. The r.slope.stability code automatically splits the study area into a defined number of tiles, with proper overlap in order to provide the same statistical significance for the entire study area. The tiles are then processed in parallel by a given number of processors, exploiting a multi-purpose computing environment at CNR IRPI, Perugia. The map of the FS is obtained collecting the individual results, taking the minimum values on the overlapping cells. This procedure significantly reduces the processing time. We show how the gain in terms of processing time depends on the tile dimensions and on the number of cores.
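The tile-split, parallel-process and minimum-FS merge described above can be sketched as follows; the per-tile computation here is a reproducible random placeholder for the real slip-surface evaluation, and overlapping tiles are simplified to co-located grids:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def process_tile(seed, shape=(4, 6)):
    """Toy stand-in for per-tile slope stability: returns a factor-of-safety
    grid for one tile (the real model evaluates many ellipsoidal slip
    surfaces per cell and keeps each cell's minimum FS)."""
    rng = np.random.default_rng(seed)
    return rng.uniform(0.5, 3.0, shape)

def merge_min(tiles):
    """Merge tiles by keeping the minimum FS on overlapping cells,
    mirroring how r.slope.stability collects its tiled results."""
    out = np.full(tiles[0].shape, np.inf)
    for t in tiles:
        out = np.minimum(out, t)
    return out

# Process tiles in parallel, then merge.
with ThreadPoolExecutor(max_workers=4) as pool:
    tiles = list(pool.map(process_tile, range(4)))
fs_map = merge_min(tiles)
```

Because the minimum is associative, the merge order does not matter, which is what makes the tiled parallel decomposition safe.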
STEPS: efficient simulation of stochastic reaction–diffusion models in realistic morphologies
Directory of Open Access Journals (Sweden)
Hepburn Iain
2012-05-01
Background: Models of cellular molecular systems are built from components such as biochemical reactions (including interactions between ligands and membrane-bound proteins), conformational changes, and active and passive transport. A discrete, stochastic description of the kinetics is often essential to capture the behavior of the system accurately. Where spatial effects play a prominent role, the complex morphology of cells may have to be represented, along with aspects such as chemical localization and diffusion. This high level of detail makes efficiency a particularly important consideration for software that is designed to simulate such systems. Results: We describe STEPS, a stochastic reaction–diffusion simulator developed with an emphasis on simulating biochemical signaling pathways accurately and efficiently. STEPS supports all the above-mentioned features, and well-validated support for SBML allows many existing biochemical models to be imported reliably. Complex boundaries can be represented accurately in externally generated 3D tetrahedral meshes imported by STEPS. The powerful Python interface facilitates model construction and simulation control. STEPS implements the composition and rejection method, a variation of the Gillespie SSA, supporting diffusion between tetrahedral elements within an efficient search and update engine. Additional support for well-mixed conditions and for deterministic model solution is implemented. Solver accuracy is confirmed with an original and extensive validation set consisting of isolated reaction, diffusion and reaction–diffusion systems. Accuracy imposes upper and lower limits on tetrahedron sizes, which are described in detail. By comparing to Smoldyn, we show how the voxel-based approach in STEPS is often faster than particle-based methods, with increasing advantage in larger systems, and by comparing to MesoRD we show the efficiency of the STEPS implementation. Conclusion: STEPS simulates
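STEPS uses the composition-and-rejection variant of the Gillespie SSA for speed; the sketch below shows the underlying direct-method logic it accelerates (propensity-weighted reaction choice with exponential waiting times) for a hypothetical reversible reaction A <-> B, with invented rate constants:

```python
import random

def gillespie(a0, b0, k1, k2, t_end, seed=1):
    """Basic Gillespie direct method for A -> B (rate k1) and B -> A (rate k2)."""
    rng = random.Random(seed)
    a, b, t = a0, b0, 0.0
    while t < t_end:
        p1, p2 = k1 * a, k2 * b        # reaction propensities
        ptot = p1 + p2
        if ptot == 0:
            break
        t += rng.expovariate(ptot)     # exponential waiting time to next event
        if rng.random() * ptot < p1:   # choose a reaction weighted by propensity
            a, b = a - 1, b + 1
        else:
            a, b = a + 1, b - 1
    return a, b

a, b = gillespie(100, 0, 1.0, 1.0, 50.0)
```

Composition-and-rejection replaces the linear propensity search with grouped bins plus rejection sampling, so the per-event cost stays low as the number of reaction channels grows; the spatial version adds diffusion jumps between tetrahedra as extra channels.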
Efficiency and reliability enhancements in propulsion flowfield modeling
Buelow, Philip E. O.; Venkateswaran, Sankaran; Merkle, Charles L.
1993-01-01
The implementation of traditional CFD algorithms in practical propulsion related flowfields often leads to dramatic reductions in efficiency and/or robustness. The present research is directed at understanding the reasons for this deterioration and finding methods to circumvent it. Work to date has focussed on low Mach number regions, viscous dominated regions, and high grid aspect ratios. Time derivative preconditioning, improved definition of the local time stepping, and appropriate application of boundary conditions are employed to decrease the required time to obtain a solution, while maintaining accuracy. A number of cases having features typical of rocket engine flowfields are computed to demonstrate the improvement over conventional methods. These cases include laminar and turbulent high Reynolds number flat plate boundary layers, flow over a backward-facing step, a diffusion flame, and wall heat-flux calculations in a turbulent converging-diverging nozzle. Results from these cases show convergence that is virtually independent of the local Mach number and the grid aspect ratio, which translates to a convergence speed-up of up to several orders of magnitude over conventional algorithms. Current emphasis is in extending these results to three-dimensional flows with highly stretched grids.
An efficient climate model with water isotope physics: NEEMY
Hu, J.; Emile-Geay, J.
2015-12-01
This work describes the development of an isotope-enabled atmosphere-ocean global climate model, NEEMY. This is a model of intermediate complexity, which can run 100 model years in 30 hours using 33 CPUs. The atmospheric component is the SPEEDY-IER (Molteni et al. 2003; Dee et al. 2015a), which is a water isotope-enabled (with equilibrium and kinetic fractionation schemes in precipitation, evaporation and soil moisture) simplified atmospheric general circulation model, with T30 horizontal resolution and 8 vertical layers. The oceanic component is NEMO 3.4 (Madec 2008), a state-of-the-art oceanic model (~2° horizontal resolution and 31 vertical layers) with an oceanic isotope module (a passive tracer scheme). A 1000-year control run shows that NEEMY is stable and its energy is conserved. The mean state is comparable to that of CMIP3-era CGCMs, though much cheaper to run. Atmospheric teleconnections such as the NAO and PNA are simulated very well. NEEMY also simulates the oceanic meridional overturning circulation well. The tropical climate variability is weaker than observations, and the climatology exhibits a double ITCZ problem despite bias corrections. The standard deviation of the monthly mean Nino3.4 index is 0.61 K, compared to 0.91 K in observations (Reynolds et al. 2002). We document similarities and differences with a close cousin, SPEEDY-NEMO (Kucharski et al. 2015). With its fast speed and relatively complete physical processes, NEEMY is suitable for paleoclimate studies; we will present some forced simulations of the past millennium and their use in forward-modeling climate proxies, via proxy system models (PSMs, Dee et al 2015b). References Dee, S., D. Noone, N. Buenning, J. Emile-Geay, and Y. Zhou, 2015a: SPEEDY-IER: A fast atmospheric GCM with water isotope physics. J. Geophys. Res. Atmos., 120: 73-91. doi:10.1002/2014JD022194. Dee, S. G., J. Emile-Geay, M. N. Evans, Allam, A., D. M. Thompson, and E. J. Steig, 2015b: PRYSM: an open-source framework
Soto, María José; Fernández-Pascual, Mercedes; Sanjuan, Juan; Olivares, José
2002-01-01
Swarming is a form of bacterial translocation that involves cell differentiation and is characterized by a rapid and co-ordinated population migration across solid surfaces. We have isolated a Tn5 mutant of Sinorhizobium meliloti GR4 showing conditional swarming. Swarm cells from the mutant strain QS77 induced on semi-solid minimal medium in response to different signals are hyperflagellated and about twice as long as wild-type cells. Genetic and physiological characterization of the mutant strain indicates that QS77 is altered in a gene encoding a homologue of the FadD protein (long-chain fatty acyl-CoA ligase) of several microorganisms. Interestingly and similar to a less virulent Xanthomonas campestris fadD(rpfB) mutant, QS77 is impaired in establishing an association with its host plant. In trans expression of multicopy fadD restored growth on oleate, control of motility and the symbiotic phenotype of QS77, as well as acyl-CoA synthetase activity of an Escherichia coli fadD mutant. The S. meliloti QS77 strain shows a reduction in nod gene expression as well as a differential regulation of motility genes in response to environmental conditions. These data suggest that, in S. meliloti, fatty acid derivatives may act as intracellular signals controlling motility and symbiotic performance through gene expression.
Tian, Yan; Yoo, Jina H
2015-01-01
This study investigates audience responses to health-related reality TV shows in the setting of The Biggest Loser. It conceptualizes a model for audience members' parasocial interaction and identification with cast members and explores antecedents and outcomes of parasocial interaction and identification. Data analysis suggests the following direct relationships: (1) audience members' exposure to the show is positively associated with parasocial interaction, which in turn is positively associated with identification, (2) parasocial interaction is positively associated with exercise self-efficacy, whereas identification is negatively associated with exercise self-efficacy, and (3) exercise self-efficacy is positively associated with exercise behavior. Indirect effects of parasocial interaction and identification on exercise self-efficacy and exercise behavior are also significant. We discuss the theoretical and practical implications of these findings.
The efficient global primitive equation climate model SPEEDO V2.0
Directory of Open Access Journals (Sweden)
C. A. Severijns
2010-02-01
The efficient primitive-equation coupled atmosphere-ocean model SPEEDO V2.0 is presented. The model includes an interactive sea-ice and land component. SPEEDO is a global earth system model of intermediate complexity. It has a horizontal resolution of T30 (triangular truncation at wave number 30) and 8 vertical layers in the atmosphere, and a horizontal resolution of 2 degrees and 20 levels in the ocean. The parameterisations in SPEEDO are developed in such a way that it is a fast model suitable for large ensembles or long runs (of the order of 10^4 years) on a typical current workstation. The model has no flux correction. We compare the mean state and inter-annual variability of the model with observational fields of the atmosphere and ocean. In particular the atmospheric circulation, the mid-latitude patterns of variability and teleconnections from the tropics are well simulated. To show the capabilities of the model, we performed a long control run and an ensemble experiment with enhanced greenhouse gases. The long control run shows that the model is stable. CO2 doubling and future climate change scenario experiments show a climate sensitivity of 1.84 K per W m^-2, which is within the range of state-of-the-art climate models. The spatial response patterns are comparable to those of state-of-the-art, higher resolution models. However, for very high greenhouse gas concentrations the parameterisations are not valid. We conclude that the model is suitable for past, current and future climate simulations and for exploring wide parameter ranges and mechanisms of variability. However, as with any model, users should be careful when using the model beyond the range of physical realism of the parameterisations and model setup.
Energy Technology Data Exchange (ETDEWEB)
Wellstein, J.
2009-07-01
This article summarizes the results of two research projects conducted at the University of Applied Sciences in Lucerne, Switzerland and supported by the Swiss Federal Office of Energy. Following the rapid development of the heat pump market for space heating in Switzerland in the past decades, the improvement of the heat pump coefficient of performance (COP) has become a highly relevant issue. The problem is especially acute for air/water heat pumps, because the COP decreases significantly as the outdoor air temperature decreases. However, laboratory measurements show that the COP could be much higher at intermediate heat load if the thermodynamic cycle involved less irreversibility. The Lucerne researchers have identified the origins of the exergy losses. Continuous heat power modulation instead of ON/OFF control of the heat pump compressor is one way to improve the COP. Another need for optimization was found in the construction of the finned lamellar heat exchanger used as evaporator. This can significantly reduce fin icing and, consequently, the power consumption during deicing.
Directory of Open Access Journals (Sweden)
Mika F. Asaba
2013-09-01
Restless Legs Syndrome (RLS) is a prevalent but poorly understood disorder that is characterized by uncontrollable movements during sleep, resulting in sleep disturbance. Olfactory memory in Drosophila melanogaster has proven to be a useful tool for the study of cognitive deficits caused by sleep disturbances, such as those seen in RLS. A recently generated Drosophila model of RLS exhibited disturbed sleep patterns similar to those seen in humans with RLS. This research seeks to improve understanding of the relationship between cognitive functioning and sleep disturbances in a new model for RLS. Here, we tested learning and memory in wild-type and dBTBD9 mutant flies by Pavlovian olfactory conditioning, during which a shock was paired with one of two odors. Flies were then placed in a T-maze with one odor on either side, and successful associative learning was recorded when the flies chose the side with the unpaired odor. We hypothesized that due to disrupted sleep patterns, dBTBD9 mutant flies would be unable to learn the shock-odor association. However, the current study reports that the recently generated Drosophila model of RLS shows successful olfactory learning, despite disturbed sleep patterns, with learning performance matching or exceeding that of wild-type flies.
Efficient material flow in mixed model assembly lines.
Alnahhal, Mohammed; Noche, Bernd
2013-01-01
In this study, material flow from decentralized supermarkets to stations in mixed model assembly lines using tow (tugger) trains is investigated. Train routing, scheduling, and loading problems are investigated in parallel to minimize the number of trains, the variability in loading and in route lengths, and line-side inventory holding costs. The general framework for solving these problems in parallel contains analytical equations, Dynamic Programming (DP), and Mixed Integer Programming (MIP). Matlab in conjunction with LP-solve software was used to formulate the problem. An example was presented to explain the idea. Results, which were obtained in very short CPU time, showed the effect of using a time buffer among routes on the feasible space and on the optimal solution. Results also showed the effect of the objective of reducing the variability in loading on the results of routing, scheduling, and loading. Moreover, results showed the importance of considering the maximum line-side inventory and the capacity of the train at the same time in finding the optimal solution.
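As a toy stand-in for the paper's exact DP/MIP formulation, a first-fit-decreasing heuristic illustrates the loading subproblem of assigning station demands to capacity-limited tow trains; the demands and capacity below are invented:

```python
def load_trains(demands, capacity):
    """Assign station demands to trains by first-fit decreasing -- a simple
    heuristic stand-in for the paper's DP/MIP loading model."""
    remaining = []   # remaining capacity per opened train
    loads = []       # demands assigned to each train
    for d in sorted(demands, reverse=True):
        for i, rem in enumerate(remaining):
            if d <= rem:                 # fits on an existing train
                remaining[i] -= d
                loads[i].append(d)
                break
        else:                            # open a new train
            remaining.append(capacity - d)
            loads.append([d])
    return loads

# Hypothetical station demands (parts per cycle) and train capacity.
loads = load_trains([4, 8, 1, 4, 2, 1], capacity=10)
```

An exact MIP would additionally couple this loading decision to route lengths and delivery schedules, which is where the parallel treatment in the paper pays off.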
Directory of Open Access Journals (Sweden)
Hossein Jafari Mansoorian
2017-01-01
Background & Aims of the Study: A feed-forward artificial neural network (FFANN) was developed to predict the efficiency of total petroleum hydrocarbon (TPH) removal from a contaminated soil, using a soil washing process with Tween 80. The main objective of this study was to assess the performance of the developed FFANN model for the estimation of TPH removal. Materials and Methods: Several independent regressors, including pH, shaking speed, surfactant concentration and contact time, were used to describe the removal of TPH as a dependent variable in the FFANN model. Approximately 85% of the data set observations were used for training the model and the remaining 15% for model testing. The performance of the model was compared with linear regression and assessed using Root Mean Square Error (RMSE) as a goodness-of-fit measure. Results: For the prediction of TPH removal efficiency, an FFANN model with a three-layer 4-3-1 structure and a learning rate of 0.01 showed the best predictive results. The RMSE and R2 for the training and testing steps of the model were 2.596, 0.966, 10.70 and 0.78, respectively. Conclusion: About 80% of the TPH removal efficiency can be described by the assessed regressors in the developed model. Thus, focusing on the optimization of the soil washing process with regard to shaking speed, contact time, surfactant concentration and pH can improve TPH removal performance from polluted soils. The results of this study could be the basis for the application of FFANN to the assessment of soil washing processes and the control of petroleum hydrocarbon emission into the environment.
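A minimal sketch of a 4-3-1 feed-forward network's forward pass and the RMSE goodness-of-fit measure used above; the weights here are random placeholders, whereas the paper's network was trained on measured soil-washing data:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Untrained 4-3-1 network: 4 inputs (pH, shaking speed, surfactant
# concentration, contact time, all scaled), 3 hidden units, 1 output.
W1 = rng.normal(size=(4, 3))
b1 = np.zeros(3)
W2 = rng.normal(size=(3, 1))
b2 = np.zeros(1)

def predict(x):
    """Forward pass: x of shape (n, 4) -> predicted TPH removal, (n, 1)."""
    return sigmoid(sigmoid(x @ W1 + b1) @ W2 + b2)

def rmse(y_true, y_pred):
    """Root Mean Square Error, the goodness-of-fit measure in the paper."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

x = rng.uniform(size=(10, 4))   # ten hypothetical scaled input vectors
y = predict(x)
```

Training (e.g. by backpropagation at the stated learning rate of 0.01) would adjust `W1`, `b1`, `W2`, `b2` to minimize the RMSE on the 85% training split.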
Factors influencing community health centers' efficiency: a latent growth curve modeling approach.
Marathe, Shriram; Wan, Thomas T H; Zhang, Jackie; Sherin, Kevin
2007-10-01
The objective of this study is to examine factors affecting the variation in technical and cost efficiency of community health centers (CHCs). A panel study design was formulated to examine the relationships among the contextual, organizational structural, and performance variables. Data Envelopment Analysis (DEA) of technical efficiency and latent growth curve modeling of multi-wave technical and cost efficiency were performed. Regardless of the efficiency measures, CHC efficiency was influenced more by contextual factors than organizational structural factors. The study confirms the independent and additive influences of contextual and organizational predictors on efficiency. The change in CHC technical efficiency positively affects the change in CHC cost efficiency. The practical implication of this finding is that healthcare managers can simultaneously optimize both technical and cost efficiency through appropriate use of inputs to generate optimal outputs. An innovative solution is to employ decision support software to prepare an expert system to assist poorly performing CHCs to achieve better cost efficiency through optimizing technical efficiency.
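For the special case of a single input and a single output, the DEA technical efficiency score reduces to each unit's output/input ratio relative to the best performer; the multi-input, multi-output case used in the paper requires solving a linear program per unit. The numbers below are illustrative, not from the study:

```python
# Single-input/single-output DEA: efficiency is the output/input ratio
# normalized by the best performer (CHC names and figures are invented).
chcs = {
    "CHC_A": {"input": 100.0, "output": 80.0},   # e.g. staff hours -> visits
    "CHC_B": {"input": 120.0, "output": 120.0},
    "CHC_C": {"input": 90.0,  "output": 63.0},
}

ratios = {name: v["output"] / v["input"] for name, v in chcs.items()}
best = max(ratios.values())
efficiency = {name: r / best for name, r in ratios.items()}
```

A score of 1.0 marks the efficient frontier; scores below 1.0 quantify how much output is lost relative to the best-practice CHC for the same input.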
Directory of Open Access Journals (Sweden)
Marcin Cebula
Full Text Available The liver has the ability to prime immune responses against neoantigens provided upon infections. However, T cell immunity in the liver is uniquely modulated by the complex tolerogenic property of this organ, which also has to cope with foreign agents such as endotoxins or food antigens. In this respect, the nature of intrahepatic T cell responses remains to be fully characterized. To gain deeper insight into the mechanisms that regulate CD8+ T cell responses in the liver, we established a novel OVA_X_CreER(T2) mouse model. Upon tamoxifen administration, OVA antigen expression is observed in a fraction of hepatocytes, resulting in a mosaic expression pattern. To elucidate the cross-talk of CD8+ T cells with antigen-expressing hepatocytes, we adoptively transferred Kb/OVA257-264-specific OT-I T cells to OVA_X_CreER(T2) mice or generated triple-transgenic OVA_X_CreER(T2)_X_OT-I mice. OT-I T cells become activated in OVA_X_CreER(T2) mice and induce an acute and transient hepatitis accompanied by liver damage. In OVA_X_CreER(T2)_X_OT-I mice, OVA induction triggers an OT-I T cell-mediated, fulminant hepatitis resulting in 50% mortality. Surviving mice manifest a long-lasting hepatitis and recover after 9 weeks. In these experimental settings, recovery from hepatitis correlates with a complete loss of OVA expression, indicating efficient clearance of the antigen-expressing hepatocytes. Moreover, a relapse of hepatitis can be induced upon re-induction of cured OVA_X_CreER(T2)_X_OT-I mice, indicating the absence of tolerogenic mechanisms. This pathogen-free, conditional mouse model has the advantage of tamoxifen-inducible, tissue-specific antigen expression that reflects the heterogeneity of viral antigen expression and enables the study of intrahepatic immune responses to both de novo and persistent antigen. It allows following the course of intrahepatic immune responses: initiation, the acute phase, and antigen clearance.
EEE Model for Evaluation of ERP Efficiency in Real Time Systems
Maha Attia Hana; Mohamed Marie
2014-01-01
This study is designed to measure the efficiency of ERP systems in providing real-time information. The research measures efficiency rather than the performance used in previous research, as efficiency is more comprehensive. The proposed ERP efficiency evaluation model depends on ERP phases. The EEE model measures the efficiency of the implementation phase, the post-implementation phase, and the impact of the implementation phase on post-implementation from a technical perspectiv...
Efficient scatter model for simulation of ultrasound images from computed tomography data
D'Amato, J. P.; Lo Vercio, L.; Rubi, P.; Fernandez Vera, E.; Barbuzza, R.; Del Fresno, M.; Larrabide, I.
2015-12-01
Background and motivation: Real-time ultrasound simulation refers to the process of computationally creating fully synthetic ultrasound images instantly. Due to the high value of specialized low-cost training for healthcare professionals, there is a growing interest in the use of this technology and the development of high-fidelity systems that simulate the acquisition of echographic images. The objective is to create an efficient and reproducible simulator that can run either on notebooks or desktops using low-cost devices. Materials and methods: We present an interactive ultrasound simulator based on CT data. The simulator is based on ray-casting and provides real-time interaction capabilities. The simulation of scattering that is coherent with the transducer position in real time is also introduced. Such noise is produced using a simplified model of multiplicative noise and convolution with point spread functions (PSF) tailored for this purpose. Results: The computational efficiency of scattering map generation was revised with improved performance. This allowed a more efficient simulation of coherent scattering in the synthetic echographic images while providing highly realistic results. We describe quality and performance metrics to validate these results, where a performance of up to 55 fps was achieved. Conclusion: The proposed technique for real-time scattering modeling provides realistic yet computationally efficient scatter distributions. The error between the original image and the simulated scattering image was compared for the proposed method and the state of the art, showing negligible differences in its distribution.
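The simplified scattering model described above (multiplicative noise followed by PSF convolution) can be sketched in a few lines; the Rayleigh noise, separable Gaussian PSF, and uniform tissue map are assumed stand-ins, not the simulator's actual components:

```python
# Minimal sketch of a speckle model: multiplicative noise applied to an
# echogenicity map, then convolution with a separable Gaussian PSF.
import numpy as np

rng = np.random.default_rng(1)

def gaussian_kernel(size=7, sigma=1.5):
    x = np.arange(size) - size // 2
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def simulate_scatter(echogenicity, axial_sigma=1.0, lateral_sigma=2.0):
    """Multiplicative speckle followed by convolution with a separable PSF."""
    noise = rng.rayleigh(scale=1.0, size=echogenicity.shape)
    scattered = echogenicity * noise                    # multiplicative model
    ka = gaussian_kernel(sigma=axial_sigma)
    kl = gaussian_kernel(sigma=lateral_sigma)
    # Separable convolution: rows (lateral) then columns (axial).
    tmp = np.apply_along_axis(lambda r: np.convolve(r, kl, mode="same"), 1, scattered)
    return np.apply_along_axis(lambda c: np.convolve(c, ka, mode="same"), 0, tmp)

tissue = np.full((64, 64), 0.5)        # hypothetical uniform tissue map
image = simulate_scatter(tissue)
print(image.shape)
```

Because the PSF is separable, the two 1-D convolutions are much cheaper than a full 2-D convolution, which is one way such a simulator can stay real-time.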
Balakrishnan, Suhrid; Roy, Amit; Ierapetritou, Marianthi G.; Flach, Gregory P.; Georgopoulos, Panos G.
2005-06-01
This work presents a comparative assessment of efficient uncertainty modeling techniques, including Stochastic Response Surface Method (SRSM) and High Dimensional Model Representation (HDMR). This assessment considers improvement achieved with respect to conventional techniques of modeling uncertainty (Monte Carlo). Given that traditional methods for characterizing uncertainty are very computationally demanding, when they are applied in conjunction with complex environmental fate and transport models, this study aims to assess how accurately these efficient (and hence viable) techniques for uncertainty propagation can capture complex model output uncertainty. As a part of this effort, the efficacy of HDMR, which has primarily been used in the past as a model reduction tool, is also demonstrated for uncertainty analysis. The application chosen to highlight the accuracy of these new techniques is the steady state analysis of the groundwater flow in the Savannah River Site General Separations Area (GSA) using the subsurface Flow And Contaminant Transport (FACT) code. Uncertain inputs included three-dimensional hydraulic conductivity fields, and a two-dimensional recharge rate field. The output variables under consideration were the simulated stream baseflows and hydraulic head values. Results show that the uncertainty analysis outcomes obtained using SRSM and HDMR are practically indistinguishable from those obtained using the conventional Monte Carlo method, while requiring orders of magnitude fewer model simulations.
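The core trade-off assessed above, many expensive model evaluations for Monte Carlo versus a handful of runs to build a cheap surrogate, can be illustrated on a toy problem; the stand-in model function and sample sizes below are invented and are not the FACT code or the SRSM/HDMR formulations:

```python
# Toy comparison: brute-force Monte Carlo vs. a polynomial response surface
# fitted to a few "expensive" model runs, both propagating input uncertainty.
import numpy as np

rng = np.random.default_rng(3)
model = lambda x: np.exp(0.3 * x) + 0.5 * x**2   # stand-in for an expensive model

# Monte Carlo reference: 100,000 direct model evaluations.
x_mc = rng.normal(size=100_000)
mc_mean = model(x_mc).mean()

# Response surface: fit a cubic polynomial to just 20 model runs, then
# sample the cheap surrogate instead of the model.
x_train = np.linspace(-3, 3, 20)
coeffs = np.polyfit(x_train, model(x_train), 3)
rs_mean = np.polyval(coeffs, rng.normal(size=100_000)).mean()

print(mc_mean, rs_mean)  # the two estimates nearly agree
```

This mirrors the paper's conclusion: the surrogate-based estimate is practically indistinguishable from Monte Carlo while requiring orders of magnitude fewer model runs (20 versus 100,000 here).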
Experimental Characterization and Modelling of Energy Efficient Fluid Supply Systems
Bjorklund, Karina M; Vacca, Andrea; Opperwall, Timothy J
2015-01-01
In applications such as agriculture, construction, and aerospace, high-pressure hydraulics is the preferred technology for transmitting mechanical power. As a consequence, the energy efficiency of the hydraulic system used to perform the mechanical actuations is of primary concern for reducing energy consumption in these applications. In a hydraulic system, the primary component determining energy efficiency is the hydraulic pump. This work focuses on the study ...
A production scheduling simulation model for improving production efficiency
Directory of Open Access Journals (Sweden)
Cheng-Liang Yang
2014-12-01
Full Text Available A real manufacturing system of an electronics company was mimicked using a simulation model. The effects of dispatching rules and resource allocations on performance measures were explored. The results indicated that the dispatching rules of shortest processing time (SPT) and earliest due date (EDD) are superior to the current rule of first in, first out (FIFO) adopted by the company. A new combined rule, the smallest quotient of shortest remaining processing time (SRPT) divided by SPT (SRPT/SPT_Min), has been proposed and demonstrated the best performance on mean tardiness under the current resource situation. The results also showed that using fewer resources can increase their utilization, but it also increases the risk of delivery tardiness, which in turn will damage the organization's reputation in the long run. Some suggestions for future work were presented.
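A minimal single-machine version of the dispatching-rule comparison can be written in a few lines; the four jobs below are hypothetical, and the actual study simulated a full manufacturing system rather than one machine:

```python
# Compare FIFO vs. SPT dispatching on mean tardiness for one machine.
def mean_tardiness(jobs, key):
    """jobs: list of (processing_time, due_date) in arrival order."""
    t, tard = 0, 0
    for p, d in sorted(jobs, key=key):   # Python's sort is stable
        t += p
        tard += max(0, t - d)
    return tard / len(jobs)

jobs = [(5, 6), (2, 4), (8, 20), (1, 3)]           # (processing, due date)
fifo = mean_tardiness(jobs, key=lambda j: 0)       # stable sort keeps arrival order
spt = mean_tardiness(jobs, key=lambda j: j[0])     # shortest processing time first
print(fifo, spt)  # → 4.0 0.5
```

Even on this tiny instance SPT cuts mean tardiness sharply, consistent with the simulation's finding that SPT-style rules beat the company's FIFO rule.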
Modeling light use efficiency in a subtropical mangrove forest equipped with CO2 eddy covariance
Barr, J.G.; Engel, V.; Fuentes, J.D.; Fuller, D.O.; Kwon, H.
2013-01-01
Despite the importance of mangrove ecosystems in the global carbon budget, the relationships between environmental drivers and carbon dynamics in these forests remain poorly understood. This limited understanding is partly a result of the challenges associated with in situ flux studies. Tower-based CO2 eddy covariance (EC) systems are installed in only a few mangrove forests worldwide, and the longest EC record from the Florida Everglades contains less than 9 years of observations. A primary goal of the present study was to develop a methodology to estimate canopy-scale photosynthetic light use efficiency in this forest. These tower-based observations represent a basis for associating CO2 fluxes with canopy light use properties, and thus provide the means for utilizing satellite-based reflectance data for larger scale investigations. We present a model for mangrove canopy light use efficiency utilizing the enhanced green vegetation index (EVI) derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) that is capable of predicting changes in mangrove forest CO2 fluxes caused by a hurricane disturbance and changes in regional environmental conditions, including temperature and salinity. Model parameters are solved for in a Bayesian framework. The model structure requires estimates of ecosystem respiration (RE), and we present the first ever tower-based estimates of mangrove forest RE derived from nighttime CO2 fluxes. Our investigation is also the first to show the effects of salinity on mangrove forest CO2 uptake, which declines 5% per each 10 parts per thousand (ppt) increase in salinity. Light use efficiency in this forest declines with increasing daily photosynthetic active radiation, which is an important departure from the assumption of constant light use efficiency typically applied in satellite-driven models. The model developed here provides a framework for estimating CO2 uptake by these forests from reflectance data and information about
Modeling light use efficiency in a subtropical mangrove forest equipped with CO2 eddy covariance
Directory of Open Access Journals (Sweden)
J. G. Barr
2013-03-01
Full Text Available Despite the importance of mangrove ecosystems in the global carbon budget, the relationships between environmental drivers and carbon dynamics in these forests remain poorly understood. This limited understanding is partly a result of the challenges associated with in situ flux studies. Tower-based CO2 eddy covariance (EC) systems are installed in only a few mangrove forests worldwide, and the longest EC record from the Florida Everglades contains less than 9 years of observations. A primary goal of the present study was to develop a methodology to estimate canopy-scale photosynthetic light use efficiency in this forest. These tower-based observations represent a basis for associating CO2 fluxes with canopy light use properties, and thus provide the means for utilizing satellite-based reflectance data for larger scale investigations. We present a model for mangrove canopy light use efficiency utilizing the enhanced green vegetation index (EVI) derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) that is capable of predicting changes in mangrove forest CO2 fluxes caused by a hurricane disturbance and changes in regional environmental conditions, including temperature and salinity. Model parameters are solved for in a Bayesian framework. The model structure requires estimates of ecosystem respiration (RE), and we present the first ever tower-based estimates of mangrove forest RE derived from nighttime CO2 fluxes. Our investigation is also the first to show the effects of salinity on mangrove forest CO2 uptake, which declines 5% per each 10 parts per thousand (ppt) increase in salinity. Light use efficiency in this forest declines with increasing daily photosynthetic active radiation, which is an important departure from the assumption of constant light use efficiency typically applied in satellite-driven models. The model developed here provides a framework for estimating CO2 uptake by these forests from reflectance data and
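The reported salinity response (a 5% decline in CO2 uptake per 10 ppt increase) can be expressed as a simple scaling factor; the linear functional form and the zero reference salinity below are illustrative assumptions, not the paper's fitted Bayesian model:

```python
# Illustrative salinity scalar: reduce CO2 uptake by 5% for each 10 ppt
# of salinity above a chosen reference (both assumptions, not fitted values).
def salinity_scaled_uptake(base_uptake, salinity, reference_salinity=0.0):
    decline = 0.05 * (salinity - reference_salinity) / 10.0
    return base_uptake * max(0.0, 1.0 - decline)   # clamp at zero uptake

print(salinity_scaled_uptake(10.0, 20.0))  # 20 ppt above reference → 10% decline → 9.0
```

Such a multiplicative stress scalar is the standard way salinity or water limitation enters light-use-efficiency formulations.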
The Virtual Continuous TEG Model: Efficient Optimization of Thermogenerators
Kitte, J.; Beck, F.; Jänsch, D.
2013-07-01
Dimensioning a thermoelectric generator for vehicle applications poses major challenges. Besides the fundamental process of determining the layout, an optimization procedure is necessary to harness the maximum potential from a thermoelectric system under given boundary conditions. The thermal boundary conditions encountered in this application are not constant. In this context, a multichannel thermogenerator shows benefits by distributing individual mass flows in relation to the operating point maximizing power output across the entire range of operating points. The innovative approach underlying the continuous thermogenerator model supports the process of global optimization. The parameters to be optimized are configured as dimensionless variables. The model not only guarantees very short computation times but also maintains high quality. The optimization method is presented in detail using an example of searching for an optimum material layout, variable fin geometry, and variable leg height across and along the direction of gas flow. The materials or material combinations to be analyzed are lead and bismuth telluride. The heat exchanger has a reference geometry. The article describes the combination of dimensionless optimization parameters that provides the greatest increase in thermoelectric power output compared with the basic concept. The discussion concludes with a cost-benefit analysis of the measures chosen.
Directory of Open Access Journals (Sweden)
Patrick Aldrin-Kirk
Full Text Available Synucleinopathies, characterized by intracellular aggregation of α-synuclein protein, share a number of features in pathology and disease progression. However, the vulnerable cell population differs significantly between the disorders, despite their being caused by the same protein. While the vulnerability of dopamine cells in the substantia nigra to α-synuclein over-expression, and its link to Parkinson's disease, is well studied, animal models recapitulating the cortical degeneration in dementia with Lewy bodies (DLB) are much less mature. The aim of this study was to develop a first rat model of widespread progressive synucleinopathy throughout the forebrain using adeno-associated viral (AAV) vector-mediated gene delivery. Through bilateral injection of an AAV6 vector expressing human wild-type α-synuclein into the forebrain of neonatal rats, we were able to achieve widespread, robust α-synuclein expression with preferential expression in the frontal cortex. These animals displayed a progressive emergence of hyperlocomotion and a dysregulated response to the dopaminergic agonist apomorphine. The animals receiving the α-synuclein vector displayed significant α-synuclein pathology including intracellular inclusion bodies, axonal pathology, and elevated levels of phosphorylated α-synuclein, accompanied by significant loss of cortical neurons and a progressive reduction in both cortical and striatal ChAT-positive interneurons. Furthermore, we found evidence of α-synuclein sequestered by IBA-1-positive microglia, which was coupled with a distinct change in morphology. In areas of most prominent pathology, the total α-synuclein levels were increased, on average, two-fold, which is similar to the levels observed in patients with SNCA gene triplication, associated with cortical Lewy body pathology. This study provides a novel rat model of progressive cortical synucleinopathy, showing for the first time that cholinergic interneurons are vulnerable
Li, Zhimin; Qiu, Yushi; Personett, David; Huang, Peng; Edenfield, Brandy; Katz, Jason; Babusis, Darius; Tang, Yang; Shirely, Michael A; Moghaddam, Mehran F; Copland, John A; Tun, Han W
2013-01-01
Primary CNS lymphoma carries a poor prognosis. Novel therapeutic agents are urgently needed. Pomalidomide (POM) is a novel immunomodulatory drug with anti-lymphoma activity. CNS pharmacokinetic analysis was performed in rats to assess the CNS penetration of POM. Preclinical evaluation of POM was performed in two murine models to assess its therapeutic activity against CNS lymphoma. The impact of POM on the CNS lymphoma immune microenvironment was evaluated by immunohistochemistry and immunofluorescence. In vitro cell culture experiments were carried out to further investigate the impact of POM on the biology of macrophages. POM crosses the blood brain barrier with CNS penetration of ~ 39%. Preclinical evaluations showed that it had significant therapeutic activity against CNS lymphoma with significant reduction in tumor growth rate and prolongation of survival, that it had a major impact on the tumor microenvironment with an increase in macrophages and natural killer cells, and that it decreased M2-polarized tumor-associated macrophages and increased M1-polarized macrophages when macrophages were evaluated based on polarization status. In vitro studies using various macrophage models showed that POM converted the polarization status of IL4-stimulated macrophages from M2 to M1, that M2 to M1 conversion by POM in the polarization status of lymphoma-associated macrophages is dependent on the presence of NK cells, that POM induced M2 to M1 conversion in the polarization of macrophages by inactivating STAT6 signaling and activating STAT1 signaling, and that POM functionally increased the phagocytic activity of macrophages. Based on our findings, POM is a promising therapeutic agent for CNS lymphoma with excellent CNS penetration, significant preclinical therapeutic activity, and a major impact on the tumor microenvironment. It can induce significant biological changes in tumor-associated macrophages, which likely play a major role in its therapeutic activity against CNS
Efficient Symmetry Reduction and the Use of State Symmetries for Symbolic Model Checking
Directory of Open Access Journals (Sweden)
Christian Appold
2010-06-01
Full Text Available One technique to reduce the state-space explosion problem in temporal logic model checking is symmetry reduction. The combination of symmetry reduction and symbolic model checking using BDDs long suffered from the prohibitively large BDD for the orbit relation. Dynamic symmetry reduction calculates representatives of equivalence classes of states dynamically and thus avoids the construction of the orbit relation. In this paper, we present a new efficient model checking algorithm based on dynamic symmetry reduction. Our experiments show that the algorithm is very fast and allows the verification of larger systems. We additionally implemented the use of state symmetries for symbolic symmetry reduction. To our knowledge, we are the first to investigate state symmetries in combination with BDD-based symbolic model checking.
Directory of Open Access Journals (Sweden)
Sie Long Kek
2015-01-01
Full Text Available A computational approach is proposed for solving the discrete time nonlinear stochastic optimal control problem. Our aim is to obtain the optimal output solution of the original optimal control problem through solving the simplified model-based optimal control problem iteratively. In our approach, the adjusted parameters are introduced into the model used such that the differences between the real system and the model used can be computed. Particularly, system optimization and parameter estimation are integrated interactively. On the other hand, the output is measured from the real plant and is fed back into the parameter estimation problem to establish a matching scheme. During the calculation procedure, the iterative solution is updated in order to approximate the true optimal solution of the original optimal control problem despite model-reality differences. For illustration, a wastewater treatment problem is studied and the results show the efficiency of the approach proposed.
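The integrated system-optimization and parameter-estimation loop described above can be illustrated on a scalar toy problem; the quadratic "plant" and "model" below are invented for illustration, not the paper's wastewater treatment application:

```python
# Toy integrated optimization/estimation loop: a simplified model with an
# adjusted parameter `a` is re-optimized while `a` is updated from
# measurements of the real plant, compensating model-reality differences.
real = lambda u: -(u - 3.0) ** 2               # "real plant" performance (unknown optimum u=3)
model = lambda u, a: -(u - 2.0) ** 2 + a * u   # simplified model + adjustment term

a = 0.0
u = 2.0
for _ in range(50):
    u = 2.0 + a / 2.0          # maximizer of the adjusted model (set derivative to zero)
    # Parameter estimation: match the model's gradient to the plant's
    # measured gradient at the current operating point (finite difference).
    eps = 1e-4
    real_grad = (real(u + eps) - real(u - eps)) / (2 * eps)
    model_grad = -2.0 * (u - 2.0) + a
    a += real_grad - model_grad

print(u)  # converges toward the real optimum u = 3.0
```

Despite the model's optimum sitting at u = 2, the feedback of measured plant gradients drives the iterate to the plant's true optimum, which is the essence of the matching scheme the abstract describes.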
Directory of Open Access Journals (Sweden)
Virginie Desestret
Full Text Available The inflammatory response following ischemic stroke is dominated by innate immune cells: resident microglia and blood-derived macrophages. The ambivalent role of these cells in stroke outcome might be explained in part by the acquisition of distinct functional phenotypes: classically activated (M1) and alternatively activated (M2) macrophages. To shed light on the crosstalk between hypoxic neurons and macrophages, an in vitro model was set up in which bone marrow-derived macrophages were co-cultured with hippocampal slices subjected to oxygen and glucose deprivation. The results showed that macrophages provided potent protection against neuron cell loss through a paracrine mechanism, and that they expressed M2-type alternative polarization. These findings raised the possibility of using bone marrow-derived M2 macrophages in cellular therapy for stroke. Therefore, 2 million M2 macrophages (or vehicle) were intravenously administered during the subacute stage of ischemia (D4) in a model of transient middle cerebral artery occlusion. Functional neuroscores and magnetic resonance imaging endpoints (infarct volumes, blood-brain barrier integrity, phagocytic activity assessed by iron oxide uptake) were longitudinally monitored for 2 weeks. This cell-based treatment did not significantly improve any outcome measure compared with vehicle, suggesting that this strategy is not relevant to stroke therapy.
An analytical model for the influence of contact resistance on thermoelectric efficiency
Bjørk, R
2016-01-01
An analytical model is presented that can account for electrical as well as hot- and cold-side thermal contact resistances when calculating the efficiency of a thermoelectric generator. The model is compared to a numerical model of a thermoelectric leg for 16 different thermoelectric materials, as well as to the analytical models of Ebling et al. (2010) and Min & Rowe (1992). The model presented here is shown to accurately calculate the efficiency for all systems and all contact resistances considered, with an average difference in efficiency between the numerical and analytical models of −0.07 ± 0.35 percentage points (pp). This makes the model more accurate than previously published models. The maximum absolute difference in efficiency between the analytical and numerical models is 1.14 pp for all materials and all contact resistances considered.
Directory of Open Access Journals (Sweden)
Wenwen Cai
2014-09-01
Full Text Available Terrestrial gross primary production (GPP) is the largest global CO2 flux and determines other ecosystem carbon cycle variables. Light use efficiency (LUE) models may have the most potential to adequately address the spatial and temporal dynamics of GPP, but recent studies have shown large model differences in GPP simulations. In this study, we investigated the GPP differences in the spatial and temporal patterns derived from seven widely used LUE models at the global scale. The results show that the global annual GPP estimates over the period 2000–2010 varied from 95.10 to 139.71 Pg C∙yr−1 among models. The spatial and temporal variation of global GPP differs substantially between models, owing to different model structures and dominant environmental drivers. In almost all models, water availability dominates the interannual variability of GPP over large vegetated areas. Solar radiation and air temperature are not the primary controlling factors for the interannual variability of global GPP estimates for most models. The disagreement among the current LUE models highlights the need for further model improvement to quantify the global carbon cycle.
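The LUE formulation shared by the compared models can be sketched in one line, GPP = ε_max × environmental stress scalars × fPAR × PAR; the parameter values below are illustrative assumptions, not those of any specific model in the comparison:

```python
# One-line LUE model sketch: GPP from absorbed light and stress scalars.
# eps_max and the stress scalars are illustrative, not fitted values.
def lue_gpp(par, fpar, eps_max=1.8, temp_scalar=1.0, water_scalar=1.0):
    """GPP (g C m-2 d-1, illustrative units) from incident PAR (MJ m-2 d-1),
    the fraction of PAR absorbed (fPAR), and stress scalars in [0, 1]."""
    return eps_max * temp_scalar * water_scalar * fpar * par

print(lue_gpp(par=10.0, fpar=0.6, water_scalar=0.5))  # → 5.4
```

The inter-model spread the abstract reports comes largely from how each model defines ε_max and the stress scalars, which is why water availability dominates interannual variability in most of them.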
Ito, Akihiko; Nishina, Kazuya; Reyer, Christopher P. O.; François, Louis; Henrot, Alexandra-Jane; Munhoven, Guy; Jacquemin, Ingrid; Tian, Hanqin; Yang, Jia; Pan, Shufen; Morfopoulos, Catherine; Betts, Richard; Hickler, Thomas; Steinkamp, Jörg; Ostberg, Sebastian; Schaphoff, Sibyll; Ciais, Philippe; Chang, Jinfeng; Rafique, Rashid; Zeng, Ning; Zhao, Fang
2017-08-01
Simulating vegetation photosynthetic productivity (or gross primary production, GPP) is a critical feature of the biome models used for impact assessments of climate change. We conducted a benchmarking of global GPP simulated by eight biome models participating in the second phase of the Inter-Sectoral Impact Model Intercomparison Project (ISIMIP2a) with four meteorological forcing datasets (30 simulations), using independent GPP estimates and recent satellite data of solar-induced chlorophyll fluorescence as a proxy of GPP. The simulated global terrestrial GPP ranged from 98 to 141 Pg C yr-1 (1981-2000 mean); considerable inter-model and inter-data differences were found. Major features of spatial distribution and seasonal change of GPP were captured by each model, showing good agreement with the benchmarking data. All simulations showed incremental trends of annual GPP, seasonal-cycle amplitude, radiation-use efficiency, and water-use efficiency, mainly caused by the CO2 fertilization effect. The incremental slopes were higher than those obtained by remote sensing studies, but comparable with those by recent atmospheric observation. Apparent differences were found in the relationship between GPP and incoming solar radiation, for which forcing data differed considerably. The simulated GPP trends co-varied with a vegetation structural parameter, leaf area index, at model-dependent strengths, implying the importance of constraining canopy properties. In terms of extreme events, GPP anomalies associated with a historical El Niño event and large volcanic eruption were not consistently simulated in the model experiments due to deficiencies in both forcing data and parameterized environmental responsiveness. Although the benchmarking demonstrated the overall advancement of contemporary biome models, further refinements are required, for example, for solar radiation data and vegetation canopy schemes.
A Bioeconomic Foundation for the Nutrition-based Efficiency Wage Model
DEFF Research Database (Denmark)
Dalgaard, Carl-Johan Lars; Strulik, Holger
Drawing on recent research on allometric scaling and energy consumption, the present paper develops a nutrition-based efficiency wage model from first principles. The biologically micro-founded model allows us to address empirical criticism of the original nutrition-based efficiency wage model. B...
DEFF Research Database (Denmark)
Gørgens, Tue; Skeels, Christopher L.; Wurtz, Allan
This paper explores estimation of a class of non-linear dynamic panel data models with additive unobserved individual-specific effects. The models are specified by moment restrictions. The class includes the panel data AR(p) model and panel smooth transition models. We derive an efficient set of moment restrictions for estimation and apply the results to estimation of panel smooth transition models with fixed effects, where the transition may be determined endogenously. The performance of the GMM estimator, both in terms of estimation precision and forecasting performance, is examined in a Monte Carlo experiment. We find that estimation of the parameters in the transition function can be problematic but that there may be significant benefits in terms of forecast performance.
Studies Show Curricular Efficiency Can Be Attained.
Walberg, Herbert J.
1987-01-01
Reviews the nine factors contributing to educational productivity, the effectiveness of instructional techniques (mastery learning ranks high and Skinnerian reinforcement has the largest overall effect), and the effects of psychological environments on learning. Includes references and a table. (MD)
Efficient Focusing Models for Generation of Freak Waves
Institute of Scientific and Technical Information of China (English)
ZHAO Xi-zeng; SUN Zhao-chen; LIANG Shu-xiu
2009-01-01
Four focusing models for the generation of freak waves are presented. An extreme wave focusing model is developed on the basis of the enhanced High-Order Spectral (HOS) method, and the importance of the nonlinear wave-wave interaction is evaluated by comparing the calculated results with experimental and theoretical data. Based on the modification of the Longuet-Higgins model, four wave models for the generation of freak waves are proposed: (a) extreme wave model + random wave model; (b) extreme wave model + regular wave model; (c) phase interval modulation wave focusing model; (d) number modulation wave focusing model with the same phase. By using different energy distribution techniques in the four models, freak wave events are obtained with different H_max/H_s in finite space and time.
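The phase-focusing principle behind these models can be demonstrated with a linear superposition; the component spectrum below is an assumption, and the HOS nonlinearity the paper adds is omitted:

```python
# Linear phase focusing: choose component phases so every wave component
# crests simultaneously at one location and time, producing a freak-like peak.
import numpy as np

g = 9.81
freqs = np.linspace(0.5, 1.5, 30)       # component frequencies (Hz), assumed
amps = np.full(30, 0.05)                # equal energy distribution (m), assumed
k = (2 * np.pi * freqs) ** 2 / g        # deep-water dispersion relation
x_f, t_f = 50.0, 60.0                   # chosen focus location (m) and time (s)

def surface(x, t):
    # Each component's phase vanishes at (x_f, t_f), so all crests align there.
    phase = k * (x - x_f) - 2 * np.pi * freqs * (t - t_f)
    return float(np.sum(amps * np.cos(phase)))

print(surface(x_f, t_f), surface(0.0, 0.0))  # focused crest vs. background
```

At the focus the elevation equals the sum of all component amplitudes (1.5 m here), while elsewhere the components largely cancel, which is exactly the mechanism the energy distribution techniques in models (a)–(d) manipulate.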
Development of a computationally efficient urban modeling approach
DEFF Research Database (Denmark)
Wolfs, Vincent; Murla, Damian; Ntegeka, Victor
2016-01-01
This paper presents a parsimonious and data-driven modelling approach to simulate urban floods. Flood levels simulated by detailed 1D-2D hydrodynamic models can be emulated using the presented conceptual modelling approach with a very short calculation time. In addition, the model detail can be a...
Efficient modeling of photonic crystals with local Hermite polynomials
Boucher, C. R.; Li, Zehao; Albrecht, J. D.; Ram-Mohan, L. R.
2014-04-01
Developing compact algorithms for accurate electrodynamic calculations with minimal computational cost is an active area of research given the increasing complexity in the design of electromagnetic composite structures such as photonic crystals, metamaterials, optical interconnects, and on-chip routing. We show that electric and magnetic (EM) fields can be calculated using scalar Hermite interpolation polynomials as the numerical basis functions without having to invoke edge-based vector finite elements to suppress spurious solutions or to satisfy boundary conditions. This approach offers several fundamental advantages as evidenced through band structure solutions for periodic systems and through waveguide analysis. Compared with reciprocal space (plane wave expansion) methods for periodic systems, advantages are shown in computational costs, the ability to capture spatial complexity in the dielectric distributions, the demonstration of numerical convergence with scaling, and variational eigenfunctions free of numerical artifacts that arise from mixed-order real space basis sets or the inherent aberrations from transforming reciprocal space solutions of finite expansions. The photonic band structure of a simple crystal is used as a benchmark comparison and the ability to capture the effects of spatially complex dielectric distributions is treated using a complex pattern with highly irregular features that would stress spatial transform limits. This general method is applicable to a broad class of physical systems, e.g., to semiconducting lasers which require simultaneous modeling of transitions in quantum wells or dots together with EM cavity calculations, to modeling plasmonic structures in the presence of EM field emissions, and to on-chip propagation within monolithic integrated circuits.
Assessment of Energy Efficient and Model Based Control
2017-06-15
and paid a price in energy consumption for it. Likewise, in configuration 14 the choice of the energy-efficient planner simply paid off better, both in energy saved and in avoiding barrel collisions. In either case, the odd choice of path could have been influenced by perceptual errors or by
Modeling the irradiance dependency of the quantum efficiency of photosynthesis
Silsbe, G.M.; Kromkamp, J.C.
2012-01-01
Measures of the quantum efficiency of photosynthesis (phi(PSII)) across an irradiance (E) gradient are an increasingly common physiological assay and an alternative to traditional photosynthetic-irradiance (PE) assays. Routinely, the analysis and interpretation of these data are analogous to PE measure
Efficient parameterization of cardiac action potential models using a genetic algorithm
Cairns, Darby I.; Fenton, Flavio H.; Cherry, E. M.
2017-09-01
Finding appropriate values for parameters in mathematical models of cardiac cells is a challenging task. Here, we show that it is possible to obtain good parameterizations in as little as 30-40 s when as many as 27 parameters are fit simultaneously using a genetic algorithm and two flexible phenomenological models of cardiac action potentials. We demonstrate how our implementation works by considering cases of "model recovery" in which we attempt to find parameter values that match model-derived action potential data from several cycle lengths. We assess performance by evaluating the parameter values obtained, action potentials at fit and non-fit cycle lengths, and bifurcation plots for fidelity to the truth as well as consistency across different runs of the algorithm. We also fit the models to action potentials recorded experimentally using microelectrodes and analyze performance. We find that our implementation can efficiently obtain model parameterizations that are in good agreement with the dynamics exhibited by the underlying systems that are included in the fitting process. However, the parameter values obtained in good parameterizations can exhibit a significant amount of variability, raising issues of parameter identifiability and sensitivity. Along similar lines, we also find that the two models differ in terms of the ease of obtaining parameterizations that reproduce model dynamics accurately, most likely reflecting different levels of parameter identifiability for the two models.
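The "model recovery" workflow described above can be sketched with a minimal elitist genetic algorithm. The one-equation APD surrogate, parameter bounds, and GA settings below are illustrative assumptions, not the paper's phenomenological action-potential models:

```python
import math
import random

def apd_model(di, params):
    # Toy restitution-style surrogate (not the paper's ionic models):
    # action potential duration as a saturating function of diastolic interval
    a, b = params
    return a * (1.0 - math.exp(-di / b))

def sse(params, data):
    # Sum of squared errors between model output and target data
    return sum((apd_model(di, params) - apd) ** 2 for di, apd in data)

def genetic_fit(data, bounds, pop_size=40, generations=60, seed=1):
    """Minimal elitist GA: keep the best half, refill with blend crossover
    plus Gaussian mutation clipped to the parameter bounds."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: sse(p, data))
        survivors = pop[: pop_size // 2]          # elitism: best never lost
        while len(survivors) < pop_size:
            pa, pb = rng.sample(survivors[: pop_size // 2], 2)
            child = [(x + y) / 2.0 for x, y in zip(pa, pb)]   # blend crossover
            i = rng.randrange(len(child))                      # mutate one gene
            lo, hi = bounds[i]
            child[i] = min(hi, max(lo, child[i] + rng.gauss(0.0, 0.05 * (hi - lo))))
            survivors.append(child)
        pop = survivors
    return min(pop, key=lambda p: sse(p, data))

# "Model recovery": synthesize data from known parameters, then refit them
true_params = [250.0, 80.0]
data = [(di, apd_model(di, true_params)) for di in range(20, 401, 20)]
best = genetic_fit(data, bounds=[(100.0, 400.0), (20.0, 200.0)])
```

Running the fit from different seeds mimics the paper's observation that good fits can still show run-to-run parameter variability.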
Directory of Open Access Journals (Sweden)
Dimitrios Kourkoutas
2009-04-01
Full Text Available Dimitrios Kourkoutas1,2, Gerasimos Georgopoulos1, Antonios Maragos1, et al. 1Department of Ophthalmology, Medical School, Athens University, Athens, Greece; 2Department of Ophthalmology, 417 Hellenic Army Shared Fund Hospital, Athens, Greece. Purpose: In this paper a new nonlinear multivariable regression method is presented in order to investigate the relationship between the central corneal thickness (CCT) and the Heidelberg Retina Tomograph (HRT II) optic nerve head (ONH) topographic measurements in patients with established glaucoma. Methods: Forty-nine eyes of 49 patients with glaucoma were included in this study. Inclusion criteria were patients with (a) HRT II ONH imaging of good quality (SD < 30 μm), (b) reliable Humphrey visual field tests (30-2 program), and (c) bilateral CCT measurements with ultrasonic contact pachymetry. Patients were classified as glaucomatous based on visual field and/or ONH damage. The relationship between CCT and topographic parameters was analyzed using the new nonlinear multivariable regression model. Results: In the entire group, CCT was 549.78 ± 33.08 μm (range: 484–636 μm); intraocular pressure (IOP) was 16.4 ± 2.67 mmHg (range: 11–23 mmHg); MD was −3.80 ± 4.97 dB (range: 4.04 to −20.4 dB); refraction was −0.78 ± 2.46 D (range: −6.0 D to +3.0 D). The new nonlinear multivariable regression model indicated that CCT was significantly related (R2 = 0.227, p < 0.01) to rim volume nasally and type of diagnosis. Conclusions: Using the new nonlinear multivariable regression model, our data showed a statistically significant correlation between CCT and HRT II ONH structural measurements in patients with established glaucoma. Keywords: central corneal thickness, glaucoma, optic nerve head, HRT
Time Use Efficiency and the Five-Factor Model of Personality.
Kelly, William E.; Johnson, Judith L.
2005-01-01
To investigate the relationship between self-perceived time use efficiency and the five-factor model of personality, 105 university students were administered the Time Use Efficiency Scale (TUES; Kelly, 2003) and Saucier's Big-Five Mini-Markers (Saucier, 1994). The results indicated that time use efficiency was strongly, positively related to…
Gu, Fei; Preacher, Kristopher J; Wu, Wei; Yung, Yiu-Fai
2014-01-01
Although the state space approach for estimating multilevel regression models has been well established for decades in the time series literature, it does not receive much attention from educational and psychological researchers. In this article, we (a) introduce the state space approach for estimating multilevel regression models and (b) extend the state space approach for estimating multilevel factor models. A brief outline of the state space formulation is provided and then state space forms for univariate and multivariate multilevel regression models, and a multilevel confirmatory factor model, are illustrated. The utility of the state space approach is demonstrated with either a simulated or real example for each multilevel model. It is concluded that the results from the state space approach are essentially identical to those from specialized multilevel regression modeling and structural equation modeling software. More importantly, the state space approach offers researchers a computationally more efficient alternative to fit multilevel regression models with a large number of Level 1 units within each Level 2 unit or a large number of observations on each subject in a longitudinal study.
Particle Capture Efficiency in a Multi-Wire Model for High Gradient Magnetic Separation
Eisenträger, Almut; Griffiths, Ian M
2014-01-01
High gradient magnetic separation (HGMS) is an efficient way to remove magnetic and paramagnetic particles, such as heavy metals, from waste water. As the suspension flows through a magnetized filter mesh, high magnetic gradients around the wires attract and capture the particles, removing them from the fluid. We model such a system by considering the motion of a paramagnetic tracer particle through a periodic array of magnetized cylinders. We show that there is a critical Mason number (ratio of viscous to magnetic forces) below which the particle is captured irrespective of its initial position in the array. Above this threshold, particle capture is only partially successful and depends on the particle's entry position. We determine the relationship between the critical Mason number and the system geometry using numerical and asymptotic calculations. If a capture efficiency below 100% is sufficient, our results demonstrate how operating the HGMS system above the critical Mason number but with multiple separa...
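The capture-threshold idea above can be illustrated with a toy dimensionless tracer simulation. The 1/r³ attraction law, the geometry, and the log-space bisection below are illustrative assumptions, not the paper's multi-wire formulation:

```python
import math

def captured(mason, y0, dt=1e-3, t_max=20.0, wire_radius=0.1):
    """Trace a paramagnetic particle in uniform flow past a magnetized wire
    at the origin (toy model: unit background flow in +x, radial attraction
    scaling as 1/r**3, weighted by 1/Mason since Mason = viscous/magnetic).
    Returns True if the particle reaches the wire surface."""
    x, y = -3.0, y0
    for _ in range(int(t_max / dt)):
        r = math.hypot(x, y)
        pull = 1.0 / (mason * r**3)          # attraction speed toward the wire
        if dt * pull >= r - wire_radius:
            return True                      # next step would hit the wire
        x += dt * (1.0 - pull * x / r)       # explicit Euler: flow + pull
        y += dt * (-pull * y / r)
        if x > 3.0:
            return False                     # escaped downstream
    return False

def critical_mason(y0=0.5, lo=1e-3, hi=1e3, iters=40):
    """Bisect in log space for the Mason number separating capture
    from escape at a given entry offset y0."""
    for _ in range(iters):
        mid = math.sqrt(lo * hi)
        if captured(mid, y0):
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)
```

Sweeping `y0` instead of bisecting on `mason` reproduces the other regime in the abstract: above the threshold, capture depends on the entry position.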
Ellison, Donald C; Bykov, Andrei M
2015-01-01
We include a general form for the scattering mean free path in a nonlinear Monte Carlo model of relativistic shock formation and Fermi acceleration. Particle-in-cell (PIC) simulations, as well as analytic work, suggest that relativistic shocks tend to produce short-scale, self-generated magnetic turbulence that leads to a scattering mean free path (mfp) with a stronger momentum dependence than the mfp ~ p dependence for Bohm diffusion. In unmagnetized shocks, this turbulence is strong enough to dominate the background magnetic field so the shock can be treated as parallel regardless of the initial magnetic field orientation, making application to gamma-ray bursts (GRBs), pulsar winds, Type Ibc supernovae, and extra-galactic radio sources more straightforward and realistic. In addition to changing the scale of the shock precursor, we show that, when nonlinear effects from efficient Fermi acceleration are taken into account, the momentum dependence of the mfp has an important influence on the efficiency of cosm...
Energy Technology Data Exchange (ETDEWEB)
Assaf, A. George [Isenberg School of Management, University of Massachusetts-Amherst, 90 Campus Center Way, Amherst 01002 (United States); Barros, Carlos Pestana [Instituto Superior de Economia e Gestao, Technical University of Lisbon, Rua Miguel Lupi, 20, 1249-078 Lisbon (Portugal); Managi, Shunsuke [Graduate School of Environmental Studies, Tohoku University, 6-6-20 Aramaki-Aza Aoba, Aoba-Ku, Sendai 980-8579 (Japan)
2011-04-15
This study analyses and compares the cost efficiency of Japanese steam power generation companies using the fixed and random Bayesian frontier models. We show that it is essential to account for heterogeneity in modelling the performance of energy companies. Results from the model estimation also indicate that restricting CO₂ emissions can lead to a decrease in total cost. The study finally discusses the efficiency variations between the energy companies under analysis, and elaborates on the managerial and policy implications of the results. (author)
An adaptive grid to improve the efficiency and accuracy of modelling underwater noise from shipping
Trigg, Leah; Chen, Feng; Shapiro, Georgy; Ingram, Simon; Embling, Clare
2017-04-01
represents a 2 to 5-fold increase in efficiency. The 5 km grid reduces the number of model executions further to 1024. However, over the first 25 km the 5 km grid produces errors of up to 13.8 dB when compared to the highly accurate but inefficient 1 km grid. The newly developed adaptive grid generates much smaller errors of less than 0.5 dB while demonstrating high computational efficiency. Our results show that the adaptive grid provides the ability to retain the accuracy of noise level predictions and improve the efficiency of the modelling process. This can help safeguard sensitive marine ecosystems from noise pollution by improving the underwater noise predictions that inform management activities. References Shapiro, G., Chen, F., Thain, R., 2014. The Effect of Ocean Fronts on Acoustic Wave Propagation in a Shallow Sea, Journal of Marine System, 139: 217 - 226. http://dx.doi.org/10.1016/j.jmarsys.2014.06.007.
Efficient Modeling for Short Channel MOS Circuit Simulation.
1982-08-01
of Computer Science and Engineering. Key words and phrases: MOS Transistor Modeling, Numerical Optimization, Nonlinear Parameter Estimation. ... current-voltage characteristics of MOS transistors. Although capacitances and their model parameters have been omitted for simplicity, there is no ... constructing a circuit model of the MOS field-effect transistor. The model is nothing more than a set of equations which predicts the device's current-voltage
Development of a dc Motor Model and an Actuator Efficiency Model
Energy Technology Data Exchange (ETDEWEB)
Watkins, John Clifford; Mc Kellar, Michael George; DeWall, Kevin George
2001-07-01
For the past several years, researchers at the Idaho National Engineering and Environmental Laboratory, under the sponsorship of the U.S. Nuclear Regulatory Commission, have been investigating the ability of motor-operated valves (MOVs) used in nuclear power plants to close or open when subjected to design-basis flow and pressure loads. Part of this research addresses the response of a dc-powered motor-operated gate valve, to assess whether it will achieve flow isolation and to evaluate whether it will slow down excessively under design-basis conditions and thus fail to achieve the required stroke time. As part of this research, we have developed a model of a dc motor operating under load and a model of actuator efficiency under load, based on a first-principles evaluation of the equipment. These models include the effect that reduced voltage at the motor control center and elevated containment temperatures have on the performance of a dc-powered MOV. The model also accounts for motor torque and speed changes that result from the heatup of the motor during the stroke. These models are part of the Motor-Operated Valve In Situ Test Assessment (MISTA) software, which is capable of independently evaluating the ability of dc-powered motor-operated gate valves to achieve flow isolation and to meet required stroke times under design-basis conditions. This paper presents an overview of the dc motor model and the actuator efficiency under load model. The paper then compares the analytical results from the models with the results of actual dc motor and actuator testing, including comparisons of the effects that reduced voltage, elevated containment temperature, and motor heating during the stroke have on an MOV.
Andrianov, Alexey; Szabo, Aron; Sergeev, Alexander; Kim, Arkady; Chvykov, Vladimir; Kalashnikov, Mikhail
2016-11-14
We developed an improved approach to calculate the Fourier transform of signals with arbitrary large quadratic phase which can be efficiently implemented in numerical simulations utilizing Fast Fourier transform. The proposed algorithm significantly reduces the computational cost of Fourier transform of a highly chirped and stretched pulse by splitting it into two separate transforms of almost transform limited pulses, thereby reducing the required grid size roughly by a factor of the pulse stretching. The application of our improved Fourier transform algorithm in the split-step method for numerical modeling of CPA and OPCPA shows excellent agreement with standard algorithms.
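The grid-size issue the authors address can be seen by checking a discretized Fourier transform of a chirped Gaussian against its closed form. This sketch uses the standard centered-FFT recipe, not the authors' chirp-splitting algorithm; pulse parameters are illustrative:

```python
import numpy as np

# Chirped Gaussian pulse f(t) = exp(-t**2/(2*tau**2) + 1j*b*t**2).
# Its continuous Fourier transform has the closed form
#   F(w) = sqrt(pi/a) * exp(-w**2/(4*a)),  with  a = 1/(2*tau**2) - 1j*b,
# which lets us check the discretized FFT against an exact answer.
tau, b = 1.0, 5.0
N, span = 2048, 32.0                   # the grid must resolve the chirp bandwidth
dt = span / N
t = (np.arange(N) - N // 2) * dt
f = np.exp(-t**2 / (2 * tau**2) + 1j * b * t**2)

# Continuous-FT convention F(w) = integral f(t) exp(-1j*w*t) dt:
# ifftshift centres t = 0 at index 0; dt converts the DFT sum to an integral
F_num = dt * np.fft.fftshift(np.fft.fft(np.fft.ifftshift(f)))
w = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=dt))

a = 1.0 / (2 * tau**2) - 1j * b
F_exact = np.sqrt(np.pi / a) * np.exp(-w**2 / (4 * a))
err = np.max(np.abs(F_num - F_exact)) / np.max(np.abs(F_exact))
```

Doubling the chirp `b` roughly doubles the bandwidth (and hence the grid) the brute-force FFT must support, which is the growth the proposed splitting into two near-transform-limited transforms avoids.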
Efficient Modelling, Generation and Analysis of Markov Automata
Timmer, Mark; Iwama, K.
2014-01-01
Quantitative model checking is concerned with the verification of both quantitative and qualitative properties over models incorporating quantitative information. Increases in expressivity of the models involved allow more types of systems to be analysed, but also raise the difficulty of their effic
Uncertainty quantification in Rothermel's Model using an efficient sampling method
Edwin Jimenez; M. Yousuff Hussaini; Scott L. Goodrick
2007-01-01
The purpose of the present work is to quantify parametric uncertainty in Rothermel's wildland fire spread model (implemented in software such as BehavePlus3 and FARSITE), which is undoubtedly among the most widely used fire spread models in the United States. This model consists of a nonlinear system of equations that relates environmental variables (input parameter...
Solving a class of geometric programming problems by an efficient dynamic model
Nazemi, Alireza; Sharifi, Elahe
2013-03-01
In this paper, a neural network model is constructed on the basis of duality theory, optimization theory, convex analysis theory, Lyapunov stability theory and the LaSalle invariance principle to solve geometric programming (GP) problems. The main idea is to convert the GP problem into an equivalent convex optimization problem. A neural network model is then constructed for solving the obtained convex programming problem. By employing a Lyapunov function approach, it is also shown that the proposed neural network model is stable in the sense of Lyapunov and is globally convergent to an exact optimal solution of the original problem. The simulation results also show that the proposed neural network is feasible and efficient.
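The convert-then-converge strategy can be sketched on a one-variable GP. The posynomial below and the explicit-Euler integration of the gradient-flow dynamics are illustrative stand-ins for the paper's network model:

```python
import math

def solve_gp(y0=2.0, dt=0.01, steps=4000):
    """Minimize the posynomial f(x) = x + 1/x over x > 0.
    Substituting x = exp(y) gives the convex function g(y) = e**y + e**-y;
    the dynamic system dy/dt = -g'(y) then converges to the global optimum.
    Integrated here with explicit Euler."""
    y = y0
    for _ in range(steps):
        y -= dt * (math.exp(y) - math.exp(-y))   # gradient flow on g
    return math.exp(y)                           # back to the GP variable x

x_opt = solve_gp()      # analytic optimum: x = 1, objective value 2
```

The log-variable substitution is the standard trick that makes posynomials convex, so the equilibrium of the flow is the exact optimum rather than a local one.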
Development of a thermal resistance model to evaluate wellbore heat exchange efficiency
Directory of Open Access Journals (Sweden)
Albert A. Koenig, Martin F. Helmke
2014-01-01
Full Text Available A new model is proposed to simulate conduction of heat between a pipe loop in a geoexchange system and the ground. The approach employs the thermal resistor technique coupled with a conduction shape factor modified by an occultation factor. The model is compared to available data and demonstrates suitable agreement with previous studies. The model facilitates a parametric study of borehole resistance as a function of geometry and thermal conductivity of the components. By spacing the legs of the loop against the borehole and increasing the pipe size, the study shows that one can maximize the wellbore heat transfer using a moderate (1.2 W/mK thermal conductivity grout. This study further demonstrates that improved well construction techniques could increase the efficiency of most closed-loop geothermal systems by 10 percent.
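A minimal sketch of the thermal-resistor technique follows, using the generic textbook shape factor for eccentric cylinders as a stand-in for the paper's occultation-modified factor; all dimensions and conductivities are illustrative assumptions:

```python
import math

def grout_resistance(r_bore, r_pipe, offset, k_grout):
    """Per-metre conduction resistance between one pipe leg and the borehole
    wall, via the conduction shape factor for eccentric cylinders:
        S = 2*pi / acosh((rb**2 + rp**2 - e**2) / (2*rb*rp))
    where e is the pipe-centre offset from the borehole axis."""
    arg = (r_bore**2 + r_pipe**2 - offset**2) / (2.0 * r_bore * r_pipe)
    return 1.0 / (k_grout * 2.0 * math.pi / math.acosh(arg))

def heat_rate_per_metre(t_fluid, t_ground, series_resistances):
    # Thermal Ohm's law for a series resistor path: q = deltaT / sum(R)
    return (t_fluid - t_ground) / sum(series_resistances)

# Moving a pipe leg from the borehole centre toward the wall lowers the
# grout resistance, consistent with the spacing recommendation above
r_centred = grout_resistance(0.075, 0.016, 0.0, k_grout=1.2)
r_spaced = grout_resistance(0.075, 0.016, 0.05, k_grout=1.2)
```

Chaining the grout resistance with pipe-wall and ground resistances in `heat_rate_per_metre` gives the parametric study the abstract describes.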
Reliability and efficiency of generalized rumor spreading model on complex social networks
Naimi, Yaghoob
2013-01-01
We introduce the generalized rumor spreading model and investigate some properties of this model on different complex social networks. Unlike previous rumor models, in which both the spreader-spreader ($SS$) and the spreader-stifler ($SR$) interactions have the same rate $\alpha$, we define $\alpha^{(1)}$ and $\alpha^{(2)}$ for $SS$ and $SR$ interactions, respectively. The effect of varying $\alpha^{(1)}$ and $\alpha^{(2)}$ on the final density of stiflers is investigated. Furthermore, the influence of the topological structure of the network on rumor spreading is studied by analyzing the behavior of several global parameters such as reliability and efficiency. Our results show that while networks with homogeneous connectivity patterns reach a higher reliability, scale-free topologies need less time to reach a steady state with respect to the rumor.
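A mean-field sketch of the generalized model with separate SS and SR rates can be written directly from the interaction rules; this assumes homogeneous mixing rather than the complex networks studied in the paper, and the rates and initial densities are illustrative:

```python
def rumor_final_stiflers(lam=1.0, a1=0.5, a2=0.5, dt=0.01, steps=20000):
    """Mean-field (homogeneous-mixing) generalized rumor model:
    ignorants i become spreaders s at rate lam on contact with spreaders;
    spreaders turn into stiflers r at rate a1 on spreader-spreader contact
    and at rate a2 on spreader-stifler contact.
    Returns the final density of stiflers."""
    i, s, r = 0.99, 0.01, 0.0
    for _ in range(steps):
        di = -lam * i * s
        dr = a1 * s * s + a2 * s * r
        ds = -di - dr                  # the three densities sum to one exactly
        i, s, r = i + dt * di, s + dt * ds, r + dt * dr
    return r
```

In this well-mixed limit, raising the stifling rate `a2` removes spreaders sooner and so lowers the final stifler density, the qualitative dependence the abstract investigates on networks.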
Reliability and Efficiency of Generalized Rumor Spreading Model on Complex Social Networks
Institute of Scientific and Technical Information of China (English)
Yaghoob Naimi; Mohammad Naimi
2013-01-01
We introduce the generalized rumor spreading model and investigate some properties of this model on different complex social networks. Unlike previous rumor models, in which both the spreader-spreader (SS) and the spreader-stifler (SR) interactions have the same rate α, we define α(1) and α(2) for SS and SR interactions, respectively. The effect of varying α(1) and α(2) on the final density of stiflers is investigated. Furthermore, the influence of the topological structure of the network on rumor spreading is studied by analyzing the behavior of several global parameters such as reliability and efficiency. Our results show that while networks with homogeneous connectivity patterns reach a higher reliability, scale-free topologies need less time to reach a steady state with respect to the rumor.
An Efficient Role Specification Management Model for Highly Distributed Environments
Directory of Open Access Journals (Sweden)
Soomi Yang
2006-07-01
Full Text Available Highly distributed environments such as pervasive computing environments, which lack global or broad control, need a different attribute certificate management technique. For efficient role-based access control using attribute certificates, we use a technique of structuring role specification certificates, which can provide more flexible and secure collaborating environments. The roles are grouped and organized into a relation tree, which reduces the management cost and the overhead incurred when changing the specification of a role. Further, we cache frequently used role specification certificates for better performance when applying a role. Tree-structured role specification yields secure and efficient role renewal and distribution, and caching of role specifications speeds the application of a role. For scalable distribution of role specification certificates, we use multicast packets. The performance enhancement of structuring role specification certificates is also quantified, taking packet loss into account. In the experimental section, it is shown that role updating and distribution are secure and efficient.
Directory of Open Access Journals (Sweden)
Yu’e Wu
2015-01-01
Full Text Available This study aimed to establish a systemic C. parapsilosis infection model in ICR mice immunosuppressed with cyclophosphamide and to evaluate the antifungal efficiency of fluconazole. Three experiments were set up to confirm the optimal infectious dose of C. parapsilosis, the outcomes of the infection model, and the antifungal efficiency of fluconazole in vivo, respectively. In the first experiment, comparisons of survival proportions between groups treated with different infectious doses showed that the optimal inoculum for C. parapsilosis was 0.9 × 10⁵ CFU per mouse. The following experiment was set up to observe the outcomes of infection at a dose of 0.9 × 10⁵ CFU C. parapsilosis. Postmortem and histopathological examinations presented fungal-specific lesions in multiple organs, especially the kidneys, characterized by inflammation, numerous microabscesses, and fungal infiltration. The CFU counts were consistent with the histopathological changes in tissues. A Th1/Th2 cytokine imbalance was observed, with increases in proinflammatory cytokines and no response of anti-inflammatory cytokines in sera and kidneys. In the last experiment, model-based evaluation of fluconazole indicated ideal antifungal activity at dosages of 10–50 mg/kg/d. The data demonstrate that the research team has established a systemic C. parapsilosis infection model in immunosuppressed ICR mice, affording opportunities to increase our understanding of fungal pathogenesis and treatment.
An Efficient Interactive Model for On-Demand Sensing-As-A-Services of Sensor-Cloud
Directory of Open Access Journals (Sweden)
Thanh Dinh
2016-06-01
Full Text Available This paper proposes an efficient interactive model for the sensor-cloud to enable the sensor-cloud to efficiently provide on-demand sensing services for multiple applications with different requirements at the same time. The interactive model is designed for both the cloud and sensor nodes to optimize the resource consumption of physical sensors, as well as the bandwidth consumption of sensing traffic. In the model, the sensor-cloud plays a key role in aggregating application requests to minimize the workloads required for constrained physical nodes while guaranteeing that the requirements of all applications are satisfied. Physical sensor nodes perform their sensing under the guidance of the sensor-cloud. Based on the interactions with the sensor-cloud, physical sensor nodes adapt their scheduling accordingly to minimize their energy consumption. Comprehensive experimental results show that our proposed system achieves a significant improvement in terms of the energy consumption of physical sensors, the bandwidth consumption from the sink node to the sensor-cloud, the packet delivery latency, reliability and scalability, compared to current approaches. Based on the obtained results, we discuss the economical benefits and how the proposed system enables a win-win model in the sensor-cloud.
An Efficient Interactive Model for On-Demand Sensing-As-A-Services of Sensor-Cloud.
Dinh, Thanh; Kim, Younghan
2016-06-28
This paper proposes an efficient interactive model for the sensor-cloud to enable the sensor-cloud to efficiently provide on-demand sensing services for multiple applications with different requirements at the same time. The interactive model is designed for both the cloud and sensor nodes to optimize the resource consumption of physical sensors, as well as the bandwidth consumption of sensing traffic. In the model, the sensor-cloud plays a key role in aggregating application requests to minimize the workloads required for constrained physical nodes while guaranteeing that the requirements of all applications are satisfied. Physical sensor nodes perform their sensing under the guidance of the sensor-cloud. Based on the interactions with the sensor-cloud, physical sensor nodes adapt their scheduling accordingly to minimize their energy consumption. Comprehensive experimental results show that our proposed system achieves a significant improvement in terms of the energy consumption of physical sensors, the bandwidth consumption from the sink node to the sensor-cloud, the packet delivery latency, reliability and scalability, compared to current approaches. Based on the obtained results, we discuss the economical benefits and how the proposed system enables a win-win model in the sensor-cloud.
An Efficient Interactive Model for On-Demand Sensing-As-A-Services of Sensor-Cloud
Dinh, Thanh; Kim, Younghan
2016-01-01
This paper proposes an efficient interactive model for the sensor-cloud to enable the sensor-cloud to efficiently provide on-demand sensing services for multiple applications with different requirements at the same time. The interactive model is designed for both the cloud and sensor nodes to optimize the resource consumption of physical sensors, as well as the bandwidth consumption of sensing traffic. In the model, the sensor-cloud plays a key role in aggregating application requests to minimize the workloads required for constrained physical nodes while guaranteeing that the requirements of all applications are satisfied. Physical sensor nodes perform their sensing under the guidance of the sensor-cloud. Based on the interactions with the sensor-cloud, physical sensor nodes adapt their scheduling accordingly to minimize their energy consumption. Comprehensive experimental results show that our proposed system achieves a significant improvement in terms of the energy consumption of physical sensors, the bandwidth consumption from the sink node to the sensor-cloud, the packet delivery latency, reliability and scalability, compared to current approaches. Based on the obtained results, we discuss the economical benefits and how the proposed system enables a win-win model in the sensor-cloud. PMID:27367689
Basic model study on efficiency evaluation in collaborative design work process
Institute of Scientific and Technical Information of China (English)
XIE Qiu; YANG Yu; LI Xiaoli; ZHAO Ningyu
2007-01-01
During the efficiency evaluation process of collaborative design work, because of the lack of efficiency evaluation models, a basic analytical model for collaborative design work efficiency evaluation is proposed in this paper. First, the characteristics of the networked collaborative design system work process were studied; then, in accordance with those characteristics, a basic analytical model is created. This model, which is built for centralized collaborative design work, includes an analytical frame, a process view model, a function view model and an information view model. Finally, the application process and steps of this basic analytical model are elaborated when used for efficiency evaluation through an experiment.
On the significance of the Nash-Sutcliffe efficiency measure for event-based flood models
Moussa, Roger
2010-05-01
When modelling flood events, the important challenge that awaits the modeller is first to choose a rainfall-runoff model, then to calibrate a set of parameters that can accurately simulate a number of flood events and related hydrograph shapes, and finally to evaluate the model performance separately on each event using multi-criteria functions. This study analyses the significance of the Nash-Sutcliffe efficiency (NSE) and proposes a new method to assess the performance of flood event models (see Moussa, 2010, "When monstrosity can be beautiful while normality can be ugly: assessing the performance of event-based flood models", Hydrological Sciences Journal, in press). We focus on the specific cases of events that are difficult to model and characterized by low NSE values, which we call "monsters". The properties of the NSE were analysed as a function of the calculated hydrograph shape and of the benchmark reference model. As an application case, a multi-criteria analysis method to assess the model performance on each event is proposed and applied on the Gardon d'Anduze catchment. This paper first discusses the significance of the well-known Nash-Sutcliffe efficiency (NSE) criterion when calculated separately on flood events. The NSE is a convenient and normalized measure of model performance, but does not provide a reliable basis for comparing the results of different case studies. We show that simulated hydrographs with low or negative values of NSE, called "monsters", can be due solely to a simple lag translation or a homothetic ratio of the observed hydrograph which reproduces the dynamics of the hydrograph, with acceptable errors on other criteria. Conversely, results show that simulations with a NSE close to 1 can become "monsters" and give very low (even negative) values of the criteria function G if the average observed discharge used as a benchmark reference model in the NSE is modified. This paper argues that the definition of an appropriate benchmark
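The NSE itself is a one-line formula, and a lagged hydrograph shows how a "monster" arises even when the hydrograph shape is right; the sample hydrograph below is illustrative:

```python
def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE(model) / SSE(mean benchmark).
    1 is a perfect fit, 0 matches the mean of the observations, and
    negative values do worse than that benchmark."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_mean = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / ss_mean

# A hydrograph reproduced perfectly except for a two-step lag: the dynamics
# are captured, yet the score collapses below zero; one of the "monsters"
obs = [0, 0, 1, 5, 20, 40, 30, 15, 7, 3, 1, 0, 0, 0]
lagged = [0, 0] + obs[:-2]
```

Because the benchmark in the denominator is the observed mean, replacing it with another reference model changes the score arbitrarily, which is the second pathology discussed above.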
An efficient method for solving fractional Hodgkin-Huxley model
Nagy, A. M.; Sweilam, N. H.
2014-06-01
In this paper, we present an accurate numerical method for solving the fractional Hodgkin-Huxley model. A non-standard finite difference method (NSFDM) is implemented to study the dynamic behaviors of the proposed model. The Grünwald-Letnikov definition is used to approximate the fractional derivatives. Numerical results, presented graphically, reveal that the NSFDM is easy to implement, effective, and convenient for solving the proposed model.
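The Grünwald-Letnikov approximation mentioned above can be sketched on a function with a known fractional derivative; the test function (not the Hodgkin-Huxley system) and step size are illustrative:

```python
import math

def gl_weights(alpha, n):
    # Grunwald-Letnikov weights w_k = (-1)**k * binom(alpha, k), built by
    # the standard recurrence w_k = w_{k-1} * (1 - (alpha + 1)/k)
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

def gl_derivative(f, t, alpha, h=1e-3):
    """Grunwald-Letnikov fractional derivative of order alpha at time t,
    with lower terminal 0: D^alpha f(t) ~ h**-alpha * sum_k w_k f(t - k*h)."""
    n = int(round(t / h))
    w = gl_weights(alpha, n)
    return sum(w[k] * f(t - k * h) for k in range(n + 1)) / h**alpha

# Check against a known result: D^{1/2} of f(t) = t is t^{1/2} / Gamma(3/2)
exact = 1.0 / math.gamma(1.5)                   # value at t = 1
approx = gl_derivative(lambda s: s, 1.0, 0.5)
```

The first-order accuracy of this truncated sum is what non-standard finite difference schemes build upon when discretizing the full fractional system.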
Directory of Open Access Journals (Sweden)
Stojek Jerzy
1997-12-01
Full Text Available The article presents some considerations on the effect of the assumed mathematical models of the efficiency of the pump and the hydraulic engine on the static and dynamic properties of a hydrostatic transmission. For this purpose, simulation tests of the transmission described have been carried out with two models: one simplified, containing constant efficiencies, and the other extended, with varying efficiency values.
Wang, Hui
2014-05-01
This thesis addresses the efficiency improvement of seismic wave modeling and migration in anisotropic media. This improvement becomes crucial in practice, as the process of imaging complex geological structures of the Earth's subsurface requires modeling and migration as building blocks. The challenge comes from two aspects. First, the underlying governing equations for seismic wave propagation in anisotropic media are far more complicated than those in isotropic media, demanding higher computational costs to solve. Second, the usage of whole prestack seismic data still remains a burden considering its storage volume and the existing wave equation solvers. In this thesis, I develop two approaches to tackle the challenges. In the first part, I adopt the concept of the prestack exploding reflector model to handle the whole prestack data and bridge the data space directly to the image space in a single kernel. I formulate the extrapolation operator in a two-way fashion to remove the restriction on the directions that waves propagate. I also develop a generic method for phase velocity evaluation within anisotropic media used in this extrapolation kernel. The proposed method provides a tool for generating prestack images without wavefield cross correlations. In the second part of this thesis, I approximate the anisotropic models using effective isotropic models. The wave phenomena in these effective models match those in anisotropic models both kinematically and dynamically. I obtain the effective models by equating the eikonal and transport equations of the anisotropic and isotropic models, in the high-frequency asymptotic approximation sense. The wavefield extrapolation costs are thus reduced using isotropic wave equation solvers, while the anisotropic effects are maintained through this approach. I benchmark the two proposed methods using synthetic datasets. Tests on the anisotropic Marmousi model and the anisotropic BP2007 model demonstrate the applicability of my
2013-06-11
... COMMISSION Compass Efficient Model Portfolios, LLC and Compass EMP Funds Trust; Notice of Application June 4.... Applicants: Compass Efficient Model Portfolios, LLC (the ``Adviser'') and Compass EMP Funds Trust (``the... perspective of the investor, the role of the Subadvisers is comparable to that of the individual...
An Analytical Model for the Influence of Contact Resistance on Thermoelectric Efficiency
DEFF Research Database (Denmark)
Bjørk, Rasmus
2016-01-01
as to the analytical models of Ebling et al. (J Electron Mater 39:1376, 2010) and Min and Rowe (J Power Sour 38:253, 1992). The model presented here is shown to accurately calculate the efficiency for all systems and all contact resistances considered, with an average difference in efficiency between the numerical...
Mathematical modelling as basis for efficient enterprise management
Directory of Open Access Journals (Sweden)
Kalmykova Svetlana
2017-01-01
Full Text Available The choice of the most effective HR-management style at an enterprise is based on modeling various socio-economic situations. The article describes the formalization of the management processes aimed at the interaction between the allocated management subsystems. Mathematical modelling tools are used to determine the time spent on recruiting personnel for key positions in the management hierarchy.
An Efficient MIP Model for Locomotive Scheduling with Time Windows
Aronsson, Martin; Kreuger, Per; Gjerdrum, Jonatan
2006-01-01
This paper presents an IP model for a vehicle routing and scheduling problem from the domain of freight railways. The problem is non-capacitated but allows non-binary integer flows of vehicles between transports, with departure times variable within fixed intervals. The model has been developed with, and has found practical use at, Green Cargo, the largest freight rail operator in Sweden.
Development of a computationally efficient urban flood modelling approach
DEFF Research Database (Denmark)
Wolfs, Vincent; Ntegeka, Victor; Murla, Damian
the developed methodology, a case study for the city of Ghent in Belgium is elaborated. The configured conceptual model mimics the flood levels of a detailed 1D-2D hydrodynamic InfoWorks ICM model accurately, while the calculation time is about 10^6 times shorter than that of the original highly...
Industrial Sector Energy Efficiency Modeling (ISEEM) Framework Documentation
Energy Technology Data Exchange (ETDEWEB)
Karali, Nihan [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Xu, Tengfang [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sathaye, Jayant [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2012-12-12
The goal of this study is to develop a new bottom-up industry-sector energy-modeling framework for addressing least-cost regional and global carbon reduction strategies, improving on the capabilities and limitations of existing models by allowing trading across regions and countries as an alternative.
Tests of control in the Audit Risk Model : Effective? Efficient?
Blokdijk, J.H. (Hans)
2004-01-01
Lately, the Audit Risk Model has been subject to criticism. To gauge its validity, this paper confronts the Audit Risk Model as incorporated in International Standard on Auditing No. 400, with the real life situations faced by auditors in auditing financial statements. This confrontation exposes ser
Wan, H.; Rasch, P. J.; Zhang, K.; Qian, Y.; Yan, H.; Zhao, C.
2014-04-01
This paper explores the feasibility of an experimentation strategy for investigating sensitivities in fast components of atmospheric general circulation models. The basic idea is to replace the traditional serial-in-time long-term climate integrations by representative ensembles of shorter simulations. The key advantage of the proposed method lies in its efficiency: since fewer days of simulation are needed, the computational cost is less, and because individual realizations are independent and can be integrated simultaneously, the new dimension of parallelism can dramatically reduce the turnaround time in benchmark tests, sensitivity studies, and model tuning exercises. The strategy is not appropriate for exploring sensitivity of all model features, but it is very effective in many situations. Two examples are presented using the Community Atmosphere Model version 5. The first example demonstrates that the method is capable of characterizing the model cloud and precipitation sensitivity to time step length. A nudging technique is also applied to an additional set of simulations to help understand the contribution of physics-dynamics interaction to the detected time step sensitivity. In the second example, multiple empirical parameters related to cloud microphysics and aerosol lifecycle are perturbed simultaneously in order to explore which parameters have the largest impact on the simulated global mean top-of-atmosphere radiation balance. Results show that in both examples, short ensembles are able to correctly reproduce the main signals of model sensitivities revealed by traditional long-term climate simulations for fast processes in the climate system. The efficiency of the ensemble method makes it particularly useful for the development of high-resolution, costly and complex climate models.
Algorithm that Executes Submodels of a Mathematical Model for Greater Efficiency
Directory of Open Access Journals (Sweden)
Bauer-Mengelberg Juan Ricardo
2015-09-01
Full Text Available The topic arises in an information service called FLAG (cash flow in agribusinesses) that offers its clients mathematical models consisting of their own variables as well as others from their business environment. FLAG obtains updated values of the latter with the frequency determined by the client and computes the values of the other variables, thus providing the client with the impact of changes in the indicators included in the model. The methods used to increase the efficiency of the calculations of a mathematical model, in which the values of its variables are computed through a number of formulae, are described. Each formula consists of operations to be performed on its operands, which are variables of the model. The need to reduce processing times results from the processing of several models, where the total duration is limited by constraints of the system that invokes such executions. The algorithms are built to minimize input-output operations. Additionally, whenever the model is invoked because of a change in a single variable, only the submodel consisting of the variables directly or indirectly affected by the change is calculated. Executions of models prepared to confirm the efficacy of the improvements show reductions of up to 90%.
Huang, Mingzhi; Zhang, Tao; Ruan, Jujun; Chen, Xiaohong
2017-01-01
A new efficient hybrid intelligent approach based on a fuzzy wavelet neural network (FWNN) was proposed for effectively modeling and simulating the biodegradation process of dimethyl phthalate (DMP) in an anaerobic/anoxic/oxic (AAO) wastewater treatment process. With the self-learning and memory abilities of neural networks (NN), the uncertainty-handling capacity of fuzzy logic (FL), the local-detail analysis of the wavelet transform (WT) and the global search of the genetic algorithm (GA), the proposed hybrid intelligent model can extract the dynamic behavior and complex interrelationships from various water quality variables. To find the optimal values of the FWNN parameters, a hybrid learning algorithm integrating an improved genetic optimization and a gradient descent algorithm is employed. The results show that, compared with an NN model (optimized by GA) and a kinetic model, the proposed FWNN model has quicker convergence, higher prediction performance, smaller RMSE (0.080), MSE (0.0064) and MAPE (1.8158), and a higher R2 (0.9851), which illustrates that the FWNN model simulates effluent DMP more accurately than the mechanistic model.
Efficient modelling of gravity effects due to topographic masses using the Gauss-FFT method
Wu, Leyuan
2016-04-01
We present efficient Fourier-domain algorithms for modelling gravity effects due to topographic masses. The well-known Parker's formula originally based on the standard fast Fourier transform (FFT) algorithm is modified by applying the Gauss-FFT method instead. Numerical precision of the forward and inverse Fourier transforms embedded in Parker's formula and its extended forms are significantly improved by the Gauss-FFT method. The topographic model is composed of two major aspects, the geometry and the density. Versatile geometric representations, including the mass line model, the mass prism model, the polyhedron model and smoother topographic models interpolated from discrete data sets using high-order splines or pre-defined by analytical functions, in combination with density distributions that vary both laterally and vertically in rather arbitrary ways following exponential or general polynomial functions, now can be treated in a consistent framework by applying the Gauss-FFT method. The method presented has been numerically checked by space-domain analytical and hybrid analytical/numerical solutions already established in the literature. Synthetic and real model tests show that both the Gauss-FFT method and the standard FFT method run much faster than space-domain solutions, with the Gauss-FFT method being superior in numerical accuracy. When truncation errors are negligible, the Gauss-FFT method can provide forward results almost identical to space-domain analytical or semi-numerical solutions in much less time.
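The standard-FFT baseline that the Gauss-FFT method improves on can be sketched directly: Parker's series expresses the gravity spectrum of a topographic layer through FFTs of powers of the topography, upward-continued to the observation level. The grid size, density and sign convention below are chosen for the example only, and the truncation to a few terms is an assumption.

```python
import math

import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def parker_gravity(topo, rho, dx, z0=0.0, nterms=3):
    """Gravity effect of a constant-density topographic layer via the
    standard-FFT form of Parker's series.  Sign convention chosen so a
    uniform slab of thickness h yields the positive Bouguer attraction
    2*pi*G*rho*h."""
    ny, nx = topo.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    k = np.hypot(kx[None, :], ky[:, None])   # radial wavenumber |k|
    series = np.zeros((ny, nx), dtype=complex)
    for n in range(1, nterms + 1):
        series += k ** (n - 1) / math.factorial(n) * np.fft.fft2(topo ** n)
    spec = 2 * np.pi * G * rho * np.exp(-k * z0) * series
    return np.fft.ifft2(spec).real
```

The paper's Gauss-FFT variant replaces the plain `fft2`/`ifft2` calls with Gauss-FFT transforms to suppress the wavenumber-domain aliasing that limits this baseline's accuracy, and generalizes the constant density to laterally and vertically varying distributions.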
A new approach for estimating the efficiencies of the nucleotide substitution models.
Som, Anup
2007-04-01
In this article, a new approach is presented for estimating the efficiencies of the nucleotide substitution models in a four-taxon case and then this approach is used to estimate the relative efficiencies of six substitution models under a wide variety of conditions. In this approach, efficiencies of the models are estimated by using a simple probability distribution theory. To assess the accuracy of the new approach, efficiencies of the models are also estimated by using the direct estimation method. Simulation results from the direct estimation method confirmed that the new approach is highly accurate. The success of the new approach opens a unique opportunity to develop analytical methods for estimating the relative efficiencies of the substitution models in a straightforward way.
Razavi, Saman; Gupta, Hoshin
2015-04-01
Earth and Environmental Systems (EES) models are essential components of research, development, and decision-making in science and engineering disciplines. With continuous advances in understanding and computing power, such models are becoming more complex with increasingly more factors to be specified (model parameters, forcings, boundary conditions, etc.). To facilitate better understanding of the role and importance of different factors in producing the model responses, the procedure known as 'Sensitivity Analysis' (SA) can be very helpful. Despite the availability of a large body of literature on the development and application of various SA approaches, two issues continue to pose major challenges: (1) Ambiguous Definition of Sensitivity - Different SA methods are based in different philosophies and theoretical definitions of sensitivity, and can result in different, even conflicting, assessments of the underlying sensitivities for a given problem, (2) Computational Cost - The cost of carrying out SA can be large, even excessive, for high-dimensional problems and/or computationally intensive models. In this presentation, we propose a new approach to sensitivity analysis that addresses the dual aspects of 'effectiveness' and 'efficiency'. By effective, we mean achieving an assessment that is both meaningful and clearly reflective of the objective of the analysis (the first challenge above), while by efficiency we mean achieving statistically robust results with minimal computational cost (the second challenge above). Based on this approach, we develop a 'global' sensitivity analysis framework that efficiently generates a newly-defined set of sensitivity indices that characterize a range of important properties of metric 'response surfaces' encountered when performing SA on EES models. Further, we show how this framework embraces, and is consistent with, a spectrum of different concepts regarding 'sensitivity', and that commonly-used SA approaches (e.g., Sobol
Validation of an Efficient Outdoor Sound Propagation Model Using BEM
DEFF Research Database (Denmark)
Quirós-Alpera, S.; Henriquez, Vicente Cutanda; Jacobsen, Finn
2001-01-01
An approximate, simple and practical model for prediction of outdoor sound propagation exists based on ray theory, diffraction theory and Fresnel-zone considerations [1]. This model, which can predict sound propagation over non-flat terrain, has been validated for combinations of flat ground, hills and barriers, but it still needs to be validated for configurations that involve combinations of valleys and barriers. In order to do this, a boundary element model has been implemented in MATLAB to serve as a reliable reference.
An Efficient Null Model for Conformational Fluctuations in Proteins
DEFF Research Database (Denmark)
Harder, Tim Philipp; Borg, Mikael; Bottaro, Sandro
2012-01-01
limited to comparatively short timescales. TYPHON is a probabilistic method to explore the conformational space of proteins under the guidance of a sophisticated probabilistic model of local structure and a given set of restraints that represent nonlocal interactions, such as hydrogen bonds or disulfide bridges. The choice of the restraints themselves is heuristic, but the resulting probabilistic model is well-defined and rigorous. Conceptually, TYPHON constitutes a null model of conformational fluctuations under a given set of restraints. We demonstrate that TYPHON can provide information...
Ellison, Donald C.; Warren, Donald C.; Bykov, Andrei M.
2016-03-01
We include a general form for the scattering mean free path, λmfp(p), in a nonlinear Monte Carlo model of relativistic shock formation and Fermi acceleration. Particle-in-cell simulations, as well as analytic work, suggest that relativistic shocks tend to produce short-scale, self-generated magnetic turbulence that leads to a scattering mean free path with a stronger momentum dependence than the λmfp ∝ p dependence for Bohm diffusion. In unmagnetized shocks, this turbulence is strong enough to dominate the background magnetic field so the shock can be treated as parallel regardless of the initial magnetic field orientation, making application to γ-ray bursts, pulsar winds, type Ibc supernovae, and extragalactic radio sources more straightforward and realistic. In addition to changing the scale of the shock precursor, we show that, when nonlinear effects from efficient Fermi acceleration are taken into account, the momentum dependence of λmfp(p) has an important influence on the efficiency of cosmic ray production as well as the accelerated particle spectral shape. These effects are absent in non-relativistic shocks and do not appear in relativistic shock models unless nonlinear effects are self-consistently described. We show, for limited examples, how the changes in Fermi acceleration translate to changes in the intensity and spectral shape of γ-ray emission from proton-proton interactions and pion-decay radiation.
Min, Ari; Park, Chang Gi; Scott, Linda D
2016-05-23
Data envelopment analysis (DEA) is an advantageous non-parametric technique for evaluating relative efficiency of performance. This article describes use of DEA to estimate technical efficiency of nursing care and demonstrates the benefits of using multilevel modeling to identify characteristics of efficient facilities in the second stage of analysis. Data were drawn from LTCFocUS.org, a secondary database including nursing home data from the Online Survey Certification and Reporting System and Minimum Data Set. In this example, 2,267 non-hospital-based nursing homes were evaluated. Use of DEA with nurse staffing levels as inputs and quality of care as outputs allowed estimation of the relative technical efficiency of nursing care in these facilities. In the second stage, multilevel modeling was applied to identify organizational factors contributing to technical efficiency. Use of multilevel modeling avoided biased estimation of findings for nested data and provided comprehensive information on differences in technical efficiency among counties and states.
EEE Model for Evaluation of ERP Efficiency in Real Time Systems
Directory of Open Access Journals (Sweden)
Maha Attia Hana
2014-02-01
Full Text Available This study is designed to measure the efficiency of ERP systems in providing real-time information. The research measures efficiency rather than performance, which was used in previous research, as it is more comprehensive. The proposed ERP efficiency evaluation model depends on the ERP phases. The EEE model measures the efficiency of the implementation phase and the post-implementation phase, and the impact of the implementation phase on post-implementation from a technical perspective. A case study, a survey and a proposed experimental method are used to implement the research model. Results indicate a significant relationship between the ERP phases: in order to get an efficient post-implementation phase, the implementation phase must be efficient.
Functional Testing Protocols for Commercial Building Efficiency Baseline Modeling Software
Energy Technology Data Exchange (ETDEWEB)
Jump, David; Price, Phillip N.; Granderson, Jessica; Sohn, Michael
2013-09-06
This document describes procedures for testing and validating proprietary baseline energy modeling software accuracy in predicting energy use over the period of interest, such as a month or a year. The procedures are designed according to the methodology used for public domain baselining software in another LBNL report that was (like the present report) prepared for Pacific Gas and Electric Company: "Commercial Building Energy Baseline Modeling Software: Performance Metrics and Method Testing with Open Source Models and Implications for Proprietary Software Testing Protocols" (referred to here as the "Model Analysis Report"). The test procedure focuses on the quality of the software's predictions rather than on the specific algorithms used to predict energy use. In this way the software vendor is not required to divulge or share proprietary information about how their software works, while enabling stakeholders to assess its performance.
System convergence in transport models: algorithms efficiency and output uncertainty
DEFF Research Database (Denmark)
Rich, Jeppe; Nielsen, Otto Anker
2015-01-01
much in the literature. The paper first investigates several variants of the Method of Successive Averages (MSA) by simulation experiments on a toy network. It is found that the simulation experiments produce support for a weighted MSA approach. The weighted MSA approach is then analysed at large scale in the Danish National Transport Model (DNTM). It is revealed that system convergence requires that either demand or supply is without random noise, but not both. In that case, if MSA is applied to the model output with random noise, it will converge effectively as the random effects are gradually dampened in the MSA process. In connection with the DNTM it is shown that MSA works well when applied to travel-time averaging, whereas trip averaging is generally affected by random noise resulting from the assignment model. The latter implies that the minimum uncertainty in the final model output is dictated
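The plain MSA scheme the paper starts from can be sketched in a few lines: each iteration blends a new, possibly noisy, model response into the running solution with a decreasing step size, so random effects are gradually dampened. The one-dimensional response function below is a hypothetical stand-in for a demand/supply loop, not the DNTM itself, and the step-size schedule is the classical 1/n choice.

```python
import numpy as np

def msa(respond, x0, n_iter=2000, weight=lambda n: 1.0 / n):
    """Method of Successive Averages: blend each new (noisy) model
    response into the running solution with a decreasing step size.
    weight(n) = 1/n is plain MSA; other decreasing schedules give the
    weighted-MSA variants the paper compares."""
    x = float(x0)
    for n in range(1, n_iter + 1):
        x += weight(n) * (respond(x) - x)
    return x

# Hypothetical noisy supply/demand response with equilibrium at x* = 2
rng = np.random.default_rng(0)
respond = lambda x: 0.2 * x + 1.6 + rng.normal(0.0, 0.1)
equilibrium = msa(respond, x0=0.0)
```

Despite the per-iteration noise of standard deviation 0.1, the averaged iterate settles close to the fixed point, which is the dampening effect described in the abstract.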
Non-linear models: coal combustion efficiency and emissions control
Energy Technology Data Exchange (ETDEWEB)
Bulsari, A.; Wemberg, A.; Anttila, A.; Multas, A. [Nonlinear Solutions Oy, Turku (Finland)
2009-04-15
Today's power plants feel the pressure to limit their NOx emissions and improve their production economics. The article describes how nonlinear models are effective for process guidance of various kinds of processes, including coal-fired boilers. These models were developed for the Naantali 2 boiler at the electricity- and heat-generating coal-fired plant in Naantali, near Turku, Finland. 4 refs., 6 figs.
Integration Strategies for Efficient Multizone Chemical Kinetics Models
Energy Technology Data Exchange (ETDEWEB)
McNenly, M J; Havstad, M A; Aceves, S M; Pitz, W J
2009-10-15
Three integration strategies are developed and tested for the stiff, ordinary differential equation (ODE) integrators used to solve the fully coupled multizone chemical kinetics model. Two of the strategies tested are found to provide more than an order of magnitude of improvement over the original, basic level of usage for the stiff ODE solver. One of the faster strategies uses a decoupled, or segregated, multizone model to generate an approximate Jacobian. This approach yields a 35-fold reduction in the computational cost for a 20 zone model. Using the same approximate Jacobian as a preconditioner for an iterative Krylov-type linear system solver, the second improved strategy achieves a 75-fold reduction in the computational cost for a 20 zone model. The faster strategies achieve their cost savings with no significant loss of accuracy. The pressure, temperature and major species mass fractions agree with the solution from the original integration approach to within six significant digits; and the radical mass fractions agree with the original solution to within four significant digits. The faster strategies effectively change the cost scaling of the multizone model from cubic to quadratic, with respect to the number of zones. As a consequence of the improved scaling, the 40 zone model offers more than a 250-fold cost savings over the basic calculation.
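The idea behind the faster strategies, using a cheap segregated (decoupled) Jacobian either directly or as a preconditioner for a Krylov-type linear solver, can be illustrated on a stand-in linear problem: stiff zone-local blocks with weak inter-zone coupling, preconditioned by exact solves of the block-diagonal part alone. All sizes, stiffness ranges and coupling strengths below are invented for the sketch and do not come from the report.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve
from scipy.sparse.linalg import LinearOperator, gmres

n_zones, n_species = 20, 5           # invented sizes for the sketch
n = n_zones * n_species
rng = np.random.default_rng(1)

# Stiff, zone-local "chemistry" blocks plus weak nearest-zone coupling
blocks = [np.diag(-(10.0 ** rng.uniform(0.0, 3.0, n_species)))
          for _ in range(n_zones)]
A = np.zeros((n, n))
for z, B in enumerate(blocks):
    A[z * n_species:(z + 1) * n_species,
      z * n_species:(z + 1) * n_species] = B
A += 0.1 * (np.eye(n, k=n_species) + np.eye(n, k=-n_species))

# Segregated preconditioner: exact solves with the decoupled blocks only
lu = [lu_factor(B) for B in blocks]

def apply_minv(v):
    out = np.empty_like(v)
    for z in range(n_zones):
        s = slice(z * n_species, (z + 1) * n_species)
        out[s] = lu_solve(lu[z], v[s])
    return out

M = LinearOperator((n, n), matvec=apply_minv)
b = rng.normal(size=n)
x, info = gmres(A, b, M=M)           # info == 0 signals convergence
```

Because the preconditioner only ever factors the small per-zone blocks, its cost grows linearly rather than cubically with the number of zones, which is the scaling change the abstract reports.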
Effective Elliptic Models for Efficient Wavefield Extrapolation in Anisotropic Media
Waheed, Umair bin
2014-05-01
Wavefield extrapolation operator for elliptically anisotropic media offers significant cost reduction compared to that of transversely isotropic media (TI), especially when the medium exhibits tilt in the symmetry axis (TTI). However, elliptical anisotropy does not provide accurate focusing for TI media. Therefore, we develop effective elliptically anisotropic models that correctly capture the kinematic behavior of the TTI wavefield. Specifically, we use an iterative elliptically anisotropic eikonal solver that provides the accurate traveltimes for a TI model. The resultant coefficients of the elliptical eikonal provide the effective models. These effective models allow us to use the cheaper wavefield extrapolation operator for elliptic media to obtain approximate wavefield solutions for TTI media. Despite the fact that the effective elliptic models are obtained by kinematic matching using a high-frequency asymptotic approximation, the resulting wavefield contains most of the critical wavefield components, including the frequency dependency and caustics, if present, with reasonable accuracy. The methodology developed here offers a much better cost versus accuracy tradeoff for wavefield computations in TTI media, considering the cost prohibitive nature of the problem. We demonstrate the applicability of the proposed approach on the BP TTI model.
An efficient positive potential-density pair expansion for modelling galaxies
Rojas-Niño, A.; Read, J. I.; Aguilar, L.; Delorme, M.
2016-07-01
We present a novel positive potential-density pair expansion for modelling galaxies, based on the Miyamoto-Nagai disc. By using three sets of such discs, each one of them aligned along each symmetry axis, we are able to reconstruct a broad range of potentials that correspond to density profiles from exponential discs to 3D power-law models with varying triaxiality (henceforth simply `twisted' models). We increase the efficiency of our expansion by allowing the scalelength parameter of each disc to be negative. We show that, for suitable priors on the scalelength and scaleheight parameters, these `MNn discs' (Miyamoto-Nagai negative) have just one negative density minimum. This allows us to ensure global positivity by demanding that the total density at the global minimum is positive. We find that at better than 10 per cent accuracy in our density reconstruction, we can represent a radial and vertical exponential disc over 0.1-10 scalelengths/scaleheights with four MNn discs; a Navarro, Frenk and White (NFW) profile over 0.1-10 scalelengths with four MNn discs; and a twisted triaxial NFW profile with three MNn discs per symmetry axis. Our expansion is efficient, fully analytic, and well suited to reproducing the density distribution and gravitational potential of galaxies from discs to ellipsoids.
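The building block of the expansion, the Miyamoto-Nagai potential-density pair, is compact enough to state directly, together with a numerical Poisson check (comparing the Cartesian Laplacian of the potential with 4*pi*G*rho). This sketch covers only a single standard disc with G = M = 1 units assumed; the paper's MNn variant reuses the same pair with a negative scalelength `a`, subject to its global-positivity check at the density minimum.

```python
import numpy as np

def mn_potential(x, y, z, M=1.0, a=1.0, b=0.5, G=1.0):
    """Miyamoto-Nagai disc potential (G = M = 1 units assumed here)."""
    zeta = np.sqrt(z ** 2 + b ** 2)
    return -G * M / np.sqrt(x ** 2 + y ** 2 + (a + zeta) ** 2)

def mn_density(x, y, z, M=1.0, a=1.0, b=0.5):
    """Density paired with mn_potential through Poisson's equation."""
    R2 = x ** 2 + y ** 2
    zeta = np.sqrt(z ** 2 + b ** 2)
    num = a * R2 + (a + 3.0 * zeta) * (a + zeta) ** 2
    den = (R2 + (a + zeta) ** 2) ** 2.5 * zeta ** 3
    return b ** 2 * M * num / (4.0 * np.pi * den)
```

Because both members of the pair are analytic, sums of such discs along the three symmetry axes give a fully analytic potential and density for the fitted galaxy models, which is what makes the expansion efficient.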
Memory efficient atmospheric effects modeling for infrared scene generators
Kavak, Çağlar; Özsaraç, Seçkin
2015-05-01
The infrared (IR) energy radiated from any source passes through the atmosphere before reaching the sensor. As a result, the total signature captured by the IR sensor is significantly modified by atmospheric effects. The dominant physical quantities that constitute these effects are the atmospheric transmittance and the atmospheric path radiance: the incoming IR radiation is attenuated by the transmittance, and the path radiance is added on top of the attenuated radiation. In IR scene simulations, OpenGL is widely used for rendering. In the literature there are studies that model the atmospheric effects in an IR band using OpenGL's exponential fog model, as suggested by Beer's law. In the standard OpenGL pipeline, the fog model needs single equivalent OpenGL variables for the transmittance and path radiance, which actually depend both on the distance between the source and the sensor and on the wavelength of interest. However, in conditions where the range dependency cannot be modeled as an exponential function, it is not accurate to replace the atmospheric quantities with a single parameter. The introduction of the OpenGL Shading Language (GLSL) has enabled developers to use the GPU more flexibly. In this paper, a novel method is proposed for atmospheric effects modeling using least squares polynomial fitting implemented in programmable OpenGL shader programs built with GLSL. In this context, a radiative transfer model code is used to obtain the transmittance and path radiance data. Then, polynomial fits are computed for the range dependency of these variables. Hence, the atmospheric effects model data that must be uploaded to GPU memory is significantly reduced. Moreover, the error due to fitting is negligible as long as narrow IR bands are used.
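The fitting step above can be sketched in a few lines: fit a low-order polynomial to transmittance-versus-range samples so that only the coefficients, rather than a dense lookup table, need to be uploaded to the GPU. The sample curve below is invented for illustration and merely stands in for narrow-band radiative-transfer output; the degree and the error threshold are likewise assumptions.

```python
import numpy as np

# Hypothetical transmittance-vs-range curve for one narrow IR band; a
# stand-in for radiative-transfer-code output (exponents made up so the
# decay is deliberately NOT a pure Beer's-law exponential).
range_km = np.linspace(0.1, 10.0, 200)
tau = np.exp(-0.12 * range_km - 0.02 * range_km ** 1.5)

deg = 4                                   # assumed polynomial degree
coeffs = np.polyfit(range_km, tau, deg)   # these would become shader uniforms
tau_fit = np.polyval(coeffs, range_km)
max_abs_err = np.max(np.abs(tau_fit - tau))
```

In a GLSL fragment shader the same five coefficients would be evaluated per fragment with Horner's scheme, which is the memory saving the paper describes: a handful of uniforms per band instead of a per-range table.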
Energy Technology Data Exchange (ETDEWEB)
Alonso Guerreiro, A.
2008-07-01
At the same time that energy demand grows faster than investment in electrical installations, the older capacity is reaching the end of its useful life. The need to run all that capacity without interruptions, together with efficient maintenance of its assets, are the two current key points for power generation, transmission and distribution systems. This paper presents a management model that makes effective asset management possible under strict cost control and addresses those key points. It is centred on predictive techniques, involves all departments of the organization, and goes beyond considering maintenance as the simple repair or replacement of broken-down units. A model with three basic lines therefore becomes necessary: supply guarantee, service quality and competitiveness, in order to allow companies to meet the current demands that characterize power supply. (Author) 5 refs.
An efficient approach for shadow detection based on Gaussian mixture model
Institute of Scientific and Technical Information of China (English)
韩延祥; 张志胜; 陈芳; 陈恺
2014-01-01
An efficient approach was proposed for discriminating shadows from moving objects. In the background subtraction stage, moving objects were extracted. Then, an initial classification of moving shadow pixels and foreground object pixels was performed using color-invariant features. In the shadow model learning stage, instead of a single Gaussian distribution, it was assumed that the density function computed on the values of chromaticity difference or brightness difference can be modeled as a mixture of two Gaussian density functions. The Gaussian parameters were estimated using the EM algorithm, and the estimates were used to obtain the shadow mask according to two constraints. Finally, experiments were carried out. The visual results confirm the effectiveness of the proposed method. Quantitative results in terms of the shadow detection rate and the shadow discrimination rate (maximum values 85.79% and 97.56%, respectively) show that the proposed approach achieves satisfying results with a post-processing step.
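The shadow-model learning stage reduces to fitting a two-component one-dimensional Gaussian mixture by EM. A minimal numpy version of that EM loop is sketched below; the initialization and iteration count are assumptions, and real inputs would be per-pixel chromaticity or brightness differences rather than the synthetic samples used here.

```python
import numpy as np

def em_gmm2(x, n_iter=100):
    """Plain EM for a two-component 1-D Gaussian mixture, the kind of
    model fitted to chromaticity/brightness differences of candidate
    shadow pixels (minimal sketch)."""
    mu = np.array([x.min(), x.max()], dtype=float)  # spread-out init
    var = np.full(2, x.var() + 1e-6)
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample
        pdf = pi / np.sqrt(2.0 * np.pi * var) * \
            np.exp(-((x[:, None] - mu) ** 2) / (2.0 * var))
        resp = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-9
    return pi, mu, var
```

With the fitted means and variances in hand, a pixel's difference value can be assigned to the shadow or foreground component by comparing the two weighted component likelihoods, which is where the paper's two constraints enter.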
Directory of Open Access Journals (Sweden)
Wolfgang Witteveen
2014-01-01
Full Text Available The mechanical response of multilayer sheet structures, such as leaf springs or car bodies, is largely determined by the nonlinear contact and friction forces between the sheets involved. Conventional computational approaches based on classical reduction techniques or the direct finite element approach have an inefficient balance between computational time and accuracy. In the present contribution, the method of trial vector derivatives is applied and extended in order to obtain a priori trial vectors for the model reduction which are suitable for determining the nonlinearities in the joints of the reduced system. Findings show that the result quality in terms of displacements and contact forces is comparable to the direct finite element method, but the computational effort is extremely low due to the model order reduction. Two numerical studies are presented to underline the method's accuracy and efficiency. In conclusion, this approach is discussed with respect to the existing body of literature.
Zimmerling, Jörn; Wei, Lei; Urbach, Paul; Remis, Rob
2016-06-01
In this paper we present a Krylov subspace model-order reduction technique for time- and frequency-domain electromagnetic wave fields in linear dispersive media. Starting point is a self-consistent first-order form of Maxwell's equations and the constitutive relation. This form is discretized on a standard staggered Yee grid, while the extension to infinity is modeled via a recently developed global complex scaling method. By applying this scaling method, the time- or frequency-domain electromagnetic wave field can be computed via a so-called stability-corrected wave function. Since this function cannot be computed directly due to the large order of the discretized Maxwell system matrix, Krylov subspace reduced-order models are constructed that approximate this wave function. We show that the system matrix exhibits a particular physics-based symmetry relation that allows us to efficiently construct the time- and frequency-domain reduced-order models via a Lanczos-type reduction algorithm. The frequency-domain models allow for frequency sweeps meaning that a single model provides field approximations for all frequencies of interest and dominant field modes can easily be determined as well. Numerical experiments for two- and three-dimensional configurations illustrate the performance of the proposed reduction method.
An empirical investigation of the efficiency effects of integrated care models in Switzerland
Directory of Open Access Journals (Sweden)
Oliver Reich
2012-01-01
Introduction: This study investigates the efficiency gains of integrated care models in Switzerland, since these models are regarded as cost containment options in national social health insurance. These plans generate much lower average health care expenditure than the basic insurance plan. The question is, however, to what extent these total savings are due to the effects of selection and efficiency. Methods: The empirical analysis is based on data from 399,274 Swiss residents that constantly had compulsory health insurance with the Helsana Group, the largest health insurer in Switzerland, covering the years 2006 to 2009. In order to evaluate the efficiency of the different integrated care models, we apply an econometric approach with a mixed-effects model. Results: Our estimations indicate that the efficiency effects of integrated care models on health care expenditure are significant. However, the different insurance plans vary, revealing the following efficiency gains per model: contracted capitated model 21.2%, contracted non-capitated model 15.5% and telemedicine model 3.7%. The remaining 8.5%, 5.6% and 22.5% respectively of the variation in total health care expenditure can be attributed to the effects of selection. Conclusions: Integrated care models have the potential to improve care for patients with chronic diseases and concurrently have a positive impact on health care expenditure. We suggest policy makers improve the incentives for patients with chronic diseases within the existing regulations, providing further potential for cost-efficiency of medical care.
Computationally efficient statistical differential equation modeling using homogenization
Hooten, Mevin B.; Garlick, Martha J.; Powell, James A.
2013-01-01
Statistical models using partial differential equations (PDEs) to describe dynamically evolving natural systems are appearing in the scientific literature with some regularity in recent years. Often such studies seek to characterize the dynamics of temporal or spatio-temporal phenomena such as invasive species, consumer-resource interactions, community evolution, and resource selection. Specifically, in the spatial setting, data are often available at varying spatial and temporal scales. Additionally, the necessary numerical integration of a PDE may be computationally infeasible over the spatial support of interest. We present an approach to impose computationally advantageous changes of support in statistical implementations of PDE models and demonstrate its utility through simulation using a form of PDE known as “ecological diffusion.” We also apply a statistical ecological diffusion model to a data set involving the spread of mountain pine beetle (Dendroctonus ponderosae) in Idaho, USA.
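The computationally advantageous change of support can be illustrated for a 1-D caricature of "ecological diffusion", u_t = (mu(x) u)_xx, where upscaling amounts to replacing the fine-scale coefficient mu by a harmonic-type average over each coarse cell. This is a sketch of the averaging step only, under that simplifying assumption, not the authors' full statistical implementation.

```python
import numpy as np

def upscale_mu(mu_fine, cells_per_block):
    """Harmonic-mean upscaling of a fine-grid diffusion coefficient onto a
    coarse grid: each coarse cell gets the harmonic mean of the fine-scale
    values it covers, so slow (low-mu) patches dominate, as homogenization
    theory requires for this equation."""
    mu_fine = np.asarray(mu_fine, dtype=float)
    blocks = mu_fine.reshape(-1, cells_per_block)  # one row per coarse cell
    return cells_per_block / (1.0 / blocks).sum(axis=1)
```

The coarse PDE can then be integrated on far fewer cells, which is what makes the statistical implementation (repeated PDE solves inside an MCMC loop) computationally feasible.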
Improved storage efficiency through geologic modeling and reservoir simulation
Energy Technology Data Exchange (ETDEWEB)
Ammer, J.R.; Mroz, T.H.; Covatch, G.L.
1997-11-01
The US Department of Energy (DOE), through partnerships with industry, is demonstrating the importance of geologic modeling and reservoir simulation for optimizing the development and operation of gas storage fields. The geologic modeling and reservoir simulation study for the National Fuel Gas Supply Corporation CRADA was completed in September 1995. The results of this study were presented at the 1995 Society of Petroleum Engineers' (SPE) Eastern Regional Meeting. Although there has been no field verification of the modeling results, the study has shown the potential advantages and cost savings opportunities of using horizontal wells for storage enhancement. The geologic modeling for the Equitrans CRADA was completed in September 1995 and was also presented at the 1995 SPE Eastern Regional Meeting. The reservoir modeling of past field performance was completed in November 1996, and prediction runs are currently being made to investigate the potential of offering either a 10 day or 30 day peaking service in addition to the existing 110 day base load service. Initial results have shown that peaking services can be provided through remediation of well damage and by drilling either several new vertical wells or one new horizontal well. The geologic modeling for the Northern Indiana Public Service Company CRADA was completed in November 1996, with a horizontal well being completed in January 1997. Based on well test results, the well will significantly enhance gas deliverability from the field and will allow the utilization of gas from an area of the storage field that was not accessible from their existing vertical wells. Results are presented from these three case studies.
Efficient Lattice-Based Signcryption in Standard Model
Directory of Open Access Journals (Sweden)
Jianhua Yan
2013-01-01
Signcryption is a cryptographic primitive that can perform digital signature and public-key encryption simultaneously at a significantly reduced cost. This advantage makes it highly useful in many applications. However, most existing signcryption schemes are seriously challenged by the rapid advance of quantum computing. As an interesting stepping stone in the post-quantum cryptographic community, two lattice-based signcryption schemes were proposed recently. But both of them were merely proved to be secure in the random oracle model. Therefore, the main contribution of this paper is to propose a new lattice-based signcryption scheme that can be proved to be secure in the standard model.
Pouly, Amaury; Graça, Daniel S
2012-01-01
Are analog models of computation more powerful than classical models of computation? From a series of recent papers, it is now clear that many realistic analog models of computation are provably equivalent to classical digital models of computation from a computability point of view. Take, for example, probably the most realistic model of analog computation, the General Purpose Analog Computer (GPAC) model from Claude Shannon, a model for Differential Analyzers, which are analog machines used from the 1930s to the early 1960s to solve various problems. It is now known that the functions computable by Turing machines are provably exactly those that are computable by the GPAC. This paper is about the next step: understanding whether this equivalence also holds at the complexity level. In this paper we show that a realistic model of analog computation -- namely the General Purpose Analog Computer (GPAC) -- can simulate Turing machines in a computationally efficient manner. More concretely we show that, modulo...
Roy, Vivekananda; Evangelou, Evangelos; Zhu, Zhengyuan
2016-03-01
Spatial generalized linear mixed models (SGLMMs) are popular models for spatial data with a non-Gaussian response. Binomial SGLMMs with logit or probit link functions are often used to model spatially dependent binomial random variables. It is known that for independent binomial data, the robit regression model provides a more robust (against extreme observations) alternative to the more popular logistic and probit models. In this article, we introduce a Bayesian spatial robit model for spatially dependent binomial data. Since constructing a meaningful prior on the link function parameter as well as the spatial correlation parameters in SGLMMs is difficult, we propose an empirical Bayes (EB) approach for the estimation of these parameters as well as for the prediction of the random effects. The EB methodology is implemented by efficient importance sampling methods based on Markov chain Monte Carlo (MCMC) algorithms. Our simulation study shows that the robit model is robust against model misspecification, and our EB method results in estimates with less bias than full Bayesian (FB) analysis. The methodology is applied to a Celastrus orbiculatus dataset and a Rhizoctonia root dataset. For the former, which is known to contain outlying observations, the robit model is shown to do better for predicting the spatial distribution of an invasive species. For the latter, our approach does as well as the classical models for predicting the severity of a root disease, as the probit link is shown to be appropriate. Though this article is written for binomial SGLMMs for brevity, the EB methodology is more general and can be applied to other types of SGLMMs. In the accompanying R package geoBayes, implementations for other SGLMMs such as Poisson and Gamma SGLMMs are provided.
Particle capture efficiency in a multi-wire model for high gradient magnetic separation
Eisenträger, Almut
2014-07-21
High gradient magnetic separation (HGMS) is an efficient way to remove magnetic and paramagnetic particles, such as heavy metals, from waste water. As the suspension flows through a magnetized filter mesh, high magnetic gradients around the wires attract and capture the particles removing them from the fluid. We model such a system by considering the motion of a paramagnetic tracer particle through a periodic array of magnetized cylinders. We show that there is a critical Mason number (ratio of viscous to magnetic forces) below which the particle is captured irrespective of its initial position in the array. Above this threshold, particle capture is only partially successful and depends on the particle's entry position. We determine the relationship between the critical Mason number and the system geometry using numerical and asymptotic calculations. If a capture efficiency below 100% is sufficient, our results demonstrate how operating the HGMS system above the critical Mason number but with multiple separation cycles may increase efficiency. © 2014 AIP Publishing LLC.
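The capture criterion can be illustrated with an overdamped tracer simulation. The 1/r^5 attraction below is a toy stand-in for the dipolar gradient force around a magnetized wire, and the geometry, time step, and thresholds are illustrative assumptions, not the paper's asymptotic analysis.

```python
import numpy as np

def is_captured(y0, mason, dt=1e-3, t_max=40.0):
    """Overdamped tracer in a uniform background flow past a magnetized wire
    of radius 1 at the origin. The Mason number scales viscous drag against
    the magnetic attraction: small Mason -> magnetism dominates -> capture."""
    pos = np.array([-5.0, y0])            # enter upstream at offset y0
    t = 0.0
    while t < t_max:
        r = np.linalg.norm(pos)
        if r <= 1.0:
            return True                   # reached the wire surface: captured
        if pos[0] > 5.0:
            return False                  # swept past the wire: escaped
        u_flow = np.array([1.0, 0.0])     # uniform flow in +x
        f_mag = -pos / r**6               # toy radial attraction, |f| ~ 1/r^5
        pos = pos + dt * (u_flow + f_mag / mason)  # force balance, no inertia
        t += dt
    return False
```

Scanning `mason` for a fixed entry offset reproduces the qualitative threshold the abstract describes: below a critical value the tracer is always pulled in, above it capture depends on `y0`.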
Efficient Execution Methods of Pivoting for Bulk Extraction of Entity-Attribute-Value-Modeled Data.
Luo, Gang; Frey, Lewis J
2016-03-01
Entity-attribute-value (EAV) tables are widely used to store data in electronic medical records and clinical study data management systems. Before they can be used by various analytical (e.g., data mining and machine learning) programs, EAV-modeled data usually must be transformed into conventional relational table format through pivot operations. This time-consuming and resource-intensive process is often performed repeatedly on a regular basis, e.g., to provide a daily refresh of the content in a clinical data warehouse. Thus, it would be beneficial to make pivot operations as efficient as possible. In this paper, we present three techniques for improving the efficiency of pivot operations: 1) filtering out EAV tuples related to unneeded clinical parameters early on; 2) supporting pivoting across multiple EAV tables; and 3) conducting multi-query optimization. We demonstrate the effectiveness of our techniques through implementation. We show that our optimized execution method of pivoting using these techniques significantly outperforms the current basic execution method of pivoting. Our techniques can be used to build a data extraction tool to simplify the specification of and improve the efficiency of extracting data from the EAV tables in electronic medical records and clinical study data management systems.
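Technique 1 (filtering out tuples for unneeded clinical parameters before pivoting) can be illustrated with a toy in-memory pivot. The tuple layout and attribute names below are illustrative assumptions, not the paper's schema or its database-level implementation.

```python
from collections import defaultdict

def pivot_eav(rows, needed):
    """Pivot EAV triples (entity, attribute, value) into one wide record per
    entity, discarding unneeded attributes early so they are never carried
    through the expensive pivot step."""
    wide = defaultdict(dict)
    for entity, attr, value in rows:
        if attr in needed:                 # early filtering (technique 1)
            wide[entity][attr] = value
    return dict(wide)

# Toy EAV table: one row per (patient, parameter, value) triple.
eav = [(1, "age", 54), (1, "weight", 81.5),
       (2, "age", 47), (2, "weight", 72.0), (2, "bp", 120)]
print(pivot_eav(eav, {"age", "weight"}))
# {1: {'age': 54, 'weight': 81.5}, 2: {'age': 47, 'weight': 72.0}}
```

In a real warehouse the same filtering is pushed into the SQL that scans the EAV tables, so the unneeded tuples never leave the storage layer.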
Energy Technology Data Exchange (ETDEWEB)
Lecuyer, Oskar [EDF R and D - Efese, 1 av du General de Gaulle, 92141 Clamart (France)] [CIRED, 45 bis av de la Belle-Gabrielle, 94736 Nogent-sur-Marne (France); Bibas, Ruben [CIRED, 45 bis av de la Belle-Gabrielle, 94736 Nogent-sur-Marne (France)
2012-01-15
In addition to the already present Climate and Energy package, the European Union (EU) plans to include a binding target to reduce energy consumption. We analyze the rationales the EU invokes to justify such an overlap and develop a minimal common framework to study interactions arising from the combination of instruments reducing emissions, promoting renewable energy (RE) production and reducing energy demand through energy efficiency (EE) investments. We find that although all instruments tend to reduce GHG emissions, and although a price on carbon also tends to give the right incentives for RE and EE, the combination of more than one instrument leads to significant antagonisms regarding major objectives of the policy package. The model allows us to show in a single framework, and to quantify, the antagonistic effects of the joint promotion of RE and EE. We also show and quantify the effects of this joint promotion on the ETS permit price, on the wholesale market price and on energy production levels. (authors)
A Computationally-Efficient Numerical Model to Characterize the Noise Behavior of Metal-Framed Walls
Arjunan, Arun; Wang, Chang; English, Martin; Stanford, Mark; Lister, Paul
2015-01-01
Architects, designers, and engineers are making great efforts to design acoustically-efficient metal-framed walls, minimizing acoustic bridging. Therefore, efficient simulation models to predict the acoustic insulation complying with ISO 10140 are needed at a design stage. In order to achieve this, a numerical model consisting of two fluid-filled reverberation chambers, partitioned using a metal-framed wall, is to be simulated at one-third-octaves. This produces a large simulation model consi...
Efficient Proof Engines for Bounded Model Checking of Hybrid Systems
DEFF Research Database (Denmark)
Fränzle, Martin; Herde, Christian
2005-01-01
In this paper we present HySat, a new bounded model checker for linear hybrid systems, incorporating a tight integration of a DPLL-based pseudo-Boolean SAT solver and a linear programming routine as core engine. In contrast to related tools like MathSAT, ICS, or CVC, our tool exploits all...
Practice what you preach: Microfinance business models and operational efficiency
Bos, J.W.B.; Millone, M.M.
2013-01-01
The microfinance sector is an example of a sector in which firms with different business models coexist. Next to pure for-profit microfinance institutions (MFIs), the sector has room for non-profit organizations, and includes 'social' for-profit firms that aim to maximize a double bottom line and
Generating efficient belief models for task-oriented dialogues
Taylor, J
1995-01-01
We have shown that belief modelling for dialogue can be simplified if the assumption is made that the participants are cooperating, i.e., they are not committed to any goals requiring deception. In such domains, there is no need to maintain individual representations of deeply nested beliefs; instead, three specific types of belief can be used to summarize all the states of nested belief that can exist about a domain entity. Here, we set out to design a "compiler" for belief models. This system will accept as input a description of agents' interactions with a task domain expressed in a fully-expressive belief logic with non-monotonic and temporal extensions. It generates an operational belief model for use in that domain, sufficient for the requirements of cooperative dialogue, including the negotiation of complex domain plans. The compiled model incorporates the belief simplification mentioned above, and also uses a simplified temporal logic of belief based on the restricted circumstances under which belie...
Techniques and tools for efficiently modeling multiprocessor systems
Carpenter, T.; Yalamanchili, S.
1990-01-01
System-level tools and methodologies associated with an integrated approach to the development of multiprocessor systems are examined. Tools for capturing initial program structure, automated program partitioning, automated resource allocation, and high-level modeling of the combined application and resource are discussed. The primary language focus of the current implementation is Ada, although the techniques should be appropriate for other programming paradigms.
Efficient sampling of Gaussian graphical models using conditional Bayes factors
Hinne, M.; Lenkoski, A.; Heskes, T.M.; Gerven, M.A.J. van
2014-01-01
Bayesian estimation of Gaussian graphical models has proven to be challenging because the conjugate prior distribution on the Gaussian precision matrix, the G-Wishart distribution, has a doubly intractable partition function. Recent developments provide a direct way to sample from the G-Wishart
Tenure Profiles and Efficient Separation in a Stochastic Productivity Model
I.S. Buhai (Sebastian); C.N. Teulings (Coen)
2005-01-01
This paper provides a new way of analyzing tenure profiles in wages, by modelling simultaneously the evolution of wages and the distribution of tenures. Starting point is the observation that within-job log wages for an individual can be described by a random walk. We develop a theoretical
CSIR Research Space (South Africa)
Meyers, BC
2011-09-01
These inconsistencies are especially great when combustion is simulated, since flow inconsistencies already arise when modeling the flow in cold-flow simulations. To enable the improvement of CFD modeling and techniques, a CFD test case has been created to aid... [7], attempts have to be made to ensure that as many as possible of the factors that influence the combustor flow are included in the tests. The combustor in which these experiments were performed is a full, non-premixed, cylindrical, can-type combustor...
Application of a mixed DEA model to evaluate relative efficiency validity
Institute of Scientific and Technical Information of China (English)
Anonymous
2005-01-01
Data envelopment analysis (DEA) is widely used to evaluate the relative efficiency of producers. It is a kind of objective decision method with multiple indexes. However, the two basic models frequently used at present, the C2R model and the C2GS2 model, have limitations when used alone, resulting in evaluations that are often unsatisfactory. In order to solve this problem, a mixed DEA model is built and is used to evaluate the validity of the business efficiency of listed companies. An explanation of how to use this mixed DEA model is offered and its feasibility is verified.
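The C2R (CCR) evaluation underlying such mixed models is, per decision-making unit (DMU), one linear program. A minimal input-oriented envelopment-form sketch using `scipy.optimize.linprog`; the toy one-input, one-output data are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o.
    X: (n_dmu, n_inputs) input matrix, Y: (n_dmu, n_outputs) output matrix.
    Solves  min theta  s.t.  X^T lam <= theta * x_o,  Y^T lam >= y_o,  lam >= 0.
    Returns theta in (0, 1]; theta = 1 means the DMU is on the frontier."""
    n = X.shape[0]
    c = np.zeros(1 + n)
    c[0] = 1.0                                           # minimize theta
    A_in = np.hstack([-X[o:o + 1].T, X.T])               # inputs:  X^T lam - theta x_o <= 0
    A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T]) # outputs: -Y^T lam <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(X.shape[1]), -Y[o]])
    bounds = [(None, None)] + [(0, None)] * n            # theta free, lam >= 0
    return linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds).x[0]
```

With two DMUs producing the same output from inputs 1 and 2, the second DMU scores 0.5: it could shrink its input by half and still be dominated by the frontier unit.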
DEFF Research Database (Denmark)
Kempf, Stefan J; Metaxas, Athanasios; Ibáñez-Vea, María
2016-01-01
The aim of this study was to elucidate the molecular signature of Alzheimer's disease-associated amyloid pathology.We used the double APPswe/PS1ΔE9 mouse, a widely used model of cerebral amyloidosis, to compare changes in proteome, including global phosphorylation and sialylated N-linked glycosyl...
Snopkowski, Kristin; Moya, Cristina; Sear, Rebecca
2014-08-07
Menopause remains an evolutionary puzzle, as humans are unique among primates in having a long post-fertile lifespan. One model proposes that intergenerational conflict in patrilocal populations favours female reproductive cessation. This model predicts that women should experience menopause earlier in groups with an evolutionary history of patrilocality compared with matrilocal groups. Using data from the Indonesia Family Life Survey, we test this model at multiple timescales: deep historical time, comparing age at menopause in ancestrally patrilocal Chinese Indonesians with ancestrally matrilocal Austronesian Indonesians; more recent historical time, comparing age at menopause in ethnic groups with differing postmarital residence within Indonesia; and finally, analysing age at menopause at an individual level, assuming a woman facultatively adjusts her age at menopause based on her postmarital residence. We find a significant effect only at the intermediate timescale where, contrary to predictions, ethnic groups with a history of multilocal postnuptial residence (where couples choose where to live) have the slowest progression to menopause, whereas matrilocal and patrilocal ethnic groups have similar progression rates. Multilocal residence may reduce intergenerational conflicts between women, thus influencing reproductive behaviour, but our results provide no support for the female-dispersal model of intergenerational conflict as an explanation of menopause.
Collins, Michael J.
2001-01-01
Presents a remarkable demonstration of chirality in molecules and the existence of enantiomers, also known as non-superimposable mirror images. Uses a mirror, a physical model of a molecule, and a bit of trickery involving the non-superimposable mirror image. (Author/NB)
Goldhaber, Dan; Chaplin, Duncan
2012-01-01
In a provocative and influential paper, Jesse Rothstein (2010) finds that standard value-added models (VAMs) suggest implausible future teacher effects on past student achievement, a finding that obviously cannot be viewed as causal. This is the basis of a falsification test (the Rothstein falsification test) that appears to indicate bias in VAM…
Brito, Marlon V; de Oliveira, Cleide; Salu, Bruno R; Andrade, Sonia A; Malloy, Paula M D; Sato, Ana C; Vicente, Cristina P; Sampaio, Misako U; Maffei, Francisco H A; Oliva, Maria Luiza V
2014-05-01
The Bauhinia bauhinioides Kallikrein Inhibitor (BbKI) is a Kunitz-type serine peptidase inhibitor of plant origin that has been shown to impair the viability of some tumor cells and to feature a potent inhibitory activity against human and rat plasma kallikrein (Kiapp 2.4 nmol/L and 5.2 nmol/L, respectively). This inhibitory activity is possibly responsible for an effect on hemostasis by prolonging activated partial thromboplastin time (aPTT). Because the association between cancer and thrombosis is well established, we evaluated the possible antithrombotic activity of this protein in venous and arterial thrombosis models. Vein thrombosis was studied in the vena cava ligature model in Wistar rats, and arterial thrombosis in the photochemically induced endothelial lesion model in the carotid artery of C57BL/6 mice. BbKI at a concentration of 2.0 mg/kg reduced the venous thrombus weight by 65% in treated rats in comparison to rats in the control group. The inhibitor prolonged the time to total artery occlusion in the carotid artery model in mice, indicating that this potent plasma kallikrein inhibitor prevented thrombosis.
Directory of Open Access Journals (Sweden)
Qing Yang
2015-04-01
In order to realize economic and social green development, to pave a pathway towards China's green regional development and to develop effective scientific policy to assist in building green cities and countries, it is necessary to put forward a relatively accurate, scientific and concise green assessment method. The research uses the CCR (A. Charnes, W. W. Cooper & E. Rhodes) Data Envelopment Analysis (DEA) model to obtain the green development frontier surface based on 31 regions' annual cross-section data from 2008-2012. Furthermore, in order to rank the regions whose assessment values equal 1 in the CCR model, we chose the Super-Efficiency DEA model for further sorting. Meanwhile, according to the five-year panel data, the green development efficiency changes of the 31 regions can be captured by the Malmquist index. Finally, the study assesses the reasons for regional differences; analyzing and discussing the results may point to a superior green development pathway for China.
Efficient Model for Distributed Computing based on Smart Embedded Agent
Directory of Open Access Journals (Sweden)
Hassna Bensag
2017-02-01
Technological advances of embedded computing have exposed humans to an increasing intrusion of computing in their day-to-day life (e.g., smart devices). Cooperation, autonomy, and mobility make the agent a promising mechanism for embedded devices. The work aims to present a new model of an embedded agent designed to be implemented in smart devices in order to achieve parallel tasks in a distributed environment. To validate the proposed model, a case study was developed for medical image segmentation using Cardiac Magnetic Resonance Imaging (MRI). In the first part of this paper, we focus on implementing the parallel algorithm of classification using the C-means method in embedded systems. We then propose a new concept of distributed classification using multi-agent systems based on JADE and Raspberry Pi 2 devices.
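The distributed C-means classification can be sketched as a data-parallel loop: each agent labels its own chunk of pixels and reports partial sums, which a coordinator merges into new centroids. A minimal hard C-means (k-means) sketch, assuming a simple chunk-per-agent split in one process rather than the paper's JADE/Raspberry Pi deployment.

```python
import numpy as np

def agent_partial_sums(chunk, centroids):
    """Work done by one agent: assign its chunk of samples to the nearest
    centroid, then return per-cluster partial sums and counts for merging."""
    d = np.linalg.norm(chunk[:, None, :] - centroids[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    k = len(centroids)
    sums = np.array([chunk[labels == j].sum(axis=0) for j in range(k)])
    counts = np.bincount(labels, minlength=k)
    return sums, counts

def distributed_cmeans(data, k, n_agents, iters=20, seed=0):
    """Coordinator loop: split the data among agents, gather partial sums,
    merge them into updated centroids, repeat."""
    rng = np.random.default_rng(seed)
    centroids = data[rng.choice(len(data), k, replace=False)]
    chunks = np.array_split(data, n_agents)      # one chunk per agent
    for _ in range(iters):
        partials = [agent_partial_sums(c, centroids) for c in chunks]
        sums = sum(p[0] for p in partials)
        counts = sum(p[1] for p in partials)
        centroids = sums / np.maximum(counts, 1)[:, None]  # guard empty clusters
    return centroids
```

Because only the small (sums, counts) pairs cross agent boundaries, the per-iteration communication cost is independent of the image size, which is the property that makes the scheme attractive on constrained devices.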
Efficient numerical modeling of the cornea, and applications
Gonzalez, L.; Navarro, Rafael M.; Hdez-Matamoros, J. L.
2004-10-01
Corneal topography has proven to be an essential tool in the ophthalmology clinic, both in diagnosis and in custom treatments (refractive surgery, keratoplasty), and it also has strong potential in optometry. The post-processing and analysis of corneal elevation, or local curvature data, is a necessary step to refine the data and also to extract relevant information for the clinician. In this context a parametric cornea model is proposed, consisting of a surface described mathematically by two terms: a general ellipsoid corresponding to a regular base surface, expressed by a general quadric term located at an arbitrary position and free orientation in 3D space, and a second term, described by a Zernike polynomial expansion, which accounts for irregularities and departures from the basic geometry. The model has been validated, obtaining better adjustment of experimental data than other previous models. Among other potential applications, here we present the determination of the optical axis of the cornea by transforming the general quadric to its canonical form. This has permitted us to perform 3D registration of corneal topographical maps to improve the signal-to-noise ratio. Other basic and clinical applications are also explored.
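For the second term of such a two-surface model, fitting the Zernike expansion to the elevation left over after subtracting the base quadric is a linear least-squares problem. A sketch with the first six OSA-normalized Zernike polynomials; the sampling and the number of terms are illustrative assumptions, not the paper's fitting pipeline.

```python
import numpy as np

def zernike_basis(rho, theta):
    """First six OSA-normalized Zernike polynomials on the unit disk,
    evaluated at polar coordinates (rho, theta)."""
    return np.column_stack([
        np.ones_like(rho),                       # piston
        2 * rho * np.sin(theta),                 # vertical tilt
        2 * rho * np.cos(theta),                 # horizontal tilt
        np.sqrt(6) * rho**2 * np.sin(2 * theta), # oblique astigmatism
        np.sqrt(3) * (2 * rho**2 - 1),           # defocus
        np.sqrt(6) * rho**2 * np.cos(2 * theta), # vertical/horizontal astigmatism
    ])

def fit_residual(rho, theta, elevation_residual):
    """Least-squares Zernike coefficients for the elevation residual left
    after subtracting the base quadric (the model's irregular second term)."""
    B = zernike_basis(rho, theta)
    coeffs, *_ = np.linalg.lstsq(B, elevation_residual, rcond=None)
    return coeffs
```

Because the basis is linear in the coefficients, the fit is a single `lstsq` call, and synthetic residuals built from the basis are recovered exactly.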
Efficient Love wave modelling via Sobolev gradient steepest descent
Browning, Matt; Ferguson, John; McMechan, George
2016-05-01
A new method for finding solutions to ordinary differential equation boundary value problems is introduced, in which Sobolev gradient steepest descent is used to determine eigenfunctions and eigenvalues simultaneously in an iterative scheme. The technique is then applied to the 1-D Love wave problem. The algorithm has several advantages when computing dispersion curves. It avoids the problem of mode skipping, and can handle arbitrary Earth structure profiles in depth. For a given frequency range, computation times scale approximately as the square root of the number of frequencies, and the computation of dispersion curves can be implemented in a fully parallel manner over the modes involved. The steepest descent solutions are within a fraction of a per cent of the analytic solutions for the first 25 modes for a two-layer model. Since all corresponding eigenfunctions are computed along with the dispersion curves, the impact on group and phase velocity of the displacement behaviour with depth is thoroughly examined. The dispersion curves are used to compute synthetic Love wave seismograms that include many higher order modes. An example includes addition of attenuation to a model with a low-velocity zone, with values as low as Q = 20. Finally, a confirming comparison is made with a layer matrix method on the upper 700 km of a whole Earth model.
Directory of Open Access Journals (Sweden)
H. Wan
2014-09-01
This paper explores the feasibility of an experimentation strategy for investigating sensitivities in fast components of atmospheric general circulation models. The basic idea is to replace the traditional serial-in-time long-term climate integrations by representative ensembles of shorter simulations. The key advantage of the proposed method lies in its efficiency: since fewer days of simulation are needed, the computational cost is less, and because individual realizations are independent and can be integrated simultaneously, the new dimension of parallelism can dramatically reduce the turnaround time in benchmark tests, sensitivity studies, and model tuning exercises. The strategy is not appropriate for exploring sensitivity of all model features, but it is very effective in many situations. Two examples are presented using the Community Atmosphere Model, version 5. In the first example, the method is used to characterize sensitivities of the simulated clouds to time-step length. Results show that 3-day ensembles of 20 to 50 members are sufficient to reproduce the main signals revealed by traditional 5-year simulations. A nudging technique is applied to an additional set of simulations to help understand the contribution of physics-dynamics interaction to the detected time-step sensitivity. In the second example, multiple empirical parameters related to cloud microphysics and aerosol life cycle are perturbed simultaneously in order to find out which parameters have the largest impact on the simulated global mean top-of-atmosphere radiation balance. It turns out that 12-member ensembles of 10-day simulations are able to reveal the same sensitivities as seen in 4-year simulations performed in a previous study. In both cases, the ensemble method reduces the total computational time by a factor of about 15, and the turnaround time by a factor of several hundred. The efficiency of the method makes it particularly useful for the development of
Wan, H.; Rasch, P. J.; Zhang, K.; Qian, Y.; Yan, H.; Zhao, C.
2014-09-01
This paper explores the feasibility of an experimentation strategy for investigating sensitivities in fast components of atmospheric general circulation models. The basic idea is to replace the traditional serial-in-time long-term climate integrations with representative ensembles of shorter simulations. The key advantage of the proposed method lies in its efficiency: since fewer days of simulation are needed, the computational cost is less, and because individual realizations are independent and can be integrated simultaneously, the new dimension of parallelism can dramatically reduce the turnaround time in benchmark tests, sensitivity studies, and model tuning exercises. The strategy is not appropriate for exploring sensitivity of all model features, but it is very effective in many situations. Two examples are presented using the Community Atmosphere Model, version 5. In the first example, the method is used to characterize sensitivities of the simulated clouds to time-step length. Results show that 3-day ensembles of 20 to 50 members are sufficient to reproduce the main signals revealed by traditional 5-year simulations. A nudging technique is applied to an additional set of simulations to help understand the contribution of physics-dynamics interaction to the detected time-step sensitivity. In the second example, multiple empirical parameters related to cloud microphysics and aerosol life cycle are perturbed simultaneously in order to find out which parameters have the largest impact on the simulated global mean top-of-atmosphere radiation balance. It turns out that 12-member ensembles of 10-day simulations are able to reveal the same sensitivities as seen in 4-year simulations performed in a previous study. In both cases, the ensemble method reduces the total computational time by a factor of about 15, and the turnaround time by a factor of several hundred. The efficiency of the method makes it particularly useful for the development of high
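The cost arithmetic behind the short-ensemble strategy can be sketched directly. The member counts and run lengths below are taken from the abstract's first example; the 365-day model year is an assumption.

```python
def simulated_days(years=0, days=0):
    # Total simulated model days, assuming a 365-day model year.
    return years * 365 + days

def ensemble_savings(serial_days, n_members, member_days):
    """Compare a serial-in-time run against an ensemble of short runs.
    Returns (total-cost reduction factor, total ensemble days simulated)."""
    ensemble_days = n_members * member_days
    return serial_days / ensemble_days, ensemble_days

# 5-year serial integration vs. a 50-member ensemble of 3-day simulations:
factor, days = ensemble_savings(simulated_days(years=5), 50, 3)
```

Because the members are independent, the turnaround improvement on a parallel machine is roughly `serial_days / member_days` when all members run concurrently, which is consistent with the "factor of several hundred" quoted above.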
Lightning Detection Efficiency Analysis Process: Modeling Based on Empirical Data
Rompala, John T.
2005-01-01
A ground based lightning detection system employs a grid of sensors, which record and evaluate the electromagnetic signal produced by a lightning strike. Several detectors gather information on that signal's strength, time of arrival, and behavior over time. By coordinating the information from several detectors, an event solution can be generated. That solution includes the signal's point of origin, strength, and polarity. Determination of the location of the lightning strike uses algorithms based on long-used techniques of triangulation. Determination of the event's original signal strength relies on the behavior of the generated magnetic field over distance and time. In general, the signal from the event undergoes geometric dispersion and environmental attenuation as it progresses. Our knowledge of that radial behavior, together with the strength of the signal received by detecting sites, permits an extrapolation and evaluation of the original strength of the lightning strike. It also limits the detection efficiency (DE) of the network. For expansive grids with a sparse density of detectors, the DE varies widely over the area served. This limits the utility of the network in gathering information on regional lightning strike density and applying it to meteorological studies. A network of this type is a grid of four detectors in the Rondonian region of Brazil. The service area extends over a million square kilometers. Much of that area is covered by rain forests. Thus knowledge of lightning strike characteristics over the expanse is of particular value. I have been developing a process that determines the DE over the region [3]. In turn, this provides a way to produce lightning strike density maps, corrected for DE, over the entire region of interest. This report offers a survey of that development to date and a record of present activity.
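The extrapolation of original signal strength described above can be illustrated with a toy radial model: 1/r geometric dispersion combined with exponential environmental attenuation. The functional form, reference distance, and attenuation length here are illustrative assumptions, not the network's calibrated propagation law.

```python
import math

def received_strength(a0, r_km, atten_km=1000.0, r0_km=100.0):
    # Signal at distance r: 1/r geometric dispersion plus exponential
    # environmental attenuation, normalized to a reference range r0.
    return a0 * (r0_km / r_km) * math.exp(-r_km / atten_km)

def source_strength(a_received, r_km, atten_km=1000.0, r0_km=100.0):
    # Invert the radial model: extrapolate back from what a detector saw
    # at range r to the strike's original strength at the reference range.
    return a_received * (r_km / r0_km) * math.exp(r_km / atten_km)
```

Detector sensitivity thresholds applied to `received_strength` are what bound the network's detection efficiency at large range.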
Development of Efficient Models of Corona Discharges Around Tall Structures
Tucker, J.; Pasko, V. P.
2012-12-01
This work concerns numerical modeling of glow corona and streamer corona discharges that occur near tall ground structures under thunderstorm conditions. Glow corona can occur when the ambient electric field reaches modest values on the order of 0.2 kV/cm and when the electric field near sharp points of a ground structure rises above a geometry-dependent critical field required for ionization of air. Air is continuously ionized in a small region close to the surface of the structure, and ions diffuse out into the surrounding air, forming a corona. A downward leader approaching from a thundercloud causes a further increase in the electric field at ground level. If the electric field rises to the point where it can support formation of streamers in the air surrounding the tall structure, a streamer corona flash, or a series of streamer corona flashes, can be formed, significantly affecting the space charge configuration formed by the preceding glow corona. The streamer corona can heat the surrounding air enough to form a self-propagating thermalized leader that is launched upward from the tall structure. This leader travels upward towards the thundercloud and connects with the downward approaching leader, thus causing a lightning flash. Accurate time-dependent modeling of the charge configuration created by the glow and streamer corona discharges around a tall structure is an important component for understanding the sequence of events leading to lightning attachment to the tall structure. The present work builds on principal modeling ideas developed previously in [Aleksandrov et al., J. Phys. D: Appl. Phys., 38, 1225, 2005; Bazelyan et al., Plasma Sources Sci. Technol., 17, 024015, 2008; Kowalski, E. J., Honors Thesis, Penn State Univ., University Park, PA, May 2008; Tucker and Pasko, NSF EE REU Penn State Annual Res. J., 10, 13, 2012]. The non-stationary glow and streamer coronas are modeled in spherical geometry up to the point of initiation of the upward leader. The model
A hybrid model for the computationally-efficient simulation of the cerebellar granular layer
Directory of Open Access Journals (Sweden)
Anna eCattani
2016-04-01
Full Text Available The aim of the present paper is to efficiently describe the membrane potential dynamics of neural populations formed by species having a high density difference in specific brain areas. We propose a hybrid model whose main ingredients are a conductance-based model (ODE system) and its continuous counterpart (PDE system) obtained through a limit process in which the number of neurons confined in a bounded region of the brain tissue is sent to infinity. Specifically, in the discrete model, each cell is described by a set of time-dependent variables, whereas in the continuum model, cells are grouped into populations that are described by a set of continuous variables. Communications between populations, which translate into interactions among the discrete and the continuous models, are the essence of the hybrid model we present here. The cerebellum and cerebellum-like structures show in their granular layer a large difference in the relative density of neuronal species, making them a natural testing ground for our hybrid model. By reconstructing the ensemble activity of the cerebellar granular layer network and by comparing our results to a more realistic computational network, we demonstrate that our description of the network activity, even though it is not biophysically detailed, is still capable of reproducing salient features of neural network dynamics. Our modeling approach yields a significant computational cost reduction by increasing the simulation speed at least 270 times. The hybrid model reproduces interesting dynamics such as local microcircuit synchronization, traveling waves, center-surround, and time-windowing.
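The discrete, conductance-based side of such a hybrid model can be sketched with a minimal forward-Euler integrate-and-fire cell. All parameter values here are generic illustrations, not the paper's cerebellar granule cell model.

```python
def simulate_cell(i_ext, dt=0.1, t_end=50.0, c_m=1.0, g_leak=0.1,
                  e_leak=-65.0, v_thresh=-50.0, v_reset=-65.0):
    """Forward-Euler integration of C_m dV/dt = -g_leak (V - E_leak) + I_ext,
    with a spike-and-reset rule standing in for the full ionic currents."""
    v, spikes = e_leak, 0
    for _ in range(int(t_end / dt)):
        v += dt * (-g_leak * (v - e_leak) + i_ext) / c_m
        if v >= v_thresh:          # threshold crossing: emit spike, reset
            v, spikes = v_reset, spikes + 1
    return v, spikes
```

In the hybrid scheme, a population of such ODE cells exchanges input currents with the PDE (continuum) description rather than with every individual neighbor, which is where the computational saving comes from.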
Evaluation of the energy efficiency of enzyme fermentation by mechanistic modeling
DEFF Research Database (Denmark)
Albaek, Mads O.; Gernaey, Krist V.; Hansen, Morten S.
2012-01-01
Modeling biotechnological processes is key to obtaining increased productivity and efficiency. Particularly crucial to successful modeling of such systems is the coupling of the physical transport phenomena and the biological activity in one model. We have applied a model for the expression...... prediction. At different rates of agitation and aeration as well as headspace pressure, we can predict the energy efficiency of oxygen transfer, a key process parameter for economical production of industrial enzymes. An inverse relationship between the productivity and energy efficiency of the process...... was found. This modeling approach can be used by manufacturers to evaluate the enzyme fermentation process for a range of different process conditions with regard to energy efficiency....
Directory of Open Access Journals (Sweden)
Samal Zhussupbekova
2016-07-01
Full Text Available A validated animal model would assist with research on the immunological consequences of the chronic expression of stress keratins KRT6, KRT16, and KRT17, as observed in human pre-malignant hyperproliferative epithelium. Here we examine the keratin gene expression profile in skin from mice expressing the E7 oncoprotein of HPV16 (K14E7), demonstrating persistently hyperproliferative epithelium, in nontransgenic mouse skin, and in hyperproliferative actinic keratosis lesions from human skin. We demonstrate that K14E7 mouse skin overexpresses stress keratins in a similar manner to human actinic keratoses, that overexpression is a consequence of epithelial hyperproliferation induced by E7, and that overexpression further increases in response to injury. As stress keratins modify local immunity and epithelial cell function and differentiation, the K14E7 mouse model should permit study of how continued overexpression of stress keratins impacts epithelial tumor development and local innate and adaptive immunity.
Directory of Open Access Journals (Sweden)
Jian Ma
2014-01-01
Full Text Available Previous research has demonstrated the positive effect of creative human capital and its development on the development of the economy. Yet the technical efficiency of creative human capital and its effects are still under research. The authors estimate the technical efficiency value in the Chinese context, adjusted for environmental variables and statistical noise, by establishing a three-stage data envelopment analysis model using data from 2003 to 2010. The research results indicate that, in this period, the technical efficiency of creative human capital in China as a whole, and in its different regions and provinces, remains at a low level and could be improved. Moreover, technical inefficiency derives mostly from scale inefficiency and is rarely affected by pure technical efficiency. The research also examines the marked effects of environmental variables on technical efficiency, and it shows that different environmental variables differ in their effects. The expansion of the scale of education, development of a healthy environment, growth of GDP, development of skill training, and population migration could reduce the input of creative human capital and promote technical efficiency, while development of trade and institutional change, on the contrary, would block the input of creative human capital and the promotion of technical efficiency.
Study on the Supply Efficiency of Rural Public Service in China Based on Three-stage DEA Model
Institute of Scientific and Technical Information of China (English)
Zongbing; DENG; Chaoying; WU; Junliang; Zhang; Ju; WANG
2014-01-01
Improving the supply efficiency of rural public service is an important way to address the severe shortage of rural public service. In this article, we use a three-stage DEA model to carry out empirical research on the supply efficiency of rural public service in 31 provinces and regions of China. The results show that without controlling for exogenous environmental variables and random disturbances, the classic DEA method overestimates rural public service efficiency; after controlling for the impact of external environmental factors, the mean supply efficiency of rural public service in the 31 provinces and regions of China is 0.697; improved rural per capita income, population density, population size, and the educational level of residents are significantly favorable factors for enhancing the supply efficiency of rural public service, while an increase in the proportion of fiscal spending on rural public service to GDP plays no significant role in improving rural public service efficiency; according to their efficiency type, the provinces and regions should adopt measures, such as improving the management level or expanding the supply scale, to improve supply efficiency.
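The frontier idea behind DEA can be shown in its simplest single-input, single-output form: each decision-making unit's output/input ratio is benchmarked against the best ratio observed. The general three-stage model instead solves a linear program per unit and adjusts for environment and noise; the unit names and numbers below are purely illustrative.

```python
def dea_efficiency(units):
    """CCR-style efficiency for one input and one output: a unit scores 1.0
    (efficient) if it attains the best output-per-input ratio observed;
    all other units are scored relative to that frontier."""
    ratios = {name: output / inp for name, (inp, output) in units.items()}
    best = max(ratios.values())
    return {name: r / best for name, r in ratios.items()}

scores = dea_efficiency({"province_A": (2.0, 4.0),   # (input, output)
                         "province_B": (4.0, 4.0),
                         "province_C": (5.0, 8.0)})
```

With multiple inputs and outputs, the same benchmarking is done by maximizing a weighted output/input ratio per unit subject to all units' ratios staying at most 1, which is the LP the classic CCR model solves.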
Modeling the effect of substrate stoichiometry on microbial carbon use efficiency and soil C cycling
Abramoff, R. Z.; Tang, J.; Georgiou, K.; Brodie, E.; Torn, M. S.; Riley, W. J.
2015-12-01
Microorganisms degrade soil organic matter (SOM) and apportion newly acquired substrates into enzyme production, biomass growth, and respiration. The fraction of acquired substrate that is released into the atmosphere as heterotrophic respiration is determined by the microbial carbon use efficiency (CUE), commonly defined as the fraction of carbon uptake that is allocated to microbial growth and enzyme production. Despite recent demonstrations that changes in CUE can greatly affect predictions of global soil C stocks, most models do not incorporate process-level representation of CUE or how it varies with substrate stoichiometry. Here we introduce coupled C and N cycling into a prognostic CUE model that uses the dynamic energy budget theory to predict CUE at each time step. We solve this model over a range of substrate C:N to simulate the effects of N addition on CUE, and test the model against previously published measurements of CUE after nutrient enrichment with a range of substrates. We find that CUE declines with microbial N limitation due to C overflow and acquisition strategies that favor N immobilization. We also demonstrate that including an intracellular reserve pool in the model alleviates decreases in CUE by allowing excess C to be stored during periods of N limitation. Consistent with previous studies, we find that predictions of soil C stocks are highly sensitive to CUE. Furthermore, we show that interactive effects between substrate inputs and temperature result in a wide range of possible CUE values under global change scenarios.
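The CUE definition used above is a simple carbon budget identity. A sketch under that definition (growth plus enzyme production over total uptake), with invented fluxes:

```python
def carbon_use_efficiency(growth_c, enzyme_c, respiration_c):
    """CUE = (growth + enzyme production) / total C uptake; the remainder
    of uptake leaves the soil as heterotrophic respiration."""
    uptake = growth_c + enzyme_c + respiration_c
    return (growth_c + enzyme_c) / uptake

# Example fluxes in arbitrary units of C per unit time (not from the paper):
cue = carbon_use_efficiency(growth_c=3.0, enzyme_c=1.0, respiration_c=6.0)
respired = 1.0 - cue  # fraction of uptake returned to the atmosphere
```

The paper's point is that `cue` is not a constant: under N limitation, overflow respiration raises `respiration_c` relative to growth, pulling CUE down unless excess C can be routed to a reserve pool.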
Efficient Cartesian-grid-based modeling of rotationally symmetric bodies
DEFF Research Database (Denmark)
Shyroki, Dzmitry
2007-01-01
Axially symmetric waveguides, resonators, and scatterers of arbitrary cross section and anisotropy in the cross section can be modeled rigorously with use of 2-D Cartesian-grid based codes by means of mere redefinition of material permittivity and permeability profiles. The method is illustrated...... by the frequency-domain simulations of resonant modes in a circular-cylinder cavity with perfectly conducting walls, a shielded uniaxial anisotropic dielectric cylinder, and an open dielectric sphere for which, after proper implementation of the perfectly matched layer boundary conditions, the radiation quality factor...
Model-based efficiency evaluation of combine harvester traction drives
Directory of Open Access Journals (Sweden)
Steffen Häberle
2015-08-01
Full Text Available As part of the research, the drive train of combine harvesters is investigated in detail. Load and power distribution, energy consumption, and usage distribution are explicitly explored on two test machines. Based on the lessons learned during field operations, model-based studies of the energy saving potential in the traction train of combine harvesters can now be quantified. Beyond that, the virtual machine trial provides an opportunity to compare innovative drivetrain architectures and control solutions under reproducible conditions. As a result, an evaluation method is presented and generically used to draw comparisons under locally representative operating conditions.
Efficient Beam-Type Structural Modeling of Rotor Blades
DEFF Research Database (Denmark)
Couturier, Philippe; Krenk, Steen
2015-01-01
The present paper presents two recently developed numerical formulations which enable accurate representation of the static and dynamic behaviour of wind turbine rotor blades using little modeling and computational effort. The first development consists of an intuitive method to extract fully...... coupled six by six cross-section stiffness matrices with limited meshing effort. Secondly, an equilibrium based beam element accepting directly the stiffness matrices and accounting for large variations in geometry and material along the blade is presented. The novel design tools are illustrated...
Modeling Large Time Series for Efficient Approximate Query Processing
DEFF Research Database (Denmark)
Perera, Kasun S; Hahmann, Martin; Lehner, Wolfgang
2015-01-01
Evolving customer requirements and increasing competition force business organizations to store increasing amounts of data and query them for information at any given time. Due to the current growth of data volumes, timely extraction of relevant information becomes more and more difficult...... these issues, compression techniques have been introduced in many areas of data processing. In this paper, we outline a new system that does not query complete datasets but instead utilizes models to extract the requested information. For time series data we use Fourier and Cosine transformations and piece...
Labrijn, Aran F; Meesters, Joyce I; Bunce, Matthew; Armstrong, Anthony A; Somani, Sandeep; Nesspor, Tom C; Chiu, Mark L; Altintaş, Işil; Verploegen, Sandra; Schuurman, Janine; Parren, Paul W H I
2017-05-30
Therapeutic concepts exploiting tumor-specific antibodies are often established in pre-clinical xenograft models using immuno-deficient mice. More complex therapeutic paradigms, however, warrant the use of immuno-competent mice, that more accurately capture the relevant biology that is being exploited. These models require the use of (surrogate) mouse or rat antibodies to enable optimal interactions with murine effector molecules. Immunogenicity is furthermore decreased, allowing longer-term treatment. We recently described controlled Fab-arm exchange (cFAE) as an easy-to-use method for the generation of therapeutic human IgG1 bispecific antibodies (bsAb). To facilitate the investigation of dual-targeting concepts in immuno-competent mice, we now applied and optimized our method for the generation of murine bsAbs. We show that the optimized combinations of matched point-mutations enabled efficient generation of murine bsAbs for all subclasses studied (mouse IgG1, IgG2a and IgG2b; rat IgG1, IgG2a, IgG2b, and IgG2c). The mutations did not adversely affect the inherent effector functions or pharmacokinetic properties of the corresponding subclasses. Thus, cFAE can be used to efficiently generate (surrogate) mouse or rat bsAbs for pre-clinical evaluation in immuno-competent rodents.
An efficient simulator of 454 data using configurable statistical models
Directory of Open Access Journals (Sweden)
Persson Bengt
2011-10-01
Full Text Available Abstract Background Roche 454 is one of the major 2nd generation sequencing platforms. The particular characteristics of 454 sequence data pose new challenges for bioinformatic analyses, e.g. assembly and alignment search algorithms. Simulation of these data is therefore useful, in order to further assess how bioinformatic applications and algorithms handle 454 data. Findings We developed a new application named 454sim for simulation of 454 data at high speed and accuracy. The program is multi-thread capable and is available as C++ source code or pre-compiled binaries. Sequence reads are simulated by 454sim using a set of statistical models for each chemistry. 454sim simulates recorded peak intensities, peak quality deterioration and it calculates quality values. All three generations of the Roche 454 chemistry ('GS20', 'GS FLX' and 'Titanium') are supported and defined in external text files for easy access and tweaking. Conclusions We present a new platform independent application named 454sim. 454sim is generally 200 times faster compared to previous programs and it allows for simple adjustments of the statistical models. These improvements make it possible to carry out more complex and rigorous algorithm evaluations in a reasonable time scale.
IS CAPM AN EFFICIENT MODEL? ADVANCED VERSUS EMERGING MARKETS
Directory of Open Access Journals (Sweden)
Iulian IHNATOV
2015-10-01
Full Text Available CAPM is one of the financial models most widely used by investors all over the world for analyzing the correlation between risk and return, and it is considered a milestone in the financial literature. However, in recent years it has been criticized for the unrealistic assumptions it is based on and for the fact that the expected returns it forecasts are wrong. The aim of this paper is to statistically test CAPM for a set of shares listed on the New York Stock Exchange, Nasdaq, Warsaw Stock Exchange, and Bucharest Stock Exchange (developed markets vs. emerging markets) and to compare the expected returns resulting from CAPM with the actual returns. Thereby, we intend to verify whether the model holds for the Central and Eastern European capital market, mostly dominated by Poland, and whether the Polish and Romanian stock market indices may faithfully be represented as market portfolios. Moreover, we intend to make a comparison between the results for Poland and Romania. After carrying out the analysis, the results confirm that CAPM is statistically verified for all three capital markets, but it fails to correctly forecast the expected returns. This means that investors can make poor investments, bringing them large losses.
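The test in the abstract rests on the security market line. A minimal sketch of its two ingredients, an OLS beta estimate and the CAPM expected return (the sample returns are invented):

```python
def beta(asset_returns, market_returns):
    """OLS beta: sample covariance with the market over market variance."""
    n = len(asset_returns)
    ma = sum(asset_returns) / n
    mm = sum(market_returns) / n
    cov = sum((a - ma) * (m - mm)
              for a, m in zip(asset_returns, market_returns)) / n
    var = sum((m - mm) ** 2 for m in market_returns) / n
    return cov / var

def capm_expected_return(risk_free, beta_i, market_return):
    """Security market line: E[r_i] = r_f + beta_i * (E[r_m] - r_f)."""
    return risk_free + beta_i * (market_return - risk_free)
```

Comparing `capm_expected_return` against realized returns per share is exactly the gap the paper measures when it says the model "fails to correctly forecast the expected returns".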
Hines, David E.; Lisa, Jessica A.; Song, Bongkeun; Tobias, Craig R.; Borrett, Stuart R.
2012-06-01
Estuaries serve important ecological and economic functions including habitat provision and the removal of nutrients. Eutrophication can overwhelm the nutrient removal capacity of estuaries and poses a widely recognized threat to the health and function of these ecosystems. Denitrification and anaerobic ammonium oxidation (anammox) are microbial processes responsible for the removal of fixed nitrogen and diminish the effects of eutrophication. Both of these microbial removal processes can be influenced by direct inputs of dissolved inorganic nitrogen substrates or supported by microbial interactions with other nitrogen transforming pathways such as nitrification and dissimilatory nitrate reduction to ammonium (DNRA). The coupling of nitrogen removal pathways to other transformation pathways facilitates the removal of some forms of inorganic nitrogen; however, differentiating between direct and coupled nitrogen removal is difficult. Network modeling provides a tool to examine interactions among microbial nitrogen cycling processes and to determine the within-system history of nitrogen involved in denitrification and anammox. To examine the coupling of nitrogen cycling processes, we built a nitrogen budget mass balance network model in two adjacent 1 cm3 sections of bottom water and sediment in the oligohaline portion of the Cape Fear River Estuary, NC, USA. Pathway, flow, and environ ecological network analyses were conducted to characterize the organization of nitrogen flow in the estuary and to estimate the coupling of nitrification to denitrification and of nitrification and DNRA to anammox. Centrality analysis indicated NH4+ is the most important form of nitrogen involved in removal processes. The model analysis further suggested that direct denitrification and coupled nitrification-denitrification had similar contributions to nitrogen removal while direct anammox was dominant to coupled forms of anammox. Finally, results also indicated that partial
Efficient speaker verification using Gaussian mixture model component clustering.
Energy Technology Data Exchange (ETDEWEB)
De Leon, Phillip L. (New Mexico State University, Las Cruces, NM); McClanahan, Richard D.
2012-04-01
In speaker verification (SV) systems that employ a support vector machine (SVM) classifier to make decisions on a supervector derived from Gaussian mixture model (GMM) component mean vectors, a significant portion of the computational load is involved in the calculation of the a posteriori probability of the feature vectors of the speaker under test with respect to the individual component densities of the universal background model (UBM). Further, the calculation of the sufficient statistics for the weight, mean, and covariance parameters derived from these same feature vectors also contributes a substantial processing load to the SV system. In this paper, we propose a method that utilizes clusters of GMM-UBM mixture component densities in order to reduce the computational load required. In the adaptation step we score the feature vectors against the clusters, calculate the a posteriori probabilities, and update the statistics exclusively for mixture components belonging to the appropriate clusters. Each cluster is a grouping of multivariate normal distributions and is modeled by a single multivariate distribution. As such, the set of multivariate normal distributions representing the different clusters also forms a GMM. This GMM is referred to as a hash GMM, which can be considered a lower-resolution representation of the GMM-UBM. The mapping that associates the components of the hash GMM with components of the original GMM-UBM is referred to as a shortlist. This research investigates various methods of clustering the components of the GMM-UBM and forming hash GMMs. Of the five methods presented, one, Gaussian mixture reduction as proposed by Runnalls, easily outperformed the others. This method of Gaussian reduction iteratively reduces the size of a GMM by successively merging pairs of component densities. Pairs are selected for merger by using a Kullback-Leibler based metric. Using Runnalls' method of reduction, we
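The pairwise merge at the heart of Runnalls-style mixture reduction is moment-preserving. It is shown here for 1-D components for clarity (the UBM components in the paper are multivariate), together with the KL-based bound used to rank candidate pairs.

```python
import math

def merge_gaussians(w1, mu1, var1, w2, mu2, var2):
    # Moment-preserving merge: the single resulting Gaussian matches the
    # pair's total weight, mean, and second moment.
    w = w1 + w2
    mu = (w1 * mu1 + w2 * mu2) / w
    var = (w1 * (var1 + mu1 ** 2) + w2 * (var2 + mu2 ** 2)) / w - mu ** 2
    return w, mu, var

def merge_cost(w1, var1, w2, var2, var_merged):
    # 1-D form of Runnalls' KL-based upper bound on the discrepancy a merge
    # introduces; reduction repeatedly merges the cheapest pair.
    return 0.5 * ((w1 + w2) * math.log(var_merged)
                  - w1 * math.log(var1) - w2 * math.log(var2))
```

Merging two identical components costs zero, while merging well-separated components inflates `var_merged` and hence the cost, which is why the metric preserves the mixture's shape as long as possible.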
Efficiency of a Care Coordination Model: A Randomized Study with Stroke Patients
Claiborne, Nancy
2006-01-01
Objectives: This study investigated the efficiency of a social work care coordination model for stroke patients. Care coordination addresses patient care and treatment resources across the health care system to reduce risk, improve clinical outcomes, and maximize efficiency. Method: A randomly assigned, pre-post experimental design measured…
DEFF Research Database (Denmark)
Larsen, Ulrik; Pierobon, Leonardo; Wronski, Jorrit;
2014-01-01
to power. In this study we propose four linear regression models to predict the maximum obtainable thermal efficiency for simple and recuperated ORCs. A previously derived methodology is able to determine the maximum thermal efficiency among many combinations of fluids and processes, given the boundary...
Temming, A.
1994-01-01
A simple modification of Pauly's model for relating food conversion efficiency (K1) and body weight is proposed. The key parameter is an index of how efficiently food can be absorbed; the other parameter is related to surface-limiting growth, an important component of von Bertalanffy's and Pauly's theories of fish growth.
Pipeline for Efficient Mapping of Transcription Factor Binding Sites and Comparison of Their Models
Ba alawi, Wail
2011-06-01
The control of genes in every living organism is based on the activities of transcription factor (TF) proteins. These TFs interact with DNA by binding to TF binding sites (TFBSs) and in that way create the conditions for genes to activate. Of the approximately 1500 TFs in humans, TFBSs have been experimentally derived for fewer than 300 TFs, and generally only in limited portions of the genome. To associate TFs with the genes they control, we need to know whether a TF has the potential to interact with the control region of the gene. For this we need models of TFBS families. The existing models are not sufficiently accurate, or they are too complex for use by ordinary biologists. To remove some of the deficiencies of these models, in this study we developed a pipeline through which we achieved the following: 1. Through a comparative performance analysis, we identified the best models with optimized thresholds among four different types of models of TFBS families. 2. Using the best models, we mapped TFBSs to the human genome in an efficient way. The study shows that a new scoring function used with TFBS models based on the position weight matrix of dinucleotides with remote dependency results in better accuracy than the other three types of TFBS models. The speed of mapping has been improved by developing parallelized code, showing a significant speed-up of 4x when going from 1 CPU to 8 CPUs. To verify whether the predicted TFBSs are more accurate than what can be expected with the conventional models, we identified the most frequent pairs of TFBSs (for TFs E4F1 and ATF6) that appeared close to each other (within a distance of 200 nucleotides) over the human genome. We show, unexpectedly, that the genes closest to the multiple pairs of E4F1/ATF6 binding sites have a co-expression of over 90%. This indirectly supports our hypothesis that the TFBS models we use are more accurate and also suggests that the E4F1/ATF6 pair is exerting the
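A dinucleotide position weight matrix scores overlapping base pairs rather than single bases, which is how such models capture neighbor dependencies. The matrix below is a toy illustration with invented log-odds weights, not a trained E4F1 or ATF6 model.

```python
def score_site(seq, dinuc_pwm):
    """Score a candidate site against a dinucleotide PWM: position i
    contributes the weight dinuc_pwm[i][seq[i:i+2]] for the overlapping
    dinucleotide starting at i."""
    return sum(dinuc_pwm[i][seq[i:i + 2]] for i in range(len(seq) - 1))

# Toy 3-bp model: two dinucleotide positions, log-odds weights invented.
toy_pwm = [{"AC": 1.2, "AA": -0.3, "GC": 0.1},
           {"CG": 2.0, "CA": -0.5, "CT": 0.4}]
score = score_site("ACG", toy_pwm)  # 1.2 + 2.0
```

Genome-wide mapping then slides this scorer along the sequence and keeps windows whose score clears the model's optimized threshold, which is the step the pipeline parallelizes.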
Modeling low cost hybrid tandem photovoltaics with the potential for efficiencies exceeding 20%
Beiley, Zach M.
2012-01-01
It is estimated that for photovoltaics to reach grid parity around the planet, they must be made with costs under $0.50 per Wp and must also achieve power conversion efficiencies above 20% in order to keep installation costs down. In this work we explore a novel solar cell architecture, a hybrid tandem photovoltaic (HTPV), and show that it is capable of meeting these targets. HTPV is composed of an inexpensive and low temperature processed solar cell, such as an organic or dye-sensitized solar cell, that can be printed on top of one of a variety of more traditional inorganic solar cells. Our modeling shows that an organic solar cell may be added on top of a commercial CIGS cell to improve its efficiency from 15.1% to 21.4%, thereby reducing the cost of the modules by ∼15% to 20% and the cost of installation by up to 30%. This suggests that HTPV is a promising option for producing solar power that matches the cost of existing grid energy. © 2012 The Royal Society of Chemistry.
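The grid-parity argument is a $/Wp calculation: a cheap printed top cell raises area cost a little but raises efficiency more. The efficiencies below are the 15.1% and 21.4% quoted above; the module area costs are invented for illustration.

```python
def cost_per_wp(module_cost_per_m2, efficiency, stc_irradiance=1000.0):
    # $/Wp at standard test conditions (1000 W/m^2): a module's peak watts
    # per square meter equal efficiency times irradiance.
    return module_cost_per_m2 / (efficiency * stc_irradiance)

cigs_only = cost_per_wp(100.0, 0.151)   # hypothetical $100/m^2 CIGS module
hybrid = cost_per_wp(115.0, 0.214)      # +$15/m^2 for a printed top cell
```

Under these assumed numbers the hybrid's $/Wp is lower despite the costlier module, which is the mechanism behind the ∼15% to 20% module cost reduction claimed in the abstract.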
An Efficient Data-driven Tissue Deformation Model
DEFF Research Database (Denmark)
Mosbech, Thomas Hammershaimb; Ersbøll, Bjarne Kjær; Christensen, Lars Bager
2009-01-01
empirical data; 10 pig carcasses are subjected to deformation from a controlled source imitating the cutting tool. The tissue deformation is quantified by means of steel markers inserted into the carcass as a three-dimensional lattice. For each subject marker displacements are monitored through two...... consecutive computed tomography images - before and after deformation; tracing corresponding markers provides accurate information on the tissue deformation. To enable modelling of the observed deformations, the displacements are parameterised applying methods from point-based registration...... find an association between the first principal mode and the lateral movement. Furthermore, there is a link between this and the ratio of meat-fat quantity - a potentially very useful finding since existing tools for carcass grading and sorting measure equivalent quantities....
Modeling of efficient solid-state cooler on layered multiferroics.
Starkov, Ivan; Starkov, Alexander
2014-08-01
We have developed theoretical foundations for the design and optimization of a solid-state cooler working through caloric and multicaloric effects. This approach is based on the careful consideration of the thermodynamics of a layered multiferroic system. The main section of the paper is devoted to the derivation and solution of the heat conduction equation for multiferroic materials. On the basis of the obtained results, we have performed the evaluation of the temperature distribution in the refrigerator under periodic external fields. A few practical examples are considered to illustrate the model. It is demonstrated that a 40-mm structure made of 20 ferroic layers is able to create a temperature difference of 25K. The presented work tries to address the whole hierarchy of physical phenomena to capture all of the essential aspects of solid-state cooling.
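The heat-conduction step at the core of such a layered-cooler model can be sketched with a simple explicit finite-difference scheme. The layer count, diffusivities, domain size and boundary temperatures below are illustrative placeholders loosely inspired by the abstract's figures, not values from the paper, and the caloric/multicaloric source terms are omitted entirely.

```python
import numpy as np

def heat_step(T, alpha, dx, dt):
    """One explicit finite-difference step of the 1D heat equation.
    alpha may vary per node to mimic a layered (multiferroic-like) stack."""
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + alpha[1:-1] * dt / dx**2 * (T[2:] - 2*T[1:-1] + T[:-2])
    return Tn

# Toy stack: 20 alternating layers over a 40 mm slab (values are illustrative).
n = 200
dx = 0.040 / n                       # 40 mm domain
alpha = np.where((np.arange(n) // (n // 20)) % 2 == 0, 1e-6, 5e-7)
dt = 0.4 * dx**2 / alpha.max()       # respect the explicit stability limit
T = np.full(n, 300.0)
T[0], T[-1] = 325.0, 300.0           # imposed 25 K difference at the ends
for _ in range(2000):
    T = heat_step(T, alpha, dx=dx, dt=dt)
    T[0], T[-1] = 325.0, 300.0       # re-apply Dirichlet boundaries
```

The stability factor 0.4 keeps the scheme below the classical 0.5 limit for the largest diffusivity in the stack.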
Minsker, B. S.; Zimmer, A. L.; Ostfeld, A.; Schmidt, A.
2014-12-01
Enabling real-time decision support, particularly under conditions of uncertainty, requires computationally efficient algorithms that can rapidly generate recommendations. In this paper, a suite of model predictive control (MPC) genetic algorithms are developed and tested offline to explore their value for reducing CSOs during real-time use in a deep-tunnel sewer system. MPC approaches include the micro-GA, the probability-based compact GA, and domain-specific GA methods that reduce the number of decision variable values analyzed within the sewer hydraulic model, thus reducing algorithm search space. Minimum fitness and constraint values achieved by all GA approaches, as well as computational times required to reach the minimum values, are compared to large population sizes with long convergence times. Optimization results for a subset of the Chicago combined sewer system indicate that genetic algorithm variations with coarse decision variable representation, eventually transitioning to the entire range of decision variable values, are most efficient at addressing the CSO control problem. Although diversity-enhancing micro-GAs evaluate a larger search space and exhibit shorter convergence times, these representations do not reach minimum fitness and constraint values. The domain-specific GAs prove to be the most efficient and are used to test CSO sensitivity to energy costs, CSO penalties, and pressurization constraint values. The results show that CSO volumes are highly dependent on the tunnel pressurization constraint, with reductions of 13% to 77% possible with less conservative operational strategies. Because current management practices may not account for varying costs at CSO locations and electricity rate changes in the summer and winter, the sensitivity of the results is evaluated for variable seasonal and diurnal CSO penalty costs and electricity-related system maintenance costs, as well as different sluice gate constraint levels. These findings indicate
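A micro-GA of the kind compared above can be sketched as follows: a tiny population is evolved to convergence without mutation, then restarted around the surviving elite so that the restarts supply diversity. The objective below is a toy quadratic stand-in for the CSO cost function, not the sewer hydraulic model, and all parameter choices are illustrative.

```python
import random

def micro_ga(fitness, n_vars, bounds, pop_size=5, restarts=40, gens=30):
    """Minimal micro-GA sketch: evolve a tiny population by elitism and
    uniform crossover, then restart with fresh random individuals."""
    best = [random.uniform(*bounds) for _ in range(n_vars)]
    best_f = fitness(best)
    for _ in range(restarts):
        pop = [best] + [[random.uniform(*bounds) for _ in range(n_vars)]
                        for _ in range(pop_size - 1)]
        for _ in range(gens):
            pop.sort(key=fitness)
            elite = pop[0]
            # uniform crossover of the elite with randomly chosen mates
            pop = [elite] + [
                [e if random.random() < 0.5 else m
                 for e, m in zip(elite, random.choice(pop[1:]))]
                for _ in range(pop_size - 1)]
        if fitness(pop[0]) < best_f:
            best, best_f = pop[0], fitness(pop[0])
    return best, best_f

# Toy "gate schedule" objective: a quadratic bowl standing in for CSO cost.
random.seed(1)
target = [0.3, 0.7, 0.5]
cost = lambda x: sum((a - b) ** 2 for a, b in zip(x, target))
sol, val = micro_ga(cost, n_vars=3, bounds=(0.0, 1.0))
```

Because there is no mutation operator, the restarts are what keeps the search from stalling, which is the defining trade-off of the micro-GA family discussed above.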
Directory of Open Access Journals (Sweden)
Belošević Srđan V.
2016-01-01
Pulverized coal-fired power plants should provide higher efficiency of energy conversion, flexibility in terms of boiler loads and fuel characteristics, and reduced emission of pollutants such as nitrogen oxides. Modification of the combustion process is a cost-effective technology for NOx control. For optimization of complex processes, such as turbulent reactive flow in coal-fired furnaces, mathematical modeling is regularly used. The NOx emission reduction by combustion modifications in the 350 MWe Kostolac B boiler furnace, tangentially fired by pulverized Serbian lignite, is investigated in the paper. Numerical experiments were done with an in-house developed three-dimensional differential comprehensive combustion code, with fuel- and thermal-NO formation/destruction reaction models. The code was developed to be easily used by engineering staff for process analysis in boiler units. A broad range of operating conditions was examined, such as fuel and preheated air distribution over the burners and tiers, operation mode of the burners, grinding fineness and quality of coal, boiler loads, cold air ingress, recirculation of flue gases, water-wall ash deposition, and the combined effect of different parameters. The predictions show that a NOx emission reduction of up to 30% can be achieved by proper combustion organization in the case-study furnace, with flame position control. The impact of combustion modifications on boiler operation was evaluated by boiler thermal calculations, suggesting that the facility should be controlled within narrow limits of operation parameters. Such a complex approach to pollutant control enables evaluating alternative solutions to achieve efficient and low-emission operation of utility boiler units. [Project of the Serbian Ministry of Science, no. TR-33018: Increase in energy and ecology efficiency of processes in pulverized coal-fired furnace and optimization of utility steam boiler air preheater by using in
Proposal for initial collection efficiency models for direct granular upflow filtration
Directory of Open Access Journals (Sweden)
Alexandre Botari
2015-05-01
Mathematical models of the filtration process are based on the mass balance in the filter bed. Models of the filtration phenomenon describe the mass balance in bed filtration in terms of particle removal mechanisms, and allow for the determination of global particle removal efficiencies. This phenomenon is defined in terms of the geometry and the characteristic elements of granule collectors, particles and fluid, and the composition of the balance of forces that act in the particle-collector system. This type of solution is known as trajectory analysis theory. Particle trajectory analysis by mathematical correlation of the dimensionless numbers that represent fluid and particle characteristics is considered the main approach for mathematically modeling the initial collection efficiency of particle removal in water filtration. The existing initial collection efficiency models are designed for downflow filtration. This study analyzes initial collection efficiency models and proposes an adaptation of these models to direct upflow filtration in a granular bed of coarse sand and gravel, taking into account the gravitational contribution to settling removal efficiency in the proposed initial collection efficiency models.
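For context, a classical Yao-type single-collector efficiency (Brownian diffusion + interception + sedimentation) can be computed as below. This is the textbook downflow formulation that such proposals adapt, with illustrative particle/collector parameters; it is not the upflow model proposed in the paper, and the 4.04·Pe^(-2/3) diffusion term is the simple Yao form rather than a porosity-corrected one.

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def single_collector_efficiency(dp, dc, U, T=293.0, mu=1.0e-3,
                                rho_p=1050.0, rho_f=998.0):
    """Classical Yao-type initial single-collector efficiency:
    sum of Brownian diffusion, interception and sedimentation terms."""
    D = KB * T / (3 * math.pi * mu * dp)   # Stokes-Einstein diffusivity
    Pe = U * dc / D                         # Peclet number
    eta_D = 4.04 * Pe ** (-2.0 / 3.0)       # Brownian diffusion
    eta_I = 1.5 * (dp / dc) ** 2            # interception
    vs = (rho_p - rho_f) * 9.81 * dp ** 2 / (18 * mu)  # Stokes settling
    eta_G = vs / U                          # sedimentation
    return eta_D + eta_I + eta_G

# A 1 µm particle on a 0.5 mm sand grain at 5 m/h approach velocity
eta = single_collector_efficiency(dp=1e-6, dc=5e-4, U=5 / 3600.0)
```

At these conditions the diffusion term dominates, which is the usual situation for micron and sub-micron particles in slow granular filtration.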
Global modeling of soil evaporation efficiency for a chosen soil type
Georgiana Stefan, Vivien; Mangiarotti, Sylvain; Merlin, Olivier; Chanzy, André
2016-04-01
One way of reproducing the dynamics of a system is to derive a set of differential, difference or discrete equations directly from observational time series. A method for obtaining such a system is the global modeling technique [1]. The approach is here applied to the dynamics of soil evaporative efficiency (SEE), defined as the ratio of actual to potential evaporation. SEE is an interesting variable to study since it is directly linked to soil evaporation (LE), which plays an important role in the water cycle, and since it can be easily derived from satellite measurements. One goal of the present work is to obtain a semi-empirical parameter that could account for the variety of SEE dynamical behaviors resulting from different soil properties. Before trying to obtain such a parameter with the global modeling technique, it is first necessary to prove that this technique can be applied to the dynamics of SEE without any a priori information. The global modeling technique is thus applied here to a synthetic series of SEE, reconstructed from the TEC (Transfert Eau Chaleur) model [2]. It is found that an autonomous chaotic model can be retrieved for the dynamics of SEE. The obtained model is four-dimensional and exhibits a complex behavior. The comparison of the original and the model phase portraits shows a very good consistency, proving that the original dynamical behavior is well described by the model. To evaluate the model accuracy, the forecasting error growth is estimated. To get a robust estimate of this error growth, the forecasting error is computed for prediction horizons of 0 to 9 hours starting from different initial conditions, and statistics of the error growth are computed. Results show that, for a maximum error level of 40% of the signal variance, the horizon of predictability is close to 3 hours, approximately one third of the diurnal part of the day. These results are interesting for various reasons. To the best of our knowledge
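The SEE variable and the error-growth statistic used above can be illustrated on a synthetic series. The diurnal forcing and exponential drying law below are invented stand-ins for the TEC-model output, and a naive persistence forecast is used in place of the global model's predictions.

```python
import numpy as np

# SEE is the ratio of actual (LE) to potential (LEp) evaporation; a
# synthetic diurnal series stands in for the TEC-model output.
t = np.arange(0, 72, 0.5)                        # hours, 3 days half-hourly
LEp = np.maximum(np.sin(2 * np.pi * t / 24), 0)  # potential evaporation (arb. units)
LE = 0.6 * LEp * np.exp(-t / 96)                 # drying soil evaporates less each day
SEE = np.divide(LE, LEp, out=np.zeros_like(LE), where=LEp > 0)

def error_growth(series, horizon):
    """Mean squared error of a persistence forecast at the given horizon
    (in samples), a stand-in for the model's forecasting-error statistics."""
    err = series[horizon:] - series[:-horizon]
    return np.mean(err ** 2)
```

The persistence error grows sharply once the horizon crosses the day/night transition, which is the kind of horizon-dependent degradation the study quantifies against signal variance.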
Yang, C; Yifan, L; Dan, L; Qian, Y; Ming-yan, J
2015-12-01
At present, most lipid-lowering drugs are Western medicines, which have many adverse reactions. Zhucha, an age-old Uyghur medicine, is made up of bamboo leaves and tea (green tea) and has good efficacy as a lipid-lowering agent. The purpose of this study was to undertake a pharmacodynamic examination of the optimal proportions of bamboo leaf flavones and tea polyphenols required to achieve lipid lowering in rats. A hyperlipidemia rat model was used to examine the lipid-lowering effects of bamboo leaf flavones and tea polyphenols. Wistar rats were divided into 13 groups: one hyperlipidemia model group, 2 positive drug groups, and 9 experimental groups dosed with different proportions of bamboo leaf flavones and tea polyphenols (the 3 dosages of bamboo leaf flavones were 75 mg/kg/d, 50 mg/kg/d and 25 mg/kg/d; the 3 dosages of tea polyphenols were 750 mg/kg/d, 500 mg/kg/d and 250 mg/kg/d). The weight and the levels of triglyceride (TG) and high-density lipoprotein cholesterol (HDL) were determined. A high dose of bamboo leaf flavones (75 mg/kg/d) combined with a medium dose of tea polyphenols (500 mg/kg/d) was deemed optimal for achieving a lipid-lowering effect: weight gain was smallest, and the levels of TG and HDL were similar to those of the positive controls. The bamboo leaf flavones and tea polyphenols were mixed in a fixed proportion (1:6.7), and the mixture achieved a lipid-lowering effect and might prove useful as a natural lipid-lowering agent.
Squitieri, Ferdinando; Di Pardo, Alba; Favellato, Mariagrazia; Amico, Enrico; Maglione, Vittorio; Frati, Luigi
2015-11-01
Huntington disease (HD) is a neurodegenerative disorder for which new treatments are urgently needed. Pridopidine is a new dopaminergic stabilizer, recently developed for the treatment of motor symptoms associated with HD. The therapeutic effect of pridopidine in patients with HD has been determined in two double-blind randomized clinical trials; however, whether pridopidine exerts neuroprotection remains to be addressed. The main goal of this study was to define the potential neuroprotective effect of pridopidine in HD in vivo and in vitro models, thus providing evidence that might support a potential disease-modifying action of the drug and possibly clarifying other aspects of pridopidine's mode of action. Our data corroborated the hypothesis of a neuroprotective action of pridopidine in HD experimental models. Administration of pridopidine protected cells from apoptosis and resulted in highly improved motor performance in R6/2 mice. The anti-apoptotic effect observed in the in vitro system highlighted the neuroprotective properties of the drug and advanced the idea of the sigma-1 receptor as an additional molecular target implicated in the mechanism of action of pridopidine. Consistent with these protective effects, pridopidine-mediated beneficial effects in R6/2 mice were associated with increased expression of pro-survival and neurostimulatory molecules, such as brain-derived neurotrophic factor and DARPP32, and with a reduction in the size of mHtt aggregates in striatal tissues. Taken together, these findings support the view of pridopidine as a molecule with disease-modifying properties in HD and advance the idea of a valuable therapeutic strategy for effectively treating the disease. © 2015 The Authors. Journal of Cellular and Molecular Medicine published by John Wiley & Sons Ltd and Foundation for Cellular and Molecular Medicine.
Directory of Open Access Journals (Sweden)
Marina Aiello Padilla
2015-01-01
Extracts from termite-associated bacteria were evaluated for in vitro antiviral activity against bovine viral diarrhea virus (BVDV). Two bacterial strains were identified as active, with percentages of inhibition (IP) equal to 98%. Both strains were subjected to functional analysis via the addition of virus and extract at different time points in cell culture; the results showed that they were effective as post-treatments. Moreover, we performed MTT colorimetric assays to identify the CC50, IC50, and SI values of these strains, and strain CDPA27 was considered the most promising. In parallel, the isolates were identified as Streptomyces through 16S rRNA gene sequencing analysis. Specifically, CDPA27 was identified as S. chartreusis. The CDPA27 extract was fractionated on a C18-E SPE cartridge, and the fractions were reevaluated. A 100% methanol fraction was identified to contain the compound(s) responsible for antiviral activity, which had an SI of 262.41. GC-MS analysis showed that this activity was likely associated with the compound(s) that had a peak retention time of 5 min. Taken together, the results of the present study provide new information for antiviral research using natural sources, demonstrate the antiviral potential of Streptomyces chartreusis compounds isolated from termite mounds against BVDV, and lay the foundation for further studies on the treatment of HCV infection.
The efficient global primitive equation climate model SPEEDO V2.0
Severijns, C.A.; Hazeleger, W.
2010-01-01
The efficient primitive-equation coupled atmosphere-ocean model SPEEDO V2.0 is presented. The model includes an interactive sea-ice and land component. SPEEDO is a global earth system model of intermediate complexity. It has a horizontal resolution of T30 (triangular truncation at wave number 30) an
Modelling and Understanding of Highly Energy Efficient Fluids
Thamali, R J K A; Liyanage, D D; Ukwatta, Ajith; Hewage, Jinasena; Witharana, Sanjeeva
2016-01-01
Conventional heat-carrier liquids have demonstrated remarkable enhancement in heat and mass transfer when nanoparticles were suspended in them. These liquid-nanoparticle suspensions are now known as nanofluids. However, the relationship between the nanoparticles and the degree of enhancement is still unclear, hindering their large-scale manufacture. Understanding the energy and flow behaviour of nanofluids is therefore of wide interest in both academic and industrial contexts. In this paper we first model the heat transfer of a nanofluid in convection in a circular tube at macro-scale using the CFD code OpenFOAM. Then we zoom into nano-scale behaviour using Molecular Dynamics (MD) simulation. In the latter we considered a system of water and gold nanoparticles. A systematic increase of convective heat transfer was observed with increasing nanoparticle concentration. A maximum enhancement of 7.0% was achieved in comparison to the base fluid, water. This occurred when the gold volume fraction was 0.0...
A relativistic model of electron cyclotron current drive efficiency in tokamak plasmas
Lin-Liu Y.R.; Hu Y.J.; Hu Y.M.
2012-01-01
A fully relativistic model of electron cyclotron current drive (ECCD) efficiency based on the adjoint function techniques is considered. Numerical calculations of the current drive efficiency in a tokamak by using the variational approach are performed. A fully relativistic extension of the variational principle with the modified basis functions for the Spitzer function with momentum conservation in the electron-electron collision is described in general tokamak geometry. The model developed ...
A DEA model for measuring efficiency adapted to the hotel sector
Escobar Pérez, Bernabé; Lobo Gallardo, Antonio; Otero Terrón, José I.
2012-01-01
The aim of this paper is to propose an improved model based on the technique of Data Envelopment Analysis (DEA) to measure hotel efficiency. For that purpose, an extensive literature review has been carried out, focused mainly on empirical research. As a result, the proposed model incorporates some new operational variables widely accepted within the sector, like the REVPAR indicator, and makes it possible to adapt the efficiency analysis to the current economic conditions and industry...
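The DEA computation underlying such models (here the input-oriented CCR envelopment form) can be sketched as a small linear program. The two-hotel data set below is invented, and the paper's sector-specific variables such as REVPAR are not included.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of unit o (envelopment form):
        min theta  s.t.  X @ lam <= theta * x_o,  Y @ lam >= y_o,  lam >= 0
    X is (n_inputs, n_units), Y is (n_outputs, n_units)."""
    m, n = X.shape
    s = Y.shape[0]
    # decision vector: [theta, lam_1 .. lam_n]
    c = np.r_[1.0, np.zeros(n)]
    A_ub = np.block([[-X[:, [o]], X],          # X lam - theta x_o <= 0
                     [np.zeros((s, 1)), -Y]])  # -Y lam <= -y_o
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

# Two hotels, one input (staff), one output (room revenue), invented numbers:
X = np.array([[2.0, 4.0]])
Y = np.array([[2.0, 2.0]])
eff = [dea_ccr_efficiency(X, Y, o) for o in range(2)]
```

The first hotel produces the same output with half the input, so it defines the frontier (efficiency 1) while the second scores 0.5.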
Zeng, Ming; Li, Zhi-Yong; Ma, Jin; Cao, Ping-Ping; Wang, Heng; Cui, Yong-Hua; Liu, Zheng
2015-06-06
Phenotype of chronic rhinosinusitis (CRS) may be an important determinant of the efficacy of anti-inflammatory treatments. Although both glucocorticoids and macrolide antibiotics have been recommended for the treatment of CRS, whether they have different anti-inflammatory functions for distinct phenotypic CRS is not completely understood. The aim of this study is to compare the anti-inflammatory effects of clarithromycin and dexamethasone on sinonasal mucosal explants from different phenotypic CRS ex vivo. Ethmoid mucosal tissues from CRSsNP patients (n = 15), and polyp tissues from eosinophilic (n = 13) and non-eosinophilic (n = 12) CRSwNP patients were cultured in an ex vivo explant model with or without dexamethasone or clarithromycin treatment for 24 h. After culture, the production and/or expression of anti-inflammatory molecules, epithelial-derived cytokines, pro-inflammatory cytokines, T helper (Th)1, Th2 and Th17 cytokines, chemokines, dendritic cell relevant markers, pattern recognition receptors (PRRs), and tissue remodeling factors were detected in tissue explants or culture supernatants by RT-PCR or ELISA, respectively. We found that both clarithromycin and dexamethasone up-regulated the production of anti-inflammatory mediators (Clara cell 10-kDa protein and interleukin (IL)-10), while down-regulating the production of Th2 response and eosinophilia promoting molecules (thymic stromal lymphopoietin, IL-25, IL-33, CD80, CD86, OX40 ligand, programmed cell death ligand 1, CCL17, CCL22, CCL11, CCL5, IL-5, IL-13, and eosinophilic cationic protein) and Th1 response and neutrophilia promoting molecules (CXCL8, CXCL5, CXCL10, CXCL9, interferon-γ, and IL-12), in sinonasal mucosa from distinct phenotypic CRS. In contrast, they had no effect on IL-17A production. The expression of PRRs (Toll-like receptors and melanoma differentiation-associated gene 5) was induced, and the production of tissue remodeling factors (transforming growth factor-β1
Liu, Jie; Liu, Chun; Han, Wei
2016-10-01
Urban soil pollution is evaluated using an efficient and simple algorithmic model, the entropy-method-based Topsis (EMBT) model. The model focuses on pollution source position to enhance the ability to analyze sources of pollution accurately. This constitutes an initial application of EMBT to urban soil pollution analysis. The pollution degree of a sampling point can be efficiently calculated by the model via a pollution degree coefficient, obtained by first using the Topsis method to determine an evaluation value and then dividing the sampling point's evaluation value by the background value. The Kriging interpolation method combines the coordinates of the sampling points with the corresponding coefficients and facilitates the formation of a heavy metal distribution profile. A case study is completed, with modeling results in accordance with the actual heavy metal pollution, demonstrating the accuracy and practicality of the EMBT model.
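The entropy-weighted Topsis step at the heart of such an EMBT-style evaluation can be sketched as below. The sampling-point data are invented, all criteria are treated uniformly as benefit-type, and the division by background values and the Kriging interpolation are omitted.

```python
import numpy as np

def entropy_topsis(X):
    """Entropy-weighted TOPSIS closeness score per row (sampling point);
    columns are positive-valued criteria (here: pollutant concentrations)."""
    m, n = X.shape
    # entropy weights: more dispersed criteria get larger weights
    P = X / X.sum(axis=0)
    E = -(P * np.log(P)).sum(axis=0) / np.log(m)
    w = (1 - E) / (1 - E).sum()
    # weighted, vector-normalised decision matrix
    V = w * X / np.sqrt((X ** 2).sum(axis=0))
    best, worst = V.max(axis=0), V.min(axis=0)
    d_plus = np.sqrt(((V - best) ** 2).sum(axis=1))
    d_minus = np.sqrt(((V - worst) ** 2).sum(axis=1))
    return d_minus / (d_plus + d_minus)

# Three sampling points, two heavy metals (mg/kg, illustrative values)
X = np.array([[12.0, 30.0], [40.0, 80.0], [15.0, 35.0]])
scores = entropy_topsis(X)
```

The second point dominates on both metals, so it receives the highest closeness score; in the paper's scheme this evaluation value would then be divided by the background value to give the pollution degree coefficient.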
Molecular Simulation towards Efficient and Representative Subsurface Reservoirs Modeling
Kadoura, Ahmad
2016-09-01
This dissertation focuses on the application of Monte Carlo (MC) molecular simulation and Molecular Dynamics (MD) in modeling the thermodynamics and flow of subsurface reservoir fluids. At first, MC molecular simulation is proposed as a promising method to replace correlations and equations of state in subsurface flow simulators. In order to accelerate MC simulations, a set of early rejection schemes (conservative, hybrid, and non-conservative), in addition to extrapolation methods through reweighting and reconstruction of pre-generated MC Markov chains, were developed. Furthermore, an extensive study was conducted to investigate sorption and transport processes of methane, carbon dioxide, water, and their mixtures in the inorganic part of shale using both MC and MD simulations. These simulations covered a wide range of thermodynamic conditions, pore sizes, and fluid compositions, shedding light on several interesting findings; for example, the possibility of having more carbon dioxide adsorbed at higher preadsorbed water concentrations in relatively large basal spacings. The dissertation is divided into four chapters. The first chapter is the introductory part, where a brief background on molecular simulation and the motivations are given. The second chapter is devoted to discussing the theoretical aspects and methodology of the proposed MC speed-up techniques, in addition to the corresponding results, leading to the successful multi-scale simulation of the compressible single-phase flow scenario. In chapter 3, the results of our extensive study on shale gas at laboratory conditions are reported. In the fourth and last chapter, we end the dissertation with a few concluding remarks highlighting the key findings and summarizing future directions.
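The conservative early-rejection idea for Metropolis MC can be illustrated in a simplified form: draw the acceptance random number first, convert the test into a threshold on the energy change, and abort the pairwise summation once the running total provably exceeds it. The sketch below assumes the remaining pair terms are non-negative, which is a deliberate simplification of the bounds such schemes actually use.

```python
import math, random

def accept_full(delta_terms, T):
    """Standard Metropolis test on the fully summed energy change."""
    dE = sum(delta_terms)
    return dE <= 0 or random.random() < math.exp(-dE / T)

def accept_early(delta_terms, T, u):
    """Conservative early-rejection sketch. With u drawn up front, the
    Metropolis test 'u < exp(-dE/T)' becomes 'dE < -T*ln(u)', so the
    pairwise sum can be aborted as soon as the running total exceeds the
    threshold -- exact only when the remaining terms are non-negative."""
    threshold = -T * math.log(u)
    running = 0.0
    for term in delta_terms:
        running += term
        if running > threshold:
            return False   # no later non-negative term can rescue the move
    return True
```

When the remaining-term bound holds, the early decision is identical to evaluating the full sum, but expensive pair interactions past the abort point are never computed.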
Directory of Open Access Journals (Sweden)
Anne Lluch
2017-02-01
Dietary changes needed to achieve nutritional adequacy for 33 nutrients were determined for 1719 adults from a representative French national dietary survey. For each individual, an iso-energy nutritionally adequate diet was generated using diet modeling, staying as close as possible to the observed diet. The French food composition table was completed with free sugar (FS) content. Results were analyzed separately for individuals with FS intakes in their observed diets ≤10% or >10% of their energy intake (named below FS-ACCEPTABLE and FS-EXCESS, respectively). The FS-EXCESS group represented 41% of the total population (average energy intake of 14.2% from FS). Compared with FS-ACCEPTABLE individuals, FS-EXCESS individuals had diets of lower nutritional quality and consumed more energy (2192 vs. 2123 kcal/day), particularly during snacking occasions (258 vs. 131 kcal/day) (all p-values < 0.01). In order to meet nutritional targets, for both FS-ACCEPTABLE and FS-EXCESS individuals, the main dietary changes in optimized diets were significant increases in fresh fruits, starchy foods, water, hot beverages and plain yogurts; and significant decreases in mixed dishes/sandwiches, meat/eggs/fish and cheese. For FS-EXCESS individuals only, the optimization process significantly increased vegetables and significantly decreased sugar-sweetened beverages, sweet products and fruit juices. The diets of French adults with excessive intakes of FS are of lower nutritional quality, but can be optimized via specific dietary changes.
Lluch, Anne; Maillot, Matthieu; Gazan, Rozenn; Vieux, Florent; Delaere, Fabien; Vaudaine, Sarah; Darmon, Nicole
2017-02-20
Dietary changes needed to achieve nutritional adequacy for 33 nutrients were determined for 1719 adults from a representative French national dietary survey. For each individual, an iso-energy nutritionally adequate diet was generated using diet modeling, staying as close as possible to the observed diet. The French food composition table was completed with free sugar (FS) content. Results were analyzed separately for individuals with FS intakes in their observed diets ≤10% or >10% of their energy intake (named below FS-ACCEPTABLE and FS-EXCESS, respectively). The FS-EXCESS group represented 41% of the total population (average energy intake of 14.2% from FS). Compared with FS-ACCEPTABLE individuals, FS-EXCESS individuals had diets of lower nutritional quality and consumed more energy (2192 vs. 2123 kcal/day), particularly during snacking occasions (258 vs. 131 kcal/day) (all p-values < 0.01). In order to meet nutritional targets, the main dietary changes in optimized diets were significant increases in fresh fruits, starchy foods, water, hot beverages and plain yogurts; and significant decreases in mixed dishes/sandwiches, meat/eggs/fish and cheese. For FS-EXCESS individuals only, the optimization process significantly increased vegetables and significantly decreased sugar-sweetened beverages, sweet products and fruit juices. The diets of French adults with excessive intakes of FS are of lower nutritional quality, but can be optimized via specific dietary changes.
Lluch, Anne; Maillot, Matthieu; Gazan, Rozenn; Vieux, Florent; Delaere, Fabien; Vaudaine, Sarah; Darmon, Nicole
2017-01-01
Dietary changes needed to achieve nutritional adequacy for 33 nutrients were determined for 1719 adults from a representative French national dietary survey. For each individual, an iso-energy nutritionally adequate diet was generated using diet modeling, staying as close as possible to the observed diet. The French food composition table was completed with free sugar (FS) content. Results were analyzed separately for individuals with FS intakes in their observed diets ≤10% or >10% of their energy intake (named below FS-ACCEPTABLE and FS-EXCESS, respectively). The FS-EXCESS group represented 41% of the total population (average energy intake of 14.2% from FS). Compared with FS-ACCEPTABLE individuals, FS-EXCESS individuals had diets of lower nutritional quality and consumed more energy (2192 vs. 2123 kcal/day), particularly during snacking occasions (258 vs. 131 kcal/day) (all p-values < 0.01). In order to meet nutritional targets, the main dietary changes in optimized diets were significant increases in fresh fruits, starchy foods, water, hot beverages and plain yogurts; and significant decreases in mixed dishes/sandwiches, meat/eggs/fish and cheese. For FS-EXCESS individuals only, the optimization process significantly increased vegetables and significantly decreased sugar-sweetened beverages, sweet products and fruit juices. The diets of French adults with excessive intakes of FS are of lower nutritional quality, but can be optimized via specific dietary changes. PMID:28230722
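The "closest nutritionally adequate diet" optimization described above can be sketched as a linear program minimizing the total absolute deviation from the observed diet under an iso-energy constraint and nutrient floors. The three foods and their nutrient contents below are invented for illustration, and only a single nutrient (fibre) stands in for the 33 targets.

```python
import numpy as np
from scipy.optimize import linprog

# Toy "closest diet" LP: minimise sum |x_i - observed_i| with energy fixed
# and a fibre floor met. All food data are invented.
foods = ["fruit", "bread", "soda"]
observed = np.array([1.0, 2.0, 1.5])          # servings/day
energy = np.array([80.0, 120.0, 150.0])       # kcal per serving
fibre = np.array([3.0, 2.5, 0.0])             # g per serving
fibre_min = 12.0

n = len(foods)
# variables: [x_1..x_n, d_1..d_n] with d_i >= |x_i - observed_i|
c = np.r_[np.zeros(n), np.ones(n)]
A_ub = np.block([[np.eye(n), -np.eye(n)],               #  x - d <= observed
                 [-np.eye(n), -np.eye(n)],              # -x - d <= -observed
                 [-fibre[None, :], np.zeros((1, n))]])  # fibre floor
b_ub = np.r_[observed, -observed, [-fibre_min]]
A_eq = np.r_[energy, np.zeros(n)][None, :]    # iso-energy constraint
b_eq = [energy @ observed]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (2 * n))
diet = res.x[:n]
```

The split deviation variables are the standard trick for expressing an absolute-value objective as an LP; the real study works with far more foods, nutrients and individual-level constraints.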
Kempf, Stefan J.; Metaxas, Athanasios; Ibáñez-Vea, María; Darvesh, Sultan; Finsen, Bente; Larsen, Martin R.
2016-01-01
The aim of this study was to elucidate the molecular signature of Alzheimer's disease-associated amyloid pathology. We used the double APPswe/PS1ΔE9 mouse, a widely used model of cerebral amyloidosis, to compare changes in proteome, including global phosphorylation and sialylated N-linked glycosylation patterns, pathway-focused transcriptome and neurological disease-associated miRNAome with age-matched controls in neocortex, hippocampus, olfactory bulb and brainstem. We report that signalling pathways related to synaptic functions associated with dendritic spine morphology, neurite outgrowth, long-term potentiation, CREB signalling and cytoskeletal dynamics were altered in 12 month old APPswe/PS1ΔE9 mice, particularly in the neocortex and olfactory bulb. This was associated with cerebral amyloidosis as well as formation of argyrophilic tangle-like structures and microglial clustering in all brain regions, except for brainstem. These responses may be epigenetically modulated by the interaction with a number of miRNAs regulating spine restructuring, Aβ expression and neuroinflammation. We suggest that these changes could be associated with development of cognitive dysfunction in early disease states in patients with Alzheimer's disease. PMID:27144524
Directory of Open Access Journals (Sweden)
Yu-Jen Chang
Amniotic fluid stem cells (AFSCs) are multipotent stem cells that may be used in transplantation medicine. In this study, AFSCs established from amniocentesis were characterized on the basis of surface marker expression and differentiation potential. To further investigate the properties of AFSCs for translational applications, we examined the cell surface expression of human leukocyte antigens (HLA) of these cells and estimated the therapeutic effect of AFSCs in parkinsonian rats. The expression profiles of HLA-II and transcription factors were compared between AFSCs and bone marrow-derived mesenchymal stem cells (BMMSCs) following treatment with γ-IFN. We found that stimulation of AFSCs with γ-IFN prompted only a slight increase in the expression of HLA-Ia and HLA-E, and rare HLA-II expression could also be observed in most AFSC samples. Consequently, the expression of CIITA and RFX5 was only weakly induced by γ-IFN stimulation of AFSCs compared to that of BMMSCs. In the transplantation test, Sprague Dawley rats with 6-hydroxydopamine lesioning of the substantia nigra were used as a parkinsonian animal model. Following injection of AFSCs with a negative γ-IFN response, apomorphine-induced rotation was reduced by 75% in AFSC-engrafted parkinsonian rats but was increased by 53% in the control group at 12 weeks post-transplantation. The implanted AFSCs were viable and able to migrate into the brain's circuitry and express specific proteins of dopamine neurons, such as tyrosine hydroxylase and dopamine transporter. In conclusion, the relative insensitivity of AFSCs to γ-IFN implies that AFSCs might be immune-tolerant in γ-IFN inflammatory conditions. Furthermore, the effective improvement in apomorphine-induced rotation after AFSC transplantation paves the way for clinical application in parkinsonian therapy.
Krumm, Stefanie A; Yan, Dan; Hovingh, Elise S; Evers, Taylor J; Enkirch, Theresa; Reddy, G Prabhakar; Sun, Aiming; Saindane, Manohar T; Arrendale, Richard F; Painter, George; Liotta, Dennis C; Natchus, Michael G; von Messling, Veronika; Plemper, Richard K
2014-04-16
Measles virus is a highly infectious morbillivirus responsible for major morbidity and mortality in unvaccinated humans. The related, zoonotic canine distemper virus (CDV) induces morbillivirus disease in ferrets with 100% lethality. We report an orally available, shelf-stable pan-morbillivirus inhibitor that targets the viral RNA polymerase. Prophylactic oral treatment of ferrets infected intranasally with a lethal CDV dose reduced viremia and prolonged survival. Ferrets infected with the same dose of virus that received post-infection treatment at the onset of viremia showed low-grade viral loads, remained asymptomatic, and recovered from infection, whereas control animals succumbed to the disease. Animals that recovered also mounted a robust immune response and were protected against rechallenge with a lethal CDV dose. Drug-resistant viral recombinants were generated and found to be attenuated and transmission-impaired compared to the genetic parent virus. These findings may pioneer a path toward an effective morbillivirus therapy that could aid measles eradication by synergizing with vaccination to close gaps in herd immunity due to vaccine refusal.
A method to identify energy efficiency measures for factory systems based on qualitative modeling
Krones, Manuela
2017-01-01
Manuela Krones develops a method that supports factory planners in generating energy-efficient planning solutions. The method provides qualitative description concepts for factory planning tasks and energy efficiency knowledge, as well as an algorithm-based linkage between these measures and the respective planning tasks. Its application is guided by a procedure model which allows general applicability in the manufacturing sector. The results contain energy efficiency measures that are suitable for a specific planning task and reveal the roles of various actors in the measures' implementation. Contents: Driving Concerns for and Barriers against Energy Efficiency; Approaches to Increase Energy Efficiency in Factories; Socio-Technical Description of Factory Planning Tasks; Description of Energy Efficiency Measures; Case Studies on Welding Processes and Logistics Systems. Target Groups: Lecturers and Students of Industrial Engineering, Production Engineering, Environmental Engineering, Mechanical Engineering; Practi...
The evaluation model of the enterprise energy efficiency based on DPSR.
Wei, Jin-Yu; Zhao, Xiao-Yu; Sun, Xue-Shan
2017-05-08
The reasonable evaluation of enterprise energy efficiency is important for reducing energy consumption. In this paper, an effective energy efficiency evaluation index system is proposed based on DPSR (Driving forces-Pressure-State-Response), taking the actual situation of enterprises into account. This index system, which covers multi-dimensional indexes of enterprise energy efficiency, can reveal the complete causal chain: the "driving forces" and "pressure" behind the enterprise energy efficiency "state" caused by the internal and external environment, and the ultimate enterprise energy-saving "response" measures. Furthermore, the ANP (Analytic Network Process) and a cloud model are used to calculate the weight of each index and evaluate the energy efficiency level. The analysis of BL Company verifies the feasibility of this index system and provides an effective way to improve energy efficiency.
Efficiency optimization and symmetry-breaking in a model of ciliary locomotion
Michelin, Sebastien
2010-01-01
A variety of swimming microorganisms, called ciliates, exploit the bending of a large number of small and densely packed organelles, termed cilia, in order to propel themselves in a viscous fluid. We consider a spherical envelope model for such ciliary locomotion, where the dynamics of the individual cilia are replaced by those of a continuous overlaying surface allowed to deform tangentially to itself. Employing a variational approach, we determine numerically the time-periodic deformation of such a surface which leads to low-Reynolds-number locomotion with minimum rate of energy dissipation (maximum efficiency). Employing both Lagrangian and Eulerian points of view, we show that in the optimal swimming stroke, individual cilia display weak asymmetric beating, but that a significant symmetry-breaking occurs at the organism level, with the whole surface deforming in a wave-like fashion reminiscent of the metachronal waves of biological cilia. This wave motion is analyzed using a formal modal decomposition, is found to occu...
Efficient Monte Carlo and greedy heuristic for the inference of stochastic block models
Peixoto, Tiago P
2014-01-01
We present an efficient algorithm for the inference of stochastic block models in large networks. The algorithm can be used as an optimized Markov chain Monte Carlo (MCMC) method, with a fast mixing time and a much reduced susceptibility to getting trapped in metastable states, or as a greedy agglomerative heuristic, with an almost linear $O(N\ln^2 N)$ complexity, where $N$ is the number of nodes in the network, independent of the number of blocks being inferred. We show that the heuristic is capable of delivering results which are indistinguishable from the more exact and numerically expensive MCMC method in many artificial and empirical networks, despite being much faster. The method is entirely unbiased towards any specific mixing pattern, and in particular it does not favor assortative community structures.
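The single-node moves at the heart of such partition-based inference can be illustrated with a toy sketch. This is an illustrative simplification, not Peixoto's agglomerative algorithm or the graph-tool implementation: the plain-Bernoulli block likelihood and all function names below are assumptions for the example.

```python
import math
import random

def sbm_log_likelihood(edges, labels, n_blocks):
    """Log-likelihood of a partition under a plain (Bernoulli) SBM:
    for each block pair (r, s), e_rs edges observed out of n_r*n_s possible."""
    sizes = [0] * n_blocks
    for b in labels:
        sizes[b] += 1
    e = [[0] * n_blocks for _ in range(n_blocks)]
    for u, v in edges:
        r, s = sorted((labels[u], labels[v]))
        e[r][s] += 1
    ll = 0.0
    for r in range(n_blocks):
        for s in range(r, n_blocks):
            pairs = sizes[r] * sizes[s] if r != s else sizes[r] * (sizes[r] - 1) // 2
            m = e[r][s]
            if 0 < m < pairs:  # fully dense/empty block pairs contribute 0
                p = m / pairs
                ll += m * math.log(p) + (pairs - m) * math.log(1 - p)
    return ll

def greedy_moves(edges, labels, n_blocks, sweeps=20):
    """Repeatedly move single nodes to the block that most improves the
    likelihood (naive full recomputation; fine for a toy example)."""
    n = len(labels)
    for _ in range(sweeps):
        for u in random.sample(range(n), n):
            best_b = labels[u]
            best_ll = sbm_log_likelihood(edges, labels, n_blocks)
            for b in range(n_blocks):
                labels[u] = b
                ll = sbm_log_likelihood(edges, labels, n_blocks)
                if ll > best_ll:
                    best_b, best_ll = b, ll
            labels[u] = best_b
    return labels
```

The paper's contribution lies precisely in avoiding this naive O(everything) recomputation and the trapping of such greedy sweeps in metastable states.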
Efficiency and comfort of knee braces: A parametric study based on computational modelling
Pierrat, Baptiste; Calmels, Paul; Navarro, Laurent; Avril, Stéphane
2014-01-01
Knee orthotic devices are widely prescribed by physicians and medical practitioners for preventive or therapeutic objectives, their effects usually being understood as stabilizing the joint or restricting its range of motion. This study focuses on understanding the force transfer mechanisms from the brace to the joint by means of a finite element model. A design-of-experiments approach was used to characterize the stiffness and comfort of various braces in order to identify their mechanically influential characteristics. Results show conflicting behavior: influential parameters such as brace size or textile stiffness improve performance to the detriment of comfort. Thanks to this computational tool, novel brace designs can be tested and evaluated for optimal mechanical efficiency of the devices and better patient compliance with the treatment.
Wybo, Willem A M; Boccalini, Daniele; Torben-Nielsen, Benjamin; Gewaltig, Marc-Oliver
2015-12-01
We prove that when a class of partial differential equations, generalized from the cable equation, is defined on tree graphs and the inputs are restricted to a spatially discrete, well-chosen set of points, the Green's function (GF) formalism can be rewritten to scale as O(n) with the number n of input locations, contrary to the previously reported O(n²) scaling. We show that the linear scaling can be combined with an expansion of the remaining kernels as sums of exponentials to allow efficient simulations of equations from the aforementioned class. We furthermore validate this simulation paradigm on models of nerve cells and explore its relation to more traditional finite difference approaches. Situations in which a gain in computational performance is expected are discussed.
Neuville, Amélie; Schmittbuhl, Jean; 10.1111/j.1365-246X.2011.05126.x
2011-01-01
Natural open joints in rocks commonly present multi-scale self-affine apertures. This geometrical complexity affects fluid transport and heat exchange between the flowing fluid and the surrounding rock. In particular, long-range correlations of self-affine apertures induce strong channeling of the flow, which influences both mass and heat advection. A key question is to find a geometrical model of the complex aperture that best describes the macroscopic properties (hydraulic conductivity, heat exchange) with the smallest number of parameters. Solving numerically the Stokes and heat equations with a lubrication approximation, we show that low-pass filtering of the aperture geometry provides efficient estimates of the effective hydraulic and thermal properties (apertures). A detailed study of the influence of the bandwidth of the low-pass filtering on these transport properties is also performed. For instance, keeping the information of amplitude only of the largest Fourier length scales allows us to rea...
Condition for Energy Efficient Watermarking with Random Vector Model without WSS Assumption
Yan, Bin; Guo, Yinjing
2009-01-01
Energy-efficient watermarking preserves the watermark energy after a linear attack as much as possible. In this letter we consider non-stationary signal models and derive conditions for energy-efficient watermarking under a random vector model without the WSS assumption. We find that the covariance matrix of the energy-efficient watermark should be proportional to the host covariance matrix to best resist optimal linear removal attacks. For a WSS process our result reduces to the well-known power spectrum condition. An intuitive geometric interpretation of the results is also discussed, which in turn provides a simpler proof of the main results.
Fuel Efficient Diesel Particulate Filter (DPF) Modeling and Development
Energy Technology Data Exchange (ETDEWEB)
Stewart, Mark L.; Gallant, Thomas R.; Kim, Do Heui; Maupin, Gary D.; Zelenyuk, Alla
2010-08-01
The project described in this report seeks to promote effective diesel particulate filter technology with minimum fuel penalty by enhancing fundamental understanding of filtration mechanisms through targeted experiments and computer simulations. The overall backpressure of a filtration system depends upon complex interactions of particulate matter and ash with the microscopic pores in filter media. Better characterization of these phenomena is essential for exhaust system optimization. The acicular mullite (ACM) diesel particulate filter substrate is under continuing development by Dow Automotive. ACM is made up of long mullite crystals which intersect to form filter wall framework and protrude from the wall surface into the DPF channels. ACM filters have been demonstrated to effectively remove diesel exhaust particles while maintaining relatively low backpressure. Modeling approaches developed for more conventional ceramic filter materials, such as silicon carbide and cordierite, have been difficult to apply to ACM because of properties arising from its unique microstructure. Penetration of soot into the high-porosity region of projecting crystal structures leads to a somewhat extended depth filtration mode, but with less dramatic increases in pressure drop than are normally observed during depth filtration in cordierite or silicon carbide filters. Another consequence is greater contact between the soot and solid surfaces, which may enhance the action of some catalyst coatings in filter regeneration. The projecting crystals appear to provide a two-fold benefit for maintaining low backpressures during filter loading: they help prevent soot from being forced into the throats of pores in the lower porosity region of the filter wall, and they also tend to support the forming filter cake, resulting in lower average cake density and higher permeability. Other simulations suggest that soot deposits may also tend to form at the tips of projecting crystals due to the axial
An efficient flexible-order model for coastal and ocean water waves
DEFF Research Database (Denmark)
Engsig-Karup, Allan Peter; Bingham, Harry B.; Lindberg, Ole
Current work is directed toward the development of an improved numerical 3D model for fully nonlinear potential water waves over arbitrary depths. The model is high-order accurate, robust and efficient for large-scale problems, and support will be included for flexibility in the description of structures. The mathematical equations for potential waves in the physical domain are transformed through $\sigma$-mapping(s) to a time-invariant boundary-fitted domain, which then becomes the basis for an efficient solution strategy. The improved 3D numerical model is based on a finite difference method... properties of the numerical model together with the latest achievements...
Directory of Open Access Journals (Sweden)
Ronald Wesonga
2013-04-01
The study employs determinants of aircraft departure delay to estimate airport efficiency. Two main parameters were applied to fit the stochastic frontier model using a transcendental logarithmic function, where both frontier and inefficiency models were generated. The estimated airport efficiencies over a period of 1827 days, applying the half-normal and exponential distributions for the inefficiency error terms, were (0.7498; δ=0.1417, n=1827) and (0.8181; δ=0.1224, n=1827), respectively. The correlation coefficient for the efficiency estimates (ρ=0.9791, n=1827, p<0.05) between the half-normal and exponential distributions showed no significant statistical difference. Further analysis showed that airport inefficiency was significantly associated with a higher number of persons on board, lower visibility, lower air pressure tendency, higher wind speed and a higher proportion of arrival aircraft delays. The study offers a contribution towards assessing the choice of distribution for the inefficiency error term when estimating airport efficiency, employing both meteorological and aviation parameters. The study recommends that although either the half-normal or the exponential distribution could be used, the exponential distribution for the error term was found more suitable when estimating the efficiency score for the airport.
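The effect of the distributional assumption on the one-sided inefficiency term u can be illustrated with a small simulation, using the standard mapping from inefficiency to technical efficiency, TE = exp(-u). This is a generic stochastic-frontier sketch; the parameter values and function names are arbitrary assumptions, not taken from the study.

```python
import math
import random

def technical_efficiency(u):
    """Map a non-negative inefficiency draw u to a score in (0, 1]."""
    return math.exp(-u)

def mean_efficiency(dist, scale, n, seed=0):
    """Average technical efficiency for n draws of u under the chosen
    one-sided distribution: |N(0, scale^2)| (half-normal) or Exp(mean=scale)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        if dist == "half-normal":
            u = abs(rng.gauss(0.0, scale))
        elif dist == "exponential":
            u = rng.expovariate(1.0 / scale)
        else:
            raise ValueError("unknown distribution: " + dist)
        total += technical_efficiency(u)
    return total / n
```

With the same scale parameter, the two distributions put different mass near u = 0, which is why the study's half-normal and exponential specifications yield different efficiency scores (0.7498 vs. 0.8181).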
Paprotny, Dominik; Morales Nápoles, Oswaldo
2016-04-01
Low-resolution hydrological models are often applied to calculate extreme river discharges and delimit flood zones on continental and global scales. Still, the computational expense is very large and often limits the extent and depth of such studies. Here, we present a quick yet similarly accurate procedure for flood hazard assessment in Europe. Firstly, a statistical model based on Bayesian networks is used. It describes the joint distribution of annual maxima of daily discharges of European rivers with variables describing the geographical characteristics of their catchments. It was quantified with 75,000 station-years of river discharge, as well as climate, terrain and land use data. The model's predictions of average annual maxima or discharges with certain return periods are of similar performance to physical rainfall-runoff models applied at continental scale. A database of discharge scenarios - return periods under present and future climate - was prepared for the majority of European rivers. Secondly, those scenarios were used as boundary conditions for the one-dimensional (1D) hydrodynamic model SOBEK. Utilizing 1D instead of 2D modelling saved computational time, yet gave satisfactory results. The resulting pan-European flood map was contrasted with several local high-resolution studies. Indeed, the comparison shows that, overall, the methods presented here gave similar or better alignment with local studies than the previously released pan-European flood map.
Meller, Michael; Chipka, Jordan; Volkov, Alexander; Bryant, Matthew; Garcia, Ephrahim
2016-11-03
Hydraulic control systems have become increasingly popular as the means of actuation for human-scale legged robots and assistive devices. One of the biggest limitations to these systems is their run time untethered from a power source. One way to increase endurance is by improving actuation efficiency. We investigate reducing servovalve throttling losses by using a selective recruitment artificial muscle bundle comprised of three motor units. Each motor unit is made up of a pair of hydraulic McKibben muscles connected to one servovalve. The pressure and recruitment state of the artificial muscle bundle can be adjusted to match the load in an efficient manner, much like the firing rate and total number of recruited motor units is adjusted in skeletal muscle. A volume-based effective initial braid angle is used in the model of each recruitment level. This semi-empirical model is utilized to predict the efficiency gains of the proposed variable recruitment actuation scheme versus a throttling-only approach. A real-time orderly recruitment controller with pressure-based thresholds is developed. This controller is used to experimentally validate the model-predicted efficiency gains of recruitment on a robot arm. The results show that utilizing variable recruitment allows for much higher efficiencies over a broader operating envelope.
Transport efficiency and workload distribution in a mathematical model of the thick ascending limb.
Nieves-González, Aniel; Clausen, Chris; Layton, Anita T; Layton, Harold E; Moore, Leon C
2013-03-15
The thick ascending limb (TAL) is a major NaCl reabsorbing site in the nephron. Efficient reabsorption along that segment is thought to be a consequence of the establishment of a strong transepithelial potential that drives paracellular Na⁺ uptake. We used a multicell mathematical model of the TAL to estimate the efficiency of Na⁺ transport along the TAL and to examine factors that determine transport efficiency, given the condition that TAL outflow must be adequately dilute. The TAL model consists of a series of epithelial cell models that represent all major solutes and transport pathways. Model equations describe luminal flows, based on mass conservation and electroneutrality constraints. Empirical descriptions of cell volume regulation (CVR) and pH control were implemented, together with the tubuloglomerular feedback (TGF) system. Transport efficiency was calculated as the ratio of total net Na⁺ transport (i.e., paracellular and transcellular transport) to transcellular Na⁺ transport. Model predictions suggest that 1) the transepithelial Na⁺ concentration gradient is a major determinant of transport efficiency; 2) CVR in individual cells influences the distribution of net Na⁺ transport along the TAL; 3) CVR responses in conjunction with TGF maintain luminal Na⁺ concentration well above static head levels in the cortical TAL, thereby preventing large decreases in transport efficiency; and 4) under the condition that the distribution of Na⁺ transport along the TAL is quasi-uniform, the tubular fluid axial Cl⁻ concentration gradient near the macula densa is sufficiently steep to yield a TGF gain consistent with experimental data.
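The efficiency measure defined in the abstract, the ratio of total net Na+ transport (paracellular plus transcellular) to the transcellular component, reduces to a one-line computation. This sketch restates only that definition; the flux values in the usage example are hypothetical, not model outputs.

```python
def transport_efficiency(paracellular_na, transcellular_na):
    """Ratio of total net Na+ transport (paracellular + transcellular)
    to transcellular Na+ transport, per the definition in the abstract.
    Both arguments are net fluxes in the same units."""
    total = paracellular_na + transcellular_na
    return total / transcellular_na

# Hypothetical example: 1 unit paracellular flux per 2 units transcellular
# flux gives an efficiency of 1.5, i.e. 50% of reabsorption "for free"
# relative to the ATP-consuming transcellular pathway.
print(transport_efficiency(1.0, 2.0))  # 1.5
```

An efficiency of 1 means no paracellular contribution; larger values indicate that the transepithelial potential is driving proportionally more passive uptake.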
Garrido-Baserba, Manel; Sobhani, Reza; Asvapathanagul, Pitiporn; McCarthy, Graham W; Olson, Betty H; Odize, Victory; Al-Omari, Ahmed; Murthy, Sudhir; Nifong, Andrea; Godwin, Johnnie; Bott, Charles B; Stenstrom, Michael K; Shaw, Andrew R; Rosso, Diego
2017-03-15
This research systematically studied the behavior of aeration diffuser efficiency over time and its relation to the energy usage per diffuser. Twelve diffusers were selected for a one-year fouling study. Comprehensive aeration efficiency projections were carried out in two WRRFs with different influent rates, and the influence of operating conditions on aeration diffusers' performance was demonstrated. This study showed that the initial energy use, during the first year of operation, of those aeration diffusers located in high-rate systems (solids retention time, SRT, less than 2 days) increased more than 20% in comparison to conventional systems (SRT > 2 days). Diffusers operating for three years in conventional systems presented the same fouling characteristics as those deployed in high-rate processes for less than 15 months. A new procedure was developed to accurately project energy consumption of aeration diffusers, including the impacts of operating conditions, such as SRT and organic loading rate, on specific aeration diffuser materials (i.e., silicone, polyurethane, EPDM, ceramic). Furthermore, it considers the microbial colonization dynamics, which successfully correlated with the increase of energy consumption (r²: 0.82 ± 7). The presented energy model projected the energy costs and the potential savings for the diffusers after three years in operation under different operating conditions. Whereas the most efficient diffusers implied potential costs spanning from 4,900 USD/month for a small plant (20 MGD, or 74,500 m³/d) up to 24,500 USD/month for a large plant (100 MGD, or 375,000 m³/d), less efficient diffusers implied spans from 18,000 USD/month for a small plant to 90,000 USD/month for large plants. The aim of this methodology is to help utilities gain more insight into process mechanisms and design better energy efficiency strategies at existing facilities to reduce energy consumption.
Model-based process analysis of partial nitrification efficiency under dynamic nitrogen loading.
Güven, Didem; Kutlu, Ozgül; Insel, Güçlü; Sözen, Seval
2009-08-01
In this study, the ammonia removal efficiency for wastewaters containing high ammonia levels was evaluated via partial nitrification. A nitrifier biocommunity was first enriched in a fill-and-draw batch reactor with a specific ammonium oxidation rate of 0.1 mg NH₄⁺-N/mg VSS·h. Partial nitrification was established in a chemostat at a hydraulic retention time (HRT) of 1.15 days, which was equal to the sludge retention time (SRT). The results showed that the critical HRT (SRT) was 1.0 day for the system. A maximum specific ammonium oxidation rate of 0.280 mg NH₄⁺-N/mg VSS·h was achieved, which is 2.8-fold higher than that obtained in the fill-and-draw reactor, indicating that more adaptive and highly active ammonium oxidizers were enriched in the chemostat. Dynamic modeling of partial nitrification showed that the maximum growth rate for ammonium oxidizers was 1.22 day⁻¹. Modeling studies also validated the recovery period as 10 days.
Oh, K. S.; Schutt-Aine, J.
1995-01-01
Modeling of interconnects and associated discontinuities has gained considerable interest with the recent advances in high-speed digital circuits over the last decade, although the theoretical bases for analyzing these structures were well established as early as the 1960s. Ongoing research at the present time is focused on devising methods which can be applied to more general geometries than the ones considered in earlier days and, at the same time, improving the computational efficiency and accuracy of these methods. In this thesis, numerically efficient methods to compute the transmission line parameters of a multiconductor system and the equivalent capacitances of various strip discontinuities are presented based on the quasi-static approximation. The presented techniques are applicable to conductors embedded in an arbitrary number of dielectric layers with two possible locations of ground planes at the top and bottom of the dielectric layers. The cross-sections of conductors can be arbitrary as long as they can be described with polygons. An integral equation approach in conjunction with the collocation method is used in the presented methods. A closed-form Green's function is derived based on weighted real images, thus avoiding the nested infinite summations of the exact Green's function; therefore, this closed-form Green's function is numerically more efficient than the exact Green's function. All elements associated with the moment matrix are computed using the closed-form formulas. Various numerical examples are considered to verify the presented methods, and a comparison of the computed results with other published results showed good agreement.
Efficient Parallel Implementation of Active Appearance Model Fitting Algorithm on GPU
Directory of Open Access Journals (Sweden)
Jinwei Wang
2014-01-01
The active appearance model (AAM) is one of the most powerful model-based object detection and tracking methods and has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine-grain parallelism, in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on Nvidia's GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experiment results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures.
Wang, Ting; Ren, Zhao; Ding, Ying; Fang, Zhou; Sun, Zhe; MacDonald, Matthew L; Sweet, Robert A; Wang, Jieru; Chen, Wei
2016-02-01
Biological networks provide additional information for the analysis of human diseases, beyond the traditional analysis that focuses on single variables. The Gaussian graphical model (GGM), a probability model that characterizes the conditional dependence structure of a set of random variables by a graph, has wide applications in the analysis of biological networks, such as inferring interactions or comparing differential networks. However, existing approaches are either not statistically rigorous or are inefficient for high-dimensional data that include tens of thousands of variables. In this study, we propose an efficient algorithm to estimate the GGM and obtain a p-value and confidence interval for each edge in the graph, based on a recent proposal (Ren et al., 2015). Through simulation studies, we demonstrate that the algorithm is faster by several orders of magnitude than the current implementation of Ren et al.'s method, without losing any accuracy. We then apply our algorithm to two real data sets: transcriptomic data from a study of childhood asthma and proteomic data from a study of Alzheimer's disease. We estimate the global gene or protein interaction networks for the disease and healthy samples. The resulting networks reveal interesting interactions, and the differential networks between cases and controls show functional relevance to the diseases. In conclusion, we provide a computationally fast algorithm that implements a statistically sound procedure for constructing Gaussian graphical models and making inference with high-dimensional biological data. The algorithm has been implemented in an R package named "FastGGM".
Efficient parallel implementation of active appearance model fitting algorithm on GPU.
Wang, Jinwei; Ma, Xirong; Zhu, Yuanping; Sun, Jizhou
2014-01-01
The active appearance model (AAM) is one of the most powerful model-based object detection and tracking methods and has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine-grain parallelism, in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on Nvidia's GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experiment results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures.
An efficient positive potential-density pair expansion for modelling galaxies
Rojas-Niño, Armando; Aguilar, Luis; Delorme, Maxime
2016-01-01
We present a novel positive potential-density pair expansion for modelling galaxies, based on the Miyamoto-Nagai (MN) disc. By using three sets of such discs, each one of them aligned along each symmetry axis, we are able to reconstruct a broad range of potentials that correspond to density profiles from exponential discs to 3D power law models with varying triaxiality (henceforth simply "twisted" models). We increase the efficiency of our expansion by allowing the scale length parameter of each disc to be negative. We show that, for suitable priors on the scale length and height parameters, these "MNn discs" have just one negative density minimum. This allows us to ensure global positivity by demanding that the total density at the global minimum is positive. We find that at better than 10\\% accuracy in our density reconstruction, we can represent a radial and vertical exponential disc over $0.1-10$ scale lengths/heights with 4 MNn discs, an NFW profile over $0.1-10$ scale lengths with 4 MNn discs, and a twi...
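The building block of the expansion above is the standard Miyamoto-Nagai potential, Phi(R, z) = -GM / sqrt(R² + (a + sqrt(z² + b²))²). A minimal sketch of superposing several such discs follows; the function names are ours, the three-axis "twisted" arrangement of the paper is not reproduced, and units are chosen so G = 1.

```python
import math

def mn_potential(R, z, M, a, b, G=1.0):
    """Miyamoto-Nagai disc potential Phi(R, z) in cylindrical coordinates.
    M: disc mass; a: radial scale length; b: vertical scale height.
    a = 0 gives a Plummer sphere; a = b = 0 gives a point mass."""
    return -G * M / math.sqrt(R**2 + (a + math.sqrt(z**2 + b**2))**2)

def summed_potential(R, z, discs):
    """Superpose several MN discs; each disc is a (M, a, b) tuple.
    Following the expansion described above, a negative scale length a
    is permitted (positivity of the total density must then be checked
    separately at the global density minimum)."""
    return sum(mn_potential(R, z, M, a, b) for M, a, b in discs)
```

Fitting an exponential or NFW profile then amounts to choosing the (M, a, b) coefficients of a handful of such discs, as the abstract reports (4 MNn discs per profile at ~10% density accuracy).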
A physiological foundation for the nutrition-based efficiency wage model
DEFF Research Database (Denmark)
Dalgaard, Carl-Johan Lars; Strulik, Holger
2011-01-01
By extending the model with respect to heterogeneity in worker body size and a physiologically founded impact of body size on productivity, we demonstrate that the nutrition-based efficiency wage model is compatible with the empirical regularity that taller workers simultaneously earn higher wages and are less...
An Efficient Constraint Boundary Sampling Method for Sequential RBDO Using Kriging Surrogate Model
Energy Technology Data Exchange (ETDEWEB)
Kim, Jihoon; Jang, Junyong; Kim, Shinyu; Lee, Tae Hee [Hanyang Univ., Seoul (Korea, Republic of); Cho, Sugil; Kim, Hyung Woo; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Busan (Korea, Republic of)
2016-06-15
Reliability-based design optimization (RBDO) requires a high computational cost owing to its reliability analysis. A surrogate model is introduced to reduce this computational cost. In surrogate-model-based RBDO, the accuracy of the reliability estimate depends on the accuracy of the surrogate model of the constraint boundaries. In earlier research, constraint boundary sampling (CBS) was proposed to approximate the boundaries of constraints accurately by locating sample points on them. However, because CBS uses sample points on all constraint boundaries, it creates superfluous sample points. In this paper, efficient constraint boundary sampling (ECBS) is proposed to enhance the efficiency of CBS. ECBS uses the statistical information of a kriging surrogate model to locate sample points on or near the RBDO solution. The efficiency of ECBS is verified by mathematical examples.
Efficient uncertainty quantification methodologies for high-dimensional climate land models
Energy Technology Data Exchange (ETDEWEB)
Sargsyan, Khachik [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Safta, Cosmin [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Berry, Robert Dan [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ray, Jaideep [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Debusschere, Bert J. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Najm, Habib N. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)
2011-11-01
In this report, we proposed, examined and implemented approaches for performing efficient uncertainty quantification (UQ) in climate land models. Specifically, we applied a Bayesian compressive sensing framework to polynomial chaos spectral expansions, enhanced it with an iterative algorithm of basis reduction, and investigated the results on test models as well as on the Community Land Model (CLM). Furthermore, we discussed the construction of efficient quadrature rules for forward propagation of uncertainties from a high-dimensional, constrained input space to output quantities of interest. The work lays the groundwork for efficient forward UQ for high-dimensional, strongly non-linear and computationally costly climate models. Moreover, to investigate parameter inference approaches, we applied two variants of the Markov chain Monte Carlo (MCMC) method to a soil moisture dynamics submodel of the CLM. The evaluation of these algorithms gave us a good foundation for further building out the Bayesian calibration framework towards the goal of robust component-wise calibration.
DEFF Research Database (Denmark)
Boyd, Britta; Brem, Alexander; Bogers, Marcel
energy network and innovative business models for energy efficient solutions. In order to carry out the above research activities, a first-stage screening of Southern Danish and Northern German companies that are energy efficient and of high performance is crucial to our research. By conducting...... efficiency and innovation management. Then, based on the findings open and collaborative business models could be suggested. Open business models get more important because innovation no longer takes place within a single organization, but are distributed across stakeholders in a value network (Bogers & West......The growing dynamics of innovation and productivity affect businesses in most industries and countries. Companies face these challenges by constantly developing new technologies and business models - the logic with which they create and capture value (Afuah, 2014; Osterwalder & Pigneur, 2010; Zott...
New approach to determine common weights in DEA efficiency evaluation model
Institute of Scientific and Technical Information of China (English)
Feng Yang; Chenchen Yang; Liang Liang; Shaofu Du
2010-01-01
Data envelopment analysis (DEA) is a mathematical programming approach to appraise the relative efficiencies of peer decision-making units (DMUs), and is widely used in ranking DMUs. However, almost all DEA-related ranking approaches are based on self-evaluation efficiencies; in other words, each DMU chooses the weights it prefers most, so the resulting efficiencies are not suitable as ranking criteria. This paper therefore proposes a new approach to determine a bundle of common weights in the DEA efficiency evaluation model by introducing a multi-objective integer program. The paper also gives the solving process of this multi-objective integer program, and the solution is proven to be a Pareto-efficient solution. The solving process ensures that the obtained common weight bundle is acceptable to a great number of DMUs. Finally, a numerical example is given to demonstrate the approach.
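The point of a common weight bundle can be sketched with the basic CCR-style efficiency ratio, weighted outputs over weighted inputs. This is only an illustration of why common weights make scores comparable; the weight values, DMU data and function names are arbitrary assumptions, not the paper's multi-objective solution.

```python
def dea_efficiency(outputs, inputs, u, v):
    """CCR-style efficiency ratio for one DMU: sum(u_i * y_i) / sum(v_j * x_j),
    where u and v are output and input weights."""
    weighted_out = sum(ui * yi for ui, yi in zip(u, outputs))
    weighted_in = sum(vj * xj for vj, xj in zip(v, inputs))
    return weighted_out / weighted_in

def rank_dmus(dmus, u, v):
    """Score every DMU with ONE common weight bundle (u, v), so the scores
    are directly comparable, unlike self-evaluated DEA weights where each
    DMU picks the weights most favorable to itself."""
    scores = {name: dea_efficiency(y, x, u, v) for name, (y, x) in dmus.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Hypothetical data: (outputs, inputs) per DMU, one output and one input each.
dmus = {"A": ([10.0], [5.0]), "B": ([8.0], [8.0])}
print(rank_dmus(dmus, u=[1.0], v=[1.0]))
```

Under self-evaluation, each DMU would instead solve its own weight-choosing program, which is exactly the non-comparability the paper's common-weight approach removes.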
Eco-Efficiency Model for Evaluating Feedlot Rations in the Great Plains, United States.
Hengen, Tyler J; Sieverding, Heidi L; Cole, Noel A; Ham, Jay M; Stone, James J
2016-07-01
Environmental impacts attributable to beef feedlot production provide an opportunity for economically linked efficiency optimization. Eco-efficiency models are used to optimize production and processes by connecting and quantifying environmental and economic impacts. An adaptable, objective eco-efficiency model was developed to assess the impacts of dietary rations on beef feedlot environmental and fiscal cost. The hybridized model used California Net Energy System modeling, life cycle assessment, principal component analyses (PCA), and economic analyses. The model approach was based on 38 potential feedlot rations and four transportation scenarios for the US Great Plains for each ration to determine the appropriate weight of each impact. All 152 scenarios were then assessed through a nested PCA to determine the relative contributing weight of each impact and environmental category to the overall system. The PCA output was evaluated using an eco-efficiency model. Results suggest that water, ecosystem, and human health emissions were the primary impact category drivers for feedlot eco-efficiency scoring. Enteric CH4 emissions were the greatest individual contributor to environmental performance (5.7% of the overall assessment), whereas terrestrial ecotoxicity had the lowest overall contribution (0.2% of the overall assessment). A well-balanced ration with mid-range dietary and processing energy requirements yielded the most eco- and environmentally efficient system. Using these results, it is possible to design a beef feed ration that is more economical and environmentally friendly. This methodology can be used to evaluate eco-efficiency and to reduce researcher bias of other complex systems.
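The PCA-derived weighting step can be illustrated with a small sketch: category weights are taken from component loadings scaled by explained variance. The 152×5 impact matrix and the exact weighting rule below are assumptions for demonstration only, not the paper's calibrated nested-PCA model:

```python
import numpy as np

# Hypothetical impact matrix: rows = the 152 ration/transport scenarios,
# columns = impact categories (e.g. water, ecotoxicity, GHG, human
# health, energy). Values are synthetic, purely for illustration.
rng = np.random.default_rng(0)
impacts = rng.normal(size=(152, 5))

# Standardize, then compute PCA via the covariance eigendecomposition.
Z = (impacts - impacts.mean(0)) / impacts.std(0)
cov = np.cov(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]           # sort components by variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Relative contribution of each impact category: absolute loadings on
# each component, weighted by that component's explained variance,
# normalized so the weights sum to one.
explained = eigvals / eigvals.sum()
weights = (np.abs(eigvecs) * explained).sum(axis=1)
weights /= weights.sum()
print(weights.round(3))
```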
Institute of Scientific and Technical Information of China (English)
CHEN Ming; TANG Tiantong; CHEN Guangde; ZHANG Penghui
2003-01-01
Analysis of the electro-electro power transmission efficiency of a SAW IDT (surface acoustic wave interdigital transducer) system is presented, based on the newly developed simple six-port equivalent circuit model for SAW IDTs. The application of this method to a SAW IDT system with 80 fingers on Y-Z LiNbO3 for acousto-optical interaction is also investigated, and the calculated result is in agreement with the experimental one.
Maximum efficiency of state-space models of nanoscale energy conversion devices.
Einax, Mario; Nitzan, Abraham
2016-07-07
The performance of nanoscale energy conversion devices is studied in the framework of state-space models, where a device is described by a graph comprising states and transitions between them, represented by nodes and links, respectively. Particular segments of this network represent input (driving) and output processes whose properly chosen flux ratio provides the energy conversion efficiency. Simple cyclical graphs yield Carnot efficiency for the maximum conversion yield. We give a general proof that opening a link that separates the two driving segments always leads to reduced efficiency. We illustrate this general result with simple models of a thermoelectric nanodevice and an organic photovoltaic cell. In the latter, an intersecting link of the above type corresponds to non-radiative carrier recombination, and the reduced maximum efficiency is manifested as a smaller open-circuit voltage.
Maximum efficiency of state-space models of nanoscale energy conversion devices
Einax, Mario; Nitzan, Abraham
2016-07-01
The performance of nanoscale energy conversion devices is studied in the framework of state-space models, where a device is described by a graph comprising states and transitions between them, represented by nodes and links, respectively. Particular segments of this network represent input (driving) and output processes whose properly chosen flux ratio provides the energy conversion efficiency. Simple cyclical graphs yield Carnot efficiency for the maximum conversion yield. We give a general proof that opening a link that separates the two driving segments always leads to reduced efficiency. We illustrate this general result with simple models of a thermoelectric nanodevice and an organic photovoltaic cell. In the latter, an intersecting link of the above type corresponds to non-radiative carrier recombination, and the reduced maximum efficiency is manifested as a smaller open-circuit voltage.
Li, Qiuying; Pham, Hoang
2017-01-01
In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency, combined with testing coverage information, based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many NHPP-based software reliability growth models (SRGMs) have been proposed to estimate software reliability measures, most of which share the following assumptions: 1) during the testing phase, the fault detection rate always changes; 2) as a result of imperfect debugging, fault removal is accompanied by a fault re-introduction rate. However, few SRGMs in the literature differentiate between fault detection and fault removal, i.e. they seldom consider imperfect fault removal efficiency. In practical software development, fault removal efficiency cannot always be perfect: detected failures might not be removed completely, the original faults might still exist, and new faults might be introduced meanwhile, which is referred to as the imperfect debugging phenomenon. In this study, a model incorporating the fault introduction rate, fault removal efficiency, and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and fault removal efficiency to model fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs using three sets of real failure data based on five criteria. The results show that the model gives better fitting and predictive performance.
DEFF Research Database (Denmark)
Gregg, Jay Sterling; Balyk, Olexandr; Pérez, Cristian Hernán Cabrera
The objectives of the Sustainable Energy for All (SE4ALL), a United Nations (UN) global initiative, are to achieve, by 2030: 1) universal access to modern energy services; 2) a doubling of the global rate of improvement in energy efficiency; and 3) a doubling of the share of renewable energy in t...... including updating data, setting constraints, and reporting on output. The presentation also addresses the addition of new model components such as traditional biomass and building energy efficiency....
Directory of Open Access Journals (Sweden)
Nengcheng Chen
2015-07-01
Full Text Available Remote sensing plays an important role in flood mapping and is helping advance flood monitoring and management. Multi-scale flood mapping is necessary for dividing floods into several stages for comprehensive management. However, existing data systems are typically heterogeneous owing to the use of different access protocols and archiving metadata models. In this paper, we propose a sharable and efficient metadata model (APEOPM) for constructing an Earth observation (EO) data system to retrieve remote sensing data for flood mapping. The proposed model contains two sub-models, an access protocol model and an enhanced encoding model. The access protocol model helps unify heterogeneous access protocols and can achieve intelligent access via a semantic enhancement method. The enhanced encoding model helps unify heterogeneous archiving metadata models. Wuhan city, one of the most important cities in the Yangtze River Economic Belt in China, is selected as a study area for testing the retrieval of heterogeneous EO data and flood mapping. The past torrential rain period from 25 March 2015 to 10 April 2015 is chosen as the temporal range in this study. To aid in comprehensive management, mapping is conducted at different spatial and temporal scales. In addition, the efficiency of data retrieval is analyzed, and validation between the flood maps and actual precipitation was conducted. The results show that the flood maps coincided with the actual precipitation.
Directory of Open Access Journals (Sweden)
In Kyu Park
2002-10-01
Full Text Available Estimation of the shape dissimilarity between 3D models is a very important problem in both computer vision and graphics for 3D surface reconstruction, modeling, matching, and compression. In this paper, we propose a novel method called surface roving technique to estimate the shape dissimilarity between 3D models. Unlike conventional methods, our surface roving approach exploits a virtual camera and Z-buffer, which is commonly used in 3D graphics. The corresponding points on different 3D models can be easily identified, and also the distance between them is determined efficiently, regardless of the representation types of the 3D models. Moreover, by employing the viewpoint sampling technique, the overall computation can be greatly reduced so that the dissimilarity is obtained rapidly without loss of accuracy. Experimental results show that the proposed algorithm achieves fast and accurate measurement of shape dissimilarity for different types of 3D object models.
Paprotny, Dominik; Morales-Nápoles, Oswaldo; Jonkman, Sebastiaan N.
2017-07-01
Flood hazard is currently being researched on continental and global scales, using models of increasing complexity. In this paper we investigate a different, simplified approach, which combines statistical and physical models in place of conventional rainfall-run-off models to carry out flood mapping for Europe. A Bayesian-network-based model built in a previous study is employed to generate return-period flow rates in European rivers with a catchment area larger than 100 km2. The simulations are performed using a one-dimensional steady-state hydraulic model and the results are post-processed using Geographical Information System (GIS) software in order to derive flood zones. This approach is validated by comparison with Joint Research Centre's (JRC) pan-European map and five local flood studies from different countries. Overall, the two approaches show a similar performance in recreating flood zones of local maps. The simplified approach achieved a similar level of accuracy, while substantially reducing the computational time. The paper also presents the aggregated results on the flood hazard in Europe, including future projections. We find relatively small changes in flood hazard, i.e. an increase of flood zones area by 2-4 % by the end of the century compared to the historical scenario. However, when current flood protection standards are taken into account, the flood-prone area increases substantially in the future (28-38 % for a 100-year return period). This is because in many parts of Europe river discharge with the same return period is projected to increase in the future, thus making the protection standards insufficient.
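The one-dimensional steady-state hydraulic step can be illustrated with Manning's equation: given a return-period discharge from the Bayesian network, solve for the flow depth that is then intersected with the terrain in GIS to derive flood zones. The rectangular channel geometry, roughness, and slope below are invented for the example:

```python
import math

def normal_depth(Q, b=50.0, n=0.035, S=5e-4, lo=1e-6, hi=50.0):
    """Solve Manning's equation for the normal flow depth h (m) in a
    rectangular channel of width b (m):
        Q = (1/n) * A * R**(2/3) * sqrt(S),
    with A = b*h (flow area), R = A / (b + 2h) (hydraulic radius),
    n the Manning roughness and S the bed slope.  Discharge grows
    monotonically with depth, so bisection converges.  All channel
    parameters here are illustrative, not from the European data."""
    def discharge(h):
        A = b * h
        R = A / (b + 2.0 * h)
        return A * R ** (2.0 / 3.0) * math.sqrt(S) / n
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if discharge(mid) < Q:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

h = normal_depth(Q=500.0)   # e.g. a 100-year design flow of 500 m3/s
print(round(h, 3))
```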
Velichko, A.; Wilcox, P. D.
2012-05-01
An efficient technique for predicting the complete scattering behavior for an arbitrarily-shaped scatterer is presented. The spatial size of the modeling domain around the scatterer is as small as possible to minimize computational expense and a minimum number of models are executed. This model uses non-reflecting boundary conditions on the surface surrounding the scatterer which are non-local in space. Example results for 2D and 3D scattering in isotropic material and guided wave scattering are presented.
Sugaya, Nobuyoshi
2014-10-27
The concept of ligand efficiency (LE) indices is widely accepted throughout the drug design community and is frequently used in a retrospective manner in the process of drug development. For example, LE indices are used to investigate LE optimization processes of already-approved drugs and to re-evaluate hit compounds obtained from structure-based virtual screening methods and/or high-throughput experimental assays. However, LE indices could also be applied in a prospective manner to explore drug candidates. Here, we describe the construction of machine learning-based regression models in which LE indices are adopted as an end point and show that LE-based regression models can outperform regression models based on pIC50 values. In addition to pIC50 values traditionally used in machine learning studies based on chemogenomics data, three representative LE indices (ligand lipophilicity efficiency (LLE), binding efficiency index (BEI), and surface efficiency index (SEI)) were adopted and used to create four types of training data. We constructed regression models by applying a support vector regression (SVR) method to the training data. In cross-validation tests of the SVR models, the LE-based SVR models showed higher correlations between the observed and predicted values than the pIC50-based models. Application tests to new data showed that, generally, the predictive performance of the SVR models follows the order SEI > BEI > LLE > pIC50. Close examination of the distributions of the activity values (pIC50, LLE, BEI, and SEI) in the training and validation data implied that the performance order of the SVR models may be ascribed to the much higher diversity of the LE-based training and validation data. In the application tests, the LE-based SVR models can offer better predictive performance for compound-protein pairs with a wider range of ligand potencies than the pIC50-based models. This finding strongly suggests that LE-based SVR models are better than pIC50-based models.
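A minimal version of the SVR-on-LE-index setup can be sketched with scikit-learn. The descriptors, the simplified BEI definition used here (pIC50 divided by molecular weight in kDa), and all numbers are synthetic stand-ins, not the paper's data or descriptor set:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 16))                 # stand-in compound descriptors
pic50 = 5.0 + X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=200)
mw_kda = 0.3 + 0.2 * rng.random(200)           # molecular weight in kDa
bei = pic50 / mw_kda                           # simplified BEI end point

# Fit one SVR per end point and compare cross-validated R^2, mirroring
# the paper's comparison of pIC50-based vs LE-based regression models.
results = {}
for name, y in [("pIC50", pic50), ("BEI", bei)]:
    model = SVR(kernel="rbf", C=10.0)
    results[name] = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean CV R^2 = {results[name]:.2f}")
```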
MATLAB Programs for the Super Efficiency DEA Model
Institute of Scientific and Technical Information of China (English)
LIU Zhan; QU Cong
2014-01-01
In this study the super-efficiency DEA model is programmed with MATLAB. The MATLAB programs are further used to compute and analyze the super-efficiency of fiscal investment in science and technology in Henan Province during 2002-2011. The empirical analysis shows that these MATLAB programs are very convenient and efficient.
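Although the record's programs are in MATLAB, the Andersen-Petersen super-efficiency LP they implement can be sketched equivalently in Python with scipy; the three-DMU data set is invented for illustration:

```python
import numpy as np
from scipy.optimize import linprog

def super_efficiency(X, Y, o):
    """Input-oriented super-efficiency score (Andersen-Petersen):
    the CCR multiplier LP with DMU o removed from its own reference
    set, so efficient DMUs can score above 1 and thus be ranked."""
    n, m = X.shape
    s = Y.shape[1]
    others = [j for j in range(n) if j != o]       # exclude DMU o
    c = np.concatenate([-Y[o], np.zeros(m)])       # maximize u.y_o
    A_ub = np.hstack([Y[others], -X[others]])      # u.Y_j - v.X_j <= 0
    b_ub = np.zeros(n - 1)
    A_eq = np.concatenate([np.zeros(s), X[o]]).reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m), method="highs")
    return -res.fun

X = np.array([[2.0], [4.0], [8.0]])   # one input
Y = np.array([[2.0], [3.0], [4.0]])   # one output
scores = [round(super_efficiency(X, Y, o), 3) for o in range(3)]
print(scores)
```

The frontier DMU now scores above 1, which is the whole point of the super-efficiency variant: efficient units become distinguishable from one another.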
Efficient modeling of sun/shade canopy radiation dynamics explicitly accounting for scattering
Directory of Open Access Journals (Sweden)
P. Bodin
2012-04-01
Full Text Available The separation of global radiation (R_{g}) into its direct (R_{b}) and diffuse (R_{d}) constituents is important when modeling plant photosynthesis because a high R_{d}:R_{g} ratio has been shown to enhance Gross Primary Production (GPP). To include this effect in vegetation models, the plant canopy must be separated into sunlit and shaded leaves. However, because such models are often too intractable and computationally expensive for theoretical or large-scale studies, simpler sun-shade approaches are often preferred. A widely used and computationally efficient sun-shade model was developed by Goudriaan (1977) (GOU). However, compared to more complex models, this model's realism is limited by its lack of explicit treatment of radiation scattering.
Here we present a new model based on the GOU model, but which in contrast explicitly simulates radiation scattering by sunlit leaves and the absorption of this radiation by the canopy layers above and below (2-stream approach. Compared to the GOU model our model predicts significantly different profiles of scattered radiation that are in better agreement with measured profiles of downwelling diffuse radiation. With respect to these data our model's performance is equal to a more complex and much slower iterative radiation model while maintaining the simplicity and computational efficiency of the GOU model.
Efficient modeling of sun/shade canopy radiation dynamics explicitly accounting for scattering
Directory of Open Access Journals (Sweden)
P. Bodin
2011-08-01
Full Text Available The separation of global radiation (R_{g}) into its direct (R_{b}) and diffuse (R_{d}) constituents is important when modeling plant photosynthesis because a high R_{d}:R_{g} ratio has been shown to enhance Gross Primary Production (GPP). To include this effect in vegetation models, the plant canopy must be separated into sunlit and shaded leaves, for example using an explicit 3-dimensional ray tracing model. However, because such models are often too intractable and computationally expensive for theoretical or large-scale studies, simpler sun-shade approaches are often preferred. A widely used and computationally efficient sun-shade model is the model originally developed by Goudriaan (1977) (GOU), which, however, does not explicitly account for radiation scattering.
Here we present a new model based on the GOU model, but which in contrast explicitly simulates radiation scattering by sunlit leaves and the absorption of this radiation by the canopy layers above and below (2-stream approach. Compared to the GOU model our model predicts significantly different profiles of scattered radiation that are in better agreement with measured profiles of downwelling diffuse radiation. With respect to these data our model's performance is equal to a more complex and much slower iterative radiation model while maintaining the simplicity and computational efficiency of the GOU model.
Numerical flow simulation and efficiency prediction for axial turbines by advanced turbulence models
Jošt, D.; Škerlavaj, A.; Lipej, A.
2012-11-01
Numerical prediction of the efficiency of a 6-blade Kaplan turbine is presented. At first, the results of steady-state analysis performed with different turbulence models for different operating regimes are compared to the measurements. For small and optimal angles of the runner blades the efficiency was predicted quite accurately, but for the maximal blade angle the discrepancy between calculated and measured values was quite large. With transient analysis, especially when the Scale Adaptive Simulation Shear Stress Transport (SAS SST) model with zonal Large Eddy Simulation (ZLES) in the draft tube was used, the predicted efficiency was significantly improved. The improvement was seen at all operating points, but it was largest for maximal discharge. The reason was better flow simulation in the draft tube. Details about the turbulent structure in the draft tube obtained by SST, SAS SST and SAS SST with ZLES are illustrated in order to explain the reasons for the differences in flow energy losses obtained by the different turbulence models.
Santillana, Mauricio; Le Sager, Philippe; Jacob, Daniel J.; Brenner, Michael P.
2010-11-01
We present a computationally efficient adaptive method for calculating the time evolution of the concentrations of chemical species in global 3-D models of atmospheric chemistry. Our strategy consists of partitioning the computational domain into fast and slow regions for each chemical species at every time step. In each grid box, we group the fast species and solve for their concentrations in a coupled fashion. Concentrations of the slow species are calculated using a simple semi-implicit formula. Separation of species into fast and slow is done on the fly based on their local production and loss rates. This makes it possible, for example, to exclude short-lived volatile organic compounds (VOCs) and their oxidation products from chemical calculations in the remote troposphere where their concentrations are negligible, letting the simulation determine the exclusion domain and allowing species to drop out individually from the coupled chemical calculation as their production/loss rates decline. We applied our method to a 1-year simulation of global tropospheric ozone-NOx-VOC-aerosol chemistry using the GEOS-Chem model. Results show a 50% improvement in computational performance for the chemical solver, with no significant added error.
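The on-the-fly fast/slow partition and a semi-implicit update can be sketched as follows; the lifetime threshold, rate values, and the exact update formula are illustrative, not GEOS-Chem's actual configuration:

```python
import numpy as np

def step_chemistry(conc, prod, loss, dt, tau_fast=100.0):
    """One step of a fast/slow-partitioned solver sketch.  Species
    whose local lifetime conc/loss is shorter than tau_fast (s) are
    flagged 'fast'; in a full model the fast group would be solved as
    a coupled implicit system, while here every species just gets the
    semi-implicit Euler update
        c(t+dt) = (c + P*dt) / (1 + (L/c)*dt),
    with P the production rate and L the loss rate."""
    conc = np.asarray(conc, float)
    lifetime = conc / np.maximum(loss, 1e-30)
    fast = lifetime < tau_fast                  # on-the-fly partition
    k = loss / np.maximum(conc, 1e-30)          # first-order loss frequency
    new = (conc + prod * dt) / (1.0 + k * dt)   # semi-implicit formula
    return new, fast

conc = np.array([1e9, 1e5])   # molecules/cm3, illustrative values
prod = np.array([1e4, 1e3])
loss = np.array([1e3, 1e4])   # second species is short-lived -> 'fast'
new, fast = step_chemistry(conc, prod, loss, dt=60.0)
print(fast)
```

The semi-implicit form keeps concentrations positive even for stiff loss terms, which is why it is a safe default for the slow group.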
Efficient non-negative constrained model-based inversion in optoacoustic tomography
Ding, Lu; Luís Deán-Ben, X.; Lutzweiler, Christian; Razansky, Daniel; Ntziachristos, Vasilis
2015-09-01
The inversion accuracy in optoacoustic tomography depends on a number of parameters, including the number of detectors employed, discrete sampling issues and imperfectness of the forward model. These parameters result in ambiguities in the reconstructed image. A common ambiguity is the appearance of negative values, which have no physical meaning since optical absorption can only be greater than or equal to zero. We investigate herein algorithms that impose non-negativity constraints in model-based optoacoustic inversion. Several state-of-the-art non-negative constrained algorithms are analyzed. Furthermore, an algorithm based on the conjugate gradient method is introduced in this work. We are particularly interested in investigating whether positivity restrictions lead to accurate solutions or drive the appearance of errors and artifacts. It is shown that the computational performance of non-negative constrained inversion is higher for the introduced algorithm than for the other algorithms, while yielding equivalent results. The experimental performance of this inversion procedure is then tested in phantoms and small animals, showing an improvement in image quality and quantitative accuracy with respect to the unconstrained approach. The study validates the use of non-negativity constraints for improving image accuracy compared to unconstrained methods, while maintaining computational efficiency.
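The effect of the non-negativity constraint can be demonstrated with a toy linear forward model, here using scipy's NNLS solver rather than the paper's conjugate-gradient-based algorithm; the forward matrix and "absorption image" are synthetic:

```python
import numpy as np
from scipy.optimize import nnls

# Toy model-based inversion: forward matrix A maps a non-negative
# absorption vector x to measured signals b (all values synthetic).
rng = np.random.default_rng(1)
A = rng.normal(size=(80, 30))
x_true = np.abs(rng.normal(size=30))
b = A @ x_true + rng.normal(scale=0.05, size=80)

x_nn, _ = nnls(A, b)                           # non-negative constrained
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)   # unconstrained least squares

print("constrained solution minimum:  ", round(float(x_nn.min()), 6))
print("unconstrained solution minimum:", round(float(x_ls.min()), 6))
```

By construction the constrained solution contains no negative (unphysical) values, at the cost of a residual that can never be smaller than the unconstrained one, since the feasible set is a subset of the unconstrained space.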
A Cobb Douglas stochastic frontier model on measuring domestic bank efficiency in Malaysia.
Directory of Open Access Journals (Sweden)
Md Zobaer Hasan
Full Text Available Banking system plays an important role in the economic development of any country. Domestic banks, which are the main components of the banking system, have to be efficient; otherwise, they may create obstacle in the process of development in any economy. This study examines the technical efficiency of the Malaysian domestic banks listed in the Kuala Lumpur Stock Exchange (KLSE market over the period 2005-2010. A parametric approach, Stochastic Frontier Approach (SFA, is used in this analysis. The findings show that Malaysian domestic banks have exhibited an average overall efficiency of 94 percent, implying that sample banks have wasted an average of 6 percent of their inputs. Among the banks, RHBCAP is found to be highly efficient with a score of 0.986 and PBBANK is noted to have the lowest efficiency with a score of 0.918. The results also show that the level of efficiency has increased during the period of study, and that the technical efficiency effect has fluctuated considerably over time.
A Cobb Douglas stochastic frontier model on measuring domestic bank efficiency in Malaysia.
Hasan, Md Zobaer; Kamil, Anton Abdulbasah; Mustafa, Adli; Baten, Md Azizul
2012-01-01
Banking system plays an important role in the economic development of any country. Domestic banks, which are the main components of the banking system, have to be efficient; otherwise, they may create obstacle in the process of development in any economy. This study examines the technical efficiency of the Malaysian domestic banks listed in the Kuala Lumpur Stock Exchange (KLSE) market over the period 2005-2010. A parametric approach, Stochastic Frontier Approach (SFA), is used in this analysis. The findings show that Malaysian domestic banks have exhibited an average overall efficiency of 94 percent, implying that sample banks have wasted an average of 6 percent of their inputs. Among the banks, RHBCAP is found to be highly efficient with a score of 0.986 and PBBANK is noted to have the lowest efficiency with a score of 0.918. The results also show that the level of efficiency has increased during the period of study, and that the technical efficiency effect has fluctuated considerably over time.
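The composed-error structure of such a Cobb-Douglas stochastic frontier, and the resulting technical efficiency TE = exp(-u), can be sketched by simulation. All parameter values below are invented for illustration, not estimates from the Malaysian bank data:

```python
import numpy as np

# Simulate a Cobb-Douglas stochastic frontier in logs:
#   ln y = b0 + b1*ln x1 + b2*ln x2 + v - u,
# where v ~ N(0, sv^2) is symmetric noise and u >= 0 is half-normal
# inefficiency.  Technical efficiency of firm i is TE_i = exp(-u_i),
# so TE = 1 means the firm produces on the frontier.
rng = np.random.default_rng(7)
n = 500
b0, b1, b2, sv, su = 1.0, 0.4, 0.5, 0.1, 0.2   # illustrative parameters
lx1, lx2 = rng.normal(size=(2, n))              # log inputs
v = rng.normal(scale=sv, size=n)                # statistical noise
u = np.abs(rng.normal(scale=su, size=n))        # half-normal inefficiency
ln_y = b0 + b1 * lx1 + b2 * lx2 + v - u         # log output (frontier - u)

te = np.exp(-u)                                 # technical efficiencies
print(round(float(te.mean()), 3))               # sample mean efficiency
```

In an actual SFA study the parameters and the firm-level u_i are of course estimated from data (e.g. by maximum likelihood with JLMS decomposition), not simulated; the sketch only shows how the "94 percent average efficiency" kind of statistic arises from the composed error.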
Energy Technology Data Exchange (ETDEWEB)
Vittone, E., E-mail: ettore.vittone@unito.it [Department of Physics, NIS Research Centre and CNISM, University of Torino, via P. Giuria 1, 10125 Torino (Italy); Pastuovic, Z. [Centre for Accelerator Science (ANSTO), Locked bag 2001, Kirrawee DC, NSW 2234 (Australia); Breese, M.B.H. [Centre for Ion Beam Applications (CIBA), Department of Physics, National University of Singapore, Singapore 117542 (Singapore); Garcia Lopez, J. [Centro Nacional de Aceleradores (CNA), Sevilla University, J. Andalucia, CSIC, Av. Thomas A. Edison 7, 41092 Sevilla (Spain); Jaksic, M. [Department for Experimental Physics, Ruder Boškovic Institute (RBI), P.O. Box 180, 10002 Zagreb (Croatia); Raisanen, J. [Department of Physics, University of Helsinki, Helsinki 00014 (Finland); Siegele, R. [Centre for Accelerator Science (ANSTO), Locked bag 2001, Kirrawee DC, NSW 2234 (Australia); Simon, A. [International Atomic Energy Agency (IAEA), Vienna International Centre, P.O. Box 100, 1400 Vienna (Austria); Institute of Nuclear Research of the Hungarian Academy of Sciences (ATOMKI), Debrecen (Hungary); Vizkelethy, G. [Sandia National Laboratories (SNL), PO Box 5800, Albuquerque, NM (United States)
2016-04-01
Highlights: • We study the electronic degradation of semiconductors induced by ion irradiation. • The experimental protocol is based on MeV ion microbeam irradiation. • The radiation induced damage is measured by IBIC. • The general model fits the experimental data in the low-level damage regime. • Key parameters relevant to the intrinsic radiation hardness are extracted. - Abstract: This paper investigates both theoretically and experimentally the charge collection efficiency (CCE) degradation in silicon diodes induced by energetic ions. Ion Beam Induced Charge (IBIC) measurements carried out on n- and p-type silicon diodes previously irradiated with MeV He ions show evidence that the CCE degradation depends not only on the mass, energy and fluence of the damaging ion, but also on the ion probe species and on the polarization state of the device. A general one-dimensional model is derived, which accounts for the ion-induced defect distribution, the ionization profile of the probing ion and the charge induction mechanism. Using as input the ionizing and non-ionizing energy loss profiles resulting from simulations based on the binary collision approximation, together with the electrostatic/transport parameters of the diode under study, the model is able to accurately reproduce the experimental CCE degradation curves without introducing any additional phenomenological term or formula. Although limited to low levels of damage, the model is quite general, includes the displacement damage approach as a special case and can be applied to any semiconductor device. It provides a method to measure the capture coefficients of the radiation-induced recombination centres, which can serve as indices for assessing the relative radiation hardness of semiconductor materials.
Berends, Constantijn J.; van de Wal, Roderik S. W.
2016-12-01
Many processes govern the deglaciation of ice sheets. One of the processes that is usually ignored is the calving of ice in lakes that temporarily surround the ice sheet. In order to capture this process a "flood-fill algorithm" is needed. Here we present and evaluate several optimizations to a standard flood-fill algorithm in terms of computational efficiency. As an example, we determine the land-ocean mask for a 1 km resolution digital elevation model (DEM) of North America and Greenland, a geographical area of roughly 7000 by 5000 km (roughly 35 million elements), about half of which is covered by ocean. Determining the land-ocean mask with our improved flood-fill algorithm reduces computation time by 90 % relative to using a standard stack-based flood-fill algorithm. This implies that it is now feasible to include the calving of ice in lakes as a dynamical process inside an ice-sheet model. We demonstrate this by using bedrock elevation, ice thickness and geoid perturbation fields from the output of a coupled ice-sheet-sea-level equation model at 30 000 years before present and determine the extent of Lake Agassiz, using both the standard and improved versions of the flood-fill algorithm. We show that several optimizations to the flood-fill algorithm used for filling a depression up to a water level, which is not defined beforehand, decrease the computation time by up to 99 %. The resulting reduction in computation time allows determination of the extent and volume of depressions in a DEM over large geographical grids or repeatedly over long periods of time, where computation time might otherwise be a limiting factor. The algorithm can be used for all glaciological and hydrological models, which need to trace the evolution over time of lakes or drainage basins in general.
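A baseline queue-based flood fill of the kind being optimized might look like the sketch below; the elevation grid is a toy example, and the paper's scanline and water-level optimizations go beyond this basic version:

```python
from collections import deque

def flood_fill_ocean(elev, water_level, seed):
    """Mark all cells connected to `seed` (4-connectivity) whose
    elevation is below `water_level`, using an explicit FIFO queue
    instead of recursion.  This is the standard algorithm whose
    optimized variants the study benchmarks for land-ocean masking."""
    rows, cols = len(elev), len(elev[0])
    mask = [[False] * cols for _ in range(rows)]
    q = deque([seed])
    while q:
        r, c = q.popleft()
        if not (0 <= r < rows and 0 <= c < cols):
            continue                      # off the grid
        if mask[r][c] or elev[r][c] >= water_level:
            continue                      # already filled, or dry land
        mask[r][c] = True
        q.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return mask

elev = [[5, 5, 5, 5],
        [5, 0, 0, 5],
        [0, 5, 0, 5],
        [0, 0, 5, 5]]
mask = flood_fill_ocean(elev, water_level=1, seed=(1, 1))
print(sum(map(sum, mask)))   # low cells connected to the seed
```

Note how the low cells in the bottom-left corner are not filled: they sit below the water level but are not connected to the seed, which is exactly the distinction that makes flood filling (rather than simple thresholding) necessary for lake and ocean masks.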
Stochastic Frontier Model for Cost and Profit Efficiency of Islamic Online Banks
Directory of Open Access Journals (Sweden)
AZIZUL BATEN
2014-05-01
Full Text Available Are Islamic online banks stable and efficient? This paper addresses this question. A parametric technique, Stochastic Frontier Analysis, is used to evaluate and compare the cost and profit efficiency of the Islamic banks in Bangladesh over the period 2001-2010. Specifications of the functional forms of the translog stochastic cost and profit frontier models are developed; the translog models were found preferable to the Cobb-Douglas production function. In the cost model, other earning assets are found negative but significant, and the price of labor is positive and significant. On the other hand, the price of funds, with a value of -0.421, is found significant and negative in the profit model, suggesting that banks can control personnel expenses more than depositor profit expenses. The year-wise average cost inefficiency and profit efficiency were 43.9% and 82%, respectively. IBBL was recorded as the most profit-efficient bank, and ICB Limited was observed to be the most cost-inefficient bank. IBBL, Al-Arafah and EXIM banks were more stable in terms of cost efficiency than the other Islamic banks.
Directory of Open Access Journals (Sweden)
Clémentine Dressaire
2009-12-01
Full Text Available This genome-scale study analysed the various parameters influencing protein levels in cells. To achieve this goal, the model bacterium Lactococcus lactis was grown at steady state in continuous cultures at different growth rates, and proteomic and transcriptomic data were thoroughly compared. Ratios of mRNA to protein were highly variable among proteins but also, for a given gene, between the different growth conditions. The modeling of cellular processes combined with a data-fitting modeling approach allowed both translation efficiencies and degradation rates to be estimated for each protein in each growth condition. Estimated translational efficiencies and degradation rates differed strongly between proteins and were tested for their biological significance through statistical correlations with relevant parameters such as codon or amino acid bias. These efficiencies and degradation rates were not constant in all growth conditions and were inversely proportional to the growth rate, indicating more efficient translation at low growth rate but an antagonistic higher rate of protein degradation. Estimated protein median half-lives ranged from 23 to 224 min, underlining the importance of protein degradation, notably at low growth rates. The regulation of intracellular protein level was analysed through regulatory coefficient calculations, revealing a complex control depending on protein and growth conditions. The modeling approach enabled translational efficiencies and protein degradation rates to be estimated, two biological parameters extremely difficult to determine experimentally and generally lacking in bacteria. This method is generic and can now be extended to other environments and/or other micro-organisms.
Efficiency of endoscopy units can be improved with use of discrete event simulation modeling.
Sauer, Bryan G; Singh, Kanwar P; Wagner, Barry L; Vanden Hoek, Matthew S; Twilley, Katherine; Cohn, Steven M; Shami, Vanessa M; Wang, Andrew Y
2016-11-01
Background and study aims: The projected increase in demand for health services obligates healthcare organizations to operate efficiently. Discrete event simulation (DES) is a modeling method that allows systems to be optimized through virtual testing of different configurations before implementation. The objective of this study was to identify strategies to improve the daily efficiency of an endoscopy center with the use of DES. Methods: We built a DES model of a five-procedure-room endoscopy unit at a tertiary-care university medical center. After validating the baseline model, we tested alternate configurations for running the endoscopy suite and evaluated the outcomes associated with each change. The main outcome measures included the adequate number of preparation and recovery rooms, blocked inflow, delay times, blocked outflow, and patient cycle time. Results: Based on a sensitivity analysis, the adequate number of preparation rooms is eight and of recovery rooms is nine for a five-procedure-room unit (a total of 3.4 preparation and recovery rooms per procedure room). Simple changes to procedure scheduling and patient arrival times led to a modest improvement in efficiency. Increasing the preparation/recovery rooms based on the sensitivity analysis led to significant improvements in efficiency. Conclusions: By applying tools such as DES, we can model changes in an environment with complex interactions and find ways to improve the medical care we provide. DES is applicable to any endoscopy unit and would be particularly valuable to those who are trying to improve the efficiency of care and the patient experience.
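The core mechanic of a DES like the one described, resources contended for over time via an event queue, can be sketched with the standard library. This toy covers only a single stage with a fixed number of rooms and the arrival times and durations are invented; the paper's validated model spans preparation, procedure, and recovery.

```python
import heapq

def simulate_unit(arrival_times, procedure_min, n_rooms):
    """Toy discrete-event sketch of one stage of an endoscopy unit: each
    patient takes the earliest-free of n_rooms for procedure_min minutes;
    returns each patient's delay (time spent waiting for a room).
    Illustrative only, not the paper's model."""
    free_at = [0.0] * n_rooms            # min-heap of room free times
    heapq.heapify(free_at)
    delays = []
    for t in arrival_times:
        room_free = heapq.heappop(free_at)
        start = max(t, room_free)        # wait if every room is busy
        delays.append(start - t)
        heapq.heappush(free_at, start + procedure_min)
    return delays

# Five rooms, arrivals every 10 min, 60-min procedures: queueing builds up.
delays = simulate_unit([10.0 * i for i in range(12)], 60.0, 5)
```

Running alternate configurations (more rooms, different arrival spacing) and comparing the delay distributions is exactly the kind of virtual what-if testing the abstract describes.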
Efficient Actor-Critic Algorithm with Hierarchical Model Learning and Planning
Fu, QiMing
2016-01-01
To improve the convergence rate and the sample efficiency, two efficient learning methods AC-HMLP and RAC-HMLP (AC-HMLP with ℓ2-regularization) are proposed by combining actor-critic algorithm with hierarchical model learning and planning. The hierarchical models consisting of the local and the global models, which are learned at the same time during learning of the value function and the policy, are approximated by local linear regression (LLR) and linear function approximation (LFA), respectively. Both the local model and the global model are applied to generate samples for planning; the former is used only if the state-prediction error does not surpass the threshold at each time step, while the latter is utilized at the end of each episode. The purpose of taking both models is to improve the sample efficiency and accelerate the convergence rate of the whole algorithm through fully utilizing the local and global information. Experimentally, AC-HMLP and RAC-HMLP are compared with three representative algorithms on two Reinforcement Learning (RL) benchmark problems. The results demonstrate that they perform best in terms of convergence rate and sample efficiency. PMID:27795704
An efficient model to simulate stable glow corona discharges and their transition into streamers
Liu, Lipeng; Becerra, Marley
2017-03-01
A computationally efficient model to evaluate stable glow corona discharges and their transition into streamers is proposed. The simplified physical model referred to as the SPM is based on the classic hydrodynamic model of charge particles and a quasi-steady state approximation for electrons. The solution follows a two-step segregated procedure, which solves sequentially the stationary continuity equation for electrons and then time-dependent continuity equations for ions. The validity of using the SPM to simulate glow corona discharges and their transition into streamers is demonstrated by performing comparisons with a fully coupled physical model (FPM) and with experimental data available in the literature for air under atmospheric conditions. It is shown that the SPM can obtain estimates similar to those calculated with the FPM and those measured in experiments but using significantly less computation time. Since the proposed model simulates efficiently the ionization layer without prior knowledge of the surface electric field or the discharge current, it is a computationally efficient alternative to calculations of glow corona discharges based on Kaptzov’s approximation (KAM). The model can also be employed to efficiently calculate the conditions for the transition of glow corona into streamers, overcoming the limitations of KAM to provide such estimates.
Fence - An Efficient Parser with Ambiguity Support for Model-Driven Language Specification
Quesada, Luis; Cortijo, Francisco J
2011-01-01
Model-based language specification has applications in the implementation of language processors, the design of domain-specific languages, model-driven software development, data integration, text mining, natural language processing, and corpus-based induction of models. Model-based language specification decouples language design from language processing and, unlike traditional grammar-driven approaches, which constrain language designers to specific kinds of grammars, it needs general parser generators able to deal with ambiguities. In this paper, we propose Fence, an efficient bottom-up parsing algorithm with lexical and syntactic ambiguity support that enables the use of model-based language specification in practice.
Morshed, Monjur; Ingalls, Brian; Ilie, Silvana
2017-01-01
Sensitivity analysis characterizes the dependence of a model's behaviour on system parameters. It is a critical tool in the formulation, characterization, and verification of models of biochemical reaction networks, for which confident estimates of parameter values are often lacking. In this paper, we propose a novel method for sensitivity analysis of discrete stochastic models of biochemical reaction systems whose dynamics occur over a range of timescales. This method combines finite-difference approximations and adaptive tau-leaping strategies to efficiently estimate parametric sensitivities for stiff stochastic biochemical kinetics models, with negligible loss in accuracy compared with previously published approaches. We analyze several models of interest to illustrate the advantages of our method.
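The finite-difference idea behind the method can be illustrated on a toy pure-birth process that is exactly simulable by SSA. The common-random-numbers trick shown here is a standard variance-reduction device; the paper's contribution, adaptive tau-leaping for stiff systems, is not reproduced.

```python
import math
import random

def births_by_time(rate, t_end, rng):
    """Exact SSA for a pure birth process: exponential waiting times."""
    t, n = 0.0, 0
    while True:
        t += -math.log(rng.random()) / rate
        if t > t_end:
            return n
        n += 1

def fd_sensitivity(rate, t_end, h, n_samples, seed=0):
    """Central finite-difference estimate of d E[N(t_end)]/d(rate) using
    common random numbers: both perturbed runs reuse the same seed so the
    same uniform draws drive both trajectories (variance reduction)."""
    diff = 0
    for i in range(n_samples):
        diff += births_by_time(rate + h, t_end, random.Random(seed + i))
        diff -= births_by_time(rate - h, t_end, random.Random(seed + i))
    return diff / (2.0 * h * n_samples)

# For a Poisson process E[N(t)] = rate*t, so the true sensitivity is t_end.
s = fd_sensitivity(rate=1.0, t_end=10.0, h=0.1, n_samples=200)
```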
Farhat-McHayleh, Nada; Harfouche, Alice; Souaid, Philippe
2009-05-01
Tell-show-do is the most popular technique for managing children's behaviour in dentists' offices. Live modelling is used less frequently, despite the satisfactory results obtained in studies conducted during the 1980s. The purpose of this study was to compare the effects of these 2 techniques on children's heart rates during dental treatments, heart rate being the simplest biological parameter to measure and an increase in heart rate being the most common physiologic indicator of anxiety and fear. For this randomized, controlled, parallel-group, single-centre clinical trial, children 5 to 9 years of age presenting for the first time to the Saint Joseph University dental care centre in Beirut, Lebanon, were divided into 3 groups: those in groups A and B were prepared for dental treatment by means of live modelling, the mother serving as the model for children in group A and the father as the model for children in group B. The children in group C were prepared by a pediatric dentist using the tell-show-do method. Each child's heart rate was monitored during treatment, which consisted of an oral examination and cleaning. A total of 155 children met the study criteria and participated in the study. Children who received live modelling with the mother as model had lower heart rates than those who received live modelling with the father as model and those who were prepared by the tell-show-do method (p dentistry.
The Time-Dependent FX-SABR Model: Efficient Calibration based on Effective Parameters
van der Stoep, H.; Grzelak, Lech Aleksander; Oosterlee, Cornelis
2014-01-01
We present a framework for efficient calibration of the time-dependent SABR model (Fernández et al. (2013) Mathematics and Computers in Simulation 94, 55–75; Hagan et al. (2002) Wilmott Magazine 84–108; Osajima (2007) Available at SSRN 965265) in a foreign exchange (FX) context. In a similar fashion as in (Piterbarg (2005) Risk 18 (5), 71–75), we derive effective parameters, which yield an accurate and efficient calibration. On top of the calibrated FX-SABR model, we add a non-parametric lo...
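For context, the SABR model being calibrated has a well-known closed-form implied-volatility expansion (Hagan et al. 2002). Below is a sketch of the at-the-money lognormal approximation only, first order in expiry; it is not the paper's time-dependent effective-parameter calibration, and the example parameter values are invented.

```python
def sabr_atm_vol(f, t, alpha, beta, rho, nu):
    """Hagan et al. (2002) at-the-money lognormal SABR implied volatility
    for forward f, expiry t and SABR parameters (alpha, beta, rho, nu).
    First-order-in-t expansion of the ATM formula."""
    f_pow = f ** (1.0 - beta)
    term = ((1.0 - beta) ** 2 / 24.0 * alpha ** 2 / f_pow ** 2
            + rho * beta * nu * alpha / (4.0 * f_pow)
            + (2.0 - 3.0 * rho ** 2) / 24.0 * nu ** 2)
    return alpha / f_pow * (1.0 + term * t)

# Hypothetical EURUSD-like forward with a lognormal backbone (beta = 1):
vol = sabr_atm_vol(f=1.30, t=1.0, alpha=0.10, beta=1.0, rho=-0.3, nu=0.4)
```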
Fundamentals of PV Efficiency Interpreted by a Two-Level Model
Alam, Muhammad A
2012-01-01
The elementary physics of photovoltaic energy conversion in a two-level atomic PV cell is considered. We explain the conditions under which the Carnot efficiency is reached and how it can be exceeded! The loss mechanisms - thermalization, angle entropy, and below-bandgap transmission - explain the gap between the Carnot efficiency and the Shockley-Queisser limit. A wide variety of techniques developed to reduce these losses (e.g., solar concentrators, solar-thermal, tandem cells, etc.) are reinterpreted using the two-level model. Remarkably, this simple model appears to capture the essence of PV operation and reproduces the key results and important insights that are known to experts through complex derivations.
Banks, H Thomas; Hu, Shuhua; Joyner, Michele; Broido, Anna; Canter, Brandi; Gayvert, Kaitlyn; Link, Kathryn
2012-07-01
In this paper, we investigate three particular algorithms: a stochastic simulation algorithm (SSA) and explicit and implicit tau-leaping algorithms. To compare these methods, we used them to analyze two infection models: a vancomycin-resistant enterococcus (VRE) infection model at the population level, and a human immunodeficiency virus (HIV) within-host infection model. While the first has a low species count and few transitions, the second is more complex, with a comparable number of species involved. The relative efficiency of each algorithm is determined based on computational time and the degree of precision required. The numerical results suggest that all three algorithms have similar computational efficiency for the simpler VRE model, and the SSA is the best choice due to its simplicity and accuracy. In addition, we found that with the larger and more complex HIV model, implementation and modification of tau-leaping methods are preferred.
ARCH Models Efficiency Evaluation in Prediction and Poultry Price Process Formation
Directory of Open Access Journals (Sweden)
Behzad Fakari Sardehae
2016-09-01
This study shows that heterogeneous variance exists in the error term, as indicated by the LM test. Results and Discussion: Stationarity tests showed that the poultry price has a unit root and is stationary at one lag difference, so the first difference of the poultry price was used in the study. The main results showed that ARCH is the best model for predicting fluctuations. Moreover, news has an asymmetric effect on poultry price fluctuations: good news has a stronger effect than bad news, and no leverage effect exists in the poultry price. Current fluctuations are not transmitted to the future. One of the main assumptions of time series models is constant variance in the estimated coefficients; if this assumption does not hold, the coefficients estimated for the serial correlation in the data will be biased and lead to wrong interpretations. The results showed that ARCH effects exist in the error terms of the poultry price, so the ARCH family with a Student's t distribution should be used. Normality tests of the error term and tests for heterogeneous variance are needed, and neglecting them causes false conclusions. Results showed that ARCH models have good predictive power and that ARMA models are less efficient than ARCH models, indicating that non-linear predictions are better than linear ones. According to the results, the Student's t distribution should be used as the target distribution in the estimated patterns. Conclusion: The huge demand for poultry requires the creation of infrastructure to respond to it. Results showed that poultry price volatility changes over time and may intensify at any time. The asymmetric effect of good and bad news on the poultry price leads to consumer reactions: the good news had significant effects on the poultry market and created positive changes in the poultry price, but the bad news did not have significant effects.
In fact, because the poultry product in the household portfolio is essential, it should not
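The ARCH structure the study relies on is easy to state concretely. Below is a minimal ARCH(1) simulation with assumed parameter values (not the paper's estimates), using a normal innovation for simplicity even though the study favours a Student's t distribution.

```python
import math
import random

def simulate_arch1(omega, alpha, n, seed=42):
    """Simulate ARCH(1): sigma_t^2 = omega + alpha*eps_{t-1}^2 and
    eps_t = sigma_t * z_t with z_t ~ N(0, 1). For alpha < 1 the
    unconditional variance is omega / (1 - alpha)."""
    rng = random.Random(seed)
    eps_prev = 0.0
    series = []
    for _ in range(n):
        sigma = math.sqrt(omega + alpha * eps_prev ** 2)
        eps_prev = sigma * rng.gauss(0.0, 1.0)
        series.append(eps_prev)
    return series

r = simulate_arch1(omega=0.5, alpha=0.4, n=5000)
sample_var = sum(x * x for x in r) / len(r)   # near omega/(1-alpha) = 0.5/0.6
```

This is the "volatility clustering" the abstract describes: a large shock today raises tomorrow's conditional variance, producing the non-constant error variance that the LM test detects.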
A model for improving energy efficiency in industrial motor system using multicriteria analysis
Energy Technology Data Exchange (ETDEWEB)
Herrero Sola, Antonio Vanderley, E-mail: sola@utfpr.edu.br [Federal University of Technology, Parana, Brazil (UTFPR)-Campus Ponta Grossa, Av. Monteiro Lobato, Km 4, CEP: 84016-210 (Brazil); Mota, Caroline Maria de Miranda, E-mail: carolmm@ufpe.br [Federal University of Pernambuco, Cx. Postal 7462, CEP 50630-970, Recife (Brazil); Kovaleski, Joao Luiz [Federal University of Technology, Parana, Brazil (UTFPR)-Campus Ponta Grossa, Av. Monteiro Lobato, Km 4, CEP: 84016-210 (Brazil)
2011-06-15
In recent years, several policies have been proposed by governments and global institutions to improve the efficient use of energy in industries worldwide. However, projects in industrial motor systems require a new approach, mainly in the decision-making area, considering the organizational barriers to energy efficiency. Despite their wide application elsewhere, multicriteria methods have so far remained unexplored in industrial motor systems. This paper proposes a multicriteria model using the PROMETHEE II method, with the aim of ranking alternatives for induction motor replacement. A comparative analysis of the model, applied to a Brazilian industry, has shown that multicriteria analysis yields better performance on energy saving as well as return on investment than a single criterion. The paper strongly recommends the dissemination of multicriteria decision aiding as a policy to support decision makers in industry and to improve energy efficiency in electric motor systems. - Highlights: > The lack of decision models in industrial motor systems is the main motivation of this research. > A multicriteria model based on the PROMETHEE method is proposed to support decision makers in industry. > The model can help overcome some barriers within industries, improving the energy efficiency of industrial motor systems.
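The ranking mechanism of PROMETHEE II can be sketched in a few lines. This uses the simplest ("usual") preference function on maximization criteria; the alternatives, criteria, and weights below are hypothetical, not the Brazilian case-study data.

```python
def promethee_ii(scores, weights):
    """PROMETHEE II net outranking flows with the 'usual' preference
    function: on each (maximization) criterion, a is strictly preferred
    to b when its score is higher. Ranking = sort by flow, descending."""
    n = len(scores)
    phi = [0.0] * n
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            pi_ab = sum(w for sa, sb, w in zip(scores[a], scores[b], weights)
                        if sa > sb)           # aggregated preference of a over b
            phi[a] += pi_ab                   # leaving-flow contribution
            phi[b] -= pi_ab                   # entering-flow contribution
    return [p / (n - 1) for p in phi]

# Three hypothetical motor-replacement alternatives scored on
# (energy savings, return on investment, reliability), equal weights:
flows = promethee_ii([[0.9, 0.5, 0.8],
                      [0.6, 0.8, 0.7],
                      [0.3, 0.4, 0.2]], [1/3, 1/3, 1/3])
```

Net flows always sum to zero; the alternative with the highest flow is the recommended one.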
Silicon solar cells reaching the efficiency limits: from simple to complex modelling
Kowalczewski, Piotr; Redorici, Lisa; Bozzola, Angelo; Andreani, Lucio Claudio
2016-05-01
Numerical modelling is pivotal in the development of high-efficiency solar cells. In this contribution we present different approaches to modelling solar cell performance: the diode equation, a generalization of the well-known Hovel model, and complete device modelling. In all three approaches we implement Lambertian light trapping, which is often considered a benchmark for the optical design of solar cells. We quantify the range of parameters for which all three approaches give the same results, and highlight the advantages and limitations of the different models. Using these methods we calculate the efficiency limits of single-junction crystalline silicon solar cells over a wide range of cell thicknesses. We find that silicon solar cells close to the efficiency limits operate in the high-injection (rather than the low-injection) regime. In this regime, surface recombination can have an unexpectedly large effect on cells with an absorber thickness below a few tens of microns. Finally, we calculate the limiting efficiency of tandem silicon-perovskite solar cells and determine the optimal thickness of the bottom silicon cell for different band gaps of the perovskite material.
The Super‑efficiency Model and its Use for Ranking and Identification of Outliers
Directory of Open Access Journals (Sweden)
Kristína Kočišová
2017-01-01
This paper employs a non-radial and non-oriented super-efficiency SBM model under the assumption of variable returns to scale to analyse the performance of twenty-two Czech and Slovak domestic commercial banks in 2015. The banks were ranked according to asset-oriented and profit-oriented intermediation approaches. We pooled the cross-country data and used them to define a common best-practice efficiency frontier, which allowed us to focus on determining relative differences in efficiency across banks. The average efficiency was evaluated separately at the “national” and “international” levels. The results of the analysis show that the level of super-efficiency in the Slovak banking sector was lower than among Czech banks; the number of super-efficient banks was also lower in the case of Slovakia under both approaches. A boxplot analysis was used to determine the outliers in the dataset. The results suggest that the exclusion of outliers led to better statistical characteristics of the estimated efficiency.
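The defining trick of super-efficiency, evaluating each unit against a frontier built from the other units only, can be shown with a drastically simplified single-input/single-output ratio. The paper's SBM model instead solves a linear program per bank; the bank data below are invented.

```python
def super_efficiency_ratio(inputs, outputs):
    """Toy single-input/single-output analogue of super-efficiency:
    each unit's productivity ratio is benchmarked against the best ratio
    among the *other* units, so the top performer can score above 1.
    Illustrates the self-exclusion idea only, not the SBM formulation."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    scores = []
    for k, r in enumerate(ratios):
        best_other = max(r2 for j, r2 in enumerate(ratios) if j != k)
        scores.append(r / best_other)
    return scores

# Four hypothetical banks (input: operating cost, output: loans):
scores = super_efficiency_ratio([100.0, 80.0, 120.0, 90.0],
                                [50.0, 48.0, 36.0, 45.0])
```

Because the best bank is excluded from its own benchmark, its score exceeds 1, which is what makes super-efficiency usable for ranking efficient units and flagging outliers.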
Energy Technology Data Exchange (ETDEWEB)
Mundaca, Luis; Neij, Lena
2009-10-15
-economy models, empirical literature shows that a larger variety of determinants need to be taken into account when analysing the process of adoption of efficient technologies. We then focus on the analysis of more than twenty case studies addressing the application of the reviewed modelling methodologies to the field of residential energy efficiency policy. Regarding policy instruments being evaluated, the majority of the cases focus on regulatory aspects, such as minimum performance standards and building codes. For the rest, evaluations focus on economically-driven policy instruments. The dominance of economic and engineering determinants for technology choice gives little room for the representation of informative policy instruments. In all cases, policy instruments are represented through technical factors and costs of measures for energy efficiency improvements. In addition, policy instruments tend to be modelled in an idealistic or oversimplified manner. The traditional but narrow single-criterion evaluation approach based on cost-effectiveness seems to dominate the limited number of evaluation studies. However, this criterion is inappropriate to comprehensively address the attributes of policy instruments and the institutional and market conditions in which they work. We then turn to identifying research areas that have the potential to further advance modelling tools. We first discuss modelling issues as such, including the importance of transparent modelling efforts; the explicit elaboration of methodologies to represent policies; the need to better translate modelling results into a set of concrete policy recommendations; and the use of complementary research methods to better comprehend the broad effects and attributes of policy instruments. Secondly, we approach techno-economic and environmental components of models. 
We discuss the integration of co-benefits as a key research element of modelling studies; the introduction of transaction costs to further improve the
Directory of Open Access Journals (Sweden)
Y. Tramblay
2011-01-01
A good knowledge of rainfall is essential for hydrological operational purposes such as flood forecasting. The objective of this paper was to analyze, on a relatively large sample of flood events, how rainfall-runoff modeling using an event-based model can be sensitive to the use of spatial rainfall compared to mean areal rainfall over the watershed. This comparison was based not only on the model's efficiency in reproducing the flood events but also on the estimation of the initial conditions by the model using the different rainfall inputs. The initial soil moisture conditions are indeed a key factor for flood modeling in the Mediterranean region. In order to provide a soil moisture index that could be related to the initial condition of the model, the soil moisture output of the Safran-Isba-Modcou (SIM) model developed by Météo-France was used. This study was done in the Gardon catchment (545 km²) in southern France, using uniform or spatial rainfall data derived from rain gauges and radar for 16 flood events. The event-based model considered combines the SCS runoff production model and the Lag and Route routing model. Results show that spatial rainfall increases the efficiency of the model. The advantage of using spatial rainfall is marked for some of the largest flood events. In addition, the relationship between the model's initial condition and the external predictor of soil moisture provided by the SIM model is better when using spatial rainfall, in particular spatial radar data, with R² values increasing from 0.61 to 0.72.
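The SCS runoff production model mentioned here has a standard curve-number form, and "model efficiency" in this literature usually means the Nash-Sutcliffe score. A sketch of both, using the common Ia = 0.2·S initial-abstraction assumption (the paper's routing component and calibration are not reproduced):

```python
def scs_runoff(p_mm, cn):
    """SCS curve-number event runoff (mm) for rainfall p_mm, with the
    common initial abstraction Ia = 0.2*S; S (mm) comes from CN."""
    s = 25400.0 / cn - 254.0
    ia = 0.2 * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency, the usual 'model efficiency' score:
    1 is a perfect fit, 0 means no better than the observed mean."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / ss_tot

q = scs_runoff(50.0, 75.0)    # runoff (mm) from a 50 mm event at CN = 75
```

Feeding the production model with spatially distributed versus uniform rainfall and comparing Nash-Sutcliffe scores is, in outline, the comparison the paper performs.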
Li, Zheng; Wang, Hong; Yang, Danping
2017-10-01
We present a space-time fractional Allen-Cahn phase-field model that describes the transport of a fluid mixture of two immiscible fluid phases. The space and time fractional order parameters control the sharpness and the decay behavior of the interface via a seamless transition of the parameters. Although they are shown to provide a more accurate description of anomalous diffusion processes and sharper interfaces than traditional integer-order phase-field models do, fractional models yield numerical methods with dense stiffness matrices. Consequently, the resulting numerical schemes have significantly increased computational work and memory requirements. We develop a lossless fast numerical method for the accurate and efficient numerical simulation of the space-time fractional phase-field model. Numerical experiments show the utility of the fractional phase-field model and the corresponding fast numerical method.
Curcio, Stefano; Saraceno, Alessandra; Calabrò, Vincenza; Iorio, Gabriele
2014-01-01
The present paper was aimed at showing that advanced modeling techniques, based either on artificial neural networks or on hybrid systems, might efficiently predict the behavior of two biotechnological processes designed for the obtainment of second-generation biofuels from waste biomasses. In particular, the enzymatic transesterification of waste-oil glycerides, the key step for the obtainment of biodiesel, and the anaerobic digestion of agroindustry wastes to produce biogas were modeled. It was proved that the proposed modeling approaches provided very accurate predictions of systems behavior. Both neural network and hybrid modeling definitely represented a valid alternative to traditional theoretical models, especially when comprehensive knowledge of the metabolic pathways, of the true kinetic mechanisms, and of the transport phenomena involved in biotechnological processes was difficult to be achieved. PMID:24516363
SENR, A Super-Efficient Code for Gravitational Wave Source Modeling: Latest Results
Ruchlin, Ian; Etienne, Zachariah; Baumgarte, Thomas
2017-01-01
The science we extract from gravitational wave observations will be limited by our theoretical understanding, so with the recent breakthroughs by LIGO, reliable gravitational wave source modeling has never been more critical. Due to efficiency considerations, current numerical relativity codes are very limited in their applicability to direct LIGO source modeling, so it is important to develop new strategies for making our codes more efficient. We introduce SENR, a Super-Efficient, open-development numerical relativity (NR) code aimed at improving the efficiency of moving-puncture-based LIGO gravitational wave source modeling by 100x. SENR builds upon recent work, in which the BSSN equations are evolved in static spherical coordinates, to allow dynamical coordinates with arbitrary spatial distributions. The physical domain is mapped to a uniform-resolution grid on which derivative operations are approximated using standard central finite difference stencils. The source code is designed to be human-readable, efficient, parallelized, and readily extensible. We present the latest results from the SENR code.
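The abstract mentions approximating derivatives on uniform-resolution grids with standard central finite-difference stencils. Below is the classic fourth-order five-point stencil for a first derivative; it is an illustration of that technique, not SENR source code.

```python
import math

def d1_central4(f, x, h):
    """Fourth-order-accurate central finite-difference first derivative,
    the standard 5-point stencil for a uniform grid spacing h:
    f'(x) ~ (-f(x+2h) + 8f(x+h) - 8f(x-h) + f(x-2h)) / (12h)."""
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12*h)

# Truncation error scales as h^4, so a modest h already gives tiny error:
err = abs(d1_central4(math.sin, 1.0, 1e-2) - math.cos(1.0))
```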
Improved light and temperature responses for light-use-efficiency-based GPP models
Directory of Open Access Journals (Sweden)
I. McCallum
2013-10-01
Gross primary production (GPP) is the process by which carbon enters ecosystems. Models based on the theory of light use efficiency (LUE) have emerged as an efficient method to estimate ecosystem GPP. However, problems have been noted when applying global parameterizations to biome-level applications. In particular, model-data comparisons have shown that models (including LUE models) have difficulty matching estimated GPP. This is significant, as errors in simulated GPP may propagate through models (e.g. Earth system models). Clearly, unique biome-level characteristics must be accounted for if model accuracy is to be improved. We hypothesize that in boreal regions (which are strongly temperature controlled), accounting for temperature acclimation and the non-linear light response of daily GPP will improve model performance. To test this hypothesis, we have chosen four diagnostic models for comparison, namely an LUE model (linear in its light response) both with and without temperature acclimation, and an LUE model and a big-leaf model both with temperature acclimation and non-linear in their light response. All models include environmental modifiers for temperature and vapour pressure deficit (VPD). Initially, all models were calibrated against five eddy covariance (EC) sites within Russia for the years 2002-2005, for a total of 17 site years. Model evaluation was performed via 10-out cross-validation. Cross-validation clearly demonstrates the improvement that temperature acclimation makes in modelling GPP at strongly temperature-controlled sites in Russia. These results indicate that the inclusion of temperature acclimation in models for sites experiencing cold temperatures is imperative. Additionally, the inclusion of a non-linear light response function is shown to further improve performance, particularly at less temperature-controlled sites.
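The generic LUE skeleton the paper builds on is GPP = ε · fAPAR · PAR with environmental modifiers. The Gaussian temperature response and linear VPD ramp below, and all parameter values, are illustrative assumptions, not the paper's calibrated models.

```python
import math

def gpp_lue(par, fapar, eps_max, t_air, t_opt, vpd, vpd_max=3.0):
    """Generic light-use-efficiency GPP: GPP = eps * fAPAR * PAR, with
    the maximum efficiency eps_max down-regulated by temperature and VPD
    scalars in [0, 1]. The scalar forms here are illustrative choices."""
    f_temp = math.exp(-((t_air - t_opt) / 10.0) ** 2)   # temperature modifier
    f_vpd = max(0.0, 1.0 - vpd / vpd_max)               # VPD modifier
    return eps_max * f_temp * f_vpd * fapar * par

# At the optimal temperature with no VPD stress the modifiers are 1,
# so GPP reduces to eps_max * fAPAR * PAR:
g = gpp_lue(par=10.0, fapar=0.5, eps_max=1.5, t_air=15.0, t_opt=15.0, vpd=0.0)
```

Temperature acclimation, the paper's key addition, would make t_opt itself a function of recent air temperature rather than a fixed constant.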
Health effects of home energy efficiency interventions in England: a modelling study.
Hamilton, Ian; Milner, James; Chalabi, Zaid; Das, Payel; Jones, Benjamin; Shrubsole, Clive; Davies, Mike; Wilkinson, Paul
2015-04-27
To assess potential public health impacts of changes to indoor air quality and temperature due to energy efficiency retrofits in English dwellings to meet 2030 carbon reduction targets. Health impact modelling study. England. English household population. Three retrofit scenarios were modelled: (1) fabric and ventilation retrofits installed assuming building regulations are met; (2) as with scenario (1) but with additional ventilation for homes at risk of poor ventilation; (3) as with scenario (1) but with no additional ventilation, to illustrate the potential risk of weak regulations and non-compliance. Primary outcomes were changes in quality-adjusted life years (QALYs) over 50 years from cardiorespiratory diseases, lung cancer, asthma, and common mental disorders due to changes in indoor air pollutants, including secondhand tobacco smoke, particulate matter with a diameter of 2.5 μm or less (PM2.5) from indoor and outdoor sources, radon, mould, and indoor winter temperatures. The modelling estimates showed that scenario (1) resulted in positive effects on net mortality and morbidity of 2241 (95% credible interval (CI) 2085 to 2397) QALYs per 10,000 persons over 50 years of follow-up, due to improved temperatures and reduced exposure to indoor pollutants, despite an increase in exposure to outdoor-generated PM2.5. Scenario (2) resulted in a negative impact of -728 (95% CI -864 to -592) QALYs per 10,000 persons over 50 years due to an overall increase in indoor pollutant exposures. Scenario (3) resulted in -539 (95% CI -678 to -399) QALYs per 10,000 persons over 50 years of follow-up due to an increase in indoor exposures despite the targeting of pollutants. If properly implemented alongside ventilation, energy efficiency retrofits in housing can improve health by reducing exposure to cold and air pollutants. Maximising the health benefits requires careful understanding of the balance of changes in pollutant exposures, highlighting the importance of
A Computationally-Efficient Numerical Model to Characterize the Noise Behavior of Metal-Framed Walls
Directory of Open Access Journals (Sweden)
Arun Arjunan
2015-08-01
Architects, designers, and engineers are making great efforts to design acoustically efficient metal-framed walls that minimize acoustic bridging. Efficient simulation models that predict acoustic insulation in compliance with ISO 10140 are therefore needed at the design stage. To achieve this, a numerical model consisting of two fluid-filled reverberation chambers, partitioned by a metal-framed wall, is simulated at one-third-octave bands. This produces a large simulation model consisting of several million nodes and elements, so efficient meshing procedures are necessary to obtain better solution times and to use computational resources effectively. Such models should also demonstrate effective fluid-structure interaction (FSI) along with acoustic-fluid coupling to simulate a realistic scenario. In this contribution, the development of a finite element frequency-dependent mesh model that can characterize the sound insulation of metal-framed walls is presented. Preliminary results on the application of the proposed model to study the geometric contribution of stud frames to the overall acoustic performance of metal-framed walls are also presented. The presented numerical model can be used to effectively visualize the noise behaviour of advanced materials and multi-material structures.